How Journalists Can Detect Altered Images and AI-Generated Content
In the digital age, visual content plays a significant role in journalism. However, with the rise of advanced image editing software and artificial intelligence (AI) technology, the authenticity and integrity of images can be compromised. As journalists strive to provide accurate and trustworthy information, it is crucial to have the skills and tools to detect altered images and AI-generated content.
But how?
Here’s what you should consider.
FAMILIARIZE YOURSELF WITH EDITING TECHNIQUES
To identify potentially altered images, journalists should have a basic understanding of common image editing techniques. Familiarize yourself with tools like Adobe Photoshop, GIMP, or other popular editing software to learn about the various manipulations that can be applied. This knowledge will aid in recognizing telltale signs of image tampering, such as inconsistent lighting, unnatural shadows, or irregularities in pixel patterns.
EXAMINE METADATA AND SOURCE VERIFICATION
Metadata embedded within digital images can provide valuable information about their origins and any potential alterations. Journalists should extract and analyze the metadata, including date, time, and camera settings, to verify the image's authenticity. Additionally, investigate the image's source and cross-reference it with other reliable sources to establish credibility and confirm its veracity.
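The metadata check described above can be automated. Below is a minimal sketch using the Pillow library (assumed installed); the filename `photo.jpg` is a hypothetical example. Note that many platforms strip EXIF data on upload, so an empty result is common and not itself proof of tampering.

```python
# Sketch: extract human-readable EXIF metadata from an image with Pillow.
# An image's EXIF block can include capture date/time, camera model, and
# exposure settings -- useful leads when verifying an image's origin.
from PIL import Image, ExifTags


def extract_metadata(path):
    """Return a dict mapping EXIF tag names to values; empty if none embedded."""
    with Image.open(path) as img:
        exif = img.getexif()
        # Translate numeric tag IDs into readable names where known
        return {ExifTags.TAGS.get(tag_id, tag_id): value
                for tag_id, value in exif.items()}


# Usage (hypothetical file):
# for tag, value in extract_metadata("photo.jpg").items():
#     print(f"{tag}: {value}")
```

Cross-reference whatever the metadata claims (date, location, device) against the image's stated source rather than taking either at face value.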
UTILIZE FORENSIC TOOLS AND IMAGE ANALYSIS SOFTWARE
To enhance image verification, journalists can use forensic tools and image-analysis software designed to detect manipulation. A reverse image search (via Google Images or TinEye) can surface earlier or original versions of a photo, while error-level analysis services such as FotoForensics can help identify alterations like retouching, cloning, or splicing. These tools apply various algorithms to surface discrepancies or inconsistencies in an image.
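To illustrate what such tools do under the hood, here is a simplified sketch of error-level analysis (the technique FotoForensics is built around) using Pillow. This is an illustrative approximation, not a substitute for those services: it recompresses the image at a known JPEG quality and amplifies the per-pixel differences, since regions edited after the last save often recompress differently and stand out.

```python
# Sketch of error-level analysis (ELA): resave the image as JPEG at a
# known quality, then amplify the pixel-level differences between the
# original and the resaved copy. Uniform error levels suggest one
# compression history; a region with a sharply different error level
# may have been pasted in or retouched.
import io
from PIL import Image, ImageChops


def error_level_analysis(path, quality=90):
    """Return an amplified difference image highlighting error levels."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Scale the differences so they are visible to the eye
    extrema = diff.getextrema()
    max_diff = max(hi for _, hi in extrema) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda p: min(255, int(p * scale)))


# Usage (hypothetical file): error_level_analysis("photo.jpg").show()
```

Interpreting ELA output takes practice: high-contrast edges and resized images also produce bright regions, so treat the result as a lead to investigate, not a verdict.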
PAY ATTENTION TO VISUAL ANOMALIES
Careful scrutiny of the image itself can reveal visual anomalies that may indicate manipulation or AI generation. Look for irregularities in shadows, reflections, or proportions. Unnatural sharpness, mismatched lighting, or peculiar distortions may also be indicators of image alteration. Comparing the image with similar photographs or scenes can help spot inconsistencies.
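One of these anomalies can be quantified rather than just eyeballed: sensor noise. A spliced-in region often carries a different noise "fingerprint" than the rest of the photo. The sketch below (Pillow again; the grid size is an arbitrary choice) splits a grayscale version of the image into blocks and estimates each block's high-frequency noise, so outlier blocks can be flagged for closer inspection.

```python
# Sketch: estimate per-region noise levels by measuring the high-pass
# residual (original minus median-filtered copy) in each grid block.
# Blocks whose noise estimate diverges sharply from the rest of the
# image warrant a closer look for possible splicing.
import statistics
from PIL import Image, ImageChops, ImageFilter


def block_noise_levels(path, grid=4):
    """Return a list of grid*grid noise estimates (stdev of residual)."""
    gray = Image.open(path).convert("L")
    # The median filter smooths noise; subtracting leaves the noise behind
    residual = ImageChops.difference(gray, gray.filter(ImageFilter.MedianFilter(3)))
    w, h = residual.size
    bw, bh = w // grid, h // grid
    levels = []
    for row in range(grid):
        for col in range(grid):
            box = (col * bw, row * bh, (col + 1) * bw, (row + 1) * bh)
            pixels = list(residual.crop(box).getdata())
            levels.append(statistics.pstdev(pixels))
    return levels


# Usage (hypothetical file): print(block_noise_levels("photo.jpg"))
```

As with error-level analysis, this is a screening heuristic: textured areas like foliage naturally score higher than smooth sky, so compare blocks with similar content.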
ANALYZE FACIAL AND OBJECT RECOGNITION
AI technology has advanced to the point where it can generate highly realistic images of non-existent individuals or objects. Journalists should be aware of AI-generated deepfake images, which can be difficult to detect with the naked eye. Utilize facial recognition software or online tools like Deepware.ai or Sensity to identify potential deepfakes or AI-generated faces.
SEEK EXPERT OPINION AND COLLABORATION
When in doubt or dealing with complex image manipulations, it is beneficial to consult experts in the field of digital forensics or image analysis. Collaborating with professionals who specialize in image authentication can provide valuable insights and strengthen the accuracy of your findings. Additionally, reaching out to other journalists or news organizations who have experience in this area can lead to shared knowledge and best practices.
TRANSPARENTLY DOCUMENT AND REPORT FINDINGS
It is essential to document the process and findings of your image analysis thoroughly. Include detailed descriptions, methodologies, and any identified alterations or AI-generated elements. When reporting on images that have been manipulated or are AI-generated, transparently disclose your findings to maintain journalistic integrity and inform readers about the potential inaccuracies or uncertainties surrounding the visuals.
In an era of digital manipulation and AI-generated content, journalists must equip themselves with the necessary tools and knowledge to detect altered images and deepfakes. Vigilance in image verification is crucial for maintaining the integrity of journalistic reporting in an increasingly visual world.
RELATED READING: Spotting and Identifying Deepfakes
Alan Herrera is the Editorial Supervisor for the Association of Foreign Press Correspondents (AFPC-USA), where he oversees the organization’s media platform, foreignpress.org. He previously served as AFPC-USA’s General Secretary from 2019 to 2021 and as its Treasurer until early 2022.
Alan is an editor and reporter who has worked on interviews with such individuals as former White House Communications Director Anthony Scaramucci; Maria Fernanda Espinosa, the former President of the United Nations General Assembly; and Mariangela Zappia, Italy's former Permanent Representative to the U.N. and current Italian Ambassador to the United States.
Alan has spent his career managing teams as well as commissioning, writing, and editing pieces on subjects like sustainable trade, financial markets, climate change, artificial intelligence, threats to the global information environment, and domestic and international politics. Alan began his career writing film criticism for fun and later worked as the Editor on the content team for Star Trek actor and activist George Takei, where he oversaw the writing team and championed progressive policy initiatives, with a particular focus on LGBTQ+ rights advocacy.