What Journalists Should Know About Biden's Order Addressing AI-Generated Content Verification

President Joe Biden has signed an executive order addressing some of the critical challenges associated with artificial intelligence (AI), particularly how to tell real images from AI-generated fakes. The order, signed on Monday, calls for new government-led standards on watermarking AI-generated content. Like the watermarks on photographs and currency, digital watermarks are meant to help viewers distinguish authentic content from fakes and to establish ownership.

“As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI,” the White House said in a fact sheet.

Watermarking technology shows promise, and many leading AI companies are already incorporating it into their products. Some watermarks are simple visible overlays that can be cropped out; others are embedded in the image data itself and can still be authenticated after cropping or resizing. Experts caution, however, that watermarking alone is not foolproof.
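
To make the idea concrete, here is a minimal sketch, in Python with NumPy, of an invisible watermark embedded in the least significant bits of pixel values. The payload and scheme are invented for illustration; production systems embed far more sophisticated signals. It shows only the basic mechanism of hiding a detectable pattern in image data.

```python
import numpy as np

# Hypothetical payload; real systems embed cryptographic or learned signals.
WATERMARK = np.unpackbits(np.frombuffer(b"AI-GEN", dtype=np.uint8))

def embed(pixels: np.ndarray) -> np.ndarray:
    """Return a copy with the watermark in the lowest bit of the first values."""
    flat = pixels.flatten()  # flatten() copies, so the input is untouched
    flat[: len(WATERMARK)] = (flat[: len(WATERMARK)] & 0xFE) | WATERMARK
    return flat.reshape(pixels.shape)

def detect(pixels: np.ndarray) -> bool:
    """Report whether the watermark bits are present."""
    lsb = pixels.flatten()[: len(WATERMARK)] & 1
    return bool(np.array_equal(lsb, WATERMARK))

image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed(image)
print(detect(image), detect(marked))  # expected: False True
```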

Researchers at the University of Maryland recently demonstrated ways to break current watermarking methods and to forge watermarks onto images that were never AI-generated, raising concerns about the reliability of the approach. Meanwhile, services like DALL-E and Midjourney have made AI-generated fakes easy to produce, fueling the spread of deceptive content on the internet.
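
The fragility is easy to demonstrate with the toy scheme sketched above (the Maryland attacks are more sophisticated and target far more robust schemes): any lossy transformation, even a round trip through resizing, rewrites the low-order bits and erases the mark.

```python
from PIL import Image
import numpy as np

# Continuing the sketch above: resize down and back up, then re-test.
# Interpolation rewrites pixel values, destroying the LSB pattern.
resized = np.asarray(Image.fromarray(marked).resize((63, 63)).resize((64, 64)))
print(detect(resized))  # almost certainly False
```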

Soheil Feizi, an associate professor of computer science at the University of Maryland, told The Verge that watermarking generative model outputs might not be a “practical solution” to AI disinformation, as the problem is theoretically impossible to “solve reliably.”

Biden's executive order also instructs the Commerce Department to develop standards for detecting and tracking synthetic content on the web. Adobe, for its part, has introduced a visual marker, called Content Credentials, that signals an image's provenance; hovering over it reveals information about how the image was produced.
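
Adobe's marker attaches signed provenance manifests under the C2PA standard, which ordinary image libraries cannot verify. The sketch below only illustrates the general idea of looking for provenance records in an image's metadata, using Pillow; it will not read real Content Credentials, which require a C2PA-aware tool.

```python
from PIL import Image

def inspect_metadata(path: str) -> dict:
    """Collect whatever textual metadata the file carries (PNG chunks, EXIF)."""
    with Image.open(path) as img:
        info = dict(img.info)  # e.g. PNG tEXt chunks
        info.update(
            {f"exif:{tag}": value for tag, value in img.getexif().items()}
        )
    return info

# Usage (the filename is a placeholder):
# for key, value in inspect_metadata("photo.png").items():
#     print(key, "->", value)
```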

Experts suggest that authenticating AI-generated content at scale will require a combination of approaches. None will work perfectly, they acknowledge, but together the methods form part of a “harm reduction” strategy.
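
What that combination might look like in practice can be sketched as a simple aggregator (the check names and output wording here are invented): each imperfect signal, such as watermark detection, provenance metadata, or a statistical classifier, contributes to an overall judgment rather than being trusted on its own.

```python
def assess(signals: dict[str, bool | None]) -> str:
    """signals maps check name -> True (flagged as AI), False (not), None (unavailable)."""
    flagged = sum(1 for v in signals.values() if v is True)
    known = sum(1 for v in signals.values() if v is not None)
    if known == 0:
        return "no usable signals"
    if flagged == 0:
        return "no AI markers found (not proof the image is authentic)"
    return f"{flagged} of {known} checks flagged this image as likely AI-generated"

# Hypothetical results from the kinds of checks discussed above.
print(assess({"watermark": True, "provenance": None, "classifier": False}))
```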

While watermarking and tracking AI-generated content can help creators protect their work, these techniques also raise privacy concerns for users, especially in authoritarian regimes, where being identified as the creator of satirical content can be dangerous.

Building a reliable system for image authentication will take time, and it remains uncertain how the executive order will affect AI companies and what rules the government may ultimately impose. As the 2024 election approaches, lawmakers and government agencies are expected to play a more central role in addressing the challenges that AI-generated content poses for political campaigns.