AI and Misinformation: Oversight Board Challenges Meta's Approach to Manipulated Media
The Oversight Board, the independent body that reviews content moderation decisions on Meta platforms, including Facebook and Instagram, has ruled that an altered video of President Joe Biden does not violate Meta's policy on manipulated media.
The decision allows the manipulated video to remain on Facebook because it did not involve the use of artificial intelligence (AI) or machine learning techniques. However, the board criticized Meta's current manipulated media policy, describing it as "incoherent" and lacking persuasive justification, and recommended revisions.
Under Meta's existing policy, content showing a subject saying words they did not say is disallowed only if it was created using AI or machine learning techniques, such as deepfakes. The Oversight Board recommended that Meta expand its manipulated media policy to cover audio and visual content showing people doing or saying things they did not do, regardless of the creation method. The revised policy, the board said, should explicitly state the harms it aims to prevent, citing examples such as misinformation and interference with the right to vote.
The board suggested that Meta could implement measures other than removal, such as labeling, to inform users that content has been heavily altered. The video in question originally showed President Biden voting in the 2022 midterms; the altered version was edited to falsely depict inappropriate touching. The altered video, accompanied by defamatory comments about the president, went viral last month.
While the Oversight Board acknowledged that the altered video did not violate Meta's current policy, it emphasized the need for a broader focus on potential harms resulting from manipulated content, particularly those affecting electoral processes. This recommendation comes amid concerns about the use of AI in creating misleading content, such as fake robocalls during elections and AI-generated videos by political organizations.