Meta Announces Strategic Measures to Confront Disinformation and AI Harms in EU Polls
Meta, the parent company of social media giant Facebook, has unveiled a strategic initiative ahead of the upcoming European Parliament elections. The company plans to establish a specialized team, known as the “EU-specific Elections Operations Center,” to combat disinformation and address risks tied to the misuse of artificial intelligence (AI) during the electoral process.
Marco Pancini, Meta’s head of EU affairs, detailed the company’s approach in a blog post on Sunday, saying the initiative will focus on mitigating misinformation, countering influence operations, and tackling risks linked to the abuse of AI technologies. The dedicated team will bring together experts from various domains within the company.
“Ahead of the elections period, we will make it easier for all our fact-checking partners across the EU to find and rate content related to the elections because we recognize that speed is especially important during breaking news events,” Pancini said. “We’ll use keyword detection to group related content in one place, making it easy for fact-checkers to find.”
He added:
“We already label photorealistic images created using Meta AI, and we are building tools to label AI generated images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock that users post to Facebook, Instagram and Threads.”
In addition to addressing misinformation, Meta is focusing on the risks posed by the abuse of AI. Pancini said the company will introduce a feature allowing users to disclose when they share AI-generated video or audio content, and it is weighing penalties for those who fail to do so, reflecting its stated commitment to the responsible use of AI technologies.
Concerns about the potential impact of AI on elections have grown with the emergence of advanced AI platforms, such as OpenAI's GPT-4 and Google's Gemini. The ability to generate realistic yet fake information, images, and videos using AI has raised alarms about the potential manipulation of public opinion during elections.
The European Parliament elections are set to take place from June 6 to 9, one of the most significant votes in a 2024 election calendar that includes major polls in more than 80 countries.
Last week, AFPC-USA Executive Board Member Sissel McCarthy noted that “AI-generated content is already influencing voters,” adding, “In the 14 months since ChatGPT’s debut, this new AI technology is flooding the internet with lies, reshaping the political landscape and even challenging our concept of reality.”
McCarthy pointed out that two days before primary elections in New Hampshire, “voters received calls from a fake President Biden telling them not to vote.” Additionally, Florida Governor Ron DeSantis, a Republican, “mixed authentic photos with AI-generated images of Donald Trump embracing and even kissing Dr. Anthony Fauci (a deeply loathed figure in the MAGA world) to discredit the former president.”
The changes in the AI and information landscape are also “coming way too fast for government regulators who are struggling to keep pace with artificial intelligence,” McCarthy warned, noting that “there is no federal law prohibiting the use of AI-generated content in national elections, though legislators from both parties have introduced at least four bills targeting deepfakes and other deceitful media.”