AI Image Generators Vulnerable to Manipulation for Misleading Election Images, Report Finds
The tech watchdog Center for Countering Digital Hate (CCDH) released a report on Wednesday revealing that leading artificial intelligence image generators can be manipulated to create misleading election-related images. The findings raise concerns despite previous pledges from major AI firms to address risks associated with political misinformation ahead of global elections. The report tested AI image generators including Midjourney, Stability AI’s DreamStudio, OpenAI’s ChatGPT Plus, and Microsoft Image Creator.
CCDH researchers found that each tool could be prompted to generate misleading images related to US presidential candidates or voting security. Although these platforms implement some content moderation, the report deemed the existing protections inadequate, noting that the ease of access to these AI tools allows virtually anyone to create and disseminate election disinformation.
Stability AI, the owner of DreamStudio, updated its policies on March 1 to explicitly prohibit the creation or promotion of fraud and disinformation. Midjourney and OpenAI are actively evolving their moderation systems, while Microsoft has taken new steps to address the issue, including launching a website for reporting deepfakes.
The report comes amid growing concern about the misuse of AI tools, with recent instances of AI-generated images spreading political misinformation. The CCDH urged AI companies to collaborate with researchers to prevent "jailbreaking" of their tools, and recommended that social media platforms invest in identifying and curbing the spread of potentially misleading AI-generated images.