AI-Driven Misinformation: Concerns Rise Ahead of 2024 Presidential Election
As the 2024 presidential election approaches, the growing influence of artificial intelligence (AI) tools has raised concerns that misinformation could be amplified on an unprecedented scale. A recent poll conducted by The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy underscores these worries, revealing a notable degree of apprehension among U.S. adults.
The poll's findings are clear: a significant proportion of the American population is deeply concerned about the impact of AI tools on the dissemination of false and misleading information during the upcoming election. Almost six in ten adults (58%) believe that AI tools, with their capacity to micro-target political audiences, generate persuasive content en masse, and produce realistic fake images and videos in mere seconds, will contribute to the spread of misinformation in the 2024 presidential election.
By contrast, a mere 6% of respondents believe that AI will decrease the spread of misinformation, while one-third do not anticipate AI making much of a difference. This heightened concern is not without reason: the 2020 election demonstrated how far misinformation can spread through social media alone, and the integration of AI tools could exacerbate these issues.
Only 30% of American adults have used AI chatbots or image generators, and fewer than half (46%) have heard or read at least something about AI tools. Nevertheless, there is broad consensus that 2024 presidential candidates should refrain from using AI in certain ways.
When asked whether it would be advantageous or detrimental for candidates to employ AI in the 2024 presidential election, clear majorities said it would be detrimental for candidates to create false or misleading media for political ads (83%), edit or enhance photos or videos for political ads (66%), tailor political ads to individual voters (62%), and respond to voters' questions via chatbot (56%).
This sentiment is shared by majorities of both Republicans and Democrats, with 85% of Republicans and 90% of Democrats agreeing that it would be disadvantageous for presidential candidates to create false images or videos. Similarly, a majority of both parties (56% of Republicans and 63% of Democrats) believe it would be detrimental for candidates to answer voter questions via AI chatbots.
The poll found that most Americans are similarly skeptical about the accuracy of information provided by AI chatbots, with just 5% expressing high confidence that such information is factual. Most adults (61%) say they are not very or not at all confident that information from AI chatbots is reliable.
Both major political parties express openness to regulating AI, generally favoring measures to ban or label AI-generated content. About two-thirds of respondents support a government ban on AI-generated content containing false or misleading images in political ads, and a similar number believe that technology companies should label all AI-generated content produced on their platforms.