Democratic Coalition Backs Measure to Curb AI's Disruptive Influence on Elections

A coalition of House Democrats, spearheaded by Representative Shontel Brown (D-Ohio), has introduced comprehensive legislation aimed at mitigating the influence of artificial intelligence (AI) on U.S. elections. The proposed bill establishes stringent penalties and clear disclosure requirements for the use of AI in election-related messaging. One of its key provisions mandates that all AI-generated election content include a disclaimer to ensure transparency.

The Securing Elections from AI Deception Act

The legislation, co-sponsored by 47 Democrats, explicitly prohibits the use of AI to interfere with voting rights. This includes using AI to disseminate false information about voter registration, eligibility, polling locations, and the counting or canvassing of ballots. Additionally, the bill seeks to prevent the use of AI in creating deceptive endorsements for any person or candidate. It also aims to block AI from being used to deceive, threaten, intimidate, or otherwise interfere with an individual's ability to vote or participate in the election process.

According to the bill text provided by Brown’s office, both developers and users of AI technologies are barred from deploying these capabilities in ways that intentionally deprive or defraud individuals of their right to vote in federal, state, or local elections. The legislation represents a concerted effort by House Democrats to protect the integrity of the electoral process in the face of rapidly advancing AI technologies.

The Instances of Election Misinformation the Bill Would Help Prevent

AI, including fake audio and images, has already been used to influence narratives around the 2024 election. Many of these efforts have specifically targeted Black voters, an indispensable demographic for Democrats. In several instances, supporters of former President Donald Trump created doctored images showing him surrounded by Black supporters as part of his strategy to attract Black voters in his rematch against President Joe Biden.

Over the past year, lawmakers have raised concerns about the potential for even more advanced AI tools to spread election misinformation. These concerns were validated when New Hampshire residents received a robocall ahead of the state’s presidential primary in January, featuring an AI-generated voice of President Joe Biden advising voters not to go to the polls. The incident underscored the real and immediate threat posed by AI-generated misinformation. In response, the Federal Communications Commission (FCC) issued a unanimous ruling in February making it illegal for robocalls to use AI-generated voices. This decisive action was aimed at curbing the misuse of AI in automated calls that could mislead and manipulate voters.

Moreover, the FCC is considering a proposal that would mandate disclosures of AI-generated content in campaign ads on broadcast television and radio, ensuring that voters know when they are viewing or listening to AI-created material. Representative Brown stressed the urgency of addressing this issue, stating that the threat of AI being weaponized to interfere in elections "is no longer theoretical."

The 2024 election cycle will mark the first time AI-generated ads play such an integral role in campaign advertising, underscoring the need for legislation like the Securing Elections from AI Deception Act. Without such measures, deceptive AI content could heavily influence the election, making the outcome even more unpredictable. While the cycle will likely still be fraught with chaos and unpredictability, these laws aim to restore some control and integrity to the process.