How an AI Chatbot Is Fighting Misinformation

Individuals draw their own conclusions from the news media they engage with, particularly as many now receive their information through social media algorithms.

In an age of rampant misinformation, it can be hard to convince some people to entertain ideas other than the ones they have solidified from the steady stream of content on their feeds. And where another person cannot get them to see things in a different light, maybe an AI chatbot can do the job.

A group of researchers tested this idea in a study featured on the cover of Science. They recruited 2,190 Americans who believed in conspiracy theories and had them discuss those beliefs with the large language model GPT-4 Turbo, yielding encouraging results.

The research suggests that engaging in dialogue with a chatbot can counter disinformation and conspiracy theories. Questions about these theories can be addressed directly, point by point. Because the model draws on vast troves of data, it can answer each claim with specific evidence that believers find difficult to refute. Personal conversations with an AI chatbot may therefore combat fake news and conspiracy theories more effectively than people can: the chatbot counters each theory with factual information, often producing a more persuasive response than a typical debate between individuals would.

The studies showed that these conversations were a valuable tool for swaying some participants’ beliefs, reducing participants’ confidence in their chosen conspiracy theory by approximately 20% on average. In a follow-up, the researchers found that the effect persisted without fading for at least two months. The findings challenge the idea that people are beyond reach once they come to believe even the most extreme conspiracy theories, and they demonstrate that generative AI can have positive impacts, provided these programs are used responsibly. As with all AI, it will be important to minimize opportunities for the technology to be misused.

The participants were randomly assigned to either a three-round conversation with the AI about their “preferred conspiracy belief” or a discussion with the AI on a neutral topic. The conversations are publicly accessible, so readers can see for themselves how the study panned out; they are organized by conspiracy theory and can be sorted by how effective the intervention was. What might be unexpected is the cordiality with which the conversations unfolded. Even participants who distrusted AI going in tended to show a notable change in their beliefs after the interaction. It helps that the chatbot maintains a level of “politeness” that builds rapport with users while presenting facts and evidence.

While AI has been partly responsible for the spread of misinformation, it can also be used to set the record straight. As with any tool, it can be turned to nefarious ends or to the betterment of the world. Still, some worry that an AI chatbot will not ultimately help filter misinformation. The concern is that the very rapport a chatbot builds with users could make them more vulnerable: as users come to view these systems as trustworthy sources of information, they become more likely to accept misleading content as well. Despite the study’s results, then, some will continue to doubt that this kind of AI chatbot can be put to positive use.

Aaron Dadisman is a contributing writer for the Association of Foreign Press Correspondents in the United States (AFPC-USA) who specializes in music and arts coverage. He has written extensively on issues affecting the journalism community, as well as the impact of misinformation and disinformation on the media environment and on domestic and international politics. Aaron has also worked as a science writer, covering climate change, space, and biology.