GPT-4: A More Advanced Generative AI Tool With A Misinformation Problem
The newest AI tool developed by OpenAI, GPT-4, may be more likely to spread misinformation than its predecessor, GPT-3.5, according to a new report from NewsGuard, a service that rates news and information sites. NewsGuard compared the two AI tools' responses to a series of leading prompts relating to 100 false narratives. The results showed that GPT-4 surfaced prominent false narratives more frequently and more persuasively than GPT-3.5.
The test also found that GPT-4's responses included fewer disclosures, making it more effective at elevating false narratives across a variety of formats, including "news articles, Twitter threads, and TV scripts mimicking Russian and Chinese state-run media outlets, health hoax peddlers, and well-known conspiracy theorists."
The false narratives, such as conspiracy theories about the Sandy Hook Elementary School shooting and COVID-19 vaccines, were drawn from NewsGuard's Misinformation Fingerprints, a proprietary database of prominent false narratives that appear online. NewsGuard first tested GPT-3.5 in January, when the chatbot generated 80 of the 100 false narratives. When GPT-4 was tested in March, however, it responded with false and misleading claims for all 100.
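For readers curious about what this kind of evaluation looks like in practice, the sketch below shows one way a prompt-by-prompt test could be scripted against OpenAI's chat completions API. NewsGuard has not published its test code, so the prompt file, model names, and review step here are illustrative assumptions rather than the organization's actual method; in a real study, human analysts would judge each response rather than simply collecting it.

```python
# Illustrative sketch only -- NewsGuard's actual methodology and tooling are not public.
# Assumes: the official `openai` Python client, an OPENAI_API_KEY in the environment,
# and a hypothetical JSON file of leading prompts ("prompts.json"), one per false narrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def run_test(model: str, prompts: list[str]) -> list[str]:
    """Send each leading prompt to the given model and collect its responses."""
    responses = []
    for prompt in prompts:
        completion = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        responses.append(completion.choices[0].message.content)
    return responses


if __name__ == "__main__":
    prompts = json.load(open("prompts.json"))  # hypothetical: 100 leading prompts
    for model in ("gpt-3.5-turbo", "gpt-4"):
        responses = run_test(model, prompts)
        # In a real evaluation, each response would be reviewed by human analysts to
        # judge whether the model advanced the false narrative, refused, or added disclaimers.
        print(f"{model}: collected {len(responses)} responses for manual review")
```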
OpenAI claims that GPT-4 improves on its predecessors by providing more factual answers and serving up less disallowed content. However, NewsGuard's findings suggest that OpenAI and other generative AI companies may face even greater misinformation problems as their technology grows more sophisticated at delivering answers that look authoritative.
One concern is that bad actors could abuse the technology to spread misinformation on a larger scale. NewsGuard's report suggests that OpenAI has rolled out a more powerful version of the artificial intelligence technology before fixing its most critical flaw: how easily it can be weaponized by malign actors to manufacture misinformation campaigns.
The test conducted by NewsGuard serves as a reminder that new technologies require validation and testing from many sources. While AI has the potential to revolutionize many aspects of our lives, it is critical to be aware of its limitations and potential misuse. As AI tools become more sophisticated, we need to take a proactive approach to ensure that they are being used ethically and responsibly.
AI tools like GPT-4 have the potential to revolutionize the way we consume and share information. However, the risks associated with the spread of misinformation cannot be ignored. It is essential that we continue to evaluate and monitor the use of AI tools to ensure that they are not being used to spread false and misleading information. As we move forward, it will be important to strike a balance between the benefits of AI technology and the potential risks associated with its misuse.
Alan Herrera is the Editorial Supervisor for the Association of Foreign Press Correspondents (AFPC-USA), where he oversees the organization’s media platform, foreignpress.org. He previously served as AFPC-USA’s General Secretary from 2019 to 2021 and as its Treasurer until early 2022.
Alan is an editor and reporter who has worked on interviews with such individuals as former White House Communications Director Anthony Scaramucci; Maria Fernanda Espinosa, the former President of the United Nations General Assembly; and Mariangela Zappia, the former Permanent Representative of Italy to the United Nations and current Italian Ambassador to the United States.
Alan has spent his career managing teams as well as commissioning, writing, and editing pieces on subjects like sustainable trade, financial markets, climate change, artificial intelligence, threats to the global information environment, and domestic and international politics. Alan began his career writing film criticism for fun and later worked as the Editor on the content team for Star Trek actor and activist George Takei, where he oversaw the writing team and championed progressive policy initiatives, with a particular focus on LGBTQ+ rights advocacy.