How ChatGPT Can Be Used to Launch Fake News Sites
AI model ChatGPT has been causing a stir in the media landscape as of late. Media companies' responses range from openly welcoming the AI chatbot into their newsrooms to rejecting it outright. Some, like BuzzFeed, plan to incorporate AI and ChatGPT itself into future publications and activities, despite evidence that the model keeps making glaring factual mistakes. But beyond that, a new concern has arisen in the shadow of AI's rapid development: the technology could be weaponized to further saturate the media landscape with misinformation and disinformation.
Poynter's Alex Mahadevan was able to create an entire fictional newspaper and launch a fake news story in less than ten minutes with the AI model. "The names and images [of the staff are] totally made up. Passable bullsh*t on all of the above, that I'd wager most people wouldn't think twice about if they saw them on a pink slime news site," he tweeted. Indeed, the fabricated staff profiles the model generated are as fascinating as they are unsettling. "Samantha Rodriguez is a skilled journalist and editor with a passion for local news," wrote the chatbot about a fictional person; Samantha Rodriguez does not exist. "She joined the St. Pete Sentinel [a fictional news publication] in 2012 and quickly moved up the ranks to become the paper's metro editor." The model then went on to produce a fictional analysis of Rodriguez's fictional writing style.
This was possible because ChatGPT's underlying model "works by sifting through the internet, accessing vast quantities of information, processing it, and using artificial intelligence to generate new content from user prompts," wrote Poynter journalist Seth Smalley. "Users can ask it to produce almost any kind of text-based content." In the wrong hands, Mahadevan argues, this is dangerous fuel for "pink slime" sites: deliberately partisan or misleading news sources built to generate revenue without regard for the quality of information. Where dubious news sites once outsourced material to overseas writers to make their pages look legitimate, ChatGPT makes that step unnecessary. Scarier still, AI tools can also generate realistic headshots for bad actors to pass off as their "writing staff."
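To make concrete how low the barrier is, the sketch below shows the kind of single-prompt generation Smalley describes, using OpenAI's Python SDK. The client style, model name, and prompt here are illustrative assumptions, not Mahadevan's actual steps.

    # Minimal sketch of single-prompt text generation via OpenAI's
    # Python SDK (v1-style client). Model name and prompt are
    # illustrative assumptions, not Mahadevan's actual workflow.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; any chat-capable model works
        messages=[{
            "role": "user",
            "content": "Write a short staff bio for a fictional metro editor "
                       "at a local newspaper called the St. Pete Sentinel.",
        }],
    )

    print(response.choices[0].message.content)

A single request like this returns fluent, plausible-sounding copy in seconds, which is exactly what makes a pink slime site so cheap to fill at scale.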
"Are the barriers to entry getting lower? The answer is yes," said Priyanjana Bengani, a senior research fellow at Columbia Journalism School's Tow Center who studies pink slime networks. "Now anybody sitting anywhere can spin one of these things up." "I don't think it's going to be transformative overnight," she continued, pointing to the earlier arrival of deepfakes and DALL-E, another OpenAI program. Still, the situation is concerning given the already low levels of information literacy throughout the United States: if legitimate-looking sites can be generated in just a few minutes, more people will be fooled.
Mike Caulfield, a research scientist at the University of Washington’s Center for an Informed Public who teaches media literacy tactics, echoes this concern. “Previously a well written, well laid out publication with headshots and bylines, etc., meant something,” he said. “Signals of authority were expensive, and that formed a barrier to entry…What has happened over the past 30 years is that the formerly expensive signals — the ones that focused on surface features — have become incredibly cheap, but we are still teaching students to look for those signals. It’s a massive disaster in the making.”
The full consequences of easily accessible AI tools for generating and publishing content across the internet remain to be seen. But many journalists are sounding alarm bells as more and more sites shift their business models toward AI-generated content that extends well beyond journalism. ChatGPT's possibilities are vast, and with them come outcomes and consequences that will not all be healthy for our democracy, the flow of information, or society as a whole.