Harnessing Algorithms to Combat the Spread of Mis- and Disinformation
The proliferation of fake news, misinformation, and disinformation in today's digital age has become a pressing concern. Social media platforms and technological advancements have accelerated the spread of false information, with detrimental consequences for individuals, societies, and democratic institutions, especially in the United States, where trust in the media hovers around 34 percent. Algorithms and artificial intelligence (AI), however, may offer valuable tools in the fight against the dissemination of misleading content. If newsrooms can leverage these technologies, they can detect, control, and counteract the spread of mis- and disinformation more effectively.
The detection of misinformation requires a multifaceted approach that combines algorithms, machine-learning models, and human intervention. Laks V.S. Lakshmanan, a professor of computer science at the University of British Columbia, has developed an algorithm to assist with the detection of misinformation on social media.
Social media companies, as the newly minted gatekeepers of information dissemination, bear significant responsibility for controlling the spread of false content within their networks. Because no regulation yet governs what kinds of information social media sites can allow to spread without consequence, these algorithms act as a proxy regulator. They can analyze communication networks by tracing who has shared articles containing misinformation, then work backward to identify where that information originated, or at least make a strong guess as to the origin of a misinformation campaign.
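The structural idea behind that tracing can be shown in a short sketch. The sketch below is a hypothetical illustration, not Lakshmanan's algorithm: it assumes a data model in which each share of an article records the account, the account it was reshared from (if any), and a timestamp, and it treats posts with no known upstream share as candidate origins.

```python
# Hypothetical illustration of origin tracing on a share graph. This is not
# Lakshmanan's published algorithm; it only sketches the structural idea:
# shares with no known upstream sharer are candidate entry points, ranked by time.
def trace_origins(shares):
    """shares: list of (account, reshared_from, timestamp) tuples,
    where reshared_from is None for an apparently original post.
    Returns candidate origin accounts, earliest first."""
    roots = [(ts, account) for account, reshared_from, ts in shares
             if reshared_from is None]
    return [account for ts, account in sorted(roots)]

shares = [
    ("@seed_account", None, 100),              # first known post of the article
    ("@amplifier_1", "@seed_account", 105),
    ("@amplifier_2", "@amplifier_1", 130),
    ("@independent", None, 220),               # posted the article separately, later
]
print(trace_origins(shares))  # ['@seed_account', '@independent']
```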
“Since these algorithms rely on communication structure alone, content analysis conducted by algorithms and humans is needed to confirm instances of misinformation,” wrote Lakshmanan. “Detecting manipulated articles takes careful analysis. Our research used a neural network-based approach that combines textual information with an external knowledge base to detect such tampering.”
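As a rough illustration of the kind of architecture the quote describes, and not the research team's actual model, the sketch below fuses a text embedding with knowledge-base agreement features in a small PyTorch classifier. The dimensions, layer sizes, and feature definitions are invented for the example.

```python
import torch
import torch.nn as nn

# Rough illustration of combining textual information with external
# knowledge-base features, in the spirit of the approach Lakshmanan
# describes. The published architecture differs; dimensions are invented.
class TamperingDetector(nn.Module):
    def __init__(self, text_dim=768, kb_dim=64, hidden=128):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + kb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, text_emb, kb_feats):
        # text_emb: article encoding (e.g., from a BERT-style encoder)
        # kb_feats: scores for how well the article's claims match the knowledge base
        fused = torch.cat([text_emb, kb_feats], dim=-1)
        return torch.sigmoid(self.classifier(fused))  # probability of tampering

model = TamperingDetector()
score = model(torch.randn(1, 768), torch.randn(1, 64))  # stand-in inputs
print(score)
```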
Detecting misinformation is only the first step; taking decisive action to stop its dissemination is equally crucial. Internet platforms can intervene in various ways, such as suspending user accounts or labeling suspicious posts, as Twitter did with former President Donald Trump during the January 6th insurrection. However, because algorithms and AI-powered networks are not infallible, more human eyes need to be on these stories to do the analog research. Relying solely on algorithms risks both mistakenly intervening on true information and failing to intervene on false information. To address this challenge, Lakshmanan crafted a policy to help users decide when to intervene on false information.
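One simple way to formalize such a trade-off is an expected-cost rule: intervene only when the expected harm of letting a suspect post spread outweighs the expected harm of wrongly flagging true information. The costs and thresholds below are hypothetical, not Lakshmanan's published policy.

```python
# Hypothetical expected-cost intervention rule; the cost values are invented
# for illustration and are not taken from Lakshmanan's policy.
def should_intervene(p_false, cost_missed=10.0, cost_wrongful=3.0):
    """p_false: a detector's estimated probability that a post is false.
    Intervene when the expected harm of ignoring the post exceeds the
    expected harm of flagging a true post by mistake."""
    harm_if_ignored = p_false * cost_missed
    harm_if_flagged = (1 - p_false) * cost_wrongful
    return harm_if_ignored > harm_if_flagged

for p in (0.1, 0.3, 0.5, 0.9):
    print(f"p_false={p}: intervene={should_intervene(p)}")
```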
In addition to platform intervention, launching counter-campaigns can help minimize the impact of misinformation campaigns. Counter-campaign strategies must account for the inherent differences in how true and fake news spread, both in speed and in extent. Factors such as user reactions, topic relevance, and post length should also be considered. Algorithms can analyze these factors and devise efficient counter-campaign strategies that mitigate the propagation of misinformation.
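To make those factors concrete, the sketch below scores posts for counter-campaign targeting. The weighting scheme is entirely hypothetical; the factors named here (reactions, topic relevance, post length) come from the discussion above, but how to combine them is an assumption of the example.

```python
# Hypothetical priority score for counter-campaign targeting. The factors
# (user reactions, topic relevance, post length) come from the text; the
# weights and functional form are invented for illustration.
def counter_campaign_priority(post):
    reactions = post["shares"] + 0.5 * post["likes"]   # engagement proxy
    relevance = post["topic_relevance"]                # 0..1, e.g. a topic-model score
    brevity = 1.0 / (1.0 + post["length"] / 280)       # shorter posts travel farther
    return reactions * relevance * brevity

posts = [
    {"id": "viral_claim", "shares": 900, "likes": 2000, "topic_relevance": 0.9, "length": 120},
    {"id": "long_rant",   "shares": 50,  "likes": 300,  "topic_relevance": 0.4, "length": 900},
]
for post in sorted(posts, key=counter_campaign_priority, reverse=True):
    print(post["id"], round(counter_campaign_priority(post), 1))
```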
The advent of generative AI, powered by large language models like ChatGPT, presents both opportunities and challenges in the battle against mis- and disinformation. These AI models can create articles at unprecedented speed and volume, making it harder to detect and counteract false information in real time and at scale. Research is keeping pace with this challenge: researchers like Lakshmanan are developing novel algorithms and AI-powered techniques to enhance the detection, control, and mitigation of misinformation, recognizing the profound societal impact of the issue and turning a potentially dangerous tool toward strengthening journalistic institutions.
As always, though, countering misinformation and disinformation starts on the ground, with human interactions. Sending as much accurate information as possible into cyberspace must be done by individuals first, using thoughtful, specific language to counter the generalized, biased information provided by bad actors who seek to undermine trust in American institutions. New technologies may make this easier, but they will also present new challenges, and no algorithm can take the place of good journalism and good journalists themselves.