What AI Tools Can Be Used to Help Stop the Spread of Harmful Content?
The ongoing collaboration between artificial intelligence and human intelligence shows how the two can work in synergy toward a genuinely positive outcome. Recent initiatives, such as those undertaken by startups like Logically, demonstrate how AI is being leveraged to combat the spread of misinformation across the internet and social media platforms. While AI can analyze vast volumes of data at unprecedented scale, human involvement remains indispensable in fact-checking to uphold credibility.
As noted by Lyric Jain, Logically's founder and CEO, harmful news frequently circulates faster than accurate information. By leveraging AI's capabilities, we can better differentiate truth from falsehood, nurturing a more informed and discerning society.
Here are several examples of the tools being used to combat the ever-present spread of harmful content.
Logically
Logically uses its algorithms to comprehend and assess textual content. These algorithms rate the credibility of content sources on a scale from low to high, determining whether articles are reliable by comparing them with similar content across 100,000 sources. The algorithms extend beyond text, examining metadata and images as well, and can also filter out profane and obscene material. During India’s recent election campaign, Logically scrutinized over 1 million articles, identifying 50,000 as fraudulent.
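To make the idea of cross-source credibility rating concrete, here is a minimal sketch of rating a source by how many of its claims are corroborated elsewhere in a corpus. All function names, thresholds, and data are hypothetical illustrations, not Logically's actual system.

```python
# Illustrative sketch of a source-credibility rating step.
# Names, thresholds, and data are invented for illustration only.

def rate_source(claims: list[str], corpus: dict[str, set[str]]) -> str:
    """Rate a source low/medium/high by the fraction of its claims
    corroborated by claims published by other sources in the corpus."""
    corroborated = sum(
        1 for claim in claims
        if any(claim in other for other in corpus.values())
    )
    ratio = corroborated / len(claims) if claims else 0.0
    if ratio >= 0.8:
        return "high"
    if ratio >= 0.4:
        return "medium"
    return "low"

# Toy corpus of claims already seen from other outlets.
corpus = {
    "outlet_a": {"election held in april", "turnout rose"},
    "outlet_b": {"election held in april"},
}

# One of two claims is corroborated, so the source rates "medium".
print(rate_source(["election held in april", "aliens voted"], corpus))
```

A production system would compare semantic similarity rather than exact strings, but the shape of the decision, corroboration ratio mapped to a rating band, is the same.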
Sensity AI
Sensity AI stands at the forefront of combating a relatively recent form of deceptive content: deepfakes. Unlike written material, manipulated images or videos can be very difficult to identify with the naked eye or without specialized training, which makes an automated detector invaluable. Established in 2018, Sensity AI addresses the escalating sophistication of deepfakes, which pose risks such as reputation sabotage and misinformation, among other malicious uses. By evaluating and identifying "visual threats," Sensity AI's detection API integrates video forensics and computer vision to determine the authenticity of images and videos.
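The idea of combining two detection signals, video forensics and computer vision, into one verdict can be sketched as follows. This is a conceptual illustration only; the function, scores, and threshold are invented and do not represent Sensity AI's actual API.

```python
# Hypothetical sketch of fusing two detector signals into one verdict.
# Scores (0 = authentic, 1 = manipulated) and threshold are invented.

def classify_media(forensics_score: float, vision_score: float,
                   threshold: float = 0.5) -> str:
    """Average a forensics score and a computer-vision score and
    return a verdict on the media's authenticity."""
    combined = (forensics_score + vision_score) / 2
    return "manipulated" if combined >= threshold else "authentic"

# Both detectors flag the clip strongly, so it is labeled manipulated.
print(classify_media(0.9, 0.7))
```

Real detectors weight and calibrate their signals rather than averaging them, but combining independent evidence sources before deciding is the core pattern.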
Bot Sentinel
Founded in 2018, Bot Sentinel set out to identify and monitor trollbots and dubious X profiles using machine learning. With a reported 95% accuracy rate in bot identification, its algorithms are built around behaviors prohibited by X, analyzing accounts to ascertain their credibility and distinguish trustworthy sources from untrustworthy ones. The platform compiles these findings into a comprehensive database, updated daily to track each account's activity.
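A rule-violation scoring approach like the one described above can be sketched in a few lines. The behavior names, weights, and cutoff below are hypothetical stand-ins, not Bot Sentinel's model.

```python
# Toy sketch of scoring an account on platform-rule violations.
# Behavior names, weights, and cutoff are invented for illustration.

RULE_WEIGHTS = {
    "duplicate_posts": 0.4,       # posting identical content repeatedly
    "aggressive_following": 0.3,  # mass follow/unfollow churn
    "coordinated_hashtags": 0.3,  # hashtag campaigns in lockstep
}

def trollbot_score(behaviors: set[str]) -> float:
    """Sum the weights of observed rule-violating behaviors (0.0 to 1.0)."""
    return sum(RULE_WEIGHTS.get(b, 0.0) for b in behaviors)

def classify(behaviors: set[str], cutoff: float = 0.5) -> str:
    """Label an account based on its accumulated violation score."""
    return "untrustworthy" if trollbot_score(behaviors) >= cutoff else "trustworthy"

# Two violations push this account past the cutoff.
print(classify({"duplicate_posts", "coordinated_hashtags"}))
```

A trained classifier would learn these weights from labeled accounts instead of hard-coding them, which is where the machine learning comes in.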
TrollWall
TrollWall is a Software-as-a-Service (SaaS) solution that uses artificial intelligence to automatically detect and hide toxic and inappropriate comments across social media platforms. With TrollWall, organizations can cultivate a secure, respectful online environment that promotes constructive dialogue. By swiftly hiding offensive remarks, it lets clients focus their resources on audience engagement and community development.
This AI-powered tool protects social media profiles from malicious attacks, catering not only to businesses but also to journalists who want to gauge the impact of harmful comments on their pages. TrollWall lightens the load on social media moderators through automated detection, streamlining an otherwise time-consuming process. Because it operates in real time, it is particularly useful for managing live events and newsroom broadcasts.
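The moderation loop described above, score each incoming comment and hide it if it crosses a threshold, can be sketched minimally. The keyword heuristic here is a deliberately crude stand-in for a real toxicity model and is not how TrollWall actually scores comments.

```python
# Minimal sketch of threshold-based comment moderation.
# The keyword scorer is a crude stand-in for a real toxicity model.

TOXIC_TERMS = {"idiot", "trash", "shut up"}  # illustrative list only

def toxicity(comment: str) -> float:
    """Return a rough toxicity score in [0.0, 1.0] based on term hits."""
    text = comment.lower()
    hits = sum(term in text for term in TOXIC_TERMS)
    return min(1.0, hits / 2)

def moderate(comments: list[str], threshold: float = 0.5) -> list[str]:
    """Return only the comments that stay visible on the page."""
    return [c for c in comments if toxicity(c) < threshold]

# The abusive comment is hidden; the benign one remains.
print(moderate(["Great stream!", "shut up you idiot"]))
```

In a live setting this filter would run per comment as it arrives, which is what makes the real-time guarantee matter for live events and broadcasts.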