How Misinformation on Twitter Has Grown Under Musk
Elon Musk’s disastrous acquisition of Twitter has been in the news non-stop since his takeover. Musk completely changed the landscape of the social media giant, firing about 3,750 people, roughly half of Twitter’s staff. The Tesla CEO has also personally spread misinformation, including about the attack on Paul Pelosi, where he perpetuated the conspiracy theory that Pelosi was attacked by a male lover in the middle of the night.
Misinformation continues to spread on the platform. According to Poynter, data shows that so-called “superspreaders” of misinformation have received more engagement than ever before. These “superspreaders” are defined as “accounts that consistently publish popular tweets containing links to known misinformation.” Since Musk’s takeover, engagement with these accounts and their posts has increased 44 percent. Musk himself has driven some of these numbers: Musk’s personal account interacted with “four out of the five accounts that had gained the most influence” after he acquired Twitter.
“It is notoriously difficult to identify precisely the role of algorithmic amplification versus a theoretical and elusive ‘organic’ baseline of how the content would have spread in the absence of the algorithm,” said Bastien Carniel, data and policy lead at Science Feedback, who conducted the study that yielded this concerning result. “One hypothesis we had was whether one of Elon Musk’s first decisions was to tweak the recommender algorithm to give more voice to superspreaders or remove some sort of ‘reduced reach’ status for these accounts, which would amount to the same thing.”
Musk hasn’t just engaged with these accounts; he has actively restored the platforms of several that had been banned, removed some moderation rules, and gutted the moderation team and its code of conduct. And the problem extends beyond the misinformation itself: it also leaves users unsure which information on the platform can actually be trusted.
When an active shooter began terrorizing the University of Virginia (UVA), users were unable to identify accurate updates about the emergency at hand due to a number of factors, including false UVA accounts and the policy change surrounding Twitter verification. Because the official account had not paid the new monthly fee, it could not carry Twitter’s signature “blue check,” making it impossible for users to know which information was accurate and which was bogus.
"Many will be looking for other ways to connect with people and to get information," said Donyale Padgett, a professor of communication studies at Wayne State University in Detroit. "Especially in a crisis situation, it's a way to share information with the greatest number of people. The people whose lives are most affected by the situation might not have a lot of options. They need to get this information and they need to get it quickly…. Now it's a free-for-all. To think that could be compromised? It doesn't make me feel good. It definitely is a breach of confidence in the whole system."
Musk’s ever-shifting, quicksand-style management of the platform has failed its users over and over again—and as long as Musk has final say, it’s clear that reliable information will no longer be easily identifiable and available.