The Impact of Deepfakes on Societies and Institutions

Deepfake is the term used for manipulated footage that can ultimately diminish the reputation of individuals, businesses, political figures, and media coverage. As false information spreads at an ever-increasing tempo, journalists must be especially cautious about their sources in order to maintain the integrity of their work. With recent technological advances, deepfakes have become even more challenging to detect. In a webinar hosted by The Association of Foreign Press Correspondents U.S.A. in partnership with Microsoft, experts Alyssa Suhm (Project Manager for Microsoft’s Defending Democracy Program) and Ashish Jaiman (Director of Technology and Operations for the Customer Security and Trust organization at Microsoft) discussed the impact of deepfake technology on our society and proposed solutions to mitigate its effects.

A series of TikTok clips that appeared to feature Tom Cruise recently went viral. The edits were so meticulous and convincing that users began to question whether or not the videos were real. This is only one of many examples of deepfake videos that have reached the public eye and are nearly impossible to debunk. Yet the intent behind this type of footage is what determines whether it causes harm. During her presentation, Alyssa Suhm discussed the positive and negative uses of AI-generated synthetic media. Under the positive umbrella, she mentioned media resources aimed at accessibility, education, public safety, and innovation. Regarding the negative uses of deepfakes, she highlighted pornographic exploitation, election manipulation, and disinformation that threatens journalism’s credibility.

The latter examples are the ones responsible for generating skepticism about media coverage as a whole. According to the “Liar’s Dividend” described by professors Bobby Chesney and Danielle Citron, the exponential growth of deepfakes makes people question any kind of information, even when it comes from a trustworthy outlet. In light of this situation, Suhm proposed several approaches to combat the spread of altered footage and reduce its exposure: education on media literacy, reinforcement of platform policies, regulation from the individual to the legal level, and detection technology.

Suhm also discussed Microsoft’s efforts to teach people to verify the information they are exposed to. The two main initiatives were the Spot The Deepfake quiz and Microsoft’s latest project in partnership with PolitiFact, the Hybrid Threat Training Curriculum. “This is a resource that is intended to address both cyber-security threats and disinformation threats…This curriculum includes different modules that can be combined and reorganized depending on who the audience is, and we specially designed this with some of our high-risk customers in mind…These trainings are aimed at not only explaining these issues but also providing tangible resources for the audience to identify and mitigate the threats.”

Beyond media literacy tools, Microsoft also aims to provide technical approaches that reduce the amount of deepfakes online. Ashish Jaiman, Director of Technology and Operations for the Customer Security and Trust organization, expanded on Project Origin, an authentication partnership with major news outlets such as the BBC and The New York Times. “The idea is if we can put a signal, fingerprint, or watermark in the piece of media when it is published, it can ensure that that signal is not tampered with. At the consumption point, we can actually look at that signal and be very confident that this media wasn’t manipulated.”
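
The provenance idea Jaiman describes amounts to binding a cryptographic fingerprint to a piece of media at publication and checking it again at consumption. The sketch below is only a rough illustration of that general technique, not Project Origin’s actual format or protocol: it assumes a publisher-held Ed25519 key pair and uses SHA-256 hashing via the third-party Python “cryptography” package to show how a publish-time signature lets a consumer detect any later alteration.

```python
# Illustrative sketch only: sign a media fingerprint at publication,
# verify it at consumption. Not Project Origin's real specification.
# Requires the third-party "cryptography" package (pip install cryptography).

import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature


def publish(media_bytes: bytes, private_key: ed25519.Ed25519PrivateKey) -> bytes:
    """At publication: hash the media and sign the hash with the outlet's key."""
    fingerprint = hashlib.sha256(media_bytes).digest()
    return private_key.sign(fingerprint)


def verify(media_bytes: bytes, signature: bytes,
           public_key: ed25519.Ed25519PublicKey) -> bool:
    """At consumption: recompute the hash and check it against the signature."""
    fingerprint = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, fingerprint)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    outlet_key = ed25519.Ed25519PrivateKey.generate()
    original = b"frame data of a published news video"
    signature = publish(original, outlet_key)

    # Untouched media verifies; any edit breaks the signature check.
    print(verify(original, signature, outlet_key.public_key()))           # True
    print(verify(original + b"edit", signature, outlet_key.public_key()))  # False
```

In practice, a scheme like this only works if the consuming application knows which public key to trust for a given outlet, which is why the effort involves partnerships with publishers rather than a purely technical fix.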

Based on the recommendations offered by Alyssa Suhm and Ashish Jaiman on how to combat deepfakes, journalists must be careful with the content they share in order to limit the spread of adulterated media. Through media literacy and technological strategies, the press will be able to maximize its efforts to provide authentic and truthful coverage. The key is to analyze whether a given piece of synthetic media is being used positively or negatively and to respond to each case with the intent of providing civic education.

This is a very complex problem. One type of entity won’t be able to solve the problem. It requires a multi-model kind of approach. We talked about media literacy, legislative solution, technology, but also a collaboration with civic society, journalism, with media companies and organizations.
— Ashish Jaiman, Director of Technology & Operations for the Customer Security & Trust organization at Microsoft

Isabella Soares is a news associate of the Foreign Press.