How Should Journalists Handle Deepfakes?

A now-infamous deepfake of actor Tom Cruise.

Deepfakes are a new addition to the zeitgeist: photos or videos manipulated to create a false narrative about or around a prominent public figure. They are incredibly difficult to spot because they are engineered to cover their tracks and present as real. The infamous Tom Cruise deepfake shows the truly dangerous potential of creating that false world.

HOW ARE DEEPFAKES MADE?

Plainly and simply, there is no single tool behind deepfakes. Most are produced with deep-learning techniques such as autoencoders and generative adversarial networks, which learn to map one person's face onto another's. Today, a convincing deepfake still requires a wealth of source footage and computing power, but as the technology advances, deepfakes will likely become easier and easier to create, with the worst-case scenario being that anybody, at any time, will be able to make a deepfake on their mobile device.
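To make the face-swap idea concrete, here is a minimal, untrained sketch in numpy of the autoencoder architecture commonly described for deepfakes: one shared encoder compresses any face into a latent code, and two person-specific decoders reconstruct faces in each person's likeness. The swap happens at generation time, when person A's face is encoded and then decoded with person B's decoder. The weights here are random placeholders; real systems train these networks for days on thousands of face images.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM, LATENT_DIM = 64 * 64, 256  # flattened 64x64 grayscale face

# Shared encoder: compresses any face into a compact latent code.
W_enc = rng.normal(0, 0.01, (LATENT_DIM, FACE_DIM))

# Person-specific decoders: each learns to reconstruct one person's face.
W_dec_a = rng.normal(0, 0.01, (FACE_DIM, LATENT_DIM))
W_dec_b = rng.normal(0, 0.01, (FACE_DIM, LATENT_DIM))

def encode(face):
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    return W_dec @ latent

# Training (omitted here) would teach decoder A to reconstruct person A's
# faces from the shared code, and decoder B to reconstruct person B's.

# The "swap": encode a frame of person A, decode with person B's decoder,
# producing person B's likeness with person A's pose and expression.
frame_of_a = rng.random(FACE_DIM)
fake_frame = decode(encode(frame_of_a), W_dec_b)
print(fake_frame.shape)  # (4096,) -- same shape as the input face
```

The expense the section describes comes from the omitted training step: the decoders only produce convincing faces after being fit to large amounts of footage of each person.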

HOW ARE DEEPFAKES USED?

The answer varies, because deepfakes are used for everything from the creator's own amusement to far more sinister purposes. Some people believe deepfakes will have catastrophic consequences for democracies around the world, as politicians can be deepfaked into taking hard-line anti-public or anti-country stances at the push of a button. However, a study done in 2019 revealed a disturbing trend: 90-95 percent of deepfakes are non-consensual, and over 90 percent of the people targeted are women.

When it comes to journalism, deepfakes are used to silence female journalists. Sexualized deepfakes have been used as revenge porn against women in almost every field, but for journalists they also pose a direct challenge to professional credibility in a largely sex-negative world. Without concrete proof that the sexualized images are faked, journalists are discredited and often fired from their jobs.

While reporting on the rape of a child, Indian investigative journalist Rana Ayyub drew the ire of men in her country by suggesting that India protects abusers. In retaliation, fake tweets about how she "hated India" were circulated, followed closely by a deepfake of her head superimposed on someone else's nude body. The Bharatiya Janata Party, India's chief nationalist party, shared it on its official social media page.

HOW CAN WE FIGHT DEEPFAKES?

Tech is essential to combating deepfakes, since tech is the world from which they stemmed. Forensic technicians have several tools at their disposal, such as detectors that analyze subtle shifts in skin color driven by the heartbeat. If, say, the pulse signal in the face doesn't match the rest of the skin, it's a fake.

“We extract several PPG signals from different parts of the face and look at the spatial and temporal consistency of those signals,” said Ilke Demir, a senior research scientist at Intel. “In deepfakes, there is no consistency for heartbeats and there is no pulse information. For real videos, the blood flow in someone’s left cheek and right cheek — to oversimplify it — agree that they have the same pulse.”
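The consistency check Demir describes can be sketched with synthetic data. This is not Intel's actual detector; it is a toy illustration of the principle, assuming we already have pixel arrays for two face regions across a video's frames. A real heartbeat produces the same faint periodic color shift in both cheeks, so their PPG signals correlate; a pasted-on deepfake face carries no shared pulse, so they don't.

```python
import numpy as np

def ppg_signal(region_frames):
    """Mean green-channel intensity per frame for one face region.

    region_frames: array of shape (n_frames, height, width, 3), RGB pixels.
    """
    return region_frames[..., 1].mean(axis=(1, 2))

def pulse_consistency(left_cheek, right_cheek):
    """Pearson correlation between the two regions' PPG signals.

    High correlation suggests a shared heartbeat (real video);
    low correlation suggests the face carries no consistent pulse.
    """
    a, b = ppg_signal(left_cheek), ppg_signal(right_cheek)
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(1)
n, h, w = 300, 16, 16  # 300 frames, 16x16-pixel cheek regions

t = np.arange(n)
pulse = 0.05 * np.sin(2 * np.pi * 1.2 * t / 30)  # ~72 bpm at 30 fps

def region(signal):
    """Simulate a skin region: static texture + pulse signal + sensor noise."""
    base = rng.random((1, h, w, 3))
    noise = 0.01 * rng.normal(size=(n, h, w, 3))
    return base + signal[:, None, None, None] + noise

# "Real" video: both cheeks share the same pulse waveform.
real_score = pulse_consistency(region(pulse), region(pulse))

# "Fake" video: the synthesized face regions carry no shared pulse.
fake_score = pulse_consistency(region(np.zeros(n)), region(np.zeros(n)))

print(real_score, fake_score)  # real near 1.0, fake near 0.0
```

In practice, detectors like the one Demir describes extract many such signals across the face and check both spatial and temporal consistency, rather than a single pairwise correlation.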

TEACHING THE AUDIENCE

Audiences will naturally grow more skeptical as deepfakes become more common. With trust in the media internationally shaken over the past few years, audiences will need more and more proof that content is not faked. Without a process for finding that proof, we leave it up to chance. The "SIFT" method is one such process: Stop, Investigate the source, Find better coverage, and Trace claims, quotes and media to the original context.

Journalists are often the first line of defense against deepfakes and frequently the targets of them. As major organizations continue to find ways to uncover the truth behind these sinister tools of deception, journalists must do the extra work necessary to uphold journalistic ethics and continue to share the truth with the world.