The Rising Concern of Deepfakes in Healthcare

Artificial intelligence, hailed for its potential to revolutionize healthcare, now raises a daunting specter within the industry: deepfake technology. As AI advances, experts fear the production of sophisticated fake images, audio, and video that are nearly indistinguishable from reality.

This concern stems from the perilous repercussions of misinformation, a hazard starkly highlighted during the COVID-19 pandemic. False information about vaccines, treatments, and masks inundated social media platforms, sowing confusion and eroding public trust in accurate health guidance.

The advent of deepfakes, experts warn, will exacerbate the challenges of responding to emerging health threats, safeguarding sensitive patient data, and fortifying hospitals against escalating cyberattacks.

While the threat of deepfakes remains largely hypothetical in healthcare, the industry is moving to address the peril preemptively. Chris Doss of the RAND Corporation, who led a study on deepfakes in scientific communication, stressed the need to confront the threat while it is still in its infancy. “We really need to be vigilant about it and try to get a hold of it now when it's still a bit nascent,” he told Axios. “We do not want to play catch-up as we have, unfortunately, in the past with, for instance, ransomware attacks.”

Healthcare's concerns about deepfakes fall into several key areas:

  • Misinformation Challenges: Fake content posing as a trustworthy source can crowd out accurate health information and undermine confidence in credible voices. A deepfake video of a figure like Anthony Fauci delivering misleading vaccination advice could have a profound impact.

  • Enhanced Phishing Threats: Scammers could use convincing synthetic audio and visuals to pose as healthcare professionals or insurers, deceiving patients into handing over sensitive financial and health information.

  • Heightened Cybersecurity Risks: Hackers could exploit synthesized audio to breach hospital systems, for instance by using fabricated audio of a hospital CEO's voice to trick the organization's help desk into granting unauthorized access.

Alongside these concerns, healthcare also recognizes the potential benefits of generative AI, even the technology behind deepfakes. Early experiments with tools like ChatGPT show promise in delivering empathetic responses, while researchers explore applications in emotion recognition and drug development.

Yet a recent RAND study dampens that optimism: it found that people, scientists included, struggle to discern deepfakes in scientific communication, and that exposure to such content may not improve detection, contrary to the presumption that experience fosters better identification.