Microsoft Backs New Laws to Tackle Deepfake Fraud and AI Used for Sexual Abuse Images

Microsoft has called on members of Congress to regulate the use of AI-generated deepfakes. The company wants to establish protections against deepfakes used for fraud, manipulation, and abuse.

Microsoft Vice Chair and President Brad Smith stated the following in a blog post:

“While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud. One of the most important things the US can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”

Smith is pushing policymakers to do more than protect the integrity of the electoral process; he also stressed the importance of guarding seniors from fraud and children from abuse.

Microsoft aims to give law enforcement officials the regulatory structure to prosecute AI-generated scams and fraud, with Smith pressing lawmakers to “ensure that our federal and state laws on child sexual exploitation and abuse and non-consensual intimate imagery are updated to include AI-generated content.” Not long ago, the Senate approved a bill that cracks down on sexually explicit deepfakes. The bill’s passage came after middle and high school students were found to be fabricating explicit images of their classmates. Under the bill, victims could sue the creators for damages.

Microsoft was compelled to put more safety protocols in place for its own AI products after bad actors exploited a loophole in its Designer AI image creator to produce explicit images of celebrities, including pop superstar Taylor Swift. Swift found herself at the center of a PR storm as AI-generated nude images flooded the internet. Microsoft subsequently decried the proliferation of deepfakes and stressed that it would take more steps to limit their spread.

These AI tools have become weapons, and the rapid expansion of their abuse is what prompted Microsoft to act to mitigate the damage. The abuse is not limited to sexually explicit imagery; it also poses new threats to elections, contributes to financial fraud, and fuels the next generation of harmful cyberbullying.

Evidence that Microsoft tools were used to create the deepfakes first surfaced in a 404 Media article, which reported that the images spread within a Telegram community dedicated to creating non-consensual porn. Community members recommended Microsoft Designer for generating the images in question. In theory, Microsoft Designer refuses to produce images of famous people, but users found its rules could easily be circumvented with small tweaks to prompts.

Complicating matters, a Microsoft engineer contacted multiple politicians, including Washington state Attorney General Bob Ferguson, and claimed the company did not heed his warnings. The engineer said he “discovered a security vulnerability that allowed me to bypass some of the guardrails that are designed to prevent the [DALL-E] model from creating and distributing harmful images…. I reached the conclusion that DALL·E 3 posed a public safety risk and should be removed from public use until OpenAI could address the risks associated with this model.”

New laws must be introduced to tackle this pervasive problem, and Microsoft is using its massive platform to position itself at the forefront of these efforts.