What Is Sora? And What Does It Mean for News Consumers?
Sora is the latest AI-powered tool from OpenAI, so new that it's only available to a few select researchers, academics and visual artists. This software generates highly realistic videos from short snippets of text, like "historical footage of California in the gold rush." After typing in that description, presto! Sora generates a high-resolution video in a fraction of the time it would take a digital artist to create it and far faster than actually filming that scene on-site with actors, props, lighting and cameras.
I don’t really understand how this software works even after reading this explanation, but suffice it to say, it’s groundbreaking. I can confirm the clips OpenAI shared are mind-blowing and include a stylish woman walking down a street in Tokyo, a cute otter surfing, an extreme close-up of an eye blinking, and a grandmother making a birthday wish. They are truly amazing and far better than any other simulated videos out there, but they aren’t perfect representations of reality.
Some have a cartoonish quality, while others have obvious tells, like the candles still burning after the grandmother blows them out or a horse disappearing into the ground as it walks along a river. And that’s just a few of the flaws. The Wall Street Journal put together this excellent video primer on how you can spot Sora’s problems and detect when a video is AI-generated.
But experts expect a lot of those kinks to be worked out before too long. What’s most astonishing is that some tech savants predict this text-to-video software will someday generate content that is indistinguishable from reality. Take a minute to let that sink in and think about how this technology, at a minimum, could revolutionize filmmaking, digital art, gaming, advertising, marketing, even education.
That may be all well and good, but here's the hitch, and it will sound familiar: video generation software like Sora will make creating deepfakes incredibly easy and detecting them even harder. So far, no AI detection tool is 100 percent reliable, according to an investigation by Scribbr.
An even bigger problem is that Sora can do much more than produce purely artistic videos. It can also:
generate a video from a real photo
fill in missing frames in an existing video
extend real footage with events that did not happen
This is all happening while the public is still trying to make sense of ChatGPT. Adding lifelike video to the mix poses yet another challenge for news consumers everywhere, especially in an election year. Research shows that AI-generated deepfakes can be even more potent than other media when it comes to swaying public opinion because of the trust people place in video evidence.
And if that’s not enough to worry about, these deepfake videos can also spawn false memories where people remember something that didn’t happen. It’s actually quite common for humans to combine real memories with something they’ve seen or heard and then forget the source of that new memory. This phenomenon is well documented, and studies have shown that deepfake videos can implant memories of someone saying or doing something they never did.
"The marketplace of ideas already suffers from truth decay," wrote UVA cyber privacy expert Danielle Citron and University of Texas law professor Robert Chesney in a paper they co-authored. "Deepfakes will exacerbate this problem significantly. The risks to our democracy and to national security are profound as well."
Now, according to OpenAI, Sora has checks in place to prevent users from creating content in violation of its usage policies. “We’ll be taking several important safety steps ahead of making Sora available… including a text classifier that will check and reject text input prompts that request extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others.” That sounds reasonable, but no security system is foolproof. Plus, Sora-like copycats may not have the same guardrails.
With more than 80 elections scheduled around the world in 2024, it’s not a big leap to say it’s only a matter of time before highly realistic, AI-generated deepfake videos of politicians appear on social media.
"The information age is over," declared TechRadar journalist Christian Guyton in his story on Sora. "Let the disinformation age begin."
That proclamation may be a bit premature, but it’s not an overstatement to say Sora is ushering in a new era of digital media that will challenge our perceptions of memory, reality and even truth.
This article was originally published on News Literacy Matters.
Award-winning international business journalist Sissel McCarthy is a Distinguished Lecturer and Director of the Journalism Program at Hunter College and founder of NewsLiteracyMatters.com, an online platform dedicated to teaching people how to find credible information in this digital age. She has been teaching news literacy and multimedia reporting and writing for more than 16 years at Hunter College, NYU, and Emory University following her career as an anchor and reporter at CNN and CNBC. McCarthy serves on the board of the Association of Foreign Press Correspondents.