Meet Sora: AI-created videos test public trust

The OpenAI logo is displayed on a cellphone with an image on a computer monitor generated by ChatGPT, Dec. 8, 2023. OpenAI is now diving into the world of artificial intelligence-generated video with its new text-to-video generator tool, Sora.

Michael Dwyer/AP/File

February 26, 2024

In a world where artificial intelligence can conjure up fake photos and videos, it’s getting hard to know what to believe.

Will photos of crime-scene evidence, or videos of authoritarian crackdowns such as China’s at Tiananmen Square or of police brutality, pack the same punch they once did? Will trust in the media, already low, erode even more?

Such questions became more urgent earlier this month when OpenAI, the company behind ChatGPT, announced Sora, an AI system that lets anyone generate short videos. No camera is needed. Just type in a few descriptive words or phrases, and voilà: out comes a realistic-looking, but entirely computer-generated, video.

Why We Wrote This

OpenAI’s Sora, a text-to-video tool still in the testing phase, has set off alarm bells, threatening to widen society’s social trust deficit. How can people know what to believe, when they “can’t believe their eyes”?

The announcement of Sora, which is still in the testing phase, has set off alarm bells in some circles of digital media.

“This is the thing that used to be able to transcend divisions because the photograph would certify that this is what happened,” says Fred Ritchin, former picture editor of The New York Times Magazine and author of “The Synthetic Eye: Photography Transformed in the Age of AI,” a book due out this fall.


“The guy getting attacked by a German shepherd in the Civil Rights Movement was getting attacked. You could argue, were the police correct or not correct to do what they did? But you had a starting point. We don’t have that anymore,” he says. 

Technologists are hard at work trying to mitigate the problem. Prodded by the Biden administration, several big tech companies have agreed to embed technologies to help people tell the difference between AI-generated photos and the real thing. The legal system has already grappled with fake videos of high-profile celebrities. But the social trust deficit, in which large segments of the public distrust their governments, courts, scientists, and news organizations, could widen.

“We need to find a way to regain trust, and this is the big one,” says Hany Farid, a professor at the University of California, Berkeley, and a pioneer in digital forensics and image analysis. “We’re not having a debate anymore about the role of taxes, the role of religion, the role of international affairs. We’re arguing about whether two plus two is four. ... I don’t even know how to have that conversation.”

While the public has spent decades struggling with digitally manipulated photos, Sora’s video-creation abilities represent a new challenge.

“The change is not in the ability to manipulate images,” says Kathleen Hall Jamieson, a communication professor and director of the Annenberg Public Policy Center at the University of Pennsylvania. “The change is the ability to manipulate images in ways that make things seem more real than the real artifact itself.”


The technology isn’t perfect yet, but it is intriguing. In samples released by OpenAI, a video of puppies playing in the snow looks real enough; another shows three gray wolf pups that morph into a half-dozen as they frolic; and an AI-generated “grandmother” blows on birthday candles that don’t go out.

Though the samples were shared online, OpenAI has not yet released Sora publicly; access is limited to a small group of outside testers.

A green wireframe model covers an actor's face during the creation of a synthetic facial reanimation AI video, known as a deepfake, in London, Feb. 12, 2019.
Reuters/File

A boon to creative minds

The technology could prove a boon to artists, film directors, and ad agencies, offering new outlets for creativity and speeding up the work of producing video.

The challenge lies with those who might use the technology unscrupulously. The immediate problem may prove to be the sheer number of videos produced with the help of generative AI tools like Sora.

“It increases the scale and sophistication of the fake video problem, and that will cause both a lot of misplaced trust in false information and eventually a lot of distrust of media generally,” Mark Lemley, law professor and director of the Stanford Program in Law, Science and Technology, writes in an email. “It will also produce a number of cases, but I think the current legal system is well-equipped to handle them.”

Such concerns are not limited to the United States.

“It’s definitely a world problem,” says Omar Al-Ghazzi, professor of media and communications at the London School of Economics. But it’s wrong to think that the technology will affect everyone in the same way, he adds. “A lot of critical technological research shows this, that it is those marginalized, disempowered, disenfranchised communities who will actually be most affected negatively,” particularly because authoritarian regimes are keen to use such technologies to manipulate public opinion.

In Western democracies, too, a key question is, who will control the technology?

Governments can’t properly regulate the technology anytime soon because they don’t have the expertise, says Professor Hall Jamieson of the Annenberg Public Policy Center.

Combating disinformation

The European Union has enacted the Digital Markets Act and the Digital Services Act to combat disinformation. Among other things, the laws set out rules for digital platforms and protections for online users. The U.S. is taking a more hands-off approach.

In July, the Biden administration announced that OpenAI and other large tech companies had voluntarily agreed to use watermarking and other technologies to ensure people could detect when AI had enhanced or produced an image. Many digital ethicists worry that self-regulation won’t work. 

“That can all be a step in the right direction,” says Brent Mittelstadt, professor and director of research at the Oxford Internet Institute at the University of Oxford in the United Kingdom. But “as an alternative to hard regulation? Absolutely not. It does not work.”

Consumers also have to become savvier about distinguishing real videos from fakes. And they will, if the Adobe Photoshop experience is any guide, says Sarah Newman, director of art and education at the Berkman Klein Center’s metaLAB at Harvard, which explores the digital arts and humanities.

Three decades ago, when Photoshop began popularizing still-photo manipulation, many people would have been fooled by a doctored photo of Donald Trump kissing Russian President Vladimir Putin, she says. Today, they would dismiss it as an obvious fake. The same savvy will come in time for fake videos, Ms. Newman predicts.

Photojournalists will also have to adapt, says Brian Palmer, a longtime freelance photographer based in Richmond, Virginia. “We journalists have to give people a reason to believe and understand that we are using this technology as a useful tool and not as a weapon.”

For more than 30 years, he says, he’s been trying to represent people honestly. “I thought that spoke for itself. It doesn’t anymore.” So, a couple of months ago, he put up on his website a personal code of ethics, which starts, “I do not and will not use generative artificial intelligence in my photography and journalism.”