Art community fights for integrity as AI presents artificial images

Artificial intelligence is adding art to its growing résumé. But artists and computer experts are starting to push back against companies that allow AI to create art from original works, citing copyright infringement and the possibility of misinformation.

John Minchillo/AP
A visitor looks at artist Refik Anadol’s “Unsupervised” exhibit at the Museum of Modern Art, Jan. 11, 2023, in New York. The new AI-generated installation is meant to be a thought-provoking interpretation of the New York City museum's prestigious collection.

Countless artists have taken inspiration from “The Starry Night” since Vincent Van Gogh painted the swirling scene in 1889.

Now artificial intelligence systems are doing the same, training themselves on a vast collection of digitized artworks to produce new images you can conjure in seconds from a smartphone app.

The images generated by tools such as DALL-E, Midjourney, and Stable Diffusion can be weird and otherworldly but also increasingly realistic and customizable – ask for a “peacock owl in the style of Van Gogh,” and they can churn out something that might look similar to what you imagined.

But while Van Gogh and other long-dead master painters aren’t complaining, some living artists and photographers are starting to fight back against the AI software companies creating images derived from their works.

Two new lawsuits – one this week from the Seattle-based photography giant Getty Images – take aim at popular image-generating services for allegedly copying and processing millions of copyright-protected images without a license.

Getty said it has begun legal proceedings in the High Court of Justice in London against Stability AI – the maker of Stable Diffusion – for infringing intellectual property rights to benefit the London-based startup’s commercial interests.

Another lawsuit in a U.S. federal court in San Francisco describes AI image-generators as “21st-century collage tools that violate the rights of millions of artists.” The lawsuit, filed on Jan. 13 by three working artists on behalf of others like them, also names Stability AI as a defendant, along with San Francisco-based image-generator startup Midjourney and the online gallery DeviantArt.

The lawsuit alleges that AI-generated images “compete in the marketplace with the original images. Until now, when a purchaser seeks a new image ‘in the style’ of a given artist, they must pay to commission or license an original image from that artist.”

Companies that provide image-generating services typically charge users a fee. After a free trial of Midjourney through the chatting app Discord, for instance, users must buy a subscription that starts at $10 per month or up to $600 a year for corporate memberships. The startup OpenAI also charges for use of its DALL-E image generator, and Stability AI offers a paid service called DreamStudio.

Stability AI said in a statement that “Anyone that believes that this isn’t fair use does not understand the technology and misunderstands the law.”

In a December interview with The Associated Press, before the lawsuits were filed, Midjourney CEO David Holz described his image-making service as “kind of like a search engine” pulling in a wide swath of images from across the internet. He compared copyright concerns about the technology to the way copyright law has adapted to human creativity.

“Can a person look at somebody else’s picture and learn from it and make a similar picture?” Mr. Holz said. “Obviously, it’s allowed for people and if it wasn’t, then it would destroy the whole professional art industry, probably the nonprofessional industry too. To the extent that AIs are learning like people, it’s sort of the same thing and if the images come out differently then it seems like it’s fine.”

The copyright disputes mark the beginning of a backlash against a new generation of impressive tools – some of them introduced just last year – that can generate new visual media, readable text, and computer code on command.

They also raise broader concerns about the propensity of AI tools to amplify misinformation or cause other harm. For AI image generators, that includes the creation of nonconsensual sexual imagery.

Some systems produce photorealistic images that can be impossible to trace, making it difficult to tell the difference between what’s real and what’s AI. And while some have safeguards in place to block offensive or harmful content, experts fear it’s only a matter of time until people utilize these tools to spread disinformation and further erode public trust.

“Once we lose this capability of telling what’s real and what’s fake, everything will suddenly become fake because you lose confidence of anything and everything,” said Wael Abd-Almageed, a professor of electrical and computer engineering at the University of Southern California.

As a test, the AP submitted a text prompt on Stable Diffusion featuring the keywords “Ukraine war” and “Getty Images.” The tool created photo-like images of soldiers in combat with warped faces and hands, pointing and carrying guns. Some of the images also featured the Getty watermark, but with garbled text.

AI can also get details wrong – misshapen feet, fingers, or ears can sometimes give away that an image isn’t real – but there’s no set pattern to look for, and those visual clues can be edited out. On Midjourney, users often post on the Discord chat asking for advice on how to fix distorted faces and hands.

With some generated images traveling on social networks and potentially going viral, they can be challenging to debunk since they can’t be traced back to a specific tool or data source, according to Chirag Shah, a professor at the Information School at the University of Washington, who uses these tools for research.

“You could make some guesses if you have enough experience working with these tools,” Mr. Shah said. “But beyond that, there is no easy or scientific way to really do this.”

For all the backlash, there are many people who embrace the new AI tools and the creativity they unleash. Some use them as a hobby to create intricate landscapes, portraits, and art; others to brainstorm marketing materials, video game scenery, or other ideas related to their professions.

There’s plenty of room for fear, but “what else can we do with them?” asked the artist Refik Anadol this week at the World Economic Forum in Davos, Switzerland, where he displayed an exhibit of climate-themed work created by training AI models on a trove of publicly available images of coral.

At the Museum of Modern Art in New York, Mr. Anadol designed “Unsupervised,” which draws from artworks in the museum’s prestigious collection – including “The Starry Night” – and feeds them into a digital installation generating animations of mesmerizing colors and shapes in the museum lobby.

The installation is “constantly changing, evolving and dreaming 138,000 old artworks at MoMA’s archive,” Mr. Anadol said. “From Van Gogh to Picasso to Kandinsky, incredible, inspiring artists who defined and pioneered different techniques exist in this artwork, in this AI dream world.”

Mr. Anadol, who builds his own AI models, said in an interview that he prefers to look at the bright side of the technology. But he hopes future commercial applications can be fine-tuned so artists can more easily opt out.

“I totally hear and agree that certain artists or creators are very uncomfortable about their work being used,” he said.

For painter Erin Hanson, whose impressionist landscapes are so popular and easy to find online that she has seen their influence in AI-produced visuals, the concern is not about her own prolific output, which brings in $3 million a year.

She does, however, worry about the art community as a whole.

“The original artist needs to be acknowledged in some way or compensated,” Ms. Hanson said. “That’s what copyright laws are all about. And if artists aren’t acknowledged, then it’s going to make it hard for artists to make a living in the future.”

This story was reported by The Associated Press. Matt O’Brien reported from Providence, Rhode Island.
