EU leaders in a race against time to agree on first AI regulations

The European Union has been drafting artificial intelligence guidelines for years, but the emergence of OpenAI’s ChatGPT has upped the urgency. If EU leaders aren’t able to reach a deal this week, negotiators will be forced to pick up the issue next year.

Eric Risberg/AP
Former OpenAI CEO Sam Altman participates in a discussion during the Asia-Pacific Economic Cooperation CEO Summit, Nov. 16, 2023, in San Francisco.

The generative AI boom has sent governments worldwide scrambling to regulate the emerging technology, but it also has raised the risk of upending a European Union push to approve the world’s first comprehensive artificial intelligence rules. 

The 27-nation bloc’s Artificial Intelligence Act has been hailed as a pioneering rulebook. But with time running out, it’s uncertain if the EU’s three lawmaking institutions can thrash out a deal Dec. 6 in what officials hope is a final round of closed-door talks.

Europe’s years-long efforts to draw up AI guardrails have been bogged down by the recent emergence of generative AI systems like OpenAI’s ChatGPT, which have dazzled the world with their ability to produce human-like work but raised fears about the risks they pose.

Those concerns have driven the United States, United Kingdom, China, and global coalitions like the Group of 7 major democracies into the race to regulate the rapidly developing technology, though they’re still catching up to Europe.

Besides regulating generative AI, EU negotiators need to resolve a long list of other thorny issues, such as a full ban on police use of facial recognition systems, which have stirred privacy concerns.

Chances of clinching a political agreement between EU lawmakers, representatives from member states, and executive commissioners “are pretty high partly because all the negotiators want a political win” on a flagship legislative effort, said Kris Shrishak, a senior fellow specializing in AI governance at the Irish Council for Civil Liberties.

“But the issues on the table are significant and critical, so we can’t rule out the possibility of not finding a deal,” he said.

Some 85% of the technical wording in the bill already has been agreed on, Carme Artigas, AI and digitalization minister for Spain, which holds the rotating EU presidency, said at a press briefing Dec. 5 in Brussels.

If a deal isn’t reached in the latest round of talks, starting the afternoon of Dec. 6 and expected to run late into the night, negotiators will be forced to pick it up next year. That raises the odds the legislation could get delayed until after EU-wide elections in June – or go in a different direction as new leaders take office.

One of the major sticking points is foundation models, the advanced systems that underpin general purpose AI services like OpenAI’s ChatGPT and Google’s Bard chatbot. 

Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

The AI Act was intended as product safety legislation, like similar EU regulations for cosmetics, cars, and toys. It would grade AI uses according to four levels of risk – from minimal or no risk posed by video games and spam filters to unacceptable risk from social scoring systems that judge people based on their behavior.

The new wave of general purpose AI systems released since the legislation’s first draft in 2021 spurred European lawmakers to beef up the proposal to cover foundation models.

Researchers have warned that powerful foundation models, built by a handful of big tech companies, could be used to supercharge online disinformation and manipulation, cyberattacks, or creation of bioweapons. They act as basic structures for software developers building AI-powered services so that “if these models are rotten, whatever is built on top will also be rotten – and deployers won’t be able to fix it,” said Avaaz, a nonprofit advocacy group.

France, Germany, and Italy have resisted the update to the legislation and are calling instead for self-regulation – a change of heart seen as a bid to help homegrown generative AI players, such as French startup Mistral AI and Germany’s Aleph Alpha, compete with big U.S. tech companies like OpenAI.

Brando Benifei, an Italian member of the European Parliament who is co-leading the body’s negotiating efforts, was optimistic about resolving differences with member states.

There’s been “some movement” on foundation models, though there are “more issues on finding an agreement” on facial recognition systems, he said.

This story was reported by The Associated Press. Matt O’Brien contributed from Providence, Rhode Island.
