Heed the warning signs? ‘Godfather’ of AI cautions against misuse of AI.

Geoffrey Hinton, the man widely considered the “Godfather” of artificial intelligence, has left Google. He’s now saying some of the potential dangers stemming from the same technology he helped build are “quite scary.”

Noah Berger/AP
Computer scientist Geoffrey Hinton, who studies artificial intelligence applications, poses at Google's headquarters in Mountain View, California, on March 25, 2015. Mr. Hinton has concerns about the potential dangers of the technology he helped build.

Sounding alarms about artificial intelligence has become a popular pastime in the ChatGPT era, taken up by high-profile figures as varied as industrialist Elon Musk, leftist intellectual Noam Chomsky, and the retired statesman Henry Kissinger.

But it’s the concerns of insiders in the AI research community that are attracting particular attention. Geoffrey Hinton, a pioneering researcher widely known as the “Godfather of AI,” quit his role at Google so he could speak more freely about the dangers of the technology he helped create.

Over his decadeslong career, Mr. Hinton did pioneering work on deep learning and neural networks that helped lay the foundation for much of the AI technology we see today.

There has been a spasm of AI introductions in recent months. The San Francisco-based startup OpenAI, the Microsoft-backed company behind ChatGPT, rolled out its latest artificial intelligence model, GPT-4, in March. Other tech giants have invested in competing tools – including Google’s “Bard.”

Some of the dangers of AI chatbots are “quite scary,” Mr. Hinton told the BBC. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”

In an interview with MIT Technology Review, Mr. Hinton also pointed to “bad actors” who may use AI in ways that could have detrimental impacts on society – such as manipulating elections or instigating violence.

Mr. Hinton says he retired from Google so that he could speak openly about the potential risks as someone who no longer works for the tech giant.

“I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he told MIT Technology Review. “As long as I’m paid by Google, I can’t do that.”

Since announcing his departure, Mr. Hinton has maintained that Google has “acted very responsibly” regarding AI. He told MIT Technology Review that there are also “a lot of good things about Google” that he would want to talk about – but those comments would be “much more credible if I’m not at Google anymore.”

Google confirmed that Mr. Hinton had retired from his role after 10 years overseeing the Google Research team in Toronto.

Mr. Hinton declined further comment Tuesday but said he would talk more about it at a conference Wednesday.

At the heart of the debate on the state of AI is whether the primary dangers are in the future or present. On one side are hypothetical scenarios of existential risk caused by computers that supersede human intelligence. On the other are concerns about automated technology that’s already getting widely deployed by businesses and governments and can cause real-world harm.

“For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn’t only include AI experts and developers,” said Alondra Nelson, who, until February, led the White House Office of Science and Technology Policy and its push to craft guidelines around the responsible use of AI tools.

“AI is no longer abstract, and we have this kind of opening, I think, to have a new conversation about what we want a democratic future and a nonexploitative future with technology to look like,” Ms. Nelson said in an interview last month.

A number of AI researchers have long expressed concerns about racial, gender, and other forms of bias in AI systems, including text-based large language models that are trained on huge troves of human writing and can amplify discrimination that exists in society.

“We need to take a step back and really think about whose needs are being put front and center in the discussion about risks,” said Sarah Myers West, managing director of the nonprofit AI Now Institute. “The harms that are being enacted by AI systems today are really not evenly distributed. It’s very much exacerbating existing patterns of inequality.”

Mr. Hinton was one of three AI pioneers who in 2019 won the Turing Award, an honor that has become known as the tech industry’s version of the Nobel Prize. The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.

Mr. Bengio, a professor at the University of Montreal, signed a petition in late March calling for tech companies to agree to a six-month pause on developing powerful AI systems, while Mr. LeCun, a top AI scientist at Facebook parent Meta, has taken a more optimistic approach.

This story was reported by The Associated Press. AP technology reporter Matt O’Brien reported from Cambridge, Massachusetts.
