What turmoil over a CEO tells us about the future of AI

Carlos Barria/Reuters
Sam Altman, CEO of OpenAI, attends the Asia-Pacific Economic Cooperation (APEC) CEO Summit in San Francisco, Nov. 16, 2023.


If anyone ever makes a movie about how not to fire a CEO, they could base the script on OpenAI’s playbook. On Friday, the San Francisco artificial intelligence company fired its chief executive, triggering a revolt from employees who threatened to leave; early Wednesday, it announced it had reached an agreement to reinstate him. 

Because OpenAI owns ChatGPT, a leading AI language generator, or chatbot, each of the company’s head-spinning moves got plenty of attention. While the full details behind the firing of CEO Sam Altman are still not known, the turbulent events highlight wider societal questions over who will control this powerful transformative technology.

Will it be a few billionaire-owned corporations? A nonprofit consortium? The government? 

Why We Wrote This

A story focused on

The company behind ChatGPT embodied a key question surrounding artificial intelligence: Will the profit motive face any constraints, for a technology that carries risks as well as benefits?

OpenAI had tried a novel structure, as a nonprofit controlling a for-profit company – and with its board pledged to the mission of benefiting humanity. The upheaval at OpenAI represents, at least in part, an ongoing battle between the fear of AI’s potential dangers and the lure of its expected benefits and profits.

The outcome, with Mr. Altman reinstated as CEO and new people on the company’s board, may signal the powerful role that capitalists and entrepreneurs will play – at least in the United States – in shaping the future of this emerging technology.

“This is an early skirmish in a war for the future,” says Tim O’Reilly, founder and CEO of O’Reilly Media and a visiting professor at University College London’s Institute for Innovation and Public Purpose.

AI sprang into the public consciousness almost exactly a year ago when OpenAI released ChatGPT to the public. It surpassed all expectations as an overnight sensation. People around the world couldn’t wait to interact with a super-knowledgeable computer that talked the way they did. 

Less than two months later, OpenAI backer Microsoft announced it was plowing $10 billion into the company and would incorporate ChatGPT into its products. That set off a corporate spending race as Google, Amazon, and other tech giants sped up their own AI projects and investments. Capitalism was outrunning ethical concerns – again – in a period of disruptive technological change.

But this time it came with a twist. The companies themselves began raising the specter of super-intelligent machines causing harm if regulators didn’t provide guardrails. 

In the rush for capital, OpenAI’s nonprofit structure came under pressure. The board came to feel it couldn’t trust Mr. Altman, a co-founder as well as CEO, who pushed for rapid development and deployment of AI by releasing the technology to the public. In his view, that was the best way to democratize the technology, expose its faults, and accelerate its benefits. His reinstatement and the overhauling of the board suggest that this techno-optimism has won out at OpenAI.

The ongoing struggle between techno-optimism and doomerism gets exaggerated in every period of rapid technological change, says Benjamin Breen, a historian at the University of California, Santa Cruz, and author of an upcoming book on utopian science in the mid-20th century. No one knows where AI will take humanity. If history is any guide, he adds, the extremists on both sides tend to get it wrong. 

For the foreseeable future, then, the battle over AI may not be whether the machines control people, but who and what controls the machines. 

“Money wins a lot,” says Lilly Irani, a professor of communication at the University of California, San Diego. “Techno-optimism and techno-doomerism both miss the point about who has the voice at the table and gets to decide how the technology is designed, developed, and deployed.”


https://www.csmonitor.com/Business/2023/1122/What-turmoil-over-a-CEO-tells-us-about-the-future-of-AI