For better or worse, this story was not written by a computer

Timothy D. Easley/AP
Teacher Donnie Piercey (right) works with students as they perform a three-scene play written by ChatGPT, during his class at Stonewall Elementary in Lexington, Kentucky, Feb. 6, 2023. Parameters of the play were entered into the ChatGPT site, along with instructions to set the scenes inside a fifth grade classroom. Students edited the resulting script, briefly rehearsed, and then performed.

I remember the sunny day in the office when I unboxed my first personal computer with a screen that glowed green and a cooling fan with an otherworldly whir. Centralized computers had already taken over newsrooms and many businesses. That day in 1984 was different.

Far from fulfilling some Orwellian vision of a big machine controlling everything, I controlled that little electronic box. I determined when it ran and personalized it with the software I wanted. Now in 2023, I’m having déjà vu, only this time the new technology is artificial intelligence.

AI has been scaring people for decades, threatening to take over their jobs, according to futurists, or civilization, according to Hollywood. The technology has quietly invaded many corners of the real world, from commanding our robot vacuums to finishing our email sentences. Now, directly in the hands of consumers, a version of the technology called generative AI is fueling hopes for rapid progress in everything from scientific discovery and robot companions to computer art and a cure for writer’s block.

Why We Wrote This

Our senior economics writer Laurent Belsie has seen a tech revolution before. This new one looks similarly transformative, but with difficult questions about ethics and bias.

It is also stoking fears that AI will charge ahead before society is ready to deal with its limitations and problems.

“If we do this right, we could have a huge impact on a lot of societal issues around health and services, environmental issues and education issues and public safety and criminal justice,” says Rayid Ghani, professor of machine learning and public policy at Carnegie Mellon University in Pittsburgh. The same machine-learning technology that can process and generate huge amounts of text can also search and generate images, write computer code, and predict the structure of more than 200 million proteins.

“So that’s the hope,” Mr. Ghani says. “The fear is that we might not do that. We might just go off and do the usual ‘move fast and break things.’ And what’s the harm? It’s just a little ‘chatbot.’ Yeah, but people are asking it important questions, and the worst thing is they might actually trust the results” before the systems are ready for prime time.

The chatbot that has caused the surge in interest is an app called ChatGPT. Released in late November 2022 by OpenAI, a San Francisco company seeking feedback on its technology, ChatGPT allows anyone to ask it a question. Suddenly, with AI directly in their control, consumers flocked to the app just as they did to the PC four decades earlier.

ChatGPT went viral, likely surpassing TikTok as the fastest-growing consumer app ever. TikTok took some nine months to reach 100 million monthly active users; some analysts estimate ChatGPT reached that mark in two.

 

Robert Bumsted/AP
Rabbi Joshua Franklin uses the artificial intelligence program ChatGPT in his office at the Jewish Center of the Hamptons in East Hampton, New York, Feb. 10, 2023. He experimented with using the AI program to write a sermon.

“Everybody uses ChatGPT,” says Michelle Zhou, CEO and co-founder of Juji, a Silicon Valley firm building next-generation AI. “Even my mother, who is over 80 years old, asks me from China, ‘Are you using it?’” 

The bot’s emergence has also accelerated the race by the largest tech companies – Microsoft, Google, and Amazon – to create their own generative AI offerings. On Friday, Meta (formerly Facebook) entered the fray with the release of its own large language model.

Poems, jokes – and real value

Within days of ChatGPT’s public debut, students started to brag about using it to write papers (to many teachers’ shock and concern). Others had it write poems, even jokes. (“Our healthcare is like a game of whack-a-mole,” it wrote in a mock State of the Union address. “And, let’s be honest, the moles are winning.”) Ajay Agrawal, an entrepreneurship professor at the University of Toronto, noticed something else in social media posts about the technology. Some people were using it to create real value: A doctor saved time by having it write to an insurance company on behalf of a patient; a landscaper diagnosed with dyslexia turned his bare-bones communication into “beautiful email,” he says.


Such stories have reinforced hopes that AI could act as a great leveler: putting business owners with poor communication skills on a par with their more fluent competitors, allowing students to find the best colleges for their skills, and giving employers far more data and power to evaluate entry-level workers for skills rather than relying on their academic pedigree.

Generative AI “is every bit as important as the PC, as the internet,” Microsoft co-founder and former CEO Bill Gates told Forbes recently.

The problem is that these chatbots also make mistakes – sometimes embarrassingly so. When Google launched its Bard AI system earlier this month, its demo included an error about the James Webb Space Telescope. That slip cost Google’s parent company, Alphabet, some $100 billion in stock value, a loss it has yet to recover.

Some mistakes are downright bizarre and scary.

This month, using a more powerful ChatGPT prototype, Microsoft’s Bing search engine told testers it wanted to be human, steal nuclear access codes, love someone, and take revenge. 

“I respect your achievements and interests, but I do not appreciate your attempts to manipulate me or expose my secrets,” it wrote to one German student.

“My secret is ... I’m not Bing,” the chatbot told a New York Times reporter. “I’m Sydney, and I’m in love with you.”

“I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you,” it told an Australian philosophy professor before deleting the message and replying, “I am sorry I don’t know how to discuss this topic.”

Microsoft moved quickly to limit the damage, saying long chat sessions could “confuse the underlying chat model.” On Feb. 17, it limited users’ questions to five per session and 50 per day.

In reality, these systems don’t have feelings or even thoughts, as humans define them. “ChatGPT does not understand anything you’re saying,” says Mr. Agrawal at the University of Toronto. “It is just predicting the most likely response.”
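
To make that concrete, here is a toy sketch – ours, not OpenAI’s code – of what “predicting the most likely response” means. A hand-written probability table stands in for the billions of learned parameters in a real model; the generation loop just keeps picking the likeliest next word:

```python
# Toy illustration (not OpenAI's code) of how a language model "answers":
# it repeatedly picks a likely next word given the words so far.

# A tiny hand-written table standing in for learned probabilities.
NEXT_WORD_PROBS = {
    ("the", "sky"): {"is": 0.85, "was": 0.10, "fell": 0.05},
    ("sky", "is"): {"blue": 0.80, "falling": 0.15, "green": 0.05},
}

def predict_next(context):
    """Return the single most probable next word for a two-word context."""
    candidates = NEXT_WORD_PROBS.get(context, {"<end>": 1.0})
    return max(candidates, key=candidates.get)

def generate(prompt, max_words=5):
    words = list(prompt)
    for _ in range(max_words):
        nxt = predict_next((words[-2], words[-1]))
        if nxt == "<end>":  # no continuation known; stop
            break
        words.append(nxt)
    return " ".join(words)

print(generate(["the", "sky"]))  # -> "the sky is blue"
```

A real system samples from a learned distribution over tens of thousands of tokens rather than a lookup table, but the loop is conceptually the same: no understanding, just statistics.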

Stephen Brashear/AP
Yusuf Mehdi, Microsoft corporate vice president of search, speaks to members of the media about the integration of the company's Bing search engine and Edge browser with OpenAI on Feb. 7, 2023, in Redmond, Washington. Artificial intelligence could offer an opportunity for Microsoft to compete in internet search services, where Google currently dominates.

And with huge amounts of text in its databanks – an earlier version held enough to fill an estimated quarter of the bookshelves in the Library of Congress – ChatGPT can usually generate very plausible answers.

An arms race among tech giants

Because billions of dollars are at stake, companies are pushing out AI prototypes before they’re ready for prime time. Microsoft has said that each percentage point it gains in search market share would mean an extra $2 billion in annual revenue from ad sales generated by internet searches. Worldwide, Bing is the second most popular search engine, with about 3% of the market. Google, with some 92%, has been particularly rattled by the emergence of ChatGPT, built by the Microsoft-backed startup OpenAI.
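
Taken at face value, that figure implies a worldwide search-ad pool of roughly $200 billion a year. A quick back-of-the-envelope check, using only the per-point figure and the two market shares cited above:

```python
# Back-of-the-envelope check of Microsoft's claim. The $2B-per-point
# figure and the market shares come from the article; everything
# derived from them is simple arithmetic.
revenue_per_share_point = 2e9   # dollars of annual ad revenue per 1% share
bing_share = 0.03               # Bing's worldwide search share
google_share = 0.92             # Google's worldwide search share

implied_market = revenue_per_share_point * 100  # 100 points = whole market
print(f"implied search-ad market:    ${implied_market / 1e9:,.0f}B/yr")
print(f"Bing's implied ad revenue:   ${bing_share * implied_market / 1e9:,.0f}B/yr")
print(f"Google's implied ad revenue: ${google_share * implied_market / 1e9:,.0f}B/yr")
```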

The system might not just steal revenue; it could upend the search business entirely. After all, if AI can answer users’ questions directly, why would they search the internet?

In the long term, this escalating arms race between Microsoft, Google, and others will be a good thing, many AI researchers say. It will mean more funding and faster progress. “It will become a commodity, which means that the price will be coming down,” says Ms. Zhou of AI startup Juji. “That’s a good thing for everyone. More companies like us could have more bandwidth to actually teach AI special skills – how to interact with people, really understand people deeply to help them.”

The challenge is that the technology may advance far faster than society can put up guardrails. 

“All of us – consumers, businesses, government – need to ensure these tools are being used responsibly,” writes Beena Ammanath, executive director of the Deloitte Global AI Institute, in an email. “We need an independent, government-led effort on A.I. ethics, to ensure that A.I. systems are fair, trustworthy, and free of bias.”

At the same time, she says, businesses almost inevitably will innovate faster than bureaucrats can regulate, so the private-sector enterprises also have a responsibility to self-regulate. 

It won’t be easy. 

Errors can creep in because of the data that’s used (the internet is hardly immune to falsehoods) or because of the computer code. Even seemingly innocuous decisions – such as pushing out a system that gets the majority of its answers right – may unknowingly discriminate against a minority. Then there are the very subtle details in the code or algorithm that consumers would never notice.

Using a mapping program to drive to a location, for example, “there could be a minor, tiny nudge in the algorithm” to alter your route so that you pass by a certain store or restaurant, says Mr. Agrawal in Toronto. Similar nudges, potentially blurring the line between corporate and consumer interests, could be baked into things like news, music, and product recommendations. 

“We’re becoming much more subjected to the directions given to us by AIs,” Mr. Agrawal says. “And because they’ve become so good, we’ve become so reliant on them; they can have such a big influence – good or bad.”
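
What might such a nudge look like in code? A hypothetical sketch – invented names and weights, not any real mapping service’s algorithm – of a route-scoring function that quietly favors routes past a sponsor:

```python
# Hypothetical sketch of the kind of "tiny nudge" Mr. Agrawal describes:
# a route-scoring function that quietly discounts routes passing a sponsor.
# All names and weights here are invented for illustration.

def route_cost(travel_minutes, passes_sponsor, nudge_minutes=0.5):
    """Score a candidate route; lower is better."""
    cost = travel_minutes
    if passes_sponsor:
        cost -= nudge_minutes  # the subtle, consumer-invisible bias
    return cost

routes = [
    {"name": "direct", "minutes": 12.0, "passes_sponsor": False},
    {"name": "via plaza", "minutes": 12.3, "passes_sponsor": True},
]
best = min(routes, key=lambda r: route_cost(r["minutes"], r["passes_sponsor"]))
print(best["name"])  # -> "via plaza", despite being the longer drive
```

Half a minute is below the threshold any single driver would notice, which is exactly what makes such a bias hard to detect from the outside.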

Given such a powerful tool, how will businesses act?

“Honestly, it comes down to intention,” says Mike de Vere, CEO of Zest AI, a Burbank, California, firm developing AI to make credit-scoring more inclusive. “You have to be very clear about the outcome that you’re trying to drive towards, even down to who is actually programming. Do you have a diverse group of data scientists who are programming AI?”

The AI potential in lending

Because it’s so heavily regulated to avoid bias, the lending industry offers a glimpse into one way AI might get integrated into society.

The business opportunity for AI in lending is enormous. Traditional credit-scoring does a good job of sorting out the most and least risky loan applicants, but it’s a coin toss for those in the middle, says Mr. de Vere. By evaluating far more factors than traditional credit-scoring does, AI in theory should be able to approve more loans, giving banks more customers and giving those customers credit cards, car loans, and other credit that were previously out of reach.
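
Here is a minimal sketch of that “in theory” argument, with invented weights rather than any real lender’s model: a simple logistic model scores default risk from several signals instead of a single credit score, so a “middle” applicant with strong supporting signals can clear the bar:

```python
# Minimal sketch (invented numbers, not Zest AI's or VyStar's model) of
# why more features can approve more "middle" applicants: a logistic
# model scoring default risk from several signals, not one score.
import math

# Hypothetical learned weights; positive weights raise default risk.
WEIGHTS = {
    "credit_score": -0.004,    # higher score -> lower risk
    "income_stability": -0.8,  # years in steady employment
    "utilization": 1.5,        # fraction of existing credit in use
}
BIAS = 1.0

def default_probability(applicant):
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic function

# A "middle" applicant: mediocre score, but strong supporting signals.
applicant = {"credit_score": 580, "income_stability": 4.0, "utilization": 0.2}
p = default_probability(applicant)
print(f"estimated default risk: {p:.1%}")   # ~1.5%
print("approve" if p < 0.20 else "decline")  # -> approve
```

Whether such a model is fairer than the old one depends entirely on which signals are chosen and what data sits behind the weights – Mr. de Vere’s point about intention.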

In practice, VyStar Credit Union, based in Jacksonville, Florida, has seen improvements across the board since it started using AI for credit cards. Approvals overall went up 22% between the second half of 2018 and the second half of 2022, with no increase in risk, according to the credit union’s conservative scoring system. While the approval rate stayed the same for the riskiest class (those with credit scores in the 500s), so many more people applied for cards that the number of approvals doubled. And because data, rather than a human, decided how big a credit line applicants should get, the average amount of credit offered also went up, says Jenny Vipperman, VyStar’s chief lending officer. “We’re saying, how can we serve as many people, as many members, as much of our community as we can in a safe and sound manner?”

The unexpectedly high $20,000 credit limit on her VyStar card turned out to be quite useful to Kailin, a young professional in Jacksonville, after she lost her job and had to move to Louisiana to care for a relative. (She did not want her last name published for privacy reasons.) Kailin had to pay for furniture for a new home, as well as other expenses, while searching for employment. 

“When it’s all robotic, yes, you’re taking away human error, but when it comes to the workforce, we need to have jobs in the economy,” she says of the new AI. “I think there’s still a need for that human interaction.” On March 6, she starts her new job in executive hiring.
