Fears about robot overlords are (perhaps) premature

Computer science professor Melanie Mitchell clears up misconceptions about machine learning in “Artificial Intelligence: A Guide for Thinking Humans.”

Courtesy of Macmillan Publishers
“Artificial Intelligence: A Guide for Thinking Humans” by Melanie Mitchell, Farrar, Straus and Giroux, 324 pp.

In “Artificial Intelligence: A Guide for Thinking Humans,” Melanie Mitchell, a computer science professor at Portland State University, tells the story, one of many, of a graduate student who had seemingly trained a computer network to classify photographs according to whether they did or did not contain an animal. When the student looked more closely, however, he realized that the network was not recognizing animals but was instead putting images with blurry backgrounds in the “contains an animal” category. Why? The nature photos that the network had been trained on typically featured both an animal in focus in the foreground and a blurred background. The machine had discovered a correlation between animal photos and blurry backgrounds.

Mitchell notes that these types of misjudgments are not unusual in the field of AI. “The machine learns what it observes in the data rather than what you (the human) might observe,” she explains. “If there are statistical associations in the training data, even if irrelevant to the task at hand, the machine will happily learn those instead of what you wanted it to learn.”

Mitchell’s lucid, clear-eyed account of the state of AI – spanning its history, current status, and future prospects – returns again and again to the idea that computers simply aren’t like you and me. She opens the book by recounting a 2014 meeting on AI that she attended at Google’s world headquarters in Mountain View, California. She was accompanying her mentor, Douglas Hofstadter, a pioneer in the field who spoke passionately that day about his profound fear that Google’s great ambitions, from self-driving cars to speech recognition to computer-generated art, would turn human beings into “relics.” The author’s own, more measured view is that AI is not yet poised to be successful precisely because machines lack certain human qualities. Her belief is that without a good deal of decidedly human common sense, much of which is subconscious and intuitive, machines will fail to achieve human levels of performance.

Many of the challenges of creating fully intelligent machines come down to the paradox, popular in AI research, that “easy things are hard.” Computers have famously vanquished human champions in chess and in “Jeopardy!,” but they still have trouble, say, figuring out whether or not a given photo includes an animal. Machines are as yet incapable of generalizing, understanding cause and effect, or transferring knowledge from situation to situation – skills that we Homo sapiens begin to develop in infancy.

These big themes are fascinating, and Mitchell conveys them clearly. Along the way, she describes specific AI programs in technical language that can be challenging for the layperson (the many charts and illustrations are helpful). She lightens the book, though, with an affable tone, even throwing in the occasional “Star Trek” joke. She also writes with admirable frankness. Posing the question “Will AI result in massive unemployment for humans?” she answers, “I don’t know.” (She adds that her guess is that it will not.) She predicts that AI will not master speech recognition until machines can actually understand what speakers are saying, but then acknowledges that she’s “been wrong before.”

While she’s an AI booster, Mitchell expresses a number of concerns about future implementations of the technology. Recent advances in AI accompanied the growth of the Internet and the related explosion in data. The field is currently dominated by deep learning, which involves networks training themselves by consuming vast amounts of data, and the author warns that “there is a lot to worry about regarding the potential for dangerous and unethical uses of algorithms and data.” She also points out that AI systems are easily tricked, making them vulnerable to hackers, which could have disastrous consequences where technologies like self-driving cars are concerned. Finally, Mitchell worries about the social biases that can be reproduced in AI programs; for instance, facial recognition technology is significantly more likely to produce errors when the subjects are people of color.

The author does an excellent job establishing that machines are not close to demonstrating humanlike intelligence, and many readers will be reassured to know that we will not soon have to bow down to our computer overlords. It’s almost a surprise, then, when Mitchell at the end of the book aligns herself with other researchers “trying to imbue computers with commonsense knowledge and to give them humanlike abilities for abstraction and analogy making” – the very capacities she’s identified as the missing piece in creating superintelligent machines. While computers won’t surpass humans anytime soon, not everyone will be convinced that the effort to help them along is a good idea.
