AI can have values if not a conscience

The author of ‘Frankenstein’ started us on a long path to steer new technologies to a higher good.

Reuters
The “Pop.Up Next” concept by Audi, Airbus, and Italdesign, an electric driverless autonomous vehicle with vertical take-off and landing, is pictured during the Viva Tech start-up and technology summit in Paris May 25.

This year marks exactly two centuries since the publication of “Frankenstein; or, The Modern Prometheus,” by Mary Shelley. Even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow myriad ethical questions to be spawned by technologies yet to come.

Today the rapid growth of artificial intelligence (AI) raises fundamental questions: “What is intelligence, identity, or consciousness? What makes humans human?”

What is being called artificial general intelligence, machines that would mimic the way humans think, continues to elude scientists. Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted on popular sci-fi TV series such as “Westworld” and “Humans.”

Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist and science adviser for “Westworld.” “[W]e are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there.”

But that doesn’t mean crucial ethical issues involving AI aren’t at hand. Less sophisticated AI is already embedded in everyday life, from the (sometimes) helpful voice assistants like Alexa to Facebook tagging photos for users.

Besides the much-talked-about vehicles that will drive themselves, AI is crunching huge amounts of data to suggest whether a prisoner, if released, is likely to return to crime; algorithms exist that can choose the best applicants for a job or the right classes for a student to take (not to mention defeat a human at chess or win a debate).

All these systems carry the potential for misuse or unintended harm. One viral video shows an automatic soap dispenser in a public bathroom that dispenses soap only onto white hands. Apparently the design team had not calibrated the sensor to recognize hands with darker skin tones.

While that foul-up might seem frivolous, or even humorous (though perhaps not to those being denied soap), it illustrates a more serious problem: If an employer looks for new hires, for example, using an algorithm based on the characteristics of its presently all-white or all-male staff, might the algorithm recommend only people with those characteristics?

The coming use of autonomous vehicles poses gnarly ethical questions. Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. 

AI "vision" today is not nearly as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem. One possible technique may be to survey human drivers to ask what they would do in myriad driving situations. Another would be to analyze accidents involving AI after the fact, to understand how it proved deficient and fix the problem.

The hope is that AI-driven vehicles will become far better drivers than humans, preventing thousands of injuries and deaths.

But whenever decisions are based on masses of data, “you quickly get into a lot of ethical questions,” notes Tan Kiat How, chief executive of the Info-communications Media Development Authority, a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI.

Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring. Worldwide, high schools and colleges could seriously commit to teaching students in AI courses about the ethical issues this new technology raises.

On June 7 Google pledged to not “design or deploy AI” that would cause “overall harm,” or to develop AI-directed weapons or use AI for surveillance that would violate international norms. It also pledged to not deploy AI whose use would violate international laws or human rights.

While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be “explainable, transparent, and fair,” as S. Iswaran, Singapore’s minister for communications and information, put it recently.

To put it another way: How can we make sure that the thinking of intelligent machines reflects humanity’s highest values? Only then will they be useful servants and not Frankenstein’s unleashed monster.
