Wanted: Ethical intelligence for artificial intelligence

From the Pentagon to the United Nations, leaders seek advice on AI’s potential to harm – or to serve.

[Photo: Commuters in Beijing walk past surveillance cameras. At least 75 countries are actively using AI tools such as facial recognition for surveillance.]

By 2030, the world’s total gross domestic product will be 14% higher because of one thing: wider use of artificial intelligence, or AI.

That’s the conclusion of PwC, a professional services firm based in London. If such forecasts are right, these sophisticated computer programs will be doing tasks such as driving vehicles, planning and waging wars, and advising humans on how to manage both their health and their wealth.

One observer writing in the Journal of the American Medical Association has declared that the “hype and fear” surrounding AI “may be greater than that which accompanied the discovery of the structure of DNA or the whole genome.”

Yet despite AI’s potentially colossal impact, the U.S. government has done little to study its ethical implications. The federal government’s Office of Technology Assessment, which might have led the effort, was closed in 1995, and other research arms, such as the Government Accountability Office and the Congressional Research Service, have seen their budgets severely cut.

AI’s effect on privacy has already become a major issue, as personal data is constantly gathered in myriad ways that individuals may not realize. Facebook’s Mark Zuckerberg has been meeting with members of Congress to discuss how his company might do a better job of protecting users’ privacy. Last year a group of Google employees banded together to question the ethics of Project Maven, under which Google would supply AI image-recognition capabilities for U.S. military drones.

AI has already drawn criticism when used to recommend prison sentences. In one case it consistently suggested longer sentences for black defendants than for white defendants convicted of the same crime. As AI grows more sophisticated, revealing the hidden biases written into such software, and figuring out how they arose, is likely to become harder.

Even the choice of voices for popular virtual assistants, such as Siri and Alexa, has come under ethical scrutiny. Why choose mainly feminine voices for many AI programs, whose primary role is to do our bidding submissively with little pushback?

For decades the U.S. Navy has used Phalanx automated cannons on its warships; they can aim and fire on their own far more rapidly than humans can. And the Navy is experimenting with a vessel called Sea Hunter, which would patrol the oceans armed and without a human crew. In a test voyage, it has already sailed from Hawaii to California on its own.

Recently Germany, France, and other countries proposed a declaration at the United Nations urging regulation of lethal autonomous weapons, more popularly referred to as killer robots. While the autonomous killer robots portrayed in the “Terminator” movies still seem a ways off, they’re no longer considered science fiction. Some AI ethicists are calling for talks to create an international treaty to regulate the use of robotic weapons.

Recognizing its growing need for guidance, the Pentagon has been advertising for an ethicist to advise it. At the same time, France, Germany, and Japan have begun joint research into what they’re calling “human-centered” AI that would respect individual privacy and provide transparency.

To add to the urgency for AI ethics, Google recently announced that it had demonstrated “quantum supremacy,” a milestone in quantum computing that could soon usher in much faster data crunching and, potentially, much smarter AI systems.

All these developments, and others, show that the efforts of governments, private companies, and individuals are needed to provide ethical guidance as AI advances into our lives. Intelligence, whether artificial or not, must be built on the common good. Alertness now can prevent alarm later.
