Wanted: Ethical intelligence for artificial intelligence

From the Pentagon to the United Nations, leaders seek advice on AI’s potential to harm – or to serve.

Commuters in Beijing walk by surveillance cameras. At least 75 countries are actively using AI tools such as facial recognition for surveillance.

October 18, 2019

By 2030 the world’s total gross domestic product will be 14% higher because of one thing: greater use of artificial intelligence, or AI.

That’s the conclusion of PwC, a professional services firm based in London. If such forecasts are right, these sophisticated computer programs will be doing tasks such as driving vehicles, planning and waging wars, and advising humans on how to manage both their health and their wealth.

One observer writing in the Journal of the American Medical Association has declared that the “hype and fear” surrounding AI “may be greater than that which accompanied the discovery of the structure of DNA or the whole genome.”

Yet despite the possibility of colossal impacts from AI, the U.S. government has been doing little to study its ethical implications. The federal government’s Office of Technology Assessment, which might have led the effort, was closed in 1995; other research groups such as the Government Accountability Office and the Congressional Research Service have seen their budgets severely cut.

AI’s effect on privacy has already become a major issue, as personal data is constantly gathered in myriad ways that individuals may not realize. Facebook’s Mark Zuckerberg has been meeting with members of Congress to discuss how his company might do a better job of protecting users’ privacy. Last year a group of Google employees banded together to question the ethics of Project Maven, under which Google would supply AI image-recognition capabilities for U.S. military drones.

AI has already drawn criticism when used to recommend prison sentences. In one case, a system consistently suggested longer sentences for black people than for white people convicted of the same crime. As AI grows more sophisticated, uncovering such hidden biases in software, and figuring out why they occur, is likely to become harder.

Even the choice of voices for popular virtual assistants, such as Siri and Alexa, has come under ethical scrutiny. Why choose mainly feminine voices for many AI programs, whose primary role is to do our bidding submissively with little pushback?

For decades the U.S. Navy has equipped its warships with Phalanx automated cannons capable of aiming and firing on their own far more rapidly than humans could. And the Navy is experimenting with a ship called Sea Hunter, which would patrol the oceans armed but without a human crew. In a test voyage it has already sailed from Hawaii to California on its own.

Recently Germany, France, and other countries proposed a declaration at the United Nations urging regulation of lethal autonomous weapons, more popularly referred to as killer robots. While the autonomous killer robots portrayed in the “Terminator” movies still seem a ways off, they’re no longer considered science fiction. Some AI ethicists are calling for talks to create an international treaty to regulate the use of robotic weapons.

Recognizing its growing need for guidance, the Pentagon has been advertising for an ethicist to advise it. At the same time, France, Germany, and Japan have begun joint research into what they’re calling “human-centered” AI that would respect individual privacy and provide transparency.

To add to the urgency for AI ethics, Google recently announced that it had achieved a breakthrough in quantum computing, which could usher in much faster data crunching and, potentially, much smarter AI systems.

All these developments, and others, show that the efforts of governments, private companies, and individuals are needed to provide ethical guidance as AI advances into our lives. Intelligence, whether artificial or not, must be built on the common good. Alertness now can prevent alarm later.