Readers Write: Can we teach robots to think ethically?
Letters to the Editor for the October 8, 2012 weekly print issue: When we create artificial intelligence, will we create artificial 'ethicators,' too? The potential for 'cognitive decision-making skills' in computers is both challenging and exciting.
Can we teach robots to think ethically?
Regarding the Sept. 17 cover story, "Man & Machine," on the development of artificial intelligence (AI): I don't wish to be an alarmist, but I'm glad we're still far from inventing self-reasoning machines. Humankind has a history of creating new technologies simply because they're possible, only thinking about their impact later. Ray Bradbury suggested that science fiction is the nursery of new possibilities for humanity. If so, it should also be considered a warning.
From Isaac Asimov's story collection "I, Robot" to HAL in Stanley Kubrick's film "2001: A Space Odyssey," thinkers have long been asking: How can we be sure an artificial intelligence will be good? A machine has no moral sense or inner Jiminy Cricket to guide it. Will we create artificial "ethicators," too? If we can't even train dogs reliably, are we really capable of training machines with human-level reasoning?
Extremely sophisticated, "smart" software could play a key role in reviving the US economy, even as highly capable computer-based systems replace some human job functions. But the article doesn't really push to the most challenging frontier of AI.
Computer systems may develop to the point where they seem to possess cognitive decision-making skills and reach conclusions not foreseen by their creators. These themes were touched on by "cyber prophets" such as the computer pioneer Bill Joy in his groundbreaking article "Why the future doesn't need us" (Wired Magazine, 2000).
One of the most vital aspects of this new world is the rapid proliferation of a vast variety of "networks" – where "smart" machines and "smart" systems share information in an endless "ebb and flow." The flowing data are altered and improved in what some refer to as a kind of "collective intelligence." Our current Internet is a mild precursor of the potential involved in such a system.
This prospect can sometimes seem overwhelming, but I am reassured that, in facing both exciting potential and sobering challenge, our highest sense of intelligence will be a steadfast guide.
Dr. Allan Hauer