$5 million prize for A.I. targets the 'dystopian conversation'

IBM and X Prize have unveiled a new $5 million competition to spur research into A.I. One purpose of the prize, its backers say, is to counter what they see as unwarranted scaremongering about artificial intelligence.

Jianan Yu/Reuters/File
A member of a German team adjusts a humanoid robot during the 2015 RoboCup finals in Hefei, Anhui province, China, July 22, 2015.

Developers of artificial intelligence (A.I.) now have an added incentive to pursue their work: $5 million.

The prize money was announced at the annual TED conference Wednesday as a joint initiative between tech giant IBM and X Prize, the organization behind the world’s first private race to the moon.

The competition’s backers are motivated, among other things, by a desire to demonstrate the potential benefits to humanity of advances in A.I., though many skeptics have yet to be convinced.

"Personally, I am sick and tired of the dystopian conversation around artificial intelligence," said X Prize founder Peter Diamandis when unveiling the prize.

The competition challenges teams to “develop and demonstrate how humans can collaborate with powerful cognitive technologies to tackle some of the world’s grand challenges,” according to an X Prize statement.

The winner will be determined at the 2020 TED conference, where three finalists will take the stage; in the meantime, competitors will vie each year for interim prizes as they seek to advance to the next round.

“We believe A.I. will be the most important technology of our lifetimes, and our scientists, researchers, and developers have decades of innovation ahead of them,” stated IBM in a press release.

But it is precisely this enormous potential that causes many to pause and question whether we need to slow down and consider the implications of A.I. before tearing ahead with its development.

Probably the most dramatic incarnation of these concerns is the debate swirling around the development of autonomous weapons, machines of war able to make deadly decisions without the input of humans.

Renowned physicist Stephen Hawking was one of thousands of researchers, experts, and business leaders to sign an open letter in July 2015, urging caution, as The Christian Science Monitor reported.

Yet even those who are most vocal in their opposition do not counsel that we abandon our A.I. ambitions.

“It’s not about destroying an industry or a whole field,” said Mary Wareham, coordinator of the Campaign to Stop Killer Robots, in a phone interview with The Christian Science Monitor. “It’s about trying to ring-fence the dangerous technology.”

And so we find ourselves at something of a crucial juncture: can opponents and proponents of A.I. development find common ground or, at the very least, remain engaged in this critical discussion?

Some researchers have stopped communicating with the media or the public, tired of what they perceive to be “hyped headlines,” as Sabine Hauert, a robotics lecturer at the University of Bristol in the United Kingdom, wrote in the journal Nature.

“But we must not disengage,” writes Dr. Hauert. “[The public] hear a mostly one-sided discussion that leaves them worried that robots will take their jobs, fearful that AI poses an existential threat, and wondering whether laws should be passed to keep hypothetical technology 'under control'.”

“Experts need to become the messengers,” she says.

X Prize describes itself as a “facilitator of exponential change” and a “catalyst for the benefit of humanity.”

IBM developed Watson, “a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data,” which rose to fame in 2011 after defeating human opponents on the “Jeopardy!” quiz show.

The company seeks to use, promote, and develop A.I. in a quest for progress, stating in its announcement that “we are forging a new partnership between humans and technology.”

But such laudable aspirations cannot eliminate the risks. And whether the risks are real or imagined can only be determined by continuing to engage in reasonable and informed discussion.
