Stephen Hawking launches AI research center with opening speech

The Leverhulme Centre for the Future of Intelligence will bring together experts from academia, tech, and public policy. 

[Photo: Students sit on the lawn in front of the University of Cambridge's Trinity College, on April 17, in Cambridge, England. Melanie Stetson Freeman/The Christian Science Monitor]

Theoretical physicist and cosmologist Stephen Hawking has repeatedly warned of the dangers posed by out-of-control artificial intelligence (AI). But on Wednesday, as the professor opened the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge, he remarked on its potential to bring positive change – if developed correctly.

"Success in creating AI could be the biggest event in the history of our civilisation. But it could also be the last, unless we learn how to avoid the risks," Dr. Hawking said at the launch, according to a University of Cambridge press release.

A collaboration among the universities of Cambridge and Oxford, Imperial College London, and the University of California, Berkeley, the CFI will bring together a multidisciplinary team of researchers, as well as tech leaders and policymakers, to ensure that societies can "make the best of the opportunities of artificial intelligence," as its website states.

"Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialization," Hawking said. "And surely we will aim to finally eradicate disease and poverty. Every aspect of our lives will be transformed."

The center, which is funded by a 10-million-pound (about $12 million) grant from the Leverhulme Trust, will do more than pursue research and development of AI's many possible applications, in fields ranging from autonomous weapons to politics.

Ethics will also be a major focus of the center's research.

"It's about how to ensure intelligent artificial systems have goals aligned with human values," said Stephen Cave, the director of the center, AFP reports. One of CFI's goals is to predict and avoid the potential "grave dangers" of the technology, said Margaret Boden, one of the center's consultants and a professor at the University of Sussex, according to AFP.

In his opening speech, Hawking took up the same theme: "Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many."

"It will bring disruption to our economy. And in the future, AI could develop a will of its own – a will that is in conflict with ours," he said.

CFI is one of several collaborative efforts to build a roadmap for the future of AI.

Tech giants Amazon, Facebook, Google, Microsoft, and IBM joined forces last month to form the Partnership on Artificial Intelligence to Benefit People and Society. The partnership was created with the dual purposes of dispelling public misconceptions and fears about AI through education, and recommending ethical guidelines and best practices for the entire industry.

"The positive impact of AI will depend not only on the quality of our algorithms, but on the amount of public discussion ... to ensure AI is understood by – and benefits – as many people as possible," Mustafa Suleyman, one of the founders of Google DeepMind and a chair of the Partnership on AI, said in a call with the media in September.

Earlier that month, Stanford University released the first report of its planned 100-year study on AI, in which a panel of specialists analyzed the potential practical applications of the technology in a typical North American city 15 years from now.

"As a society, we are now at a crucial juncture in determining how to deploy AI-based technologies in ways that promote, not hinder, democratic values such as freedom, equality, and transparency," the panel states in its report.
