Will the machines take over? Why Elon Musk thinks so.

When one of the most prominent technological pioneers worries about artificial intelligence, the world listens.

October 27, 2014

Just as Tony Stark warns of the dangers of high-tech weaponry in the wrong hands, Elon Musk – the Tesla and SpaceX founder who is regularly compared to Iron Man's not-so-secret identity – is raising the alarm about advances in artificial intelligence.

The SpaceX founder called artificial intelligence "our biggest existential threat" at an MIT symposium, comparing it to "summoning the demon."

Musk opined that governments need to begin regulating artificial intelligence development, saying that HAL 9000 – the sentient computer antagonist of the "Space Odyssey" series – would be "like a puppy dog" in comparison to what is possible.


This isn't the first time he's gone public with this fear.

Musk has apparently done some heavy reading of late – “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom of the University of Oxford’s Future of Humanity Institute raises the questions, "What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?"

Musk took to Twitter in August to encourage others to read the book, adding, "We need to be super careful with AI. Potentially more dangerous than nukes." Musk also suggested James Barrat's "Our Final Invention: Artificial Intelligence and the End of the Human Era," which argues such intelligence could threaten human existence.

He has continued to muse on Twitter about the impending threat of machines undermining human intelligence.

In June, Musk made references to "The Terminator," telling CNBC he invests in companies working on artificial intelligence to keep an eye on developments.


Roger McNamee, Elevation Partners co-founder, was quick to disagree.

"In a world where we have the NSA looking at everything that we do, where the government is spending hundreds of billions of dollars on fighter planes that can't fly, and where we're starting wars in countries we can't possibly win in, it seems to me that worrying about AI is irrelevant," he told CNBC.

Besides, he said, "There's a good chance we will have polluted the earth beyond repair long before they could get any of this AI stuff to work."