Will Google's new partnership lead to smartphones that learn?

Google's partnership with Movidius to develop neural network technology for smartphones and other handheld devices could lead to advanced computing that 'learns' from real-world data.

Google will make use of Movidius processors and software to create mobile devices that mimic the understanding process of a human brain.

Neil Hall/Reuters

January 28, 2016

A new partnership between Google and machine vision processing developer Movidius could pave the way for mobile devices to become even smarter.

The deal, announced Wednesday, gives Google access to Movidius processors and software tools, while Movidius gains the web giant’s help with its neural network technology work.

“What Google has been able to achieve with neural networks is providing us with the building blocks for machine intelligence, laying the groundwork for the next decade of how technology will enhance the way people interact with the world,” said Google machine intelligence software architect and designer Blaise Agüera y Arcas in a Movidius press release. “By working with Movidius, we’re able to expand this technology beyond the data center and out into the real world, giving people the benefits of machine intelligence on their personal devices.”


Artificial neural networks are adaptive computer algorithms that attempt to mimic the understanding process of a human brain. The models are designed to function in a manner similar to the brain, recognizing patterns by interpreting real-world sensory data such as sounds and images. Eventually, such systems could allow computers to learn.
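To make the idea concrete, here is a minimal sketch of such a network: a tiny two-layer model, written in Python with NumPy, that adjusts its internal connection weights until it has "learned" a simple pattern from example data. The toy task (the XOR pattern) and every name in the code are illustrative assumptions for this sketch, not anything from Google's or Movidius's actual technology.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squashing function that plays the role of a neuron's activation.
    return 1.0 / (1.0 + np.exp(-x))

# Toy "sensory data": four two-bit inputs and the XOR target pattern.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized connection weights: 2 inputs -> 4 hidden
# neurons -> 1 output neuron.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

# Training loop: repeatedly nudge the weights to reduce prediction
# error, loosely analogous to strengthening connections in a brain.
for _ in range(5000):
    h = sigmoid(X @ W1)      # hidden-layer activations
    out = sigmoid(h @ W2)    # the network's current predictions
    err = y - out
    # Backpropagation: push the error back through both layers.
    d2 = err * out * (1 - out)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 += h.T @ d2 * 0.5
    W1 += X.T @ d1 * 0.5

print(np.round(out, 2).ravel())  # predictions after training
```

After training, the network's outputs should sit close to the target pattern even though no rule for XOR was ever programmed in; the "knowledge" lives entirely in the learned weights. The real systems Google and Movidius describe are vastly larger, but they rest on this same weight-adjustment principle.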

The newly announced partnership will advance Google’s plans to run its neural network engines on mobile devices such as tablets and smartphones. These advances could help build consumer appliances that can “understand images and audio” and support a more “personal and contextualized” user experience on devices utilizing Google and Movidius technology.

Google has already partnered with Movidius for almost two years on its Project Tango initiative. The Project Tango platform uses computer vision and specialized sensors to let mobile devices navigate through space by perceiving their position and orientation, in a manner similar to humans. Computer vision, an imaging field closely related to neural network technology like Google’s, is one both companies hope to make more prevalent through their continued cooperation.

“The technological advances Google has made in machine intelligence and neural networks are astounding,” said Movidius Chief Executive Officer Remi El-Ouazzane in the company’s release. “Movidius’ mission is to bring visual intelligence to devices so that they can understand the world in a more natural way. This partnership with Google will allow us to accelerate that vision in a tangible way.”

One challenge El-Ouazzane mentioned was the power efficiency needs of devices running neural networks. In the Movidius statement he said having the Google collaboration begin with the initial hardware design at Movidius should provide a “deep synthesis” between the architecture and processing, eventually making the technology functional and attainable in consumer products.


The Movidius press release also confirmed that Google will use the Movidius Myriad 2 MA2450 chip, which the company says is the only commercial chip small enough and power-efficient enough to run neural network computing on a mobile device. In 2014, Google’s first Project Tango prototype phone used a Movidius Myriad 1 processor to give the device basic contextual understanding.