Humans love human-like robots. Or at least the idea of them.
With this week's release of Chappie comes another film exemplifying humans' fascination with, and concerns about, the development of artificial intelligence.
The 2004 sci-fi film I, Robot, based on Isaac Asimov's short-story collection, involves an army of humanoid robots ordered to supplant humans as society's power-holders. One of the robots, Sonny, reveals himself to have emotions and dreams.
Other movies portray the dangers of intelligent robots lacking in compassion: the original Terminator follows an unfeeling cyborg sent back in time to assassinate Sarah Connor before she can give birth to her son, the future leader of a resistance movement against the AI machines.
Chappie, who looks strikingly similar to the anime character Briareos Hecatonchires of the Appleseed manga franchise, is one robot in an entire force of police droids designed to fight crime. But after he is stolen and his programming is altered, Chappie becomes the first of his kind to achieve sentience.
But how close are we to successfully building a real-life Chappie?
"We definitely have had major aspects of systems like Chappie already in existence for quite a while," Caltech physicist and AI expert Wolfgang Fink told LiveScience.
There are numerous robots now that can walk and perform tasks like picking up objects or kicking a ball. Last year, President Obama played soccer with a humanoid robot called ASIMO, which was designed by Japanese car manufacturer Honda.
And robots are not only capable of learning; they can be their own teachers: scientists at Google created an artificially intelligent computer program that taught itself to play video games.
The big jump from today's anthropomorphic bipedal robots to the Chappie who walks and talks on the big screen is self-awareness. While existing robots can simulate autonomy by executing programmed tasks, scientists have not yet built an artificially intelligent machine that can distinguish between itself and everything else.
It is this jump that fosters unease over further developments in artificial intelligence. The Skynet network that wiped out almost all of mankind in The Terminator was only able to do so after the artificially intelligent machines became self-aware.
Tesla Motors and SpaceX founder Elon Musk tweeted last summer about advancing the science with caution: "We need to be super careful with A.I. Potentially more dangerous than nukes."
Some scientists believe a machine like Chappie is far off, though. In an interview with the Monitor, Robert Lindsay, professor emeritus of psychology and computer science at the University of Michigan in Ann Arbor, said:
"We're a long way from [humanlike AI], and we're not really on track toward that because we don't understand enough about what makes people intelligent and how people solve problems."