How researchers got George W. Bush's words into Hillary Clinton’s mouth

3D simulations point to a future when anyone's photos can be used to create an interactive likeness. How does it work?

“There is no one alive who is Youer than You,” wrote Dr. Seuss. But what is it about you that makes you you? Researchers, using none other than Tom Hanks, have sought to find out.

Leveraging the enormous web presence of the prolific actor – a “Tom Hanks” Google search generates nearly 40 million entries – Supasorn Suwajanakorn, Steven M. Seitz, and Ira Kemelmacher-Shlizerman from the University of Washington compiled some of that visual data, combined it with an algorithm they created, and generated a 3D likeness that they say captures the “persona” of the well-liked actor. The digital likeness can be manipulated to deliver lines Mr. Hanks himself has never uttered, like a kind of futuristic puppet.

“One answer to what makes Tom Hanks look like Tom Hanks can be demonstrated with a computer system that imitates what Tom Hanks will do,” said Mr. Suwajanakorn, the study's lead author and a UW graduate student in computer science and engineering, in a statement explaining the study.

As technologies evolve, from augmented reality to three-dimensional printing, the researchers envision a future where anyone's photos and videos can be used to create an interactive 3D model from the persona captured in media. 

“You might one day be able to put on a pair of augmented reality glasses and there is a 3-D model of your mother on the couch,” said Professor Kemelmacher-Shlizerman, a senior author on the paper, which will be presented at the International Conference on Computer Vision in Chile on Dec. 16. “Such technology doesn’t exist yet — the display technology is moving forward really fast — but how do you actually re-create your mother in three dimensions?”

To “reconstruct” public figures like Tom Hanks, Barack Obama, and Daniel Craig for the project, called “What Makes Tom Hanks Look Like Tom Hanks,” the algorithm pulled at least 200 web images of each person spanning different time periods, poses, and actions. The ability to synthesize such “unconstrained data” is “a big deal in the world of digital modeling,” said Kemelmacher-Shlizerman.
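The core idea of distilling many unconstrained photos into one consistent model can be pictured with a toy sketch. This is an illustrative simplification, not the researchers' actual algorithm: the five-landmark “face,” the random photo conditions, and the alignment step below are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth face: five 2D landmarks (eyes, nose, mouth corners).
true_face = np.array([[-1.0, 1.0], [1.0, 1.0], [0.0, 0.0],
                      [-0.7, -1.0], [0.7, -1.0]])

def simulate_photo(face, rng):
    """One 'unconstrained' photo: random scale, rotation, translation, noise."""
    s = rng.uniform(0.5, 2.0)
    theta = rng.uniform(-0.3, 0.3)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = rng.uniform(-5.0, 5.0, size=2)
    return s * face @ R.T + t + rng.normal(0.0, 0.02, face.shape)

def normalize(landmarks):
    """Remove translation and scale so photos become comparable."""
    centered = landmarks - landmarks.mean(axis=0)
    return centered / np.linalg.norm(centered)

def align(src, dst):
    """Rotate src onto dst (orthogonal Procrustes via SVD)."""
    U, _, Vt = np.linalg.svd(src.T @ dst)
    return src @ (U @ Vt)

# "At least 200 images" of the same person under varying conditions.
photos = [simulate_photo(true_face, rng) for _ in range(200)]
reference = normalize(photos[0])
mean_shape = np.mean([align(normalize(p), reference) for p in photos], axis=0)

# Averaging cancels the per-photo noise: the result matches the true face.
err = np.linalg.norm(mean_shape - align(normalize(true_face), reference))
```

The sketch shows only why quantity helps: each photo is individually noisy and posed differently, but after alignment the noise averages away, leaving a stable person-specific shape.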

The results are an intricate map of one person’s features and expressions that can be transferred to another person’s 3D digital likeness. Such “control” of the digital model, according to the team, could open new doors for animation and virtual reality innovations.

“How do you map one person’s performance onto someone else’s face without losing their identity?” said Professor Seitz. “That’s one of the more interesting aspects of this work. We’ve shown you can have George Bush’s expressions and mouth and movements, but it still looks like George Clooney.”
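In spirit, separating identity from performance can be pictured with a minimal sketch: treat a face as a vector of landmark positions and an expression as an offset from that person's neutral face. The numbers and the simple linear-offset model below are hypothetical simplifications, not the paper's method.

```python
import numpy as np

# Hypothetical "neutral" identities for two people (made-up coordinates).
neutral_a = np.array([0.0, 1.0, 2.0, 3.0])   # person A, the performer
neutral_b = np.array([0.5, 1.5, 2.5, 3.5])   # person B, the target

# Person A smiling: the expression is the offset from A's own neutral face.
smiling_a = np.array([0.2, 1.1, 2.0, 2.7])
expression = smiling_a - neutral_a

# Transfer: apply A's expression offset on top of B's identity,
# so B keeps B's face but performs A's smile.
smiling_b = neutral_b + expression
```

Because the offset is measured relative to each person's own neutral face, the transferred result keeps the target's identity – the toy analogue of Bush's movements on a face that still looks like Clooney.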
