Reuben Langdon has a most unusual profession: He’s a human marionette.
Part acrobat, part martial artist, Langdon is one of the world’s top motion-capture actors. For years, video-game makers have filmed the performer’s choreographed actions in a suit dotted with sensors and then mapped them on a computer to create iconic characters such as Ken in “Street Fighter IV” and Dante in “Devil May Cry.” The latter, which Langdon also voices, even has its own buff action figure. “They did model the six-pack after my six-pack,” jokes the actor.
Now it’s Langdon’s turn to pull the strings. The versatile thespian is not only crossing over into cinema – he was hired as a performance-capture actor for James Cameron’s upcoming “Avatar” – but his production company, Just Cause, is employing cutting-edge cameras used in 3-D movies to bring a more cinematic feel to video games.
“What Just Cause is doing is really a sign of the times in the cross integration of interactive entertainment in gaming with big-budget movies,” says Scott Lowe, gear editor at IGN.com, a website devoted to multimedia and gaming. “Both mediums seem to be benefiting from that kind of cross integration.”
“Avatar,” a 3-D space opera set on a planet straight off a Yes album cover, is Cameron’s ambitious attempt to smudge the line between live action and realistic animation by adapting video-game technology. To that end, Cameron affixed a camera to each actor’s head so that animators could capture each minute detail of an expression, right down to a tongue twitch.
“We actually had a head rig on ‘Devil May Cry,’ where we put a camera right in front of the actor’s face,” says Langdon, who played the lead character of the Capcom-produced game. “It just captures the best data.”
In turn, Langdon has borrowed an innovation used in “Avatar” called the iKam, a virtual camera that allows for “real-time capture.”
Previously, directors have had to wait until postproduction to view what the performance capture looks like in the animated world. The virtual camera allows on-set filmmakers to peer into a computer-generated 3-D environment during filming to see how the actors interact with their animated surroundings.
“You are able to take this device – which looks like a cross between a steering wheel, a PlayStation controller, and an iPhone – and ... look into the screen into the game world,” enthuses Chris Kramer, senior director of communications and community at Capcom, the Japanese gaming company behind franchises such as “Street Fighter” and “Resident Evil.” “It’s like holding a portal into an alternate dimension in your hands.”
As such, the cameraman is able to move around performers to create a more cinematic effect in video-game “cut scenes” (between-action interludes for exposition and dialogue). By contrast, other games look as if they were clinically filmed by cameras sliding along smooth rails.
“All the cut scenes were done in real time, but all those cut scenes have a rendered look,” says Langdon, whose chiseled cheekbones are curtained by center-parted blond hair.
“Real time [scenes are] definitely cheaper to produce than rendered” scenes, he says, because designers don’t need to spend countless hours on high-powered rendering computers.
Motion capture has evolved dramatically since 1996, the year Langdon successfully auditioned for a video-game role in Capcom’s “Resident Evil Code: Veronica.” “At the time, it was a tethered system with wires sticking out [of the suit],” he recalls. “There were wires that attached to this big bundle that someone would have to follow you around with as you performed.”
Later suits dispensed with umbilical cords in favor of small sensors, but they left welts on Langdon’s body after bruising stunts. Fortunately, he was already accustomed to cumbersome costumes. A self-professed “Japanime” geek, Langdon relocated to Tokyo right out of high school in the early 1990s and landed a role in the children’s TV series “B-Fighter Kabuto.” (His character would emerge from a puff of smoke transformed into a superhero robot with shoulder pads big enough to sideswipe Godzilla.)
“That definitely helped for when motion-capture work started,” says Langdon. “There’s a bit of overacting that needed to be done because, especially in the earlier games, the motions were conveyed but there’s no facial animation. Someone would go in and animate just a smirk, or a mouth open/mouth closed. Today, technology’s definitely advanced. The face is hyperreal, but you’re still playing an animated character. All the expressions have to be over-the-top – ‘Power Ranger’-like.”
Inside the Just Cause office, a tiny space that includes a racquetball-court-size stage for performance capture, a handful of animators are sculpting remarkably nuanced digital faces. But where other motion-capture companies film a full-body performance all at once, Just Cause has an unorthodox procedure. After filming action scenes in suits studded with spongy markers, actors retreat to a sound booth to record dialogue with sensors on their faces. The process allows the animators to ensure an exact matchup between the dialogue and the character’s lips. Another advantage: One can put a different face on a character’s body if the voice-over actor needs to be recast.
“We’d cut together our footage of what we’d done on the set, and then we’d try to mimic those emotions and those reactions,” explains Langdon. “There’s stuff going on in here that you may not have gotten in the past because of that lack of detail in the face and facial capture.”
The Just Cause team – 21 employees in Tokyo and Los Angeles whose desks are populated with anime action figures such as Akira and Astro Boy – is currently finishing up another Capcom game, “Lost Planet 2.” But despite overseeing production, Langdon isn’t ready to hang up the spandex suit that makes him look like a futuristic jewel thief.
“Now that I have my own company, I’ve seen all the stuff that goes on behind the scenes and how much work goes into just making that eye open and close or that lip to curl,” he says. “It takes a big team to make that happen.”