Joe Letteri on the VFX of Dawn of the Planet of the Apes

Interview by Chris Davison, Chief Correspondent for ACM Computers in Entertainment.

This excerpt from an interview with award-winning visual effects artist Joe Letteri is provided courtesy of ACM Computers in Entertainment (CiE), a website that features peer-reviewed articles and non-refereed content covering all aspects of entertainment technology and its applications. Joe Letteri is a four-time Academy Award-winning visual effects guru and a partner at Weta Digital. Joe recently led his team in creating the groundbreaking effects in Matt Reeves’ “Dawn of the Planet of the Apes.”

CiE: What was your creative process like in working with Matt Reeves?

"Early on, it was trying to work out some of the design ideas for the apes, because we had this idea that the story was going to be 10 years or so into the future — and we established in the first film that they’re taking the drug that’s making them more intelligent. So the first question was: how much of the effect of that are we going to see when we open the film? What will the changes be? That was leading us into an aspect of the story that we needed to explore: how much were the apes going to be speaking in this film? That was really one of the big initial creative considerations: how do we bring audiences into that aspect of the story?
"We played with a couple of small design changes. With Caesar, we made him a little bit older, a little bit grayer; he’s put on a little bit more weight than he had in the first film, since he’s 10 years older. We looked at some dialogue tests for the apes, and we specifically did a test scene between Caesar and Koba to figure out how they would talk. We hit on the idea that talking doesn’t come naturally to them — and so you have this situation where you almost have to draw the words out of them, as if their brains are kind of outracing the physical evolution of their vocal cords to be able to conceive the words and to utter them.

Joe Letteri quote on performance capture

"We did the tests and then crafted them into the storyline where you begin the film — you’re just seeing the apes in their community. They have the sign language that they were learning at the end of the last film, and you can see that they’ve become proficient in it — they’re communicating with each other, but they don’t vocalize until the humans arrive. When the humans arrive, suddenly the need to vocalize is there, because Caesar has to communicate with the humans, and that sets off a whole chain of events where the apes now suddenly have to communicate more among themselves — and the sign language starts to give way to more vocal communication. So there was this arc that drove this whole dynamic, and it’s reflected in how they speak. The question we had to solve was: how do we make apes speak without having it look like a man in a suit? We wanted that sense of this being new to them, but it had to come out of them; it had to be driven by events, and it had to be supported by the character designs."

CiE: What are some of the unique challenges of shooting outside?

"Motion capture outdoors and performance capture outdoors are kind of a new field that we’re entering into. The evolution of that started with “The Lord of the Rings,” when we decided that we did want to try and use Andy Serkis’ performance directly to drive Gollum’s performance. We got him on a motion capture stage and had to recreate the performances that he was doing with the actors they were filming, and that worked well for us; we were able to use it throughout “Rings” for a number of the sequences. There was still a lot of keyframe animation involved, though, and we keyframed his face for all of “Rings.”
Then when we came to do Kong, we wanted to see if it was possible to capture the face, to translate that, and so we came up with a technique for actually capturing his face by using motion capture markers on his face and working out how to do the solves and the translation into Kong’s character. In “Avatar,” Jim Cameron wanted to combine these ideas and have the actors wearing a head rig to capture the face information; that way, it gave them more mobility to move on the stage and throughout the virtual world, which we were shooting with a virtual camera. When it came time to do “Rise of the Planet of the Apes,” we thought — wouldn’t it be great if we could now actually capture Andy in the moment…"
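
For readers curious about what a marker-based facial solve can involve, here is a minimal, purely illustrative sketch of one generic approach: fit blendshape weights to the observed marker displacements by least squares, then reuse those weights to drive the creature's corresponding shapes. This is not Weta Digital's pipeline; the function, array shapes, and clamping below are assumptions made only for the example.

import numpy as np

def solve_blendshape_weights(neutral_markers, captured_markers, blendshape_deltas):
    """Fit blendshape weights to observed facial marker displacements.

    neutral_markers:   (M, 3) marker positions on the actor's neutral face
    captured_markers:  (M, 3) marker positions in the current captured frame
    blendshape_deltas: (K, M, 3) per-blendshape marker offsets from neutral
    Returns a (K,) weight vector minimizing ||A w - d||^2, clamped to [0, 1].
    """
    d = (captured_markers - neutral_markers).reshape(-1)              # (3M,)
    A = blendshape_deltas.reshape(blendshape_deltas.shape[0], -1).T   # (3M, K)
    w, *_ = np.linalg.lstsq(A, d, rcond=None)
    return np.clip(w, 0.0, 1.0)  # keep each shape within a valid range

# Applying the per-frame weights to the digital character's matching
# blendshapes is loosely analogous to the "translation" step described above.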

Head over to the ACM CiE site to read the rest of the interview with Joe Letteri.

Watch the SIGGRAPH Award Talks

Among the most prestigious of honors in computer graphics, ACM SIGGRAPH awards are presented to people who are leaders in the field, or destined to become so. The award talks session at the annual SIGGRAPH conference gives winners the opportunity to expound on their background and research.

For those who missed them, the talks are now available to watch in full (below), courtesy of the ACM SIGGRAPH SCOOP team.

SIGGRAPH 2014 Award Recipients:

Noah Snavely, Significant New Researcher Award

Thomas Funkhouser, Computer Graphics Achievement Award

Scott Lang, Outstanding Service Award

Harold Cohen, Distinguished Artist Award for Lifetime Achievement

AR Sandbox: Cutting-Edge Earth Science Visualization

By Kristy Barkan

There are few kids who don't see the appeal in building and smashing piles of sand. But when such destruction results in real-time changes to a topographic map filled with lakes of virtual water, it becomes something more than just play. It becomes science.

The Augmented Reality Sandbox is an inspired invention that takes the visceral satisfaction of sculpting with sand and combines it with the wonder of scientific discovery. Built using a Microsoft Kinect 3D camera, a projector, and two freely distributed VR applications, the AR Sandbox transforms a box of sand into a water-filled landscape that responds to changes in topography with accurate fluid dynamics. As users play in the sandbox — sculpting, smashing and digging to their hearts' content — the Kinect camera reads the changes in the sand and the VR software computes and projects those changes in real time as a topographic map and a virtual body of water overlaying the sand.
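
As a rough illustration of that loop, the per-frame logic might look something like the sketch below. This is not the actual UC Davis software (Oliver Kreylos' open-source system); the kinect, water_sim, projector, and colormap objects are hypothetical stand-ins used only to show the idea of depth in, colored topography plus simulated water out.

import numpy as np

def run_sandbox_frame(kinect, water_sim, projector, colormap, contour_interval=0.01):
    """One hypothetical frame of an AR-sandbox-style pipeline."""
    depth = kinect.read_depth_frame()             # (H, W) distances from the camera, in meters
    elevation = kinect.base_plane_height - depth  # convert depth to sand-surface height

    # Color the terrain by elevation (uint8 RGB) and overlay contour lines.
    terrain_rgb = colormap(elevation)
    on_contour = (elevation % contour_interval) < 0.001
    terrain_rgb[on_contour] = (0, 0, 0)

    # Advance a shallow-water simulation over the current terrain and paint
    # water wherever the simulated water column has nonzero depth.
    water_depth = water_sim.step(elevation, dt=1.0 / 30.0)
    terrain_rgb[water_depth > 0] = (0, 0, 255)

    projector.display(terrain_rgb)                # project the result back onto the sand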

Developed by UC Davis scientists Oliver Kreylos, Burak Yikilmaz and Peter Gold as part of an NSF-funded project on lake and watershed science education, the AR Sandbox project provides detailed instructions for educators and researchers to construct their own sandboxes. In fact, in the short time since the AR Sandbox project went public, Kreylos and his team have been made aware of more than a dozen sandboxes that have been built at various schools and museums all over the world.

The AR Sandbox setup. Photo: Oliver Kreylos, UC Davis

State University of New York at Geneseo is one of the latest additions to the list of educational institutions which own AR Sandboxes. Associate Professor of Geology Dr. Scott Giorgis caught wind of the project from a former student, who had seen a video about it on YouTube. A six-minute clip demonstrating the sandbox was all it took to convince Giorgis of its value as a teaching tool.

Within days, Giorgis and his tech-savvy team (Instructional Support Specialist Nancy Mahlen and computing experts Kirk Anne, Clint Cross and Joe Dolce) were poring over Kreylos' instructions and downloading the required software. In no time, the SUNY Geneseo basement was the proud host of a newly-constructed, fully functional AR Sandbox.

An AR Sandbox in action. Photo: Oliver Kreylos.

According to Giorgis, the potential for the AR Sandbox is enormous. "We want to use the box in the introductory geology courses to teach how to read and interpret topographic maps," he explained, "but we also think it can be used to visualize how groundwater pollution flows down the water table — for example, if an underground gasoline storage tank at a gas station leaks… where will the gas flow to? Who needs to be worried about their well water? You can look at the AR Sandbox and see the answer. It's beautiful."
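
The question in that example (where does a leaked contaminant end up?) reduces to routing flow downhill over a surface. As a purely illustrative sketch, not part of the AR Sandbox software, a steepest-descent walk over a gridded water-table surface might look like this:

import numpy as np

def downhill_path(surface, start, max_steps=10_000):
    """Trace a steepest-descent path over a gridded surface (e.g., a water table).

    surface: (H, W) array of elevations or hydraulic heads
    start:   (row, col) cell where the spill originates
    Returns the list of (row, col) cells visited until a local minimum is reached.
    """
    h, w = surface.shape
    path = [start]
    r, c = start
    for _ in range(max_steps):
        # Examine the 8 neighbors and move to the lowest one, if it is lower.
        neighbors = [(r + dr, c + dc)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr or dc) and 0 <= r + dr < h and 0 <= c + dc < w]
        nr, nc = min(neighbors, key=lambda p: surface[p])
        if surface[nr, nc] >= surface[r, c]:
            break                      # local minimum: the spill pools here
        r, c = nr, nc
        path.append((r, c))
    return path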

In addition to using the sandbox to gain insight into environmental issues, Giorgis envisions applications for teaching about structural geology. "Kirk and I want to modify Kreylos’ code to incorporate geologic planes," he said. "Faults, for example. Where a fault crops out on the surface of the earth depends on the orientation of the fault and the topography. We want to use the AR sandbox to allow students to dynamically change the topography and see how that affects the position of the fault on the surface of the earth."
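
The geometric idea behind that exercise can be sketched in a few lines: a planar fault crops out wherever the plane's predicted elevation matches the terrain's. The code below is only an illustration of that relationship, not the Kreylos code Giorgis plans to modify; the grid conventions and names are assumptions.

import numpy as np

def fault_trace(dem, cell_size, strike_deg, dip_deg, anchor, tolerance=0.5):
    """Mark grid cells where a planar fault intersects the topography.

    dem:        (H, W) terrain elevations in meters
    cell_size:  grid spacing in meters
    strike_deg: fault strike, degrees clockwise from north (right-hand rule)
    dip_deg:    fault dip in degrees
    anchor:     (x, y, z) of a point known to lie on the fault plane
    Returns a boolean (H, W) mask of the fault's surface trace.
    """
    h, w = dem.shape
    y, x = np.mgrid[0:h, 0:w] * cell_size

    # The dip direction is 90 degrees clockwise from strike; the plane drops
    # by tan(dip) meters for every meter traveled in the dip direction.
    dip_dir = np.radians(strike_deg + 90.0)
    along_dip = (x - anchor[0]) * np.sin(dip_dir) + (y - anchor[1]) * np.cos(dip_dir)
    plane_z = anchor[2] - along_dip * np.tan(np.radians(dip_deg))

    # The trace is wherever the plane and the sand surface (nearly) coincide.
    return np.abs(plane_z - dem) < tolerance

Recomputing the mask each frame as the sand (the dem) changes is what would let students watch the fault trace migrate with the topography.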

Demonstration of SUNY Geneseo's AR Sandbox installation

The applications for the AR Sandbox's innovative technology seem nearly limitless. Though education and research are the intended uses for the project, it could also be modified for use as an art installation — or even for art therapy. Real-time, virtual feedback on interactions with the physical world is at the forefront of emerging AR technology — and the potential for integration with cutting-edge computer graphics is huge.

For more information on the AR Sandbox project, visit Oliver Kreylos' website.

Spotlight on Innovation: Trillion FPS Camera

By Pritham Marupaka

Streak cameras are used by chemists to capture light passing through samples and determine chemical properties. At the MIT Media Lab in Cambridge, Massachusetts, scientists have found a way to modify these streak cameras to capture motion – recording ultrashort pulses of light at up to a trillion frames per second. Project Director Dr. Ramesh Raskar calls this new technique “femto-photography.”

As described in the project abstract:

We have built an imaging solution that allows us to visualize propagation of light. The effective exposure time of each frame is two trillionths of a second and the resultant visualization depicts the movement of light at roughly half a trillion frames per second. Direct recording of reflected or scattered light at such a frame rate with sufficient brightness is nearly impossible. We use an indirect 'stroboscopic' method that records millions of repeated measurements by careful scanning in time and viewpoints. Then we rearrange the data to create a 'movie' of a nanosecond long event.
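
To make the "rearrange" step concrete, here is a minimal sketch assuming hypothetical array names and shapes (this is not the MIT Media Lab's code): each streak-camera exposure records a single scan line of the scene against a picosecond-scale time axis, and stacking those lines and then slicing along time yields one full 2D frame per time bin.

import numpy as np

def assemble_light_movie(streak_images):
    """Rearrange a stack of streak-camera exposures into a movie of light in flight.

    streak_images: (num_scan_lines, width, num_time_bins) array; entry [i] is one
                   streak exposure, a single horizontal line of the scene (width)
                   resolved over ultrafast time (num_time_bins).
    Returns:       (num_time_bins, num_scan_lines, width) array, one 2D frame per
                   time bin showing where the light pulse is at that instant.
    """
    # Moving the time axis to the front turns per-line time histories into
    # per-instant images, i.e., the frames of the final visualization.
    return np.transpose(streak_images, (2, 0, 1))

In practice, as the abstract notes, many repeated pulses are recorded for each scan position and combined before this rearrangement, which is what gives the stroboscopic method enough brightness to work with.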

Femto-photography gained global recognition in June 2012, with Ramesh Raskar’s TED talk (full video below), which has received more than three million views to date. The prospect of using light scattering to analyze the stability of materials and gain more comprehensive medical images excites academia and industry alike.

When Dr. Raskar and Andreas Velten, then a postdoctoral researcher, set out to conduct research in this field, they didn’t initially have materials and medicine in mind. According to Raskar, “I was obsessed with doing computational photography with ultrafast imaging — to look around corners, relighting scenes after the fact and recovering 3D structure in presence of scattering.”

A time-lapse visualization of the spherical fronts of advancing light reflected by surfaces in the scene. Photo by MIT Media Lab.

Velten saw femto-photography as an opportunity to work on a project with a computational focus where he could also use his optical engineering skills. “In a field like imaging, the separation between hardware and software does not make sense,” he said.

The technology has come a long way since its unveiling; numerous improvements have been made, including — as noted by the researchers — great progress with smaller, very low-cost and potentially consumer-grade hardware, and the development of consumer-ready time-of-flight cameras.

“The technology itself is ready, and can be applied in simple situations, such as characterizing bulk scattering samples,” said Raskar. He expects a wider range of applicability in the near future with further improvements to the system.

Andreas Velten (left) and Ramesh Raskar. Photo by Everett Lawson.

Velten, now affiliated with the Medical Devices Group at the University of Wisconsin, Madison, is currently working on applying femto-photography to the medical imaging applications hinted at in Raskar’s TED talk. Though Velten is aware of the challenges present in using light scattering for accurate imaging, such as tracking the numerous bounces of light particles and observing much smaller time frames, he is optimistic. “The large scope of the biomedical potential will be unlocked gradually, application by application, by several research groups,” he said.

One of the most noticeable differences between a femto-photographic camera and any other camera is its size. Although the device is large and heavy, neither Raskar nor Velten sees an issue with portability. According to Velten, the camera is still portable enough for many uses; it could even be mounted on a fire truck and used to scan burning buildings for trapped victims.

The ongoing research and development into femto-photographic applications and improvements highlights the massive impact this technology could have in the years to come. Already capable of supporting numerous commercial applications, it is perhaps only a matter of time before femto-photography moves from cutting-edge research to commonplace imaging technology.

For more information on femto-photography, visit the MIT Media Lab femto-photography project website or download the ACM Transactions on Graphics article Femto-photography: capturing and visualizing the propagation of light.

Technology for the Sake of Humanity

By Kristy Barkan

Technology, as it turns out, is more than just a tool to advance our understanding and simplify our daily lives. It is a gift, waiting to be given. It can mean the difference between isolation and inclusion. It can move the human heart from desolation to joy. For the recipients of such a gift, the world is forever shifted. 

In his SIGGRAPH 2014 keynote speech (full video below), Elliot Kotek, co-founder of Not Impossible Labs, painted a captivating portrait of his company's mission: "technology for the sake of humanity." Moving many in the audience to tears, Kotek demonstrated how his company and numerous compassionate volunteers have created and adapted technology specifically to aid the physically disadvantaged and help them lead fuller lives.

In November 2010, Not Impossible Labs developed an innovative eyepiece that enabled a paralyzed artist to draw using his eyes. In doing so, they gave him back his art — something he thought he'd lost forever. In late 2013, they used cutting-edge 3D printing techniques to craft a remarkable prosthetic arm for a 16-year-old Sudanese refugee. The arm made it possible for the boy to feed himself for the first time in two years.

Not Impossible Labs prosthetic arm

Work is underway on a number of other collaborative projects at Not Impossible. One group is in the process of building a cane for the blind that uses laser and sonar feedback to provide real-time data on the user's surroundings. Another group is crafting a device that reads brain waves and uses them to control a computer mouse with absolutely no physical movement. A third, equally laudable project in the works is an affordable, mouth-controlled joystick that will provide quadriplegic users greater ease in operating PCs. 

Technical innovators all over the world are donating their time, talent and equipment to aid humanitarian efforts like those orchestrated by Not Impossible. It may be that technology has found its highest calling. Earlier this month, a telepresence robot donated by Double Robotics gave a young man undergoing cancer treatment and a hospital-confined 3D artist the opportunity to attend SIGGRAPH 2014 in Vancouver, remotely. It was perhaps one of the most fulfilling projects of the conference.

Below is Elliot Kotek's SIGGRAPH 2014 keynote speech, in full, courtesy of the ACM SIGGRAPH SCOOP Team.

For more information on the ongoing projects at Not Impossible Labs, visit their website.