J.J. Abrams to Receive VES Visionary Award

Today, the Visual Effects Society named renowned director J.J. Abrams as the next recipient of the VES Visionary Award. The award, which has been bestowed upon such luminaries as Christopher Nolan, Ang Lee and Alfonso Cuarón, serves to recognize individuals who have "uniquely and consistently employed the art and science of visual effects to foster imagination and ignite future discoveries." The award will be presented at the 13th Annual VES Awards on February 4, 2015 at the iconic Beverly Hilton Hotel.

According to the VES, this year's Visionary Award has been earmarked for Abrams in recognition of his significant, unique contributions to filmed entertainment over the course of his career. "A fiercely inventive storyteller, [Abrams has a] distinctive ability to harness craft and technology in creating mysterious new worlds that transport and transfix audiences," the society declared in its press release announcing the award.

"J.J. and his team have evolved visual effects," said Jeff Okun, VES Board Chair. "He has redefined the relationship between the viewer and the story."

A two-time Emmy Award winner, Abrams is the Founder and President of Bad Robot Productions. In partnership with Paramount Pictures and Warner Bros. Studios, Bad Robot has produced numerous iconic films and television series, including "Cloverfield," "Star Trek," "Super 8," "Mission: Impossible - Ghost Protocol," "Star Trek Into Darkness," "Alias," "Lost," "Fringe" and "Person of Interest." He is currently working on the next installment in the Star Wars saga, "Star Wars: Episode VII," serving as director, writer and producer.

"I am very touched to be receiving the Visionary Award this year,” said Abrams. “As a lifelong fan of visual effects in film and TV, I feel so lucky to get to work with such remarkable VFX artists, and look forward to celebrating their incredible work at the VES event."

Submissions for the 13th Annual VES Awards will open on October 6, 2014.

Munich Chapter Launch Event with Paul Debevec

The brand-new Munich ACM SIGGRAPH chapter will celebrate its launch in style on Sunday, 21 September, with a talk from computer graphics veteran Paul Debevec. A former Vice President of ACM SIGGRAPH, Debevec is widely recognized as a leader in the field for his influential research in photorealistic image capture and rendering.

A research professor at the University of Southern California's Institute for Creative Technologies, Debevec has led the development of several Light Stage systems that capture and simulate the appearance of people and objects under real-world illumination. In 2010, Debevec and his co-developers received an Academy Award for their work with the Light Stage project. Light Stage processes have been used to create photoreal digital actors for a long list of films that includes "King Kong," "Superman Returns," "Avatar," "The Avengers," "Oblivion," "Ender's Game," "Gravity" and "Maleficent."

The Munich ACM SIGGRAPH inaugural event will open with Alain Chesnais, former President of the Association for Computing Machinery, who will speak about new developments in 3D content for the Web. ACM SIGGRAPH Chapters Committee Chair Mashhuda Glencross will also attend and take part in the event.

Attendance at the Munich ACM SIGGRAPH launch event is free and open to all, but attendees are asked to register in advance on the Munich ACM SIGGRAPH website. The event will take place in the main auditorium of the University of Television and Film (HFF) at Bernd-Eichinger-Platz 1, Munich. Doors open at 14:15.

Want to join the Munich ACM SIGGRAPH chapter? Visit the Munich ACM SIGGRAPH membership page.

Joe Letteri on the VFX of Dawn of the Planet of the Apes

Interview by Chris Davison, Chief Correspondent for ACM Computers in Entertainment.

This excerpt of an interview with award-winning visual effects artist Joe Letteri is provided courtesy of ACM Computers in Entertainment (CiE), a website that features peer-reviewed articles and non-refereed content covering all aspects of entertainment technology and its applications. Joe Letteri is a four-time Academy Award-winning visual effects guru and a partner at Weta Digital. He recently led his team in creating the groundbreaking effects in Matt Reeves' "Dawn of the Planet of the Apes."

CiE: What was your creative process like in working with Matt Reeves?

"Early on, it was trying to work out some of the design ideas for the apes, because we had this idea that the story is going to be 10 years or so into the future — and we established in the first film that they’re taking the drug that’s making them more intelligent. So the first question was: how much of the effect of that are we going to see when we open the film? What will the changes be? So that was leading us into an aspect of the story that we needed to explore, how much were the apes going to be speaking in this film? That was really one of the big initial creative considerations, how do we bring audiences into that aspect of the story?
"We played with a couple of small design changes, with Caesar we made him a little bit older, a little bit grayer, he’s put on a little bit more weight than he had in the first film, since he’s 10 years older. We looked at some dialogue tests for the apes and we specifically did a test scene between Caesar and Koba to figure out how they would talk. We hit on the idea that talking doesn’t come naturally to them — and so you have this situation where you almost have to draw the words out of them, as if their brains are kind of outracing the physical evolution of their vocal cords to be able to conceive the words and to utter them.

"We did the tests and then crafted them into the storyline where you begin the film — you’re just seeing the apes in their community, they have the sign language that they were learning at the end of the last film, and you can see that they’ve become proficient in it — they’re communicating with each other, but they don’t vocalize until the humans arrive. When the humans arrive, suddenly the need to vocalize is there because Caesar has to communicate with the humans, and that sets off a whole chain of events where the apes now suddenly have to communicate more with themselves — and the sign language starts to give way to more vocal communication. So there was this arc that drove this whole dynamic, and it’s reflected in how they speak. So the question we had to solve was: how do we make apes speak without having it look like a man in a suit? We wanted that sense of this being new to them but it had to come out of them, it had to be driven by events and it had to be supported by the character designs."

CiE: What are some of the unique challenges of shooting outside?

"Motion capture outdoors and performance capture outdoors is kind of a new field that we’re entering into. The evolution of that started with the “Lord of the Rings” when we decided that we did want to try and use Andy Serkis’ performance directly to drive Gollum’s performance. We got him on a motion capture stage and had to recreate the performances that he was doing with the actors that they were filming and that worked well for us, we were able to use that throughout “Rings” with a number of the sequences, still a lot of keyframe involved, we used it for a lot of the sequences but we keyframed his face for all of Rings.
Then when we came to do Kong we wanted to see if it’s possible to capture the face, to translate that, and so we came up with a technique for actually capturing his face by using motion capture markers on his face and working out how to do the solves and the translation into Kong’s character. In “Avatar,” Jim Cameron wanted to combine these ideas and have the actors wearing a head rig to capture the face information, that way it gives them more mobility to move on the stage and throughout the virtual world which we were shooting with a virtual camera. When it came time to do “Rise of the Planet of the Apes,” we thought — wouldn’t it be great if we could now actually capture Andy in the moment…"

Head over to the ACM CiE site to read the rest of the interview with Joe Letteri.

Watch the SIGGRAPH Award Talks

Among the most prestigious of honors in computer graphics, ACM SIGGRAPH awards are presented to people who are leaders in the field, or destined to become so. The award talks session at the annual SIGGRAPH conference gives winners the opportunity to expound on their background and research.

For those who missed them, the talks are now available to watch in full (below), courtesy of the ACM SIGGRAPH SCOOP team.

SIGGRAPH 2014 Award Recipients:

Noah Snavely, Significant New Researcher Award

Thomas Funkhouser, Computer Graphics Achievement Award

Scott Lang, Outstanding Service Award

Harold Cohen, Distinguished Artist Award for Lifetime Achievement

AR Sandbox: Cutting-Edge Earth Science Visualization

By Kristy Barkan

There are few kids who don't see the appeal of building and smashing piles of sand. But when such destruction results in real-time changes to a topographic map filled with lakes of virtual water, it becomes something more than just play. It becomes science.

The Augmented Reality Sandbox is an inspired invention that takes the visceral satisfaction of sculpting with sand and combines it with the wonder of scientific discovery. Built using a Microsoft Kinect 3D camera, a projector, and two freely distributed VR applications, the AR Sandbox transforms a box of sand into a water-filled landscape that responds to changes in topography with accurate fluid dynamics. As users play in the sandbox — sculpting, smashing and digging to their hearts' content — the Kinect camera reads the changes in the sand and the VR software computes and projects those changes in real time as a topographic map and a virtual body of water overlaying the sand.
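The core of that loop is simple to sketch. The freely distributed sandbox software itself is a C++ application built on Kreylos' Vrui toolkit, so the following Python snippet is only a minimal illustration of the idea, not the project's code: every function name is invented for the example, and a synthetic height field stands in for the live Kinect depth frame.

```python
# Illustrative sketch of the AR Sandbox's sense-compute-project loop.
# Not the actual SARndbox/Vrui code; names and values are hypothetical.
import numpy as np

def fake_depth_frame(w=640, h=480):
    """Synthetic stand-in for a Kinect depth frame: two Gaussian sand piles."""
    y, x = np.mgrid[0:h, 0:w]
    pile1 = 80.0 * np.exp(-((x - 200) ** 2 + (y - 240) ** 2) / (2 * 60.0 ** 2))
    pile2 = 50.0 * np.exp(-((x - 450) ** 2 + (y - 180) ** 2) / (2 * 80.0 ** 2))
    return pile1 + pile2  # sand height (mm) above the box floor

def topo_bands(height, interval=10.0):
    """Quantize heights into discrete contour bands; the projector maps
    each band to a color, producing the live topographic overlay."""
    return np.floor(height / interval).astype(int)

def flood(height, water_level):
    """Flag cells below a chosen water level. The real sandbox runs a
    fluid simulation here rather than a flat fill."""
    return height < water_level

frame = fake_depth_frame()
bands = topo_bands(frame)
water = flood(frame, water_level=20.0)
print(f"{bands.max() + 1} contour bands; {water.mean():.0%} of the box is flooded")
```

In the production system, a live depth frame replaces the synthetic one and the simple flood test gives way to a proper fluid simulation, which is what lets the projected water flow realistically as users reshape the sand.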

Developed by UC Davis scientists Oliver Kreylos, Burak Yikilmaz and Peter Gold as part of an NSF-funded project on lake and watershed science education, the AR Sandbox project provides detailed instructions for educators and researchers to construct their own sandboxes. In the short time since the project went public, Kreylos and his team have learned of more than a dozen sandboxes built at schools and museums around the world.

The AR Sandbox setup. Photo: Oliver Kreylos, UC Davis

State University of New York at Geneseo is one of the latest additions to the list of educational institutions that own AR Sandboxes. Associate Professor of Geology Dr. Scott Giorgis caught wind of the project from a former student, who had seen a video about it on YouTube. A six-minute clip demonstrating the sandbox was all it took to convince Giorgis of its value as a teaching tool.

Within days, Giorgis and his tech-savvy team (Instructional Support Specialist Nancy Mahlen and computing experts Kirk Anne, Clint Cross and Joe Dolce) were poring over Kreylos' instructions and downloading the required software. In no time, the SUNY Geneseo basement was the proud host of a newly constructed, fully functional AR Sandbox.

An AR Sandbox in action. Photo: Oliver Kreylos.

According to Giorgis, the potential for the AR Sandbox is enormous. "We want to use the box in the introductory geology courses to teach how to read and interpret topographic maps," he explained, "but we also think it can be used to visualize how groundwater pollution flows down the water table — for example, if an underground gasoline storage tank at a gas station leaks… where will the gas flow to? Who needs to be worried about their well water? You can look at the AR Sandbox and see the answer. It's beautiful."

In addition to using the sandbox to gain insight into environmental issues, Giorgis envisions applications for teaching about structural geology. "Kirk and I want to modify Kreylos' code to incorporate geologic planes," he said. "Faults, for example. Where a fault crops out on the surface of the earth depends on the orientation of the fault and the topography. We want to use the AR Sandbox to allow students to dynamically change the topography and see how that affects the position of the fault on the surface of the earth."
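The geometry behind that idea is straightforward: model the fault as a tilted plane, and its outcrop trace is simply the set of points where the plane meets the sand surface. Kreylos' code is C++, so the sketch below is only a hypothetical Python illustration of that intersection test; the function, parameters and numbers are all invented for the example, and a synthetic hill stands in for the camera-scanned surface.

```python
# Hypothetical illustration of a fault outcrop trace on a height field.
# Not Kreylos' code; strike/dip handling is simplified for the example.
import numpy as np

def fault_plane_height(x, y, strike_deg, dip_deg, z0):
    """Height of a planar 'fault' at (x, y). The plane passes through
    height z0 at the origin and tilts down along its dip direction."""
    strike = np.radians(strike_deg)
    dip = np.radians(dip_deg)
    # Dip direction is perpendicular to strike; the plane loses
    # tan(dip) of height per unit of horizontal distance along it.
    dx, dy = np.sin(strike + np.pi / 2), np.cos(strike + np.pi / 2)
    return z0 - np.tan(dip) * (x * dx + y * dy)

# Sand surface from the depth camera (synthetic single hill here).
y, x = np.mgrid[0:480, 0:640].astype(float)
surface = 60.0 * np.exp(-((x - 320) ** 2 + (y - 240) ** 2) / (2 * 90.0 ** 2))

plane = fault_plane_height(x, y, strike_deg=30.0, dip_deg=0.5, z0=40.0)
outcrop = np.abs(surface - plane) < 1.0  # cells within 1 mm of the plane

print(f"outcrop trace covers {outcrop.sum()} pixels")
```

Because the trace is recomputed from whatever surface the camera currently sees, reshaping the sand immediately moves the projected outcrop — exactly the dynamic Giorgis describes wanting students to explore.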

Demonstration of SUNY Geneseo's AR Sandbox installation

The applications for the AR Sandbox's innovative technology seem nearly limitless. Though education and research are the project's intended uses, it could also be adapted for art installations — or even art therapy. Real-time virtual feedback on interactions with the physical world is at the forefront of emerging AR technology, and the potential for integration with cutting-edge computer graphics is huge.

For more information on the AR Sandbox project, visit Oliver Kreylos' website.