Computer Graphics Pioneers Score Second Technical Oscar for Industry Standard Visual Effects

written by Melanie A. Farmer

Pixar is synonymous with innovative computer animation, revolutionizing an industry that has brought moviegoers such hits as Toy Story, Cars and A Bug’s Life, films praised as much for their breakthrough digital artistry as for their entertainment value. This awards season, Pixar cofounder Edwin Catmull and computer graphics pioneers and SIGGRAPH members Tony DeRose and Jos Stam won a Scientific and Engineering Academy Award for creating and expanding the fundamental mathematics behind the breakthrough graphics used in many of these popular animated films, and more. The methodology they pioneered is now an industry standard for achieving strikingly realistic images on the big screen.

The Academy of Motion Picture Arts and Sciences presented the trio with plaques at the Feb. 9, 2019, awards ceremony held at the Beverly Wilshire Hotel. They were honored for their “pioneering advancement of the underlying science of subdivision surfaces as 3D geometric modeling primitives,” and the Academy noted that their advancements have helped “transform the way digital artists represent 3D geometry throughout the motion picture industry.”

Geri’s Game. Courtesy of Pixar.

This marked the second time the three awardees were recognized for subdivision surfaces. In 2006, the Academy honored their work with a Technical Achievement certificate. Indeed, their advancement of subdivision surfaces is quite a feat, one that has been refined and developed for more than 40 years.

Subdivision surfaces enable digital artists to automate the smoothing of surfaces, and as a result achieve highly realistic replications of physical objects on film. Examples of such surfaces are endless—from human faces and skin, to clothing, tabletops, and car bodies.
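The principle behind subdivision can be sketched in miniature with its curve analogue, Chaikin’s corner-cutting scheme. This is an illustration of the idea only, not the Catmull-Clark surface rules used in production: each refinement pass replaces every edge of a control polygon with two points at its 1/4 and 3/4 marks, and repeated passes converge to a smooth curve.

```python
# Illustrative sketch (not production code): Chaikin's corner-cutting,
# the curve analogue of surface subdivision.

def chaikin(points, passes=1):
    """Refine a closed control polygon of (x, y) tuples."""
    for _ in range(passes):
        refined = []
        for i in range(len(points)):
            (x0, y0), (x1, y1) = points[i], points[(i + 1) % len(points)]
            # Replace each edge with points at its 1/4 and 3/4 positions.
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = refined
    return points

# A square "control cage" rounds off toward a smooth oval:
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(len(chaikin(square, passes=3)))  # point count doubles per pass: 4 -> 32
```

Surface schemes such as Catmull-Clark apply the same refine-and-average idea to polygon meshes rather than polygons in the plane, which is what lets a coarse artist-built cage stand in for a smooth character surface.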

While Catmull is credited in the field for the first proof of subdivision surfaces, throughout the decades, DeRose, senior scientist emeritus at Pixar, and Stam, a graphics researcher at NVIDIA, have worked on advancing and improving subdivision surfaces, directly contributing to the success of the technique used in film today.

Stam’s expertise is in the simulation of natural physical phenomena for 3D computer animation and in the simulation of fluids and gases. In fact, this is his third Academy Award win, having scored a Sci-Tech award in 2008 for the design and implementation of the Maya Fluid Effects system, a widely used technology for realistically simulating and rendering fluid motion. In 2005, he also was awarded the prestigious SIGGRAPH Computer Graphics Achievement award, given to members for exemplary contributions to the field of computer graphics.

In the subdivision surfaces work, Stam is credited for essentially making the technique more accessible for artists. He devised mapping methodology that eliminated complex hardware limitations that had been challenging for digital artists in prior iterations of the system.

“Now established techniques could be used for subdivision surfaces,” said Stam. “Since subdivision surfaces can express a wider range of shapes, this made them accessible to artists.”

As a teenager, Stam drew caricatures, painted, and even got into airbrush painting. His love for art informed his research as a computer scientist. So he not only thinks in ‘1s’ and ‘0s’; he also considers the artist in the equation.

“I like art and I like math. In a way, they are not that different,” he said. “They are two ways to express your creativity within given bounds. In art you must work with physical tools like brushes, chisels and the computer! In math, your reasoning must make sense within a formal framework. Somewhat paradoxically art and math result from a balance between boundaries and creativity.”

When DeRose first introduced the use of subdivision surfaces in computer animated film, it was for one of Pixar’s earliest shorts, Geri’s Game. Of the experience, he said it would not have been a success had he not worked closely with the film’s artists.

“It was that tight interaction that led to a lot of the improvements that we are being recognized for. Without sitting with the artists and understanding what they really needed, we wouldn’t be here today,” he said.

In subdivision surfaces, DeRose is credited with adding flexibility to the method and improving the overall usability of the technology. For instance, he and collaborators devised an artist-friendly way to pick and choose when and where to add more sharpness to smoothed-out surfaces. He also worked on adding shading techniques.

For DeRose, the Academy Award underscores just how far the field has come and celebrates the comprehensive mathematics that goes along with it. Still, countless math problems remain, and he is excited for what the future holds.

“New mathematics is being created all of the time,” said DeRose. “Some of the new mathematics being created is in response to problems that have come up in filmmaking and graphics. Yes, we’ve solved a lot in 30 years or so, but there are still a lot of problems to address, including coming up with principled ways to deal with the massive geometry as a result of the more and more realistic and grander effects we’ve been able to achieve. There’s a lot of data to approximate now.”

During his acceptance speech at the Feb. 9 ceremony, Catmull made a lighthearted reference to the past when he said that prior techniques for representing organic shapes and surfaces “really sucked,” and he knew then, when he first got into computer graphics, that this was the problem he wanted to solve.

“For me, this has been an incredible journey. I never could’ve predicted any of it,” he said. “It started 46 years ago with an idea. And sometimes it just takes a lot of really smart people working on it from different points of views, from different places, and a lot of patience, and if you do that, you can end up with something that works.”

Moviegoers may never know that complex geometry is behind some of the splashiest visual effects they’ve witnessed on the big screen, and frankly, that is the way computer scientists want it. 

“I am sometimes blown away by what the artists were able to achieve with our software,” said Stam. “It’s like being a brush oil paint maker and looking at a Rembrandt … In fact, it is a sign of a work well done if the technology is hidden completely from the viewer. No one wants to see the ‘grip man’ holding the mic in a movie shot.”

Call for Participation: 8th Annual Faculty Submitted Student Work Exhibit

Reminder that the deadline is in two weeks.

Sponsored by The ACM SIGGRAPH Education Committee.

We are interested in your project assignments and the best examples of the work done by your students for that assignment.

Images and videos will be displayed at the Education Committee Booth at SIGGRAPH 2019 in Los Angeles, and assignments will be archived on the Education Committee website.

Any content area related to computer graphics and interactive techniques is welcome: art, animation, graphic design, game design, architecture, visualization, real-time rendering, etc.

The double-curated exhibit is open to all faculty working at Secondary/High School through University levels.

This is a wonderful opportunity for your students and school to get more exposure and to celebrate all the great work that gets created!

To see last year’s downloadable exhibit and assignments from Vancouver, click here: 7th Annual Faculty Submitted Student Work Exhibit.

DEADLINE: June 28th, 2019

Submission information can be found at:

Call for Participation: 8th Annual Faculty Submitted Student Work Exhibit

Please contact me, Richard Lewis richardlewis4@siggraph.org, with any questions.

We look forward to seeing you and your students’ work in Los Angeles this year!

Registration is now open for the 2019 Symposium on Computer Animation

Registration is now open for the 2019 Symposium on Computer Animation, which will be held at UCLA on July 26-28, 2019 (i.e., immediately preceding SIGGRAPH). Early-bird registration ends June 26, 2019, so register soon to take advantage of the early-bird rate.

Registration Information:
https://sca2019.kaist.ac.kr/data/siggraph/websites/siggraph.org/public_html/registration-2/

The full program for the symposium has also recently been announced. In addition to a full complement of 18 regular papers and 5 invited journal papers, there will be a sketch session, a poster reception, a conference dinner, and a tremendous lineup of invited speakers:

Keynote Speakers:
- Uri Ascher (UBC)
- L. Mahadevan (Harvard)

Invited Speakers:
- Mridul Aanjaneya (Rutgers University)
- Chenfanfu Jiang (University of Pennsylvania)
- Sophie Jörg (Clemson University)
- Mélina Skouras (Inria Grenoble Rhône-Alpes)
- Steve Tonneau (University of Edinburgh)
- Etienne Vouga (UT Austin)

Conference Program:
https://sca2019.kaist.ac.kr/data/siggraph/websites/siggraph.org/public_html/program/

Additional details about the conference can be found on the website: https://sca2019.kaist.ac.kr
Hope to see you in Los Angeles!

Call for Participation: EGSR 2019

The program of the Eurographics Symposium on Rendering (EGSR) 2019 is now online.

This year’s program gathers research papers in 

  • Materials & Reflectance
  • High-Performance Rendering
  • Spectral Effects
  • Light Transport
  • Sampling
  • Interactive & Real Time Rendering 
  • Deep Learning

as well as an industry session. 
The event will be anchored by three keynotes, given by:

  • Jaakko Lehtinen from Aalto University & NVIDIA  
  • Natalya Tatarchuk from Unity  
  • Ali Eslami from Google DeepMind  

Come join the rendering research community in Strasbourg, France, July 10th-12th, 2019.

In Your Face: Academy Award Celebrates the Innovative Tech Behind Digital Faces

Photo by: Cyrill Beeler

If you were one of the millions of moviegoers who contributed to the worldwide success of Avengers: Infinity War—the highest-grossing film of 2018—then you also got to witness the Medusa Performance Capture System in action. The team of computer scientists responsible for bringing to life characters like Thanos and the Hulk on the big screen was honored this year with a Sci-Tech Academy Award for the conception, design and engineering of Medusa.

Marvel Studios’ Avengers: Infinity War. Thanos (Josh Brolin). Photo: Film Frame. ©Marvel Studios 2018

Medusa, a comprehensive hardware and software system that has been developed, tweaked and expanded over the last 10 years, enables the precise digital replication of human faces, including detailed expressions and extremely fine physical details at high resolution. The Academy presented Sci-Tech Award certificates to the Medusa team—Thabo Beeler, Derek Bradley, Bernd Bickel and Markus Gross—at its Feb. 9, 2019, ceremony held at the Beverly Wilshire Hotel.

Credited for the initial concept, Gross, who is a professor of computer science at the Swiss Federal Institute of Technology Zürich (ETH), vice president for research at The Walt Disney Studios and director of DisneyResearch|Studios, worked with Bickel, then a doctoral student in his lab at ETH, to overcome this grand challenge in computer graphics: to create digital human faces that are indistinguishable from reality. Bickel is currently an assistant professor of computer science at the Institute of Science and Technology Austria. In 2017, he received the ACM SIGGRAPH New Researcher Award.

“Medusa is the culmination of many years of research on digital human faces and digital facial animation that we’ve been working on as part of the ongoing work in my lab and in collaboration with Disney Research,” says Gross, a longtime member of ACM SIGGRAPH and a 2012 ACM Fellow. “We got connected much more deeply with the arts and technology of special effects through our partnership with the Walt Disney Company. It gave us the insights to steer the research in such a way that we could make the best progress for advancement of facial technologies for film.”

Gross notes that at the time, researchers had not had much success bridging the so-called “uncanny valley,” a known phenomenon in the field in which digitally duplicated human faces fall just short of realism and appear almost disturbingly fake.

With this Oscar honor, Medusa has staked its claim as an industry standard in visual effects, achieving digital characters with highly realistic human features. This year, three of the films nominated for the best visual effects Oscar used the Medusa system, and in recent years it has been used in numerous productions, including Star Wars: The Last Jedi and Spider-Man: Homecoming.

“The methods are really tuned into highly accurate measurement of human faces. As such our technology is not generic per se,” explains Gross. “One of the insights we’ve had in exploring facial performance is that human facial expressions usually go from a neutral state into some deformed state and then go back to that neutral-rest state. There are cycles, and examining these cycles has allowed us to build methods to track the facial surface reliably over time. We further use the randomness of the pigmentation of facial skin as invisible landmarks to reference points across consecutive frames.”
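The landmark idea Gross describes—using the random pigmentation of skin as reference points across consecutive frames—can be illustrated very loosely (this is a toy, not the Medusa pipeline) by basic patch matching: a distinctive patch of texture in one frame is located in the next frame by minimizing a sum-of-squared-differences score.

```python
# Toy illustration (not the Medusa algorithms): locate a distinctive texture
# patch in a frame by brute-force sum-of-squared-differences search.

def find_patch(frame, patch):
    """Return the (row, col) in `frame` where `patch` matches best (min SSD)."""
    ph, pw = len(patch), len(patch[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(len(frame) - ph + 1):
        for c in range(len(frame[0]) - pw + 1):
            ssd = sum(
                (frame[r + i][c + j] - patch[i][j]) ** 2
                for i in range(ph)
                for j in range(pw)
            )
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# A 5x5 "frame" of zeros with a unique 2x2 patch embedded at row 2, col 1.
frame = [[0] * 5 for _ in range(5)]
patch = [[9, 7], [3, 8]]
for i in range(2):
    for j in range(2):
        frame[2 + i][1 + j] = patch[i][j]
print(find_patch(frame, patch))  # -> (2, 1)
```

The same principle scales up: because skin pigmentation is effectively random, small patches of it are distinctive enough to follow from frame to frame, which is what makes the surface trackable without painted-on markers.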

Gross is also proud of the research and development that went into the analysis of facial microgeometry, such as facial pores, which eventually allows Medusa to capture the complex geometry of human faces including minute facial details. This is essential for re-creating realistic facial expressions.

Shortly following their early partnership with Disney, Gross and Bickel were joined by Beeler and Bradley, now both research scientists at Disney Research. Developing the 3D face scanner component of the capture system became Beeler’s master’s thesis at the time. When Bradley, then a post-doc, joined the team, he began working on the motion capture piece and helped develop the stable face-tracking technology. Together with Beeler, the duo invented the method for separating the rigid and non-rigid components of the face performance, and they have also spent a lot of time productizing the research into an artist-friendly system.

“Bringing this research into production entailed a lot of development work, but we got a lot of input regarding potential research topics in return, one of which ended up in a publication on rigid stabilization, published at SIGGRAPH,” says Beeler. “In this work we explore the relationship between the skin surface and the underlying bone structure to separate the rigid head motion from the non-rigid face deformation, an essential step when building facial rigs and an integral part of the Medusa system.”

In fact, all of the major milestones in the development of Medusa were showcased in SIGGRAPH papers. One of the team’s images was also used on the cover of the SIGGRAPH 2011 proceedings, a point of particular pride for Beeler.

“We’re constantly improving Medusa through new research, and simultaneously developing the next-generation performance capture technology,” says Bradley. “We can’t say much, but the future is very exciting in this field.”

This Sci-Tech award marks the second Academy Award for Gross, who won a Technical Achievement Award in 2013 for the technology that more efficiently simulates smoke and explosions in films. For Gross, this current award win is very special as it both rewards years of academic research and marks the first recognition of this kind for Disney Research.

“We worked on Medusa for literally 10 years, and I’ve been working on digital human faces since I was a post-doc, which has been some 30 years,” says Gross. “It was a beautiful experience to get recognition from the Academy for all of this work.”

Still, there is more to come. “I often compare my work to a rabbit hole,” says Beeler. “We are making great progress but the deeper we go and the more we solve, the more we realize that we are far from done. My ultimate goal is to provide technology to digitize humans holistically, with minimal effort and at maximal quality—and every day we are getting a step closer to this vision.”

By Melanie A. Farmer