Somewhere between "Final Fantasy: The Spirits Within" (2001) and "The Curious Case of Benjamin Button" (2008), digital actors evolved from talking dolls to lifelike human beings. Today, computer-generated actors are often so realistic that audiences have trouble distinguishing live performers from digital fabrications. How did such a monumental improvement in the realism of CG humans happen over such a short period of time? 

"Digital Ira" (2013): A 180Hz video-driven blendshape model with screen-space subsurface scattering and advanced eye shading effects.

This past October, ACM SIGGRAPH Vice President Paul Debevec endeavored to answer that question at the fourteenth annual VIEW Conference in Turin, Italy. His hour-long talk (full video below) covers recent advances in the field, specifically HDRI lighting, polarization difference imaging, skin reflectance measurement, autostereoscopic 3D displays, high-resolution face scanning, advanced character rigging, and performance-driven facial animation.

Grab a cup of coffee and check it out!

Paul Debevec's work has focused on image-based modeling and rendering techniques, beginning with his 1996 Ph.D. thesis at UC Berkeley, with specializations in high dynamic range imaging, reflectance measurement, facial animation, and image-based lighting. He received a Scientific and Engineering Academy Award® in 2010 for his work on the Light Stage facial capture systems.

Digital Ira is a collaboration between Oleg Alexander, Graham Fyffe, Jay Busch, Xueming Yu, Ryosuke Ichikari, Andrew Jones, and Paul Debevec of the USC Institute for Creative Technologies and Jorge Jimenez, Etienne Danvoye, Bernardo Antoniazzi, Mike Eheler, Zbynek Kysela, and Javier von der Pahlen of Activision, Inc.

Digital Ira image courtesy of Activision, Inc. and the USC Institute for Creative Technologies.