Technical Papers

Faces & Capture

Monday, 26 July | 3:45 PM - 5:15 PM | Room 403 AB
Session Chair: Hanspeter Pfister, Harvard University
High-Quality Single Shot Capture of Facial Geometry

This systems paper describes a passive stereo system for capturing 3D facial geometry in a single shot, with results comparable to those of leading-edge active-light systems. The paper's primary theoretical contribution is an augmentation of stereo optimization methods to recover pore-scale geometry, using a qualitative approach that produces visually realistic results.
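
To make the "dark is deep" idea behind such qualitative detail recovery concrete, the sketch below embosses high-frequency image intensity onto a smooth depth map. It is an illustration only, not the authors' algorithm; the function and parameter names are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def augment_with_mesoscopic_detail(base_depth, image, sigma=2.0, scale=0.05):
        """Emboss pore-scale relief onto a smooth stereo depth map.

        base_depth : 2D array, depth recovered by stereo (smooth, lacking fine detail)
        image      : 2D array, grayscale intensity of the same view
        sigma      : assumed Gaussian radius separating coarse from fine scale
        scale      : assumed strength of the added relief
        """
        # High-pass filter keeps only fine-scale intensity variation (pores, wrinkles).
        high_pass = image - gaussian_filter(image, sigma)
        # Subtract so that darker-than-surroundings pixels are recessed,
        # assuming larger depth values mean farther from the camera.
        return base_depth - scale * high_pass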

Thabo Beeler
ETH Zürich, Disney Research Zürich

Bernd Bickel
Disney Research Zürich

Paul Beardsley
Disney Research Zürich

Robert Sumner
Disney Research Zürich

Markus Gross
ETH Zürich, Disney Research Zürich

High-Resolution Passive Facial Performance Capture

A new technique for capturing the geometry and motion of human faces that does not require markers, face paint, or structured light. Based on a novel acquisition setup, multi-view stereo, and optical flow, the technique captures facial performances in very high resolution with time-varying, highly detailed textures.
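
As a rough illustration of the flow-based tracking ingredient (not the authors' pipeline), the sketch below advects 2D point positions from one frame to the next with dense optical flow; the function name and the OpenCV flow parameters are assumptions.

    import cv2
    import numpy as np

    def advect_points(prev_gray, next_gray, points):
        """Move 2D points from one frame to the next using dense optical flow.

        prev_gray, next_gray : consecutive grayscale frames (uint8 arrays)
        points               : (N, 2) float array of (x, y) positions in prev_gray
        """
        # Dense Farneback flow: one (dx, dy) vector per pixel.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Sample the flow at each point's nearest pixel and displace the point.
        xs = np.clip(points[:, 0].round().astype(int), 0, flow.shape[1] - 1)
        ys = np.clip(points[:, 1].round().astype(int), 0, flow.shape[0] - 1)
        return points + flow[ys, xs]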

Derek Bradley
The University of British Columbia

Wolfgang Heidrich
The University of British Columbia

Tiberiu Popa
The University of British Columbia

Alla Sheffer
The University of British Columbia

Temporal Upsampling of Performance Geometry Using Photometric Alignment

This technique uses extended spherical-gradient illumination to acquire detailed facial geometry of a dynamic performance. The method employs a novel algorithm that jointly aligns two photographs, one under a gradient illumination condition and one under its complement, to a full-on tracking frame, achieving dense temporal and spatial registration.
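
The per-pixel photometric cue behind spherical-gradient illumination can be sketched as follows: with an image under each axis-aligned gradient condition and its complement, the normalized difference approximates the surface-normal component along that axis. This is a simplified illustration, not the paper's joint-alignment algorithm, and the array names are assumptions.

    import numpy as np

    def normals_from_gradient_pairs(grad_x, comp_x, grad_y, comp_y,
                                    grad_z, comp_z, eps=1e-6):
        """Estimate per-pixel surface normals from gradient/complement image pairs.

        Each argument is a 2D array of intensities under one illumination condition.
        """
        # (gradient - complement) / (gradient + complement) lies in [-1, 1] and is
        # roughly proportional to the normal component along that gradient axis.
        nx = (grad_x - comp_x) / (grad_x + comp_x + eps)
        ny = (grad_y - comp_y) / (grad_y + comp_y + eps)
        nz = (grad_z - comp_z) / (grad_z + comp_z + eps)
        n = np.stack([nx, ny, nz], axis=-1)
        return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)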

Cyrus Wilson
University of Southern California, Institute for Creative Technologies

Abhijeet Ghosh
University of Southern California, Institute for Creative Technologies

Pieter Peers
University of Southern California, Institute for Creative Technologies

Jen-Yuan Chiang
University of Southern California, Institute for Creative Technologies

Jay Busch
University of Southern California, Institute for Creative Technologies

Paul Debevec
University of Southern California, Institute for Creative Technologies

VideoMocap: Modeling Physically Realistic Human Motion From Monocular Video Sequences

A new video-based motion-modeling system to generate physically realistic human motion from monocular video sequences. During reconstruction, the method leverages Newtonian physics, contact constraints, and 2D image measurements to recover 3D full-body poses as well as joint torques and contact forces.
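
One way to see what a physics term contributes is the flight-phase constraint: while no foot is in contact, the center of mass can only accelerate at g. The toy residual below illustrates that idea; the function name, finite-difference scheme, and inputs are assumptions, not the paper's formulation.

    import numpy as np

    GRAVITY = np.array([0.0, -9.81, 0.0])  # meters per second squared, y up

    def flight_phase_residual(com_positions, dt, in_contact):
        """Deviation of the center-of-mass acceleration from gravity while airborne.

        com_positions : (T, 3) array of center-of-mass positions over time
        dt            : time step between frames, in seconds
        in_contact    : (T,) boolean array, True when any foot touches the ground
        """
        # Central finite-difference acceleration of the center of mass.
        accel = (com_positions[2:] - 2 * com_positions[1:-1] + com_positions[:-2]) / dt**2
        airborne = ~in_contact[1:-1]
        # Penalize any deviation from gravitational acceleration during flight.
        return np.linalg.norm(accel[airborne] - GRAVITY, axis=1)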

Xiaolin Wei
Texas A&M University

Jinxiang Chai
Texas A&M University
