The 4th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia
Conference 12-15 December • Exhibition 13-15 December • Hong Kong Convention & Exhibition Centre
 

Technical Papers

Video and Capture

Tuesday, 13 December, 14:15 - 16:00 | Convention Hall B

Session Chair
Ping Tan

Modeling and Generating Moving Trees from Video


In this paper we describe a method for reconstructing animated 3D tree models from 2D video input, and show how to create a variety of models automatically from a reconstructed tree.


Chuan Li, University of Bath
Oliver Deussen, University of Konstanz
Yi-Zhe Song, University of Bath
Phil Willis, University of Bath
Peter Hall, University of Bath


Candid Portrait Selection from Video


We train a computer to select still frames from video that work well as candid portraits. We conduct a psychology study to collect human ratings of frames across multiple videos, then compute a set of image and temporal features and train a model to predict the average rating of a video frame.


Juliet Bernstein, University of Washington
Aseem Agarwala, Adobe Systems, Inc.
Brian Curless, University of Washington


Multiview Face Capture Using Polarized Spherical Gradient Illumination


We present a novel technique that uses polarized spherical gradient illumination to acquire detailed facial geometry, together with high-resolution diffuse and specular photometric information, from multiple viewpoints.


Abhijeet Ghosh, University of Southern California
Graham Fyffe, University of Southern California
Borom Tunwattanapong, University of Southern California
Jay Busch, University of Southern California
Xueming Yu, University of Southern California
Paul Debevec, University of Southern California


Video Face Replacement


We present a method for replacing facial performances in video that accounts for differences in identity, visual appearance, speech, and timing between source and target videos.


Kevin Dale, Harvard University
Kalyan Sunkavalli, Harvard University
Micah K. Johnson, Massachusetts Institute of Technology
Daniel Vlasic, Massachusetts Institute of Technology
Wojciech Matusik, Massachusetts Institute of Technology and Disney Research Zurich
Hanspeter Pfister, Harvard University