Technical Papers

Computational Photography

Monday, 26 July | 9:00 AM - 10:30 AM | Room 408 AB
Session Chair: Rob Fergus, New York University
The Frankencamera: An Experimental Platform for Computational Photography

Experimentation in computational photography is hindered by a lack of portable, flexible, and open photographic platforms. This paper presents the Frankencamera, an architecture for programmable cameras, and demonstrates sample applications on two hardware implementations: a custom F2 camera and the Nokia N900 smartphone.

Andrew Adams
Stanford University

Eino-Ville Talvala
Stanford University

Sung Hee Park
Stanford University

David E. Jacobs
Stanford University

Boris Ajdin
Universität Ulm

Natasha Gelfand
Nokia Research Center Palo Alto

Jennifer Dolson
Stanford University

Daniel Vaquero
University of California, Santa Barbara

Jongmin Baek
Stanford University

Marius Tico
Nokia Research Center Palo Alto

Hendrik P. A. Lensch
Universität Ulm

Wojciech Matusik
Disney Research Zürich

Kari Pulli
Nokia Research Center Palo Alto

Mark Horowitz
Stanford University

Marc Levoy
Stanford University

Image Deblurring Using Inertial Measurement Sensors

This paper presents a hardware attachment and a blur-estimation algorithm for deblurring images from conventional cameras. The approach uses a combination of inexpensive gyroscopes and accelerometers to measure the camera's acceleration and angular velocity during an exposure.
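
As a rough, illustrative sketch of the underlying idea (not the authors' implementation), the code below integrates gyroscope readings over the exposure, converts the resulting camera rotations into image-plane displacements using the focal length, and accumulates them into a blur kernel that could feed a non-blind deconvolution. The sample rate, focal length, and kernel size are hypothetical.

```python
import numpy as np

def blur_kernel_from_gyro(omega, dt, focal_length_px, kernel_size=31):
    """Build an image-plane blur kernel from gyroscope readings.

    omega: (N, 2) array of angular velocities (rad/s) about the x and y
           axes, sampled uniformly during the exposure.
    dt: sample interval in seconds.
    focal_length_px: focal length expressed in pixels.
    """
    # Integrate angular velocity to get the camera rotation at each sample.
    theta = np.cumsum(omega * dt, axis=0)          # (N, 2) radians

    # Small-angle approximation: a rotation of theta radians shifts the
    # image by roughly focal_length * theta pixels.
    shifts = focal_length_px * theta               # (N, 2) pixels

    # Accumulate the camera path into a kernel; each sample contributes
    # an equal share of the exposure time.
    k = np.zeros((kernel_size, kernel_size))
    c = kernel_size // 2
    for dx, dy in shifts:
        x = int(round(c + dx))
        y = int(round(c + dy))
        if 0 <= x < kernel_size and 0 <= y < kernel_size:
            k[y, x] += 1.0
    if k.sum() == 0:
        k[c, c] = 1.0                              # degenerate case: no motion captured
    return k / k.sum()
```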

Neel Joshi
Microsoft Corporation

Sing Bing Kang
Microsoft Corporation

C. Lawrence Zitnick
Microsoft Corporation

Richard Szeliski
Microsoft Corporation

Diffusion-Coded Photography for Extended Depth of Field

In recent years, several cameras have been introduced that extend depth of field by producing a depth-invariant point-spread function. This paper introduces a new diffusion-coded camera that improves on this approach; the coding is implemented with a custom-designed diffuser.
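
The payoff of a depth-invariant point-spread function is that a single non-blind deconvolution restores the whole scene regardless of depth. Below is a minimal Wiener-filter sketch of that step, assuming the PSF has been calibrated separately; the noise-to-signal constant is illustrative and this is not the paper's reconstruction pipeline.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Recover a sharp image from a blurred one using a single,
    depth-invariant PSF via Wiener deconvolution in the Fourier domain.

    The PSF should be registered with its center at the array origin
    (e.g. via np.fft.ifftshift) before being passed in.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    # Wiener filter: H* / (|H|^2 + noise-to-signal ratio)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * B))
```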

Oliver Cossairt
Changyin Zhou
Shree Nayar
Columbia University

Coded Aperture Projection

Coding a projector’s aperture plane with adaptive patterns and inverse filtering allows the depth of field of projected imagery to be increased. This paper explains how these patterns can be computed at interactive rates by taking into account the image content and the limitations of the human visual system.
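
Conceptually, the image to be projected is prefiltered with a regularized inverse of the projector's defocus blur so that, after optical blurring on the screen, the result appears closer to the original. The sketch below assumes the defocus is approximated by a single PSF determined by the coded aperture; the function name, regularization constant, and simple clipping step are illustrative stand-ins for the paper's content-adaptive, perceptually weighted computation.

```python
import numpy as np

def precompensate(image, psf, reg=0.05):
    """Prefilter an image with a regularized inverse of the projector's
    defocus PSF, then clip to the displayable intensity range [0, 1]."""
    H = np.fft.fft2(psf, s=image.shape)
    I = np.fft.fft2(image)
    inv = np.conj(H) / (np.abs(H) ** 2 + reg)     # regularized inverse filter
    comp = np.real(np.fft.ifft2(inv * I))
    # A projector can only display intensities in [0, 1]; clipping is a
    # crude substitute for the perceptual optimization described above.
    return np.clip(comp, 0.0, 1.0)
```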

Max Grosse
Bauhaus-Universität Weimar

Gordon Wetzstein
The University of British Columbia

Anselm Grundhöfer
Bauhaus-Universität Weimar

Oliver Bimber
Johannes Kepler Universität Linz
