Source: ACM SIGGRAPH Citation
ACM SIGGRAPH is pleased to present Michael F. Cohen with the 2019 Steven A. Coons Award for Outstanding Creative Contributions to Computer Graphics. Selected not only for his groundbreaking work in numerous areas of research—radiosity, motion simulation & editing, light field rendering, matting & compositing, and computational photography, among others—but also for his extensive service to the computer graphics community, Cohen exemplifies the lifetime contributions to our field that the award was created to recognize.
Common to much of Cohen’s work is the pairing of an artist’s sensibility with rigorous underlying physics: his systems give users a way to specify artistic objectives or constraints, often in real time, while maintaining physical plausibility.
Cohen’s first major research contributions were in the area of photorealistic rendering, in particular, in the study of radiosity: the use of finite elements to solve the rendering equation for environments with diffusely reflecting surfaces. His most significant results included the hemicube (1985), for computing form factors in the presence of occlusion; an experimental evaluation framework (1986), one of the first studies to quantitatively compare real and synthetic imagery; extending radiosity to non-diffuse environments (1986); integrating ray tracing with radiosity (1987); progressive refinement (1988), to make interactive rendering possible; wavelet radiosity (1993), a more general framework for hierarchical approaches; and “radioptimization” (1993), an inverse method to solve for lighting parameters based on user-specified objectives. All of this work culminated in a 1993 textbook with John Wallace, Radiosity and Realistic Image Synthesis.
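As a brief sketch of the formulation underlying this body of work (standard notation, not drawn from the citation itself): discretizing the scene into patches yields the classical radiosity equation, in which the radiosity of each patch depends on its own emission plus the light it reflects from every other patch.

```latex
% Classical discrete radiosity equation:
%   B_i    — radiosity of patch i
%   E_i    — emitted energy of patch i
%   \rho_i — diffuse reflectance of patch i
%   F_{ij} — form factor from patch i to patch j
B_i = E_i + \rho_i \sum_{j} F_{ij}\, B_j
```

The hemicube addresses the efficient estimation of the form factors F_{ij} in the presence of occlusion, while progressive refinement reorders the solution of this linear system so that useful approximate images appear early.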
In a very different research area, Cohen made significant contributions to motion simulation and editing, most significantly: dynamic simulation with kinematic constraints (1987), which for the first time allowed animators to combine kinematic and dynamic specifications; interactive spacetime control for animation (1992), which combined physically-based and user-defined constraints for controlling motion; motion transitions using spacetime constraints (1996), which allowed seamless and plausible transitions between motion segments for systems with many degrees of freedom such as the human body; motion interpolation (1998), a system for real-time interpolation of parameterized motions; and artist-directed inverse kinematics (2001), which allowed a user to position an articulated character at high frame rates, for real-time applications such as games.
In addition, in his groundbreaking and most-cited work, Cohen and colleagues introduced the Lumigraph (1996), a method for capturing and representing the complete appearance—from all points of view—of either a synthetic or real-world object or scene. This work was contemporaneous with the Light Field Rendering paper of Levoy & Hanrahan, which appeared (and was presented) right next to it in the SIGGRAPH ’96 proceedings. Building on this work, Cohen published important follow-on papers in view-based rendering (1997), which used geometric information to create views of a scene from a sparse set of images; and unstructured Lumigraph rendering (2001), which generalized light field and view-dependent texture mapping methods in a single framework.
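A rough sketch of the underlying idea (standard notation, not quoted from the citation): in free space, the 5D plenoptic function reduces to a 4D function of rays, which light field and Lumigraph methods commonly parameterize by a ray’s intersections with two parallel planes.

```latex
% Two-plane ray parameterization used by light field / Lumigraph methods:
% a ray is indexed by its intersection (u, v) with one plane
% and (s, t) with a second, parallel plane.
L = L(u, v, s, t)
```

Novel views are then synthesized by resampling this 4D function; the Lumigraph additionally exploits approximate scene geometry to improve the reconstruction from sparse or unstructured samples.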
In subsequent work, Cohen significantly advanced the state of the art in matting & compositing, with papers on image and video segmentation based on anisotropic kernel mean-shift (2004); video cutout (2004), which preserved smoothness across space and time; optimized color sampling (2005), which improved previous approaches to image matting by analyzing the confidence of foreground and background color samples; and soft scissors (2007), the first interactive tool for high-quality image matting and compositing.
Most recently, Cohen has turned his attention to computational photography, publishing numerous highly creative, landmark papers: interactive digital photomontage (2004), for combining parts of photographs in various novel ways; flash/no-flash image pairs (2004), for combining images taken with and without flash to synthesize new higher-quality results than either image alone; panoramic video textures (2005), for converting a panning video over a dynamic scene to a high-resolution, continuously playing video; gaze-based photo cropping (2006); multi-viewpoint panoramas (2006), for photographing and rendering very long scenes; the Moment Camera (2007), outlining general principles for capturing subjective moments; joint bilateral upsampling (2007), for fast image enhancement using a down-sampled image; gigapixel images (2007), a means to acquire extremely high-resolution images with an ordinary camera on a specialized mount; deep photo (2008), a system for enhancing casual outdoor photos by combining them with existing digital terrain data; image deblurring (2010), for removing blur from images captured with camera shake; GradientShop (2010), which unified previous gradient-domain solutions under a single optimization framework; and ShadowDraw (2011), a system for guiding the freeform drawing of objects.
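To illustrate one of these techniques: joint bilateral upsampling computes each pixel of a high-resolution solution as a weighted average of low-resolution solution samples, with a spatial kernel f and a range kernel g driven by the full-resolution input image. The notation below follows the standard formulation rather than the citation itself.

```latex
% Joint bilateral upsampling:
%   \tilde{S}_p        — upsampled solution at high-res pixel p
%   S_{q_\downarrow}   — low-res solution samples over neighborhood \Omega
%   p_\downarrow, q_\downarrow — corresponding low-res coordinates
%   \tilde{I}          — full-resolution guidance image
%   f, g               — spatial and range kernels; k_p normalizes the weights
\tilde{S}_p = \frac{1}{k_p} \sum_{q_\downarrow \in \Omega}
  S_{q_\downarrow}\,
  f\!\left(\lVert p_\downarrow - q_\downarrow \rVert\right)
  g\!\left(\lVert \tilde{I}_p - \tilde{I}_q \rVert\right)
```

This lets an expensive operation (tone mapping, colorization, depth estimation) be computed on a down-sampled image and then upsampled to full resolution while respecting the edges of the original photograph.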
Moreover, Cohen’s contributions extend well beyond his own research. Cohen is a longtime volunteer in the ACM SIGGRAPH community. He served on the SIGGRAPH Papers Committee eleven times, and as SIGGRAPH Papers Chair in 1998. Cohen also served as SIGGRAPH Awards Chair from 2013 to 2018, and was a keynote speaker at SIGGRAPH Asia 2018. Cohen’s joint interests in art and engineering stem from an early age: he holds undergraduate degrees in both Art (from Beloit) and Civil Engineering (from Rutgers). He earned his Master’s degree in computer graphics at Cornell in 1985, and his PhD from the University of Utah in 1992. Cohen spent a number of years in academia, serving on the faculties at Cornell and Princeton, before joining Microsoft Research in 1994, where he worked for 21 years. He has also served as an Affiliate Professor at the University of Washington for the past 25 years. In these roles he has advised many graduate students who have gone on to significant roles in academia and industry. He currently leads the Computational Photography Team at Facebook, where he has worked since 2015.