Courses

Full Conference

In SIGGRAPH 2010 Courses, attendees learn from the experts in the field and gain inside knowledge that is critical to career advancement. Courses are short (1.5 hours) or half-day (3.25 hours) structured sessions that often include elements of interactive demonstration, performance, or other imaginative approaches to teaching.

The spectrum of Courses ranges from an introduction to the foundations of computer graphics and interactive techniques for those new to the field to advanced instruction on the most current techniques and topics. Courses include core curricula taught by invited instructors as well as Courses selected from juried proposals.

James L. Mohler
SIGGRAPH 2010 Director for Education
Purdue University

Courses

Spectral Mesh Processing

Sunday, 25 July | 2:00 PM - 5:15 PM | Theater 411

Spectral mesh processing was proposed at the beginning of the 1990s to port the “signal processing toolbox” to 3D mesh models. Now, with recent advances in computing power and numerical software, this vision can be fully implemented. In this course, attendees learn how to transfer the underlying concepts to the mesh-model setting, how to implement the “spectral mesh processing” toolbox, and how to use it for real applications, including filtering, shape matching, remeshing, segmentation, and parameterization.
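
For readers who want a concrete feel for the idea, the sketch below is a minimal illustration (not part of the course materials): it builds the combinatorial graph Laplacian of a triangle mesh, computes its eigenvectors ("mesh harmonics"), and low-pass filters the vertex positions by keeping only the lowest-frequency spectral coefficients. The mesh arrays and the `num_modes` cutoff are hypothetical placeholders.

```python
import numpy as np

def spectral_smooth(vertices, faces, num_modes=10):
    """Low-pass filter vertex positions with the combinatorial graph Laplacian.

    vertices: (n, 3) float array of positions; faces: (m, 3) int array of indices.
    num_modes: how many low-frequency eigenvectors to keep (assumed cutoff).
    """
    n = len(vertices)
    W = np.zeros((n, n))
    for a, b, c in faces:                       # adjacency from triangle edges
        for i, j in ((a, b), (b, c), (c, a)):
            W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(axis=1)) - W              # L = D - W
    _, evecs = np.linalg.eigh(L)                # eigenvectors sorted by frequency
    basis = evecs[:, :num_modes]                # keep only the smoothest modes
    return basis @ (basis.T @ vertices)         # project and reconstruct
```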

COURSE SCHEDULE

2 pm
Introduction
Levy

2:05 pm
What is so Spectral?
Zhang

2:45 pm
Do Your Own Spectral Mesh Processing at Home?
Levy

3:30 pm
Break

3:45 pm
Applications -- What Can We Do With It?
Segmentation, Shape Retrieval, Non-Rigid Matching, Symmetry Detection
Zhang

4:35 pm
Applications -- What Can We Do With It?
Quadrangulation, Parameterization
Levy

5 pm
Conclusion, Q&A
Zhang and Levy

Bruno Levy
INRIA

Richard Zhang
Simon Fraser University

Processing for Visual Artists and Designers

Sunday, 25 July | 2:00 PM - 5:15 PM | Room 406 AB

Processing is a free language designed for artists, designers, musicians, educators, students, and anyone else who wants to create expressive, meaningful images, animations, and interactive graphics. Unlike most programming languages, it has been designed from the ground up for creating modern visual works appropriate for print, the web, and installations. With Processing, even people who are new to computers can create rich, exciting graphics that can be used in applications as diverse as interactive demonstrations, personally expressive art, interactive educational environments, gaming, procedural art, text manipulation, and animation.

Using Processing to create graphics and animations is intellectually and emotionally rewarding. Even beginners can create imagery that's wildly complex or serenely simple and elegant. It can move as though alive, or change in only the most subtle ways. Computers give us a whole new world of tools and media for communicating and expressing ourselves. When your intellectual programming mind and your intuitive image-making mind are purring together in harmony, the process is pure joy and the results are often beautiful. Best of all, Processing is free!

This course introduces the basic ideas of modern programming, but it does not focus on abstract theory. Every idea is illustrated, and Processing's value in making pictures and animations is made clear. The course provides all the tools, from context to motivation, required to continue the learning adventure and become a Processing master.

COURSE SCHEDULE

2 pm
Welcome and Overview

2:05 pm
Why Processing Is Useful and Cool

2:20 pm
A Closet Full of Shoes: Variables

2:40 pm
Rinse, Lather, Repeat: Loops and Routines

3:10 pm
Drawing Pictures

3:45 pm
Break

4 pm
Smooth Moves: Using Curves

4:20 pm
Say It Like You Mean It: Using Type

4:30 pm
Building A Project, Start To Finish

5 pm
Resources: Where To Go From Here

5:05 pm
Q&A

Andrew Glassner
Coyote Wind Studios

Physically Based Shading Models in Film and Game Production

Sunday, 25 July | 2:00 PM - 5:15 PM | Room 502 B

Physically grounded shading models have been known for many years, but they have only recently started to replace the "ad-hoc" models in common use for both film and game production. Compared to "ad-hoc" models, which require laborious tweaking to produce high-quality images, physically-based, energy-conserving shading models easily create materials that hold up under a variety of lighting environments. These advantages apply to both photorealistic and stylized scenes, and to game development as well as production of CG animation and computer VFX. Surprisingly, physically based models are not more difficult to implement or evaluate than the traditional "ad-hoc" ones.

This course begins with a short explanation of the physics of light-matter interaction and how it is expressed in simple shading models. Then several speakers discuss specific examples of how shading models have been used in film and game production. In each case, the advantages of the new models are demonstrated, and drawbacks or issues arising from their usage are discussed. The course also includes descriptions of specific production techniques related to physically based shading.
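
As a small illustration of what "energy-conserving" means in practice, the hedged sketch below implements a normalized Blinn-Phong specular term in Python. The (power + 8) / (8π) factor is one commonly cited approximation, not necessarily the normalization any of the speakers use; the vector inputs are assumed to be unit length.

```python
import numpy as np

def normalized_blinn_phong(n, l, v, spec_power, spec_color):
    """Energy-conserving Blinn-Phong specular term (one common normalization).

    n, l, v: unit-length normal, light, and view vectors.
    Unlike an ad-hoc specular term, the normalization keeps total reflected
    energy roughly constant as the glossiness exponent changes, so materials
    hold up across different lighting environments.
    """
    h = l + v
    h = h / np.linalg.norm(h)                        # half vector
    n_dot_h = max(float(np.dot(n, h)), 0.0)
    n_dot_l = max(float(np.dot(n, l)), 0.0)
    norm = (spec_power + 8.0) / (8.0 * np.pi)        # approximate normalization
    return spec_color * norm * (n_dot_h ** spec_power) * n_dot_l
```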

COURSE SCHEDULE

2 pm
Background: The Physics of Shading
Hoffman

2:30 pm
Practical Implementation of Physically Based Shading Models at tri-Ace
Gotanda

3 pm
Crafting Physically Motivated Shading Models for Game Development
Hoffman

3:30 pm
Break

3:45 pm
Terminators and Iron Men: Image-Based Lighting and Physical Shading at ILM
Snow

4:30 pm
Faster Photorealism in Wonderland: Physically Based Shading and Lighting at Sony Pictures Imageworks
Martinez

5 pm
Conclusion, Q&A
Gotanda, Hoffman, Martinez

Yoshiharu Gotanda
tri-Ace Inc.

Naty Hoffman
Activision

Adam Martinez
Sony Pictures Imageworks

Ben Snow
Industrial Light & Magic

Perceptually Motivated Graphics, Visualization, and 3D Displays

Sunday, 25 July | 2:00 PM - 5:15 PM | Room 502 A

This course provides an overview of how knowledge of the human visual system (HVS) and perception is applied to several aspects of computer graphics, virtual environments, visualization, and 3D display technologies. It explains the role the HVS and human perception play in optimizing rendering algorithms, display algorithms, virtual-environment design, fidelity, and engineering. Example applications include real-time rendering, high-quality rendering, material editing using images, and training and knowledge transfer in virtual environments. The course also surveys recent research results presented at the ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization and other conferences.

COURSE SCHEDULE

2 pm
Welcome and Introduction
McNamara

2:05 pm
Perceptually Motivated 3D Displays and Depth Perception
Banks

2:45 pm
Perceptually Motivated Visualization
Healey

3:30 pm
Break

3:45 pm
Perceptually Motivated Rendering
McNamara

4:15 pm
Perceptually Motivated Simulation & Virtual Environments
Mania

4:45 pm
Leading Edge Research and APGV 2010
Mania and Banks

5 pm
A Look to the Future
McNamara and Mania

5:10 pm
Conclusion, Q&A
All

Ann McNamara
Texas A&M University

Katerina Mania
Technical University of Crete

Christopher Healey
North Carolina State University

Marty Banks
University of California, Berkeley

Image Statistics: From Data Collection to Applications in Graphics

Sunday, 25 July | 2:00 PM - 3:30 PM | Room 403 AB

Natural images exhibit statistical regularities that differentiate them from random collections of pixels, and the human visual system appears to have evolved to exploit such statistical regularities. Because computer graphics is about producing imagery for observation by humans, it is important to understand which statistical regularities occur in nature, so they can be emulated by image-synthesis methods. This course introduces all aspects of natural image statistics, ranging from data collection to analysis and their applications in computer graphics, computational photography, and image processing.
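
One classic second-order regularity is that the power spectra of natural images fall off roughly as 1/f². The sketch below (an illustrative aside, not course code) computes a radially averaged power spectrum of a grayscale image with NumPy; the image array and bin count are hypothetical inputs, and it assumes the image is large enough that every radial bin is populated.

```python
import numpy as np

def radial_power_spectrum(img, num_bins=64):
    """Radially averaged power spectrum of a 2D grayscale image.

    For natural images the result typically falls off roughly as 1/f^2,
    one of the statistical regularities the course discusses.
    """
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2.0, x - w / 2.0)                 # radial frequency
    bins = np.linspace(0.0, r.max(), num_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins), 1, num_bins)
    return np.array([power.ravel()[idx == i].mean()
                     for i in range(1, num_bins + 1)])
```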

COURSE SCHEDULE

2 pm
Introduction
Reinhard

2:05 pm
Data Collection and Calibration
Pouli

2:15 pm
First Order Statistics
Pouli

2:30 pm
Second Order Statistics
Cunningham

2:55 pm
Higher Order Statistics
Reinhard

3:10 pm
Color Statistics
Reinhard

3:25 pm
Discussion
All

Erik Reinhard
University of Bristol

Tania Pouli
University of Bristol

Douglas Cunningham
Brandenburgische Technische Universität

Build Your Own 3D Display

Sunday, 25 July | 3:45 PM - 5:15 PM | Room 403 AB

Film studios are now routinely producing live-action and animated 3D content for theatrical release. This advance is primarily enabled by widespread adoption of digital projection, which allows accurate view synchronization, but the underlying 3D display technologies have changed little in the last few decades. Theatrical systems rely on stereoscopic display: projecting unique images for the right and left eyes and separating the images with various filters in viewing glasses. But now several LCD manufacturers are introducing auto-multiscopic displays, which allow view-dependent imagery to be perceived without special glasses. 3D display is poised for another resurgence.

This course provides attendees with the mathematics, software, and practical details they need to build their own low-cost stereoscopic displays. Each new concept is illustrated using a practical 3D display implemented with off-the-shelf parts. First, the course explains glasses-bound stereoscopic displays and provides detailed plans for attendees to construct their own LCD shutter glasses. Then the course explains unencumbered auto-multiscopic displays, including step-by-step directions to construct lenticular and parallax-barrier designs using modified LCDs. All the necessary software, including algorithms for rendering and calibration, is provided for each example, so attendees can quickly construct 3D displays for their own educational, amusement, and research purposes.

The course concludes by describing methods for capturing, rendering, and viewing multi-view imagery from a variety of sources: stereoscopic OpenGL support, methods for ray-tracing multi-view imagery with POV-Ray, and techniques for capturing live-action light fields.
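
To give a flavor of the software side, here is a deliberately simplified sketch (not the course's provided code) of the image-preparation step for a two-view parallax-barrier or lenticular display: the left and right views are interleaved column by column, so the barrier or lens sheet in front of the LCD steers alternate columns to each eye. Real displays also need the calibration the course describes (barrier pitch, alignment, viewing distance).

```python
import numpy as np

def interleave_two_views(left, right):
    """Column-interleave a stereo pair for a two-view parallax-barrier display.

    left, right: (h, w, 3) images of identical size. Even pixel columns carry
    the left view and odd columns the right view; the physical barrier or
    lenticular sheet makes each set of columns visible to only one eye.
    """
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]
    out[:, 1::2] = right[:, 1::2]
    return out
```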

COURSE SCHEDULE

3:45 pm
Introduction: History and Physiology
Hirsch

3:55 pm
Representation and Display
Lanman

4:10 pm
Glasses-Bound Stereoscopic Displays
Hirsch

4:30 pm
Unencumbered Automultiscopic Displays
Lanman

4:50 pm
Source Material: Rendering and Capture
Hirsch

5 pm
Emerging Technology
Lanman

5:10 pm
Questions & Answers
Hirsch and Lanman

Matthew Hirsch
MIT Media Lab

Douglas Lanman
Brown University

Stylized Rendering in Games

Monday, 26 July | 9:00 AM - 12:15 PM | Room 502 A

As they matured, the visual arts (painting, sculpture, photography, and architecture) all developed new visual-abstraction mechanisms to go beyond "realism." Recent advances in visual effects have put film and games into this transitional state. In a sense, we're like artists at the end of the Renaissance: we've nearly mastered photorealism, but are only at the beginning of our discoveries about expression and perception.

Some film effects are subtle, like the color shifts and post-processing in Mirror's Edge. Others, such as the graphic-novel look of Prince of Persia, dominate the entire rendering style.  In games, real-time and interactive constraints require more efficient and robust solutions than are employed elsewhere in computer graphics. And to be successful, a stylized renderer must integrate with appropriately stylized models, animation, and audio to form a coherent virtual world and ultimately enhance game play.

In this course, leading game developers candidly discuss the challenges of creating and implementing a stylized artistic vision for a game. Each speaker covers a specific game aspect, such as the art pipeline, rendering algorithms, art direction, and modeling.  

COURSE SCHEDULE

9 am
Introduction
McGuire

9:20 am
Monday Night Combat
Chandana "Eka" Ekanayake

9:40 am
The Illustrative Rendering of Prince of Persia
St-Amour

10 am
Personalized Cool Characters in Brink
Calver

10:15 am
Break

10:30 am
Style and Gameplay in Mirror's Edge
Halén

11 am
Cartoon 3D for Battlefield Heroes
Halén

11:15 am
Making Concept Art Real for Borderlands
Thibault and Martel

11:35 am
Panel Discussion and Questions
All

Morgan McGuire
Williams College

Henrik Halén
Electronic Arts

Jean-Francois St-Amour
Ubisoft Entertainment

Deano Calver
Splash Damage

Aaron Thibault
Gearbox Software

Brian Martel
Gearbox Software

Chandana Ekanayake
Uber Entertainment

Biomedical Applications: What You Need to Know

Monday, 26 July | 9:00 AM - 10:30 AM | Theater 411

This course takes an in-depth look at the types of models and analysis tools researchers are developing for biomedical applications. These applications are motivated by the increasing ubiquity of 3D imaging devices, which makes it possible to acquire frequent, high-quality images of structures that researchers use, for example, to track the effects of disease. Although simply visualizing the data is useful, quantitative measurements require development of higher-level models for measuring quantities such as area, density, and shape change.

This course covers the state of the art in this area, including what kinds of models are created from images and how, what kinds of measurements can be made both from the images themselves and from intermediate models, and common issues that arise when dealing with imaging data.
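
As a tiny example of the kind of quantitative measurement meant above, the hedged sketch below computes the total surface area of a triangle mesh extracted from imaging data, the sort of quantity one might track over time to follow disease progression. The vertex and face arrays are hypothetical inputs.

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Total surface area of a triangle mesh.

    vertices: (n, 3) float array of positions; faces: (m, 3) int array of indices.
    """
    vertices = np.asarray(vertices, dtype=float)
    faces = np.asarray(faces, dtype=int)
    a, b, c = (vertices[faces[:, k]] for k in range(3))
    cross = np.cross(b - a, c - a)                     # per-triangle cross product
    return 0.5 * float(np.linalg.norm(cross, axis=1).sum())
```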

COURSE SCHEDULE

9 am
Background
Grimm

9:30 am
Bat Ears
Mueller

10 am
Neuro Imaging
Larson

Cindy Grimm
Washington University in St. Louis

Rolf Müller
Virginia Polytechnic Institute and State University

Stephen Larson
National Center for Microscopy and Imaging Research, University of California, San Diego

Recent Advances in Real-Time Collision and Proximity Computations for Games and Simulations

Monday, 26 July | 2:00 PM - 5:15 PM | Room 502 A

It is quite challenging for beginners to keep up with all the published papers and evolving systems for collision detection and other proximity queries. This course presents an overview of existing techniques and practical solutions, and discusses how the field is likely to change in the coming years. It explains recent developments designed to achieve interactive performance for large-scale rigid, articulated, deforming, and fracturing models in various applications. It also covers two popular physics libraries, Bullet and PhysX, and explains how they implement proximity queries and how they can be used in various simulations.
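
For orientation, the sketch below shows two of the most basic building blocks behind such systems: an axis-aligned bounding-box (AABB) overlap test and the minimum distance between two AABBs. These are illustrative primitives only, not code from Bullet or PhysX; real engines traverse bounding-volume hierarchies built from tests like these.

```python
import numpy as np

def aabb_overlap(min_a, max_a, min_b, max_b):
    """True if two axis-aligned boxes intersect (each argument: length-3 corner)."""
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

def aabb_distance(min_a, max_a, min_b, max_b):
    """Minimum distance between two AABBs; 0.0 when they overlap."""
    gap = np.maximum(np.maximum(min_a - max_b, min_b - max_a), 0.0)
    return float(np.linalg.norm(gap))
```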

COURSE SCHEDULE

2 pm
Course Introduction
Yoon

2:10 pm
Introduction to Collision and Proximity Queries
Manocha

2:30 pm
Proximity Queries for Rigid and Articulated Characters
Kim

3 pm
Collision Detection for Deformable and Fracturing Models
Yoon

3:30 pm
Break

3:45 pm
GPU-Based Proximity Computations
Manocha

4:15 pm
Optimizing Proximity Queries for CPU, SPU and GPU
Coumans

4:45 pm
PhysX and Proximity Queries
Tonge

Sung-eui Yoon
Korea Advanced Institute of Science and Technology

Dinesh Manocha
University of North Carolina at Chapel Hill

Erwin Coumans
Sony Computer Entertainment US R&D

Young J. Kim
Ewha Womans University

Richard Tonge
NVIDIA Corporation

Importance Sampling for Production Rendering

Tuesday, 27 July | 9:00 AM - 10:30 AM | Room 406 AB

Importance sampling provides a practical, production-proven method for integrating diffuse and glossy surface reflections with arbitrary image-based environment or area lighting constructs. In Monte Carlo integration, a function is evaluated at random points across a domain to produce an estimate of its integral. With a large number of sample points, the method produces a highly accurate estimate of the integral and provides a strong basis for simulating complex problems such as light transport.

Frequently, evaluating enough samples to reach an accurate result is too computationally expensive, so fewer samples are used at the cost of visual noise, or variance, within the image. Importance sampling offers a means to reduce this variance by skewing the samples toward the regions of the illumination integral that contribute the most energy. For instance, a sample taken in the direction of specular reflection, or toward a bright light source in the environment, is more representative of the final value of the integral than a random sample.

The variance can be reduced more efficiently still by combining multiple components of the illumination integral, such as the lighting and the material function, to determine where to sample; this is the principle of multiple importance sampling (MIS). Filtered importance sampling (FIS) offers an alternative way to handle the noise: the lighting-environment look-ups are pre-filtered, providing fast integration and a smoother result with a significantly smaller number of samples.

Importance sampling, MIS, and FIS have various practical implications. This quarter-day course provides the background required for using Monte Carlo-based techniques for direct lighting and explains how visual-effects companies use these shading methods in their production pipelines.
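
To make the estimator concrete, here is a minimal, self-contained sketch (illustrative only, not drawn from the course materials): it integrates a sharply peaked 1D function over [0, 1], first with uniform sampling and then with samples drawn from a matched probability density, showing how importance sampling removes most of the variance. The integrand and density are hypothetical stand-ins for a narrow specular lobe.

```python
import numpy as np

def estimate(f, sampler, n=50_000, seed=0):
    """Monte Carlo estimate of the integral of f over [0, 1].

    sampler(rng) must return (x, p): a sample and its probability density.
    The estimator averages f(x) / p, so placing samples where f is large
    (and dividing by p) keeps the estimate unbiased while reducing variance.
    """
    rng = np.random.default_rng(seed)
    return sum(f(x) / p for x, p in (sampler(rng) for _ in range(n))) / n

def uniform(rng):
    return rng.random(), 1.0              # p(x) = 1 on [0, 1]

def peaked(rng):
    x = rng.random() ** (1.0 / 51.0)      # inverse-CDF sample of p(x) = 51 x^50
    return x, 51.0 * x ** 50

f = lambda x: x ** 50                     # sharply peaked integrand
print(estimate(f, uniform))               # noisy estimate of the true value 1/51
print(estimate(f, peaked))                # same expected value, far lower variance
```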

COURSE SCHEDULE

9 am
Introduction
Background
Filtered Importance Sampling (FIS)
FIS for Area Lights in "A Christmas Carol"
Colbert

9:35 am
Importance Sampling Framework at MPC
François

9:55 am
Multiple Importance Sampling (MIS)
Premoze

10:25 am
Questions
All

Mark Colbert
ImageMovers Digital

Simon Premoze

Guillaume François
Moving Picture Company

Color Enhancement and Rendering in Film and Game Production

Tuesday, 27 July | 9:00 AM - 12:15 PM | Room 502 A

Production of convincing and compelling scene representations on limited display media is a challenge common to painting, photography, film production, and computer graphics. The core of the problem is finding a transformation from the colors in the original scene to those in the final image. For almost 200 years, this transformation has been primarily determined by the chemical and optical properties of film, which have been carefully engineered for pleasing results (the "film look"). Digital color enhancement has vastly extended the variety of possible looks, but the "film look" remains the default baseline.

Despite its importance in film and game production, the transformation from scene-referred to display-referred colors (also called "rendering"; not to be confused with the more common computer graphics meaning of the term) is little understood by many practitioners. This course covers the relevant theory, practical production methods, and techniques and considerations relating to color enhancement and rendering in both film and game production.
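
For readers unfamiliar with the pipeline, the toy Python sketch below shows one possible scene-referred to display-referred transform: an exposure scale, a simple global tone curve, and display gamma encoding. It is purely illustrative and is not any studio's actual rendering transform; the curve here is the simple x / (1 + x) compression, whereas real "film look" curves add a distinct toe and shoulder.

```python
import numpy as np

def scene_to_display(rgb_linear, exposure=1.0, gamma=2.2):
    """Toy scene-referred -> display-referred color transform.

    rgb_linear: array of linear scene-referred values (unbounded above).
    Returns display-referred values in [0, 1], ready for a gamma-encoded display.
    """
    x = np.maximum(np.asarray(rgb_linear, dtype=float) * exposure, 0.0)
    tonemapped = x / (1.0 + x)                     # compress highlights toward 1.0
    return np.clip(tonemapped, 0.0, 1.0) ** (1.0 / gamma)
```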

COURSE SCHEDULE

9 am
Introduction
Hoffman

9:05 am
From Scene to Screen

9:35 am
Color Management
Goldstone

9:55 am
Color Spaces and Operations
Selan

10:15 am
Color at Pixar: Ingredients for Creativity
Glinn

10:35 am
Break

10:50 am
The Craft of Color Grading
Levinson

11:10 am
Filmic Tonemapping for Real-time Rendering
Duiker

11:30 am
Film Simulation for Videogames
Gotanda

noon
Color Enhancement for Videogames
Hoffman

Haarm-Pieter Duiker
Duiker Research

Dominic Glynn
Pixar Animation Studios

Joseph Goldstone
Lilliputian Pictures LLC

Yoshiharu Gotanda
tri-Ace Inc.

Naty Hoffman
Activision

Joshua Pines
Technicolor

Jeremy Selan
Sony Pictures Imageworks

Stefan Sonnenfeld
Company 3

An Introduction to 3D Spatial Interaction With Videogame Motion Controllers

Tuesday, 27 July | 2:00 PM - 5:15 PM | Room 502 A

Three-dimensional interfaces use motion sensing, physical inputs, and spatial-interaction techniques to effectively control highly dynamic virtual content. With the advent of the Nintendo Wii, the Sony EyeToy, and a host of soon-to-be-released peripherals such as the PlayStation Motion Controller, Microsoft’s Natal, and Sixense’s TrueMotion, game developers, researchers, and hobbyists are challenged to create compelling interface techniques and game-play mechanics that make use of this technology. Researchers in the fields of virtual and augmented reality as well as 3D user interfaces have been working on 3D interaction for nearly two decades. The techniques, interaction styles, and metaphors developed in these communities are directly applicable to games that make use of motion-controller hardware.

This course demystifies the workings of current videogame motion controllers and provides a thorough overview of the techniques, strategies, and algorithms used in creating 3D interfaces for tasks such as 2D and 3D navigation, object selection and manipulation, gesture-based application control, and character control. It summarizes the strengths and limitations of various motion-control sensing technologies in today’s and soon-to-be-released peripherals, including accelerometers, gyroscopes, and 2D and 3D depth cameras. It also presents techniques for compensating for their deficiencies, including gesture recognition and non-isomorphic control-to-display mappings. Course materials include detailed notes and demonstration videos.
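
As a small taste of a non-isomorphic control-to-display mapping, the hedged sketch below uniformly amplifies a controller rotation so that a small wrist motion produces a larger virtual rotation. The gain value and axis-angle input are hypothetical; the course covers these mappings (and their perceptual limits) in far more depth.

```python
import numpy as np

def amplify_rotation(axis, angle, gain=2.0):
    """Amplified (non-isomorphic) rotation mapping, returned as a 3x3 matrix.

    axis: 3-vector rotation axis from the controller; angle: radians; gain:
    amplification factor. Uses Rodrigues' formula to build the matrix.
    """
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    a = gain * angle
    k = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])       # cross-product matrix
    return np.eye(3) + np.sin(a) * k + (1.0 - np.cos(a)) * (k @ k)
```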

COURSE SCHEDULE

2 pm
Welcome, Introduction & Roadmap
LaViola

2:20 pm
Common Tasks in 3D User Interfaces
LaViola

2:50 pm
3D Interfaces With 2D and 3D Cameras
Marks

3:35 pm
Working With the Nintendo Wiimote
LaViola

4:15 pm
3D Spatial Interaction with the PlayStation Move
Marks

5 pm
3D Gesture Recognition Techniques
LaViola

Joseph LaViola
University of Central Florida

Richard Marks
Sony Computer Entertainment America

Volumetric Methods in Visual Effects

Wednesday, 28 July | 9:00 AM - 12:15 PM | Room 502 B

Computer-generated volumetric elements such as clouds, fire, and whitewater are becoming commonplace in movie production. The goal of this course is to familiarize attendees with the technology behind these effects. Experienced presenters who have authored proprietary and commercial volumetrics tools summarize the basics of the technology and explain the rationales behind drastically different development choices.

The course begins with a quick introduction to generating and rendering volumes, then presents a production-usable volumetrics toolkit, focusing on the feature set and why those features are desirable, and concludes with a survey of the specific tools developed at Double Negative, DreamWorks, Sony Pictures Imageworks, Rhythm & Hues, and Side Effects Software. The production-system presentations focus on development history, how the tools are used by artists, and the strengths and weaknesses of the software. Emphasis is on strategies for efficient data structures, shading architecture, multi-threading and parallelization, holdouts, and motion blurring.
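
For context, the sketch below is a bare-bones emission/absorption ray march through a density field, the core loop that production volumetrics tools elaborate on with lighting, shadows, holdouts, and motion blur. The `density` callable, step size, and extinction coefficient are hypothetical placeholders, not code from any of the systems presented.

```python
import numpy as np

def march_ray(density, origin, direction, step=0.1, num_steps=64, sigma_t=1.0):
    """Accumulate radiance and transmittance along one ray through a volume.

    density(p) -> scalar density at 3D point p. Emission here is simply
    proportional to density, so denser regions glow brighter; a production
    renderer would shade each sample with lights and shadows instead.
    """
    transmittance, radiance = 1.0, 0.0
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    for _ in range(num_steps):
        rho = max(float(density(p)), 0.0)
        alpha = 1.0 - np.exp(-sigma_t * rho * step)    # opacity of this step
        radiance += transmittance * alpha * rho        # emission seen through what's in front
        transmittance *= 1.0 - alpha
        p = p + step * d
        if transmittance < 1e-3:                       # early ray termination
            break
    return radiance, transmittance
```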

COURSE SCHEDULE

9 am
Introduction

9:15 am
Basics of Volume Modeling & Rendering

10:15 am
Break

10:30 am
Rhythm & Hues
Tessendorf

10:50 am
Side Effects Software
Clinton

11:10 am
DreamWorks Animation
Penney and Kontkanen

11:30 am
Double Negative
Clifford and Graham

11:50 am
Sony Pictures Imageworks
Wrenninge

Nafees Bin Zafar
Digital Domain

Magnus Wrenninge
Sony Pictures Imageworks

Jerry Tessendorf
Rhythm & Hues Studios

Andrew Clinton
Side Effects Software Inc.

Devon Penney
PDI/DreamWorks

Jeff Clifford
Double Negative

Gavin Graham
Double Negative

Fundamentals of Visual Analytics

Wednesday, 28 July | 9:00 AM - 12:15 PM | Room 502 A

For centuries, we have been improving our ability to collect and organize data, and this process is accelerating at an unprecedented rate. Unfortunately, our ability to analyze and generate information and knowledge from the data has not kept pace with their expanding volume. Over the past 30 years, the fields of visualization and information visualization have developed to help solve this problem. These active areas of research have led to development of helpful tools for decision makers, business operators, scientists, and engineers. However, gaining insight, supporting analysis, and making decisions based on these massive, disparate, uncertain, and rapidly evolving datasets requires more than just data visualization. So a new discipline has emerged. Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces. This course provides an introduction to the fundamentals of visual analytics, describes its core components, and summarizes the field's grand challenges.

COURSE SCHEDULE
9 am
Introduction, History, and Needs of Visual Analytics
Ebert and Thomas

9:30 am
Cognitive Science Basis of Visual Analysis and Decision Making
Tversky

10 am
Decision Making in Visual Analytics
Ebert

10:15 am
Break

10:30 am
Visual Analytics Research Problems in Scientific Visualization
Koch

11:15 am
Data Representations, Transformations, and Statistics for Visual Reasoning
Maciejewski

11:45 am
Database and Data Analysis Issues in Visual Analytics
Keim

David Ebert
Purdue University

Ross Maciejewski
Purdue University

Steffen Koch
Universität Stuttgart

Jim Thomas
Pacific Northwest National Laboratory

Daniel Keim
Universität Konstanz

Barbara Gans Tversky
Stanford University

Advances in Real-Time Rendering in 3D Graphics and Games I

Wednesday, 28 July | 9:00 AM - 12:15 PM | Room 515 AB

Advances in real-time graphics research and the ever-increasing power of mainstream GPUs and consoles continue to generate an explosion of innovative algorithms for fast, interactive rendering of complex and engaging virtual worlds. Every year, the latest video games display a vast new variety of sophisticated algorithms for ground-breaking 3D rendering that pushes the visual boundaries and interactive experience of rich environments.

This course is designed to encourage cross-pollination of knowledge for future games and other interactive applications. As the next installment in the now-established series of SIGGRAPH Courses on real-time rendering, it focuses on the best of graphics practices and research from the game-development community, and provides practical and production-proven algorithms. Course instructors include designers and producers from the makers of several award-winning games: Bungie, Naughty Dog, Crytek, DICE, AMD, Rockstar, and others. Topics include many advanced production secrets in addition to practical advice for implementing advanced techniques. Attendees will acquire several highly optimized algorithms in various areas of real-time rendering.

COURSE SCHEDULE

9 am
Introduction: Current Trends and Future Challenges in Real-Time Rendering for Games
Tatarchuk

9:05 am
Rendering Techniques in Toy Story 3
Ownby, Hall and Hall

9:55 am
A Real-Time Radiosity Architecture for Video Games
Einarsson and Martin

10:45 am
Real-Time Order Independent Transparency and Indirect Illumination Using Direct3D 11
Yang and McKee

11:30 am
CryENGINE 3: Reaching the Speed of Light
Kaplanyan

Natalya Tatarchuk
Bungie LLC

John Paul Ownby
Avalanche Software

Chris Hall
Disney Interactive

Rob Hall
Disney Interactive Avalanche Studio

Per Einarsson
EA DICE

Sam Martin
Geomerics

Anton Kaplanyan
Crytek

Jay McKee
Advanced Micro Devices, Inc.

Jason Yang
Advanced Micro Devices, Inc.
