Program Date & Time
Computer Animation Festival - Electronic Theater
Computer Animation Festival - Electronic Theater 2
Auditorium

A two-hour overview of the best animations, visual effects, and scientific visualizations produced in the last year. The jury assembled this show to represent the must-see works in computer graphics for 2010. The Electronic Theater also includes a few pieces shown by special invitation. On opening night, 16 December, the Electronic Theater begins with the presentation of the Computer Animation Festival's Best of Show and Best Technical Awards.

Friday
17 December
7:00 PM - 9:00 PM
Technical Sketches
Expressive Faces and Clothing
Room 308A

Presented in English / 영어로 발표 됨

Automated Target Selection for DrivenShape
This sketch presents an automated means to select targets for DrivenShape. Targets are chosen to allow DrivenShape to produce the lowest error in common contexts.
The Emotion Color Wheel: An Interface for Real-Time Facial Animation
The EmoCoW is an experimental color-coded interface for real-time blend-shape animation of virtual characters using only one hand and a six-axis input device.
A Hybrid Approach to Facial Rigging
This sketch presents a hybrid approach to facial rigging that uses Pose Space Deformation to seamlessly combine both geometric deformations and blendshapes.
Face Reality: Investigating the Uncanny Valley for Virtual Faces
An experiment to determine if changing the rendering style of a conversing virtual human alone can change how trustworthy the character is perceived to be.
Saturday
18 December
9:00 AM - 10:45 AM
Courses
Geometry Simulation for Feature Films
Room 308B/C

Presented in English / 영어로 발표 됨

This course introduces how flesh-mesh, cloth-mesh, and rigid-body simulations have been used in VFX films such as "Avatar", "Terminator", "Indiana Jones", and "Transformers". The main focus is on how CG artists plan the simulation setup to achieve efficient dynamic effects.

The first section of the course describes the process of setting up a realistic tree simulation on a very heavy and complex 3D model for "Avatar". Attendees learn how to plan and prepare a simulation setup from complex, heavy geometry. The second section explains the destruction simulation developed for "Indiana Jones 4" and "Transformers 2". Attendees learn how to use fracture techniques to break geometry and how to use the fragment-clustering system to control rigid-body simulation. In the third section, attendees learn how various dynamic constraints and deformable rigid simulation were used in the car-crash simulations and the multiple-layer simulation setup on "Terminator 4".

Saturday
18 December
9:00 AM - 12:45 PM
Courses
"Gimmee Somethin' to Shoot": Filming the Cinematics for Starcraft II: Wings of Liberty
Auditorium

Presented in Korean and English, translated simultaneously / 한국어와 영어로 발표됨 (동시 통역)

Blizzard Entertainment has been creating some of the world's most engaging entertainment software for nearly two decades. Each Blizzard game is built on a rich universe that provides the backdrop for some of the most epic, elaborate stories in games. The Blizzard Film Department has played a key role in telling these stories ever since the premiere of its cinematics for WarCraft II in 1995. The team hit its stride with Diablo II in 2000, showcasing the tale of the Wanderer. Shortly after that, it won awards for the cinematics featured in Warcraft III: Reign of Chaos and its expansion, Warcraft III: The Frozen Throne, and went on to create the epic opening piece for the world's most popular subscription-based, massively multiplayer online role-playing game, World of Warcraft. Subsequently, the Blizzard Film Department created cinematics for each of the game's expansions (World of Warcraft: The Burning Crusade, World of Warcraft: Wrath of the Lich King, and the upcoming World of Warcraft: Cataclysm). This year marks the department's return to the StarCraft universe with the release of the greatly anticipated StarCraft II: Wings of Liberty.

In this course, members of the Blizzard Film Department share some of the secrets of creating their on-screen magic for StarCraft II: Wings of Liberty, including the creative and technical processes, from story to completion. They also shed some light on sequences rendered in the game's engine, a technique that allows for creation of much more content than would be practical using only traditional rendering methods. Course topics include: 3D modeling, animation, rigging, simulations, lighting, compositing, rendering, and artistic and technical direction.

Saturday
18 December
9:00 AM - 12:45 PM
Computer Animation Festival - Animation Theater
Computer Animation Festival - Animation Theater 3
Room 307A/B
Saturday
18 December
9:00 AM - 6:00 PM
Technical Papers
Computational Imagery
Room E1-E4

Presented in Korean / 한국어로 발표 됨

TOG Article: 172a
Computational Rephotography
A real-time estimation and visualization technique that helps users match the viewpoint of a reference photograph. Users can reach the desired viewpoint by following this visualization technique at capture time.
TOG Article: 171
Optimizing Continuity in Multiscale Imagery
A method for creation of visually smooth mipmap pyramids from imagery at several scales introduces techniques to reduce ghosting and blurring, and demonstrates continuous zooms from space to ground level.
TOG Article: 170
Computational Highlight Holography
Computational highlight holography converts 3D models into mechanical "holograms". Small grooves are milled into a surface, and each produces a reflected or refracted highlight that moves with correct binocular and motion parallax.
TOG Article: 172
Axial-Cones: Modeling Spherical Catadioptric Cameras for Wide-Angle Light Field Rendering
This novel geometric ray modeling for non-central spherical catadioptric cameras allows fast and exact wide-FOV lightfield rendering. The method enables perspective algorithms to be directly applied to catadioptric cameras.
Saturday
18 December
9:00 AM - 10:45 AM
Courses
Recent Advances in Real-Time Collision and Proximity Computations for Games and Simulations
Room 314

Presented in English / 영어로 발표 됨

Significant recent advances in collision detection and other proximity queries have made it quite challenging for beginners to keep up with all the published papers and existing rendering systems. This half-day course explains current algorithms and their efficient implementation for interactive games, movies, physically based simulations, robotics, and CAD/CAM.

The course summarizes how recent developments achieve interactive performance for large-scale rigid, articulated, deforming, and fracturing models in various applications. Then it explores how various proximity computations can be optimized in recent GPUs and integrated with efficient GPU-based simulation methods. This overview of existing techniques and practical solutions helps attendees understand how the field will change in the coming years.

Saturday
18 December
9:00 AM - 12:45 PM
Posters
Posters
Foyer Hall E
Optical-Flow-Based Head Tracking for Camera Mouse, Immersive 3D, and Robotics
Head tracking using optical flow with auto-feature tracking for camera mouse, immersive 3D, hands-free gaming, and robotics.
Chladni Satellite: Converting Real-Time Information on Solar Wind Into Aural and Visual Experience
This interactive installation provides an environment for visitors to feel various changes in space "weather" directly on Earth.
A Pinch-Up Gesture on Multi-Touch Table with Hover Detection
A technique for realizing hover action in a multi-touch table with DSI and a pinch-up gesture to move objects intuitively.
New Cloth Modeling for Designing Dyed Patterns
This novel cloth-modeling method includes a local geometric operation for plural dyed patterns and a stitching operation for cloth gathering.
Enhancing Video Games in Real Time With Biofeedback Data
This graduate-student project explores the relationship between external visual and audio stimuli, such as those in video games, and the effects they have on the player.
Feature-Based Probability Blending
This new blending algorithm for use with texture splatting maintains the illusion of terrain features (for example, bricks) protruding intermittently through underlying textures at terrain transitions.
Embedded Motion: Generating the Perception of Motion in Peripheral Vision
Visual motion stimulus that is selectively perceived in the peripheral visual field for more impressive visual expression in pictures and videos.
Creating A Digital Model of An Origami Crane Through Recognition of Origami States from Image Sequence
This system recognizes the user's paper-folding process and builds a 3D digital model of an origami crane that has the same texture as the actual origami.
Camera Model for Inverse Perspective
A novel camera model that can express the inverse perspective projection as well as the normal perspective projection and the orthogonal projection.
Intuitive 3D Caricature Face Maker
An interactive system for generating a caricatured 3D face from a single photo by combining the 2D caricature generation method and 3D geometric morphing method.
Tangible User Interface Design for Home Automation Energy Management
This study explores how to satisfy the emotional needs of a tangible user interface in a home automation environment.
Final Gathering Using Adaptive Multiple-Importance Sampling
A proposal for a suitable PDF technique for efficient final gathering using adaptive multiple-importance sampling (AMIS).
A System for Activity Visualization of Game Experience on a Smart Phone
Establishing a game-simulation system in which a producer can design user activity connected with a smart-phone experience of game storytelling.
Fast and Symmetry-Aware Quadrangulation
This symmetry-aware quadrangulation method manipulates the positions of singularities using a hierarchical approach that makes the quadrangulation faster.
Examination: Interactive Art Installation
This installation uses a model of its creator's head as an interactive interface and projects 3D animated images of the creator on both sides of the room.
A Japanese Text-Based Mobile Augmented-Reality Application
An application that uses Japanese words as markers by converting Japanese characters from signs to text and extracting keywords in real time.
Design and Application for an Interpersonal Communication Interface
This communication interface can detect the user's behavior and control lighting changes according to the time and distance at which the user views the frame.
Progressive Mesh Cutting for a Real-Time Haptic Incision Simulator
An effective mesh-cutting method for a real-time incision simulator. The method reproduces visual and haptic feedback involved in the incision.
A Realistic 3D Facial Deformation With Facial Asymmetry of Healthy Subjects
A novel method that generates plausible faces from symmetric face models by means of statistical knowledge about the facial asymmetry of healthy subjects.
Interactive Oriental Orchid Created With the Spirit and the Mind
An ink-and-wash painting of an oriental orchid created with fingertips, as if they were brushing dust off of an orchid leaf.
Lens-Dispersion Simulation Using a Dispersive Lens Model and a Spectral-Rendering Method
A novel method for simulating lens dispersion that combines a new realistic lens model and an efficient spectral-rendering method.
A 3D Active-Touch Interaction Based on Meso-Structure Analysis
Modeling surface asperity of cultural properties based on meso-structure analysis and development of a 3D exhibition system that enables users to touch the digital archive model interactively.
Composing Music and Images for Public Display Using Correlated KANSEI Information
A public display system that uses dynamic composition of digital images and sounds by analyzing related KANSEI information.
Conversion of Performance-Mesh Animation Into Cage-Based Animation
A new linear framework to convert performance-mesh animation into cage-based animation. The method preserves the silhouette consistency of the input data better than the skeleton-based approach.
Development of a Learning-Assist System for Dextrous Finger Movements
This learning-assist system for dextrous finger movements uses finger motion-capture data measured by Hand-MoCap.
Ghost Interruption
This presentation illustrates the artistic and technical production process of a digital performance: Ghost Interruption. The dancer interacts with animated images that function as a virtual environment.
Modeling Trees With Crown-Shape Constraints
A method for modeling trees with basic solid geometry figures, such as spheres, cones, cylinders, and prisms.
Interaction With Objects Inside Media
Media that can provide a physics-based application with predefined object information and the corresponding sensory feedback data while remaining compatible with current image formats.
Cell Voice: Touching Minds and Hearts Through Voice and Light
Cell Voice is a simple and interesting context-aware message medium that allows people to directly leave voice messages in a device and convert tactile sensation into auditory data.
Pendulum
An interpretation of how emptiness, a factor in various human-perceived experiences, can be integrated in media art.
Procedural Modeling of Multiple Rocks Piled on Flat Ground
This procedural modeling method generates realistic rock shapes and effectively places the rocks on the ground.
ZOOTOPIA: A Tangible and Accessible Zoo for Hospitalized Children
ZOOTOPIA was designed for children experiencing anxiety, detachment, and loneliness from long hospital stays. With ZOOTOPIA, a stroll through the zoo becomes fun and accessible.
A Real-Time and Direct-Touch Interaction for a 3D Exhibition of Woven Cultural Property
A real-time and direct-touch interaction for a 3D exhibition of the Tenmizuhiki tapestries: Hirashaji Houou Monyou Shishu of Fune-hoko of the Gion Festival in Kyoto.
Real-Time Integral Photography Using the Extended Fractional View Method
This new system captures, synthesizes, and displays moving integral photography in real time using the extended fractional view method.
Gesture-Based Control of the Snake Robot and Its Simulated Gaits
A method of controlling the Snake robot and its simulated gaits using natural gestures, such as a hood lift, with a custom-developed data glove and accelerometers.
Life Twitter: Connecting Everyday Commodities With Social Networking Services
A system that uses sensor bundles to reshape everyday commodities. When the reformed commodities are used, the system automatically "twitters" users' activities to the social network.
Multi-Touch Based on the Metaphor of Persistence of Vision
A new method for converting single-touch devices to multi-touch using the metaphor of persistence of vision.
Ridge Detection With a Drop-of-Water Principle
This extraction method for ridge lines generates a ridge-lined 3D terrain map of Japan.
Visualizing and Experiencing Harmful Gases in the VR Environment
An approach to visualizing and experiencing the harmful gases caused by fire in a virtual-reality environment, which could help people conduct safer fire-response activities.
Interaction Design Based on Pattern Language
A new design tool for interaction installation or new-media art based on pattern language.
Horizon-Based Screen-Space Ambient Occlusion Using Mixture Sampling
A new screen-space ambient occlusion solution that overcomes the issues of earlier methods and provides more realistic results.
Reiki - The Dark Light
Using computer-vision techniques to reproduce the ancient Chinese legend that individual humans have their own particular energy that we can't see in everyday life.
Flick Flock: The Distant and Distinct Characteristics of the Masses in Immersive Aesthetic Space
This poster elaborates on concepts of distance and distinctiveness behind an immersive virtual environment called Flick Flock (2009) that uses embodied interaction to explore human perceptual processes.
A Task-Parallel Programming Language for Interactive Applications
Based on some observations about common data and code structure in rendering and interactive applications, this poster introduces a prototype language for managing tasks and data in task-parallel programs.
Surge: An Experiment in Real-Time Music Analysis for Gaming
A multi-disciplinary collaboration at Drexel University focuses on expanding the music-game genre through merging novel audio-analysis algorithms with music-driven dynamic gameplay to create a "music-reactive" game.
A Glove-Type Input Device Using Hetero-Core Fiber Sensors for 3D-CG Modeling
A glove-type input device using hetero-core fiber optic sensors for 3D-CG modeling and intuitive interface.
Fast Patching of Moving Regions for High-Dynamic-Range Imaging
A fast intensity-mapping-function-based method for patching moving regions and preserving better high-dynamic-range images in the synthesis process.
Toward a GPU-Only Rod-Based Hair Animation System
A closed-loop GPU system for animating human hair in real time using a physically realistic method that accounts for hair twisting.
Volume Matting: Object-Tracking-Based Matting Tool
A new kind of auto-rotoscoping tool that incorporates a matting procedure for extracting detailed alpha mattes into a key-frame-based object-tracking procedure with appropriate occlusion handling.
Rendering Method for Multiple Reflections on a Concave Mirror
Generation of multiple reflections on a semi-cylindrical mirror with an analytical method that can render multiple reflection images according to eye position.
Pondang: Artificial Creature Ecology With Evolutionary Sound
An investigation into a tangible interactive art form (Pondang) shared by virtual creatures in an artificial pond and humans.
Direct Volume Drilling of Internal Structures Using a 2D Pointing Device
A new mouse-based drilling approach for volume-rendered images and a framework for interactive drilling of arbitrary 3D internal regions.
Object-Motion-Based Video Key-Frame Extraction
A method of extracting the key frame that contains the most dominant object motion.
Bounding Volumes for Implicit Intersections
An approach for automatic generation of bounding volumes for intersections of implicit surfaces. The approach is based on set operations, normalization, and offsetting.
A Rule-Based Method for Creating Bookshelf Models
A rule-based method for generating bookshelf models that can be easily given variations by employing structural patterns.
Annotating With "Sticky" Light for Remote Guidance
A remote guidance system that can support a mobile worker by projecting "sticky annotations" from a remote helper into the worker's physical environment.
Cognitive Laser: New Gaming Device for First-Person-Shooter Games Using a Laser Shooter and a Big Screen
Traditional displays for first-person-shooter games use a CRT, which cannot support large-screen display. This new input device uses laser-recognizable display techniques.
Square Deformed Map With Simultaneous Expression of Close and Distant View
A new deformed map in which both close and distant view are expressible. The top view for the central area and the overhead view for the surrounding area are smoothly connected by deformation.
Visorama 2.0: A Platform for Multimedia Gigapixel Panoramas
The Visorama system provides a full platform for viewing and interacting with multimedia gigapixel panoramas.
Scene(ic): Performativity and Visual Narrative in Amateur Digital Photography
Scene(ic) is an interactive installation that investigates the relationship among lived experience, performativity, and the visual narrative of travel photography. A tangent of memory is made manifest.
Complex Mapping With the Interpolated Julia Set and Mandelbrot Set
Introduction of color interpolation into flat images of the Julia and Mandelbrot sets for creating a shading-like effect and transferring the two fractals to the Riemann sphere.
Interactive View-Dependent Rendering With Culling for Articulated Models in Crowd Scenes
A novel, view-dependent rendering method for crowd scenes: a cluster hierarchy for an animated, articulated model as a dual representation for view-dependent rendering and occlusion culling.
Motion Sketch
A compact and convenient representation of an image sequence as a motion-outline image, called a "Motion Sketch".
Data-Glove-Based Interface for Real-Time Character Motion Control
A real-time character motion control interface that uses finger angles and hand positions from data gloves and magnetic sensors to control the character motion and style parameter.
Water History in a Deferred Shader: Wet Sand on the Beach
Waves in real-time graphics typically fail to leave surfaces looking wet. This method displays the contact history between water and its container in a way that is suitable for real-time graphics.
Saturday
18 December
9:00 AM - 6:00 PM
Courses
An Introduction to OpenGL 4.0 Programming
Room E5

Presented in English / 영어로 발표 됨

OpenGL is the most widely available library for creating interactive computer graphics applications across all of the major computer operating systems. Its uses range from scientific visualization to computer-aided design, interactive gaming, and entertainment, and with each new version its capabilities expose the most up-to-date features of modern graphics hardware.

This course provides an accelerated introduction to programming OpenGL with an emphasis on the most modern methods for using the library. OpenGL has undergone numerous updates in recent years, which have fundamentally changed how programmers interact with the application programming interface (API) and the skills required to be an effective OpenGL programmer. The most notable of those changes was shader-based rendering, which entered the API many years ago but has recently expanded to subsume almost all functionality in OpenGL. The course summarizes each of the shader stages in OpenGL version 4.0 and methods for specifying data to be used in rendering with OpenGL.

While the annual SIGGRAPH conferences have presented numerous courses on OpenGL over the years, recent revisions to the API, culminating in OpenGL version 4.0, have provided a wealth of new functionality and features that enable creation of ever-richer content. This course builds from demonstrating the use of the most fundamental shader-based OpenGL pipeline to introducing all of the latest shader stages.
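To give a flavor of the shader-based pipeline the course centers on, here is a minimal GLSL 4.0 vertex/fragment shader pair. This is an illustrative sketch, not material from the course; the names `vPosition`, `uModelViewProjection`, and `fColor` are hypothetical.

```glsl
#version 400 core
// Vertex shader: transform each vertex into clip space.
in vec4 vPosition;                 // per-vertex attribute supplied by the application
uniform mat4 uModelViewProjection; // combined transform uploaded by the application

void main()
{
    gl_Position = uModelViewProjection * vPosition;
}
```

```glsl
#version 400 core
// Fragment shader: write a constant color for every fragment.
out vec4 fColor;

void main()
{
    fColor = vec4(1.0, 0.5, 0.2, 1.0); // opaque orange
}
```

In the modern, shader-based pipeline there is no fixed-function fallback: the application must compile and link shaders like these and supply all vertex data and matrices itself, which is exactly the shift in required skills the course addresses.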

Saturday
18 December
9:00 AM - 12:45 PM
Courses
Introduction to Using RenderMan
Off-Site Venue

Presented in English / 영어로 발표 됨

This full-day course is an intensive, hands-on practical introduction to Pixar's RenderMan.

In the first part of the course, attendees gain sufficient familiarity with RenderMan's scene-description protocol to edit and manipulate RIB files, which allow modeling and animation applications to communicate with RenderMan. The second part of the course introduces the RenderMan Shading Language (RSL). The goal of this section is to provide an overview of the creative potential of the shading language so attendees can continue their own independent exploration. During the final part of the course, attendees are introduced to the Python scripting language and Pixar's PRMan.
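For a sense of what the scene-description protocol looks like, this is a minimal hand-written RIB file of the sort attendees learn to edit. It is a sketch, not course material; the output filename and shader choice are illustrative.

```rib
Display "sphere.tiff" "file" "rgb"     # render to a TIFF file
Projection "perspective" "fov" [30]    # perspective camera, 30-degree field of view
WorldBegin
    Translate 0 0 5                    # move the sphere away from the camera
    Color [1 0 0]                      # red
    Surface "plastic"                  # a standard RSL surface shader
    Sphere 1 -1 1 360                  # radius, zmin, zmax, sweep angle
WorldEnd
```

Because RIB is plain text, small edits like these (swapping the surface shader, changing the projection) are a quick way to see how modeling applications drive the renderer.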

Saturday
18 December
9:00 AM - 6:00 PM
Technical Sketches
Animated Meshes and Architecture
Room 315

Presented in English / 영어로 발표 됨

Displaced Subdivision Surfaces of Animated Meshes
A technique for extraction of a series of displaced subdivision surfaces from an existing animated mesh.
Automatic Architecture Model Generation Based on Object Hierarchy
A novel framework for generating an explicit architecture model from scanned points of existing architecture.
Real-Time Controlled Metamorphosis of Animated Meshes Using Polygonal-Functional Hybrids
This approach allows production of metamorphosing transitions between animated meshes of arbitrary topology in real time, using both the meshes and the animated skeletons of the objects.
Space-Time Blending With Improved User Control in Real Time
Improving the original space-time blending technique to provide users with a set of more intuitive controls in real-time applications.
Saturday
18 December
9:00 AM - 10:45 AM
Exhibition
Exhibition
Hall C1 & C2
The SIGGRAPH Asia 2010 Exhibition is a diverse and energetic showcase of everything Asia and beyond has to offer in computer graphics and interactive techniques:

Hardware and software companies
Production studios
Association pavilions
Educational institutions
Research laboratories
And more

Try the latest systems, talk with the people who developed them, and meet buyers, distributors, and resellers. The SIGGRAPH Asia 2010 Exhibition provides the networking opportunities you need for another year of evolving ideas and successful business.

Saturday
18 December
9:30 AM - 6:00 PM
Technical Papers
Fluids and Flows
Room E1-E4

Presented in English / 영어로 발표 됨

TOG Article: 175
Multi-Phase Fluid Simulations Using Regional Level Sets
Simulation of multi-phase liquid interactions with each component registered to the regional level-set graph.
TOG Article: 174
Scalable Fluid Simulation Using Anisotropic Turbulence Particles
A novel, scalable method for simulating turbulent fluids. The method uses energy transport and an efficient particle representation to achieve accurate turbulent detail at interactive frame rates.
TOG Article: 176
Detail-Preserving Fully-Eulerian Interface Tracking Framework
This paper presents a new fully Eulerian interface tracking framework that preserves fine details of liquid.
TOG Article: 173
Free-Flowing Granular Materials With Two-Way Solid Coupling
An efficient continuum-based model for simulating granular materials that allows free flows with zero cohesion, global coupling between pressure and friction, and two-way interaction with solid bodies.
Saturday
18 December
11:00 AM - 12:45 PM
Technical Sketches
Image Editing
Room 308A

Presented in English / 영어로 발표 됨

Fast Local Color Transfer Via Dominant Colors Mapping
A more accurate local color-transfer method.
NinjaEdit: Simultaneous and Consistent Editing of an Unorganized Set of Photographs
A novel editing technique to apply visual effects to multiple photographs of a scene easily and simultaneously, maintaining their visual coherency with 3D reconstruction.
Fusion-Based Image and Video Decolorization
A novel decolorization method built on the principle of image fusion.
Layer-Based Single-Image Dehazing by Per-Pixel Haze Detection
A novel single-image dehazing technique built on a per-pixel haze-detection strategy.
Saturday
18 December
11:00 AM - 12:45 PM
Exhibitor Tech Talks
Khronos Media Acceleration Forum
Exhibitor Tech Talk Stage

Presented in English / 영어로 발표 됨

Khronos Group open standards are the foundation of all the products on display at the Khronos Pavilion at SIGGRAPH Asia 2010. If you develop multimedia content, you should attend this 90-minute session to learn from the world's leading experts on computer graphics and interactive techniques. It is a highly condensed overview of how royalty-free Khronos APIs let you tap into cutting-edge graphics and media processing on platforms ranging from high-end workstations to mobile phones.

Topics include:

• Applications driving next-generation handset requirements
• Opportunities opened up by innovation and standardization in graphics and mobile gaming
• Technological advances in multimedia handset technology

Saturday
18 December
11:15 AM - 12:45 PM
Exhibitor Tech Talks
State-of-the-Art Application Development on GPUs
Exhibitor Tech Talk Stage

Presented in English / 영어로 발표 됨

Learn how NVIDIA's range of professional software makes it easy for application developers to maximize the power of professional GPUs to deliver the ultimate experiences and capabilities within their products and for their customers. Also learn how many of these offerings benefit end users directly as they push the boundaries of what's possible in graphics and visual computing.

Saturday
18 December
1:30 PM - 2:45 PM
Courses
Creating Amazing Effects With GPU Shaders
Room 308B/C

Presented in English / 영어로 발표 됨

Shader programming has become an indispensable part of graphics application development. But learning to program shaders is difficult, and it is especially difficult to understand the effect of shader parameters. This course presents shader development from an interactive standpoint. It discusses vertex, fragment, and geometry shaders, shader-specific theory, and the GLSL 4.0 shading language; reviews the graphics pipeline, including features rarely taught in beginning courses but exposed in shaders; and shows how shaders fit into the pipeline operations. Each class of shaders is introduced with glman examples that explain details of the concept.

The OpenGL 4.0 and GLSL 4.0 specifications were recently released. While most attendees will not yet have compatible hardware on their laptops, the course explains what is new in this release and the extra functions it can perform. Attendees receive free software so they can follow along and interact with examples.

Saturday
18 December
2:15 PM - 6:00 PM
Courses
Exploiting Temporal Coherence in Real-Time Rendering
Room 314

Presented in English / 영어로 발표 됨

Temporal coherence, the correlation of content between adjacent rendered frames, exists across a wide range of scenes and motion types in practical real-time rendering. Taking advantage of temporal coherence can save redundant computation and significantly improve the performance of many rendering tasks with only a marginal decrease in quality. This not only allows incorporation of more computationally intensive shading effects in existing applications, but it also offers exciting opportunities to extend high-end graphics applications to reach lower-spec, consumer-level hardware.

This course introduces the concepts of temporal coherence and provides the practical and theoretical working knowledge required to exploit temporal coherence in a variety of shading tasks. It begins with an introduction to the general idea of temporal coherence in rendering and an overview of recent developments in the field. Then it focuses on a key technique: the reverse reprojection cache, which is the foundation of many applications. The course explains a number of extensions of the basic algorithm for assisting in multi-pass shading effects, shader antialiasing, shadow casting, and global-illumination effects. It also introduces several more general coherence topics beyond pixel reuse, including visibility-culling optimization and object-space global-illumination approximations. For all the major techniques and applications covered, implementation and practical issues involved in development are addressed in detail.

The course emphasizes know-how and guidelines for algorithm choices. After attending the course, participants are encouraged to find and utilize temporal coherence in their own applications and to rapidly adapt existing algorithms to meet their requirements.
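The core of the reverse-reprojection idea can be stated compactly: reconstruct the world-space position behind the current pixel, map it through the previous frame's view-projection matrix, and reuse the cached shading if it lands inside the previous frame. The following Python sketch illustrates only that mapping step (it is not the course's code, and the matrix convention and function names are this sketch's own assumptions):

```python
# Minimal sketch of reverse reprojection: map a current-frame world-space
# position into the PREVIOUS frame's screen space. If it lands on-screen,
# the shading cached at that pixel may be reused instead of recomputed.

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix (nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def reproject(world_pos, prev_view_proj, width, height):
    """Return (x, y) pixel coordinates in the previous frame, or None on a cache miss."""
    x, y, z, w = mat_vec(prev_view_proj, [*world_pos, 1.0])
    if w <= 0.0:
        return None                      # behind the previous camera
    ndc_x, ndc_y = x / w, y / w          # perspective divide -> [-1, 1]
    if not (-1.0 <= ndc_x <= 1.0 and -1.0 <= ndc_y <= 1.0):
        return None                      # outside the previous frame: recompute
    px = (ndc_x * 0.5 + 0.5) * (width - 1)   # NDC -> pixel coordinates
    py = (ndc_y * 0.5 + 0.5) * (height - 1)
    return px, py

# With an identity "camera", a point at the NDC origin maps to the screen center.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(reproject((0.0, 0.0, 0.5), identity, 640, 480))  # -> (319.5, 239.5)
```

In a real renderer this arithmetic runs per pixel in a shader, and a depth comparison is also needed to reject positions that were occluded in the previous frame; the off-screen and disocclusion cases are exactly where the cached value must be discarded and recomputed.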

Saturday
18 December
2:15 PM - 6:00 PM
Featured Speakers
Visual Works of Blade & Soul
Auditorium

Presented in Korean, translated simultaneously to English / 한국어로 발표 됨 (영어 동시 통역)

Hyung-tae Kim is a Korean artist who is well known for his outstanding contributions to the videogame industry. He gained his reputation for art creation in games such as the Magna Carta series and the later installments of the War of Genesis series (War of Genesis: Tempest, War of Genesis III, and War of Genesis III: Part 2). He began his videogame career working in the field of background music, then shifted to art and design, and since then he has branched out into the comic-book industry, serving as a guest cover artist and contributing to Udon's Street Fighter series. He is currently an art director at NCsoft, where he is working on a multiplayer online role-playing game called Blade & Soul, an Asian martial-arts fantasy influenced mainly by Korean culture.

In this presentation, Hyung-tae Kim shares his experience producing the highly anticipated massively multiplayer online role-playing game (MMORPG) Blade & Soul and reveals some of the early stages, such as concept art and trailers, on the journey to bring the game to life. Focusing on the creation of new characters, he also discusses how art direction and game design influence the way the game is played and how it is marketed to a worldwide audience.

Saturday
18 December
2:15 PM - 3:45 PM
Technical Papers
Volumetric Modeling and Rendering
Room E1-E4

Presented in English / 영어로 발표 됨

TOG Article: 180
Volumetric Modeling With Diffusion Surfaces
This paper proposes Diffusion Surfaces (DSs), a new primitive for volumetric modeling, and a user interface tailored to modeling objects whose inner structure has rotational symmetries.
TOG Article: 179
Fast Parallel Surface and Solid Voxelization on GPUs
Data-parallel methods for conservative and 6-separating surface voxelization and for solid voxelization, including a novel octree-based sparse solid voxelization approach.
TOG Article: 177
Unbiased, Adaptive Stochastic Sampling for Rendering Inhomogeneous Participating Media
This unbiased, adaptive sampling technique for generating scattering events accelerates rendering of highly inhomogeneous participating media by one to two orders of magnitude.
TOG Article: 178
A Hierarchical Volumetric Shadow Algorithm for Single Scattering
An incremental, hierarchical algorithm for rendering volumetric shadows in single-scattering participating media.
Saturday
18 December
2:15 PM - 4:00 PM
Technical Sketches
Production and New Media
Room 308A

Presented in English / 영어로 발표 됨

Simulation-Aided Performance: Behind The Coils Of Slinky Dog in "Toy Story 3"
Summary of the collaborative work between simulation technical directors and animators for Slinky Dog in "Toy Story 3".
Simulating Drools and Snots for Bulls in "Knight and Day"
A set of techniques used to produce drool effects for bulls in the "Knight and Day" movie production.
MobiRT: An Implementation of OpenGL ES-Based CPU-GPU Hybrid Ray Tracer for Mobile Devices
Implementation of an OpenGL ES-based CPU-GPU hybrid ray tracer, the first demonstration of full Whitted ray tracing of dynamic scenes using OpenGL ES.
Blue Mars Chronicles: Building for Millions
Insights and guidelines on how to maintain productive relationships with 3D content developers of the online virtual world Blue Mars.
Saturday
18 December
2:15 PM - 4:00 PM
Courses
Introduction to Processing on Android Devices
Room E5

Presented in English / 영어로 발표 됨

Processing is a Java-based programming language and environment used as a teaching, prototyping, and production tool for creative media. Many interactive installations, generative art pieces, data visualizations, and physical computing systems have been implemented with Processing. Android, on the other hand, is an operating system and software stack for mobile devices that is steadily gaining popularity all over the world.

Processing is widely used by designers, artists, and students, but until recently it was limited to the PC and Mac platforms. In early 2010, a port of Processing to the Android OS was initiated. Although it is still at an early stage of development, significant portions of its functionality, such as 2D and 3D rendering, are already usable. The combination of Processing's simple API and extensible architecture with the unique characteristics and ubiquitous presence of mobile devices will certainly open up new creative uses for such devices.

This course helps attendees get started with Processing development on Android devices. It introduces the main characteristics of the Android platform, explains how to run simple graphics applications and upload them to phones, and summarizes the possibilities offered by more advanced features (OpenGL, GPS). Audience comments, questions, and suggestions are encouraged.

Saturday
18 December
2:15 PM - 6:00 PM
Computer Animation Festival - Electronic Theater
Computer Animation Festival - Electronic Theater 3
Auditorium

A two-hour overview of the best animations, visual effects, and scientific visualizations produced in the last year. The jury assembled this show to represent the must-see works in computer graphics for 2010. The Electronic Theater also includes a few pieces shown by special invitation. On opening night, 16 December, the Electronic Theater begins with presentation of the Computer Animation Festival's Best of Show and Best Technical Awards.

Saturday
18 December
4:15 PM - 6:15 PM
Technical Sketches
Interactive Rendering
Room 315

Presented in English / 영어로 발표 됨

Real-Time Stereo Visual Hull Rendering using a Multi-GPU-accelerated Pipeline
A novel real-time, image-based visual-hull rendering pipeline that uses a single PC with multiple GPUs and renders stereo images of the object at 30-40 fps.
Ordered Depth-First Layouts for Ray Tracing
In this ordered depth-first tree layout, the child with the larger surface area is stored next to its parent, maximizing parent-child locality.
Parallel Progressive Photon Mapping on GPUs
A GPU implementation of progressive photon mapping. The main idea is a new data-parallel progressive radiance estimation algorithm, including data-parallel construction of photon maps.
Interactive Voxelized Epipolar Shadow Volumes
A new visibility-sampling scheme in epipolar space that can be computed efficiently via a standard parallel scan, enabling shadow rendering in participating media at up to 300 fps.
Saturday
18 December
4:15 PM - 6:00 PM
Technical Papers
3D Modeling
Room E1-E4

Presented in English / 영어로 발표 됨

TOG Article: 183
Data-Driven Suggestions for Creativity Support in 3D Modeling
This paper introduces data-driven suggestions for 3D modeling. The approach computes and presents components that can augment the artist's current shape.
TOG Article: 181
Computer-Generated Residential Building Layouts
A method for automated generation of interior building layouts. The approach is based on the layout design process developed in architecture.
TOG Article: 182
Context-Based Search for 3D Models
A context-based model search engine that uses information about the supporting scene a user is building to find good models.
TOG Article: 184
Style-Content Separation by Anisotropic Part Scales
This paper demonstrates co-analysis of a set of 3D objects to create novel instances derived from the set. The analysis is facilitated by style-content separation.
TOG Article: 185
Multi-Feature Matching of Fresco Fragments
This multiple-cue matching approach for reassembling fragments of archaeological artifacts improves current 3D matching algorithms by incorporating normal-based feature descriptors and reweighting features by importance.
Saturday
18 December
4:15 PM - 6:30 PM