This course introduces and defines augmented reality and user interfaces that can apply AR to enhance users' perceptions of reality.






Representation of spatial data is an important issue in game programming, computer graphics, visualization, solid modeling, computer vision, and geographic information systems (GIS). Recently, there has been much interest in hierarchical representations such as quadtrees, octrees, and pyramids, based on image hierarchies, as well as methods that use bounding boxes, based on object hierarchies. The key advantage of these representations is that they provide a way to index into space. In fact, they are little more than multidimensional sorts. They are compact and, depending on the nature of the spatial data, they save space and time, and facilitate operations such as search.
This course provides a brief overview of hierarchical spatial data structures and related algorithms that make use of them. It describes hierarchical representations of points, lines, collections of small rectangles, regions, surfaces, and volumes. For region data, it points out the dimension-reduction property of the region quadtree and octree, and how to navigate between nodes in the same tree, a main reason for the popularity of these representations in ray-tracing applications. The course also demonstrates how to use these representations for both raster and vector data. In the case of nonregion data, it shows how these data structures can be used to compute nearest objects in an incremental fashion so that the number of objects need not be known in advance. It also reviews a number of different tessellations and shows why hierarchical decomposition into squares instead of triangles or hexagons is preferred. The course concludes with a demonstration of the SAND spatial browser, based on the SAND spatial database system, and presentation of the VASCO JAVA applet.
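As a taste of the course material, the region quadtree's key idea — large uniform regions collapse into single nodes, the "dimension-reduction" property mentioned above — can be sketched in a few lines of Python (an illustrative sketch, not course code; the function name and representation are ours):

```python
# Region-quadtree construction for a binary image (illustrative sketch).

def build_quadtree(image, x=0, y=0, size=None):
    """Recursively split a 2^n x 2^n binary image into quadrants.

    Returns a leaf value (0 or 1) for uniform blocks, or a 4-tuple of
    subtrees (NW, NE, SW, SE) otherwise. Large uniform regions collapse
    to single leaves, which is why the structure saves space.
    """
    if size is None:
        size = len(image)
    first = image[y][x]
    if all(image[y + dy][x + dx] == first
           for dy in range(size) for dx in range(size)):
        return first                                  # uniform block -> leaf
    h = size // 2
    return (build_quadtree(image, x,     y,     h),   # NW quadrant
            build_quadtree(image, x + h, y,     h),   # NE quadrant
            build_quadtree(image, x,     y + h, h),   # SW quadrant
            build_quadtree(image, x + h, y + h, h))   # SE quadrant

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
tree = build_quadtree(img)
# Three of the four top-level quadrants are uniform and become leaves;
# only the SE quadrant subdivides further.
```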



Interpolation is a fundamental topic in computer graphics. While textbooks have generally focused on "regular" interpolation schemes such as B-splines, scattered interpolation approaches have found a wide variety of applications. These include topics in facial animation, skinning, morphing, rendering, and fluid simulation.
This best-practice guide to scattered data interpolation reviews the major algorithms for scattered interpolation, shows how and where they are applied in a variety of published graphics studies, and compares and contrasts them. The algorithms include Shepard interpolation, Wiener interpolation, Laplace and thin-plate interpolation, radial basis functions (RBFs), moving least squares, and kernel regression. The course summarizes stability and computational properties with a focus on real-time applications and provides some theoretical insights to broaden the course's engineering perspective.
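As a small illustration, the simplest of these algorithms, Shepard (inverse-distance-weighted) interpolation, fits in a few lines (a generic sketch, not course code; the function name and data are ours):

```python
# Shepard (inverse-distance-weighted) scattered interpolation sketch.
import math

def shepard(points, values, q, p=2):
    """Interpolate a value at query point q from scattered 2D samples.

    Weights fall off as 1/d^p; the interpolant reproduces the data
    exactly at the sample points (handled by the early return).
    """
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(q[0] - x, q[1] - y)
        if d == 0.0:
            return v              # query coincides with a sample point
        w = 1.0 / d ** p
        num += w * v
        den += w
    return num / den

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [0.0, 1.0, 2.0]
shepard(pts, vals, (0.0, 0.0))    # -> 0.0 (exact at a sample)
```

A known drawback the course's comparisons address: plain Shepard interpolation produces flat spots at the data points, which motivates the more sophisticated RBF and moving-least-squares schemes.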



This course explores rebuilding and relighting the Toy Story characters, their worlds, and their human co-stars, and ensuring that they remained familiar and recognizable to the audiences who saw them 11 years ago in "Toy Story 2".
The tools available in computer graphics have significantly improved during the last decade. Technologies such as ambient occlusion, subsurface scattering, efficient indirect illumination, and subdivision surfaces either did not exist or were in their infancy when "Toy Story 2" was released in 1999. Audiences have also developed higher expectations for the visuals in contemporary animated films. New and improved technologies add welcome richness and depth, but they must be used carefully to make sure that familiar characters do not become strangers.
For "Toy Story 3", the Pixar team had to rebuild all the models, both characters and sets, yet make them appear exactly the same as their earlier incarnations. As they increased the level of detail and visual richness, they had to maintain a consistent design that would be recognizable from the preceding two films. The course includes explanations of visual storytelling through lighting, early prototypes of Lotso, and many examples of the team's successes and failures.



Data-driven animation using motion-capture data has become a standard practice in character animation. A number of techniques have been developed to add flexibility to captured human motion data by editing joint trajectories, warping motion paths, blending a family of parameterized motions, splicing motion segments, and adapting motion to new characters and environments.
Even with the abundance of motion-capture data and the popularity of data-driven animation techniques, programming with motion-capture data is still not easy. A single clip of motion data encompasses a lot of heterogeneous information, including joint angles, the position and orientation of the skeletal root, their temporal trajectories, and a number of coordinate systems. Due to this complexity, even simple operations on motion data, such as linear interpolation, are rarely described as succinct mathematical equations in research papers.
This course provides not only a solid mathematical background, but also a practical guide to programming with motion-capture data. It begins with a brief review of affine geometry and coordinate-invariant (conventionally called coordinate-free) geometric programming, which generalizes incrementally to deal with three-dimensional rotations/orientations, the poses of an articulated figure, and full-body motion data. Then it identifies a collection of coordinate-invariant operations on full-body motion data and their object-oriented implementation. Finally, it explains the practical use of this programming framework in a variety of contexts ranging from data-driven manipulation and interpolation to state-of-the-art biped locomotion control.
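As a small illustration of the kind of coordinate-invariant operation the course formalizes, interpolating between two orientations is commonly done with quaternion slerp (a generic sketch, not the course's framework; names are ours):

```python
# Spherical linear interpolation (slerp) between unit quaternions,
# stored as (w, x, y, z). A generic illustration of coordinate-invariant
# orientation interpolation.
import math

def slerp(q0, q1, t):
    """Interpolate along the shorter great-circle arc between q0 and q1."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                        # take the shorter arc
        q1, dot = [-c for c in q1], -dot
    if dot > 0.9995:                     # nearly parallel: lerp + renormalize
        out = [a + t * (b - a) for a, b in zip(q0, q1)]
        n = math.sqrt(sum(c * c for c in out))
        return [c / n for c in out]
    theta = math.acos(dot)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]

identity = [1.0, 0.0, 0.0, 0.0]
quarter = [math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)]  # 90° about z
half = slerp(identity, quarter, 0.5)    # 45° about z
```

Unlike naive per-component interpolation of joint angles, slerp produces the same motion regardless of the coordinate frame in which the orientations are expressed — the property the course generalizes to full-body motion data.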





In three individual sessions, The Foundry and some special industry guests showcase their high-end compositor NUKE, stereo toolset OCULA, creative texture-painting tool MARI, and new digital production hub STORM. They also summarize research and development at The Foundry.
In order to attend The Foundry GeekFest, attendees must register via The Foundry web site.



User interfaces for desktop, web, mobile, and vehicle applications reach across culturally diverse user communities, sometimes within a single country or language group, and certainly throughout the world. If user interfaces are to be usable, useful, and appealing to such a wide range of audiences, user-interface and user-experience developers must account for cultural preferences in globalizing and localizing products and services. In this tutorial, attendees learn about culture models, culture characteristics, and practical principles and techniques of design and analysis that are immediately useful in both current and next-generation products and services. As the course concludes, attendees have an opportunity to put their new understanding into practice through a series of pen-and-paper exercises.



A SIGGRAPH conference is an exciting event, but it is often an intimidating experience for first-time attendees. There are so many new terms, new ideas, and new product concepts to try to understand. It is like standing in a room with 100 doors and having no idea which door to open because you have no idea what the label on each door actually means. This leaves new attendees baffled and frustrated about how to spend their time. This course is designed to ease newcomers into the SIGGRAPH Asia conference experience by presenting the fundamental concepts and vocabulary at a level that can be readily understood. Far from being made up of dry facts, this course will also portray the fun and excitement that led most of us here in the first place. Attendees in the course will become well-prepared to understand, appreciate, enjoy, network, and learn from the rest of the SIGGRAPH Asia experience.
This is a half-day beginning-level course, given lecture-style. PowerPoint slides and live demos showing the use of computer graphics in applications will be used to illustrate fundamental concepts (e.g., here is an image with and without perspective) and to illustrate those concepts in applications (e.g., here is the use of perspective in a visualization application and why you have to be careful when using it). The source code for the graphics program demos will be included on the conference DVD so that attendees can use them in their own applications and classes as well.
Because this is the first time that SIGGRAPH is being held in Korea, an introductory course taught by experienced practitioners will be an important aspect of the conference.












Languages, Tools, and APIs for GPU Computing
9:00-9:50 am
This talk covers the key features and differences between the major programming languages, APIs, and development tools available today. It also explains several high-level design patterns for consumer, professional, and HPC applications, with practical programming considerations for each.
Scott Fitzpatrick
NVIDIA Corporation
Introduction to Programming for CUDA C
10:00-10:50 am
This presentation teaches the basics of programming GPUs. For anyone with a background in C or C++, it explains everything necessary to start programming in CUDA C. Explore the various APIs available to CUDA applications and learn the best (and worst) ways to employ them in applications. The concepts are illustrated with step-by-step walkthroughs of code samples.
Samuel Gateau
NVIDIA Corporation
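The heart of what this session teaches is CUDA's thread-indexing idiom: each thread computes one element from its block and thread indices. That idiom can be sketched in plain Python (an illustration of the pattern only; real CUDA C expresses it with `blockIdx`, `blockDim`, and `threadIdx` inside a `__global__` kernel):

```python
# Emulating CUDA's flat thread indexing for a vector add.
# In CUDA C the kernel body would read:
#   int i = blockIdx.x * blockDim.x + threadIdx.x;
#   if (i < n) c[i] = a[i] + b[i];

def vector_add(a, b, block_dim=4):
    n = len(a)
    c = [0.0] * n
    grid_dim = (n + block_dim - 1) // block_dim    # ceil(n / block_dim)
    for block_idx in range(grid_dim):              # each block in the grid...
        for thread_idx in range(block_dim):        # ...runs block_dim threads
            i = block_idx * block_dim + thread_idx
            if i < n:                              # guard the partial last block
                c[i] = a[i] + b[i]
    return c

vector_add([1, 2, 3, 4, 5], [10, 20, 30, 40, 50])  # -> [11, 22, 33, 44, 55]
```

On a GPU the two loops disappear: every (block, thread) pair executes the body concurrently, which is why the bounds guard on `i` is essential.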
Computational Photography: Real-Time Plenoptic Rendering
11:00-11:50 am
Learn how to change the focus of an image after it has been captured. Get the latest information on GPU-based plenoptic rendering, including a demonstration of refocusing, novel view generation, polarization, high-dynamic-range imaging, and stereo 3D. Learn how GPU hardware enables interactive plenoptic rendering with high-resolution imagery, and how this process opens up entirely new possibilities for modern photography.
Todor Georgiev
Adobe Systems Incorporated
Andrew Lumsdaine
Indiana University
Using 3D Stereoscopic Solutions in Broadcast and Film Production
12:00-12:50 pm
Film, television, and the entire digital-content industry are undergoing a stereoscopic 3D revolution. 3D stereoscopic techniques provide an entirely new way to tell a story to your audience. Learn how software tools and hardware work together to make stereoscopic 3D production and distribution possible.
Andrew Page
NVIDIA Corporation
Developer Tools for the GPU: Parallel Nsight & AXE
1:00-1:50 pm
Learn about NVIDIA's latest software-development tools. The first half of this session explains how NVIDIA's range of professional software makes it easy for application developers to maximize the power of professional GPUs to deliver ultimate experiences and capabilities within their products and for their customers. Many of these offerings benefit end users directly, as they push the boundaries of what's possible in graphics and visual computing.
The second half of the session covers NVIDIA’s Parallel Nsight development environment for Windows. Parallel Nsight, fully integrated into Visual Studio 2010, introduces native GPU debugging and platform-wide performance analysis tools for both computing and graphics developers.
Phillip Miller
Samuel Gateau
NVIDIA Corporation
DX11 Tessellation
2:00-2:50 pm
This talk provides a comprehensive overview of Direct3D 11 tessellation technology. It not only demonstrates how to leverage Direct3D 11 tessellation to take your games to the next level, but also addresses crucial practical issues. It also presents a few breathtaking practical examples and demonstrates how these advanced techniques have been adopted in game engines.
Tianyun Ni
NVIDIA Corporation
Using the GPU for Visual Effects for Film & Video
3:00-4:30 pm
Learn from NVIDIA partners who are working with the GPU to improve visual effects for film and video.
"Avatar" Case Study: Accelerating Out-Of-Core Ray Tracing of Sparsely Sampled Occlusion with PantaRay
3:00-3:30 pm
Modern VFX rendering pipelines confront major complexity challenges: a film like "Avatar" requires rendering hundreds of thousands of frames, each containing hundreds of millions or billions of polygons. Furthermore, the process of lighting requires many rendering iterations across all shots. This talk presents the architecture of an efficient out-of-core ray-tracing system designed to make rendering precomputations of gigantic assets practical on GPUs. The system, dubbed PantaRay, leverages development of modern ray-tracing algorithms for massively parallel GPU architectures and combines them with new out-of-core streaming and level-of-detail rendering techniques.
Luca Fascione
Sebastian Sylwan
Weta Digital
Additional speakers and topics coming soon.
CUDA and Fermi Optimization Techniques
4:45-6:00 pm
This talk presents a detailed technical overview of implementing CUDA on Fermi for various algorithms: Monte Carlo simulation, the finite-difference method, and some useful image-processing algorithms. Learn how to implement these algorithms on the Fermi-based CUDA architecture, how to use new Fermi features, and how to optimize for maximum performance in these applications.
Hyungon Ryu
NVIDIA Corporation
Presentations will be available for download on the SIGGRAPH Asia 2010 and NVIDIA web sites soon after the workshop.





As real-time graphics aspires to movie-quality rendering, higher-order, smooth-surface representations take center stage. Catmull-Clark subdivision surfaces are the dominant higher-order surface type used in feature films, as they can model surfaces of arbitrary topological type and provide a compact representation for smooth surfaces that facilitate modeling and animation. But their use has been hindered in real-time applications because the exact evaluation of such surfaces on modern GPUs is neither memory nor performance efficient. The advent of DirectX 11, recent theoretical results in efficient substitutes for subdivision surfaces, and recent hardware advances offer the possibility to see real-time cinematic rendering in the near future.
This course covers the hardware tessellation part of the SIGGRAPH 2009 Course Efficient Substitutes for Subdivision Surfaces [3], with emphasis on character tessellation in movie-quality games. It adds material based on recent research findings and practical optimizations. The goal of this course is to familiarize attendees with the practical aspects of introducing substitutes for subdivision surfaces to increase efficiency in real-time applications.
The course begins by highlighting the properties that make SubD modeling attractive and introduces some recent techniques to capture these properties by alternative surface representations with a smaller footprint. It lists and compares the new surface representations and focuses on their implementation on current and next-generation GPUs, then addresses crucial practical issues, such as watertight evaluation, creases and corners, view-dependent displacement occlusion mapping, and LOD computation. Finally and most importantly, it explains how these advanced techniques have been adopted in game pipelines.
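The idea behind subdivision surfaces is easiest to see in its curve analogue: repeatedly refine a control polygon until it converges to a smooth spline. Chaikin corner cutting (a teaching sketch of the general principle, not course material) does exactly this:

```python
# Chaikin corner cutting on an open polygon -- the 1D analogue of the
# refine-until-smooth idea behind Catmull-Clark subdivision surfaces.

def chaikin(points, iterations=1):
    """Replace each edge (P, Q) with points at 1/4 and 3/4 along it.

    In the limit the polygon converges to a smooth quadratic B-spline
    curve; surface schemes like Catmull-Clark refine a control mesh in
    the same spirit.
    """
    for _ in range(iterations):
        refined = [points[0]]                     # keep open-curve endpoints
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        refined.append(points[-1])
        points = refined
    return points

poly = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
smooth = chaikin(poly, iterations=3)   # sharp corner progressively rounded
```

Hardware tessellation inverts the memory trade-off this loop implies: instead of storing the densely refined result, the GPU evaluates the refined positions on the fly each frame.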





The rapidly changing capabilities of modern graphics processing units (GPUs) require that developers understand how to combine parallel-programming techniques with the traditional interactive rendering pipeline exposed by OpenGL and Direct3D. This course demonstrates how to combine traditional rendering APIs with advanced parallel computation using OpenCL (Open Computing Language), a cross-platform API for programming parallel systems such as GPUs. The course presenters are experts on general-purpose GPU computation and advanced rendering from academia and industry, and have presented papers and tutorials on the topic at SIGGRAPH, Graphics Hardware, Supercomputing, and elsewhere.
The first section reviews the basics of the OpenCL API, including a "Hello World" application written in OpenCL. Attendees with laptops will be able to try the examples on their own during the course. The second section covers more advanced cases, including how to write applications that interact with standard graphics APIs. The final section includes performance-optimization "tips and tricks" for writing OpenCL applications.



Proceduralism is a powerful concept in computer graphics. It facilitates scenes of enormous scale, exquisite varieties of detail, and impressive efficiency. However, artists who are fluent in procedural techniques are still rare, and many studios miss out on the possibilities that this exciting field offers. This course explores how to create procedural shaders without programming, using Pixar's industry-standard renderer, RenderMan, and its Autodesk Maya-based front end, RenderMan Studio.
The first section of the course is an overview of RenderMan, its history, use in the industry, important features, and how it works. Topics include the Reyes pipeline and how it has helped to create some of the most impressive visuals in computer graphics, and proceduralism: its pros and cons, both in general and as it applies to shading.
The second section is a live demonstration of how to create a procedural animated shader for an orange that ages over time, from unripe to fresh to old and dusty. The demo begins with a sphere in Maya, then shows how to create all the detail using a shading network in RenderMan Studio. No textures are used. The look is created entirely with noise, splines, displacement, and more! Finally, the course presents examples of how the techniques used in shading the orange apply in industry and beyond.





Autodesk showcases its latest Digital Entertainment Creation (DEC) technology and how it enables cutting-edge pipelines for games, film, and television production.
This is your opportunity to:
Check here [4] for the latest event and speaker information.





Autodesk demonstrates its latest Digital Entertainment Creation (DEC) technology and how it enables cutting-edge pipelines for game, film, and television production.
The demonstrations are open to all SIGGRAPH Asia 2010 attendees.





Hardware and software companies
Production studios
Association pavilions
Educational institutions
Research laboratories
And more
Try the latest systems, talk with the people who developed them, and meet buyers, distributors, and resellers. The SIGGRAPH Asia 2010 Exhibition provides the networking opportunities you need for another year of evolving ideas and successful business.





Learn about X3D, the only open-standard (ISO), royalty-free file format and run-time player specification for virtual environments. See the latest real-world interactive 3D applications built with X3D, which delivers powerful, optimized real-time 3D graphics for the web, easy-to-create interactive 3D content, robust interoperability, and native 3D support in HTML5. Find out how you can build and protect your content investment in this ever-changing, competitive market.


Inside the studios of one of the world's most technologically advanced automotive and racing companies, a lump of clay, an organic expression of a next-generation car, sits near a supercomputer that visualizes the car in a virtual environment that is almost impossible to distinguish from real life. The studio walls are covered with images and examples of natural objects and technological tours de force: animals, jet fighters, exotic new materials, and even the world’s fastest sailfish splashed in chrome with McLaren’s rocket-red accents. The studio's designers are constantly debating and exploring new technologies, new materials, and new concepts.
Technology has changed the way designers work. Traditional design phases, from hand-drawn sketches to clay modeling, are now realized through digital tablets and digital three-dimensional modeling. Designers must combine the best traditional and new approaches to define the optimal balance between digital and analog solutions.
Automobiles have always been a reflection of technology draped in art, and "super cars" are the ultimate expression of automotive art and technology. Designers who create the next generation of super cars must have the right tools to articulate their vision. At the same time, they must always ask: What’s next?
With its roots in Formula One racing, the design studio at McLaren is where "what’s next" is already here. In his featured speech, McLaren Design Director Frank Stephenson explains how his world-class design team looks beyond what's next to challenge the future.





In conventional offline CG, shading computes the shadowing and surface appearance of the target being rendered, with few constraints on computation time. Real-time rendering is different: responsiveness is paramount, and shading commands are processed directly by GPU hardware. Programmable shader modules handle vertex and pixel processing, which relieves the CPU but leaves less flexibility and less time for calculation. Solving these problems requires algorithms, built on standardized data structures, that favor visual effect over physical accuracy.





Web 3D technology breaks the barriers of time and space, and displays everything that exists in the real world in 3D space just as it is. This talk explores current web 3D technology and how it can be applied to a business model.





This talk introduces CODE3’s Nobound 3D, a versatile engine for cross-platform development environments, especially smart phones. It supports both PC Windows and smart-phone platforms including iPhone and Android.





Lucasfilm Singapore is a fully integrated digital studio designed to produce content for feature and television animation, visual effects, and games. The studio houses the Singapore divisions of Industrial Light & Magic, LucasArts, and Lucasfilm Animation, which allows them to share technology, processes, and people, with interwoven, interdependent teams. This also allows for seamless hand-off of work between the US and Singapore, creating a highly productive production pipeline with a complementary workflow. And because tools and techniques are always changing, Lucasfilm Singapore provides ongoing production training for all artistic staff, conducted by leading supervisors and mentors from ILM and LucasArts.
This talk covers how Lucasfilm Singapore's converged studio model leverages productivity. Productions not only share infrastructure and resources, but also people, skill sets, and assets among multiple platforms. Artists work in a truly integrated studio – with the ability to contribute to a game, a television show, or visual effects for feature films – all under one roof.


Lucasfilm has some of the greatest IP in the industry. Who wouldn’t want to work on a Star Wars game? In this Special Session, Kent Byers, Associate Producer for LucasArts Singapore, summarizes Star Wars: The Force Unleashed II, the highly anticipated sequel to the fastest-selling Star Wars game ever created, which has sold more than seven million copies worldwide. In Star Wars: The Force Unleashed, the world was introduced to Darth Vader’s now-fugitive apprentice, Starkiller, the unlikely hero who ignited the flames of rebellion in a galaxy so desperately in need of a champion.







There are strong indications that the future of interactive graphics involves a more flexible programming model than today's OpenGL/Direct3D pipelines. That means that graphics developers will need a basic understanding of how to combine emerging parallel-programming techniques with the traditional interactive rendering pipeline.
This course provides an introduction to parallel-programming architectures and environments for interactive graphics, and demonstrates how to combine traditional rendering APIs with advanced parallel computation. It presents several studies of how developers combine traditional rendering API techniques with advanced parallel computation. Each case study includes a live demo and discusses the mix of parallel-programming constructs used, details of the graphics algorithm, and how the rendering pipeline and computation interact to achieve the technical goals. All computation is done in OpenCL. A combination of DirectX and OpenGL is used for the rendering.



Compared with console games, online games have unique characteristics that require more complicated design of software architecture. This course delivers state-of-the-art technologies for making online games from clients to servers. Topics include advanced production secrets in developing the world’s best online games. Attendees learn what to consider in making advanced online games and how to optimize them to meet the increasing performance demands of future online environments. The course is taught by lead programmers and software engineers from Crytek, ETRI, NCSoft, and Intel.





Crowds and groups are a vital element of life, and simulating them in a convincing manner is one of the great challenges in computer graphics and interactive techniques. This course focuses on the problem of efficiently simulating realistic crowd and group behavior for a range of applications, including games and design of spaces. It covers data-driven methods, where the characteristics of crowds are simulated based on real-world data; evaluation and perceptual issues, and creation of behavioral variety; interactive simulation and control of large-scale crowds and traffic for games and other real-time applications; and finally a case study of using crowd simulation for design of spaces in the Disney theme parks.


This introduction to the trash-simulation pipelines for Disney/Pixar's "Toy Story 3" focuses on interactions between bulldozers and trash piles. The simulation pipeline used Pixar's in-house rigid-body tool, Ned, to create a sim model and several custom simulation-rig forces, collision events, procedural motions, etc. A second pipeline generated interactive smoke and dust. Texture-driven particles were used to convert collision data to textures, and Atmos, Pixar's system for creating volumetric effects based on ray marching in RenderMan, provided the volume-rendering information.


What is photomodeling, and what is sequence light setup? In this Special Session, KiBum Kim, Digital Artist for ILM Singapore, explains how he created the virtual set for the movie using real photos from the actual set and how these models were used in lighting sequences. He uses actual shots from the movie to illustrate these two processes and shows how they can be updated and shared for other shots to save time and effort.





Discover how The Bakery enables lighting artists to work interactively on final render-quality images when dealing with complex 3D scenes.
Erwan Maigret, CEO & COO, explains the vision of The Bakery and how it fits in an evolving image industry, where production of 3D sequences needs more than ever to create massive productivity benefits without compromising creativity. Arnauld Lamorlette, CTO & Relight™ Product Manager, summarizes the technology used in Bakery Relight™ and demonstrates how lighting work can be done exclusively inside of the 3D scene. He also explains the benefits of this approach with regard to the latest production constraints (stereoscopy, high definition, and growing complexity).




In this presentation, VFX supervisors from DIGITALIDEA talk about creating VFX shots: how previsualization is used to design and simulate difficult shots; how digital matte painting is used to create virtual backgrounds; how crowd simulation, set extension, and digital environments are created; and how they all come together to bring the director's vision to reality.
Having worked on over 200 feature films,








Translating George Lucas’ vision to the small screen is an incredible task. To say there is no other TV animation like it is an understatement. Every episode is a cinematic work of art. Currently in its third season, "The Clone Wars" has redefined the quality, complexity, and expectations for TV animation. In this session, Jass Mun, Senior FX Artist for Lucasfilm Animation Singapore, talks about the production pipeline and how the production teams, from layout to final compositing, work together to deliver these high-quality episodes on deadline. The talk emphasizes production of effects for the episodes.




A two-hour overview of the best animations, visual effects, and scientific visualizations produced in the last year. The jury assembled this show to represent the must-see works in computer graphics for 2010. The Electronic Theater also includes a few pieces shown by special invitation. On opening night, 16 December, the Electronic Theater begins with presentation of the Computer Animation Festival's Best of Show and Best Technical Awards.

Mingle with the SIGGRAPH Asia 2010 community. Greet old friends, share a toast with colleagues, and meet the thinkers from Asia and around the world who are shaping the future of computer graphics and interactive techniques. Let the Navier-Stokes Band entertain you, and bring your business cards to make new connections for another year of business and collaborations.
SIGGRAPH Asia 2010 provides food. Drinks are available for purchase.
Full Conference attendees receive one reception ticket. Tickets are available for purchase in the Merchandise Store. The cost is ₩ 40,000 per ticket.







Due to advances in digital audio technologies, computers now play a role in most music production and performance. Digital technologies offer unprecedented opportunities for creation and manipulation of sound, but the flexibility of these new technologies implies an often confusing array of choices for musical composers and performers. Some artists are using computers directly to create music, generating an explosion of new musical forms. However, most would agree that the computer is not a musical instrument, in the same sense as traditional instruments, and it is natural to wonder "how to play the computer" using interface technology appropriate for human brains and bodies.
A decade ago, the presenters of this course organized the first workshop on New Interfaces for Musical Expression (NIME), to attempt to answer this question by exploring connections with the better-established field of human-computer interaction. The course summarizes what has been learned at the NIME conferences. It begins with an overview of the theory and practice of new musical interface design and asks: What makes a good musical interface? Are any useful design principles or guidelines available? Then it reviews topics such as mapping from human action to musical output and control intimacy, and presents practical information about the tools for creating musical interfaces, including an overview of sensors and microcontrollers, audio synthesis techniques, and communication protocols such as Open Sound Control (and MIDI). The remainder of the course consists of several specific case studies of the major broad themes of the NIME conference, including augmented and sensor-based instruments, mobile and networked music, and NIME pedagogy.
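Of the communication protocols the course surveys, Open Sound Control is compact enough to illustrate directly. Per the OSC 1.0 specification, a message is a null-padded address pattern, a type-tag string, and big-endian arguments; the address and value below are made up for illustration:

```python
# Encoding a minimal Open Sound Control message (OSC 1.0 layout).
import struct

def osc_string(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, value):
    """Build an OSC message carrying one 32-bit big-endian float ('f')."""
    return osc_string(address) + osc_string(",f") + struct.pack(">f", value)

msg = osc_message("/synth/freq", 440.0)   # hypothetical address
# Every OSC message is a multiple of 4 bytes long.
assert len(msg) % 4 == 0
```

Unlike MIDI's fixed 7-bit controller values, this layout carries full-resolution floats and human-readable addresses, which is much of why NIME practitioners adopted it.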





Khronos standards are fundamental to many of the technologies on display at SIGGRAPH Asia 2010. If you develop multimedia content for games, DCC, CAD, or mobile devices, you should attend Khronos Group’s DevU (Developer University) to learn about the new industry standards for royalty-free multimedia development:
• Applications driving next-generation handset requirements
• Opportunities opened up by innovation and standardization in graphics and mobile gaming
• Technological advances in multimedia handset technology










Stereoscopy has been well known since the early 19th century. The theory is very simple: present a left view of a scene to the left eye, and a right view to the right eye, and the viewer's brain perceives a three-dimensional view. In 2010, stereoscopy is a common experience, in the latest blockbuster movies and in systems designed for viewing 3D at home. Although the theory is simple, realization of a comfortable and convincing stereoscopic experience involves many subtle considerations, both aesthetic and technical.
This course addresses those requirements in animation and real-time applications. It presents a comprehensive overview of the rendering techniques and explores artistic concepts to provide a solid understanding of stereoscopy and how to make it work most effectively for viewers. Topics include the standard stereo projection techniques, how stereo images are perceived by the viewer, creative ways to use stereo, and and how to fix the common issues encountered when stereo is added to a graphics pipeline.
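The "standard stereo projection techniques" mentioned above center on rendering each eye with an off-axis (asymmetric) frustum rather than toeing in two cameras. A minimal sketch of that calculation follows; the parameter names and values (screen half-width, interocular distance, convergence distance) are illustrative assumptions, not material from the course:

```python
def stereo_frustum(half_width, near, convergence, iod, eye):
    """Return (left, right) near-plane frustum bounds for one eye.

    half_width  -- half the width of the screen (convergence) plane
    near        -- near clipping distance
    convergence -- distance from the cameras to the zero-parallax plane
    iod         -- interocular distance (eye separation)
    eye         -- -1 for the left eye, +1 for the right eye
    """
    # Frustum half-width scaled back from the screen plane to the near plane.
    w = half_width * near / convergence
    # The window shifts opposite to the eye's offset, so both frusta
    # converge on the same screen-plane rectangle (no vertical parallax).
    shift = (iod / 2.0) * near / convergence
    return (-w - eye * shift, w - eye * shift)

left_l, left_r = stereo_frustum(0.5, 0.1, 2.0, 0.065, eye=-1)
right_l, right_r = stereo_frustum(0.5, 0.1, 2.0, 0.065, eye=+1)
```

These bounds would feed a glFrustum-style projection, one per eye; the naive "toed-in" alternative introduces vertical parallax, one of the comfort issues such courses typically address.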



This course examines the purposes and value of stories and storytelling as they relate to the classical story structure. The role of genres and cliches is examined as they relate to storytelling. The top five storytelling techniques are presented in detail, with many examples. The 10 principles of animation are revisited from a storytelling point of view. Classical story design and structure are presented as a simple and powerful model that travels across cultures and generations. And the five parts of a classical story are analyzed in detail. The course concludes with a summary of characterization and revelation of deep character. Short exercises complement the lectures and dialog.





Autodesk showcases its latest Digital Entertainment Creation (DEC) technology and how it enables cutting-edge pipelines for games, film, and television production.
Check here [4] for the latest event and speaker information.





Autodesk demonstrates its latest Digital Entertainment Creation (DEC) technology and how it enables cutting-edge pipelines for game, film, and television production.
The demonstrations are open to all SIGGRAPH Asia 2010 attendees.





Hardware and software companies
Production studios
Association pavilions
Educational institutions
Research laboratories
And more
Try the latest systems, talk with the people who developed them, and meet buyers, distributors, and resellers. The SIGGRAPH Asia 2010 Exhibition provides the networking opportunities you need for another year of evolving ideas and successful business.





Ray tracing is a natural fit for the massively parallel processing capabilities of GPUs, and the resulting performance gains are making physically correct techniques feasible for production and simpler techniques truly interactive. This talk surveys a range of GPU ray-tracing options and solutions that are transforming the creative process, and describes how developers can harness GPU power in their applications and even in the cloud.







CoreENT provides simple solutions for facial and body performance capture using a small number of motion-capture devices. This demonstration shows how to use Face Robot and in-house solutions to auto-clean raw data for efficient, high-performance pipelines in CG production.







This total solution uses scene-feature, example-based automatic conversion techniques to convert 2D video to high-quality 3D video and to triple productivity.





Want to work in the bustling city-state of Singapore? Kenneth Chew, Area Director for Contact Singapore's South Korea operations, summarizes Singapore's growing and exciting interactive and digital media industry, which includes a potent mix of local and foreign animation companies and game and visual-effects studios such as Ubisoft, Koei, Electronic Arts, Lucasfilm Animation Singapore, Rainbow Media, Peach Blossom Media, Scrawl Studios, and more. Also, learn why so many skilled specialists from around the world are moving to Singapore for work and how you can join them!
Contact Singapore is a Singapore government agency that aims to attract graduates and professionals to live and work in Singapore, particularly in industry sectors such as interactive and digital media, engineering and R&D, and healthcare.
More information: Contact Singapore's South Korea office [5]



Commercial stereoscopic displays have re-emerged in the consumer market, and film studios routinely produce live-action and animated 3D content for theatrical release. While primarily enabled by the widespread adoption of digital projection, which allows accurate view synchronization, the underlying 3D display technologies have changed little in the last few decades. Theatrical systems rely on stereoscopic display: projecting a left/right image pair separated by various filters in glasses worn by viewers. In contrast, several LCD manufacturers are introducing automultiscopic displays, which allow view-dependent imagery to be perceived without special glasses. 3D display is poised for another resurgence.
This hands-on introduction to 3D display provides attendees with the mathematics, software, and practical details necessary to build their own low-cost stereoscopic displays. An example-driven approach is used throughout. Each new concept is illustrated by a practical 3D display implemented with off-the-shelf parts. First, glasses-bound stereoscopic displays are explained. Detailed plans are provided for attendees to construct their own LCD shutter glasses. Next, unencumbered automultiscopic displays are explained, including step-by-step directions to construct lenticular and parallax-barrier designs using modified LCDs. All the necessary software, including algorithms for rendering and calibration, is provided for each example, so attendees can quickly construct 3D displays for their own educational, amusement, and research purposes.
The course concludes by describing methods for capturing, rendering, and viewing multi-view imagery. Stereoscopic OpenGL support is reviewed, as well as methods for ray-tracing multi-view imagery with POV-Ray. Techniques for capturing "live-action" light fields are also outlined. Finally, recent developments are summarized, and attendees are encouraged to evolve the capabilities of their self-built 3D displays.
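The parallax-barrier designs described above rest on a one-line similar-triangles calculation: for every slit to aim its group of pixels at the same viewing position, a barrier mounted a small gap in front of the pixel plane needs a pitch slightly smaller than n pixel widths. A sketch, with illustrative millimetre values (not from the course materials):

```python
def barrier_pitch(pixel_pitch, view_dist, gap, n_views=2):
    """Pitch of a parallax barrier placed 'gap' in front of the pixels.

    Similar triangles from the viewer through each slit give a pitch
    slightly smaller than n_views pixel widths, so that all slits
    line up toward a viewer at distance 'view_dist'.
    """
    return n_views * pixel_pitch * view_dist / (view_dist + gap)

# Illustrative numbers: 0.25 mm pixels, barrier 3 mm in front of the
# panel, viewer at 600 mm -> barrier pitch just under 0.5 mm.
pitch = barrier_pitch(pixel_pitch=0.25, view_dist=600.0, gap=3.0)
```

The same geometry, with the barrier replaced by a lenticular sheet of matching lens pitch, governs the lenticular designs the course covers.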






Originally a high school art teacher, Jae-Dong Park started his career as a comics artist at the newly founded Hangyeore Sinmun daily newspaper in 1988. From the first issue, Park created humorous illustrations and comics for the paper's illustrations section. His caricatures played a significant part in the movement for democracy in Korea; he attacked the men in power through exaggeration and travesty. After Korea completed its democratization process, he drew for Hangyeore Sinmun until 1996, when he turned to animation with the film "Odoltoki". In 2009, the 10th Korean National Assembly honored him with the Caricaturist of the Year award. He has also directed two animated films and published three books of essays. He is interested in art and technology, and in the movement among Korean artists and engineers to bring the two together. He is particularly interested in CG technology that can create caricatures from photographs with high-level guidance.
In his featured talk, Jae-Dong Park discusses how computer graphics have changed the way of drawing caricatures and cartoons over the last decade. He speaks about his vision of how non-photorealistic rendering, an active research subject in CG, can enrich human life, shares his passion for art, and explains his own approach to drawing caricatures that exaggerate without offending the subjects of his drawings.







This introduction to investment and relocation incentives for companies in Gwangju is available by invitation only. Interested attendees should inquire at the registration desk at the entrance of the session room.





Industrial Light & Magic (ILM) is one of the world’s leading visual effects houses. It has been awarded 15 Academy Awards for Best Visual Effects, 23 Scientific and Technical Achievement Awards, and the National Medal of Technology, awarded by the President of the United States.
In 2006, George Lucas’ world-renowned visual effects studio expanded its operations beyond its San Francisco headquarters, opening a studio in Singapore. The Singapore division of ILM is a full-service effects studio responsible for delivering complete shots and sequences on top-tier Hollywood projects. From asset development and digital-matte painting, through animation, creature, and effects simulation, to lighting and compositing, ILM Singapore delivers cutting-edge work for feature films. At the Singapore studio, ILM veterans work alongside top-level talent from around the world.
ILM Singapore leverages the expertise and technological know-how developed over 35 years at Industrial Light & Magic. Knowledge and culture flow freely between the two locations via daily video conferences, visits from San Francisco-based supervisors, and integration of key ILM Singapore artists with the teams in California to receive training in the latest techniques. Tools and workflow at ILM Singapore are identical to those in San Francisco, allowing for seamless collaboration on all aspects of projects.
This talk explores how ILM has been able to remain at the forefront of the film industry by focusing on artistry and pioneering technology. From developing proprietary software to investing in employee training, ILM continues to set the gold standard for production management while producing awe-inspiring imagery for audiences worldwide.


Computer graphics models of artists' studios allow scholars to address a number of vexing problems in the history and interpretation of art, particularly those related to old masters' working methods. Recent studies have used this modeling technique to analyze how some of the world's most famous paintings were created:
• A model of Jan van Eyck's Portrait of Arnolfini and his wife exposed perspective inaccuracies that are difficult to discern by the unaided eye. The model also confirmed that the focal length of the convex mirror in the painting was much shorter than a putative projection mirror for this work. Both results led to rejection of the claim that this work was executed by tracing optical projections.
• The lighting direction estimated in a model of Jan Vermeer's Girl With a Pearl Earring agreed closely with directions estimated by five other sources within the painting, objectively revealing this artist's mastery in rendering the effects of light and supporting the claim that this work was executed with a live model, not from the artist's imagination.
• A model of Georges de la Tour's Christ in the Carpenter's Studio confirmed that the light in the tableau was at the candle, rather than "in place of the other figure", and this, in turn, led to a rejection of the claim that this painting was executed using optical projections.
• A model of Parmigianino's Self Portrait in a Convex Mirror revealed that the warped image is consistent with the artist faithfully recording the image of a rectilinear room distorted by the mirror, rather than inventing a fictive space. The model also reveals that the work may be hung too high in its gallery home.
This talk, profusely illustrated with art works and movies of computer graphics models of artists' studios, explains the steps by which computer experts and art scholars build such computer graphics models, the types of assumptions that are brought to bear, and the strengths and limitations of the overall methodology. The talk concludes with a number of unsolved problems in the history of art that seem amenable to these computer graphics methods.



This course explains the various components of DirectX 11, such as DirectCompute, tessellation, multi-threaded command buffers, dynamic shader linking, new texture-compression formats, read-only depth, and conservative oDepth.



This course presents a publisher's perspective on development opportunities related to the pursuit of new intellectual property, work for hire on existing portfolio needs, and possible acquisition of external, independent developers.
The course explores four key questions:
1. What are publishers looking for these days?
2. How does a small, independent developer get traction when risks are high and investment is scarce?
3. How does the publisher's perspective change based on the genre and/or the region for product opportunities?
4. What is the best way to pitch a company and/or a product-development opportunity?
The course ends with a question-and-answer session.
Presented in cooperation with the Academy of Interactive Arts & Sciences.
[6]












A two-hour overview of the best animations, visual effects, and scientific visualizations produced in the last year. The jury assembled this show to represent the must-see works in computer graphics for 2010. The Electronic Theater also includes a few pieces shown by special invitation. On opening night, 16 December, the Electronic Theater begins with presentation of the Computer Animation Festival's Best of Show and Best Technical Awards.





OpenGL is the most widely available library for creating interactive computer graphics applications across all of the major computer operating systems. Its uses range from scientific visualization to computer-aided design, interactive gaming, and entertainment, and with each new version, its capabilities expose the most up-to-date features of modern graphics hardware.
This course provides an accelerated introduction to programming OpenGL, with an emphasis on the most modern methods for using the library. OpenGL has undergone numerous updates in recent years, which have fundamentally changed how programmers interact with the application programming interface (API) and the skills required to be an effective OpenGL programmer. The most notable of those changes was the introduction of shader-based rendering, which entered the API many years ago but has recently expanded to subsume almost all functionality in OpenGL. The course summarizes each of the shader stages in OpenGL version 4.0 and methods for specifying data to be used in rendering with OpenGL.
While the annual SIGGRAPH conferences have presented numerous courses on OpenGL over the years, recent revisions to the API, culminating in OpenGL version 4.0, have provided a wealth of new functionality and features that enable creation of ever-richer content. This course builds from demonstrating the use of the most fundamental shader-based OpenGL pipeline to introducing all of the latest shader stages.











This course introduces how flesh meshes, cloth meshes, and rigid bodies have been used in simulations for VFX movies such as "Avatar", "Terminator", "Indiana Jones", and "Transformers". The main focus is on how CG artists plan the simulation setup to achieve efficient dynamic effects.
The first section of the course describes the process of setting up a realistic tree simulation within a very heavy and complex 3D model for "Avatar". Attendees learn how to plan and prepare a simulation setup from complex, heavy geometry. The second section explains the destruction simulation that was developed for "Indiana Jones 4" and "Transformers 2". Attendees learn how to use fracture techniques to break geometry and how to use the fragment-clustering system to control rigid simulation. In the third section, attendees learn how various dynamic constraints and deformable rigid simulation were used in car-crash simulations and the multiple-layer simulation setup on "Terminator 4".



Blizzard Entertainment has been creating some of the world's most engaging entertainment software for nearly two decades. Each Blizzard game is built on a rich universe that provides the backdrop for some of the most epic, elaborate stories found in the games medium. The Blizzard Film Department has played a key role in telling these stories ever since the premiere of their cinematics for WarCraft II in 1995. The team hit its stride with Diablo II in 2000, showcasing the tale of the Wanderer. Shortly after that, it won awards for the cinematics featured in Warcraft III: Reign of Chaos and its expansion Warcraft III: The Frozen Throne, and went on to create the epic opening piece for the world's most popular subscription-based, massively multiplayer online role-playing game, World of Warcraft. Subsequently, the Blizzard Film Department created cinematics for each of the game's expansions (World of Warcraft: The Burning Crusade, World of Warcraft: Wrath of the Lich King, and the upcoming World of Warcraft: Cataclysm). This year marks the department's return to the StarCraft universe with the release of the greatly anticipated StarCraft II: Wings of Liberty.
In this course, members of the Blizzard Film Department share some of the secrets of creating their on-screen magic for StarCraft II: Wings of Liberty, including the creative and technical processes, from story to completion. They also shed some light on sequences rendered in the game's engine, a technique that allows for creation of much more content than would be practical using only traditional rendering methods. Course topics include: 3D modeling, animation, rigging, simulations, lighting, compositing, rendering, and artistic and technical direction.



This full-day course is an intensive, hands-on practical introduction to Pixar's RenderMan.
In the first part of the course, attendees gain sufficient familiarity with RenderMan's scene-description protocol to edit and manipulate RIB files, which allow modeling and animation applications to communicate with RenderMan. The second part of the course introduces the RenderMan Shading Language (RSL). The goal of this section is to provide an overview of the creative potential of the shading language so attendees can continue their own independent exploration of the shading language. During the final part of the course, attendees are introduced to the Python scripting language and Pixar's PRMan.






Significant recent advances in collision detection and other proximity queries have made it quite challenging for beginners to keep up with all the published papers and existing rendering systems. This half-day course explains current algorithms and their efficient implementation for interactive games, movies, physically based simulations, robotics, and CAD/CAM.
The course summarizes how recent developments achieve interactive performance for large-scale rigid, articulated, deforming, and fracturing models in various applications. Then it explores how various proximity computations can be optimized in recent GPUs and integrated with efficient GPU-based simulation methods. This overview of existing techniques and practical solutions helps attendees understand how the field will change in the coming years.
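Underlying most of the proximity queries the course surveys is a cheap broad-phase test on bounding volumes; only pairs whose bounds overlap proceed to exact (narrow-phase) tests. As a minimal sketch, not code from the course, the axis-aligned bounding-box overlap query looks like this:

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    """True if two axis-aligned bounding boxes overlap.

    Two boxes are disjoint exactly when some axis separates them,
    so they overlap when their intervals intersect on every axis.
    Works in any dimension; boxes touching at a face count as overlapping.
    """
    return all(lo_a <= hi_b and lo_b <= hi_a
               for lo_a, hi_a, lo_b, hi_b in zip(min_a, max_a, min_b, max_b))

# Unit cube vs. a cube shifted halfway into it, and vs. a distant cube.
hit = aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2))
miss = aabb_overlap((0, 0, 0), (1, 1, 1), (3, 3, 3), (4, 4, 4))
```

Hierarchies of such boxes (BVHs) extend this constant-time pairwise test to the large-scale rigid, articulated, and deforming models discussed above.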














Khronos Group open standards are the foundation of all the products on display at the Khronos Pavilion at SIGGRAPH Asia 2010. If you develop multimedia content, you should attend this 90-minute session to learn from the world's leading experts on computer graphics and interactive techniques. It is a highly condensed overview of how royalty-free Khronos APIs let you tap into cutting-edge graphics and media processing on platforms ranging from high-end workstations to mobile phones.
Topics include:
• Applications driving next-generation handset requirements
• Opportunities opened up by innovation and standardization in graphics and mobile gaming
• Technological advances in multimedia handset technology





Learn how NVIDIA's range of professional software makes it easy for application developers to maximize the power of professional GPUs to deliver the ultimate experiences and capabilities within their products and for their customers. Also learn how many of these offerings benefit end users directly as they push the boundaries of what's possible in graphics and visual computing.


Hyung-tae Kim is a Korean artist who is well known for his outstanding contributions to the videogame industry. He gained his reputation for art creation in games such as the Magna Carta series and the later installments of the War of Genesis series (War of Genesis: Tempest, War of Genesis III, and War of Genesis III: Part 2). He began his videogame career working in the field of background music, then shifted to art and design, and since then, he has branched out into the comic book industry, serving as a guest cover artist and contributing to the Udon Street Fighter series. He is currently an art director at NCsoft, where he is working on a multiplayer online role-playing game called Blade & Soul, an Asian martial arts fantasy influenced mainly by Korean culture.
In this presentation, Hyung-tae Kim shares his experience in production of the highly anticipated massively multiplayer online role-playing game (MMORPG) Blade & Soul and reveals some of the early stages, such as concept art and trailers, of the journey to give life to one of today's most popular games. Focusing on the creation of new characters in the game, he also discusses how art direction and game design influence the way the game is played and how it is marketed to a worldwide audience.





Shader programming has become an indispensable part of graphics application development. But learning to program shaders is difficult, and it is especially difficult to understand the effect of shader parameters. This course presents shader development from an interactive standpoint. It discusses vertex, fragment, and geometry shaders, shader-specific theory, and the GLSL 4.0 shading language, then reviews the graphics pipeline, including features rarely taught in beginning courses but exposed in shaders, and shows how shaders fit into the pipeline operations. Each class of shaders is introduced with glman examples that explain details of the concept.
The OpenGL 4.0 and GLSL 4.0 specifications were recently released. While most attendees will not yet have compatible hardware on their laptops, the course explains what is new in this release and the extra functions it can perform. Attendees receive free software so they can follow along and interact with examples.



Temporal coherence, the correlation of content between adjacent rendered frames, exists across a wide range of scenes and motion types in practical real-time rendering. Taking advantage of temporal coherence can save redundant computation and significantly improve the performance of many rendering tasks with only a marginal decrease in quality. This not only allows incorporation of more computationally intensive shading effects in existing applications, but it also offers exciting opportunities to extend high-end graphics applications to reach lower-spec, consumer-level hardware.
This course introduces the concepts of temporal coherence and provides the practical and theoretical working knowledge required to exploit temporal coherence in a variety of shading tasks. It begins with an introduction to the general idea of temporal coherence in rendering and an overview of recent developments in the field. Then it focuses on a key technique: the reverse reprojection cache, which is the foundation of many applications. The course explains a number of extensions of the basic algorithm for assisting in multi-pass shading effects, shader antialiasing, shadow casting, and global-illumination effects. It also introduces several more general coherence topics beyond pixel reuse, including visibility-culling optimization and object-space global-illumination approximations. For all the major techniques and applications covered, implementation and practical issues involved in development are addressed in detail.
The course emphasizes "know how" and the guidelines related to algorithm choices. After attending the course, participants are encouraged to find and utilize temporal coherence in their own applications and rapidly adapt existing algorithms to meet their requirements.
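The reverse reprojection cache at the heart of the course can be illustrated schematically: each pixel looks up where its surface point fell in the previous frame and reuses that cached shading on a hit, recomputing only on a miss. The following is a deliberately simplified Python sketch; a real implementation runs in the pixel shader and validates hits with a depth comparison, and the `motion` and `shade` callables here are hypothetical stand-ins:

```python
def reproject_frame(prev_cache, motion, width, height, shade):
    """Shade a frame, reusing previous-frame results where possible.

    prev_cache -- maps previous-frame pixel coords to shading results
    motion     -- motion(x, y) gives the pixel's previous-frame position
    shade      -- expensive shading fallback, invoked only on cache misses
    Returns the shaded frame and the number of cache hits.
    """
    frame, hits = {}, 0
    for y in range(height):
        for x in range(width):
            px, py = motion(x, y)                    # reverse reprojection
            key = (int(round(px)), int(round(py)))
            if key in prev_cache:                    # hit: reuse old shading
                frame[(x, y)] = prev_cache[key]
                hits += 1
            else:                                    # miss: recompute
                frame[(x, y)] = shade(x, y)
    return frame, hits

# Static camera (identity motion): after the first frame, every pixel hits.
cache = {(x, y): x + y for x in range(4) for y in range(4)}
frame, hits = reproject_frame(cache, lambda x, y: (x, y), 4, 4,
                              shade=lambda x, y: -1)
```

The quality/performance trade-off the course discusses comes from how aggressively stale entries are reused before being refreshed.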



Processing [7] is a Java-based programming language and environment used as a teaching, prototyping, and production tool for creative media. Many interactive installations, generative art pieces, data visualizations, and physical computing systems have been implemented with Processing. Android [8], on the other hand, is an operating system and software stack for mobile devices, which is steadily gaining popularity all over the world.
Processing is widely used by designers, artists, students, etc., but until recently it has been limited to the PC and Mac platforms. In early 2010, the port of Processing for the Android OS was initiated. Although it is still at an early stage of development, significant portions of its functionality, such as 2D and 3D rendering, are already usable. The combination of Processing's simple API and extensible architecture with the unique characteristics of mobile devices and their ubiquitous presence will certainly open up new creative uses of such devices.
This course helps attendees get started with Processing development on Android devices. It introduces the main characteristics of the Android platform, explains how to run simple graphic applications and upload them to the phones, and summarizes the possibilities offered by more advanced features (OpenGL, GPS). Audience comments, questions, suggestions, etc. are encouraged.














Links:
[1] http://www.siggraph.org/asia2010/for_attendees/schedule/all_sessions/printable?vdt=Session_Scheduler%7Cpage_1
[2] http://www.siggraph.org/asia2010/for_attendees/schedule/all_sessions/printable?vdt=Session_Scheduler%7Cpage_2
[3] http://www.siggraph.org/s2009/sessions/courses/details/?type=course&id=55
[4] http://www.autodesk.co.kr/adsk/servlet/event/search?siteID=1169528&id=15983123
[5] http://www.contactsingapore.sg/contact/south_korea/
[6] http://www.interactive.org
[7] http://www.processing.org
[8] http://www.android.com