SIGGRAPH '19: ACM SIGGRAPH 2019 Courses


A deep dive into Universal Scene Description and Hydra

An introduction to physics-based animation

Physics-based animation has emerged as a core area of computer graphics finding widespread application in the film and video game industries as well as in areas such as virtual surgery, virtual reality, and training simulations. This course introduces students and practitioners to fundamental concepts in physics-based animation, placing an emphasis on breadth of coverage and providing a foundation for pursuing more advanced topics and current research in the area. The course focuses on imparting practical knowledge and intuitive understanding rather than providing detailed derivations of the underlying mathematics. The course is suitable for someone with no background in physics-based animation---the only prerequisites are basic calculus, linear algebra, and introductory physics.

We begin with a simple yet complete example of a mass-spring system, introducing the principles behind physics-based animation: mathematical modeling and numerical integration. From there, we systematically present the mathematical models commonly used in physics-based animation, beginning with Newton's laws of motion and conservation of mass, momentum, and energy. We then describe the underlying physical and mathematical models for animating rigid bodies, soft bodies, and fluids. Next, we describe how these continuous models are discretized in space and time, covering Lagrangian and Eulerian formulations, spatial discretizations and interpolation, and explicit and implicit time integration. In the final section, we discuss commonly used constraint formulations and solution methods.
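To make the first step concrete, here is a minimal sketch (our illustration, not code from the course notes) of the mass-spring idea: a single damped spring modeled with Newton's second law and stepped with semi-implicit (symplectic) Euler integration.

import numpy as np

# Model: m * a = -k * x - c * v + m * g  (Newton's second law for one mass)
m, k, c = 1.0, 50.0, 0.5            # mass, spring stiffness, damping coefficient
g = np.array([0.0, -9.81, 0.0])     # gravitational acceleration
x = np.array([1.0, 0.0, 0.0])       # initial position (spring anchored at the origin, rest length 0)
v = np.zeros(3)                     # initial velocity
dt = 1.0 / 60.0                     # time step: one frame at 60 Hz

for step in range(600):
    f = -k * x - c * v + m * g      # total force on the mass
    a = f / m                       # acceleration
    v = v + dt * a                  # semi-implicit Euler: update velocity first,
    x = x + dt * v                  # then use the new velocity to update position

Swapping the last two updates (using the old velocity to move the position) gives explicit Euler, which is less stable for stiff springs; trade-offs of this kind between explicit and implicit integration are covered later in the course.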

Advances in real-time rendering in games, part 1

Advances in real-time rendering in games, part 2

Are we done with ray tracing?

Real-time graphics has come a long way since the "brute-force approach" of ray tracing was classified "ridiculously expensive" in 1974. Since then, the promise "Ray tracing is the future and ever will be" has driven the development of ray tracing algorithms and hardware, and resulted in a major revolution in image synthesis. This course will take a look at how far out the future is, review the state of the art, and identify the current challenges for research. Not surprisingly, it looks like we are not done with ray tracing yet.

Capture4VR: from VR photography to VR video

Cinematic scientific visualization: the art of communicating science

The Advanced Visualization Lab (AVL) is part of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. The AVL is led by Professor Donna Cox, who coined the term "Renaissance Team" in the belief that bringing together specialists of diverse backgrounds creates a team that is greater than the sum of its parts, and the members of the AVL reflect that belief in our interdisciplinarity. We specialize in creating high-quality cinematic scientific visualizations of supercomputer simulations for public outreach.

Color fundamentals for digital content creation, visualization & exploration

Computational fabrication

Manufacturing Today

CreativeAI: deep learning for graphics

In computer graphics, many traditional problems are now better handled by deep-learning based data-driven methods. In applications that operate on regular 2D domains, like image processing and computational photography, deep networks are state-of-the-art, often beating dedicated hand-crafted methods by significant margins. More recently, other domains such as geometry processing, animation, video processing, and physical simulations have benefited from deep learning methods as well, often requiring application-specific learning architectures. The massive volume of research that has emerged in just a few years is often difficult to grasp for researchers new to this area. This course gives an organized overview of core theory, practice, and graphics-related applications of deep learning.

Deep learning: a crash course

Concepts, terminology, structures, no math, no code. Free open-source libraries do the hard work. My background: consultant, writer, director, etc.

Differentiable graphics with TensorFlow 2.0

Geometric computing with Python

This course is a group endeavor by Sebastian Koch, Teseo Schneider, Francis Williams, and Daniele Panozzo. Please contact us if you have questions or comments. For troubleshooting, please post an issue on GitHub. We are grateful to the authors of all open source C++ libraries we are using, in particular libigl, tetwild, polyfem, pybind11, and Jupyter.

The course will mainly use

• igl (Section 2)

• polyfem (Section 3)

• ABC Dataset CAD Processing (Section 4)

• TetWild

• 3D Viewer

We provide documentation for the first three libraries in these course notes and refer to https://geometryprocessing.github.io/geometric-computing-python/ for a complete and live version.
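As a small taste of the workflow, the sketch below (our illustration, not from the course notes; the mesh path is a placeholder) loads a triangle mesh with the igl Python bindings and computes a few standard per-vertex quantities.

import igl

# Load a triangle mesh: v is an #V x 3 array of vertex positions,
# f is an #F x 3 array of triangle indices ("bunny.obj" is a placeholder path).
v, f = igl.read_triangle_mesh("bunny.obj")

# Discrete Gaussian curvature per vertex (angle defect).
k = igl.gaussian_curvature(v, f)

# Cotangent Laplacian and Voronoi mass matrix, common building blocks
# for the geometry processing tasks covered in Section 2.
l = igl.cotmatrix(v, f)
m = igl.massmatrix(v, f, igl.MASSMATRIX_TYPE_VORONOI)

print("vertices:", v.shape[0], "faces:", f.shape[0])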

Geometric algebra and computer graphics

What is the best representation for doing Euclidean geometry on computers? This question is a fundamental one for practitioners of computer graphics, as well as those working in computer vision, 3D games, virtual reality, robotics, CAD, animation, geometric processing, discrete geometry, and related fields. While available programming languages change and develop with reassuring regularity, the underlying geometric representations tend to be based on vector and linear algebra and analytic geometry (VLAAG for short), a framework that has remained virtually unchanged for 100 years. These notes introduce projective geometric algebra (PGA) as a modern alternative for doing Euclidean geometry and show how it compares to VLAAG, both conceptually and practically. In the next section we develop a basis for this comparison by drafting a wishlist for doing Euclidean geometry.
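As a brief, hedged preview of what the comparison looks like in practice (the conventions below follow the common plane-based formulation of 3D PGA, where planes, lines, and points are elements of grade 1, 2, and 3 respectively):

\begin{aligned}
\ell &= a \wedge b && \text{(the line of intersection of planes } a \text{ and } b\text{)}\\
P &= a \wedge b \wedge c && \text{(the point of intersection of three planes)}\\
\ell &= P \vee Q && \text{(the line joining points } P \text{ and } Q\text{)}\\
X &\mapsto M X \widetilde{M},\quad M = e^{-\theta \ell / 2} && \text{(rotation by } \theta \text{ about a normalized line } \ell\text{)}
\end{aligned}

In VLAAG each of these operations typically requires its own formula and special-case handling; in PGA they are all instances of a small set of products.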

Introduction to real-time ray tracing

Whenever you start a renderer, you need a way to see an image. The most straightforward way is to write it to a file. The catch is that there are many formats, and many of them are complex. I always start with a plain-text PPM file; Wikipedia has a nice description of the format.
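As a minimal hedged sketch (our illustration in Python; the original course code is written in C++), here is how a simple gradient image can be written as a plain-text PPM file:

# Write a simple color gradient as a plain-text PPM (P3) image.
width, height = 256, 256
with open("image.ppm", "w") as out:
    out.write(f"P3\n{width} {height}\n255\n")     # header: format, dimensions, max channel value
    for j in range(height):                        # rows, top to bottom
        for i in range(width):                     # columns, left to right
            r = int(255.999 * i / (width - 1))     # red ramps from left to right
            g = int(255.999 * j / (height - 1))    # green ramps from top to bottom
            b = 64                                 # constant blue
            out.write(f"{r} {g} {b}\n")

The resulting file opens in most image viewers, which makes it a convenient way to inspect a renderer's first pictures.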

Lighting design for stylized animation

The purpose of this course is to study how to construct a consistent lighting style for animation, as well as the techniques used to achieve that style. The How to Train Your Dragon franchise, and specifically How to Train Your Dragon: The Hidden World, is used as a case study. None of the design principles presented here are specific to How to Train Your Dragon; they are basic design principles found in many films, both live action and animated. It is the choice of which design principles to use, and how we combine and prioritize them, that creates a unique style. The goal is to show that our choices are not random, but carefully constructed to serve a story and the vision of how it needs to be told. Using a single film, which is part of a larger franchise, for this study provides two advantages. First, the key to consistent style is making consistent choices. By using a single film we are able to dive deeper into the choices and their motivations and build a stronger case for demonstrating consistency. Second, having access to before-and-after examples provides much clearer visual illustrations of the principles being employed and why they are chosen, finishing with the mechanics of how they are actually achieved.

My favorite samples

Light transport simulation is ruled by the radiance equation, which is an integral equation. Photorealistic image synthesis consists of computing functionals of the solution of this integral equation, which involves integration, too. However, in meaningful settings, none of these integrals can be computed analytically; in fact, all of them need to be approximated using Monte Carlo and quasi-Monte Carlo methods. Generating uniformly distributed points in the unit hypercube is at the core of all of these methods. The course teaches the underlying algorithms and elaborates on the characteristics of different classes of uniformly distributed points to help select the points that are most efficient for a given task.
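As one concrete example of such points, here is a hedged Python sketch (our illustration, not code from the course) of the radical inverse, the building block of the classic Halton low-discrepancy sequence:

def radical_inverse(i, base):
    # Mirror the digits of the index i, written in the given base, about the radix point.
    inv, digit_value = 0.0, 1.0 / base
    while i > 0:
        i, d = divmod(i, base)
        inv += d * digit_value
        digit_value /= base
    return inv

def halton(i, bases=(2, 3)):
    # The i-th point of the Halton sequence: one radical inverse per dimension,
    # using pairwise coprime bases.
    return tuple(radical_inverse(i, b) for b in bases)

# The first eight two-dimensional Halton points, uniformly distributed in [0, 1)^2.
points = [halton(i) for i in range(8)]

Different classes of points (lattices, digital nets, blue-noise sets) trade off discrepancy, stratification, and implementation cost, which is exactly the kind of comparison the course addresses.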

On hybrid Lagrangian-Eulerian simulation methods: practical notes and high-performance aspects

Open problems in real-time rendering

OpenVDB

This course will cover the compact volume data structure and the various tools available in the open source library OpenVDB. Since its release in 2012 it has set an industry standard and has been used for visual effects in over 150 feature films. Last year this open source project was the first to be adopted by the new Academy Software Foundation (ASWF) and the Linux Foundation. This means that the project is now changing governance and, more importantly, has opened up to numerous external contributions. We will describe this new governance, specifically what it means for would-be contributors, and of course what has already been added to the project since the last time this course was offered at SIGGRAPH in 2017.
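For readers who want to experiment before the course, the sketch below (a hedged illustration using the optional pyopenvdb Python bindings; it is not part of the course material) builds a sparse grid from a dense NumPy array and writes it to a .vdb file:

import numpy as np
import pyopenvdb as vdb

# A small dense density field: a solid ball of value 1.0 inside a 64^3 box.
dense = np.zeros((64, 64, 64), dtype=np.float32)
z, y, x = np.ogrid[:64, :64, :64]
dense[(x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2] = 1.0

# Copy the dense array into a sparse VDB grid; voxels that match the
# background value (0.0) are not stored explicitly.
grid = vdb.FloatGrid()
grid.copyFromArray(dense, ijk=(0, 0, 0))
grid.name = "density"

# Write the grid to disk for inspection with the OpenVDB command-line tools.
vdb.write("ball.vdb", grids=[grid])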

Path guiding in production

Path guiding is a family of adaptive variance reduction techniques in physically based rendering. It includes methods for sampling both direct and indirect illumination, on surfaces and in volumes, as well as for sampling optimal path lengths and making splitting decisions. Since the adoption of path tracing as a de facto standard in the VFX industry several years ago, there has been increased interest in producing high-quality images with a low number of Monte Carlo samples per pixel. Path guiding, which has received attention in the research community in the past few years, has proven to be useful for this task and has therefore been adopted by Weta Digital. Recently, it has also been implemented in Walt Disney Animation Studios' Hyperion and Pixar's RenderMan. The goal of this course is to share our practical experience with path guiding in production, to provide a self-contained overview of recently published techniques, and to discuss their pros and cons. We also take the audience through the theoretical background of various path guiding methods, which are mostly based on machine learning (used to adapt sampling distributions based on observed samples) and zero-variance random walk theory (used as a framework for combining different sampling decisions in an optimal way). At the end of the course we discuss open problems and invite researchers to further develop path guiding in their future work.
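To illustrate the basic idea of adapting a sampling distribution to observed samples, here is a deliberately simplified, hedged one-dimensional sketch (our illustration, not any particular production implementation): contributions are accumulated in a histogram, which then acts as a piecewise-constant guiding PDF, defensively mixed with uniform sampling.

import random

NUM_BINS = 16
counts = [0.0] * NUM_BINS      # accumulated contributions per bin (the "learned" distribution)
ALPHA = 0.5                    # defensive mixture weight for plain uniform sampling

def record(x, contribution):
    # Learning step: bin the observed contribution at position x in [0, 1).
    counts[min(int(x * NUM_BINS), NUM_BINS - 1)] += contribution

def guided_pdf(x):
    # Density of the mixture of uniform sampling and the histogram-based PDF.
    total = sum(counts)
    hist = counts[min(int(x * NUM_BINS), NUM_BINS - 1)] / total * NUM_BINS if total > 0 else 1.0
    return ALPHA * 1.0 + (1.0 - ALPHA) * hist

def sample():
    # Sampling step: with probability ALPHA fall back to uniform sampling,
    # otherwise pick a bin proportionally to its accumulated contribution.
    if sum(counts) == 0.0 or random.random() < ALPHA:
        x = random.random()
    else:
        b = random.choices(range(NUM_BINS), weights=counts)[0]
        x = (b + random.random()) / NUM_BINS
    return x, guided_pdf(x)

Production path guiding replaces the histogram with richer directional and spatial structures and handles many more sampling decisions, but this learn-then-importance-sample loop is the basic pattern.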

Path tracing in production: part 1: modern path tracing

In the past few years the movie industry has switched over from stochastic rasterisation approaches to physically based light transport simulation: path tracing in production has become ubiquitous across studios. The new approach came with undisputed advantages such as consistent lighting, progressive previews, and fresh code bases. But abandoning 30 years of experience also meant some hard cuts affecting all stages: lighting, look development, geometric modelling, scene description formats, and the way we schedule for multi-threading, to name just a few. This means a rich set of people is involved, and as an expert in any one of these aspects it is easy to lose track of the big picture.

This is part I of a full-day course, and it focuses on the necessary background knowledge. In this part, we would like to provide context for everybody interested in understanding the challenges behind writing renderers intended for movie production work. In particular, we will give new students and academic researchers an insight into movie production requirements. We will also lay a solid mathematical foundation for developing new ideas to solve problems in this context.

To further illustrate, part II of the course will focus on materials (acquisition and production requirements) and showcase practical efforts by prominent professionals in the field, pointing out unexpected challenges encountered in new shows and unsolved problems as well as room for future work wherever appropriate.

Path tracing in production: part 2: making movies

The last few years have seen a decisive move of the movie-making industry towards rendering using physically based methods, mostly implemented in terms of path tracing. While path tracing reached most VFX houses and animation studios at a time when a physically based approach to rendering, and especially material modelling, was already firmly established, the new tools brought with them a whole new balance, and many new workflows have evolved to find a new equilibrium. Letting go of instincts based on hard-learned lessons from a previous time has been challenging for some, and many different takes on a practical deployment of the new technologies have emerged. While the language and toolkit available to technical directors keep closing the gap between lighting in the real world and the light transport simulations run in software, an understanding of the limitations of the simulation models and a good intuition for the trade-offs and approximations at play are of fundamental importance for making efficient use of the available resources. In this course, the novel workflows that emerged during these transitions at a number of large facilities are presented to a wide audience including technical directors, artists, and researchers.

This is the second part of a two-part course. While the first part focuses on background and implementation, this second part focuses on material acquisition and modeling, GPU rendering, and pipeline evolution.

Perception of virtual characters

This course will introduce students, researchers, and digital artists to recent results in perceptual research on virtual characters. It covers how both the technical and the artistic aspects that constitute the appearance of a virtual character influence human perception. We will report results of studies that addressed the influence of low-level cues, such as facial proportions, shading, or level of detail, and higher-level cues, such as behavior or artistic stylization. We will place emphasis on aspects that are encountered during character development and animation, and on achieving consistency between the visuals and the storytelling. The insights that we present in this course will serve as an additional toolset to anticipate the effect of certain design decisions and to create more convincing characters, especially when budgets or time are limited.

Practical course on computing derivatives in code

Derivatives occur frequently in computer graphics and arise in many different contexts. Gradients, and often Hessians, of objective functions are required for efficient optimization. Gradients of potential energy are used to compute forces. Constitutive models are frequently formulated from an energy density, which must be differentiated to compute stress. Hessians of potential energy or energy density are needed for implicit integration. As the methods used in computer graphics become more accurate and sophisticated, the complexity of the functions that must be differentiated also increases. The purpose of this course is to show that it is practical to compute derivatives even for functions that may seem impossibly complex. This course provides practical strategies and techniques for planning, computing, testing, debugging, and optimizing routines for computing first and second derivatives of real-world routines. This course will also introduce and explore automatic differentiation, which encompasses a variety of techniques for obtaining derivatives automatically. The goal of this course is not to introduce the concept of derivatives, how to use them, or even how to calculate them per se. This is not intended to be a calculus course; we will assume that our audience is familiar with multivariable calculus. Instead, the emphasis is on implementing derivatives of complicated computational procedures in computer programs and actually getting them to work.
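One of the most useful habits in that spirit is checking a hand-derived gradient against finite differences. The hedged sketch below (our illustration, not code from the course) shows the pattern on a small example function:

import numpy as np

def f(x):
    # Example objective: a small nonlinear function of a 2D vector x.
    return np.sin(x[0]) * x[1] ** 2 + np.exp(x[0] * x[1])

def grad_f(x):
    # Hand-derived gradient of f, i.e. the routine we want to verify.
    return np.array([
        np.cos(x[0]) * x[1] ** 2 + x[1] * np.exp(x[0] * x[1]),
        2.0 * np.sin(x[0]) * x[1] + x[0] * np.exp(x[0] * x[1]),
    ])

def check_gradient(f, grad_f, x, h=1e-5):
    # Compare each analytic partial derivative against a central difference.
    g = grad_f(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        fd = (f(x + e) - f(x - e)) / (2.0 * h)
        print(f"d/dx[{i}]: analytic {g[i]:+.8f}, finite difference {fd:+.8f}")

check_gradient(f, grad_f, np.array([0.3, 0.7]))

The same check, applied term by term to a complicated energy or constitutive model, is usually the fastest way to localize a derivative bug.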

RTX accelerated ray tracing with OptiX