SIGGRAPH '23: ACM SIGGRAPH 2023 Courses

A Gentle Introduction to ReSTIR Path Reuse in Real-Time

In recent years, reservoir-based spatiotemporal importance resampling (ReSTIR) algorithms appeared seemingly out of nowhere to take parts of the real-time rendering community by storm, with sample reuse speeding direct lighting from millions of dynamic lights [1], diffuse multi-bounce lighting [2], participating media [3], and even complex global illumination paths [4]. Highly optimized variants (e.g., [5]) can give a 100× efficiency improvement over traditional ray- and path-tracing methods, which is key to achieving 30 or 60 Hz frame rates. In production engines, tracing even one ray or path per pixel may only be feasible on the highest-end systems, so maximizing image quality per sample is vital.

ReSTIR builds on the math in Talbot et al.'s [6] resampled importance sampling (RIS), which previously was not widely used or taught, leaving many practitioners without key intuitions and theoretical grounding. A firm grounding is vital, as seemingly obvious "optimizations" arising during ReSTIR engine integration can silently introduce conditional probabilities and dependencies that, if ignored, add uncontrollable bias to the results.
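To make that prerequisite concrete, below is a minimal sketch of Talbot-style RIS with a streaming (weighted reservoir) update, written in TypeScript. The function names and callback interfaces are illustrative, not taken from the course materials.

```typescript
// A minimal RIS sketch: draw m candidates from an easy-to-sample source pdf,
// keep one proportionally to targetPdf/sourcePdf using a streaming reservoir.

interface Reservoir<T> {
  sample: T | null; // the surviving candidate y
  wSum: number;     // running sum of resampling weights
  m: number;        // number of candidates seen so far
}

function updateReservoir<T>(r: Reservoir<T>, x: T, w: number, rand: () => number): void {
  r.wSum += w;
  r.m += 1;
  if (r.wSum > 0 && rand() < w / r.wSum) r.sample = x; // keep x with prob. w / wSum
}

// Returns the chosen sample y and its unbiased contribution weight
// W = wSum / (m * targetPdf(y)), which acts like 1/pdf(y) in an estimator.
function ris<T>(
  m: number,
  sampleSource: () => T,
  sourcePdf: (x: T) => number,
  targetPdf: (x: T) => number,
  rand: () => number = Math.random,
): { y: T | null; W: number } {
  const r: Reservoir<T> = { sample: null, wSum: 0, m: 0 };
  for (let i = 0; i < m; i++) {
    const x = sampleSource();
    updateReservoir(r, x, targetPdf(x) / sourcePdf(x), rand);
  }
  const y = r.sample;
  const W = y === null || targetPdf(y) === 0 ? 0 : r.wSum / (r.m * targetPdf(y));
  return { y, W };
}
```

ReSTIR's spatial and temporal reuse amounts to merging such reservoirs across pixels and frames, which is precisely where the conditional probabilities mentioned above can creep in.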

In this course, we plan to:

1. Provide concrete motivation and intuition for why ReSTIR works, where it applies, what assumptions it makes, and the limitations of today's theory and implementations;

2. Gently develop the theory, targeting attendees with basic Monte Carlo sampling experience but without prior knowledge of resampling algorithms (e.g., Talbot et al. [6]);

3. Give explicit algorithmic samples and pseudocode, pointing out easily encountered pitfalls when implementing ReSTIR;

4. Discuss actual game integrations, highlighting the gotchas, challenges, and corner cases we encountered along the way, as well as ReSTIR's practical benefits.

A Whirlwind Introduction to Computer Graphics

An Introduction to Quantum Computing

Quantum computing is a radically new approach to creating algorithms and programs. By exploiting the unusual behavior of quantum objects, this new technology invites us to re-imagine the computer graphics methods we know and love in revolutionary new ways. This course presents a math-free introduction to quantum computing.

Building a Real-Time System on GPUs for Simulation and Rendering of Realistic 3D Liquid in Video Games

Modern video games employ a variety of sophisticated algorithms to produce groundbreaking 3D renderings of water, pushing the visual boundaries and interactive experience of rich virtual environments. However, simulating and rendering a large number of water particles is very time consuming, and the huge computational cost makes real-time frame rates very hard to achieve for simulations like those found in feature films and offline products [Flip Fluids 2022] or tools (e.g., Houdini and Blender). That is why most water visual effects in modern games are either simulated as 2D shallow water, in which the simulation is computed on a 2D grid and projected onto a heightfield, as in [Fluid Flux 2022], or baked at a preprocessing stage, which does not allow the player to dynamically interact with the liquid at runtime; as a result, much of the fun of interactivity is lost.
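For contrast with the full 3D approach the course develops, here is a minimal sketch (not from the course) of the kind of 2D height-field update described above, in TypeScript: waves propagate on a grid of column heights, which is then projected onto geometry.

```typescript
// Classic interactive height-field water: each column has a height h and a
// vertical velocity v; columns are pulled toward the average of their four
// neighbors, giving cheap 2D wave propagation on an n-by-n grid.
function stepHeightfield(
  h: Float32Array, v: Float32Array, n: number,
  c = 0.25,       // wave-propagation strength (stability requires c <= 0.5)
  damping = 0.99, // energy loss per step so ripples die out
): void {
  for (let j = 1; j < n - 1; j++) {
    for (let i = 1; i < n - 1; i++) {
      const k = j * n + i;
      const avg = (h[k - 1] + h[k + 1] + h[k - n] + h[k + n]) * 0.25;
      v[k] = (v[k] + (avg - h[k]) * c) * damping;
    }
  }
  for (let k = 0; k < n * n; k++) h[k] += v[k];
}
```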

This course will discuss state-of-the-art, production-proven techniques for building a real-time GPU system for simulation and rendering of realistic 3D liquids with millions of particles. It also discusses how to integrate the system into modern game engines (like UE5), with showcases of real applications in gaming environments.

Computational Interferometric Imaging

Deep Learning for Physics Simulation

Numerical simulation of physical systems has become an increasingly important scientific tool supporting various research fields. Despite its remarkable success, simulating intricate physical systems typically requires advanced domain-specific knowledge, meticulous implementation, and enormous computational resources. With the surge of deep learning in the last decade, there has been growing interest in the machine-learning and graphics communities in addressing these limitations of numerical simulation with deep learning. This course provides a gentle introduction to this topic for audiences interested in exploring this trend but with little to modest machine-learning or physics-simulation background. We begin with a brief overview of the numerical simulation framework on which we ground our discussion of deep-learning methods. Next, the course provides a possible classification of several hybrid simulation strategies based on the roles of the learning and physics insights incorporated. We then review the implications of such deep-learning strategies and discuss some practical considerations in combining deep learning and physics simulation. Finally, we briefly mention several advanced machine-learning techniques for further exploration. The full course information can be found at https://people.iiis.tsinghua.edu.cn/~taodu/dl4sim/.
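As one concrete illustration of the hybrid strategies such a classification covers (all names here are hypothetical, not from the course), consider a coarse classical solver step followed by a learned correction, so the network only has to model the solver's error rather than the full dynamics:

```typescript
// Sketch of a "solver-in-the-loop"-style hybrid step: advance the state with
// a cheap classical integrator, then add a trained network's correction.
type State = Float32Array;

function hybridStep(
  state: State,
  coarseSolver: (s: State) => State,      // classical numerical integrator
  learnedCorrection: (s: State) => State, // trained network (e.g., a CNN/MLP)
): State {
  const predicted = coarseSolver(state);       // physics does the heavy lifting
  const delta = learnedCorrection(predicted);  // learning fixes the residual
  const next = new Float32Array(predicted.length);
  for (let i = 0; i < next.length; i++) next[i] = predicted[i] + delta[i];
  return next;
}
```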

SIGGRAPH 2023 Course on Diffusion Models

Diffusion models have been successfully used in various applications such as text-to-image generation, 3D asset generation, controllable image editing, video generation, natural language generation, audio synthesis, and motion generation. The rate of progress on diffusion models is astonishing. In 2022 alone, diffusion models powered many large-scale text-to-image foundation models, such as DALL-E 2 [Ramesh et al. 2022], Imagen [Saharia et al. 2022], Stable Diffusion [Rombach et al. 2022], and eDiff-I [Balaji et al. 2022]; video generation models such as Imagen Video [Ho et al. 2022] and Make-A-Video [Singer et al. 2022]; and 3D asset generation models such as Magic3D [Lin et al. 2022] and DreamFusion [Poole et al. 2022]. This course covers the advances in diffusion models over the last few years and is tailored to the computer graphics community. We will first cover the fundamental machine learning and deep learning techniques relevant to diffusion models. Next, we will present state-of-the-art techniques for applying diffusion models to high-fidelity image synthesis, controllable image generation, compositional representation learning, and 3D asset generation. Finally, we will conclude with a discussion of future applications of this technology, societal impact, and open research problems. After the course, attendees will have learned the basics of diffusion models and how such models can be applied to tasks such as image generation, image editing, and 3D asset generation.
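For reference, the standard denoising-diffusion formulation these systems build on (a DDPM-style statement of the general idea, not specific to this course): a forward process gradually replaces data $x_0$ with Gaussian noise,

$$ q(x_t \mid x_0) = \mathcal{N}\big(x_t;\ \sqrt{\bar\alpha_t}\,x_0,\ (1-\bar\alpha_t)\mathbf{I}\big), \qquad \bar\alpha_t = \prod_{s=1}^{t} \alpha_s, \quad \alpha_s = 1 - \beta_s, $$

and sampling runs the learned reverse step from $t = T$ down to $t = 1$, using a network $\epsilon_\theta$ trained to predict the added noise:

$$ x_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\, \epsilon_\theta(x_t, t) \right) + \sigma_t z, \qquad z \sim \mathcal{N}(\mathbf{0}, \mathbf{I}). $$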

Exterior Calculus in Graphics: Course Notes for a SIGGRAPH 2023 Course

The demand for more advanced multivariable calculus has rapidly increased in areas of computer graphics research such as physical simulation, geometry synthesis, and differentiable rendering. Researchers in computer graphics often have to turn to references outside of graphics research to study identities such as the Reynolds transport theorem or the geometric relationship between stress and strain tensors. This course presents a comprehensive introduction to exterior calculus, which covers many of these advanced topics in a geometrically intuitive manner. The course targets anyone who knows undergraduate-level multivariable calculus and linear algebra and assumes no further prerequisites. In contrast to existing references, which mainly serve the pure math or engineering communities, we use timely and relevant graphics examples to illustrate the theory of exterior calculus. We also provide accessible explanations of several advanced topics, including continuum mechanics, fluid dynamics, and geometric optimization. The course is organized into two main sections: a lecture on the core exterior calculus notions and identities with short examples of graphics applications, and a series of mini-lectures on graphics topics using exterior calculus.
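As a taste of the subject, the identity at the heart of exterior calculus is the generalized Stokes theorem,

$$ \int_{M} d\omega = \int_{\partial M} \omega, $$

where $\omega$ is a differential $(k-1)$-form on a $k$-dimensional manifold $M$ with boundary $\partial M$; choosing $\omega$ appropriately in $\mathbb{R}^3$ recovers the classical gradient, curl, and divergence theorems as special cases.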

Machine Learning & Neural Networks

Neural Fields for Visual Computing: SIGGRAPH 2023 Course

Neural fields, popularized by NeRFs (neural radiance fields), seem to be everywhere in the popular press (e.g., Corridor Crew) for applications such as shape and image synthesis and human avatars. But, beyond research papers and fancy demos, what benefits might they bring to the broad SIGGRAPH community (artists, game developers, and graphics engineers) through their inherent properties?

Neural fields let us solve computer graphics problems by representing physical properties of scenes or objects across space and time using coordinate-based neural networks. Their key properties as continuous and compressed representations of shape and appearance are especially useful in tasks that reconstruct scenes from real-world images for content creation. These properties are so compelling that new graphics hardware architectures are being proposed to accelerate their use, and the acronym NeRF has now become generic. In sum, neural fields represent a wider inflection point within computer graphics, and people beyond researchers should know why this is.
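Mechanically, a coordinate-based network is easy to state. Below is a minimal sketch (hypothetical, with trained weights omitted, in TypeScript) of evaluating a neural field: a small MLP, usually fed a positional encoding of the input coordinate, maps a point in space to a quantity such as density, color, or signed distance, so the scene is stored in the network's weights.

```typescript
// Sinusoidal positional encoding: lifting a low-dimensional coordinate into
// many frequencies helps small MLPs represent fine spatial detail.
function positionalEncoding(p: number[], freqs = 4): number[] {
  const out = [...p];
  for (let f = 0; f < freqs; f++) {
    for (const x of p) {
      out.push(Math.sin(2 ** f * Math.PI * x), Math.cos(2 ** f * Math.PI * x));
    }
  }
  return out;
}

interface Layer { W: number[][]; b: number[] } // weights come from training

// One dense layer, with ReLU on hidden layers only.
function dense(x: number[], { W, b }: Layer, relu: boolean): number[] {
  return W.map((row, i) => {
    const y = row.reduce((acc, wij, j) => acc + wij * x[j], b[i]);
    return relu ? Math.max(0, y) : y;
  });
}

// field(p) -> e.g., the signed distance (or density/color) at 3D point p.
function field(p: number[], layers: Layer[]): number[] {
  let h = positionalEncoding(p);
  layers.forEach((l, i) => { h = dense(h, l, i < layers.length - 1); });
  return h;
}
```

Because the representation is continuous, it can be queried at any resolution and differentiated with respect to the input coordinate, which is part of what makes neural fields convenient for reconstruction tasks.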

Over half a day, this course aims to provide an overview of neural fields techniques for visual computing, an understanding of the mathematical and computational properties that determine their practical uses, and examples of how we can use that understanding to solve many kinds of problems. We will identify the common components of neural field methods: their representations, their forward maps as differentiable renderers, the neural network architectures, their ability to generalize to different scenes and objects, and their ability to manipulate representations.

The course features an invited industry speaker (Alex Yu of Luma AI) who will share how neural fields are used in practice, providing the audience with insights into how the latest developments can make practical tools for media production.

OpenVDB

Path tracing in Production: The Path of Water

Physically-based light transport simulation has become a widely established standard for generating images in the movie industry. It promises various important practical advantages such as robustness, lighting consistency, progressive rendering, and scalability. Through careful scene modelling, it allows highly realistic and compelling digital versions of natural phenomena to be rendered very faithfully. The previous Path Tracing in Production courses have documented some of the evolution and challenges along the journey of adopting this technology, yet even modern production path tracers remain prone to costly rendering times in various classes of scenes, of which water shots are among the most notoriously demanding.

While in past years this series covered a wide range of different topics within one course, this year we took the unusual step of focusing on just one: the water-related challenges we encountered during our work on Avatar: The Way of Water. Despite its seemingly simple nature, water causes a very multifaceted range of issues: specular surfaces produce spiky and sparse radiance distributions at various scales and in different forms, such as underwater caustics, godrays, fast-moving highlights, and complex indirect illumination on FX elements such as splashes, droplets, and aeration bubbles. The purpose of this course is to share knowledge and experience of the current state of the technology, and to stimulate an active exchange in the academic and industrial research community that will advance the field on some of these challenging industrial benchmark problems.

We will first give an overview of the nature of these singularities and their practical implications, and then dive deeper into the appearance and material aspects of water and the objects it interacts with. In the remaining sections, the course will focus on some specific aspects in more technical detail, providing both a solid mathematical background and practical strategies. Furthermore, we discuss some of the remaining unsolved problems that will hopefully inspire future research.

Polarization-Based Visual Computing

Real-Time Ray-Tracing with Vulkan for the Impatient

Rockets, Robots, and AI: Lessons Learned from Science-Fiction Movies and TV for HCI/UX

This tutorial presents examples from notable science-fiction films and videos that incorporate human-computer interaction (HCI) and user-experience (UX) design, and shows what lessons can be learned from them. The course begins with the advent of movies in the early 1900s (e.g., Méliès' "A Trip to the Moon," later referenced in the 2011 movie "Hugo") and concludes with the latest sci-fi movies and videos. Originally, many science-fiction movies, taking their cue from pulp science fiction, focused on rocket ships and interplanetary travel. Later, the scope of the stories broadened and deepened to cover future consumer products, psychological and social issues, and new technologies such as exoskeletons, robots, and artificial intelligence. Once a rarefied genre and primarily a product of Hollywood (with notable films from Germany, the United Kingdom, and Japan), these films and videos now occupy a primary place in modern international popular media, originating in China, India, South Korea, and Japan, as well as North America and Europe.

Shader Writing in Open Shading Language

State of the Art in Telepresence (Part 1)

Telepresence: the use of virtual reality technology, especially for remote control of machinery or for apparent participation in distant events.

Opportunities for AI-Mediated 3D Telepresence

The Vulkan Computer Graphics API

USD in Production

Web Programming Using the WebGPU API

Today's web-based programming environments have become more multifaceted, accomplishing tasks that go well beyond "browsing" web pages. Developing efficient web-based programs for such a wide array of applications poses a number of challenges to the programming community. Applications exhibit a range of workload behaviors, from control intensive (e.g., searching, sorting, and parsing) to data intensive (e.g., image processing, simulation and modeling, and data mining). Web-based applications can also be compute intensive (e.g., iterative methods, numerical methods, and financial modeling), where the overall throughput of the application depends heavily on the computational efficiency of the underlying hardware. Of course, no single architecture is best for running all classes of workloads, and most applications possess a mix of workload characteristics. For instance, control-intensive applications tend to run faster on the CPU, whereas data-intensive applications tend to run faster on massively parallel architectures (like the GPU), where the same operation is applied to many data items concurrently.

To support these various workload classes without hindering browser-based applications, a new generation of APIs had to be developed, opening the door for developers to access the power of modern hardware. One example is the WebGPU API, which exposes the capabilities of GPU hardware to the Web.

This course is intended to help you get started with the WebGPU API while understanding both how and why it works, so you can create your own solutions. It is designed to teach the new WebGPU API for graphics and compute techniques without assuming any prior knowledge; all you need is some JavaScript experience and, preferably, an understanding of basic trigonometry. Whether you are new to graphics and compute development or an old pro, everyone has to start somewhere, and that generally means starting with the basics, which is the focus of this course. You will learn through simple, easy-to-follow hands-on exercises: multiple task-based activities and discussions that complement and build upon one another.
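As a first taste of the API (a minimal sketch, not taken from the course materials; it assumes TypeScript with the standard WebGPU type definitions), the following runs a trivial compute shader that doubles every element of an array:

```typescript
// Minimal WebGPU compute example: upload an array, double it on the GPU,
// and read the result back. Error handling is kept to the bare minimum.
async function runDouble(input: Float32Array): Promise<Float32Array> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU is not available in this browser");
  const device = await adapter.requestDevice();

  // WGSL compute shader: one invocation per array element.
  const module = device.createShaderModule({
    code: `
      @group(0) @binding(0) var<storage, read_write> data: array<f32>;
      @compute @workgroup_size(64)
      fn main(@builtin(global_invocation_id) id: vec3<u32>) {
        if (id.x < arrayLength(&data)) { data[id.x] = data[id.x] * 2.0; }
      }`,
  });

  // Storage buffer initialized with the input data.
  const buffer = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    mappedAtCreation: true,
  });
  new Float32Array(buffer.getMappedRange()).set(input);
  buffer.unmap();

  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module, entryPoint: "main" },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer } }],
  });

  // Staging buffer for reading results back on the CPU.
  const readback = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  });

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(input.length / 64));
  pass.end();
  encoder.copyBufferToBuffer(buffer, 0, readback, 0, input.byteLength);
  device.queue.submit([encoder.finish()]);

  await readback.mapAsync(GPUMapMode.READ);
  const result = new Float32Array(readback.getMappedRange().slice(0));
  readback.unmap();
  return result;
}
```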

What We Talk About, When We Talk About Story: SIGGRAPH 2023 Course Notes

Today's digital media is defined not only by the latest and greatest technical innovations but also by how original story solutions adapt to these new technological changes. No longer are story and narrative solutions the exclusive responsibility of directors and writers; increasingly they fall on the shoulders of technical directors, software engineers, animators, VFX creators, and game/XR designers, whose work is essential in making "the content" come to life.

Understanding story is particularly useful when communicating with screenwriters, directors, and producers. This course answers the question, "What is story?" (and you don't even have to take a course in screenwriting). Knowing the basics of story enables the crew to become collaborators with the producer and director: as a director talks about their story goals, the crew will know what specific story benchmarks they are trying to meet. This goes beyond a story being merely "a sequence of events (acts)" to a "dramatic" story that builds from setup through resolution.

Having an understanding of story structure allows one to understand a story's elements in context (i.e., theme, character, setting, conflict, etc.) and their relationship to classic story structure (i.e., setup, inciting incident, rising action, climax, resolution, etc.). This information is for all those whose work makes the story better, even though their job is not creating the story. The following course notes have been adapted from Story Structure and Development: A Guide for Animators, VFX Artists, Game Designers, and Virtual Reality Creators (CRC Press, a division of Taylor & Francis).