SIGGRAPH '21: ACM SIGGRAPH 2021 Courses


Advances in neural rendering

Loss functions for neural rendering (Jun-Yan Zhu)

Contact and friction simulation for computer graphics

Efficient simulation of contact is of interest for numerous physics-based animation applications: virtual reality training, video games, rapid digital prototyping, and robotics simulation all involve contact modeling and simulation. However, despite its extensive use in modern computer graphics, contact simulation remains one of the most challenging problems in physics-based animation.

This course covers fundamental topics on contact modeling and simulation for computer graphics. Specifically, we provide mathematical details about formulating contact as a complementarity problem in rigid-body and soft-body animation. We briefly cover several approaches for contact generation using discrete collision detection. Then, we present a range of numerical techniques for solving the associated linear and nonlinear complementarity problems (LCPs and NCPs). We discuss the advantages and disadvantages of each technique from a practical standpoint, along with best practices for implementation. Finally, we conclude the course with several advanced topics, such as anisotropic friction modeling and proximal operators. Programming examples are provided on the course website to accompany the course notes.
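To give a flavour of the numerical techniques involved, here is a minimal sketch (not taken from the course materials) of projected Gauss-Seidel, a widely used iterative solver for the boxed LCPs that arise from contact and friction constraints. The system matrix `A`, bias `b`, and bounds `lo`/`hi` are assumed to come from an existing contact formulation.

```python
import numpy as np

def projected_gauss_seidel(A, b, lo, hi, iters=100):
    """Solve the boxed LCP: find lo <= x <= hi complementary to w = A x + b."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        for i in range(len(b)):
            # Residual of row i, excluding the diagonal contribution of x[i].
            r = b[i] + A[i] @ x - A[i, i] * x[i]
            # Unconstrained row solve, then projection onto the bounds.
            x[i] = np.clip(-r / A[i, i], lo[i], hi[i])
    return x
```

For a frictionless normal-contact row the bounds are lo = 0 and hi = infinity; Coulomb friction rows are typically bounded by plus or minus the friction coefficient times the associated normal impulse.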

Introduction to WebXR: SIGGRAPH 2021 course

WebXR seamlessly combines XR technologies (VR, AR, and MR) with the flexibility and accessibility of the browser, helping you develop versatile and creative XR solutions quickly. In this course, you'll learn definitions, terminology, and implementation details. The course works through the basic concepts using uncomplicated working examples, because we believe a strong understanding of the underlying principles is essential for leveraging the full potential of WebXR. The purpose of this course is to introduce you to WebXR from the ground up.

As you'll learn in this course, WebXR is a powerful interface that pulls together elements from extensible technologies (VR, AR, and MR), enabling you to connect hardware and software rapidly and seamlessly. WebXR's versatility lets you develop expressive prototypes quickly and freely. While WebXR offers unprecedented means to immerse and interact with your audiences, it also helps you manage the ever-changing and diverse XR landscape of evolving hardware and standards, partly because WebXR builds on the strength and control of the browser.

WebXR is a fusion of JavaScript, WebGL, and other libraries that lets you connect movement and visuals in unique ways (e.g., to convey expressive emotions or tell stories through action and movement). Through WebXR, you'll be able to nurture your creativity and explore designs that work in interesting and novel ways. Once you've mastered the basics of WebXR, you'll have opportunities to invent new interactive interfaces for your applications, instead of following traditional designs that may not fit the style or approach of your system.

Another characteristic of WebXR is its deliberate use of JavaScript, which is simple, light, and flexible. This lets you easily write and prototype ideas, such as stories with emotional content that embrace the user's surroundings, or training simulations that immerse users in realistic situations. Overall, WebXR will allow you to support specialized hardware effortlessly (letting your browser manage compatibility issues), while helping you develop applications with coordinated, powerful visual and emotional experiences.

OpenVDB

An interactive introduction to WebGL

Geometry processing with intrinsic triangulations

Future reality lab: inventing the XR future

Color management with OpenColorIO V2

Gaze-aware displays and interaction

Being able to detect and employ gaze enhances digital displays. Research on gaze-contingent or gaze-aware display devices dates back two decades, but only now can gaze truly be employed for fast, low-latency gaze-based interaction and for optimizing computer graphics rendering, as in foveated rendering. Moreover, Virtual Reality (VR) is becoming ubiquitous: the widespread availability of consumer-grade VR Head-Mounted Displays (HMDs) has transformed VR into a commodity available for everyday use, and VR applications are now abundantly designed for recreation, work, and communication.

However, interacting with VR setups requires new User Interface (UI) paradigms, since traditional 2D UIs are designed to be viewed from a single static vantage point, e.g., the computer screen. In addition, traditional input devices such as the keyboard and mouse are hard to manipulate while wearing an HMD. Recently, companies such as HTC have announced embedded eye tracking in their headsets, so novel, immersive 3D UI paradigms embedded in a VR setup can now be controlled via eye gaze. Gaze-based interaction is intuitive and natural to users: tasks can be performed directly in the 3D spatial context, without searching for an out-of-view keyboard or mouse. Furthermore, people with physical disabilities, who already depend on technology for recreation and basic communication, can benefit even more from VR.

This course presents timely, relevant information on how gaze-contingent displays in general, including the recent VR eye-tracking capabilities, can leverage eye-tracking data to optimize the user experience and to alleviate usability issues surrounding intuitive interaction. Research topics covered include saliency models, gaze prediction, gaze tracking, gaze direction, foveated rendering, stereo grading, and gaze-based 3D User Interfaces (UIs) on any gaze-aware display technology.

Practical machine learning for rendering: from research to deployment

• Give insights into closing the gap between a research neural model and its deployment

• Understand the challenges in development, training, deployment, and iteration of neural networks for rendering

• Show practical use cases, tools, and networks to start your path toward neural rendering in production software

User interfaces for high-dimensional design problems: from theories to implementations

We introduce techniques that enable users to effectively manipulate and explore high-dimensional spaces. Such tasks emerge from applications involving many parameters or high-dimensional latent variables, with examples ranging from image editing, material editing, and shape design to sound generation, arising both in general design problems and in those driven by generative models from machine learning. Mathematically, such a task can be formulated as an optimization problem in which the user wants to maximize a subjective measure of goodness over candidates generated by a model with far too many control parameters for the user to handle directly. The solution is to bypass direct manipulation in high-dimensional spaces by extracting much lower-dimensional, meaningful subspaces, which in turn give rise to tractable user interfaces. We introduce two core techniques for extracting such subspaces: one based on Bayesian optimization and the other on differential subspace search. Bayesian optimization is useful when only point sampling of the relation between the goodness and the control parameters is possible (the user can treat the system as a black box), while differential subspace search is useful when differential information about the given model is also available. We introduce both the theoretical and implementation aspects of these techniques, and show applications to image editing, material editing, shape design, and sound generation.
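To make the Bayesian optimization setting concrete, here is a minimal, hypothetical sketch of the black-box loop described above: a Gaussian process surrogate is fit to the user's ratings of the candidates shown so far, and expected improvement selects the next candidate to present. The one-dimensional parameter and the scripted `rating` function are stand-ins for the real high-dimensional design space and the human in the loop.

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on 1D inputs.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Posterior mean and standard deviation of a zero-mean GP at the points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def rating(x):
    # Scripted stand-in for the user's subjective score of candidate x.
    return -(x - 0.37) ** 2

X = np.array([0.1, 0.9])            # initial candidates shown to the "user"
y = rating(X)
grid = np.linspace(0.0, 1.0, 200)   # candidate parameter values
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    z = (mu - y.max()) / sigma
    ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]    # most promising candidate to present next
    X, y = np.append(X, x_next), np.append(y, rating(x_next))

print("best parameter found:", X[np.argmax(y)])
```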

Color matters for digital media & visualization

Comparison of the RGB and CMYK color models. This image depicts the differences between how colors appear on a color display (RGB) and how they reproduce in the CMYK print process.
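For illustration, the textbook relationship between the two models is sketched below; note that this naive formula ignores the ICC colour management that real print workflows rely on, so actual printed output will differ.

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion; r, g, b in [0, 1]."""
    k = 1.0 - max(r, g, b)          # black generation
    if k >= 1.0:
        return 0.0, 0.0, 0.0, 1.0   # pure black
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k
```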

Least squares for programmers: with color plates

This course explains least-squares optimization, nowadays a simple and well-mastered technique. We show how this simple method can solve a large number of problems that would be difficult to approach in any other way. The course provides a simple, understandable, yet powerful tool that most coders can use, in contrast to other techniques sharing this paradigm (numerical simulation and deep learning), which are more complex to master.
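As a minimal example of the method in practice, here is a straight-line fit using NumPy's least-squares solver; the design matrix holds one row per observation and one column per unknown.

```python
import numpy as np

# Noisy samples of y = 2x + 1.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.05 * np.random.randn(50)

# Design matrix: each row is [x_i, 1], the coefficients of the unknowns (a, b).
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"recovered a = {a:.2f}, b = {b:.2f}")  # close to 2.0 and 1.0
```

The same pattern, building a design matrix and solving in the least-squares sense, extends to much larger problems such as mesh deformation and image warping.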

Spectral imaging in production: course notes, SIGGRAPH 2021

Unlike path tracing, spectral rendering is still not widely used in production, although it has been around for more than thirty years. Traditionally associated with spectral effects such as dispersion and interference, spectral rendering, and more importantly the use of spectral data in general, is predominantly a way to guarantee colour fidelity. Moreover, with the rise of path tracing, the growing use of LED lights on set, and the recent shift to LED walls in virtual production, it becomes increasingly evident that the traditional way of treating colour and light as RGB triplets is insufficient where colour accuracy is required.
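To illustrate the relationship between spectra and RGB triplets, here is a minimal sketch of reducing a spectral power distribution to linear sRGB. The CIE 1931 colour matching functions (`xbar`, `ybar`, `zbar`) are assumed to be sampled at the same wavelengths as the spectrum `spd`; absolute scaling, chromatic adaptation, and proper gamut mapping are left out for brevity.

```python
import numpy as np

def spd_to_linear_srgb(spd, xbar, ybar, zbar, dlam):
    # Integrate the spectrum against the CIE 1931 colour matching functions.
    X = np.sum(spd * xbar) * dlam
    Y = np.sum(spd * ybar) * dlam
    Z = np.sum(spd * zbar) * dlam
    # Standard CIE XYZ (D65 white) -> linear sRGB matrix.
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = M @ np.array([X, Y, Z])
    return np.clip(rgb, 0.0, None)  # crude gamut handling: clip negatives
```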

The purpose of the course is two-fold. First and foremost, we want to share what we learned on our way towards a spectral image pipeline. We will talk about the unique opportunities and challenges that the use of spectral data brings to a modern production pipeline, and our motivation for building a spectral renderer. Since spectral data influences every step of the pipeline, the course goes beyond rendering aspects: we will discuss data acquisition and shed some light on how to tackle the special problem of LED lights in production, as well as their practical usage.

The second aim of the course is to build a community. We want to see the topic evolve over the next few years and connect people to shape the future together until spectral imaging is as ubiquitous as path tracing is in production.

Visual analytics for large networks: theory, art and practice

New techniques in interactive character animation

The application of deep learning to physics-based character animation and to cinematic controllers is changing how we should think about interactive character animation in video games and virtual reality. We will review the benefits and drawbacks of the techniques used and the implementations available to get started.

TensorFlow graphics: differentiable computer graphics in tensorflow

Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content.

An introduction to deep learning on meshes

The irrefutable success of deep learning on images and text has sparked significant interest in its applicability to 3D geometric data. Instead of covering a breadth of alternative geometric representations (e.g., implicit functions, volumetric, and point clouds), this course aims to take a deep dive into the discrete mesh representation, the most popular representation for shapes in computer graphics.

In this course, we provide several ways of covering aspects of deep learning on meshes for the virtual audience. Our course videos outline the key challenges of applying deep learning to the irregular mesh representation and the key ideas for combining machine learning with classic geometry processing to build better geometric learning algorithms. These course notes complement the videos by providing a brief history from image convolutions to mesh convolutions and an extended discussion of important works on the subject. Lastly, our course website (https://anintroductiontodeeplearningonmeshes.github.io/) offers a toy dataset, mesh MNIST, and hands-on exercises covering the actual implementation details. Our goal is to provide a permanent virtual resource combining theoretical and practical aspects, making it easy to incorporate deep learning into geometry processing research.
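As a taste of how image convolution generalizes to irregular meshes, below is a toy graph-convolution-style layer (one common flavour, not any specific architecture covered in the course): each vertex mixes its own features with the average of its one-ring neighbours' features, just as a pixel mixes with its fixed grid neighbourhood in an image convolution.

```python
import numpy as np

def vertex_adjacency(faces, n_verts):
    # Dense vertex adjacency built from triangle edges (toy-scale meshes only).
    A = np.zeros((n_verts, n_verts))
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            A[a, b] = A[b, a] = 1.0
    return A

def mesh_conv_layer(X, A, W_self, W_nbr):
    # Combine each vertex's features with the mean of its neighbours', then ReLU.
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    return np.maximum(X @ W_self + ((A @ X) / deg) @ W_nbr, 0.0)

faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]  # a tetrahedron
X = np.random.randn(4, 8)                             # 8 features per vertex
A = vertex_adjacency(faces, 4)
H = mesh_conv_layer(X, A, np.random.randn(8, 16), np.random.randn(8, 16))
print(H.shape)  # (4, 16)
```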