SIGGRAPH '21: ACM SIGGRAPH 2021 Talks


SESSION: Volumes

NanoVDB: A GPU-Friendly and Portable VDB Data Structure For Real-Time Rendering And Simulation

We introduce a sparse volumetric data structure, dubbed NanoVDB, which is portable to both C++11 and C99 as well as most graphics APIs, e.g. CUDA, OpenCL, OpenGL, WebGL, DirectX 12, OptiX, HLSL, and GLSL. As indicated by its name, NanoVDB is a mini-version of the much bigger OpenVDB library, both in terms of functionality and scope. However, NanoVDB offers one major advantage over OpenVDB, namely support for GPUs. As such, it is applicable to both CPU- and GPU-accelerated simulation and rendering of high-resolution sparse volumes. In fact, it has already been adopted for real-time applications by several commercial renderers and digital content creation tools, e.g. Autodesk’s Arnold, Blender, SideFX’s Houdini, and NVIDIA’s Omniverse, to mention a few.

Spatially Adaptive Volume Tools in Bifrost

The level set method offers many advantages over e.g. meshes for modelling and visual effects, but a naïve implementation is both computationally expensive and memory intensive. Narrow band level sets alleviate both issues but are still limited by the finest detail resolved due to uniform resolution along the surface. Voxel structures that are adaptive along the surface improve this [Frisken et al. 2000], but have not seen wide adoption. This is presumably due to difficulties matching the performance of optimized narrow band implementations like industry standard OpenVDB [Museth 2013]. We present the adaptive level set implementation in Bifrost which is competitive with OpenVDB in speed while offering lower memory usage thanks to spatial adaptivity. Our contributions include novel algorithms for adaptive sharpened B-spline interpolation of volumes in general, voxelizing meshes and points into adaptive level set volumes, and meshing adaptive level sets.

Unbiased Emission and Scattering Importance Sampling For Heterogeneous Volumes

We present two new distance-sampling methods for production volume path tracing. We extend the null-collision integral formulation to efficiently gather heterogeneous volumetric emission, achieving higher-quality results. Additionally, we propose a tabulation-based approach to importance sample volumetric in-scattering through a spatial guiding data structure. Our methods improve sampling efficiency in scenarios where low-order heterogeneous scattering dominates, which tend to produce high-variance renderings with existing null-collision methods.
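For readers unfamiliar with the null-collision framework the talk extends, the sketch below shows classic delta (Woodcock) tracking for sampling a free-flight distance in a heterogeneous medium. The sigma_t callable and the majorant sigma_bar are illustrative assumptions; the talk's emission and in-scattering sampling are not reproduced here.

import math
import random

def delta_tracking_distance(x, direction, sigma_t, sigma_bar, t_max):
    """Sample a free-flight distance in a heterogeneous medium via
    delta (Woodcock) tracking.  sigma_t(p) is the spatially varying
    extinction; sigma_bar is a majorant with sigma_t <= sigma_bar
    everywhere.  Returns the sampled distance, or None if the ray
    leaves the medium without a real collision."""
    t = 0.0
    while True:
        # Tentative step drawn from the homogeneous majorant medium.
        t -= math.log(1.0 - random.random()) / sigma_bar
        if t >= t_max:
            return None                      # escaped the medium
        p = [x[i] + t * direction[i] for i in range(3)]
        # Accept the collision as real with probability sigma_t / sigma_bar,
        # otherwise it is a null collision and we keep marching.
        if random.random() < sigma_t(p) / sigma_bar:
            return t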

SESSION: Pipeline 1: USD

Using USD in Pixar’s Digital Backlot

At Pixar we have developed a set of tools to resurrect more than 33,000 previously-unusable set & prop models from our old films to be used as a studio-wide resource for previs, set dressing, cameos, automated testing, shader library development test subjects, short film and streaming projects, VR projects, and research. Based on the extensive use of USD [Disney/Pixar 2016], the Digital Backlot has now been used for four released feature films, three feature films still in production, and six completed short-form projects (with several more under development). Our pipeline also ensures that future asset development will continue to build up this library.

UsdShade in the Pixar Pipeline

The VFX and animation industry is widely adopting Pixar’s USD (Universal Scene Description) format to describe and manipulate scene information throughout production of CG content. A key part of a complete scene description is the representation of shading for all its parts, i.e. the description of the materials that will be used to render it. UsdShade is the submodule of USD that is designed to handle material description. UsdShade was developed at Pixar in 2014 during the production of Finding Dory and has been used on all following productions. Since then, we have learned a lot and have refined our practices to get the most out of UsdShade. We want to share our best practices and learnings to guide others to great success in using UsdShade in their pipelines.

Procedural Block-Based USD Workflows in Conduit

We present a procedural block-based approach for USD pipelines that minimizes up-front USD knowledge requirements while ensuring users can still leverage the power of native USD. Building on USD and Conduit, we define fundamental workflow principles and philosophies on artist-interaction that guide our modular Houdini-based toolsets. Finally, we discuss the successes and challenges in scaling these workflows into production.

A Pipeline Retrospective on USD & Conduit

Over the past three years, Blue Sky Studios built a USD-centric layer on top of its next generation pipeline framework, Conduit. This transition involved mapping the legacy Blue Sky workflows into USD constructs. In addition, direct artist feedback during the delivery of six short films provided insights that informed the evolution of the Conduit backend to support these modernized workflows.

SESSION: Rendering - Tech

Unbiased VNDF Sampling for Backfacing Shading Normals

Lessons Learned and Improvements when Building Screen-Space Samplers with Blue-Noise Error Distribution

Recent work has shown that the error of Monte Carlo rendering is visually more acceptable when distributed as blue noise in screen space. Despite recent efforts, building a screen-space sampler is still an open problem. In this talk, we present the lessons we learned while improving our previous screen-space sampler. Specifically: we advocate for a new criterion to assess the quality of such samplers; we introduce a new screen-space sampler based on rank-1 lattices; we provide a parallel optimization method that is compatible with a GPU implementation and that achieves better quality; we detail the pitfalls of using such samplers in renderers and how to cope with many dimensions; and we provide empirical evidence of the versatility of the optimization process.
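As background for the rank-1 lattice construction mentioned above, here is a minimal sketch of generating rank-1 lattice points from an integer generating vector. The per-pixel optimization toward a blue-noise error distribution, which is the talk's contribution, is not reproduced; the generating vector below is only an example.

import numpy as np

def rank1_lattice(n, gen_vector, shift=None):
    """Generate an n-point rank-1 lattice: x_i = frac(i * g / n + shift).
    gen_vector is an integer generating vector of length d; shift is an
    optional (e.g. per-pixel Cranley-Patterson) random shift in [0,1)^d."""
    g = np.asarray(gen_vector, dtype=np.float64)
    i = np.arange(n, dtype=np.float64)[:, None]
    pts = i * g[None, :] / n
    if shift is not None:
        pts += np.asarray(shift)[None, :]
    return pts % 1.0

# Example: 64 samples in 2D with a classic Fibonacci generating vector.
samples = rank1_lattice(64, [1, 19])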

Photonic Rendering for Hair Cuticles using High Accuracy NS-FDTD method

The internal structure of hair consists of three layers: the cuticle, the cortex, and the medulla, with multiple membrane structures inside the cuticle. Light incident on these subwavelength structures undergoes scattering and interference, and the reflected highlights vary with viewing angle. This coloration due to microstructure is called structural coloring or photonic coloration. It produces a distinctive specular highlight, and governs its magnitude, in a way that cannot be captured by the linear optics used in CG. In the case of hair, this structural coloring occurs in addition to simple surface reflection and the backscattered light that penetrates the fiber and is absorbed by the melanin pigment. In the present report, we mainly discuss the effect of the structural coloring produced by the multi-layered structure of the cuticle region on the hair surface, simulating the electromagnetic field with the NS-FDTD (Non-Standard Finite-Difference Time-Domain) method. In addition, we also discuss backscattering phenomena inside the hair fibers.
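For context, here is a minimal sketch of a standard 1D Yee FDTD update over a layered permittivity profile. This is the textbook scheme, not the non-standard (NS-FDTD) corrections used in the talk, and all names and values are illustrative.

import numpy as np

def fdtd_1d(eps_r, steps, source_pos=10):
    """Minimal 1D Yee FDTD update (standard scheme, not NS-FDTD).
    eps_r: relative permittivity per cell, e.g. a layered cuticle-like stack."""
    n = len(eps_r)
    ez = np.zeros(n)          # electric field
    hy = np.zeros(n)          # magnetic field
    for t in range(steps):
        # Update H from the curl of E (normalized units, Courant number 0.5).
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
        # Update E from the curl of H, scaled by the local permittivity.
        ez[1:] += 0.5 * (hy[1:] - hy[:-1]) / eps_r[1:]
        # Soft sinusoidal source injecting a wave into the stack.
        ez[source_pos] += np.sin(0.1 * t)
    return ez

# Example: thin high-index layers embedded in a lower-index medium.
layers = np.ones(200)
layers[80:84] = layers[90:94] = 2.4
field = fdtd_1d(layers, steps=500)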

SESSION: Machine Learning

NoR-VDPNet++: Efficient Training and Architecture for Deep No-Reference Image Quality Metrics

Efficiency and efficacy are two desirable properties of the utmost importance for any evaluation metric dealing with Standard Dynamic Range (SDR) or High Dynamic Range (HDR) imaging. However, these properties are hard to achieve simultaneously. On the one hand, metrics like HDR-VDP2.2 are known to mimic the human visual system (HVS) very accurately, but their high computational cost prevents widespread use in large evaluation campaigns. On the other hand, computationally cheaper alternatives like PSNR or MSE fail to capture many of the crucial aspects of the HVS. In this work, we try to get the best of both worlds: we present NoR-VDPNet++, an improved variant of a previous deep learning-based metric for distilling HDR-VDP2.2 into a convolutional neural network (CNN).

Passing Multi-Channel Material Textures to a 3-Channel Loss

Our objective is to compute a textural loss that can be used to train texture generators with multiple material channels typically used for physically based rendering such as albedo, normal, roughness, metalness, ambient occlusion, etc. Neural textural losses often build on top of the feature spaces of pretrained convolutional neural networks. Unfortunately, these pretrained models are only available for 3-channel RGB data and hence limit neural textural losses to this format. To overcome this limitation, we show that passing random triplets to a 3-channel loss provides a multi-channel loss that can be used to generate high-quality material textures.
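A minimal sketch of the random-triplet idea follows, assuming material stacks stored as (batch, channels, height, width) tensors and a user-supplied 3-channel perceptual loss rgb_loss (e.g. a pretrained VGG feature loss, not shown). These names are assumptions, not the talk's implementation.

import torch

def multichannel_loss(pred, target, rgb_loss, num_triplets=4):
    """Compare multi-channel material stacks (albedo, normal, roughness, ...)
    with a loss that only accepts 3-channel images, by feeding it random
    channel triplets drawn identically from both stacks."""
    channels = pred.shape[1]
    total = 0.0
    for _ in range(num_triplets):
        idx = torch.randint(0, channels, (3,))   # random triplet (with replacement, for simplicity)
        total = total + rgb_loss(pred[:, idx], target[:, idx])
    return total / num_triplets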

A fast sparse QR factorization for solving linear least squares problems in graphics

A wide range of problems in computer graphics and vision can be formulated as sparse least squares problems. For example, Laplacian mesh deformation, Least Squares Conformal Maps, Poisson image editing, and as-rigid-as-possible (ARAP) image warping involve solving a linear or non-linear sparse least squares problem. High performance is crucial in many of these applications for interactive user feedback. For these applications, we show that the matrices produced by factorization methods such as QR have a special structure: the off-diagonal blocks are low-rank. We leverage this property to produce a fast sparse approximate QR factorization, spaQR, for these matrices in near-linear time. In our benchmarks, spaQR shows up to 57% improvement over solving the normal equations using Cholesky and 63% improvement over a standard preconditioner with Conjugate Gradient Least Squares (CGLS).
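spaQR itself is beyond the scope of a short sketch; for context, the snippet below shows how one of the cited problem classes can be posed as a sparse linear least-squares system and solved with SciPy's LSQR, which is mathematically equivalent to the CGLS baseline used in the benchmarks. The toy smoothing problem is purely illustrative.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

# Toy Laplacian-style smoothing in 1D, posed as sparse least squares:
# minimize ||A x - b||^2 where A stacks a data term and a smoothness term.
n = 1000
data = sp.identity(n, format="csr")                               # keep x close to observations
smooth = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))  # second differences ~ 0
A = sp.vstack([data, 10.0 * smooth]).tocsr()
b = np.concatenate([np.random.rand(n), np.zeros(n - 2)])

# LSQR is equivalent to conjugate gradients on the normal equations (CGLS).
x = lsqr(A, b)[0]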

Towards Large-Scale Super Resolution Datasets via Learned Downsampling of Ray-Traced Renderings

Delivering high resolution content is a challenge in the film and games industries due to the cost of photorealistic ray-traced rendering. Image upscaling techniques are commonly used to obtain a high resolution result from a low resolution render. Recently, deep learned upscaling has started to make an impact in production settings, synthesizing sharper and more detailed imagery than previous methods. The quality of a super resolution model depends on the size of its dataset, which can be expensive to generate at scale due to the large number of ray-traced pairs of renders required. In this report, we discuss our experiments training an additional neural network to learn the degradation operator, which can be used to rapidly generate low resolution images from existing high resolution renders. Our testing on production scenes shows that super resolution networks trained with a large synthetic dataset produce fewer artifacts and better reconstruction quality than networks trained on a smaller rendered dataset alone, and compare favorably to recent state of the art blind synthetic data techniques.
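The talk does not specify its network here; purely as an illustration, below is a minimal PyTorch sketch of a learned 2x degradation operator that could be trained on pairs of high- and low-resolution renders and then used to synthesize additional low-resolution images. The architecture and names are assumptions.

import torch
import torch.nn as nn

class LearnedDownsampler(nn.Module):
    """Toy 2x learned degradation operator: maps a high-resolution render
    to a low-resolution image (illustrative, not the talk's architecture)."""
    def __init__(self, channels=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, hi_res):
        return self.net(hi_res)

# One training step, regressing the renderer's own low-resolution output.
model = LearnedDownsampler()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
hi, lo = torch.rand(4, 3, 128, 128), torch.rand(4, 3, 64, 64)  # stand-in render pairs
optim.zero_grad()
loss = nn.functional.l1_loss(model(hi), lo)
loss.backward()
optim.step()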

SESSION: Rigging

Pose-weight Interpolation: a Lateral Approach to Pose-based Deformations

Sculpting character deformations that stay on-model for an arbitrary pose is a non-trivial task. Example based methods are desirable, depending on how few sculpted examples they require. After experimenting with various methods, we find the results are lacking from an artistic point of view. The core problem comes down to a matter of Pose-weight Interpolation, for which we present a novel, artist-friendly solution, Constrained Weight Smoothing. CWS computes Pose-weights on an n-dimensional mesh in pose-space such that weights for an arbitrary pose can be evaluated in O(1) time.

Real Time Interactive Deformer Rig Evaluations in Maya using GPUs

Maya has supported evaluating deformer nodes on the GPU since 2016. However, this GPU support is limited to evaluating simple linear chains of deformer nodes. In feature productions, character rigs have complex networks of deformation chains, resulting in most deformers being evaluated on the CPU. Here we detail these architectural limitations within Maya and then present our approach, which overcomes them to fully evaluate deformation networks on the GPU, enabling our rigs to run at over 50 fps on the GPU compared to 5 fps on the CPU.

Reinventing a Character Creation Pipeline Using Landmarking, Simulation, and Shared Character Data

Reinventing the humanoid character build pipeline at Blue Sky Studios presented several opportunities to create synergy between different processes, all centering around the concept of creating and maintaining a standardized Universal Mesh. The automation of rig argument placement, the building of rigging tools that use aspects of character geometry as inputs, the separation of character data from assets, and the creation of a stylized simulation-based approach to deform animated characters were all influenced by this base Universal Mesh. Ultimately, this approach offered new ways to get the most out of our character pipeline.

SESSION: Real-Time Technology

Simulation and Visualization of Virus Transmission for Architectural Design Analysis

The COVID-19 pandemic has made virus transmission a significant factor in designing buildings to ensure a safe and resilient environment. Simulation has been applied to analyze the potential risk of virus transmission within built spaces. Still, most existing simulations have focused on a small region of a building over a short period of time. Here we cover how we leveraged an occupancy simulation to inform and visualize the longitudinal impacts of virus transmission in relation to a given building design and its associated dynamic occupant behaviours. The flexibility of our system makes the simulation scalable and adaptable, so that it can be applied to any building or context, with various types of occupants.

Colour-Managed LED Walls for Virtual Production

We present DNEG’s approach to colour-managing LED walls for in-camera VFX in virtual production. By characterising the entire imaging pipeline end-to-end with a closed loop, we enable filmmakers and visual effects artists to prepare virtual environments in advance of shooting, confident that their colour intent will be preserved in the final footage. Our system flexibly adapts to the Cinematographer’s choice of exposure and white-balance, allowing them to focus more on story-telling and less on technical constraints of the wall.

Our contribution takes place in two stages: first, we measure and characterise the response of the camera to the LED wall; second, we apply the results of this characterisation in real time to images as they are displayed.
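A heavily simplified sketch of the characterisation idea, assuming a purely linear model: fit a 3x3 matrix from the colours sent to the wall to the colours the camera records, then pre-correct content with its inverse. The function names and the linear assumption are illustrative, not DNEG's actual pipeline.

import numpy as np

def fit_wall_to_camera_matrix(sent_rgb, captured_rgb):
    """Least-squares fit of a 3x3 matrix M such that captured ~= sent @ M.
    sent_rgb / captured_rgb: (N, 3) arrays of linear RGB values for N
    colour patches shown on the LED wall and photographed by the camera."""
    M, *_ = np.linalg.lstsq(sent_rgb, captured_rgb, rcond=None)
    return M

def precorrect(content_rgb, M):
    """Apply the inverse characterisation so the wall, as seen through
    the camera, reproduces the intended content colours."""
    return content_rgb @ np.linalg.inv(M)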

Joji - 777: Animated Multi-Character Paintings with a Single Performer: A novel approach to realtime motion capture and choreography

Tableau paintings, described as “living pictures”, often take the form of a painting or photograph in which characters are arranged for picturesque or dramatic effect. These paintings and photographs are inherently defined by the motion of a character, but have traditionally been presented as static two-dimensional images. What would it feel like to create a tableau painting that is not static, and is inherently three-dimensional? This talk centers on an analysis of the creative and technical workflows used during the production of Joji - 777, in which a small crew created an animated video where each scene was composed with the intention of depicting a moving, 3D tableau painting. The technical complexities of achieving the desired result were amplified by COVID lockdown restrictions, which at the time of production ruled out motion capturing multiple performers at once. Using real-time game engine technology in conjunction with skeletal retargeting in post-production, a novel approach was developed to allow a single performer to play multiple characters simultaneously.

SESSION: Real-Time Rendering

The Tech and Art of Cyberspaces in Cyberpunk 2077

A deep dive into the technology and art behind cyberspace and braindances in Cyberpunk 2077. Braindances are the recorded memories and feelings of individuals, reprojected in the mind of the viewer. To bring this concept into reality, we decided to follow an unconventional approach to rendering environments and characters in real-time. The core visual concept was based around sparse point clouds and glitch effects. Post processes like datamoshing were used to further hide the underlying geometry, aiming for a surreal, out-of-body experience.

Modeling Asteroid (101955) Bennu as a Real-time Terrain

Using a commercial terrain engine [Englert 2012], we show how to model the complex surface of asteroid (101955) Bennu [Science 1999] with a hybrid approach that combines digital elevation models with static and dynamic displacement. Then we show how to adapt tri-planar material mapping (TRIMAP) to the curvature of planetary bodies, in order to effectively avoid all inherent texturing artefacts.
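For context, here is a minimal sketch of conventional tri-planar blending, where three axis-aligned texture projections are weighted by the surface normal. The talk's adaptation to planetary curvature is not reproduced, and the names are illustrative.

import numpy as np

def triplanar_weights(normal, sharpness=4.0):
    """Blend weights for the three axis-aligned texture projections,
    derived from the surface normal (standard tri-planar mapping)."""
    w = np.abs(np.asarray(normal, dtype=np.float64)) ** sharpness
    return w / w.sum()

def triplanar_sample(tex_x, tex_y, tex_z, position, normal):
    """Sample a tiling texture in the YZ, XZ and XY planes and blend.
    tex_* are callables mapping 2D coordinates to a colour."""
    x, y, z = position
    wx, wy, wz = triplanar_weights(normal)
    return wx * tex_x(y, z) + wy * tex_y(x, z) + wz * tex_z(x, y)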

Swish: Neural Network Cloth Simulation on Madden NFL 21

This work presents Swish, a real-time machine-learning based cloth simulation technique for games. Swish was used to generate realistic cloth deformation and wrinkles for NFL player jerseys in Madden NFL 21. To our knowledge, this is the first neural cloth simulation featured in a shipped game. This technique allows accurate high-resolution simulation for tight clothing, which is a case where traditional real-time cloth simulations often achieve poor results. We represent cloth detail using both mesh deformations and a database of normal maps, and train a simple neural network to predict cloth shape from the pose of a character’s skeleton. We share implementation and performance details that will be useful to other practitioners seeking to introduce machine learning into their real-time character pipelines.
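Architecture details are given in the talk; purely as an illustration, the sketch below shows the basic idea in PyTorch: a small network regressing per-vertex cloth offsets from the flattened skeleton pose. Sizes and names are assumptions.

import torch
import torch.nn as nn

class ClothFromPose(nn.Module):
    """Toy pose-to-cloth regressor: flattened joint rotations in,
    per-vertex offsets from the skinned cloth mesh out."""
    def __init__(self, num_joints=70, num_vertices=4000, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints * 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_vertices * 3),
        )
        self.num_vertices = num_vertices

    def forward(self, pose):                     # pose: (batch, num_joints * 3)
        return self.net(pose).view(-1, self.num_vertices, 3)

model = ClothFromPose()
offsets = model(torch.rand(1, 70 * 3))           # add these offsets to the skinned mesh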

Dynamic Diffuse Global Illumination Resampling

Ray-traced global illumination can be partitioned into direct contributions of the light sources that reflect to the camera after one bounce and indirect contributions that scatter over multiple bounces. We propose a new real-time solution, dynamic diffuse global illumination resampling, that computes direct and indirect illumination accurately and with low noise. The key idea is to derive a new, unified algorithm from the principles of two state-of-the-art real-time algorithms: ReSTIR many-lights direct shadowing [Bitterli et al. 2020] and DDGI indirect light probes [Majercik et al. 2019]. Through this unification, global illumination resampling achieves higher quality than the combination of its two components at real-time framerates. At the cost of a small amount of bias, our technique also outperforms hardware-accelerated path tracing in both runtime and noise.
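As background, here is a minimal sketch of the single-sample weighted reservoir that underlies ReSTIR-style resampling. The unified direct/indirect algorithm of the talk is not reproduced, and the merge step omits the target-pdf reweighting used in production implementations.

import random

class Reservoir:
    """Single-sample weighted reservoir, the basic building block of
    ReSTIR-style resampled importance sampling."""
    def __init__(self):
        self.sample = None      # currently selected candidate (e.g. a light sample)
        self.w_sum = 0.0        # running sum of candidate weights
        self.count = 0          # number of candidates seen

    def update(self, candidate, weight):
        self.w_sum += weight
        self.count += 1
        # Keep the new candidate with probability weight / w_sum.
        if self.w_sum > 0 and random.random() < weight / self.w_sum:
            self.sample = candidate

    def merge(self, other):
        """Combine with another reservoir (e.g. for spatial/temporal reuse).
        Simplified: full ReSTIR also reweights by the target pdf."""
        self.update(other.sample, other.w_sum)
        self.count += other.count - 1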

SESSION: Facial Animation

Fast Facial Animation from Video

Real-time facial animation for virtual 3D characters has important applications such as AR/VR, interactive 3D entertainment, pre-visualization, and video conferencing. Yet despite important research breakthroughs in facial tracking and performance capture, there are very few commercial examples of real-time facial animation applications in the consumer market. Mass adoption requires real-time performance on commodity hardware and visually pleasing animation that is robust to real-world conditions, without requiring manual calibration. We present an end-to-end deep learning framework for regressing facial animation weights from video that addresses most of these challenges. Our formulation is fast (3.2 ms), utilizes images of real human faces along with millions of synthetic rendered frames to train the network on real-world scenarios, and produces jitter-free, visually pleasing animations.

Persona: Real-Time Neural 3D Face Reconstruction for Visual Effects on Mobile Devices

We present Persona, a real-time, face-oriented visual effects solution for mobile devices. Persona consists of a 3D face tracker with multi-scale reconstruction models for different tiers of mobile devices and a visual effect authoring tool. Our face tracker reliably predicts a sequence of facial and illumination parameters from a monocular video in real time. Those parameters can then be used to develop many interesting applications. We demonstrate that our method outperforms existing state-of-the-art work on 3D face reconstruction for mobile devices and showcase results generated by our tool.

Simplified facial capture with head mounted cameras

We present a unified pipeline for high-resolution facial capture that replaces the initial traditional seated capture with a head-mounted camera setup. At its core, our approach relies on improving roughly personalized blendshapes by fitting handle vertices, in a Laplacian framework, to depth and image data, thus refining the geometry. This pipeline has been used in production to generate high-quality animation to train our proprietary marker-based solution, leading to large time and cost savings.

SESSION: Cloth Simulation

Wrapped Clothing on Disney’s Raya and the Last Dragon

This talk outlines novel techniques used to create the complex wrapped clothing on Walt Disney Animation Studios’ “Raya and the Last Dragon”. Inspired by traditional Southeast Asian designs, these wrapped garments are formed by deftly folding long panels of cloth, with little to no reliance on seams to hold the structure. This departure from a standard pattern-based pipeline made the construction and performance of these specialized garments in CG a very challenging task. Using the sampot, dhoti, and bust-wrap garments as production examples, we describe their real-world counterpart designs and construction, discuss what makes them challenging to create in CG, and then outline how we extrapolated their designs and realized them for the stylistic needs and performances of the characters on the film.

Adding Style, Folds, and Energy to the Costumes of Soul

The Human World of Soul takes place in a vibrant New York City, full of life, energy, and everyday people. Our character designers captured this in their expressive drawings, depicting clothing in a complex and artfully messy way. When creating costumes in CG, however, cloth TDs were accustomed to simplifying the designs to make cleaner, more graphic looks. These techniques have been established and refined on our previous films, such as Incredibles 2 and Up. On Soul, we sought to develop a different look for costumes that would support the story, style, and energy of the film.

DreamWorks Art-Driven Shot Sculpting Toolset

This talk presents DreamWorks’ art-driven shot sculpting toolset used by the Character Effects (CFX) Department to efficiently and logically sculpt shapes of character skin, clothing, hair/fur, and props in shots. The ability to visualize an Animator’s drawovers during CFX shot work introduced an improved visual communication language between Animators and CFX artists. The toolset’s wide range of shot sculpting abilities helps achieve the different artistic styles of various films and enhances the visual impact of animation. This efficient toolset makes shot sculpting an intuitive process for an artist rather than one littered with cleanup work.

Simulating Cloth Using Bilinear Elements

The most widely used cloth simulation algorithms within the computer graphics community are defined exclusively for triangle meshes. However, assets used in production are often made up of non-planar quadrilaterals. Dividing these elements into triangles and then mapping the displacements back to the original mesh results in faceting and tent-like artifacts when quadrilaterals are rendered as bilinear patches. We propose a method to simulate cloth dynamics on quadrilateral meshes directly, drawing on the well-studied Koiter thin-sheet model [Koiter 1960] to define consistent elastic energies for linear and bilinear elements. The algorithm obviates the need for artifact-prone geometric mapping, and has computation times similar to its fully triangular counterpart.
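As background for the terminology, here is a minimal sketch of evaluating a point and its parametric tangents on the bilinear patch spanned by a (possibly non-planar) quad; the Koiter-based elastic energies of the talk are not reproduced.

import numpy as np

def bilinear_patch(p00, p10, p01, p11, u, v):
    """Point and first derivatives of the bilinear patch spanned by the
    four (possibly non-planar) corners of a quadrilateral."""
    p00, p10, p01, p11 = map(np.asarray, (p00, p10, p01, p11))
    point = ((1 - u) * (1 - v) * p00 + u * (1 - v) * p10
             + (1 - u) * v * p01 + u * v * p11)
    du = (1 - v) * (p10 - p00) + v * (p11 - p01)   # tangent along u
    dv = (1 - u) * (p01 - p00) + u * (p11 - p10)   # tangent along v
    return point, du, dv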

SESSION: Dynamics/Simulation

Weaving The Druun’s Webbing

The Druun in Walt Disney Animation Studios’ “Raya and the Last Dragon” is a unique character, both in design and in implementation. Over the course of the film, unique solutions were designed to overcome the technical challenges of having a creature made of both a fluid and a web-like structure. In this talk, we present the techniques used to bring one of the Druun’s amazing features to life: the webbing.

FLIP Fluids as a Bi-directional Fuel Source in a Volumetric Fluid Simulation

We present a fast and flexible technique for creating Pyro simulations directly from, and coupled with, FLIP simulations for fuel. A typical Pyro simulation offers numerous ways to generate fuel, but mostly as an input fed exclusively into the Pyro simulation rather than as a coupled partner in it. None of the characteristics of the fuel, and specifically the reaction of fuel consumption, is considered in a traditional Pyro simulation. We strove to create this relationship in a coupled simulation, sharing fuel, divergence, and velocity data between the two. All this sharing creates a more realistic simulation with little artificial initial velocity.

Fluid Fabrics in Trolls World Tour

We outline the development of natural effects like waterfalls, rivers, and lava for the small-scale world represented in Trolls World Tour. Creating natural effects in a world made of various fabrics brought new challenges in representing movement and scale. We examine our problem-solving methods and how we blended fluid motion with cloth/fabric motion to achieve the desired effect.

Cooking Southeast Asia-inspired Soup in Animated film

Walt Disney Animation Studios’ “Raya and the Last Dragon” is an animated film inspired by the people and culture of Southeast Asia. In “Raya and the Last Dragon”, we created a soup inspired by Thailand’s Tom Yum soup to help represent the richness of the region’s food culture. Our goal was a more believable and authentic representation of food than we had previously achieved, which required a novel approach, especially in how to simulate the motion of the chili oil on top of the soup and of multiple representative ingredients. This talk describes the collaborative process of designing, simulating, and creating materials to achieve the final look of the soup. Figure 1 shows three renders from different camera angles.

SESSION: Crowds and Hair

Wig: The Hair Story From Shrek 2 to The Croods: A New Age

This talk presents the features and history of DreamWorks’ Wig system, which has been used over the past twenty years on 28 animated feature films and over 2,000 hair setups and growing. The principal philosophy of the system since its inception has been to place control over simulations into the hands of animators. Over time, the system and its related tools have been updated to meet the challenges of evolving technologies and respond to animator needs. With its production proven history, and flexible architecture, the system continues to adapt to recent hair-heavy productions.

Hair Grooming with Imageworks’ Fyber

Fyber is our new proprietary standalone grooming software solution for hair and fur on all current and upcoming film projects.

It was developed at Sony Pictures Imageworks (SPI) to address the need for a faster, more interactive, and more artist-friendly tool to generate any kind of hair, ranging from a character’s head hair to fully furred animals, with the goal of significantly lowering grooming times and easing the learning experience for new artists.

As node-based software, Fyber offers great flexibility to achieve any look an artist might desire, while also providing fast visual feedback thanks to its highly multi-threaded computation graph and integrated OpenGL and Arnold viewports.

Fyber’s underlying engine is fully separated from its user interface, which allows for seamless and easy integration into third-party applications like Maya and Katana, where it can be used to compute hair on the fly or to render it to their respective viewports.

It was first used for SPI’s work on Disney’s Mulan and has replaced our previous grooming solution on all shows ever since.

Mathematical Tricks for scalable and appealing crowds in Walt Disney Animation Studios’ “Raya and the Last Dragon”

The crowds department had to tackle a variety of challenging shots for Walt Disney Animation Studios’ “Raya and the Last Dragon”, such as beetles crawling on top of each other, immense fish simulations, and dragon choreography.

In order to handle this level of complexity while keeping a good amount of artistic control, we implemented effective technical solutions such as anisotropic distances in Position Based Dynamics (PBD) and boids simulations, procedural animation layers for fish and dragons, and distance integral invariants to detect dragon foot contacts.

Populating the World of Kumandra: Animation at Scale for Disney’s “Raya and the Last Dragon”

Walt Disney Animation Studios’ 59th film “Raya and the Last Dragon” takes place in the fantasy world of Kumandra. We look at the challenges of casting and choreographing diverse people, creatures, and props to bring a varied spectrum of cultures and scenes to life: tense gatherings, intimate palace interiors, bustling floating markets with integrated boat traffic, panicked crowds, everyday life, and magical movements of dragons. The sheer diversity of characters presented a new set of obstacles that inspired us to extend our crowd system to efficiently address our increasingly diverse needs. Using production examples and results, we look at how our existing workflows and pipeline were leveraged and augmented to support these efforts. We discuss solutions for art-directed, simulated, and procedural approaches using our in-house Houdini-based system called Skeleton Library [El-Ali et al. 2016].

SESSION: Pipeline 2

Pixar’s OUT: Experimental Look Development in the SparkShorts program

Pixar’s OUT, released summer 2020 on Disney+, is a short film with a highly stylized look, produced under the in-house SparkShorts program. The program champions new creative voices and storytelling via tight-knit production teams that work with limited budgets to push the boundaries of animation production at Pixar. In this talk we present the conception, design and implementation of the film’s unique visual style. Armed with some early inspirational artwork and some pre-production technical exploration, much of the look for the short was found during the course of production. Ultimately, we landed on a style that was somewhere on the bridge between 2D and 3D animation. When we had it right, shots felt like a “living painting”, with characters and sets that felt like a medium come alive, instead of a series of paintings per frame. Our goal was not to emulate a particular traditional medium but to evoke a feeling of something crafted that stood on its own as a novel style, which we achieved with a small but enthusiastic crew.

Cartoons in the Cloud

The SparkShorts program at Pixar Animation Studios allows directors to try new and different looks. Our short, Twenty Something, is a 2D, hand-drawn animated film. When the pandemic forced all of our artists to work from home, we scrambled to create a workflow for managing, sharing, and reviewing 2D assets. While we have a long history of collaborating on 3D films, we did not have a solution for 2D imagery.

We created a cloud-based pipeline based around on-premises bucket storage, microservices, and event-driven workflows. The result was Toontown, a suite of technologies that allowed our artists to complete Twenty Something working from home.

The Right Foot in the Wrong Place: Character Locomotion in Half-Life: Alyx

This paper describes the non-player character locomotion system developed for the VR game Half-Life: Alyx. Our solution uses a stride retargeting system, footstep prediction, and a custom motion matching system to animate humanoid and non-humanoid characters as they navigate tight, dense virtual environments in real time.

SESSION: Art Installations

Unfinished Farewell

As COVID-19 spreads across the globe and the number of deaths continues to rise, the heartbreaking experiences are being replaced by collective mourning. As German journalist and pacifist Kurt Tucholsky once said: “The death of one man is a tragedy, the death of millions is a statistic”.

When we look back at the help-seeking posts of those who were lost: those who died without a confirmed COVID-19 test result; those who committed suicide out of despair; those whose life-saving medical equipment was taken away; and those who lost their lives to overwork and infection from their patients... Many of them were not included in the official statistics, and they are likely to be forgotten over time. They were not treated fairly while they were alive, and they were not mentioned after they passed away.

We spoke to one of those families. One daughter said: “After this pandemic, who will remember someone such as my mother – she had nowhere to go for medical treatment; she was rejected by the hospital, and she had to die at home?”

This is one of the reasons why we built this online platform. We want to document as many people who have left us because of the pandemic as possible. Our website also includes the help-seeking information these people posted before they passed away. This is the evidence they left behind at a particular moment in this pandemic. We hope to provide a space for family members to express their grief and for the public to mourn the dead. Behind every number is a life.

”Unfinished Farewell” can be viewed at www.farewell.care and www.jiabaoli.org/covid19

Beeing - A nature inspired immersive VR journey designed to enhance public transportation

Climate change, environmental protection, and sustainability dominate media and political affairs. The demand for increased use of public transport instead of cars is obvious. Beeing, the nature-inspired VR journey, is intended to raise awareness of these topics and at the same time create an example of added value for public transportation. Additionally, a prototype for a new content platform is being elaborated, which will also enable future-oriented developments by providing a variety of entertaining content between train stations. The prototype presented is designed for short-distance trains in the metropolitan area of Stuttgart, Germany, with an approximate duration of three minutes. The physical conditions of the train ride are reflected in VR. With this special form of customer experience, the user is picked up in the real world, i.e. in the real train, in order to experience a fantastic ride that far exceeds the experience of a normal train ride.

Interacting with Humanoid Robots: Affective Robot Motion Design with 3D Squash and Stretch Using Japanese Jo-ha-kyu Principles in Bunraku

The Bunraku puppets’ affective motions are often praised as “one of the most beautiful motions in the world” by UNESCO. We characterize 3D “squash and stretch” motion in Bunraku puppet plays and realize it in a real, life-size robot with unique mechanical structures. Our results reveal that the music tempos and the “squash and stretch” puppet movements follow the so-called “Jo-Ha-Kyū” principle, an artistic modulation used in traditional Japanese performance. Our research shows that affective robot motion design combining 3D “squash and stretch” with the Jo-Ha-Kyū principle is one of the keys to affective human-robot interaction.

Freezing Fire – Automated Light-Passes for Stop-Motion VFX

This work proposes and evaluates a method of image-based rendering for integrating light-emitting CG assets with digital photography. The framework consists of a capture stage, in which footage of the scene under varied lighting is acquired, and a reconstruction stage, which outputs the calculated light contribution of the CG element upon the scene. This form of relighting is novel as it embraces scenarios where the light source is intended to be in the frame. The freedom to introduce emissive objects in post opens up creative room for light animation and is assessed here through its use in the production of a stop-motion short film.

SESSION: Cyberpunk 2077

Transparency Rendering in Cyberpunk 2077

This talk discusses in detail the transparency pipeline of Cyberpunk 2077. We present a high-level overview of the system and then provide details on individual components. In particular, we discuss our take on decoupled particle lighting (DPL), our distortion approach, and a parallel slab refraction approximation. We also provide performance details for these features on target platforms.

Moving Cyberpunk 2077 to D3D12

Shadows Optimizations in Cyberpunk 2077

This talk presents significant rendering-related CPU and GPU optimizations concerning shadows in Cyberpunk 2077. Given the scale of the game and the variety of platforms that it has to support, different solutions had to be implemented and coupled in order to fulfil the time and memory requirements. We also introduce our take on runtime and offline techniques for object culling and shadow maps caching in order to minimize potential overhead. The article concerns three different shadow types that can be spotted around Night City: cascaded, distant, and local shadows.

Area Light Sources in Cyberpunk 2077

This talk discusses in detail the area lights technique used in Cyberpunk 2077. We outline the limitations and challenges we had, and then present our solution. Particularly, we describe our 100% analytical capsule lights, our spot-capsule solution, and our capsule light shadow technique. We then discuss our artistic pipeline, our performance results, and limitations.

SESSION: Rendering - Art

The Atmosphere of Raya and the Last Dragon

The cultures of Southeast Asia provided plentiful inspiration for the setting and art direction of Walt Disney Animation Studios’ “Raya and the Last Dragon”. This fantasy adventure required many unique environments ranging from desert landscapes to tropical forests, each presenting rich lighting scenarios paired with the appropriate atmospherics.

Many departments collaborated to create the extensive amount of atmospherics required by such varied and lush locations. Simultaneously, emphasis was placed on making the atmospheric Lighting workflow more efficient. We focused on improvements to allow Lighting artists more flexibility and control over making complicated atmospheric setups without having to request new assets or assistance from the Effects department on every shot. This in turn would save time and relieve significant production strain.

Stylizing Metals and More with the Glint Filter

We present a novel and artist-controllable system for stylizing metallic surfaces. This technique filters a beauty render with a Cryptomatte-encoded identifier (id) pass to generate a new stylized image. The id pass drives an image-space color flood fill algorithm that uniformly colors regions to create a faceted metal appearance. The ids are generated using a variety of methods that target different aspects of the reflective surfaces. Tools that further modulate the facet ids give artists control over the effect in both render and compositing. The result is a smooth and temporally coherent effect that complements other non-photorealistic imagery. We can expand and generalize this technique to apply to non-faceted metallic surfaces.
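A minimal sketch of the flood-fill step described above: every pixel in a region defined by the id pass is replaced by that region's mean beauty colour. The array layout is an assumption, and Cryptomatte decoding is not shown.

import numpy as np

def facet_fill(beauty, ids):
    """Uniformly colour each id region with its mean beauty colour.
    beauty: (H, W, 3) float image; ids: (H, W) integer region ids."""
    out = np.empty_like(beauty)
    for region in np.unique(ids):
        mask = ids == region
        out[mask] = beauty[mask].mean(axis=0)   # flat, faceted look per region
    return out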

Time and Memory Efficient Displacement Map Extraction

Displacement maps are used to enhance the final rendered animation at most high-end studios, including Blizzard Entertainment. However, commonly available products (Mudbox, ZBrush) that are used to sculpt high-resolution models and extract displacements from low-resolution counterparts are not conducive to the quick, iterative workflow that artists normally demand. Here we summarize the drawbacks of accommodating such commercial products in our pipeline. We then present our custom solution developed using open industry-standard libraries like OpenSubdiv (Pixar), OptiX (NVIDIA), and Embree (Intel). We also highlight how we employed frequency analysis (the Discrete Cosine Transform) to extract time-efficient and memory-optimal displacement samples, all of which has measurably improved overall productivity in our displacement-map workflows.
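As an illustration of the frequency-analysis step, here is a minimal sketch that keeps only the lowest-frequency DCT coefficients of a displacement tile; the truncation strategy and tile size are assumptions, not the talk's actual scheme.

import numpy as np
from scipy.fft import dctn, idctn

def compress_displacement(tile, keep=16):
    """Keep the keep x keep lowest-frequency DCT coefficients of a
    displacement tile and reconstruct it, trading accuracy for memory."""
    coeffs = dctn(tile, norm="ortho")
    truncated = np.zeros_like(coeffs)
    truncated[:keep, :keep] = coeffs[:keep, :keep]
    return idctn(truncated, norm="ortho")

# Example: a 64x64 displacement tile reconstructed from 16x16 coefficients.
tile = np.random.rand(64, 64).astype(np.float32)
approx = compress_displacement(tile, keep=16)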

Stylizing Volumes with Neural Networks

In ”Raya and The Last Dragon”, a blast of energy rings across the desiccated lands of the ancient world of Kumandra. This climactic story point represents a powerful force of magic and transformation, and as such, is art directed to be composed of stylized wave patterns and harmonic textures, as if created by sound vibrations. To achieve this, we artistically stylize a simulated volume using Neural Style Transfer. In this talk, we describe the integration of deep learning-based tools into our effects pipeline to accomplish this.

SESSION: Set Creation

Underwater Procedural Vegetation on Pixar’s Luca

On Luca, we extended the procedural vegetation and debris system, called Moss, which has been in use at Pixar since Brave, with capabilities to create lush underwater seascapes. The Moss system was modernized for Onward to be OSL (Open Shading Language) based, enabling artists without C++ expertise to develop new Moss types. For Luca, we added a deformable mesh geometry type for complex underwater vegetation. Additionally we added SeaWind, an OSL-based procedural motion module, to hit the specific art direction of flowing curves in underwater currents. We also improved the pipeline which brings procedural geometry into Houdini for integrating the hero simulation with the procedural motion seamlessly.

Trolls World Tour: Desert Bling

The fantastically realistic environment of Trolls World Tour took a detour into a blistering desert made purely of flecks of glitter. In order to capture the Trolls Glitter Desert experience, we blended the visual expectations of a sand-filled desert with the physical nature of flattened glitter pieces. We needed to develop mathematical procedurals to integrate with various simulation techniques, and to create a hand-drawn keyframe system to choreograph the glitter with 2D artistic control. We built custom USD software, with performance almost 10 times faster than native cook times, in order to work with the millions of sparkly plastic glitter instances, and integrated it with shaders for our proprietary MoonRay renderer.

Imagining the Great Before

During the making of Pixar’s 2020 animated movie Soul, we were tasked with creating a brand-new world, the Soul World, which posed a challenge to design and express within traditional art forms. As with most animated features, we had to execute within a limited amount of time while responding to an ever-evolving storyline. In order to achieve this goal, we had to approach it unconventionally. One method that helped greatly was expanding an in-house texture development tool into a real-time look development environment. This new tool enabled us to iterate much more quickly and frequently, which in turn helped us receive more timely feedback from the Director and Production Designer in order to hit the look they were after.

We also used more established tools to explore new looks. Maya, for example, allowed us to effectively and quickly mock up entire environments with enough color and lighting information to make broad decisions. Since one of the new worlds we were building had to emulate a child’s playground while incorporating symmetry, we were able to establish the language quite effectively thanks to Maya’s familiar interface. For the parts of the set that were unfamiliar to us, like the ‘ethereal pavilions’, we turned to Houdini to try to capture something fresh and new. We achieved the look of these by freezing motion blur into still kinetic forms. All three techniques, whether using new or existing tools, ended up contributing to the ‘not-of-this-Earth’ look desired by the Director and Production Designer on Soul.

Creating Diversity and Variety in the People of Kumandra for Disney’s Raya And The Last Dragon

In Walt Disney Animation Studios’ “Raya and the Last Dragon”, the fantasy world of Kumandra is composed of five lands, representing five parts of a dragon. All aspects of the character designs were inspired by the many cultures of Southeast Asia. Each land is inhabited by a unique clan, and the crowds assets needed to reflect this diversity and variety both within and between the clans. This was achieved by introducing a novel approach that is modular in both design and construction of the assets. Key aspects include strategic reuse and refit, and new look techniques for creating additional variation between clans. We also employed a tracking and management system for visually validating the assets, which played an important role in the efficient use of the data downstream. In addition, an extremely collaborative workflow between all departments involved was critical, including the Visual Design, Character Asset, and Crowds Simulation departments. The overall enhancements to the workflow made it possible to creatively generate the thousands of crowd assets with the desired art direction for the film.