SIGGRAPH '22: ACM SIGGRAPH 2022 Talks


SESSION: Virtual Production

Green Screens, Green Pixels and Green Shooting

Sustainability and green producing are in high demand in all sectors of the creative industries. Fortunately, this topic is very well received among film students, providing an excellent opportunity for upcoming talent willing to apply new methods in creative processes. Virtualisation, and Virtual Production in particular, is predestined to play an essential role in fulfilling this demand. Factors that can be considered here include travel needs, lighting energy consumption, post-production complexity, energy sources and many more. The pandemic propelled Virtual Production technologies into common practice, in particular large LED walls for In-Camera VFX (ICVFX). Some reports on the environmental impact of traditional film productions are available [albert 2020], estimating an average CO2 footprint of 2840 tonnes for tentpole film productions; however, these estimates did not consider VFX. To date, there is little to no knowledge of the sustainability of Virtual Production and how it compares to traditional offline VFX productions. We take a closer look at two comparable productions, one using traditional offline rendering and post-production, the other using an LED wall and ICVFX. Energy requirements, creative opportunities and scalability are subjects of investigation and further discussion.

This abstract is a summary of a self-published report on Virtual Production and its opportunities for sustainable film productions 1.

“OpenVPCal”: An Open Source In-Camera Visual Effects Calibration Framework

This presentation introduces the OpenVPCal toolset, an open source in-camera visual effects calibration framework. The toolset includes reference patch generation, creation of calibration color transforms, and custom 1D roll-off lookup tables (LUTs) to control content brightness while attempting to maintain a linear light output from the LED panels. The resulting transforms can be expressed via an OpenColorIO Config [OCIO Contributors 2022] file, making tracking and transport easier for productions to manage and reproduce. We hope that by making this workflow open source and accessible, it will increase dialogue between practitioners in the space, leading to novel improvements for creating more amazing content worldwide.
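
To illustrate the kind of 1D roll-off the toolset describes, here is a minimal numpy sketch of a generic soft roll-off LUT; the knee position, curve shape, and constants are illustrative assumptions, not OpenVPCal's actual transfer math.

```python
# A minimal sketch (not OpenVPCal's actual math): a 1D roll-off LUT that keeps
# a linear response up to a "knee" and softly compresses values toward the LED
# wall's peak above it. All constants are illustrative.
import numpy as np

def rolloff_lut(size=4096, peak=1.0, knee=0.8):
    """Return `size` samples of a soft roll-off curve on [0, 1]."""
    x = np.linspace(0.0, 1.0, size)
    lut = np.where(
        x <= knee,
        x,  # linear section: preserve linear light output
        # smooth compression of the remaining range toward the peak
        knee + (peak - knee) * (1.0 - np.exp(-(x - knee) / (peak - knee))),
    )
    return lut

if __name__ == "__main__":
    lut = rolloff_lut()
    # Such a table could then be exported (e.g. as a LUT file) and referenced
    # from an OpenColorIO config.
    print(lut[:5], lut[-1])
```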

“Comandante”: Braving the Waves With Near Real-Time Virtual Production Workflows

The naval feature film “Comandante” is to be shot on a water stage within a waterproof but very low-resolution LED volume. We developed our Near Real-Time (NRT) workflow to immediately improve the quality of a shot, seamlessly bridging In-Camera VFX (ICVFX) and post-production and allowing VFX work to start on set as soon as the director says ‘cut’. The NRT workflow brings together key advancements in lens calibration, machine learning and real-time rendering to deliver higher-quality composites of what was just shot to the filmmakers in a matter of minutes.

SESSION: Surface Rendering and Lighting

On Fairness in Face Albedo Estimation

Digital avatars will be crucial components for immersive telecommunication, gaming, and the coming metaverse. Unfortunately, current methods for estimating facial appearance (albedo) are biased toward estimating light skin tones. This talk raises awareness of the problem with an analysis of (1) dataset biases and (2) the light/albedo ambiguity. We show how these problems can be ameliorated by recent advances, improving fairness in albedo estimation.

Countering Racial Bias in Computer Graphics Research

Advances in Spatial Hashing: A Pragmatic Approach towards Robust, Real-time Light Transport Simulation

Spatial hashing is a battle-tested technique for efficiently storing sparse spatial data. Originally designed to optimize secondary light bounces in path tracing, it has been extended for real-time ambient occlusion and diffuse environment lighting. We complement spatial hashing by introducing support for view-dependent effects using world-space temporal filtering. By optimizing the hash key generation, we improve performance through better cache coherence and reduced aliasing. Finally, we enhance sampling quality using methods including visibility-aware environment sampling.
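
As a rough illustration of hash-key generation for world-space caching (a generic sketch with assumed parameters, not the authors' exact scheme), a world position can be quantized at a distance-adaptive cell size and mixed with a coarse normal bucket:

```python
# Generic spatial-hash key generation sketch. Position is quantized to a cell
# size that grows with camera distance (a common LOD trick), and the surface
# normal is bucketed so opposite-facing surfaces do not share cache entries.
import numpy as np

def hash_key(position, normal, camera_pos, base_cell=0.05, table_size=1 << 22):
    # Distance-adaptive cell size: coarser cells further from the camera.
    dist = np.linalg.norm(position - camera_pos)
    cell = base_cell * max(1.0, dist)

    # Quantize position and normal into small integers.
    q = np.floor(position / cell).astype(np.int64)
    n = np.floor((normal + 1.0) * 1.5).astype(np.int64)  # a few buckets per axis

    # Mix the integers with large primes; better mixing improves cache
    # coherence and reduces aliasing (distinct cells colliding in one slot).
    h = (q[0] * 73856093) ^ (q[1] * 19349663) ^ (q[2] * 83492791)
    h ^= (n[0] * 10619863) ^ (n[1] * 6620830889) ^ (n[2] * 80630964769)
    return int(h) % table_size

key = hash_key(np.array([1.2, 0.4, -3.0]), np.array([0.0, 1.0, 0.0]),
               camera_pos=np.array([0.0, 0.0, 0.0]))
```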

Practical Multiple-Scattering Sheen Using Linearly Transformed Cosines

We introduce a new volumetric sheen BRDF that approximates scattering observed in surfaces covered with normally-oriented fibers. Our previous sheen model was motivated by measured cloth reflectance, but lacked significant backward scattering. The model presented here allows a more realistic cloth appearance and can also approximate a dusty appearance. Our sheen model is implemented using a linearly transformed cosine (LTC) lobe fitted to a volumetric scattering layer. We detail the fitting process, and present and discuss our results.

SESSION: Engaging FX & Visualization

“Encanto” - Let’s Talk About Bruno’s Visions

In Walt Disney Animation Studios’ “Encanto”, Mirabel discovers the remnants of her Uncle Bruno’s mysterious visions of the future. Developing the look and lighting for the emerald shards required close collaboration between our Visual Development, Look Development, Lighting, and Technology departments to create a holographic effect. With an innovative teleporting holographic shader, we were able to bring a unique and unusual effect to the screen.

Spatial Storytelling and Scientific Data Visualization

With misinformation spreading as quickly as the coronavirus itself, we saw an urgent need to illustrate complicated scientific concepts clearly, to help readers assess risk and empower them to stay safe. We will dive into the reporting, design, production and visualization of three recently published pieces.

Fracture-aware Tessellation of Subdivision Surfaces

We introduce a new tessellation algorithm for production rendering of fractured subdivision surfaces that produces higher quality results with less distortion than previous approaches. Our tessellator is provided with a fractured subdivision control mesh along with the corresponding unfractured mesh. We use the unfractured mesh to evaluate limit positions of the fractured mesh vertices before tessellation, and we apply displacements at this time. During tessellation, we iteratively split edges, evaluating displaced limit positions at added points. Additionally, we measure deformations that were performed during the fracture process and apply these during tessellation to allow effects like animated crack spreading. Our approach produces seamless results with no distortion of the unfractured surface, and enables a more efficient fracture pipeline.
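
A highly simplified sketch of the edge-splitting loop described above follows; `limit_position` is a hypothetical stand-in for displaced limit-surface evaluation on the unfractured mesh (e.g., via an OpenSubdiv-style evaluator), and the real tessellator is considerably more involved.

```python
# Simplified iterative edge splitting: new points are evaluated on the
# displaced limit surface of the unfractured mesh rather than interpolated,
# so the unfractured surface is not distorted. `limit_position` is hypothetical.
import math

def distance(p, q):
    return math.dist(p, q)

def midpoint(a, b):
    return tuple((x + y) * 0.5 for x, y in zip(a, b))

def tessellate_edges(edges, limit_position, max_edge_len):
    """Split edges until each is shorter than max_edge_len.

    Each edge is a pair of (uv, point) samples, where uv is a parametric
    location on the unfractured control mesh.
    """
    work = list(edges)
    out = []
    while work:
        (uv_a, p_a), (uv_b, p_b) = work.pop()
        if distance(p_a, p_b) <= max_edge_len:
            out.append(((uv_a, p_a), (uv_b, p_b)))
            continue
        uv_m = midpoint(uv_a, uv_b)        # parametric midpoint of the edge
        p_m = limit_position(uv_m)         # hypothetical displaced limit eval
        work.append(((uv_a, p_a), (uv_m, p_m)))
        work.append(((uv_m, p_m), (uv_b, p_b)))
    return out
```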

SESSION: Pipeline Potpourri

Visualizing the Production Process of “Encanto” with the Command Center

Walt Disney Animation Studios presents the Command Center, a web application for communicating and monitoring high-dimensional production metrics at the studio. Developed and utilized during the production of “Encanto”, the Command Center provides near-real-time insights into render performance metrics, department staffing, completion data, and film production progression statistics, all integrated into a single novel interface for the film. The source data is collated into multiple “buckets” of aggregation, allowing observation at high, medium, and low levels of granularity. Since its debut on “Encanto”, the Command Center has been adopted by several new productions and has shifted our studio’s direction and perception of collecting and monitoring production metrics.

Making Encanto with USD: Rebuilding a Production Pipeline Working from Home

In 2017, Walt Disney Animation Studios began a transition towards Universal Scene Description (USD) as its primary data interchange format [Pixar 2016]. The design and rollout encompassed production on Walt Disney Animation Studios’ “Raya and the Last Dragon” and “Encanto”. In addition to the general challenge of revamping a production pipeline, the studio was fully remote for a majority of the deployment. Even though pipeline changes were extensively tested during development, exercising fundamental pipeline changes in production invariably uncovered many issues. As such, the undertaking required a significant amount of communication and coordination among software developers, technical directors, artists and show leadership.

Editorial Pipeline Conversion: Animal Logic’s Transition to OpenTimelineIO

We present Animal Logic’s editorial pipeline refactor, moving from a rigid and overly complex in-house solution towards a more modern, flexible approach based on the open source technologies OpenTimelineIO and Electron.js. This upgraded design greatly increases flexibility over the previous effort, enabling cross-platform user adoption and further decoupling our tools from the editorial software of choice. The new pipeline has been rolled out on our most recent productions, and we are already starting to see the benefits of its extensibility and ease of troubleshooting.
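
For readers unfamiliar with OpenTimelineIO, a minimal, generic example of building and serializing a timeline with its Python API looks like the following; the clip names and paths are hypothetical, and this is not Animal Logic's pipeline code.

```python
# Minimal generic OpenTimelineIO usage: build a one-track timeline and write
# it to disk, the kind of interchange object an editorial pipeline passes
# between tools. Names and paths are hypothetical.
import opentimelineio as otio

timeline = otio.schema.Timeline(name="example_cut")
track = otio.schema.Track(name="V1")
timeline.tracks.append(track)

clip = otio.schema.Clip(
    name="sh010",
    media_reference=otio.schema.ExternalReference(target_url="file:///shots/sh010.mov"),
    source_range=otio.opentime.TimeRange(
        start_time=otio.opentime.RationalTime(0, 24),
        duration=otio.opentime.RationalTime(48, 24),
    ),
)
track.append(clip)

otio.adapters.write_to_file(timeline, "example_cut.otio")
```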

Wizart DCC Platform: extensible USD-based toolset

As an indie animation studio, we moved to Pixar’s Universal Scene Description (USD) as the backbone of our pipeline for Secret Magic Control Agency (SMCA). To make this possible, we introduced our in-house application framework, the Wizart DCC Platform, as a USD-native toolset. We used its extensibility to successfully implement new scene assembly, shading, hair grooming, and lighting workflows.
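
As a generic illustration of the USD layering and referencing that such a toolset builds on (plain pxr Python with hypothetical asset paths, not Wizart DCC Platform code):

```python
# Generic USD scene assembly: a shot stage referencing assets, with a sublayer
# carrying department overrides. Paths are hypothetical.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("shot_010.usda")

# Assemble the shot from a set asset and a character asset via references.
set_prim = stage.DefinePrim("/World/Set", "Xform")
set_prim.GetReferences().AddReference("assets/kitchen_set.usda")

char_prim = stage.DefinePrim("/World/Char/Hero", "Xform")
char_prim.GetReferences().AddReference("assets/hero.usda")

# Department overrides (shading, grooming, lighting) can live on sublayers
# that compose over the referenced asset content without editing the assets.
stage.GetRootLayer().subLayerPaths.append("overrides/lighting_overrides.usda")

# A simple shot-level placement override on the character.
UsdGeom.Xform(char_prim).AddTranslateOp().Set(Gf.Vec3d(0.0, 0.0, 5.0))
stage.GetRootLayer().Save()
```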

SESSION: Face Capture and Acquisition

Desktop-based High-Quality Facial Capture for Everyone

Advances for Digital Humans in VFX Production at Goodbye Kansas Studios

Digital humans and digital doubles are core products at Goodbye Kansas Studios, and in order to further improve their quality and related workflows, a number of new tools have been developed and integrated into our existing pipeline.

This talk will cover some of these implementations, such as offline generation of blendshapes based on facial scans, our take on the sticky-lips problem, and an efficient implementation of render-time dynamic skin microstructure. Furthermore, we look into future usages and how Universal Scene Description (USD) [Pixar Animation Studios 2021] can serve as a helpful tool for facial rigs.

High-fidelity facial reconstruction from a single photo using photo-realistic rendering

We propose a fully automated method for realistic 3D face reconstruction from a single frontal photo that produces a high-resolution head mesh and a diffuse map. The photo is input to a convolutional neural network that estimates the weights of a morphable model to produce an initial head shape, which is further adjusted through landmark-guided deformation. Two key features of the method are: 1) the network is exclusively trained on synthetic photos that are photo-realistic enough to learn real shape-predictive features, making it unnecessary to train with real facial photos and corresponding 3D scans; 2) the statistical errors of the landmarking are incorporated into the reconstruction for optimal accuracy. While the method is based on a limited amount of real data, we show that it robustly and quickly performs plausible face reconstructions from real photos.
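
Schematically, the two stages can be summarized as a linear morphable-model evaluation followed by a landmark-guided correction; the sketch below uses hypothetical inputs and a simplified error weighting, not the paper's exact formulation.

```python
# Schematic sketch: network-predicted weights applied to a linear morphable
# model, then a simple landmark-guided deformation that nudges the mesh toward
# landmark targets, down-weighted by each landmark's statistical error (sigma).
import numpy as np

def reconstruct(mean_shape, basis, weights):
    """Linear morphable model: vertices = mean + basis @ weights."""
    return mean_shape + (basis @ weights).reshape(mean_shape.shape)

def landmark_refine(vertices, landmark_idx, target_3d, landmark_sigma, strength=1.0):
    """Pull landmark vertices toward their targets; noisier landmarks move less."""
    refined = vertices.copy()
    w = strength / (1.0 + landmark_sigma)          # smaller step for noisier landmarks
    delta = target_3d - vertices[landmark_idx]
    refined[landmark_idx] += w[:, None] * delta
    return refined

# Shapes only, for illustration: V vertices, K model components, L landmarks.
V, K, L = 5000, 80, 68
verts = reconstruct(np.zeros((V, 3)), np.random.randn(V * 3, K), np.random.randn(K))
verts = landmark_refine(verts, np.arange(L), np.zeros((L, 3)), np.ones(L))
```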

ABBA Voyage: High Volume Facial Likeness and Performance Pipeline

For the ABBA: Voyage concert experience, Industrial Light & Magic (ILM) was tasked with digitally time-travelling the iconic band’s members Agnetha, Anni-Frida, Björn and Benny back to their prime-era appearances. For the duration of this fully computer-generated concert, four continuous photo-real digital human facial performances had to be synthesised, driven by their original present-day counterparts and stand-in young actors. This talk will dive into the extensive research and development undertaken to cater for high-volume facial capture and processing, true-to-likeness face retargeting, and additional techniques for breaking through the uncanny valley.

SESSION: Applying Intelligence: AI and ML

Accelerating facial motion capture with video-driven animation transfer

We describe a hybrid pipeline that leverages: 1) video-driven animation transfer [Moser et al. 2021] for regressing high-quality animation under partially-controlled conditions from a single input image, and 2) a marker-based tracking approach [Moser et al. 2017] that, while more complex and slower, is capable of handling the most challenging scenarios seen in the capture set. By applying the most suitable approach to each shot, we obtain an overall pipeline that, without loss of quality, is faster and requires less user intervention. We also improve the prior work [Moser et al. 2021] with augmentations during training to make it more robust for the Head Mounted Camera (HMC) scenario. The new pipeline is currently being integrated into our offline and real-time workflows.

Training a Deep Remastering Model

The success of video streaming platforms has pushed studios to make TV shows from their legacy catalogs available, and there is an increased demand for remastering this content. Ideally, film reels are re-scanned with modern devices directly into a high-quality digital format. However, this is not always possible, as parts of the original film reels can be damaged or missing, and the content is then available in its entirety only in the broadcast version, typically NTSC. In this work, we present a deep learning solution to bring the NTSC version up to the quality level of the new scans, which would otherwise be impossible with existing tools.

Creating Life-like Autonomous Agents for Real-time Interactive Installations

This talk briefly describes the implementation of a complex virtual ecosystem of autonomous agents for the purpose of an art installation. The autonomous agents, called Aerobes, are inspired by the lifecycle of the Aurelia sp. jellyfish, and use artificial life techniques to simulate the behavior of two distinct types of organisms. We describe the process of using ethological research of organisms to design an artificial life system in a way that both creates a cohesive simulacrum of life-like behavior and allows for compelling interactions with audiences. We created complex behaviors for the Aerobes using low-level schemata that encapsulate individual goal-directed behaviors, and combined schemata to build behaviors that appear biomimetic. In order to give each agent the appearance of individuation, we mapped the underlying parameters of individual schemata and behaviors to personality traits to create a cohesive psychographic resource for autonomous agents that allowed for variance in decision-making and behaviors without additional computational complexity. The final artificial life system was then used to control the Aerobes in In Love With The World, a public art installation hosted at the Tate Modern’s Turbine Hall for four months.
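
A toy sketch of this schema-combination idea, with invented names, might look as follows: each schema proposes a steering vector and a per-agent personality trait scales its contribution, so agents individuate without additional per-frame cost.

```python
# Toy sketch of personality-modulated schemata (invented names): each schema
# proposes a steering vector; a per-agent trait value scales its weight.
import numpy as np

class Schema:
    """One low-level goal-directed behavior proposing a steering direction."""
    def __init__(self, name, trait):
        self.name = name
        self.trait = trait          # personality trait that modulates this schema

    def steer(self, agent, world):
        raise NotImplementedError

class SeekLight(Schema):
    def steer(self, agent, world):
        return world["light_pos"] - agent["pos"]

class AvoidCrowd(Schema):
    def steer(self, agent, world):
        return agent["pos"] - world["crowd_center"]

def combined_steering(agent, world, schemata):
    total = np.zeros(3)
    for s in schemata:
        weight = agent["personality"].get(s.trait, 0.5)   # trait value in [0, 1]
        total += weight * s.steer(agent, world)
    return total

agent = {"pos": np.zeros(3), "personality": {"curiosity": 0.9, "shyness": 0.2}}
world = {"light_pos": np.array([1.0, 2.0, 0.0]), "crowd_center": np.array([0.0, -1.0, 0.0])}
schemata = [SeekLight("seek_light", "curiosity"), AvoidCrowd("avoid_crowd", "shyness")]
print(combined_steering(agent, world, schemata))
```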

SESSION: LookDev and Procedural Patchwork

Embroidery and Cloth Fiber Workflows on Disney’s “Encanto”

Walt Disney Animation Studios’ “Encanto” tells the tale of an extraordinary family, the Madrigals, who live in the hidden mountains of Colombia. The garments are an important aspect of the characters’ design and express their individual personalities. Accurate cloth looks have been difficult to achieve with our traditional look development workflow. We present the techniques utilized to create the varied and complex fiber-level cloth features in the film. In order to produce the desired level of geometric detail, we developed a new workflow that procedurally models each cloth fiber in Houdini and then binds the resulting curves to the clothing geometry via Disney’s XGen. We also extended our embroidery workflow to support a wide variety of embroidery types and styles which are exemplified by Mirabel’s outfit and include needle paint, knot, and rope stitching.

How Ron’s Gone Wrong Went Right With Procedurally Generated Vector Graphics

In the feature animation film Ron’s Gone Wrong, there are many robot characters designed to have their surfaces work like displays. The acting of those characters relied mostly on moving graphic elements shown on their surfaces: animated faces, costumes and props. All graphic elements had to be art-directed, requiring the animation department to be able to control them interactively.

We propose a procedural workflow to produce animated textures from a rig, apply filters and project them onto the character’s surface to mimic a screen-like effect. The workflow proved to be flexible and scalable to address many characters in production, with several levels of complexity and a lot of creative input.

Lightyear Look Development - Materials and Beyond

Lightyear presented interesting look development challenges. A widely familiar character, Buzz, needed to remain recognizable, but as a human being instead of a toy. Similarly, the environments in the film needed to buttress that look with cohesion and authenticity. Additionally, Buzz and the other characters needed stylized hair in order to fit in with the look of the film. All of this needed to work seamlessly together to create a cohesive look for Lightyear. In light of this, Lightyear consolidated the look teams from sets, characters and grooming all into one group to make a unified world on the screen. Come and learn about the journey we took to get there!

SESSION: Simulation Sampler

Hair Emoting with Style Guides in Turning Red

For Pixar’s feature film Turning Red, the grooming and simulation teams faced the challenge of handling characters with millions of fur and hair curves, which often needed to behave differently in each shot, reflecting the characters’ emotional states. This work describes new tools developed to assist artists in managing and sculpting these large amounts of fur and hair. In particular, we present a novel surface-aware technique for curve deformation that interpolates hair sculpts at varying levels of detail, accompanied by a customized user interface for interactively browsing hair layers.

Sancho: The Cursed Conquistador In Jungle Cruise

In Jungle Cruise, Sancho is one of the cursed conquistadors; his partially-decomposed body is composed of honeycombs that honeybees ride on, with honey dripping off all the time. His topology changes throughout the sequence and needs to interact with the surrounding environment, which creates unique visual and technical challenges. To achieve this, we established a procedural workflow from modeling to rendering that is both efficient and art-directable.

Eternals: Tackling The Large Scale Water Whirlpools

In Eternals’ third act, we were faced with the challenge of creating extremely large water whirlpools that spread for kilometres in width and depth. It was imperative that we approach this in an optimal way that allowed artistic control and reasonable turnaround times whilst maintaining a plausible and realistic look. This article focuses on the methodology used and how we took advantage of our in-house package Loki to achieve the results.

SESSION: Virtual Presentations (arranged by title)

A Fast & Robust Solution for Cubic & Higher-Order Polynomials

We present a computationally-efficient and numerically-robust method for finding real roots of cubic and higher-order polynomials. It begins with determining the intervals where a given polynomial is monotonic. Then, the existence of a real root within each interval can be quickly identified. If one exists, we find the root using a stable variant of Newton iterations, providing fast and guaranteed convergence and satisfying the given error bound.
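
A compact Python illustration of this strategy (not the authors' optimized implementation) recursively locates the derivative's roots to obtain monotonic intervals, then runs Newton iterations safeguarded by bisection wherever a sign change guarantees a root:

```python
# Sketch of root finding via monotonic intervals: the derivative's real roots
# split the domain into monotonic pieces; a sign change on a piece guarantees
# exactly one root, found here with Newton iterations plus a bisection safeguard.
def polyval(c, x):                       # c = [c0, c1, ...]; p(x) = sum(ci * x**i)
    return sum(ci * x**i for i, ci in enumerate(c))

def polyder(c):
    return [i * ci for i, ci in enumerate(c)][1:]

def real_roots(c, lo=-1e6, hi=1e6, eps=1e-12):
    if len(c) <= 2:                      # constant or linear polynomial
        return [] if len(c) < 2 or c[1] == 0 else [-c[0] / c[1]]
    crit = [x for x in real_roots(polyder(c), lo, hi, eps) if lo < x < hi]
    pts = [lo] + sorted(crit) + [hi]     # endpoints of monotonic intervals
    roots = []
    for a, b in zip(pts, pts[1:]):
        fa, fb = polyval(c, a), polyval(c, b)
        if fa == 0.0:
            if not roots or abs(roots[-1] - a) > eps:
                roots.append(a)
            continue
        if fa * fb > 0:                  # no sign change: no root in this interval
            continue
        x = 0.5 * (a + b)
        for _ in range(100):             # Newton iterations with bisection safeguard
            fx, dfx = polyval(c, x), polyval(polyder(c), x)
            step = fx / dfx if dfx != 0 else 0.0
            x_new = x - step
            if step == 0.0 or not (a < x_new < b):
                x_new = 0.5 * (a + b)    # fall back to bisection
            if polyval(c, a) * polyval(c, x_new) <= 0:
                b = x_new
            else:
                a = x_new
            if abs(x_new - x) < eps:
                x = x_new
                break
            x = x_new
        roots.append(x)
    return roots

print(real_roots([-6.0, 11.0, -6.0, 1.0]))   # (x-1)(x-2)(x-3) -> approx [1.0, 2.0, 3.0]
```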

Artistically Directable Walk Generation

We present a framework for artistically directable walk generation. A generative network is trained using a motion capture dataset and a manually animated collection of walks. To accommodate an animator’s workflow, each walk is represented as a sequence of key poses. The generative framework allows a set of traits to be specified, including gender, stride, velocity and weight. A generated walk is designed to be the starting point when blocking an animation: an animator can introduce new keys on the controls.

Boss Baby: Foamy Business

In a world run by babies, the most potent weapon in The Boss Baby: Family Business was, appropriately, a loveable children’s craft material: foam. Needing to show foam as both a friendly, intimate plaything and a powerful, massively dangerous adversary, we extended our proprietary Material Point Method solver to represent close-up interactions retaining the mass of the frothy material, as well as creating large-scale, explosive volumetric setups. To retain a consistent look, we created a robust geometric surface and textural shader that was able to hold its form whether fitting in a handheld glass or expanding at speeds and scales to fill an entire courtyard.

Building an Illustrated World in The Bad Guys

The artistic style of The Bad Guys is inspired by the strong and simplified details of 2D illustration, with its hand-drawn imperfections. As with traditional illustration, our film needed techniques to selectively apply detail and thoughtfully deconstruct objects, focusing on artistic flexibility while maintaining scalability.

Clothing Suite: Interactively Design Complex Garments

In this talk, we discuss the proprietary toolset created to move our clothing look-design process from 2D into 3D. The Clothing Suite is an easy-to-use, interactive toolset for creating complex, art-directed garments constructed from woven curves. It allows artists to build and decorate textiles in real time through a three-dimensional sculptural approach while accommodating a highly deformed animation style. This lets the look-dev artist design clothing in 3D space in a fast and highly iterative process instead of relying on refined flat art or complex geometry manually sculpted into the model.

Cracking the Snake Code on The Bad Guys

A snake is simply a tube: it has none of the shoulders or hips that are traditionally complicated to rig on human characters. However, rigging a snake is complicated enough even just for slithering actions. Mr. Snake in the film The Bad Guys uses his tail like an arm, acts like a human, walks like a human, and even plays a guitar! This talk presents the wide variety of rigging techniques that were used to achieve his unique cartoony actions.

Creating a Planet and Clouds Lightyears Away

For Lightyear, Space Ranger Buzz Lightyear’s exploits take him on a journey around the planet T’Kani Prime and its neighboring star. In order to create realistic planets as seen from his starship, we built a new workflow for creating procedural planet terrains and volumetric clouds as seen from space. These techniques needed to produce realistic results but also be highly art-directable, to help the audience believe Buzz could get to Infinity and Beyond.

Deep Learned Face Swapping in Feature Film Production

In visual effects for film, replacing stunt performers’ facial likenesses with those of the actors they double for using traditional computer graphics methods is a multi-stage, labor-intensive task. Recently, deep learning techniques have made a compelling case for training neural networks to take an image of one person’s face and convincingly infer a rendered image of a second person’s face under a previously unseen perspective, pose and lighting environment.

We discuss a novel method for bringing deep neural network face swapping to feature film production that utilizes facial recognition to discover training data. Our method further innovates by utilizing traditional CG assets to address some of the shortcomings of ML techniques. Combined with a technique for feature engineering during training dataset assembly, our Face Fabrication System enables Wētā FX to deliver final-picture quality for use in production.

Demystifying the Python-Processing Landscape: An Overview of Tools Combining Python and Processing

Processing is composed of a programming language and an editor for writing and compiling code, providing a collection of special commands to draw, animate, and handle user input using Java. Python Mode for Processing (also referred to as Processing.py) leverages Jython, a Java implementation of Python, to interface with Processing’s Java core. One can install and activate Python Mode in Processing using a button in the IDE interface. Python Mode enables Python syntax in the IDE (instead of Java) but has its limitations: it is source-compatible with Python 2.7 (not 3+) and does not support CPython libraries (such as NumPy). Several promising new Python-Processing tools have emerged, but this proliferation of alternatives can confuse would-be users. This talk maps out the Python-Processing landscape, offering insight into the different options and providing direction for beginners, teachers, and more accomplished programmers keen to explore Python as a tool for creative coding projects.
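
For reference, a minimal Python Mode sketch run inside the Processing IDE keeps the familiar setup()/draw() structure while using Python syntax:

```python
# A tiny Python Mode (Processing.py) sketch: the familiar setup()/draw()
# structure and drawing commands, written in Python instead of Java. Runs
# inside the Processing IDE with Python Mode active.
def setup():
    size(400, 400)
    noStroke()

def draw():
    background(20)
    fill(255, 200, 0)
    # circle follows the mouse; mouseX/mouseY are provided by Processing
    ellipse(mouseX, mouseY, 50, 50)
```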

FirstPersonScience: An Open Source Tool for Studying FPS Esports Aiming

First-person shooter (FPS) games are dominant in the competitive gaming and esports community. However, relatively few tools are available for experimenters interested in studying the mechanics of these games in a controlled, repeatable environment. While other researchers have made progress with one-off applications as well as custom content and mods for existing games, we are not aware of a general-purpose application for empirically studying a broad set of user interactions in the FPS context. For the past few years our team has developed, maintained, and deployed First Person Science (FPSci), a tool for controlled user studies in FPS gaming. FPSci experimenters configure their desired base environment, as well as conditions and user preferences, using a simplified JSON-esque set of input configurations, and results are stored in an SQLite database. By allowing finer-grained parametric control of the environment together with frame-wise logging of player state and performance metrics, we achieve a level of control not offered by other solutions. FPSci is available as an open source project 1 under a CC BY-NC-SA 4.0 license.

From Procedural Panda-monium to Fast Vectorized Execution using PCF Crowd Primitives

In animation and VFX, crowds are too often considered an “edge case”, to be handled by a specialized pipeline outside the main workflows. Requirements of scale and a traditional reliance on history-based simulation have been obstacles to properly building crowd systems into the core functionality of digital content creation software. Pixar’s crowds team has worked to reverse this trend, developing a fast vectorized crowd system directly within the execution engine of our proprietary animation software, Presto. Dubbed Pcf, for Presto Crowds Framework, this system uses aggregate models, called crowd primitives, to give artists directly manipulable crowds while maintaining proceduralism for mass edits. Like traditional models, they contain rigs (a graph of operators) which run parallelized through Presto’s execution engine [Watt et al. 2014], but rather than posing points, they set joint angles and blendshape weights to pose entire crowds. The core operations of crowd artists (placement, casting, clip sequencing, transitions, look-ats, and curve following) are all well expressed as rigging operators (known as “actions” in Presto parlance) in Pcf. They provide interactive control of entire crowds in context, using the same animation tool as our layout artists, animators, simulation TDs, etc. The first film to use Pcf, Turning Red, reaped massive benefits by building a stadium’s worth of characters in a fraction of the time of previous films’ efforts. However, because Pcf is tightly integrated into Presto, the benefits extended beyond efficiency for the crowds team. By providing our layout department with Pcf rigging controls, they were able to shoot inside the crowd and use procedural operators to clear room for the camera and maintain crowd density only where needed. Similarly, the principal animation team could animate main characters in the context of the crowd they were acting in, providing the proper context which is all too often absent in crowd shots. Taken together, Pcf is a huge step forward in bringing crowds out of the margins and into the core of animation workflows at Pixar, demonstrating that fast vectorized crowds can be an integral part of digital content creation software.

Graphic 2D-Inspired Characters in The Bad Guys

The world of The Bad Guys is an homage to illustration and our characters take inspiration from elements of hand-drawn 2D animation. In this talk we present various challenges of developing graphic characters with solutions that scale for feature production.

Gravity Preloading for Maintaining Hair Shape Using the Simulator as a Closed-box Function

In animation, hairstyles can often be modeled without considering physics. One side effect of this workflow is that external forces such as gravity will deform the groom away from the designed shape when simulated. We present a simple optimization algorithm that preloads the rest shape to compensate for external forces and maintain the groom shape during simulation. The algorithm provides artistic control over how much force is compensated for at each vertex.
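
The core idea can be sketched as a fixed-point iteration around the closed-box simulator; `simulate` and the per-vertex weights below are placeholders, and the talk's actual optimization may differ.

```python
# Minimal sketch of rest-shape preloading: the simulator is treated as a
# closed-box function simulate(rest) returning the equilibrium shape under
# gravity; the rest shape is nudged until that equilibrium matches the groom.
import numpy as np

def preload_rest_shape(simulate, target, weights, iterations=20):
    """Find a rest shape whose simulated equilibrium matches `target`.

    weights: per-vertex values in [0, 1] controlling how much of the
    external-force sag is compensated (artistic control).
    """
    rest = target.copy()
    for _ in range(iterations):
        sagged = simulate(rest)                          # equilibrium under gravity
        rest += weights[:, None] * (target - sagged)     # fixed-point update
    return rest

# Toy "simulator": every vertex sags by a constant gravity offset.
def toy_simulate(rest):
    return rest + np.array([0.0, -0.1, 0.0])

target = np.zeros((4, 3))
weights = np.ones(4)
rest = preload_rest_shape(toy_simulate, target, weights)
print(toy_simulate(rest))    # ~= target: the groom holds its designed shape
```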

How We Reconstructed the Neighborhood Destroyed by the Tulsa Race Massacre

Making a Guinea Pig Monster in The Bad Guys

In the film The Bad Guys, the villain uses a mind-control device to manipulate an army of guinea pigs, which evolve through various stages of scale and coalescence, ultimately forming a large monster. In order to create congruity between these different scales of motion, we created targeted solutions and worked cross-departmentally in an effort to maintain artistic control throughout the process.

Modeling Animated Jumbo Floral Display on Disney’s “Encanto”

Walt Disney Animation Studios’ “Encanto” called for many dazzling musical numbers with special effects. Isabela’s song “What Else Can I Do?” was one of the most technically challenging. In this sequence, Isabela finds freedom from the pressure and constraints of society’s expectations, and lets her true self shine. Throughout this musical number, her environment represents her emotional transformation through the colorful, expressive, and hypnotizing flower patterns that appear on the walls of her bedroom. This effect required hundreds of thousands of flowers to change colors and move in kaleidoscopic patterns. This talk discusses the technical challenges encountered in bringing Isabela’s room to life and the subsequent artistic and technical solutions.

Modular Scene Filtering via the Pixar Hydra 2.0 Architecture

Pixar’s Hydra project began as an abstract scene interface to an OpenGL-based renderer intended for interactive viewports across multiple applications. As the project added integrations with other renderers (including path tracers), it became clear that it needed a richer scene interface to convey a broader set of renderer features, as well as a more structured and modular way to resolve the scene into rendering primitives.

We’ll discuss the Hydra 2.0 architecture that was developed to address this problem, with examples from two scene filtering cases:

Powering up Rig Deformation: Shot Sculpting on DC League of Super-Pets

We present the shot sculpting toolset used on DC League of Super-Pets to enhance rig deformation in order to meet creative goals in animation shots. This toolset repurposes rigging tools, primarily Animal Logic’s proprietary deformation system Bond [Baillet et al. 2020], to give animators intuitive and flexible ways to expand upon a rig’s deformation stack and push the rig past its intended range of motion. Shot sculpting workflows have been fully integrated into our pipeline and designed for minimal playback speed loss, providing an overall seamless experience for artists.

Revamping the Cloth Tailoring Pipeline at Pixar

This work presents the most recent updates to the cloth tailoring pipeline at Pixar. We start by reviewing the evolution of cloth authoring tools used at Pixar from 2001 to the present day. Motivated by previous approaches, we introduce a structured workflow for cloth tailoring that manages multiple mesh versions concurrently. In our implementation, artists interact primarily with a low-resolution quad-dominant mesh, which defines the garment look as well as setups for rigging and simulation. Our system then converts this coarse input model into a triangulated mesh for simulation and a quadrangulated subdivision surface for rendering. To this end, we developed a new remeshing tool that outputs surface triangulations with adaptive resolution and conforming to edge constraints. We also devised procedural routines to generate render meshes by applying fold-over thickness, refining the mesh, and inserting seams. In addition, we introduced a suite of algorithms for transferring input attributes onto the derived meshes, including UV shells, face colors, crease edges, and vertex weights. Our revamped pipeline was deployed on Pixar’s feature films Turning Red and Lightyear, producing hundreds of high-quality garment meshes.

Self-Intersection-Aware Deformation Transfer for Garment Simulation Meshes

Deformation transfer is a well-known technique that copies deformations between two meshes to re-use animation. One drawback of existing methods is that they can introduce self-intersections on the resulting mesh, even when there are no self-intersections on the source mesh. We propose a novel deformation transfer technique, called Self-Intersection-aware Deformation Transfer (SIDT), that suppresses self-intersections on the resulting target mesh. We demonstrate that SIDT helps to produce simulation-friendly meshes for garment simulation.

Sex and Gender in the Computer Graphics Research Literature

Space Rangers with Cornrows: Methods for Modeling Braids and Curls in Pixar’s Groom Pipeline

This presentation is a debrief of the processes and methods added to Pixar’s groom pipeline to create the hairstyles of Lightyear characters Alisha and Izzy Hawthorne. The processes include novel ways of generating braids, curls, braid partitioning hairs (edge hairs), and graphic shapes populated with hair.

The Art of Cloth Simulation on The Boss Baby: Family Business

DreamWorks Animation’s The Boss Baby: Family Business has a strong design language. Clothing is an essential part of characterization, and we needed to maintain strong graphic silhouettes on garments. The characters change clothes frequently. Additional challenges included dream sequences, animated scaling and downsizing of characters, an extensive clothing catalog, and large crowds wearing multiple layers of winter clothing. This talk illustrates the various techniques used to tackle these challenges and the use of animation-driven, physics-based garment simulation to emphasize the art of dressing an animated feature.

Thin films in production with extended anisotropic kernels

Particles offer a convenient way of modeling fluid simulations, but ultimately the goal in photorealistic VFX is to extract a surface representing the fluid that looks true to life. A characteristic of dynamic fluid simulations is thin fluid sheets, from which it is challenging to extract a surface; bumpy artifacts are frequently exposed in these features. Anisotropic smoothing kernels [Yu and Turk 2013] are a method to combat this, but they present their own usability challenges. Similar ideas have been used before in the film industry [Museth et al. 2007].

This paper extends the anisotropic kernels technique to provide an automatic scaling correction feature so artists may work in normalized terms. We also use a metaball implicit function over smoothed-particle hydrodynamics (SPH) kernels to achieve smooth surfaces. With these extensions, artists can quickly create high quality surfaces with fewer particles.
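
For context, the core of the underlying anisotropic-kernel construction [Yu and Turk 2013] can be sketched as a weighted covariance of neighbor positions whose eigen-decomposition defines an ellipsoidal kernel; the weighting, clamping, and constants below are illustrative, and the talk's scaling correction and metaball extensions are not shown.

```python
# Sketch of per-particle anisotropy estimation: a smoothed center and a
# weighted covariance of neighbor positions are computed; the covariance
# eigenvectors/eigenvalues define a stretched (anisotropic) smoothing kernel
# that stays wide along thin sheets and narrow across them.
import numpy as np

def anisotropy(neighbors, radius):
    """Return (smoothed center, 3x3 anisotropy matrix) for one particle."""
    d = np.linalg.norm(neighbors - neighbors.mean(axis=0), axis=1)
    w = np.maximum(0.0, 1.0 - (d / radius) ** 3)          # simple smooth weights
    w /= w.sum()
    center = (w[:, None] * neighbors).sum(axis=0)          # weighted mean
    diffs = neighbors - center
    cov = (w[:, None, None] * np.einsum("ni,nj->nij", diffs, diffs)).sum(axis=0)
    evals, evecs = np.linalg.eigh(cov)
    evals = np.maximum(evals, 1e-4 * evals.max())          # limit extreme flattening
    # Kernel stretch matrix: long axes along the sheet, short axis across it.
    G = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    return center, G

pts = np.random.randn(64, 3) * np.array([1.0, 1.0, 0.05])  # a thin particle sheet
center, G = anisotropy(pts, radius=2.0)
```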

Tracking Character Diversity in the Animation Pipeline

As we explore a broad range of characters and stories in our films, it has become increasingly valuable to view breakdowns of our character pools and selections by demographic: to build and use our assets efficiently, reinforce storytelling and world-building choices, and ensure consistent decision-making across the pipeline. With the Character Linker App within Traction (Pixar’s asset- and shot-tracking tool), production is able to see a live breakdown of the character pool as assets are built, and of sequence/shot composition as it is populated, with the ability to visualize by a range of categories including gender, ethnicity, body type, and age, among others. Each film can define and populate these categories specific to its story, set breakdown goals to measure progress against, and iterate on crowd asset selections to ensure each character is utilized to the fullest.

Using STS to Bridge Long Histories of Blackness, Specularity, and Rendering

Science and Technology Studies (STS) is an academic interdiscipline that uses sociological and historical methods to study the interrelations of society and technoscience. This paper uses an STS approach to examine the historical feedback loops involved in “rendering” the shine and specularity of Black skin across painting, video, and photography, and argues that computer graphics programmers and artists should question some of the fundamental assumptions of their rendering workflows, both to create more equitable representations of the human form and to understand how computational renderings influence the real world they represent.

WIZ: DreamWorks GPU-Accelerated Interactive Hair/Fur Deformation and Visualization Toolset

WIZ is a hair/fur visualization toolset at DreamWorks that uses the power of the GPU for both interactive deformation and display in the 3D viewport. WIZ closely approximates the final render hair motion on the GPU based on skin and guide-curve motion, and provides the artist with various options for render hair deformation. It can use the character’s hair textures to closely approximate the final look of the hair. Hundreds of thousands or millions of curves in motion can be visualized at close to real time, which helps the artist evaluate hair/fur motion iteratively in a shot without having to do a render or use much slower existing viewport displays, providing significant time and resource savings.
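
The generic guide-based deformation that such a toolset accelerates can be sketched on the CPU as a distance-weighted blend of guide-curve deltas per render hair; this is an assumption-laden sketch of the general idea, not WIZ's GPU implementation.

```python
# CPU sketch of guide-based hair deformation: each render hair is moved by a
# distance-weighted blend of its nearest guide curves' per-frame deltas from
# their rest positions. Each hair is independent, which is what makes this
# kind of deformation well suited to the GPU.
import numpy as np

def deform_render_hairs(render_roots, rest_hairs, guide_roots,
                        guide_rest, guide_anim, k=3):
    """rest_hairs: (H, P, 3) render hairs at rest.
    guide_rest/guide_anim: (G, P, 3) guide curves at rest and animated."""
    deformed = np.empty_like(rest_hairs)
    guide_delta = guide_anim - guide_rest                  # (G, P, 3)
    for h, root in enumerate(render_roots):
        d = np.linalg.norm(guide_roots - root, axis=1)     # distance to each guide
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + 1e-6)
        w /= w.sum()
        # Blend the nearest guides' deltas and apply them to the render hair.
        blended = (w[:, None, None] * guide_delta[nearest]).sum(axis=0)
        deformed[h] = rest_hairs[h] + blended
    return deformed

# Tiny demo: two guides moving up by 0.2 drag the render hairs with them.
H, G, P = 4, 2, 5
rest = np.zeros((H, P, 3)); roots = rest[:, 0, :]
g_rest = np.zeros((G, P, 3)); g_anim = g_rest + np.array([0.0, 0.2, 0.0])
print(deform_render_hairs(roots, rest, g_rest[:, 0, :], g_rest, g_anim)[0, 0])
```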