DigiPro '25: Proceedings of the Digital Production Symposium


OpenPBR Surface: An open shading model for physically based materials

The interchange of computer graphics scene assets, particularly for surface appearance, remains a significant challenge due to the diverse shading systems, languages, and pipelines used by different renderers and 3D engines. OpenPBR addresses this issue by providing a specification for a standardized uber-shader model that caters to the needs of industries engaged in physically based rendering. This model, developed through a collaboration between Autodesk and Adobe, builds upon the previously defined Autodesk Standard Surface and Adobe Standard Material models. It is defined via a formalism of layering and mixing of slabs, which is straightforward to implement in existing material frameworks such as MaterialX, Open Shading Language (OSL), and Material Definition Language (MDL). The specification aims to create a physically based, artist-friendly model that is powerful enough for the representation of complex surface appearances, while also practical for real-time applications. Thanks to its open governance, the project has seen significant industry adoption and collaboration, with implementations in major software such as Autodesk's Maya, 3ds Max, and Arnold, Adobe's proprietary path tracer, Blender's Cycles, and Maxon's Redshift.

Realistic Woven Cloth Shading

Realistic cloth shading has been the subject of significant and ongoing computer graphics research. Some approaches have used modeling processes to create a geometric representation of various types of fabric, from knitted yarns to woven threads. Other methods use volumetric representations of the woven pattern and/or require machine learning methods to capture and recreate the appearance of a given fabric.

Although effective, these methods lack the ability to quickly change the weave pattern or other significant characteristics of the cloth’s rendered appearance without remodeling or regenerating the required data, or they require rendering systems that are specifically designed to incorporate geometry generation, the evaluation of a volumetric representation, or a machine learning network.

In our approach, a single pattern shader written in a common shading language and operating as a point process within a generic rendering system is capable of producing all the requisite signals to create a realistic woven cloth appearance. The only data necessary to drive this shader is a simple file specifying the desired weave pattern (or alternatively, a static boolean array) and an optional color texture file that can be used to color the threads. This texture coloration can be applied either as a printed pattern or as a Jacquard-woven thread coloration. All characteristics of the warp and weft cloth threads and the weave itself are controlled via shader parameters, including the number and variation of the thread’s individual fiber plies and variations within the weave and threads. This cloth pattern shading is then combined with cloth-specific BxDF response combinations to further enhance the realism of the cloth’s appearance.
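To make the "simple file or boolean array" idea concrete, the core weave lookup can be sketched as a point process over a boolean pattern matrix. This is a hypothetical illustration, not the authors' shader code; the function and parameter names (`weave_lookup`, `warp_count`, `weft_count`) are invented for this sketch, and a production shader would be written in a shading language such as OSL.

```python
# Hypothetical sketch of a weave-pattern lookup as a point process:
# given surface coordinates (u, v) and a boolean weave matrix
# (True = warp thread on top at that crossing), report which thread
# is visible at the shaded point.

def weave_lookup(u, v, pattern, warp_count, weft_count):
    """Return 'warp' or 'weft' for the thread visible at (u, v).

    pattern: 2D list of booleans; pattern[row][col] is True when the
             warp thread lies on top at that crossing.
    warp_count, weft_count: threads per unit of u and v respectively.
    """
    col = int(u * warp_count) % len(pattern[0])   # which warp thread
    row = int(v * weft_count) % len(pattern)      # which weft thread
    return "warp" if pattern[row][col] else "weft"

# A 2x2 plain weave: warp and weft alternate at every crossing.
plain_weave = [[True, False],
               [False, True]]
```

In a real shader this boolean decision would be the starting point for deriving thread-local coordinates, fiber plies, and per-thread variation from the shader parameters described above.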

Using this approach, the visual characteristics of the cloth can be interactively adjusted as needed (even right up until the final rendering, if necessary) without requiring any remodeling or other geometry generation, fabric analysis, or specialized rendering, shading, or modeling capabilities. The ability to interactively tune the shaded cloth’s appearance is critical in order to exactly match the appearance of real cloth when CG cloth is to be composited directly next to the real cloth used on stop-motion puppets, as any discrepancy between the two will be immediately noticeable and, therefore, totally unacceptable.

Prototype to Production: Building a Lighting Workflow in Houdini for Animation

Soon after adopting USD, Walt Disney Animation Studios (WDAS) began exploring and then transitioning its legacy Maya-based lighting workflow into Houdini Solaris. We present the history of this journey, including workflow and tool development from prototype through its first use on a feature production, "Moana 2". We discuss lessons learned and the adjustments made from the original MVP workflow to the one in use today.

Automated Video Segmentation Machine Learning Pipeline

Visual effects (VFX) production often struggles with slow, resource-intensive mask generation. This paper presents an automated video segmentation pipeline that creates temporally consistent instance masks. It employs machine learning for: (1) flexible object detection via text prompts, (2) refined per-frame image segmentation and (3) robust video tracking to ensure temporal stability. Deployed using containerization and leveraging a structured output format, the pipeline was quickly adopted by our artists. It significantly reduces manual effort, speeds up the creation of preliminary composites, and provides comprehensive segmentation data, thereby enhancing overall VFX production efficiency.
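The three-stage structure described above can be sketched as a simple orchestration. Only the data flow (text-prompted detection, per-frame segmentation, then temporal tracking) is taken from the abstract; the stage functions below are placeholders standing in for the actual ML models, which the paper does not name here.

```python
# Toy sketch of the pipeline's three stages. The stage bodies are
# placeholders, not real models; they exist only to show how the
# stages compose.

def detect_objects(frame, text_prompt):
    # Placeholder: a text-prompted detector would return candidate boxes.
    return [{"label": text_prompt, "box": (0, 0, 10, 10)}]

def segment_frame(frame, detections):
    # Placeholder: refine each detection into a per-frame instance mask.
    return [{"label": d["label"], "mask": d["box"]} for d in detections]

def track_masks(per_frame_masks):
    # Placeholder: associate masks across frames so that each instance
    # keeps a temporally stable ID.
    tracks = {}
    for t, masks in enumerate(per_frame_masks):
        for i, m in enumerate(masks):
            tracks.setdefault(i, []).append((t, m["mask"]))
    return tracks

def run_pipeline(frames, text_prompt):
    per_frame = [segment_frame(f, detect_objects(f, text_prompt))
                 for f in frames]
    return track_masks(per_frame)
```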

Pseudo-Collisions: A method for preventing fur-skin intersections without physical simulation

We introduce Pseudo-Collisions (PC), a numerical, time-independent method to reduce collisions between short fur and the skin of an asset as it deforms during animation. Typically, solving these intersections in a standard workflow would require considerable time to set up and simulate each strand or its guide curves. With our solution, this can instead be achieved with a light post-process on top of the applied fur: the transformation matrix used to place each strand on the animated skin is adjusted so that its rotation component smoothly lifts the hair where necessary.

The algorithm dynamically adjusts the orientation of the animated strands to reduce the intersections using several pieces of information, including the direction of the strands, the placement matrices for each strand on the rest and animated skin meshes, and the change in the surface’s curvature due to the deformation. Pseudo-Collisions was successfully used during the production of Mufasa: The Lion King [Disney© 2024]; we also present the set of parameters made available to users to modulate the results of the correction.
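The core idea can be illustrated with a minimal sketch: when the surface bends concavely under a strand, rotate the strand's direction toward the surface normal by an angle driven by the curvature change. This is a hypothetical toy, assuming invented names (`lift_angle`, `lift_strand`, `strength`, `max_lift`); the production method operates on full per-strand placement matrices as described above.

```python
# Toy illustration of lifting a strand when the surface bends toward it.
# All names and the blend formula are illustrative, not from the paper.
import math

def lift_angle(curvature_change, strength=1.0, max_lift=math.pi / 4):
    """Map an increase in concave curvature to a lift angle (radians).
    Only a positive curvature change (surface bending toward the
    strand) produces a lift; the angle is clamped at max_lift."""
    return min(max(curvature_change, 0.0) * strength, max_lift)

def lift_strand(direction, normal, angle):
    """Rotate `direction` toward the surface `normal` by `angle`,
    smoothly raising the strand off the skin. Both vectors are
    unit-length 3-tuples, assumed perpendicular."""
    c, s = math.cos(angle), math.sin(angle)
    lifted = tuple(c * d + s * n for d, n in zip(direction, normal))
    norm = math.sqrt(sum(x * x for x in lifted))
    return tuple(x / norm for x in lifted)
```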

Rubber Hose Revival: The DreamWorks Curvy-Limb Rig

The DreamWorks Curvy-Limb rig allows animators to pose complex curves without sacrificing intuitive usability. The system adds flexibility to character limbs (arms and legs) and digits (fingers and toes) while retaining the standard arm, leg, and digit controls with which animators are already familiar. Animators can activate the extra controls as needed on the fly without having to blend into any special "states." The robustness of the Curvy-Limb has proven instrumental to the character performances of multiple DreamWorks Animation productions, including Ruby Gillman, Teenage Kraken, Trolls Band Together, and The Wild Robot.

Rumba Rig: A Procedural Rigging Framework with Direct Graph-Based Control

We present Rumba Rig, a new rigging framework that enables riggers to work directly on the final rig graph in a modular and non-linear way, without the need for an auto-rig script or abstraction layer. By reducing complexity, the framework makes rigging more accessible and easier to maintain. We demonstrate through practical examples that Rumba Rig can express complex rig constructions and deliver professional-quality rigs.

Predator: Killer of Killers--Rapid Stylized Animation Production Through Real-time Tools

In late 2023, The Third Floor’s newly born animation department was presented with an enormous challenge. Our first project was going to be a fully animated feature film featuring both human and alien characters, a large scope with complex battle sequences, and a painterly look that would require extensive look development and compositing. Furthermore, the finished film would need to be delivered in 18 months…from a standing start, with no preproduction beyond a treatment and concept art, and no crew in place for production.

By the laws that normally govern animated film production, we should have failed. Against all odds, Predator: Killer of Killers releases on Hulu on June 6th, 2025. To make this extraordinary feat possible, we embraced real-time production and developed a suite of avant-garde tools that cut some tasks from weeks down to hours. Our tools included Hummingbird, a real-time compositing tool running within the Unreal Engine, VATMobile, a Vertex-Animation-Texture-based geo cache importer and exporter that allowed efficient playback of geo caches in Unreal in shots with large numbers of characters, and Molecule, a component-based Maya rig assembly tool. These tools gave us extreme efficiency: using Hummingbird for in-engine compositing, for instance, the average shot for Killer of Killers was lit and composited to completion in less than a single artist day.

Balancing Stability and Innovation: Hybrid Runtime Environments

We present a hybrid approach to managing software runtime environments at Netflix Animation Studios (NAS), designed to address the common VFX and Animation industry challenge of balancing stability with the rapid adoption of innovative technologies. We combine a containerization technology, Apptainer [Apptainer 2025], with a dynamic software environment manager, Rez [Rez 2025], to run multiple distinct VFX Reference Platform [Platform 2025] software stacks concurrently on a single host. This approach significantly streamlines the transition between Linux distributions and accelerates the adoption of new software, allowing studios to efficiently manage diverse production needs without compromising stability or performance.

Integrated Quality Control: Improving production efficiency to achieve scale at DNEG Animation

This paper presents an overview of the Workflow Analysis initiative conducted by DNEG Animation in the Spring of 2023. The primary objective of this strategic effort was to identify inefficiencies and bottlenecks across the production process and pipeline data flow. The insights informed a range of development projects focusing on quality control checks, which collectively led to measurable gains in production efficiency, tool reliability, and overall workflow consistency. This paper details the key findings, implemented solutions, and the positive impact these changes have had on the animation production pipeline.

Combining the Benefits of Nodes and Layers in a USD World

Katana’s [Foundry 2025a] strength is its procedural node graph architecture, which provides dynamic and scalable workflows - studios are no strangers to reaching scenes containing tens of thousands of nodes. The layer-based nature of Universal Scene Description (USD) [Pixar Animation Studios 2025] presents a challenge for the node-based procedural approach. At large scales, the current procedural method of creating a USD layer per node is inadvisable, because modifying an early node could trigger a full USD Stage recomposition and reprocessing of the USD-layer-generating functions. In this extended abstract, we present an insight into the journey towards our solution: the UsdSuperLayer node type. It provides a “procedural on-demand” approach, enabling efficient management of thousands of primitives and extensive shading graphs represented as nodes.
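The recomposition problem motivating a "procedural on-demand" approach can be modeled in miniature: cache each node's generated layers and regenerate only what lies downstream of an edit, rather than rebuilding every layer whenever an early node changes. This toy model is an assumption-laden sketch for illustration only, not Katana's UsdSuperLayer implementation, and it uses plain lists in place of real USD layers.

```python
# Toy model of per-node layer caching with edit invalidation. Each node
# contributes one "layer" (here just a value from a generate function);
# layers() rebuilds only when an upstream version counter has changed.

class Node:
    def __init__(self, name, generate, upstream=None):
        self.name = name
        self.generate = generate   # function producing this node's layer
        self.upstream = upstream
        self.version = 0           # bumped whenever this node is edited
        self._cache = None         # (upstream_state, layer_list)

    def edit(self):
        self.version += 1

    def _state(self):
        # Tuple of version counters along the upstream chain.
        up = self.upstream._state() if self.upstream else ()
        return up + (self.version,)

    def layers(self):
        state = self._state()
        if self._cache is None or self._cache[0] != state:
            base = self.upstream.layers() if self.upstream else []
            self._cache = (state, base + [self.generate()])
        return self._cache[1]
```

Repeated calls to `layers()` reuse the cache; editing an upstream node invalidates only the affected chain, which is the behavior a per-node-layer system must approximate to stay responsive at scale.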

Adopting Research in Production at Netflix Animation Studios

This talk shares the experiences and lessons learned by Netflix Animation Studios’ Procedural Geometry and Simulation R&D team in adapting academic research for production workflows.

We explore example cases from our work, draw conclusions about the types of risks involved, and discuss mitigation strategies. We hope this can serve as a guide to other studios as they consider how to apply learnings from academia to their own tools.