SIGGRAPH '19: ACM SIGGRAPH 2019 Talks


SESSION: Building character

Creating photoreal creatures that audiences can connect with

This Production Talk will explore the culture shift in advertising, particularly how consumer expectations for quality content have never been higher than they are today. It will explore the use of photoreal characters and creatures to allow brands to break through the noise and connect with audiences. Michael Gregory (Creative Director) and Dan Seddon (VFX Supervisor) will take audiences through the key steps needed to achieve a photoreal creature, as well as the considerations to be made when finalizing the creature's look. Examples of Moving Picture Company's (MPC) creature and character work will be used, along with an exploration of the proprietary tools developed and used by MPC in their creative process.

From comic book to movie screen: achieving symbiosis between rigging and creature effects for Venom

The primary challenge at the heart of Visual Effects lies in the ability to translate the director's creative brief into compelling visuals using a combination of art and technology. In the case of Venom, a key requirement was to keep the character design as faithful to the comic books as possible. This talk describes the challenges tackled by the Rigging, Effects, and R&D departments at DNEG in order to bring this classic antihero to life in a photorealistic manner.

Recreating Bo Peep for Toy Story 4

In Toy Story 4, audiences rediscover Bo Peep, who returns after nearly 20 years away from the big screen. In adapting her design, we considered not only the cultural context of reviving one of our industry's first female characters, informing our story and design, but also the technology now available, which drove explorations in shading and simulation, among other areas. Our talk describes Bo Peep's journey through production: from initial research into decades-old reference and visualization, to the modern results and strides we've taken across both creative and technical specialties.

SESSION: Making faces

Mesh wrap based on affine-invariant coordinates

We present a new technique to transfer the mesh connectivity between 3D models of different shapes. In contrast to prior work, our method is designed to wrap meshes under large, locally non-rigid deformations, which are commonly found in feature animations. To achieve this goal, we enrich the traditional iterative closest point scheme with mesh coordinates that parametrize the edge spans of a desired tessellation invariant to locally affine transformations. As a result, we produce surfaces that wrap a target geometry accurately, while resembling the patch layout of the source mesh. Our implementation also offers an interactive workflow to assist the authoring of curve correspondences. We employed this tool to wrap 600 humanoid assets to a reference mesh connectivity, spanning characters modeled over the last 15 years at Pixar.

Muscle-based facial retargeting with anatomical constraints

We present a physically based facial retargeting algorithm that is suitable for use in high-end production. Given an actor's facial performance, we first run a targeted muscle simulation on the actor in order to determine the actor blendshape muscles that best match the performance. The deformation of the actor blendshape muscles is then transferred onto the creature to obtain the corresponding creature blendshape muscles. Finally, the resulting creature blendshape muscles are used to drive a creature muscle simulation which yields the retargeted performance. In order to ensure the anatomically correct placement of the muscles, cranium, and jaw, we introduce novel anatomically motivated constraints during the transfer process. Compared to previous approaches, these constraints not only improve the faithfulness of the retargeted creature performance to the actor performance but also eliminate spurious visual artifacts.

Facial pipeline in Playmobil: The Movie

In this paper, we present the technical pipeline deployed at ON Animation Studios to handle the specific requirements of facial animation on the Playmobil movie. In line with the artistic requirements of this production, we developed a completely texture-free solution that gives artists the ability to animate 2D facial features on a three-dimensional face while having real-time ray-traced feedback in the viewport. This approach provides full control over the shapes, and since the final result is computed at render time, the visual style can be controlled until the very end of the workflow.

The beauty of breaking rhythms: affective robot motion design using Jo-Ha-Kyū of bunraku puppet

Bunraku puppetry, one of the UNESCO intangible cultural heritages, produces some of the most beautiful puppet motions in the world. The sophisticated motions of a Bunraku puppet can express emotions interactively while keeping a fixed facial expression, simultaneously overcoming the so-called "Uncanny Valley". In the present paper, we study Bunraku motions using the famous concept of "Jo (introduction)-Ha (breaking)-Kyū (rapid)"; these emotional motions are synchronized with the jo-ha-kyū rhythm. As a result, an android robot can express different affective motions synchronized to different types of emotional chant or narration with a story, which we call Jōruri.

SESSION: VR/AR real magic

A look into five years of locomotion in virtual reality

Survios, a virtual reality (VR) game developer dedicated to building active, immersive experiences that push the limits of VR innovation, developed a range of proprietary locomotion systems to address simulator sickness while preserving immersive gameplay.

Mica: a photoreal character for spatial computing

Mica is an autonomous, photoreal human character that runs on the Magic Leap One spatial computing platform. The past ten years have seen tremendous improvements in virtual character technology on screen, in film and games. Spatial computing hardware such as Magic Leap One allows us to take the next step: interacting with virtual characters off the screen. We believe that virtual characters who are aware of and can interact directly with users in real-world environments will provide a uniquely compelling experience.

Porting your VR title to Oculus Quest

Survios, a virtual reality (VR) game developer dedicated to building active, immersive experiences that push the limits of VR innovation, ported their VR boxing title, Creed: Rise to Glory, to the newly released Oculus Quest. Porting to this new mobile VR platform is complex and demands extra creativity from developers compared to porting to the previous generation of consoles. In this paper, Eugene Elkin, Senior Software Engineer at Survios, will share insights and lessons learned from the process, covering the target hardware and its capabilities, rendering techniques, game optimization, and more.

SESSION: Classic art, cutting edge

The making of "Age of Sail"

"Age of Sail" tells the story of William Avery, an old sailor adrift and alone in the North Atlantic. When Avery rescues Lara, who has mysteriously fallen overboard, he finds redemption and hope in his darkest hours. In this production talk we'll go behind the scenes in the making of this multi-platform immersive animated short. Some of the unique challenges we'll discuss: bringing a 2D illustrated style to life in the medium of 6DoF VR with new non-photorealistic rendering techniques; immersing the viewer in a storm-tossed ocean without making them seasick; adapting a single story to multiple mediums (desktop/mobile VR, 360° video, and 2D film); creating a better sounding and more responsive sound mix with multiple surround formats and a new spatialization model; and optimizing it all to run at 60fps on a mobile phone.

Bone mother: the challenges of making an indie 3D printed film

After almost 5 years with a small crew, Bone Mother is the first National Film Board of Canada film to explore 3D printing. The filmmakers used 3D printing to achieve a wide range of expressions and dialogue, but also to help build the puppets, sets and rigs. While developing new production pipelines that incorporated this technology into stop motion, the team found that as each problem was solved, a new challenge surfaced.

2D animation in the VR clouds: the making of Disney's "A Kite's Tale"

The experimental animated virtual reality short "A Kite's Tale" required CGI and hand-drawn characters to interact in a highly art-directed environment made of spectacular clouds. In this talk we'll examine the workflows developed to create the short, with particular emphasis on the integration of hand-drawn animation and performant real-time cloud rendering.

Preserving virtual reality artworks: a museum perspective

As artists increasingly engage with virtual reality (VR) technologies, the artworks they produce are beginning to enter the collections of cultural heritage institutions. Museums, libraries and archives are therefore assessing how these complex works might be brought into collections and how they might be stabilised to ensure they can be exhibited in the long term. Reporting on ongoing research at Tate in London, in this talk we will introduce our perspective as conservators of time-based media (broadly understood as art with a technological component that unfolds over time) on the challenges we face in preserving virtual reality artworks. We expect this to be of interest to SIGGRAPH attendees who are considering the legacy of their creations and the ways in which virtual reality artworks (and related technologies) might be stabilised in order to secure their future.

Experiences of treating phantom limb pain using immersive virtual reality

Phantom limb pain (PLP) is a phenomenon that affects millions of amputees worldwide. Its causes are poorly understood, and traditional forms of pain relief are largely ineffective. For over a decade virtual reality (VR) has shown tantalising possibilities of treating or managing this debilitating condition. Until recently however, the cost, complexity and fragility of VR hardware made exploring this unorthodox approach at any meaningful scale challenging; patients have had to travel to the location of specialist equipment to participate in studies, and missed appointments, dropouts or broken hardware have hampered data-gathering. Improvements in 'consumer grade' VR headsets now make larger trials of this visual approach to pain management viable. We describe a trial of a VR system for PLP reduction utilising lightweight, standalone and low-cost VR hardware suitable for independent home use.

Immersivemote: combining foveated AI and streaming for immersive remote operations

Immersivemote is a novel technology combining our previously presented foveated streaming solution with our new foveated AI concept. While we have previously shown that foveated streaming can achieve 90% bandwidth savings compared to existing streaming solutions, foveated AI is designed to enable real-time video augmentations that are controlled through eye gaze. The combined solution is therefore capable of effectively interfacing remote operators with mission-critical information obtained, in real time, from task-aware machine understanding of the scene and IoT data.

Architecture challenges in the Android 3D graphics stack

The increasing traction of high-fidelity games on mobile devices is highlighting the challenges game developers have to face in order to optimize their content within the Android ecosystem.

In this talk, we'll explain our understanding of these challenges through the lens of how Android's graphics stack works today. If you've ever wondered:

• Are Android graphics drivers as buggy as I've heard?

• Why is there so much difference from device to device?

• Why aren't there great profilers like on console?

• Why can't I just measure draw call timings like on desktop?

• Why aren't graphics drivers updatable?

... then this talk is for you! We'll cover the way the hardware ecosystem for Android works, including the quirks of SoC vs. OEM vs. IP makers and how that translates to unique challenges. Then we will cover how software flows through this ecosystem and out through carriers, and the challenges that brings. We will talk about how the unique architecture features on mobile translate to new types of tooling challenges. Finally, we will talk about parallels between these combined challenges and other more traditional driver models from Windows or Mac, and discuss some of the implications thereof.

SESSION: La Noria

Creating a robust online pipeline

This talk will cover the efforts that went into creating a fully remote pipeline to produce the award-winning CG short film La Noria, and the spin-off of that pipeline into its publicly available platform, Artella.

The talk will cover the creation of the main online pipeline tools that allowed the La Noria production to happen, all the way through to the different technical and creative challenges that came with producing a film entirely with remote artists.

Using behind-the-scenes examples, artwork, renders, and tests from each of the film's departments, attendees will be walked through this innovative production, which sets the stage for a way of making movies in which geography is no longer a barrier.

Additionally, Carlos Baena will discuss the different technical, creative and industry challenges, as well as stories of how this film was completed and how the pipeline evolved over the course of the production into what is now available via the Artella virtual production platform.

SESSION: This is a-noise-ing

Machine-learning denoising in feature film production

We present our experience deploying and using machine learning denoising of Monte Carlo renders in the production of animated feature films such as Pixar's Toy Story 4, Disney Animation's Ralph Breaks the Internet and Industrial Light & Magic's visual effects work on photo-realistic films such as Aladdin (2019). We show what it took to move from an R&D implementation of "Denoising with Kernel Prediction and Asymmetric Loss Functions" [Vogels et al. 2018] to a practical tool in a production pipeline.

Sculpting color spaces

Color correction with a long chain of keyers and math operators, even in the hands of experienced artists, often induces artifacts such as muddy colors or malformed edges. Inspired by tools which display a 3D color histogram [COL 2007], and Flame's Color Wrapper [WRA 2019], we embarked on building a user-friendly 3D color space sculpting toolset which allows a user to make complex and elegant color alterations quickly, intuitively and effectively.

In this paper, we will show how smooth transitions can be achieved by our tool through a combination of soft selection, LUT auto-filling, and tetrahedral interpolation. Multiple approaches for interactive selection and highlighting were introduced to overcome the inaccurate and esoteric manipulation that arises when applying similar 3D-based tools to point clouds of colors. Our tool has been robustly integrated with the color correction pipeline through the ability to import and export industry-standard 3D LUT files. We have found success for our tool in a number of color-manipulation-related tasks, such as denoising, despilling, and standard grading.
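The talk itself ships no source; as a rough illustration of the tetrahedral interpolation step mentioned above, the Python sketch below interpolates an RGB value through a cube-shaped 3D LUT stored as a NumPy array. The function name and data layout are assumptions for illustration, not the authors' implementation.

import numpy as np

def tetrahedral_lookup(lut, rgb):
    """Interpolate an RGB value through a 3D LUT (shape N x N x N x 3)
    using tetrahedral interpolation. `rgb` is in [0, 1]."""
    n = lut.shape[0]
    p = np.clip(np.asarray(rgb, dtype=np.float64), 0.0, 1.0) * (n - 1)
    i = np.minimum(p.astype(int), n - 2)   # lower lattice corner of the cell
    f = p - i                              # fractional position inside the cell
    r, g, b = f

    def c(dr, dg, db):                     # corner sample of the enclosing cell
        return lut[i[0] + dr, i[1] + dg, i[2] + db]

    # Choose one of six tetrahedra based on the ordering of r, g, b,
    # then blend its four corners with barycentric weights.
    if r >= g >= b:
        return (1 - r) * c(0, 0, 0) + (r - g) * c(1, 0, 0) + (g - b) * c(1, 1, 0) + b * c(1, 1, 1)
    if r >= b >= g:
        return (1 - r) * c(0, 0, 0) + (r - b) * c(1, 0, 0) + (b - g) * c(1, 0, 1) + g * c(1, 1, 1)
    if b >= r >= g:
        return (1 - b) * c(0, 0, 0) + (b - r) * c(0, 0, 1) + (r - g) * c(1, 0, 1) + g * c(1, 1, 1)
    if g >= r >= b:
        return (1 - g) * c(0, 0, 0) + (g - r) * c(0, 1, 0) + (r - b) * c(1, 1, 0) + b * c(1, 1, 1)
    if g >= b >= r:
        return (1 - g) * c(0, 0, 0) + (g - b) * c(0, 1, 0) + (b - r) * c(0, 1, 1) + r * c(1, 1, 1)
    return (1 - b) * c(0, 0, 0) + (b - g) * c(0, 0, 1) + (g - r) * c(0, 1, 1) + r * c(1, 1, 1)

Tetrahedral interpolation blends only four lattice corners per lookup, which is one reason it is a common choice in color pipelines over trilinear interpolation's eight.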

Neural pixel error detection

Current video quality control entails a manual review of every frame for every video for pixel errors. A pixel error is a single or small group of anomalous pixels displaying incorrect colors, arising from multiple sources in the video production pipeline. The detection process is difficult, time consuming, and rife with human error. In this work, we present a novel approach for automated pixel error detection, applying simple machine learning techniques to great effect. We use an autoencoder architecture followed by statistical post-processing to catch all tested live action pixel anomalies while keeping the false positive rate to a minimum. We discuss previous dead pixel detection methods in image processing, and compare to other machine learning approaches.
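As a hedged sketch of the reconstruction-error idea described above (not the authors' network or statistics), the following assumes a trained autoencoder object with a predict method and flags pixels whose reconstruction error is a strong statistical outlier within the frame.

import numpy as np

def flag_pixel_errors(frame, autoencoder, k=6.0):
    """Flag candidate pixel errors in one frame.

    `autoencoder` is assumed to be a trained model whose reconstruction of a
    clean frame is close to the input; anomalous pixels reconstruct poorly.
    `k` controls the statistical threshold (in standard deviations).
    """
    recon = autoencoder.predict(frame[None, ...])[0]   # hypothetical model API
    err = np.abs(frame - recon).mean(axis=-1)          # per-pixel error map

    # Statistical post-processing: keep only pixels whose error is a strong
    # outlier relative to the rest of the frame.
    mu, sigma = err.mean(), err.std()
    mask = err > mu + k * sigma
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))         # candidate (x, y) locations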

Boosting VFX production with deep learning

Machine learning techniques are not often associated with artistic work such as visual effects production. Nevertheless, these techniques can save a lot of time for artists when used in the right context. In recent years, deep learning techniques have become a widely used tool with powerful frameworks that can be employed in a production environment. We present two deep learning solutions that were integrated into our production pipeline and used in current productions. One method generates high quality images from a compressed video file that contains various compression artifacts. The other quickly locates slates and color charts used for grading in a large set of images. We discuss these particular solutions in the context of previous work, as well as the challenges of integrating a deep learning solution within a VFX production pipeline, from concept to implementation.

SESSION: Project management

Enhancing emotional intelligence in project management: strategies for better outcomes and community with limited financial overhead

During the SIGGRAPH 2018 BOF "Emphasizing Empathy in Pipeline Project Management," group consensus stated that highly effective project management can only be achieved when emphasis is placed on demonstrated empathy for any and all project contributors at the project management level, and when challenges are framed as opportunities to enhance both the team and the project manager's own emotional intelligence. The reality faced in the industry, however, can present unique challenges, specifically relating to toxic cultural folkways, lack of leadership support, and lack of designated monetary resources. Based on subsequent discussions with industry professionals and team leaders born out of the initial presentation, it seems imperative to address not only the theories of Emotional Intelligence in greater depth, but also to acknowledge the potential obstacles to applying this basic theory in the real world. This talk aims to illuminate opportunities for individual production professionals to both challenge their own perceptions of the industry culture and make effective changes to their management and communication styles in order to effect positive change in their work environment, increase employee morale, and build community, even without dedicated financial resources, to the overall benefit of their team members and their project's health.

SESSION: How to make a world

DMP without DMP, full-CG environments for The Lion King

The Lion King presented the unique challenge of creating a full CG feature film that could cross the threshold of photorealism and be perceived as live action by the audience. The director and the production design team strongly pushed for a naturalistic look, heavily influenced by the imagery of African landscapes made popular by documentaries. In order to create a world that could be fully explored by a wide variety of virtual lenses, including dramatic wides and tight telephotos, the MPC Environments Team had to abandon some traditional Digital Matte Painting (DMP) techniques and focus on delivering full-CG environments at a scale and scope they had never handled before.

Dust and cobwebs for Toy Story 4

The Toy Story universe makes its home at small scales, with the camera sometimes just a few centimeters from surfaces where typical shader approaches are unable to provide the desired level of detail. For environments like the Second Chance Antiques Store for Toy Story 4, the Set Extensions team developed systems to generate dimensional, granular elements such as dust, small debris, and cobwebs to enhance storytelling and ambiance. In addition to improving realism, these elements help indicate how hidden or exposed an area is from human observation and elevate the sense of drama and history.

Procedural system assisted authoring of open-world content for Marvel's Spider-Man

Crimes and vignettes are placed throughout the game city space using procedural systems to find appropriate locations. Editing of roads or buildings necessitates re-authoring the placement of dynamic encounters in that area of the environment. In many cases the dynamic encounters may have already undergone "final" manual adjustment to the space. To accommodate iterations on city layout, we add three main improvements to our procedural systems: preserve hand-authored work if it continues to meet specifications; place encounter components with higher fidelity; and provide the artists and designers guidance for crimes and vignettes needing more attention.
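A minimal sketch of the first improvement, preserving hand-authored work when it still meets specification, might look like the following; every helper, attribute and threshold here is a hypothetical stand-in rather than Insomniac's actual tooling.

# Hedged sketch: after a city edit, a manually adjusted encounter placement is
# kept only if it still satisfies the same constraints the procedural system
# would enforce; otherwise it is re-placed procedurally.

def resolve_placement(encounter, city, procedural_place):
    if encounter.hand_authored and still_meets_spec(encounter, city):
        return encounter.transform          # keep the artist's final adjustment
    return procedural_place(encounter, city)  # re-place against the new layout

def still_meets_spec(encounter, city, max_surface_delta=0.25, clearance=2.0):
    p = encounter.transform.position
    ground = city.query_ground_height(p.x, p.y)        # hypothetical city query
    on_valid_surface = abs(p.z - ground) < max_surface_delta
    unobstructed = not city.overlaps_geometry(p, radius=clearance)
    reachable = city.navmesh_reachable(p)               # players/NPCs can get there
    return on_valid_surface and unobstructed and reachable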

SESSION: Winning at game production

A scalable real-time many-shadowed-light rendering system

In this paper, we present a new shadow rendering system with a number of novel design choices to support a large number of shadowed lights in a large virtual environment, with real-time performance on mainstream GPUs.

Mortal Kombat 11: high fidelity cached simulations in real-time

Mortal Kombat 11 introduces an Alembic-based asset pipeline that enables artists to leverage new workflows previously unattainable in real-time games. Blood and gore are the cornerstone of the franchise, and the art direction for our Fatalities and Fatal Blows focuses on close-up, high-fidelity, slow-motion shots showing extreme amounts of blood, which traditional sprite particles would struggle to achieve.

Why you should(n't) build your own game engine

Developing a modern game engine from the ground up has become an increasingly rare opportunity, and with good reason. It is a costly commitment, and coupled with the existing technologies readily available under reasonable pricing models, it is a hard sell for any startup to take on such a burden.

This paper focuses on a few key issues when developing such a technology base, to serve as both a guide and a warning. Rather than discussing the implementation details and features of the engine, the paper will delve into the importance of efficient workflows, the challenges of outsourcing, and finally the lessons learned from building the technology and a game that runs on it.

Practical dynamic lighting for large-scale game environments

Dynamic lighting techniques are vital in giving artists rapid feedback and reducing iteration time. In this technical postmortem, we will talk about dynamic lighting techniques for large-scale game environments that reflect changes in the time of day and include many dynamic lights. Moreover, a unified atmospheric scattering technique with clouds will also be discussed.

SESSION: Kaleidoscope eyes - displays and tricks

Adaptive environments with Parallel Reality™ displays

It is challenging to provide signage for the individual needs of each person in crowded, public spaces. Having too many signs leads to a cluttered environment, while having too few signs can fail to provide needed information to individuals. Personal display devices, such as smart phones and AR glasses can provide adaptive content, but they require each user to have a compatible device with them, turned on, and running appropriate software. PARALLEL REALITY™ displays are a new type of shared, public display that can simultaneously target personalized content to each viewer, without special glasses. In this way, adaptation becomes a feature of a venue that is available to all without encumbrances. Examples include adaptation based on language needs, visual acuity and relative location of the display.

Depth boost: extended depth reconstruction capability on volumetric display

A key challenge of volumetric displays is presenting a 3D scene as if it naturally existed in the physical space. However, the displayable scenes are limited because current volumetric displays do not have a substantial depth reconstruction capability for scenes with significant depth. In this talk, we propose a dynamic depth compression method that modifies the 3D geometry of presented scenes while considering changes to the spectator's viewpoint, such that entire scenes are fitted within a smaller depth range while maintaining perceptual quality. Extreme depth compression induces a feeling of unnaturalness in viewers, but the results of an evaluation experiment using a volumetric display simulator indicated that a physical depth of just 10 cm was enough to show scenes originally spanning about 50 m of depth without an unacceptable feeling of unnaturalness. We applied our method to a real volumetric display and validated our findings through an additional user study. The results suggest that our method works well as a virtual extender of a volumetric display's depth reconstruction capability, enabling depth reconstruction hundreds of times larger than that of current volumetric displays.
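One plausible form of such a depth compression, sketched below under the assumption of a simple logarithmic remap into the display's physical depth budget, illustrates the idea; the talk's actual view-dependent method is more sophisticated.

import numpy as np

def compress_depth(z_scene, z_near, z_far, display_depth=0.10):
    """Map scene depths (metres from the viewer, z_near > 0) into a volumetric
    display's physical depth budget (e.g. 0.10 m) while preserving ordering.

    A logarithmic remap spends most of the budget on near geometry, where
    depth sensitivity is highest; this is one plausible choice, not
    necessarily the compression used in the talk.
    """
    z = np.clip(z_scene, z_near, z_far)
    t = np.log(z / z_near) / np.log(z_far / z_near)   # 0 at z_near, 1 at z_far
    return t * display_depth                          # depth inside the display volume

For example, with z_near = 0.5 and z_far = 50.0, a scene spanning roughly 50 m of depth is squeezed into the 10 cm budget reported above while keeping its depth ordering intact.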

From light to sound: prisms and auto-zoom lenses

In this talk, we show how acoustic metamaterials can be used to build the acoustic equivalent of optical devices. We demonstrate two key devices: (1) an acoustic prism, used to send the different notes in a melody towards different directions, and (2) an auto-zoom lens, used to send sound to a moving target. We conclude by discussing potential applications and limitations.

Visualization of putting trajectories in live golf broadcasting

We developed a system for visualizing golf putting trajectories that can be used in live broadcasting. The trajectory computer graphics (CGs) in a golf putting scene are useful for visualizing the results of past plays and the shape of the green. In addition, displaying past trajectories that were shot near the position of the next player helps TV viewers predict the ball movement of the next play. Visualizing the putting trajectories in this way offers TV viewers a new style of watching live golf tournaments and helps make the programs more understandable and exciting.

SESSION: THRIVE

Foundational principles & technologies for the metaverse

Science-fiction notions of "the metaverse" are slowly becoming a reality as products such as Fortnite, Minecraft, and Roblox bring immersive social experiences to hundreds of millions of people and blur the boundaries between games and social networks. This talk will explore the foundational technologies, economies, and freedoms required to build this future medium as a force for good.

SESSION: Combustion

Avengers: Endgame, a new approach for combustion simulations

In Avengers: Endgame we wanted to improve the physical accuracy of, and the level of artistic control over, the look of our combustion simulations, i.e., explosions, grenades and fire. Our in-house Simulation RnD department developed an improved combustion solver, internally referred to as Combustion 2. This paper will focus on some of the technical aspects of the fluid solver; however, the presentation will pivot more towards the production work related to its adoption on Avengers: Endgame.

Physics-based combustion simulation in Bifrost

Fire, from small-scale candle flames to enormous explosions, remains an area of special interest in visual effects. Even compared to regular fluid simulation, the wide range of chemical reactions with corresponding generated motion and illumination makes for highly complex visual phenomena, difficult for an artist to recreate directly. The goal of our software is to provide attractive physical and chemical simulation workflows, which enable the artist to automatically achieve "physically plausible" results by default. Ideally, these results should come close to matching real-world footage if the real-world parameters are known (such as what fuel is being burned). To support this, we aim to provide a user interface for artistic direction where the controls map intuitively to changes in the visual result. More user-demanding proceduralism will only occasionally be required for final artistic tweaks or hero shots.

Retiming of fluid simulations for VFX: distributed non-linear fluid retiming by sparse bi-directional advection-diffusion

We present a novel approach to retiming of fluid simulations, which is a common yet challenging practice in visual effects productions. Unlike traditional techniques that are limited to dense simulations and only account for bulk motion by the fluid velocities, our approach also works on sparse simulations and attempts to account for two (vs one) of the fundamental processes governing fluid dynamics, namely the effect of hyperbolic advection and parabolic diffusion, be it physical or numerical. This allows for smoother transitions between the existing and newly generated simulation volumes, thereby preserving the overall look of a retimed fluid animation, while significantly outperforming both forward simulations, which tend to change the look, and guided inverse simulations, which are known to be computationally expensive.

What time is it?: efficient and robust FX retiming workflow for Spies in Disguise

We present our FX retiming workflow developed on Blue Sky Studios' latest feature, Spies in Disguise. Retiming refers to the slowing down and speeding up of FX assets in a shot. These include point particles, rigid bodies, volumetric elements like smoke and fire, and fluids. Our solution is shown to be robust and efficient, even for the challenging cases of retiming heavily dynamic events, such as fire and explosions.

SESSION: Here comes the sun

Practical lighting on Toy Story 4

Since the adoption of Global Illumination on Monsters University and RenderMan RIS on Finding Dory, Pixar has pushed closer and closer to photorealism with physically based lighting and shading. With each show, Lighting artists have needed to create, organize, and creatively balance more and more light sources of increasing complexity. Because Pixar has also traditionally worked with a "fixed" camera exposure, the light colors and intensities chosen by the artist would also drive the overall brightness of the final image.

Toy Story 4, which takes place in both an antiques mall and a traveling carnival, was a challenge to this workflow. The large variety of light sources in the antiques mall would be difficult to group into uniform categories; complex light animation on carnival rides required light intensities driven by upstream assets; and seeing the same lights under day and night illumination meant that production-wide choices for light intensities could not be made under a single, fixed exposure. We needed assets that behaved like in the real world and just worked when placed together in a scene.

We developed a method on Toy Story 4 for tagging modeled assets with physical light properties that would automatically be converted into functioning light sources in a shot by a script called Bakelite. This pipeline gave the Lighting department more time for creative iteration with minimal setup, allowed pre-lighting visualization of shots by upstream departments, and ensured a final image that was both rich in surface detail and also accurate in HDR.
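While Bakelite itself is proprietary and not described in code here, the sketch below illustrates the general tagged-asset-to-light idea under assumed attribute names and an assumed scene API; nothing in it should be read as Pixar's actual interface.

# Hedged sketch of the "tagged asset -> light" idea described above.  Attribute
# names, units, and the light-creation API are hypothetical stand-ins.

def bake_lights(scene):
    for prim in scene.traverse():
        tag = prim.get_attribute("physLight")      # hypothetical tag on the model
        if tag is None:
            continue
        light = scene.create_light(name=prim.name + "_lgt",
                                   transform=prim.world_transform)
        # Physical properties travel with the asset, so the same lamp model
        # behaves consistently in day or night shots under varying exposure.
        light.set("intensity_lumens", tag["flux_lumens"])
        light.set("color_temperature", tag["color_temp_K"])
        light.set("emitter_geo", prim.path)        # emit from the modeled shape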

Light pruning on Toy Story 4

Pixar films have recently seen drastically rising light counts via procedural generation, resulting in longer render times and slower interactive workflows. Here we present a fully automated, scalable, and error-free light pruning pipeline deployed on Toy Story 4 that reduces final render times by 15--50% in challenging cases, accelerates interactive lighting, and automates a previously manual and error-prone task.
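The talk does not publish its pruning criterion; as an illustration of the general idea, the sketch below conservatively drops a light only when an inverse-square upper bound on its contribution stays below a threshold at every sampled shading location. The light attributes, sampling strategy and bound are assumptions, not Pixar's pipeline.

def prune_lights(lights, shot_points, threshold=1e-4):
    """Return the subset of lights worth keeping for a shot.

    `shot_points` is a sparse sampling of positions visible in the shot.
    A light is pruned only if an upper bound on its contribution at every
    sample point falls below `threshold`, keeping the pruning conservative.
    `light.position` / `light.intensity` are assumed attributes on a proxy.
    """
    kept = []
    for light in lights:
        bound = 0.0
        for p in shot_points:
            d2 = max(distance_squared(light.position, p), 1e-6)
            bound = max(bound, light.intensity / d2)   # inverse-square upper bound
            if bound >= threshold:
                break
        if bound >= threshold:
            kept.append(light)
    return kept

def distance_squared(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))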

Streamlining IBL workflows with computer vision and USD

DNEG is constantly improving its global tools and processes to make them more efficient and artist-friendly, while leveraging state-of-the-art technologies and trends in the industry. The special requirements for Sony Pictures' Venom movie were the perfect opportunity for us to improve our Image Based Lighting (IBL) workflows. In this paper we present the iblManager: a semi-automated system to allow fast and artist-friendly extraction of numerous lights from HDR images (HDRIs), using computer vision. The system uses LightCache, a USD-based implementation of light descriptions, to allow cross-DCC usage and pipeline integration.

DeepLight: learning illumination for unconstrained mobile mixed reality

We present a learning-based method to infer plausible high dynamic range (HDR), omnidirectional illumination given an unconstrained, low dynamic range (LDR) image from a mobile phone camera with a limited field of view (FOV). For training data, we collect videos of various reflective spheres placed within the camera's FOV, leaving most of the background unoccluded, leveraging that materials with diverse reflectance functions reveal different lighting cues in a single exposure. We train a deep neural network to regress from the LDR background image to HDR lighting by matching the LDR ground truth sphere images to those rendered with the predicted illumination using image-based relighting, which is differentiable. Our inference runs at interactive frame rates on a mobile device, enabling realistic rendering of virtual objects into real scenes for mobile mixed reality. Training on auto-exposed and white-balanced videos, we improve the realism of rendered objects compared to state-of-the-art methods for both indoor and outdoor scenes.

SESSION: Getting new pipes

Conduit: a modern pipeline for the open source world

We present our modern pipeline, Conduit, developed for Blue Sky's upcoming feature film, Nimona. Conduit refers to a set of tools and web services that allow artists to find, track, version and quality control their work. In addition to describing the system and implementation, we will discuss the challenges and opportunities of developing and deploying a pipeline with the intention of open sourcing the resulting toolset. We found that communicating concepts and progress updates both internally and externally throughout the development process ultimately resulted in a more robust solution.

A portal for managing reviews and beyond

We present Portal, a modern web interface for creating and managing media and review sessions at Blue Sky Studios. Utilizing a horizontally scalable stack, Portal allows artists and production management to quickly search for and play back media, capture notes with draw-overs, and manage review sessions all from within a web browser. To help ensure success, our media tools team conducted rigorous user story mapping sessions with key management staff across the studio. As a result, Portal has become an integral part of the dailies workflow at Blue Sky.

Building modern VFX infrastructure

In order to meet the rapidly evolving needs of the VFX industry, studios need to be able to adapt and upgrade quickly. However, the infrastructure stack at most companies is complex, usually set up over a number of years with significant customizations and proprietary software. Maintaining this stack requires dedicated teams. Upgrades can take months and are usually fraught with risk.

The engineering team at MPC drastically reduced this time to deployment from months to a few days by using cloud-native solutions. Built on a foundation of microservices, the infrastructure stack provides an asset management system, storage, sync and compute capabilities. Within the first year it was deployed across two sites in different timezones, supporting up to 200 artists, proving that VFX studios can scale rapidly with this approach.

Integrate USD the nodal way, a visual VFX pipeline

Pixar's Universal Scene Description (USD) technology is well known in the VFX world. With its promising goal of unifying and easing the interchange of data between Digital Content Creation (DCC) tools, it has won the industry over since its initial open source release in 2016. The question is no longer "Should USD be adopted?" but "How do we integrate it properly into our pipeline?"

SESSION: Eleanor "rigging"-by

Hierarchy models: building blocks for procedural rigging

Hierarchy Models provide an encapsulation mechanism for joint hierarchies that yields an essential building block for procedural rigging. With Hierarchy Models, joints travel through dependency graphs (DGs) as an atomic entity. Operation nodes in the DG can modify all aspects of the input hierarchy and even perform topological modifications such as adding or removing joints. The Hierarchy Model reduces complexity in character rigs, improves separation between data and behavior, provides a clean interface, and simplifies understanding and debugging rigs. It offers geometric evaluation optimizations and promotes parallelism in the DG structure.

Flap flap away: animation cycle multiplexing

The animation cycle multiplexing technique was first deployed on How to Train Your Dragon at DreamWorks to accomplish the ambitious task of animating many winged characters with limited resources. It has since been carried through software platform changes, working with the award-winning animation software Premo, to its current form in How to Train Your Dragon: The Hidden World. Given production budget and time constraints, it would not have been possible to accomplish the film's level of visual complexity without it.

Sliding the pieces into place: rigging the pigeons of Spies in Disguise

The birds of Spies in Disguise required several technological advancements and techniques to achieve the simple graphic style of the film. One technology was a redesigned wing rig with unique mechanics that allowed for clean lines and graphic shapes rather than our previous anatomically based wing rig. The production style also required extreme posing involving sliding limbs, large open-mouth ranges and jiggly eyes. These requirements were achieved with a combination of new workflow techniques, updates to the pipeline, and the creation and updating of proprietary deformers.

Fast, interpolationless character animation through "ephemeral" rigging

I present an alternative CG character animation methodology that eschews both keyframes and conventional hierarchical rigging. Primary rig controls have no hierarchy or built-in behavior; instead, the animator calls for "ephemeral" rig behavior as needed. The system also facilitates "interpolationless" animation by removing keyframes as we know them, replacing them with discrete poses and inbetweening tools.

SESSION: Perception in rendering & hardware

Autofocals: evaluating gaze-contingent eyeglasses for presbyopes

Presbyopia, the loss of accommodation due to the stiffening of the crystalline lens in the eye, affects nearly 20% of the population worldwide. Traditional forms of presbyopia correction use fixed focal elements that inherently trade off field of view or stereo vision for a greater range of distances at which the wearer can see clearly. However, none of these offer the same natural refocusing enjoyed in youth. In this work, we built a new type of presbyopia correction, dubbed "autofocal," which externally mimics the natural accommodation response of the eye by combining data from eye trackers and a depth sensor, and then automatically drives focus-tunable lenses. We evaluated autofocals against progressives and mono-vision in a user study; compared to these traditional corrections, autofocals maintain better visual acuity at all tested distances, allow for faster and more accurate visual task performance, and are easier to refocus with for a majority of wearers.
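A minimal sketch of the control idea, assuming a gaze ray from the eye tracker and a depth sensor that can return distance along it (both hypothetical APIs), converts the fixated distance into lens power in diopters; the published system's sensor fusion and calibration are more involved.

def autofocal_update(gaze_ray, depth_sensor, lens, wearer_rx_diopters=0.0,
                     min_distance_m=0.1):
    """One control step of a hypothetical autofocal controller.

    The focus-tunable lens is set so that light from the fixated distance is
    brought into focus for a presbyopic eye with little remaining accommodation.
    """
    d = depth_sensor.distance_along(gaze_ray)   # metres to the fixated object
    d = max(d, min_distance_m)
    demand_diopters = 1.0 / d                   # accommodation the scene demands
    # Supply the focusing power the stiffened crystalline lens cannot,
    # on top of the wearer's distance prescription.
    lens.set_power(wearer_rx_diopters + demand_diopters)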

Gaze-contingent ocular parallax rendering for virtual reality

Current-generation virtual reality (VR) displays aim to generate perceptually realistic user experiences by accurately rendering many perceptually important effects including perspective, disparity, motion parallax, and other depth cues. We introduce ocular parallax rendering, a technology that renders small amounts of gaze-contingent parallax capable of further increasing perceptual realism in VR. Ocular parallax, small depth-dependent image shifts on the retina created as the eye rotates, occurs because the centers of rotation and projection of the eye are not the same. We study the perceptual implications of ocular parallax rendering by designing and conducting a series of user experiments. We estimate perceptual detection and discrimination thresholds for this effect and demonstrate that it is clearly visible in most VR applications. However, our studies also indicate that ocular parallax rendering does not significantly improve depth perception in VR.

Foveated displays: toward classification of the emerging field

There is not yet consensus in the field on what constitutes a "foveated display". We propose a compromise between the perspectives of rendering, imaging, physiology and vision science that defines a foveated display as a display designed to function in the context of user gaze. This definition enables us to describe two axes of foveation, gaze interaction and resolution distribution, which we then subdivide to provide useful categories for classification. We view this proposal as the start of a discussion among the community rather than a final taxonomy.

DeepFovea: neural reconstruction for foveated rendering and video compression using learned natural video statistics

Recent advances in head-mounted displays (HMDs) provide new levels of immersion by delivering imagery straight to human eyes. The high spatial and temporal resolution requirements of these displays pose a tremendous challenge for real-time rendering and video compression. Since the eyes rapidly decrease in spatial acuity with increasing eccentricity, providing high resolution to peripheral vision is unnecessary. Upcoming VR displays provide real-time estimation of gaze, enabling gaze-contingent rendering and compression methods that take advantage of this acuity falloff. In this setting, special care must be given to avoid visible artifacts such as a loss of contrast or addition of flicker.

SESSION: Here comes the groom and rig

Holding the shape in hair simulation

Hair simulation models are based on physics, but require additional controls to achieve certain looks or art direction. A common simulation control is to use hard or soft constraints on the kinematic points provided by the articulation of the scalp or explicit rigging of the hair [Kaur et al. 2018; Soares et al. 2012]. While following the rigged points adds explicit control during shot work, we want to author information during the setup phase so that the simulation automatically follows the groomed shape (Figure 1). We have found that there is no single approach that satisfies every artistic requirement, and have instead developed several practical force- and constraint-based techniques over the course of making Brave, Inside Out, The Good Dinosaur, Coco, Incredibles 2, and Toy Story 4. We have also discovered that kinematic constraints can sometimes be adversely affected by mesh deformation and discuss how to mitigate this effect for both articulated and simulated hair.
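As one illustrative force-based variant of the techniques described (not the exact Pixar formulation), the sketch below applies a per-point spring pulling simulated hair points toward the groomed shape expressed in the scalp's frame, with an authored root-to-tip falloff.

import numpy as np

def shape_hold_forces(sim_points, rest_points_local, scalp_xform,
                      stiffness, falloff):
    """Soft force pulling simulated hair points toward the groomed shape.

    rest_points_local : (N, 3) groomed positions stored in the scalp's local
                        frame, so the target follows head animation.
    scalp_xform       : 4x4 scalp transform for the current frame.
    stiffness         : spring strength authored at setup time.
    falloff           : (N,) per-point weights (e.g. 1 at the root, 0 at the
                        tip) so tips stay free to move dynamically.
    """
    targets = (scalp_xform[:3, :3] @ rest_points_local.T).T + scalp_xform[:3, 3]
    return stiffness * falloff[:, None] * (targets - sim_points)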

Hummingbird: DreamWorks' feather system

This talk presents Hummingbird, DreamWorks' feather system, which is used for grooming body feathers and modeling scales interactively in real time. It is also used for feather motion such as secondary motion or wind, for feather special effects such as ruffling or puffing up, and for feather finaling. The system has been used in several shows at DreamWorks including How to Train Your Dragon 2, Bilby, and How to Train Your Dragon: The Hidden World.

Mesh-driven generation and animation of groomed feathers

MPC's proprietary grooming software - Furtility - has been used to create all hair, fur and feathers on our characters with great success for over a decade. However, the creation of feather grooms has always been a time consuming and technically challenging task for our Groom department, often keeping a senior artist occupied for months. Due to narrower deadlines and a constant push for higher quality, we recently extended our feather tool set, which has allowed our artists to significantly streamline their feather workflow. After adopting our new geometry-based feather system in production, our Groom artists have been able to reduce the time frame for finalizing a hero character from months to weeks.

Grasshopper: DreamWorks' environmental motion system

This talk presents DreamWorks' Grasshopper, an environmental motion system for creating believable animation for grass, plants, and other vegetation. The system was used extensively on the complex and expansive sets in How to Train Your Dragon: The Hidden World and is currently being used on more productions at DreamWorks.

Optimizing rig manipulation with GPU and parallel evaluation

Rig speed plays a critical role in animation pipelines. Real-time performance provides instant feedback to artists, thereby allowing quick iterations and ultimately leading to better quality animation. A complete approach to real-time performance requires both playback and manipulation at interactive speeds. A pose-based caching system (PBCS) addresses the former, but the manipulation of complex rigs remains slow. This paper speeds up rig manipulation by taking advantage of modern multi-core architectures and the GPU, and by constructing rigs that evaluate efficiently on parallel processing hardware. This complete approach, including tool updates and rig optimizations, was used successfully to significantly improve interactive rig manipulation performance on Frozen 2.

SESSION: All together now - crowds

Directable stadium crowds from image based modelling for "Bohemian Rhapsody"

To deliver photoreal, dynamic and directable rock concert audiences for "Bohemian Rhapsody" to a demanding client brief, lead VFX vendor DNEG developed a novel crowd simulation solution based on multi-view video capture and image based modeling. Over 350 choreographed performances by individual crowd extras, totalling more than 70 hours of footage, were acquired on set using a video camera array. A system was developed to convert the video data into lightweight 3D sprites that could be quickly laid out, synchronised, edited and rendered at large scale. Efficient artist workflow tools and scalable video processing technology were developed so that crew with little previous experience in crowd simulation could fill a virtual Wembley Stadium with a dancing, cheering crowd responding to Freddie Mercury's electrifying performance.

Optimizing large-scale crowds in Ralph Breaks the Internet

In Walt Disney Animation Studios' 57th animated feature Ralph Breaks the Internet, the vastness of the internet is imagined as a bustling city where websites are buildings, Netizen characters represent algorithms, and Net Users travel from site to site. The enormous scope of bringing the world of the internet to life required the Crowds department to rethink how we go about populating our scenes. We extended the Zootopia Crowd Pipeline [El-Ali et al. 2016] to support pose reuse based on level of detail, and developed a procedural workflow to populate the world with millions of agents and efficiently render only those visible to camera.

Creating Ralphzilla: Moshpit, Skeleton Library and automation framework

Composed of over 550,000 crowd agents, Ralphzilla from Ralph Breaks the Internet is one of the largest movie monsters ever created and presented a huge technical and artistic challenge.

We introduce a new crowd solver, Moshpit, which performs high resolution inter-body collision among crowd agents. We will also explain how Moshpit was incorporated into Disney's Skeleton Library (SL) and proprietary pipeline automation framework.

A ragdoll-less approach to physical animations of characters in vehicles

Recently, the use of vehicles has increased in importance for many games. This is true not only for open-world games, where vehicles are a crucial element of world traversal, but also for scenario-based games, where vehicles add a more varied gameplay experience. In many of these games, however, the characters inside the vehicles lack the animations to connect their motion to that of the vehicle. The use of a few poses or a small number of animations makes in-vehicle characters look too rigid, and this is particularly noticeable in open vehicles or those with excessive motion such as tractors, speedboats or motorbikes. This can break the connection the player has to the vehicle experience. To solve this problem, several games have used a method to control a ragdoll with physical parameters so that it follows the input poses [Fuller and Nilsson 2010] [Mach 2017]. However, this solution has several complications regarding controllability and stability when simulating a ragdoll and a vehicle at the same time. We introduce a new approach using particle-based dynamics rather than a ragdoll, and present two methods: a particle-based approach to physical movement (see Figure 1) and modification of goal positions to generate plausible target poses (see Figure 3).
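A minimal sketch of the particle-based idea, with illustrative constants rather than the presented system's tuning, integrates each body-part particle with Verlet and springs it toward a goal position taken from the authored seated pose in vehicle space.

import numpy as np

def step_particles(x, x_prev, goals_vehicle, vehicle_xform, dt,
                   stiffness=60.0, damping=4.0, gravity=(0.0, -9.81, 0.0)):
    """One Verlet step for character particles riding a vehicle.

    x, x_prev      : (N, 3) current and previous particle positions (world space)
    goals_vehicle  : (N, 3) goal positions from the authored seated pose,
                     expressed in the vehicle's local frame
    vehicle_xform  : 4x4 world transform of the vehicle this frame
    """
    goals = (vehicle_xform[:3, :3] @ goals_vehicle.T).T + vehicle_xform[:3, 3]
    v = (x - x_prev) / dt
    # Spring toward the authored pose + damping + gravity: the character lags
    # and overshoots plausibly as the vehicle bumps and turns.
    a = stiffness * (goals - x) - damping * v + np.asarray(gravity)
    x_next = 2.0 * x - x_prev + a * dt * dt
    return x_next, x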

SESSION: Lucy in the sky with diamonds - processing visuals

A low-discrepancy sampler that distributes Monte Carlo errors as a blue noise in screen space

We introduce a sampler that generates per-pixel samples achieving high visual quality thanks to two key properties related to the Monte Carlo errors that it produces. First, the sequence of each pixel is an Owen-scrambled Sobol sequence that has state-of-the-art convergence properties. The Monte Carlo errors have thus low magnitudes. Second, these errors are distributed as a blue noise in screen space. This makes them visually even more acceptable. Our sampler is lightweight and fast. We implement it with a small texture and two xor operations. Our supplemental material provides comparisons against previous work for different scenes and sample counts.
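For context, the lookup the abstract alludes to (a small per-pixel tile and two xor operations) can be sketched as below; the precomputed Sobol, scrambling and ranking tables would come from the authors' supplemental material, and the table shapes and 8-bit normalization used here are assumptions.

def sample(pixel_x, pixel_y, sample_index, dim,
           sobol_table, scrambling_tile, ranking_tile, tile=128):
    """Hedged sketch of a per-pixel scrambled Sobol lookup.

    The tables (Sobol values plus per-pixel scrambling and ranking keys
    optimized for a screen-space blue-noise error distribution) are assumed
    to be precomputed; indexing and sizes here are illustrative.
    """
    px, py = pixel_x % tile, pixel_y % tile
    # xor #1: per-pixel permutation of the sample index, decorrelating pixels
    # while keeping each pixel's sequence a valid low-discrepancy sequence.
    ranked_index = sample_index ^ ranking_tile[py][px][dim]
    value = sobol_table[ranked_index][dim]         # integer Sobol sample
    # xor #2: per-pixel Owen-style scrambling of the sample value itself.
    value ^= scrambling_tile[py][px][dim]
    return (value + 0.5) / 256.0                   # map assumed 8-bit value to (0, 1)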

Global adaptive sampling hierarchies in production ray tracing

An image-space hierarchy is introduced to reduce artifacts from prematurely stopping in adaptive sampling in a Monte Carlo ray tracing context, while maintaining good performance and fitting into an existing sampling architecture.

Multiple scattering using machine learning

Microfacet-based reflection models are widely used in visual effects applications ranging from computer games to animation and feature film rendering. However, the standard microfacet BRDF supports single scattering only. Light that is scattered more than once is not accounted for, which can lead to significant energy loss. As physically based rendering becomes more prevalent in production, the lack of energy preservation has become problematic. This has led to several recent works on multiple scattering. Heitz et al. [2016] presented a volumetric approach that models multiple scattering accurately, but its stochastic evaluation increases variance. Xie and Hanrahan [2018] presented an analytical multiple scattering model that is efficient but has a singularity in the direction of mirror reflection.

Taming the shadow Terminator

A longstanding problem with the use of shading normals is the discontinuity introduced into the cosine falloff where part of the hemisphere around the shading normal falls below the geometric surface. Our solution is to add a geometrically derived shadowing function that adds minimal additional shadowing while falling smoothly to zero at the terminator. Our shadowing function is simple, robust, efficient and production proven.
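The abstract does not reproduce the formula, so the sketch below is a hedged reconstruction in that spirit: a shadowing factor built from the geometric normal Ng, shading normal Ns and light direction L, clamped to [0, 1] and softened so it adds little extra shadowing away from the terminator; treat the exact expression as an assumption rather than the published function.

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def terminator_shadow(Ns, Ng, L):
    """Extra shadowing factor applied to the BSDF when shading normals are used.

    Hedged reconstruction in the spirit of the talk: it is 1 when the shading
    and geometric normals agree, adds little shadowing elsewhere, and falls
    smoothly to zero as the light grazes the geometric surface.
    """
    cos_g = dot(Ng, L)                 # light vs. geometric surface
    cos_s = dot(Ns, L)                 # light vs. shading normal
    cos_d = dot(Ng, Ns)                # how strongly the normals disagree
    if cos_g <= 0.0 or cos_s <= 0.0 or cos_d <= 0.0:
        return 0.0
    g = min(1.0, cos_g / (cos_s * cos_d))
    # Cubic softening (-g^3 + g^2 + g): equals 1 at g = 1 with zero slope,
    # so the extra shadowing fades in smoothly instead of clamping hard.
    return g * (g * (-g + 1.0) + 1.0)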

SESSION: Practical fluids

A practical guide to thin film and drips simulation

We present a practical approach to model close-up water interaction with characters. We specifically focus on high-fidelity surface tension and adhesion effects as water sheds off skin. We show that an existing particle-in-cell (FLIP/APIC) solver can be adapted to capture small-scale water-solid interaction dynamics and discuss the role and implementation details of the relevant key components: treatment of surface tension and viscosity, enforcement of contact angle, and maintenance of contact with fast-moving collision geometry. The method allows for resolution of effects on a scale of a fraction of a millimeter and is performant enough to be able to cover a whole human body with a layer of water. We demonstrate successful use of the approach in a shot from Alita: Battle Angel.

Instafalls: how to train your waterfalls

The environments of How To Train Your Dragon: The Hidden World are immersed in waterfalls, from lush Nordic islands to mysterious, submerged kingdoms of lakes and crystals. Because of the variety and complexity of these sets (about 4000 waterfalls had to be created), a traditional approach would have consumed too many artistic and rendering resources. We decided to develop a collection of tools named InstaFalls with a few goals in mind: streamlining the creative process by minimizing the time spent on simulations, iterating faster thanks to real-time feedback, and managing the large quantity of generated data. Using this system, artists were able to create all the water elements of an environment set, from misty calderas to foamy, aerated ponds.

Procedural approach to animation driven effects for Avengers: Endgame

This article describes the workflow for delivering real-time FX elements to animation in an omni-directional way and how they get processed for further stages, as it was used on 'Avengers: Endgame' at Weta Digital. We will discuss our procedural approach to solving some of the challenges around what we call 'AnimFx', focusing on production requirements and our ensuing technological solution.

The rigid body and fluid dynamics of LAIKA's "Missing Link"

The animation studio LAIKA is known for stop motion and a unified fusion of art and stunning visual effects technology. This presentation will cover the techniques used to generate water effects and a collapsing bridge of ice for LAIKA's Missing Link.

Building on techniques for incorporating stylized water effects with stop motion animation developed for LAIKA's Kubo and the Two Strings [Montgomery 2016], the FX team used SideFX Houdini's [SideFX 2019b] guided ocean and narrow band FLIP tools to guide the action and manage the larger scale and more numerous shots required for Missing Link.

The collapsing ice bridge sequence presented an unusual challenge integrating stop motion animated characters with an animated CG set piece. To accomplish this difficult task, the normal course of production was reversed, and the CG animation was used to guide the character animation.