We present an approach to simulating underwater bubbles. Our method is sparse in that it only simulates a thin band of water around the region of interest, allowing us to achieve high resolutions in turbulent scenarios. We use a hybrid bubble representation consisting of two parts. The hero counterpart utilizes an incompressible two-phase Navier-Stokes solve on an Eulerian grid, with the air phase also represented via FLIP/APIC particles to facilitate volume conservation and accurate interface tracking. The diffuse counterpart captures sub-grid bubble motion not “seen” by the Eulerian grid. We represent these bubbles as particles and develop a novel scheme for coupling them with the bulk fluid. The coupling scheme is not limited to sub-grid bubbles and may be applied to other thin/porous objects such as sand, hair, and cloth.
High-resolution fluid simulations are commonly used in the visual effects industry to convincingly animate smoke, steam, and explosions. Traditional volumetric fluid solvers operate on dense grids and often spend a lot of time working on empty regions with no visible smoke. We present an efficient sparse fluid solver that effectively skips the inactive space. At the core of this solver is our sparse pressure projection method based on unsmoothed aggregation multigrid, which treats the internal boundaries as open, allowing the smoke to move freely into previously inactive regions. We model small-scale motion in the air around the smoke with a noise field, compensating for the absence of reliable velocities in the inactive areas.
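Not the solver above, but a minimal sketch of the sparsity idea it describes, with all names, the tile size, and the stand-in per-tile work being assumptions: the grid is partitioned into tiles, only tiles containing visible smoke (plus a one-tile halo so smoke can expand into previously inactive space) are updated, and everything else is skipped.

```python
import numpy as np

TILE = 8  # assumed tile size (cells per side), illustration only

def active_tiles(density, threshold=1e-4, halo=1):
    """Mark tiles containing visible smoke, dilated by a halo so smoke can
    move into previously inactive space on the next step."""
    nx, ny = density.shape
    tiles = density.reshape(nx // TILE, TILE, ny // TILE, TILE).max(axis=(1, 3)) > threshold
    for _ in range(halo):  # simple 4-neighborhood dilation of the tile mask
        grown = tiles.copy()
        grown[1:, :] |= tiles[:-1, :]; grown[:-1, :] |= tiles[1:, :]
        grown[:, 1:] |= tiles[:, :-1]; grown[:, :-1] |= tiles[:, 1:]
        tiles = grown
    return tiles

def sparse_diffuse(density, tiles, k=0.1):
    """Stand-in for the per-tile solver work: a diffusion step applied only to
    active tiles; inactive tiles are skipped entirely."""
    blur = 0.25 * (np.roll(density, 1, 0) + np.roll(density, -1, 0)
                   + np.roll(density, 1, 1) + np.roll(density, -1, 1))
    out = density.copy()
    for ti, tj in zip(*np.nonzero(tiles)):
        s = np.s_[ti * TILE:(ti + 1) * TILE, tj * TILE:(tj + 1) * TILE]
        out[s] = (1 - k) * density[s] + k * blur[s]
    return out

density = np.zeros((64, 64)); density[28:36, 28:36] = 1.0
mask = active_tiles(density)
density = sparse_diffuse(density, mask)
print("active tiles:", mask.sum(), "of", mask.size)
```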
Ambient occlusion is often approximated in real-time using screen-space techniques, leading to visible artifacts. Raytracing provides a unique way to increase the rendering fidelity by accurately sampling the distance to the surrounding objects, but it introduces sampling noise. We propose a real-time ray-traced ambient occlusion technique in which noise is filtered in world space. Using extended spatial hashing for efficient storage, multiresolution AO evaluation and ad-hoc filtering, we demonstrate the usability of our technique as a production feature usable in CAD viewports with scenes comprising hundreds of millions of polygons.
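The talk's exact data structure is not reproduced here; the sketch below only illustrates the general pattern of world-space AO filtering via spatial hashing, with the hash function, normal bucketing, and distance-based cell sizing all being assumptions.

```python
import numpy as np

def hash_key(p, n, cell_size):
    """Quantize a world-space position (and a coarse normal bucket) into a
    64-bit key; samples sharing a key are averaged together."""
    q = [int(np.floor(x / cell_size)) for x in p]
    axis = int(np.argmax(np.abs(n)))
    nb = axis * 2 + (1 if n[axis] < 0 else 0)            # 6 coarse normal buckets
    h = nb
    for c, prime in zip(q, (73856093, 19349663, 83492791)):  # classic spatial-hash primes
        h ^= c * prime
    return h & 0xFFFFFFFFFFFFFFFF

class AOCache:
    """Accumulate noisy ray-traced AO samples per hash cell and return the
    averaged value, i.e. filtering in world space rather than screen space."""
    def __init__(self):
        self.total = {}
        self.count = {}

    def add(self, p, n, ao_sample, cam_dist, base_cell=0.05):
        cell = base_cell * max(1.0, cam_dist)   # coarser cells farther away (multiresolution)
        k = hash_key(p, n, cell)
        self.total[k] = self.total.get(k, 0.0) + ao_sample
        self.count[k] = self.count.get(k, 0) + 1
        return self.total[k] / self.count[k]    # filtered AO for this shading point

cache = AOCache()
print(cache.add([1.02, 0.5, -3.1], [0, 1, 0], 0.7, cam_dist=4.0))
```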
A well-known artifact in production rendering from the use of shading normals is the shadow terminator problem: the abrupt interruption of the light’s smooth cosine falloff at geometric horizons. Recent publications have introduced several ad-hoc techniques, based loosely on microfacet theory, to deal with these issues. We show that these techniques can themselves introduce artifacts and suggest a new technique that is an improvement in many situations. More importantly, we introduce a framework for analyzing these different techniques so artists and researchers can choose appropriate solutions and more reliably predict and understand expected results.
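As a small numeric illustration of the artifact being analyzed (not of any particular correction technique), the snippet below shows how a bump-perturbed shading normal keeps the cosine term large right up to the geometric horizon, where visibility then snaps to zero; the normals and angles are arbitrary assumptions.

```python
import numpy as np

# Geometric normal and a bump-perturbed shading normal (assumed values).
Ng = np.array([0.0, 0.0, 1.0])
Ns = np.array([0.3, 0.0, 1.0]); Ns /= np.linalg.norm(Ns)

for deg in (60, 75, 85, 89, 91):                 # light sweeping past the geometric horizon
    L = np.array([np.sin(np.radians(deg)), 0.0, np.cos(np.radians(deg))])
    shading = max(np.dot(Ns, L), 0.0)            # smooth cosine falloff w.r.t. shading normal
    visible = np.dot(Ng, L) > 0.0                # hard geometric visibility test
    print(f"{deg:3d} deg: cos term {shading:.3f}, {'lit' if visible else 'shadowed'}")
# At 89 deg the shading normal still reports a sizeable cosine term, yet one degree
# later the geometric test snaps it to black: the abrupt terminator described above.
```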
Walt Disney Animation Studios’ “Frozen 2” takes place in the Enchanted Forest, which is full of vegetation (e.g. distinctive leaves and foliage) that is manipulated by other characters, including the wind character, Gale. “Frozen 2” also has multiple scenes where a large portion of the forest is on fire. The quantity and scale of vegetation effects in “Frozen 2” presented a challenge to our Effects department. We developed two workflows, the Vegetation Asset workflow and the Fire Tree workflow, to help us achieve high-quality artistic performances of procedural tree animation and fire tree simulations on “Frozen 2”. Using the new workflows, we not only saw an order-of-magnitude improvement in the work efficiency of our Effects artists, but also an increase in work satisfaction and overall artistic quality, since the workflows handled the data management of the various assets in each shot, allowing artists to concentrate more on their craft.
At the peak of our climactic third act in “Spies in Disguise,” our heroes unleash a barrage of saturated powder grenades around the villain’s army of drones. This meant filling the screen with billions of finely detailed, multi-color voxels and particles (aptly named “Fifty Shades of Yay”) with a distinct performance and look. We needed to come up with a solution that was quick to turn around, yet highly directable and allowed for unfettered lighting and shading development. The resulting approach was an extremely tight collaboration between several departments, spearheaded by a few folks in Lighting and Effects.
This talk presents DreamWorks’ Termite, an environmental rigging and simulation utility used by the Character Effects Department (CFX) to rapidly create simulation setups for environment assets without pipeline-dependent complexity requirements or additional data carried by the geometry itself, such as hierarchy attributes. With minimal artist input, high-quality environment effects can be achieved quickly and easily. The system was used on DreamWorks’ films How to Train Your Dragon: The Hidden World, Abominable, and Trolls World Tour.
During the SIGGRAPH 2018 BOF “Emphasizing Empathy in Pipeline Project Management,” group consensus was that highly effective project management can only be achieved when emphasis is placed on demonstrated empathy for any and all project contributors at the project-management level, and when challenges are framed as opportunities to enhance both the team and the project manager’s own emotional intelligence. The reality faced in the industry, however, can present unique challenges, specifically relating to toxic cultural folkways, lack of leadership support, and lack of designated monetary resources. Based on subsequent discussions with industry professionals and team leaders borne from the initial presentation, it seems imperative to address not only the theories of Emotional Intelligence in greater depth, but also to acknowledge the potential obstacles to applying this basic theory in the real world, specifically regarding the operational changes introduced to the working environment by COVID-19 and current world events.
A specific concern is that middle-tier managers, for whom budget allocation is given rather than guaranteed, often want to improve their team environment yet are not granted the financial allotment to do so most effectively for their specific teams. This talk aims to illuminate opportunities for individual production professionals to make effective changes to their management and communication styles, even without a financial allotment, in order to effect positive change in their work environment, increase employee morale, and build community, to the overall benefit of their team members and their project health.
Using real production examples from Blue Sky Studios’ Spies in Disguise, we demonstrate how to break down effects within the production, taking into consideration a variety of aspects from optimization to collaboration. We walk through our thought process on how to go about designing workflows best suited for the job and plan ahead to maximize efficiency and flexibility during shot work.
The animated movie How to Train Your Dragon: The Hidden World introduces a new species of dragon to the franchise: the Deathgripper. This dragon possesses the ability to spit green acid that both dissolves and sets ablaze the objects it touches. In this talk we present the various challenges posed by this rather unusual effect, from the visual development phase to production shots.
Achieving a performance-based, physics-defying melting of a bronze bust was essential for maximum comedic timing in our film Spies in Disguise. With specific storyboarded facial gestures in mind, the directors wanted to convey a feeling, rather than a physical execution of melting metal. In addition to crafting a simulation based on pose-to-pose animation, the material of the bust had to evolve from bronze to copper. This required a non-linear, multi-departmental collaboration. Our solution allows fine control over the melt while keeping the believability of a substance phase change. It also maintains high-resolution details of the model, preserving our curvature-based procedural animated texture, which carries across a temporally incoherent topology. In our case, signal passes generated from both the materials and the simulation were needed to assemble all the rendered layers and integrate the effect into the scene. Our method allows for quick iteration and is intuitive to implement. For clarity, we present our approach department by department: Animation, Effects, and Look Development.
This unique character look was inspired by simulation tests done by the Character Simulation Department, which were so successful that they became the basis for the “Kimura” sequence. We will talk about the workflow we used for creating “Gooey Kimura,” how we worked together with animation, and how simulation was used to guide animation to achieve more fluid motion. We will also discuss the challenges we had to overcome to create an animated, yet dynamic character that felt natural and maintained the animator’s intention while delivering a physical simulation that conveyed the essence of the character.
A key “Spies in Disguise” plot point is when Walter, a spy agency technician, creates a potion that can transform humans into pigeons. The Effects Department was tasked with creating two distinct chemical-reaction looks: “Success,” a pleasant foam-based effect used when Walter creates the formula, and “Failure,” a disgusting, slimy effect showing Walter’s failed attempt at synthesizing an antidote. Because the effect is so close to the camera, director notes on the performance of each hero element were very specific and evolved over time. To achieve the directors’ vision, we developed new ways to segment the many procedural and simulated elements into smaller problem domains and combined procedural, simulation, and rendering/compositing techniques for maximum flexibility.
Texturing is a ubiquitous technique used to enrich surface appearance with fine detail. While standard filtering approaches, such as mipmapping or summed area tables, are available for rendering diffuse reflectance textures at different levels of detail, no widely accepted filtering solution exists for multiresolution rendering of surfaces with fine specular normal maps. The current state of the art offers accurate filtering solutions for specular reflection at the cost of very high memory requirements and expensive 4D queries. We propose a novel normal map filtering solution for specular surfaces which supports data pre-filtering, and with an evaluation speed that is roughly independent of the filtering footprint size. Its memory usage and evaluation speed are significantly more favorable than for existing methods. Our solution is based on high-resolution binning in the half-vector domain, which allows us to quickly build a very memory efficient data structure.
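The actual binning and lobe model are not spelled out in the abstract above, so the sketch below only illustrates the general idea under assumed choices (a 2D histogram over projected normals standing in for the half-vector domain, and a simple Gaussian lobe): evaluation sums over bins, so its cost is roughly independent of how many texels fall inside the filter footprint.

```python
import numpy as np

def build_half_vector_histogram(normals, bins=128):
    """Bin the footprint's unit normals by their (x, y) projection; the bin
    centers act as representative half-vectors at evaluation time."""
    hist = np.zeros((bins, bins))
    ij = np.clip(((normals[:, :2] * 0.5 + 0.5) * bins).astype(int), 0, bins - 1)
    np.add.at(hist, (ij[:, 0], ij[:, 1]), 1.0)
    return hist / len(normals)

def eval_filtered_specular(hist, wo, wi, roughness=0.05):
    """Evaluate an assumed Gaussian lobe around the actual half-vector,
    summing over occupied bins instead of over the footprint's texels."""
    bins = hist.shape[0]
    h = wo + wi; h /= np.linalg.norm(h)
    xs = (np.arange(bins) + 0.5) / bins * 2.0 - 1.0
    gx, gy = np.meshgrid(xs, xs, indexing="ij")
    gz = np.sqrt(np.clip(1.0 - gx**2 - gy**2, 0.0, 1.0))
    d2 = (gx - h[0])**2 + (gy - h[1])**2 + (gz - h[2])**2   # bin normal vs. half-vector
    lobe = np.exp(-d2 / (roughness**2))
    return float((hist * lobe).sum())

normals = np.random.normal([0, 0, 1], 0.05, size=(4096, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
hist = build_half_vector_histogram(normals)
wo = np.array([0.0, 0.0, 1.0]); wi = np.array([0.3, 0.0, 0.95]); wi /= np.linalg.norm(wi)
print(eval_filtered_specular(hist, wo, wi))
```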
We present a method for integrating surface and volume caustics in a production path tracer. We combine progressive photon mapping with photon guiding in an on-demand framework that avoids the overhead of general purpose bidirectional algorithms. We turn bias into noise during rendering to improve adaptive image refinement and denoising.
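For context, a sketch of the standard progressive-photon-mapping statistics update (in the style of Hachisuka et al.), which is the usual mechanism by which residual bias is traded for noise over iterations; the alpha value and bookkeeping layout are assumptions, not the paper's exact implementation.

```python
def ppm_update(radius2, n_accum, flux, n_new, new_flux, alpha=0.7):
    """Progressive photon mapping update for one measurement point: shrink the
    gather radius each pass so the bias vanishes in the limit, which shows up as
    noise that adaptive refinement and denoising can then handle."""
    if n_new == 0:
        return radius2, n_accum, flux
    n_next = n_accum + alpha * n_new
    ratio = n_next / (n_accum + n_new)
    radius2_next = radius2 * ratio            # shrink squared gather radius
    flux_next = (flux + new_flux) * ratio     # keep accumulated flux consistent with the new radius
    return radius2_next, n_next, flux_next

# Example: one pass that gathered 40 photons carrying 0.3 units of flux.
print(ppm_update(radius2=0.01, n_accum=100.0, flux=1.2, n_new=40, new_flux=0.3))
```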
Point lights with an inverse-square attenuation function are commonly used in computer graphics. We present an alternative formulation of point light attenuation that treats point lights as simplified forms of spherical lights. This eliminates the singularity of the inverse-square light attenuation function and makes it easier to work with point lights in practice. We also present how the typical ad hoc modifications of the inverse-square formula can be improved based on our formulation.
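The talk's exact formulation is not reproduced here; as an illustration of the idea, the sketch below uses one spherical-light-style attenuation that is finite at zero distance and converges to the familiar inverse square once the distance greatly exceeds the assumed light radius.

```python
import math

def point_attenuation(d, radius=0.1):
    """Spherical-light-style attenuation: finite at d = 0 and converging to the
    familiar 1/d^2 once d is much larger than the assumed light radius."""
    return (2.0 / (radius * radius)) * (1.0 - d / math.sqrt(d * d + radius * radius))

def inverse_square(d):
    return 1.0 / (d * d) if d > 0 else float("inf")

for d in (0.0, 0.05, 0.1, 0.5, 2.0):
    print(f"d={d:4.2f}  spherical={point_attenuation(d):10.3f}  1/d^2={inverse_square(d):10.3f}")
```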
We present a visual effects production workflow for using spectral sensitivity data of DSLR and digital cinema cameras to reconstruct the spectral energy distribution of a given live-action scene and perform rendering in physical units. We can then create images that respect the real-world settings of the cinema camera, properly accounting for white balance, exposure, and the characteristics of the sensor.
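A minimal sketch of the kind of fit such a workflow implies, assuming a smooth Gaussian SPD basis, Tikhonov regularization, and toy sensitivity curves (none of which are taken from the talk): given the camera's spectral sensitivities and a captured RGB triplet, solve a small regularized least-squares problem for the SPD coefficients.

```python
import numpy as np

def reconstruct_spd(rgb, sensitivities, n_basis=8, smoothness=1e-3):
    """Fit a smooth spectral power distribution whose response through the
    camera's sensitivity curves matches the captured RGB triplet.

    sensitivities: (3, N) R, G, B sensitivity sampled at N wavelengths.
    Returns the reconstructed SPD sampled at the same N wavelengths."""
    n = sensitivities.shape[1]
    lam = np.linspace(0.0, 1.0, n)
    centers = np.linspace(0.0, 1.0, n_basis)
    basis = np.exp(-0.5 * ((lam[None, :] - centers[:, None]) / 0.12) ** 2)  # Gaussian basis (assumed)
    A = sensitivities @ basis.T                   # (3, n_basis): camera response of each basis SPD
    AtA = A.T @ A + smoothness * np.eye(n_basis)  # Tikhonov regularization keeps the fit smooth
    coeffs = np.linalg.solve(AtA, A.T @ np.asarray(rgb))
    return np.clip(coeffs @ basis, 0.0, None)     # SPD, clamped to non-negative

# Toy example with made-up Gaussian sensitivities over 400-700 nm.
wl = np.linspace(400, 700, 61)
sens = np.stack([np.exp(-0.5 * ((wl - mu) / 30.0) ** 2) for mu in (600, 550, 460)])
print(reconstruct_spd([0.8, 0.6, 0.3], sens)[:5])
```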
In the past decade, we’ve discovered over 4,000 exoplanets, or faraway worlds orbiting other stars. However, we can’t yet take a clear picture of any of them, so how can we begin to imagine what they look like?
Three teams at NASA’s Goddard Space Flight Center developed a unique approach to tackling this problem for an exoplanet called TOI-700 d. By combining 3D climate models, a newly-developed module for an exoplanet visualization tool, and Adobe After Effects, they successfully visualized possible scenarios for the planet (Figure 1) and offer an effective model for approaching future discoveries.
Signs of Life is Griffith Observatory's new 35-minute, 8K, 60fps planetarium show in which we discover what it took to put life in the universe in the one place where we know it exists. In this program we'll discuss how the team was guided by Griffith Observatory's legacy in science-based storytelling. The team will describe the artistic process of merging cinematic sensibilities with scientific accuracy to create worlds in the universe. We'll also explore the importance and effectiveness of presenting science information artistically.
In Walt Disney Animation Studios’ “Frozen 2”, the wind spirit Gale appears in multiple forms throughout the film. Gale can only be seen through its effects on other objects in the world. This includes interactions with leaves, debris, and other environmental elements, as well as its effect on the cloth and hair of the characters: creating moments where the character is present, but not visible. Having to animate, simulate, and understand the character's emotional intent, all without true visual characteristics, posed many challenges. This talk addresses the solutions developed and the process we took to get there.
In Walt Disney Animation Studios’ “Frozen 2”, the Nokk appears as a horse made of water. Throughout the film, he assumes different forms: at times wild, and others calm; above and below the waterline; frozen and transitioning between states. This talk will describe the collaborative process of design, animation, simulation and lighting to achieve the look and feel of a character made of water.
In “Frozen 2”, a key story point is centered around the destruction of a large dam. The scale and scope of this effect necessitated the development of a cross-departmental, effects-driven workflow. Effects were introduced and planned at the layout stage before animation to choreograph the dam collapse sequence and to enable the animators to have the character react to the destruction. During this show, we also further developed the ILM workflow integration at Walt Disney Animation Studios (WDAS) [Harrower et al. 2018].
Pose-space sculpting is a key component in character rigging workflows used by digital artists to create shape corrections that fire on top of deformation rigs. However, hand-crafting sculpts one pose at a time is notoriously laborious, involving multiple cleanup passes as well as repetitive manual edits. In this work, we present a suite of geometric tools that have significantly sped up the generation and reuse of production-quality sculpts in character rigs at Pixar. These tools include a transfer technique that refits sculpts from one model to another, a surface reconstruction method that resolves entangled regions, and a relaxation scheme that restores surface details. Importantly, our approach allows riggers to focus their time on making creative sculpt edits to meet stylistic goals, thus enabling quicker turnarounds and larger design changes with a reduced impact on production. We showcase the results generated by our tools with examples from Pixar’s feature films Onward and Soul.
In Pixar’s Onward, the character Dad had an upper half that consisted of a stuffed hoodie, puffy vest, and garden gloves. The arms were floppy, stuffed sleeves able to swing freely, while the head was a cinched, stuffed hood topped with a cap and sunglasses. His lower body was rigged and simulated like a typical character. Knowing it was unrealistic to hand-animate the loose swinging arms and squishy upper body for a feature-length project, we developed a hybrid simulation/animation rig using tetrahedral volumes with complex rest-state deformation and animatable targeting. This resulted in a robust, iterative workflow where simulation and animation were used together.
One of the main obstacles to applying the latest advances in motion synthesis to feature-film character animation is that these methods operate directly on skeletons instead of high-level rig parameters. Loosely known as the “rig inversion problem”, this hurdle has prevented the crowd department at Pixar from procedurally modifying character skeletons close to camera, since such procedural edits could not be reliably transferred to the equivalent motion on the full character for polish.
Prior attempts at solving this problem have tended to involve hard-coded heuristics, which are cumbersome for production to debug and maintain. To alleviate this overhead, we have adopted an approach of solving the inversion problem with an iterative least-squares algorithm. However, most existing methods of this kind require the rig Jacobian, which is prohibitively expensive to compute in practice. To accelerate this process, we propose a method wherein an approximation of the rig is derived analytically through an offline learning process. Using this approximation, we invert full character rigs at real-time rates.
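A minimal sketch of the iterative least-squares step described above, with rig_eval, the approximate Jacobian provider, and the damping scheme all being stand-ins rather than the production system.

```python
import numpy as np

def invert_rig(rig_eval, approx_jacobian, target_joints, p0, iters=20, damping=1e-3):
    """Damped Gauss-Newton: find rig parameters p such that rig_eval(p) ~ target_joints.

    rig_eval(p)        -> flattened joint positions produced by the rig (stand-in)
    approx_jacobian(p) -> cheap approximation of d rig_eval / d p (e.g. learned offline)
    """
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        r = rig_eval(p) - target_joints              # residual in joint space
        J = approx_jacobian(p)                       # (n_joints*3, n_params), approximate
        H = J.T @ J + damping * np.eye(len(p))       # Levenberg-style damping for stability
        p -= np.linalg.solve(H, J.T @ r)
        if np.linalg.norm(r) < 1e-6:
            break
    return p

# Toy rig: joints are an affine function of the parameters (so the "learned"
# Jacobian is exact here); real rigs are nonlinear and far more expensive.
A = np.random.default_rng(0).normal(size=(9, 4))
rig = lambda p: A @ p
target = rig(np.array([0.3, -0.2, 0.5, 0.1]))
print(invert_rig(rig, lambda p: A, target, np.zeros(4)))
```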
The soul characters in Disney/Pixar’s Soul have a stylized appearance that sets them into a unique world, which introduced many new challenges. Everyone from the art department and character modelers and shaders to the technical directors and developers in the effects, lighting, and software groups collaborated to bring this new visual style to screen. The soul world is abstract and ethereal; this needed to be balanced with visual clarity and design appeal.
As our character rigging and animation tools use rigged surfaces, a key challenge was presenting a new representation derived from this data that meets our visual goals. To achieve softness of volumetric form and dynamically changing linework, we built a system to procedurally generate this data in Houdini. Significant numerical computation was needed to create this data at the required fidelity. We developed an automated system for managing this computation in a configurable way, while keeping data for downstream renders in sync with changes to character performances.
Spies in Disguise was the largest and most challenging show the Blue Sky Studios Crowds Team has delivered to date. Crowds were used in 449 of the film’s shots, necessitating a scalable pipeline that could handle a wide variety of crowd characters and performances. From supermassive pigeon flocks to drone swarms, tightly choreographed rings of henchmen to naturalistic groups of pedestrians across multiple cities around the globe, more crowd simulations had to be delivered with greater finesse and artistic fidelity than ever before.
In order to produce shots with hundreds of multi-volume crowd characters for Soul, we could not rely on the same I/O-heavy pose-cache pipeline used for the hero characters [Coleman et al. 2020]. Our rendering and systems teams rated the total necessary storage for the “soul world” at over 100 TB for an average of two hero characters per shot. For expansive crowds of these characters to hit the same volumetric look while avoiding this I/O limitation, two new render-time technologies were developed. The first leveraged an existing volume rasterizer to pose volumes at render time, informed by a lattice deformation. The second allowed for rasterization of surface primvars to be delivered to the volume shaders.
The unique, stylized look of Trolls World Tour presented complexity challenges to our rendering pipeline. Each asset was covered in high-density fuzz, with millions of curves processed in each scene. We revisited the way we handle curve geometry with new optimization methods and a new caching system to achieve interactive loading speeds and scalable render capacity in our lighting tools.
To manage the ever-increasing complexity and collaboration in the creation of physical assets for LAIKA’s hybrid stop-motion/CG features, we developed an integrated browser-based workflow that enables automated resource leveling and scheduling in a centralized system accessible to the entire studio.
Animal Logic recently overhauled its outmoded lighting workflow for the film Peter Rabbit 2. Since Pixar’s Universal Scene Description (USD) was being adopted as the primary scene description format throughout the studio pipeline, this technology became a natural backbone around which to implement the new lighting toolkit. Following previous work to integrate USD into our animation pipeline [Baillet et al. 2018], we introduce Grip, a USD-native library which provides a node-based approach to authoring procedural modification of scenes; and Filament, a Qt-based application serving as the artist front end for interacting with a USD scene, the Grip engine, the production renderer, and pipeline tools.
We introduce Bond, a proprietary deformation system able to load geometry data directly from Pixar™ Universal Scene Description (USD) and to compute complex deformation chains on the GPU using NVIDIA® CUDA®. Bond also integrates tightly with Autodesk Maya®. This system follows on from our work to integrate USD into our animation pipeline [Baillet et al. 2018].
Bond has been used to deform all characters and props across the Peter Rabbit 2 movie’s 1300 shots, achieving high frame rates during playback and rig interaction.
Everest, from Abominable, is a main character who is covered in fur, doesn’t talk, and appears in over 650 shots. His huge size, abundance of fur, the fur’s emotional response, a wide range of biped, quadruped, and rolling motions, magical abilities, along with interacting characters, windy environments, and stylized shapes created by Animators, produced numerous challenges for his fur grooming and motion. This talk presents the various techniques used to tame the challenges encountered on that fantastic fluffy snow-white fur-covered beast.
Framestore has been producing award winning creature effects for over 20 years, with hair, fur and feathers being crucial elements of these creatures’ visual fidelity. Simulating how these elements interact with other geometry, wind, cloth and media of varying viscosity across many hundreds of shots in a film is a time consuming and laborious process, typically requiring many refinement iterations to achieve the desired result. In this talk, we present Fibre, a stable, robust and highly parallel dynamics solver designed to help maximize production efficiency. With Fibre integrated into its proprietary fur pipeline, Framestore has been able to reduce manual post-simulation fixing by 80% and reduced the simulation time for fur and feathers by up to 50% and 80%, respectively.
DNEG’s in-house fur software, Furball, has been in continuous production use since 2012. During this time it has undergone significant evolution to adapt to the changing needs of production. We discuss how recent work on films such as Avengers: Endgame and Togo has led to a complete shift in the focus of our fur tools. This has helped us scale up to meet the requirements of ever more fur-intensive shows, while also opening up exciting opportunities for future development.
Since Brave, Pixar has used a system called Moss to manage procedural ground cover such as grass and dense debris. The Moss system made it easy for a show to develop a series of looks (or “types”) implemented in C++ with APIs mimicking a standard shading language. For Onward and Soul, we wanted to make development of types simpler for shading artists less familiar with C++, while still preserving the workflows and performance crucial to our vegetation-heavy shows. Inspired partially by the new shading interface in RenderMan’s RIS path tracer, we reimagined Moss as having a small number of core types defining both the structure (debris, particles, single-blade grass, feathered grass, etc.) and features (keep-alive, simulation, etc.). Look development and customization of these types would now be handled by Open Shading Language.
In an ongoing collaboration, LAIKA and Intel are combining expertise in machine learning and filmmaking to develop AI-powered tools for accelerating digital paint and rotoscope tasks. LAIKA’s stop-motion films, with unique character designs and 3D-printed facial animation, provide challenging use cases for machine learning methodologies. Intel’s team has focused on tools that fit seamlessly into the workflow and deliver powerful automation.
We propose practical anime-style colorization of an input line drawing. The key idea is strategic withdrawal: automatic colorization is only committed where a prediction confidence, which indicates the expected accuracy of the predicted color labels, is sufficiently high. Furthermore, we investigate the relation between the proposed confidence, prediction accuracy, and the number of automatically colorized regions in order to maximize the efficiency of the overall colorization process, including both automatic prediction and manual correction, for practical use in production.
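A minimal sketch of that withdrawal decision under assumed inputs (per-region softmax outputs and a fixed confidence threshold): low-confidence regions are handed back to the artist rather than colorized automatically.

```python
import numpy as np

def colorize_with_withdrawal(region_probs, palette, threshold=0.9):
    """region_probs: (n_regions, n_labels) softmax outputs from the colorization model.
    Returns per-region colors, with low-confidence regions withdrawn (None) so an
    artist corrects them instead of the model guessing."""
    labels = region_probs.argmax(axis=1)
    confidence = region_probs.max(axis=1)            # proxy for expected accuracy
    colors = [palette[l] if c >= threshold else None for l, c in zip(labels, confidence)]
    auto_rate = float(np.mean(confidence >= threshold))
    return colors, auto_rate                          # tune threshold to trade accuracy vs. coverage

probs = np.array([[0.97, 0.02, 0.01], [0.55, 0.40, 0.05]])
print(colorize_with_withdrawal(probs, ["skin", "hair", "cloth"]))
```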
Upscaling techniques are commonly used to create high resolution images, which are cost-prohibitive or even impossible to produce otherwise. In recent years, deep learning methods have improved the detail and sharpness of upscaled images over traditional algorithms. Here we discuss the motivation and challenges of bringing deep learned super resolution to production at Pixar, where upscaling is useful for reducing render farm costs and delivering high resolution content.
We propose an unsupervised learning technique for image ranking of photos contributed by Google Maps users. A density tree is built for each point-of-interest (POI), such as The National Mall or the Louvre. This tree is used to construct clusters, which are then ranked based on size and quality. We choose a representative image for each cluster, resulting in a ranked set of high-quality, diverse, and relevant images for each POI. We validated our algorithm in a side-by-side preference study.
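A minimal sketch of the ranking stage under assumed inputs (clusters already produced by the density tree, each image carrying a quality score): clusters are ordered by size and aggregate quality, and one representative image is chosen per cluster.

```python
def rank_poi_images(clusters):
    """clusters: list of lists of (image_id, quality_score) tuples, one list per cluster.
    Rank clusters by (size, mean quality) and return each cluster's best image,
    yielding a ranked, diverse, high-quality photo set for the point of interest."""
    def cluster_key(cluster):
        mean_quality = sum(q for _, q in cluster) / len(cluster)
        return (len(cluster), mean_quality)
    ranked = sorted(clusters, key=cluster_key, reverse=True)
    return [max(cluster, key=lambda iq: iq[1])[0] for cluster in ranked]

clusters = [[("a", 0.7), ("b", 0.9), ("c", 0.6)], [("d", 0.95)]]
print(rank_poi_images(clusters))   # -> ['b', 'd']
```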
Using a commercial terrain engine [Englert 2012] as an example, we discuss both advantages and disadvantages of the continuous level-of-detail approach [Lindstrom and Pascucci 2001] to terrain rendering while comparing our implementation with other hybrid [Dick et al. 2010] and implicit [Khoury and Riccio 2018] approaches. We show how the mesh shader and inline ray-tracing features of DirectX 12 Ultimate can be used to remedy those disadvantages.
In our previous publication [Goodhue 2017], we presented a prototype based on promising new techniques for how the state of the art in animation compression for video game engines might be advanced. Later that year, we completed development of a production-quality version of that technology, which has since seen active use in the ongoing production of future AAA titles. Many of our previous hypotheses were put to the test, and the algorithms were generalized to support translation and scale animation keys in addition to rotations.
Having used this new technology for quite some time now, we were able to confirm our expectations regarding the sort of technical problems it presents, as well as how to solve them. We are also able to compare our results to other state-of-the-art techniques for the first time, thus confirming the efficacy of our method.
Need for Speed Heat is the latest installment in EA’s racing game franchise running on the Frostbite engine. The gameplay is centered around daytime and nighttime high speed races with exotic sports cars in a large open world. We improved on the depiction of the cars through a new material system and combination of reflection techniques. For the indirect day and night time lighting, we implemented a sparse GPU irradiance probe grid.
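Not Frostbite's implementation, but a minimal sketch of sampling a sparse irradiance probe grid: trilinearly blend the eight surrounding probes and skip any probes the sparse structure marks invalid; the data layout is an assumption.

```python
import numpy as np

def sample_probe_grid(pos, origin, spacing, irradiance, valid):
    """Trilinear blend of the 8 probes around `pos`.
    irradiance: (nx, ny, nz, 3) RGB irradiance per probe; valid: (nx, ny, nz) bool
    mask for the sparse grid (probes inside geometry / unallocated are skipped)."""
    f = (np.asarray(pos) - origin) / spacing
    i0 = np.floor(f).astype(int)
    t = f - i0
    result = np.zeros(3)
    total_w = 0.0
    for corner in range(8):
        off = np.array([(corner >> a) & 1 for a in range(3)])
        idx = tuple(i0 + off)
        if not valid[idx]:
            continue                                    # sparse: missing probes contribute nothing
        w = np.prod(np.where(off == 1, t, 1.0 - t))     # trilinear weight
        result += w * irradiance[idx]
        total_w += w
    return result / total_w if total_w > 0 else result

grid = np.ones((4, 4, 4, 3)) * 0.5
valid = np.ones((4, 4, 4), dtype=bool)
print(sample_probe_grid([1.3, 2.6, 0.4], origin=np.zeros(3), spacing=1.0,
                        irradiance=grid, valid=valid))
```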
“Myth: A Frozen Tale” is a VR short film created at Walt Disney Animation Studios. Inspired by the folklore of “Frozen”, the film is a fairy tale within a fairy tale which explores the past, present, and future of the elemental spirits of “Frozen 2”. The result is a visual poem which combines Disney Animation’s heritage of 2D animation, music, and real-time technology in novel ways.
To create an engaging, emotional experience on par with the initial designs, our team was presented with a number of technical and procedural challenges. This was the studio’s first use of Unreal Engine, so a significant amount of experimentation was necessary to find the workflow and VR integration techniques needed to achieve the high level of production quality we strive for.
In this talk, the team will explore many of these aspects of the production process. Using two case studies, we will discuss the design, challenges, collaborative workflows, and technological execution needed to bring “Myth: A Frozen Tale” to life.
With “Frozen 2” wrapped in late summer 2019, a small team of effects artists set out to learn an entirely new element creation pipeline, utilizing real-time technology at its core. These artists were able to build upon experience gained from previous VR efforts at the studio including “Cycles” [Gipson et al. 2018] and “a kite’s tale” [Wright et al. 2019]. With an aggressively short schedule and the unflagging support of their Myth peers, the team created uniquely stylized artwork using a new palette of tools to help translate these designs into real-time effects.
The performance challenges of real-time rendering for VR are well documented; they pose a particular challenge for rendering realistic human characters, with many of the usual techniques popularised in games not scaling well to high pixel counts and framerates. For Blood & Truth on PlayStation VR, we adapted existing techniques to fit our VR-focused forward renderer and invented novel replacements for expensive screen-space effects.
After having spent a year and a half working on the Looking Glass mask for HBO's celebrated Watchmen series, Monsters Aliens Robots Zombies Senior VFX Supervisor Nathaniel Larouche outlines how his team tackled the many creative, hardware, and software/pipeline challenges of creating the mask on a TV timeline and budget.
Simulating thousands of small squid-like creatures falling to their deaths, oozing a gel-like slime on impact and slowly melting away was no easy task. Managing the sheer number of pink, one-eyed extraterrestrial squids proved to be quite a challenge because they also had to interact with one another as well as with the different surrounding elements. We had complex fluid simulations to manage as the squids would first transform into gelatin and then completely dissolve. Shading the squids also proved to be quite a challenge because we had translucency and complex internal shading to handle, which required very long render times since we were dealing with such a large quantity of creatures. We needed to be able to iterate quickly so we could work closely with the client in order to find that perfect balance of squid dynamics such as splashing, melting and shading.
This case study will expound on how a parallel pipeline structure became crucial in the creation of a photorealistic CG dromedary, one of the main stars of Jumanji: The Next Level.
Disney+’s ‘Togo’ is a testament to the critical creative partnership between DNEG's Build, Rigging and Animation departments, in the pursuit of a realistic CG dog. This talk will explore the intricacies of creating photorealistic dogs from ideation to finish, demonstrating the process from an Animation standpoint, while addressing the collaborative nature of the project.
The Uber Advanced Technologies Group (ATG) aims to bring safe, reliable self-driving transportation to everyone, everywhere. In order to facilitate this mission, a suite of simulation technologies for exercising the self-driving vehicle (SDV) needs to be created. These simulations must be scalable, believable, and human-interpretable in order to provide consistent value.
Here we will cover how ATG leveraged a game engine to meet these demands. Topics covered include: evaluating self driving performance, authoring self driving tests, simulating robot-realistic worlds, and critical lessons learned while building these tools.
Described are two applications using immersive augmented reality (AR) and virtual reality (VR) for informal learning research. A critical design factor, resulting from the authentication process used in sourcing all text, media, and data, is the high information fidelity (truth) of all signals transmitted to the human. The AR Perpetual Garden App was developed to annotate the Carnegie Museum of Natural History's dioramas and gardens to bring learning to all visitors. The Virtual UCF Arboretum was developed to represent the real UCF Arboretum in VR for immersive learning research. More like a virtual diorama or virtual field trip, they are open to independent exploration and learning. Unlike fantasy games or creative animations, these environments use accurate content, with high information fidelity, to enhance immersion and presence. As data visualizations or simulations, and not point clouds or interactive 360 VR video, they can show past, present, and future scenarios from data. As applications intended for informal learning, the needs of learners as well as the institutional stakeholders were integrated in a participatory design process by extending traditional user-centered design with expert-learner-user-centered design. The design patterns will be of interest to a broad community concerned with perception, emotions, learning, immersion and presence, and any who are developing educational, training and certification, or decision-support applications with respect to improving natural knowledge.
Invite Only VR: A Vaping Prevention Game is a virtual reality (VR) videogame intervention focused on e-cigarette prevention in teens. To our knowledge, Invite Only VR is the first theory-driven e-cigarette prevention game to be developed for a VR platform, making it unique among the limited pool of existing e-cigarette intervention programs for adolescents. Invite Only VR capitalizes on the use of VR by delivering an intervention that arms teens to deal with peer-pressure situations surrounding e-cigarettes. VR is especially well-suited to this type of intervention because VR facilitates greater social presence, the subjective experience of being present with a “real” person, than other forms of technology. Invite Only VR not only simulates the presence of plausible peers, but it also uses voice recognition software throughout the game to allow the player to practice refusing peers in real time using their own voice. The game was developed with input from 4 focus groups comprised of 5 adolescents each to create a game narrative and situations that would feel authentic to the target audience. This careful background research ensured that the virtual characters would behave in the manner expected by players and use the appropriate colloquialisms when speaking about e-cigarettes. The intervention is currently being evaluated in a non-randomized cluster trial with 285 middle school students. Preliminary feasibility testing conducted on a prototype of the game indicates that playing Invite Only VR increases player e-cigarette knowledge and perceptions of e-cigarette harm. Moreover, teens who played the game reported a lower likelihood of experimenting with e-cigarettes in the future. In this initial evaluation, 83% of players agreed that they enjoyed playing the game and 78% said they would tell their friends to play, suggesting that Invite Only VR is an engaging way to convey the dangers of e-cigarettes to youth.
Character rigs are procedural systems that deform a character’s shape driven by a set of rig-control variables. Film-quality character rigs are highly complex and therefore computationally expensive and slow to evaluate. We present a machine learning method for approximating facial mesh deformations which reduces rig computations, increases the longevity of characters without rig upkeep, and enables portability of proprietary rigs into a variety of external platforms. We perform qualitative and quantitative evaluations on hero characters across several feature films, exhibiting the speed and generality of our approach and demonstrating that our method outperforms existing state-of-the-art work on deformation approximations for character faces.
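The network and training setup are not described in the abstract above; the sketch below only illustrates the general shape of such an approximator under assumed sizes, a tiny MLP mapping rig controls to per-vertex offsets (with random placeholder weights so it runs).

```python
import numpy as np

class DeformationApproximator:
    """Tiny MLP mapping rig controls -> per-vertex offsets from the neutral mesh.
    Weights would come from training against ground-truth rig evaluations;
    here they are random placeholders so the sketch runs."""
    def __init__(self, n_controls, n_vertices, hidden=256, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_controls, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, n_vertices * 3))
        self.b2 = np.zeros(n_vertices * 3)

    def __call__(self, controls, neutral_mesh):
        h = np.tanh(controls @ self.W1 + self.b1)
        offsets = (h @ self.W2 + self.b2).reshape(-1, 3)
        return neutral_mesh + offsets          # approximated deformed face mesh

neutral = np.zeros((5000, 3))
approx = DeformationApproximator(n_controls=150, n_vertices=5000)
posed = approx(np.random.default_rng(1).uniform(-1, 1, 150), neutral)
print(posed.shape)
```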
The recent “Phace” facial modeling and animation framework [Ichim et al. 2017] introduced a specific formulation of an elastic energy potential that induces mesh elements to approach certain prescribed shapes, modulo rotations. This target shape is defined for each element as an input parameter, and is a multi-dimensional analogue of activation parameters in fiber-based anisotropic muscle models. We argue that the constitutive law suggested by this energy formulation warrants consideration as a highly versatile and practical model of active elastic materials, and could rightfully be regarded as a “baseline” parametric description of active elasticity, in the same fashion that corotational elasticity has largely established itself as the prototypical rotation-invariant model of isotropic elasticity. We present a formulation of this constitutive model in the spirit and style of Finite Element Methods for continuum mechanics, complete with closed form expressions for strain tensors and exact force derivatives for use in implicit and quasistatic schemes. We demonstrate the versatility of the model through various examples in which active elements are employed.
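The exact energy is not reproduced here; the sketch below is a numeric illustration of an energy of the described form, a corotational-style term that vanishes whenever the deformation gradient equals any rotation of a prescribed target shape, and that reduces to standard corotational elasticity when the target is the identity.

```python
import numpy as np

def polar_rotation(M):
    """Rotation factor of the polar decomposition M = R S, via SVD (reflection-safe)."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:            # flip one axis to keep a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    return R

def active_energy(F, A, mu=1.0):
    """Illustrative energy of the described form: zero whenever the deformation
    gradient F equals (any rotation) x (target shape A); with A = I this is the
    familiar corotational energy mu * ||F - R(F)||_F^2."""
    M = F @ np.linalg.inv(A)
    R = polar_rotation(M)
    return mu * np.sum((M - R) ** 2)

A = np.diag([1.3, 0.9, 0.85])                      # prescribed "activated" target shape
theta = 0.4
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
print(active_energy(Rz @ A, A))                    # ~0: target reached modulo rotation
print(active_energy(np.eye(3), A))                 # > 0: element is being driven toward A
```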
Cyberpunk 2077 is a highly anticipated massive open-world video game with a complex, branching narrative. This talk details new research and innovative workflow contributions, developed by JALI, toward the generation of an unprecedented number of hours of realistic, expressive speech animation in ten languages, often with multiple languages interleaved within individual sentences. The speech animation workflow is largely automatic but remains under animator control, using a combination of audio and tagged text transcripts. We use insights from anatomy, perception, and the psycho-linguistic literature to develop independent and combined language models that drive procedural animation of the mouth and paralingual (speech-supportive, non-verbal expression) motion of the neck, brows, and eyes. Directorial tags in the speech transcript further enable the integration of performance-capture-driven facial emotion. The entire workflow is animator-centric, allowing efficient key-frame customization and editing of the resulting facial animation on any typical FACS-like face rig. The talk will focus equally on the technical contributions and their integration and creative use within the animation pipeline of the highly anticipated AAA game title Cyberpunk 2077.
Since the majority of the world of The Last of Us Part II has overcast lighting conditions, ambient lighting was a crucial component of our rendering system. Developing a game that is mainly ambient-lit is already a challenge on its own, but we also had to deal with the limited amount of processing power and memory on our target platform, the PlayStation 4. In this abstract we will mainly focus on improvements to our baked ambient lighting system that enabled us to produce convincing and consistent lighting results while maintaining our target of 30 fps and remaining within our limited memory budget.
The Last of Us Part II is set in multiple different environments with an extensive atmospheric look. In addition, the game uses a lot of transparent objects, such as glass and particle effects, that require proper compositing with the fog. This required us to create a new volumetric system that supports different fog environments and seamlessly blends with and ties together the rest of the world. We developed a new volumetric fog system based on a view-space frustum voxel grid (froxel grid) that allows us to properly composite fog with the rest of the scene. However, such a system also comes with a lot of production challenges. In this abstract we will focus on the methods we developed to improve visual quality, reduce artifacts arising from the grid-based nature of the fog, and fit within the performance budget.
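Not the production system, but a minimal sketch of froxel-grid indexing under assumed dimensions and an assumed exponential depth-slice distribution, the part that lets transparent objects composite against the fog accumulated up to their slice.

```python
import math

GRID_X, GRID_Y, GRID_Z = 24, 16, 64           # assumed froxel resolution
NEAR, FAR = 0.5, 128.0                        # assumed depth range covered by the fog grid

def froxel_coords(screen_uv, view_depth):
    """Map a screen-space UV in [0,1]^2 and a view-space depth to froxel indices.
    Depth slices follow an exponential distribution so nearby fog gets finer slices."""
    fx = min(int(screen_uv[0] * GRID_X), GRID_X - 1)
    fy = min(int(screen_uv[1] * GRID_Y), GRID_Y - 1)
    d = min(max(view_depth, NEAR), FAR)
    slice_f = math.log(d / NEAR) / math.log(FAR / NEAR) * GRID_Z
    fz = min(int(slice_f), GRID_Z - 1)
    return fx, fy, fz

# A transparent particle at depth 3.2 composites against the fog accumulated up to this froxel slice.
print(froxel_coords((0.5, 0.25), 3.2))
```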
Developing a game that satisfies high expectations of visual quality while running at 30 fps on a six-year-old console platform, the PlayStation 4, is quite a challenging task. The majority of the world of The Last of Us Part II contains alpha geometry, which performs poorly in the rendering pipeline, as well as expensive full-screen shaders with heavy resource usage. This required plenty of hacks and tricks to push the hardware to its limits and make sure we didn’t have to compromise needlessly on the visual fidelity of the game. This talk will mainly focus on case studies from the actual game, highlighting the most expensive parts of our game’s frame and how we came up with solutions to these performance hurdles.
Previous Naughty Dog titles used standard effect authoring, where particle emitters were manually placed in the environment or attached to character joints. In addition, render targets were assigned to characters so that blood, wetness, and other effects could be applied as part of the character rendering, but any particle rendering into them was predetermined.
The Last of Us Part II introduced more detailed interactions between effects and the rest of the world. In addition, the effects were GPU-driven, making them more efficient and allowing for larger particle counts. The effects required much less manual placement from the artists, utilized world information for spawning, and could attach to and simulate on geometry efficiently, following gravity and surface properties on animating geometry.
The next step forward in immersive storytelling: Felix & Paul Studios, the leading immersive entertainment company with a celebrated track record in virtual reality, makes its first foray into augmented reality through trackable objects, in this case a trackable storybook. The first project to be displayed on this platform will be the Jim Henson Storyteller series, with a new story adapted by Felix & Paul Studios called “The Seven Ravens,” produced with the support of CMF, SODEC, an Epic MegaGrant, and Magic Leap.
Virtual Reality (VR) is a transformative medium for storytelling where we can place you, the audience, directly inside the story and make you matter. We do this by giving you a role to play and empowering you to interact and build relationships with characters in the story. We present our experiments and learning in authoring interactive VR narratives across our two most recent projects: Baba Yaga and Bonfire. We showcase our Storyteller toolset for creating non-linear stories where interactivity is immersive. We then provide insights into creating emotive characters with the same quality character performances as in our hand-crafted linear animations.
This case study will show how the world's first 360VR sitcom, The BizNest, paves the way not just for immersive comedy, but for immersive storytelling generally. It will provide guidance for how to think about telling stories in XR (extended reality), techniques for how to bring immersive stories to life, and new conventions that creatives working in immersive disciplines can apply to their work. Eve Weston (writer/director) will walk the audience through the various processes used and will provide illustrative examples from The BizNest, which is the subject of a devoted chapter in Handbook of Research on the Global Impacts and Roles of Immersive Media [Weston, 2020] and won the DreamlandXR Best Project for Television award at CES 2020. The BizNest, an immersive workplace comedy, has developed a new approach for telling stories in a 360-degree environment, one that is poised to have as much effect on storytelling in immersive media as the invention of multi-cam did on storytelling for TV.
Eyes are crucial in depicting the emotions and psychology of a character. In this talk, we present the eye system we developed at OLM Digital to design and control eyes for Japanese animation, where eyes typically come in many different shapes and colors. Our tool is straightforward and gives users a lot of control. Our artists can create elongated irises and pupils, add the cornea bump and refraction, and add lighting effects like caustics and catchlights. The render pass exports the necessary information to adjust the final appearance at compositing time, giving a lot of creative freedom to the artists.
Finding the look of the soul characters for the movie Soul was a challenging process. The art direction was based on an ethereal design and turned out to be a moving target during the look development collaboration between the different departments involved. While on a tight schedule, multiple technical approaches were explored in order to find solutions to the difficult design challenges. The final approach features volumetric shading with a unique line treatment on extremely flexible yet simple and appealing character designs.
During “Show Yourself” in Walt Disney Animation Studios’ “Frozen 2,” materials and lights are the main representation for three key story elements: glacial ice, the magic of the Spirits, and the concept of memory. This talk covers the creative approaches and collaborative workflows that brought these elements to the screen.
Short Circuit is an experimental professional development program that began in 2016 where anyone at Walt Disney Animation Studios can pitch an idea and potentially be selected to create an original experimental short film with the support of the studio. This innovative program aims to develop and train studio talent by promoting the culture of storytelling throughout Disney Animation. By design, this program highlights new voices, encourages risk taking in both visual style and story, and supports technical innovation in the filmmaking process. The program has so far given 20 Disney Animation filmmakers the opportunity to create their own short, 14 of which have made their debut on Disney+. These experimental shorts have pushed the boundaries of storytelling and art, exploring stylized looks in a condensed production period. This talk will present the history and motivation behind Short Circuit, highlight how small teams worked efficiently to explore storytelling and unique art styles and discuss how lessons from the program have impacted filmmaking at Walt Disney Animation Studios.
In Walt Disney Animation Studios’ “Frozen 2”, costumes play an important part in the character design and story. Intricate embroidery on the costumes captures integral components of the characters’ personalities, symbolized by the different shapes and patterns. One of the challenges for the character team was the realization of the intricate embroidery work that is essential to the character design. We present a new curve-based approach to procedurally generate complex embroidery that can take 2D visual designs represented by line strokes and produce renderable curves. The curves deform along with the costumes while staying flattened on their surfaces. The method is straightforward and intuitive. Authoring and visualization are fast and easy, allowing quick iterations without a large amount of manual work from artists when a design changes. The method supports free-form stitches with threads of various widths, colors and opacities, enabling a wide range of embroidery styles.
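The production generator is not described in detail above; the sketch below illustrates one way a 2D design stroke could be turned into stitch-like curve segments (a simple satin zigzag in UV space, with surface binding and deformation omitted); the spacing, width, and layout are assumptions.

```python
import numpy as np

def satin_stitches(stroke, width=0.02, spacing=0.01):
    """stroke: (N, 2) polyline in the costume's UV space.
    Returns a list of short 2D stitch segments zigzagging across the stroke.
    In production these points would then be bound to the garment surface so the
    stitches ride along with (and stay flattened on) the deforming costume."""
    stroke = np.asarray(stroke, dtype=float)
    seglens = np.linalg.norm(np.diff(stroke, axis=0), axis=1)
    arclen = np.concatenate([[0.0], np.cumsum(seglens)])
    n_stitches = max(int(arclen[-1] / spacing), 1)
    stitches = []
    for i in range(n_stitches):
        t = i * spacing
        j = int(np.searchsorted(arclen, t, side="right")) - 1     # segment containing arclength t
        j = min(max(j, 0), len(seglens) - 1)
        local = (t - arclen[j]) / max(seglens[j], 1e-8)
        p = stroke[j] * (1 - local) + stroke[j + 1] * local
        tangent = (stroke[j + 1] - stroke[j]) / max(seglens[j], 1e-8)
        normal = np.array([-tangent[1], tangent[0]])
        side = 1.0 if i % 2 == 0 else -1.0                         # alternate sides -> satin zigzag
        stitches.append((p - side * normal * width, p + side * normal * width))
    return stitches

print(len(satin_stitches([(0, 0), (0.1, 0.0), (0.2, 0.05)])))
```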
We present a new technique to refit garments between characters of different shapes. Our approach is based on a novel iterative scheme that alternates relaxation and rebinding optimizations. In the relaxation step, we use affine-invariant coordinates in order to adapt the input 3D garment to the shape of a target character while minimizing mesh distortion. In the rebinding step, we reset the spacing between the refitted garment and the character body controlled by user-prescribed tightness values. Our method also supports additional constraints that encode the spatial and structural arrangement of multi-layers, seams, and fold-overs. We employed this refit tool in Pixar’s feature film Soul to transfer garment pieces between various digital characters.
The art direction of Soul’s version of New York City sought a highly detailed “hypertextural” style to contrast with the ethereal volumetric world our characters visit later in the film. Many New York City shots were extreme closeups of main characters playing various instruments. This meant extreme closeups of our main characters’ various garments. The Soul characters team identified that increasing the detail of our garment assets heightened the separation between the two worlds. To help establish this highly detailed look, the team developed a render-time deformation pipeline for geometric thread detail.
We introduce a new approximate Fresnel reflectance model that enables the accurate reproduction of ground-truth reflectance in real-time rendering engines. Our method is based on an empirical decomposition of the space of possible Fresnel curves. It is compatible with the preintegration of image-based lighting and area lights used in real-time engines. Our work makes it possible to use a reflectance parametrization [Gulbrandsen 2014] that was previously restricted to offline rendering.
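The new approximation itself is not reproduced here; for context, a sketch of the [Gulbrandsen 2014] artist parametrization referred to above, following our reading of that reference: reflectivity and edge tint are mapped to a complex index of refraction, which feeds the exact conductor Fresnel equations that a real-time approximation would be fit against.

```python
import math

def gulbrandsen_ior(r, g):
    """Map artist reflectivity r (at normal incidence) and edge tint g to a complex
    IOR (n, k), per the [Gulbrandsen 2014] parametrization cited above."""
    r = min(max(r, 0.0), 0.9999)
    n_max = (1.0 + math.sqrt(r)) / (1.0 - math.sqrt(r))
    n_min = (1.0 - r) / (1.0 + r)
    n = g * n_min + (1.0 - g) * n_max
    k2 = max(((n + 1.0) ** 2 * r - (n - 1.0) ** 2) / (1.0 - r), 0.0)
    return n, math.sqrt(k2)

def fresnel_conductor(cos_theta, n, k):
    """Exact unpolarized conductor Fresnel reflectance (the ground truth that a
    real-time approximation would be compared against)."""
    c2 = cos_theta * cos_theta
    s2 = 1.0 - c2
    t0 = n * n - k * k - s2
    a2b2 = math.sqrt(max(t0 * t0 + 4.0 * n * n * k * k, 0.0))
    t1 = a2b2 + c2
    a = math.sqrt(max(0.5 * (a2b2 + t0), 0.0))
    t2 = 2.0 * a * cos_theta
    Rs = (t1 - t2) / (t1 + t2)
    t3 = c2 * a2b2 + s2 * s2
    t4 = t2 * s2
    Rp = Rs * (t3 - t4) / (t3 + t4)
    return 0.5 * (Rs + Rp)

n, k = gulbrandsen_ior(r=0.7, g=0.3)
for deg in (0, 45, 80):
    print(deg, round(fresnel_conductor(math.cos(math.radians(deg)), n, k), 4))
```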