In this presentation we describe a work-in-progress approach to a portable character animation pipeline for real-time scenarios that can dramatically reduce iteration time while also increasing character quality and flexibility. Simply put, it is a What You Rig and Animate (in the DCC app) is What You Get (in the VR experience) approach. Our implementation uses the Python-based Kraken tool to generate both a rig that runs in Autodesk Maya® and a version that Fabric Engine executes within Unreal Engine®. By essentially running the same full rig in both Maya and Unreal, we are able to maintain film-quality characters that keep the same richness and animation control.
Portable characters have their rigs defined in a way that allows them to run in any environment while maintaining the full flexibility and functionality of the original control and deformation rig, which in turn allows for artistic intent to be preserved at all stages.
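As a rough illustration of the idea, a portable rig can be described once as host-agnostic data and translated by per-host builders. The sketch below is hypothetical and does not use the real Kraken API; all class and key names are our own placeholders.

```python
# Hypothetical sketch of a portable rig definition: the rig graph is plain,
# host-agnostic data, and a per-host builder translates it into native nodes.
# None of these class or method names come from the real Kraken API.

RIG_DEFINITION = {
    "controls": [{"name": "arm_L_fk", "shape": "circle"}],
    "joints": [{"name": "upperarm_L", "parent": "clavicle_L"}],
    "solvers": [{"type": "ik2bone",
                 "inputs": ["arm_L_ikCtrl"],
                 "outputs": ["upperarm_L", "forearm_L"]}],
}


class MayaBuilder:
    """Builds native Maya nodes (e.g. via maya.cmds) from the definition."""
    def build(self, rig):
        for ctrl in rig["controls"]:
            print("maya: create control", ctrl["name"])


class FabricBuilder:
    """Emits a graph that Fabric Engine can execute inside Unreal Engine."""
    def build(self, rig):
        for solver in rig["solvers"]:
            print("fabric: compile solver", solver["type"])


if __name__ == "__main__":
    # The same definition drives both targets, so what is rigged and animated
    # in the DCC is what runs in the real-time engine.
    MayaBuilder().build(RIG_DEFINITION)
    FabricBuilder().build(RIG_DEFINITION)
```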
We present state-of-the-art character animation techniques for generating realistic anatomical motion of muscles, fat, and skin. Physics-based character animation uses computational resources in lieu of exhaustive artist effort to produce physically realistic images and animations. This principle has already seen widespread adoption in rendering, fluids, and cloth simulation. We believe that the savings in manpower and improved realism of results provided by a physics- and anatomy-based approach to characters cannot be matched by other techniques.
Over the past year we have developed a physics-based character toolkit at Ziva Dynamics and used it to create a photo-realistic human character named Adrienne. We give an overview of the workflow used to create Adrienne, from modeling of anatomical bodies to their simulation via the Finite Element Method. We also discuss practical considerations necessary for effective physics-based character animation.
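To make the Finite Element Method step concrete, here is a deliberately tiny one-dimensional illustration of the assemble-and-solve pattern. It is not the anatomical solver described above, which works on tetrahedral meshes with nonlinear materials, contact, and dynamics; the numbers below are arbitrary.

```python
# Minimal 1D linear FEM: an elastic bar fixed at one end, pulled at the other.
import numpy as np

n_elems = 4            # number of bar elements
length = 1.0           # total bar length (m)
EA = 1.0e4             # Young's modulus * cross-section area (N)
f_tip = 50.0           # force applied at the free end (N)

le = length / n_elems
n_nodes = n_elems + 1
K = np.zeros((n_nodes, n_nodes))

# Assemble the global stiffness matrix from per-element 2x2 blocks.
ke = (EA / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(n_elems):
    K[e:e + 2, e:e + 2] += ke

# Load vector: pull on the last node.
f = np.zeros(n_nodes)
f[-1] = f_tip

# Dirichlet boundary condition: node 0 is fixed, so solve the reduced system.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print("nodal displacements:", u)   # analytic tip displacement = f*L/EA = 0.005
```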
We present techniques to selectively and dynamically detect and smooth folds in a cloth mesh after simulation. This gives artists the controls to emphasize or de-emphasize certain folds, clean up simulation errors that can cause crumpled cloth, and resolve cloth-body interpenetrations that can occur during smoothing. These techniques are simple and fast, and help the artist direct, clean up, and enrich the look of simulated cloth.
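A minimal sketch of the selective smoothing idea follows. We assume folds are detected from the magnitude of a simple umbrella Laplacian and smoothed with an artist-weighted blend, and we stand in a sphere for the body during the push-out step; the production technique's detection and collision handling are more involved.

```python
# Sketch of selective post-sim cloth smoothing: vertices whose Laplacian
# magnitude exceeds a threshold are treated as fold vertices and blended
# toward the average of their neighbours, weighted by an artist mask.
# The body is approximated by a sphere for the interpenetration fix-up.
import numpy as np

def smooth_folds(verts, neighbours, artist_weight, fold_threshold=0.01,
                 body_center=None, body_radius=None):
    verts = verts.copy()
    # Average position of each vertex's one-ring neighbourhood.
    avg = np.array([verts[nbrs].mean(axis=0) for nbrs in neighbours])
    lap = avg - verts                                  # umbrella Laplacian
    fold_mask = np.linalg.norm(lap, axis=1) > fold_threshold

    # Blend detected fold vertices toward their neighbourhood average.
    w = (artist_weight * fold_mask)[:, None]
    verts += w * lap

    # Resolve any cloth-body interpenetration introduced by the smoothing.
    if body_center is not None:
        d = verts - body_center
        dist = np.linalg.norm(d, axis=1)
        inside = dist < body_radius
        verts[inside] = body_center + d[inside] / dist[inside, None] * body_radius
    return verts
```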
Creating a high-quality blendshape rig usually involves a large amount of effort from skilled artists. Although current 3D reconstruction technologies are able to capture accurate facial geometry of the actor, it is still very difficult to build a production-ready blendshape rig from unorganized scans. Removing rigid head motion and separating mixed expressions from the captures are two of the major challenges in this process. We present a technique that creates a facial blendshape rig based on performance capture and a generic face rig. The customized rig accurately captures actor-specific face details while producing a semantically meaningful FACS basis. The resulting rig faithfully serves both artist-friendly keyframe animation and high-quality facial motion retargeting in production.
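The first of the two challenges, removing rigid head motion, can be sketched as a rigid Procrustes (Kabsch) alignment of each captured frame against the neutral scan, fitted over quasi-rigid vertices such as the forehead and temples. The function below is our own simplified illustration, not the technique's exact formulation.

```python
# Sketch: remove rigid head motion by aligning each captured frame to the
# neutral scan with a rigid (Kabsch) fit over quasi-rigid vertices.
import numpy as np

def remove_rigid_motion(frame_verts, neutral_verts, rigid_ids):
    P = frame_verts[rigid_ids]        # quasi-rigid points on the capture
    Q = neutral_verts[rigid_ids]      # corresponding points on the neutral
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)

    # Kabsch: optimal rotation from the SVD of the covariance matrix.
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(U @ Vt))
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = Q.mean(0) - R @ P.mean(0)

    # Apply the rigid transform to the whole frame, leaving only the
    # non-rigid facial deformation relative to the neutral.
    return frame_verts @ R.T + t
```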
This talk will give an overview of Framestore's VFX pipeline, focusing on the colour transformation steps; it will then cover the migration to the Academy Color Encoding System (ACES) over the last few years.
We will show how the evolving state of ACES furthered its adoption at Framestore, as features were added to better match the requirements of VFX studios in general.
We will also mention a few other areas in which Framestore uses ACES technologies, and some areas where the latest ACES version 1.0 would benefit from additional work, along with some suggestions and workarounds.
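To make the kind of colour transformation step concrete, here is a minimal sketch using OpenColorIO 1.x with an ACES OCIO configuration. The config path is a placeholder, and the colour space names follow the ACES 1.0.x config naming and may differ in other config versions.

```python
# Minimal sketch of an ACES colour transform via OpenColorIO 1.x.
import PyOpenColorIO as OCIO

# Placeholder path; point this at the installed ACES OCIO config.
config = OCIO.Config.CreateFromFile("/path/to/aces_1.0.3/config.ocio")

# Working space (ACEScg) to a display-referred output transform.
processor = config.getProcessor("ACES - ACEScg", "Output - Rec.709")

# Transform a single mid-grey pixel.
rgb = processor.applyRGB([0.18, 0.18, 0.18])
print(rgb)
```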
Gaffer is an open-source application framework for visual effects production, which includes a multithreaded, node-based computation framework and a Qt-based UI framework for editing and viewing node graphs. The Gaffer frameworks were initiated independently by John Haddon in 2007, and have been used and extended in production at Image Engine since they were open-sourced in 2011. They have become vital to nearly the entire Image Engine pipeline, forming the basis of any node-based system we choose to develop.
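As a small example of the scripting workflow such a framework enables, the sketch below builds a tiny scene graph and pulls a value from it. Node, plug, and scene-location names are quoted from memory and may not match the current GafferScene API exactly.

```python
# Minimal Gaffer scripting sketch: build a tiny node graph and query it.
# Plug and location names here are assumptions and may differ by version.
import Gaffer
import GafferScene

script = Gaffer.ScriptNode()

sphere = GafferScene.Sphere()
script.addChild(sphere)
sphere["radius"].setValue(2.0)

# Computation is lazy: nothing is generated until an output plug is queried.
bound = sphere["out"].bound("/sphere")
print(bound)
```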
To ensure peak utilization of hardware resources, as well as to handle the increasingly dynamic demands placed on its render farm infrastructure, Weta Digital developed custom queuing, scheduling, job description and submission systems, which work in concert to maximize the available cores across a large range of non-uniform task types.
The render farm is one of the most important, high-traffic components of a modern VFX pipeline. Beyond the hardware itself, a render farm requires careful management and maintenance to ensure it is operating at peak efficiency. In Weta's case this hardware consists of a mix of over 80,000 CPU cores and a number of GPU resources, and as this has grown it has introduced many interesting scalability challenges.
In this talk we aim to present our end-to-end solutions in the render farm space, from the structure of the resource and the inherent problems introduced at this scale, through the development of Plow, our management, queuing and monitoring software, and Kenobi, our job description framework. Finally, we will detail the deployment process and the production benefits realized. Within each section we intend to present the scalability issues encountered and detail our strategy, process and results in solving these problems.
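By way of illustration, a job description framework of this kind typically lets tools express a job as a dependency graph of frame-range tasks that the scheduler can pack onto the farm. The sketch below is hypothetical and does not reflect Kenobi's actual API; all class names are our own.

```python
# Hypothetical sketch of a job-description layer: a job is a DAG of tasks,
# each expanding to per-frame commands, with dependencies and resource hints
# the scheduler can use to pack heterogeneous work onto the farm.
# These class names are illustrative and are not Kenobi's real API.

class Task:
    def __init__(self, name, command, frames, cores=1, depends_on=()):
        self.name = name
        self.command = command        # command template, e.g. "render -f {frame}"
        self.frames = frames          # iterable of frame numbers
        self.cores = cores            # per-task core reservation hint
        self.depends_on = list(depends_on)


class Job:
    def __init__(self, name):
        self.name = name
        self.tasks = []

    def add(self, task):
        self.tasks.append(task)
        return task

    def submit(self):
        # A real submission would serialise this graph and hand it to the
        # queuing system; here we just print the expanded commands.
        for task in self.tasks:
            for frame in task.frames:
                print(task.name, task.command.format(frame=frame))


job = Job("shot_010_beauty")
sim = job.add(Task("sim", "simulate -f {frame}", range(1, 5), cores=8))
job.add(Task("render", "render -f {frame}", range(1, 5), cores=16, depends_on=[sim]))
job.submit()
```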
The ever-increasing complexity and computational demands of modern VFX drive Weta's need to innovate in all areas, from surfacing, rendering and simulation to core pipeline infrastructure.
The 'matchmove', or camera-tracking, process is a crucial task and one of the first to be performed in the visual effects pipeline. An accurate solve for camera movement is imperative and will have an impact on almost every other part of the pipeline downstream. In this work we present a comprehensive analysis of the process at a major visual effects studio, drawing on a large dataset of real shots. We also present guidelines and rules-of-thumb for camera tracking scheduling which are, in what we believe to be an industry first, backed by statistical data drawn from our dataset. We also make available data from our pipeline which shows the amount of time spent on camera tracking and the types of shot that are most common in our work. We hope this will be of interest to the wider computer vision research community and will assist in directing future research.
This talk describes work that I have been doing using generative systems and the problems this raises in dealing with multi-dimensional parameter spaces. In particular, I am interested in problems where there are too many parameters for a simple exhaustive search and only a small number of parameter combinations are likely to achieve interesting results, but the user still wants to retain creative influence.
For a number of years I have been exploring how intricate complex structures may be created by simulating growth processes. In early work, such as the Aggregation (Lomas 2005) and Flow series, a small number of parameters controlled various effects that could bias the growth. These could be explored by simply varying all the parameters independently and running simulations to test the results.
Simple methods such as these work well when there are up to 3 parameters. However, as the number of parameters increases, the task rapidly becomes more complex, and methods that exhaustively sample all the parameters independently are no longer viable.
In this talk I will discuss how I have approached this problem for my recent Cellular Forms (Lomas 2014) and Hybrid Forms (Lomas 2015) works which can have more than 30 parameters, any of which could affect the simulation process in complex and unexpected ways.
In particular, systems that have the potential for interesting emergent results often exhibit difficult behavior, where most sets of parameter values create uninteresting regularity or chaos. Only at the transition areas between these states are the most interesting complex results found.
To help solve these problems I have been developing a tool called 'Species Explorer'. This uses a hybrid approach that combines both evolutionary and lazy machine learning techniques to help the user find combinations of parameters that may be worth sampling, supporting exploration for novelty as well as refinement of particularly promising results.
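A stripped-down sketch of that hybrid loop is shown below: promising parameter sets are mutated evolutionarily, and a k-nearest-neighbour "lazy learning" model trained on past user ratings ranks the candidates before anything is simulated. It is a simplification of the approach, not the actual Species Explorer code, and assumes parameters normalized to [0, 1].

```python
# Sketch of evolutionary search guided by lazy (k-NN) learning: mutate the
# best-rated parameter sets, predict scores for candidates from past user
# ratings, and only simulate the most promising ones.
import numpy as np

rng = np.random.default_rng(0)
N_PARAMS = 30

def knn_predict(candidate, rated_params, ratings, k=5):
    # Lazy learning: predict a rating as the mean of the k nearest rated sets.
    d = np.linalg.norm(rated_params - candidate, axis=1)
    return ratings[np.argsort(d)[:k]].mean()

def propose(rated_params, ratings, n_candidates=200, n_keep=10, sigma=0.05):
    # Evolutionary step: mutate the top-rated parameter sets.
    parents = rated_params[np.argsort(ratings)[-n_keep:]]
    picks = parents[rng.integers(len(parents), size=n_candidates)]
    candidates = np.clip(picks + rng.normal(0.0, sigma, picks.shape), 0.0, 1.0)
    # Ranking step: keep only the candidates with the best predicted ratings.
    scores = np.array([knn_predict(c, rated_params, ratings) for c in candidates])
    return candidates[np.argsort(scores)[-n_keep:]]

# Example: previously simulated parameter sets with user ratings in [0, 10].
rated = rng.random((50, N_PARAMS))
ratings = rng.random(50) * 10.0
next_batch = propose(rated, ratings)
print(next_batch.shape)   # the parameter sets to simulate and rate next
```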
Disney's live-action remake of The Jungle Book required us to build photorealistic, organic and complex jungle environments. We developed a geometry distributor with which artists could dress a large number of very diverse CG sets. It was used in over 800 shots to scatter elements ranging from debris to entire trees. Per-object attributes were configurable, and the distribution was driven by procedural shaders and custom maps.
This paper describes how the system worked and demonstrates the efficiency and effectiveness of the workflows which originated from it. We will present a number of scenarios where this pipelined, semi-procedural strategy for set dressing was crucial to the creation of high-quality environments.
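To sketch the core of such a distributor: positions are rejection-sampled against a density map and each placement carries per-object attributes such as asset choice, scale and rotation. The sketch assumes a flat ground plane and a 2-D array as the map; these are our simplifications, not the production tool.

```python
# Sketch of map-driven scattering: rejection-sample positions on a ground
# plane against a density map, then attach per-object attributes.
import numpy as np

rng = np.random.default_rng(42)

def scatter(density_map, extent, count, assets):
    h, w = density_map.shape
    placements = []
    while len(placements) < count:
        u, v = rng.random(2)
        # Rejection sampling: keep the point with probability given by the map.
        if rng.random() < density_map[int(v * (h - 1)), int(u * (w - 1))]:
            placements.append({
                "position": (u * extent, 0.0, v * extent),
                "asset": assets[rng.integers(len(assets))],
                "scale": rng.uniform(0.8, 1.2),
                "rotation_y": rng.uniform(0.0, 360.0),
            })
    return placements

density = rng.random((64, 64))           # e.g. painted or shader-generated map
items = scatter(density, extent=100.0, count=500,
                assets=["tree_a", "tree_b", "debris_rock"])
print(len(items), items[0])
```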
Allumette is an immersive, highly emotional, and visually complex virtual reality movie that takes place in a city floating amongst clouds. The story unfolds around you, and as the viewer, you are free to experience the action from a perspective of your choosing. This means you can move around and view the clouds from all angles, while the set and characters interact intimately with the landscape. This type of set is a formidable challenge even for traditional animated films, where huge resources and hours to render each frame are available, which makes achieving the look and feel of immersive clouds in VR uncharted territory full of difficult challenges. Existing lightweight techniques for real-time clouds, such as geometric shells with translucency shaders and sprite-based methods, suffer from a combination of poor quality and poor performance in VR, which led us to seek novel methods to tackle the problem. For Allumette, we first modeled clouds in virtual reality by painting cloud shells using a proprietary modeling tool, then used a third-party procedural modeling package to create and light the cloud voxel grids. Finally, these grids were exported with a custom file format and rendered using a ray marcher in our game engine. The resulting clouds take 0.6 ms per eye to render, and immerse the viewer in our cloud city.
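The final rendering step can be sketched as a transmittance-accumulating ray march through the density voxel grid. The CPU Python below illustrates the algorithm only; the in-engine version runs as a GPU shader with many optimisations, and all constants here are arbitrary.

```python
# Sketch of a cloud ray marcher: step a ray through a density voxel grid,
# accumulating lighting attenuated by transmittance, with an early-out
# once the cloud is effectively opaque.
import numpy as np

def march(grid, origin, direction, step=0.5, max_steps=256,
          sigma_t=0.8, light_color=np.array([1.0, 1.0, 1.0])):
    color = np.zeros(3)
    transmittance = 1.0
    pos = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    for _ in range(max_steps):
        idx = np.floor(pos).astype(int)
        if np.any(idx < 0) or np.any(idx >= grid.shape):
            break                                # left the voxel grid
        density = grid[tuple(idx)]
        if density > 0.0:
            absorption = np.exp(-sigma_t * density * step)
            # Single-scatter style accumulation (no shadow rays in this sketch).
            color += transmittance * (1.0 - absorption) * light_color
            transmittance *= absorption
            if transmittance < 0.01:
                break                            # early out: opaque enough
        pos += direction * step
    return color, transmittance

grid = np.zeros((32, 32, 32))
grid[8:24, 8:24, 8:24] = 0.3                     # a simple cubic "cloud"
rgb, alpha = march(grid, origin=[16.0, 16.0, 0.0], direction=[0.0, 0.0, 1.0])
print(rgb, 1.0 - alpha)
```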
Alice Through The Looking Glass was a movie that required controllable, highly stylized natural phenomena as a core ingredient of some of its major VFX sequences. Sony Pictures Imageworks developed various cross-departmental collaboration techniques to achieve this surreal aesthetic on two major, heavily featured effects. Specifically, these were the Rust and Oceans Of Time effects.
This paper will detail different examples of integrating unusual pipeline and workflow methods for these high-concept ideas. In the case of Oceans of Time, the traditional workflow from front end to back end was either circumvented or inverted entirely because of the unconventional nature of the effects involved. For the Rust effect, while the workflow was more linear in the traditional manner, the work between departments had to overlap to a significantly greater extent than is typical. For both sets of sequences, the reasons for certain production decisions will be outlined, as well as some of the benefits and issues that resulted.