Here we present a unique approach to building a highly scalable, multi-functional, and production-friendly feature animation pipeline on a core infrastructure composed of microservices. We discuss basic service-layer design as well as the benefits and challenges of moving decades-old production processes for an entire animation studio to a new, transactional pipeline operating against a compartmentalized technology stack. The goal is to clean up the clutter of a legacy pipeline and enable a more flexible production environment using modern, web-based technology.
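As a sketch only (the talk does not publish its API; the routes, field names, and framework choice below are hypothetical), a pipeline microservice of the kind described might expose one narrow, transactional HTTP endpoint per concern:

```python
# Hypothetical sketch of one pipeline microservice: a small HTTP service
# exposing asset versions behind a transactional API. Routes and field
# names are illustrative assumptions, not the studio's actual design.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for the service's backing store.
ASSETS = {}

@app.route("/assets/<asset_id>/versions", methods=["POST"])
def publish_version(asset_id):
    # Each publish is a single transaction against this one service;
    # other pipeline concerns (review, dailies, farm) live in their own
    # services behind the same kind of narrow interface.
    payload = request.get_json(force=True)
    versions = ASSETS.setdefault(asset_id, [])
    version = {"number": len(versions) + 1, "path": payload["path"]}
    versions.append(version)
    return jsonify(version), 201

@app.route("/assets/<asset_id>/versions", methods=["GET"])
def list_versions(asset_id):
    return jsonify(ASSETS.get(asset_id, []))

if __name__ == "__main__":
    app.run(port=8080)
```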
Though it offers immense and undeniable compute scalability, cloud computing has a reputation for being more expensive than local alternatives. But does it have to be? We'll use data from visual effects company Atomic Fiction to look at cost-optimization strategies such as preemptible instances, cost limits, vertical vs. horizontal scaling, and instance-type selection, and show how those strategies can reduce production costs.
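To make the arithmetic concrete, here is a back-of-the-envelope cost model in Python. Every rate, overhead factor, and frame count below is a placeholder assumption for illustration, not Atomic Fiction's data:

```python
# Toy render-cost model; all numbers are illustrative assumptions.
FRAME_CORE_HOURS = 4.0      # assumed core-hours to render one frame
FRAMES = 2000               # assumed frame count for a sequence

# Assumed hourly prices for a 16-core instance.
ON_DEMAND_RATE = 0.80       # $/hr, regular on-demand instance
PREEMPTIBLE_RATE = 0.16     # $/hr, preemptible (can be reclaimed)
PREEMPTION_OVERHEAD = 1.10  # assume ~10% wasted work from preemptions

core_hours = FRAME_CORE_HOURS * FRAMES
instance_hours = core_hours / 16

on_demand_cost = instance_hours * ON_DEMAND_RATE
preemptible_cost = instance_hours * PREEMPTION_OVERHEAD * PREEMPTIBLE_RATE

print(f"on-demand:   ${on_demand_cost:,.2f}")
print(f"preemptible: ${preemptible_cost:,.2f}")

# Note on horizontal scaling: spreading the same core-hours across more
# machines changes wall-clock turnaround, not total instance-hours, so
# cost stays roughly flat while iteration time shrinks.
```

Under these toy numbers, preemptible instances cut the render bill by roughly 4-5x even after a generous allowance for lost work.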
This talk presents the motivations and goals for developing the Shotgun Pipeline Toolkit (Toolkit), a platform for building, customizing, and evolving production pipelines. We cover the challenges of developing a product that is valuable for studios of all types and sizes, supports common operating systems, and is easy to use, all while providing the flexibility and customizability required by creative studios. We show the evolution of Toolkit from concept to release and the lessons learned while building pipeline components traditionally found only in studios with large development budgets. Finally, we look at recent work to lower Toolkit's barrier to entry as a step toward democratizing the pipeline.
This talk presents the techniques used to create the hair for 'The Fashionista Twins', Satin and Chenille, from the film Trolls. The conjoined twins are connected in a loop by their brightly colored hair. This seamless connection posed unique technical challenges in grooming, rigging, and the shot pipeline, and it required a collaborative effort to bring their hair to life.
Neural Style Transfer is a striking, recently developed technique that uses neural networks to artistically redraw an image in the style of a separate reference image. This paper explores the use of this technique in a production setting, applying Neural Style Transfer to redraw key scenes in Come Swim in the style of the impressionistic painting that inspired the film. We present a case study on how the technique can be driven within the framework of an iterative creative process to achieve a desired look, and propose a mapping of the broad parameter space to a key set of creative controls. We hope this study can provide insights for others who wish to use the technique in a production setting and guide priorities for future research.
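For readers who want to connect those creative controls to the underlying math, the NumPy sketch below shows the content loss and Gram-matrix style loss at the core of the standard Neural Style Transfer formulation. The feature maps here are random placeholders standing in for activations of a pretrained CNN, and the weights alpha and beta are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the two losses at the heart of Neural Style Transfer.
# Feature maps are assumed to come from a pretrained CNN (e.g. VGG);
# random arrays stand in for them here just to make the math concrete.

def gram_matrix(features):
    """Channel-by-channel feature correlations; these capture 'style'."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(gen_feats, style_feats):
    return np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)

def content_loss(gen_feats, content_feats):
    return np.mean((gen_feats - content_feats) ** 2)

rng = np.random.default_rng(0)
gen = rng.standard_normal((64, 32, 32))
style = rng.standard_normal((64, 32, 32))
content = rng.standard_normal((64, 32, 32))

# The content/style weight ratio is one of the key creative controls a
# parameter mapping like the paper's has to expose to artists.
alpha, beta = 1.0, 1e3
total = alpha * content_loss(gen, content) + beta * style_loss(gen, style)
print(f"total loss: {total:.4f}")
```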
The ever-increasing complexity of the LEGO movies demanded a new way of managing project breakdowns. Animal Logic's fine-grained, modular representation for assets [Sarsfied and Murphy 2011] meant that hundreds of thousands of shots and shot objects needed to be managed. It was clear from our experience on The LEGO Movie that our existing text-based spreadsheet approach would not scale to the demands of The LEGO Batman Movie.
Game engine technology, when applied to traditional linear animation production pipelines, can positively alter the dynamics of animated content creation. With real-time interactivity, the iterative revision process improves, flexibility during scene assembly increases, and rendering overhead is potentially eliminated.
In recent years, the expected standard for facial animation and character performance in AAA video games has increased dramatically. The use of photogrammetric capture techniques for actor-likeness acquisition, coupled with video-based facial capture and solving methods, has improved quality across the industry. However, due to variability across project pipelines, increased per-project scope for performance capture, and a reliance on external vendors, it is often challenging to maintain visual consistency from project to project, and even from character to character within a single project. Given these factors, we identified the need for a unified, robust, and scalable pipeline for actor-likeness acquisition, character art, performance capture, and character animation.
Skin slide is the deformation effect in which the outer surface moves along its tangent directions, caused by the stretching of other skin regions and/or the dynamic motion of the underlying tissues. Such an effect is essential for expressing the natural deformation of humans, animals, and tightly fitting costumes. Previous methods for achieving skin sliding were either manually controlled, or laborious to set up and expensive to compute. We present a novel, automated method that achieves convincing skin sliding with minimal setup and run-time computation. Our method takes advantage of recent developments in elastic-body simulation formulated as optimization using the Alternating Direction Method of Multipliers (ADMM). This approach, which generalizes position-based and projective dynamics, allows intuitive integration of arbitrary constraints, such as collision against the original deforming surface. Collision handling is accelerated and stabilized by taking advantage of the local nature of the sliding. To accelerate convergence even further while respecting the artist-driven deformation, we propose a simple method to resume the simulation from the previous local parameterization. Various production results using a Maya deformer implementation of this technique demonstrate its efficiency and effectiveness.
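As an illustrative toy only (not the paper's production deformer), the NumPy sketch below shows the local-global iteration at the heart of projective dynamics, the scheme that ADMM generalizes, applied to a 1D chain of particles with rest-length constraints. All parameters are arbitrary assumptions:

```python
import numpy as np

# Toy local-global (projective dynamics style) solver for a 1D chain
# with rest-length constraints. ADMM generalizes this alternation and
# admits extra constraints such as collisions, as in the paper.

n, h, w = 5, 0.033, 1e4            # particles, timestep, constraint weight
rest = 1.0                         # rest length between neighbours
x = np.linspace(0.0, 6.0, n)       # start stretched: edge lengths = 1.5
v = np.zeros(n)
mass = np.ones(n)

# Global-step system matrix: M / h^2 plus a weighted graph Laplacian,
# assembled once since constraint topology is fixed.
A = np.diag(mass) / h**2
for i in range(n - 1):
    for a, b, s in ((i, i, 1), (i + 1, i + 1, 1),
                    (i, i + 1, -1), (i + 1, i, -1)):
        A[a, b] += s * w

def step(x, v):
    y = x + h * v                  # inertial prediction
    x_new = y.copy()
    for _ in range(10):            # alternate local projection / global solve
        rhs = mass * y / h**2
        for i in range(n - 1):
            d = x_new[i + 1] - x_new[i]
            # Local step: project each edge onto its rest length.
            p = rest if d >= 0 else -rest
            rhs[i] -= w * p
            rhs[i + 1] += w * p
        # Global step: one sparse-friendly linear solve.
        x_new = np.linalg.solve(A, rhs)
    return x_new, (x_new - x) / h

x, v = step(x, v)
print(np.round(np.diff(x), 3))     # edge lengths relax from 1.5 toward 1.0
```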
Recently, Industrial Light & Magic has begun exploring facial muscle simulation as a means of augmenting our blendshape-based facial animation workflow in order to attain higher-quality results. During this process, we discovered that a precise and accurate model of the underlying facial anatomy is key to obtaining high-quality facial simulation results that can be used for photorealistic hero characters. We present an overview of our workflow for developing such a model, along with some of the key anatomical lessons that were essential to the process.