As the now famous quote puts it, "The art challenges technology, and the technology inspires art": technology has always played an important part in Pixar's movie-making process. This course will show how we develop and utilize new technical advancements to tell more appealing stories, using real-world examples from our latest movie, Incredibles 2. It is a direct follow-up to our previous course from SIGGRAPH Asia 2015. Since that time, Pixar's pipeline has been heavily restructured, switching its main rendering algorithm to path tracing. We will describe how that technology has now matured, and how we were able to reintroduce complete artistic control in this new physically based world. At the same time, the pipeline kept evolving, and we introduced a new stage that did not exist before, in the form of deep compositing. Finally, we will focus on USD, OpenSubdiv, and the Hydra render engine to showcase how the whole pipeline is moving toward real-time feedback, not only for rendering but also for many other departments such as animation, simulation, and crowds.
Costumes are an important part of character design, acting, storytelling, and visual appeal in animation. However, it is challenging to achieve art-directed natural-looking motion and detail in CG animated clothing, due to technology, workflow, and budget constraints. This course will cover Pixar's latest approach to CG costumes, from design to tailoring to simulation, and how we try to address these challenges. Our goal is to continue working towards a balance between the detail and physicality of real costumes, and the stylized artistry and movement of 2D animated clothing.
Using examples from "Incredibles 2", "Coco", and other Pixar films, we will show how our artists approach the initial costume design direction, strategically plan designs to fit within time and technology constraints, and translate drawings into 3D clothing on stylized characters. Next, we will show how we create garment models using 3D and flat-panel tailoring methods, applications for common simulation parameters and settings, and robust out-of-the-box simulation techniques using cloth rigging and dynamic alterations. Finally, we will cover the tools used to simulate garments in shots, create appealing shapes and movement, and help Animation let the characters act with their clothing. Although Pixar uses proprietary tools, the principles can be applied to other pipelines. Along the way, we will talk about how the tailoring and simulation teams collaborate and fit in with other departments, such as Rigging, Shading, Animation, Art, and Crowds, as well as the current state of our technology and tool set. We will cover material for all levels of experience, with backgrounds ranging from artistic to technical.
Artists are underrepresented in the reinforcement learning (RL) community, partly because of the steep learning curve involved in understanding RL algorithms in depth. However, artists can play an important role in the RL community by defining innovative problems, designing creative environments, and creating novel applications. As a popular tool for artists to experiment with programming, Processing has been adopted by many artists as their entry point to programming. Given its popularity in the creative community, we use this tutorial as a stepping stone to bridge RL and creativity by introducing core RL concepts in Processing. The purpose of this workshop is twofold: 1) to attract more artists to the RL community by demonstrating RL demos in their familiar IDE; 2) to demystify RL problems by implementing them in a high-level language without any external libraries. Importantly, this tutorial is not about introducing a specific programming language; it focuses on how to analyze, frame, and solve RL problems.
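The workshop's approach of implementing RL from scratch, without external libraries, can be sketched briefly. The following is a hypothetical illustration in plain Python rather than Processing (the course's actual environment): tabular Q-learning on a tiny one-dimensional gridworld. The environment, reward, and all parameter values are invented for this example and are not from the course materials.

```python
import random

# Hypothetical 1-D gridworld: states 0..4, goal at state 4.
# Actions: 0 = left, 1 = right. Reward 1.0 on reaching the goal.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # tabular Q-values, no libraries

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

random.seed(0)
for _ in range(500):                       # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy moves right in every non-terminal state.
policy = [q.index(max(q)) for q in Q]
```

The same loop translates almost line for line into Processing's Java dialect, which is what makes a high-level, library-free treatment approachable for artists.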
In computer graphics, many traditional problems are now better handled by deep-learning-based, data-driven methods. In applications that operate on regular 2D domains, like image processing and computational photography, deep networks are the state of the art, beating dedicated hand-crafted methods by significant margins. More recently, other domains such as geometry processing, animation, video processing, and physical simulation have benefited from deep learning methods as well. The massive volume of research that has emerged in just a few years is often difficult for researchers new to this area to grasp. This tutorial gives an organized overview of the core theory, practice, and graphics-related applications of deep learning.
Near-eye (VR/AR) displays suffer from technical, interaction, and visual quality issues that hinder their commercial potential. This tutorial will deliver an overview of cutting-edge VR/AR display technologies, focusing on the technical, interaction, and perceptual issues that, if solved, will drive the next generation of display technologies. The most recent advancements in near-eye displays will be presented, providing (i) correct accommodation cues, (ii) near-eye varifocal AR, (iii) high-dynamic-range rendition, (iv) gaze-aware capabilities, either predictive or based on eye tracking, and (v) motion awareness. Future avenues for academic and industrial research related to the next generation of AR/VR display technologies will be analyzed.
I would like to conclude this presentation with a brief summary of the course and by sharing my vision of a playful near future in which the real and digital worlds coexist.
Since 2004, I have been developing game AI for many AAA titles:
• Chrome Hounds (Xbox360®)
• Demon's Souls (PS3®)
• Armored Core V (Xbox360®/PS3®)
• Final Fantasy XIV: A Realm Reborn
• Final Fantasy XV
Overview of modern GPU techniques for large-scale visualization
• Focus on volume data
Out-of-core techniques leveraging modern GPU features
• Virtual texturing approaches
Display-Aware, Remote and Web-Based Visualization
We build on the last three editions of this course at SIGGRAPH Asia (2015, 2016) and SIGGRAPH (2017), making it more hands-on and adding OpenISS. We explore rapid prototyping of interactive graphical applications for the stage and beyond using Jitter/Max and Processing with OpenGL and shaders, featuring connectivity with various devices. Such a rapid prototyping environment is ideal for entertainment computing, as well as for artists and live performances using real-time interactive graphics. We share the expertise we developed in connecting real-time graphics with on-stage performance through the Illimitable Space System (ISS) v2 and its OpenISS core.
• Simplex algorithm / interior point algorithms
• Standard solvers / quite fast
• Formulation is already non-trivial
• Graphical Example
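To make the bullets above concrete, here is a hedged example of the kind of two-variable linear program usually shown graphically, handed to a standard solver (SciPy's `linprog`, whose default HiGHS backend provides both simplex and interior-point methods). The objective and constraints are invented for illustration.

```python
from scipy.optimize import linprog

# Hypothetical two-variable LP, small enough to solve graphically:
#   maximize  3x + 5y
#   subject to  x + y <= 4,  x + 3y <= 6,  x >= 0, y >= 0
# linprog minimizes, so we negate the objective coefficients.
res = linprog(c=[-3, -5],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")   # HiGHS simplex / interior-point solver

optimum = -res.fun              # optimal objective value (undo the negation)
```

The optimum lies at the vertex (3, 1), where the two constraint lines intersect, which is exactly what the graphical method finds by sliding the objective line across the feasible polygon. Even in this toy case, turning a word problem into the `c`, `A_ub`, `b_ub` arrays is the non-trivial formulation step the bullets refer to.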
"Vulcan is the god of fire, including the fire of volcanoes, metalworking, and the forge, in ancient Roman religion and myth. Vulcan is often depicted with a blacksmith's hammer. The Vulcanalia was the annual festival held on August 23 in his honor. His Greek counterpart is Hephaestus, the god of fire and smithery. In Etruscan religion, he is identified with Sethlans."
Despite its wide adoption in the film production and animation industry, Monte Carlo light transport simulation is still prone to producing noisy images within short rendering times. Accelerating the convergence of Monte Carlo rendering without sacrificing its accuracy remains a challenging task. In this course, we will learn about gradient-domain light transport simulation, a family of physically based rendering techniques introduced in the past five years that can accelerate traditional Monte Carlo rendering by up to approximately an order of magnitude, based on gradient estimation and image reconstruction. In particular, we will introduce the fundamentals of gradient-domain rendering with gradient-domain path tracing, and then extend the discussion to gradient-domain bidirectional path tracing and photon density estimation. We also discuss volume rendering in the gradient domain before diving into advanced topics from recent state-of-the-art papers in this direction. We further discuss tips and tricks in open-source implementations of such algorithms, and provide ideas for future research directions.
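The core idea named in the abstract, estimating finite-difference gradients with low variance and recombining them with the noisy primal image through an L2 (screened Poisson) reconstruction, can be sketched in one dimension. This is an illustrative toy, not the course's implementation: the signal, noise levels, and reconstruction weight are all invented, but the least-squares structure mirrors the 2D case.

```python
import numpy as np

# Toy 1-D analogue of gradient-domain reconstruction: combine a noisy
# primal estimate with lower-noise finite-difference gradient estimates.
rng = np.random.default_rng(7)
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n)
truth = np.sin(x)

primal = truth + rng.normal(0.0, 0.5, n)               # high-variance primal
grads = np.diff(truth) + rng.normal(0.0, 0.05, n - 1)  # low-variance gradients

# Forward-difference operator D, shape (n-1, n).
D = np.zeros((n - 1, n))
D[np.arange(n - 1), np.arange(n - 1)] = -1.0
D[np.arange(n - 1), np.arange(1, n)] = 1.0

# Screened Poisson: minimize  alpha*||I - primal||^2 + ||D I - grads||^2.
alpha = 0.2
A = np.vstack([np.sqrt(alpha) * np.eye(n), D])
b = np.concatenate([np.sqrt(alpha) * primal, grads])
recon = np.linalg.lstsq(A, b, rcond=None)[0]

err_primal = np.mean((primal - truth) ** 2)  # MSE of primal alone
err_recon = np.mean((recon - truth) ** 2)    # MSE after reconstruction
```

Because the gradients carry most of the signal at low noise while the primal term anchors the overall offset, the reconstructed image has markedly lower error than the primal estimate alone, which is the source of the speed-up the course describes.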
The research and development department of Square Enix, the Advanced Technology Division (ATD), has recently released a virtual reality (VR) adaptation of a manga. Since the very beginning of this project, its theme has been the question:
Physics-based animation of elastic materials makes it possible to simulate dynamic deformable objects such as fabrics, human tissue, and hair. Due to their complex inner mechanical behaviour, it is difficult to replicate their motions both interactively and accurately. This course introduces students and practitioners to several parallel iterative techniques that tackle this problem and achieve elastic deformations in real time. We focus on techniques for applications such as video games and interactive design, with fixed and small hard time budgets available for physically based animation, and where responsiveness and stability are often more important than accuracy, as long as the results are believable. The course focuses on solvers able to fully exploit the computational capabilities of modern GPU architectures, effectively solving systems of hundreds of thousands of nonlinear equations in a matter of milliseconds. The course introduces the basic concepts of physics-based elastic objects and provides an overview of the different types of numerical solvers available in the literature. Then, we show how some variants of traditional solvers can address real-time animation and assess them in terms of accuracy, robustness, and performance. Practical examples are provided throughout the course, in particular showing how to apply the presented solvers to Projective Dynamics and Position-Based Dynamics, two recent and popular physics models for elastic materials.
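Since Position-Based Dynamics is one of the two models named above, a minimal hedged sketch may help: a pinned chain of particles with distance constraints, projected in Jacobi style so that every constraint computes its correction independently, the way a GPU solver would process them in parallel. This is serial Python with invented parameters, not an actual GPU implementation or the course's code.

```python
import numpy as np

n, rest, dt, iters = 8, 1.0, 1.0 / 60.0, 30
gravity = np.array([0.0, -9.8])

pos = np.stack([np.arange(n) * rest, np.zeros(n)], axis=1)  # start horizontal
vel = np.zeros_like(pos)
inv_mass = np.ones(n)
inv_mass[0] = 0.0                      # first particle is pinned

for _ in range(120):                   # simulate 2 seconds of motion
    prev = pos.copy()
    # Predict positions; the pinned particle (zero inverse mass) stays put.
    pos = pos + dt * (vel + dt * gravity) * (inv_mass[:, None] > 0)
    for _ in range(iters):             # Jacobi-style constraint projection
        delta = np.zeros_like(pos)
        for i in range(n - 1):         # one distance constraint per edge
            d = pos[i + 1] - pos[i]
            dist = np.linalg.norm(d)
            w = inv_mass[i] + inv_mass[i + 1]
            if dist < 1e-9 or w == 0.0:
                continue
            corr = (dist - rest) / (dist * w) * d
            delta[i] += inv_mass[i] * corr
            delta[i + 1] -= inv_mass[i + 1] * corr
        pos += 0.5 * delta             # under-relaxation for Jacobi stability
    vel = 0.9 * (pos - prev) / dt      # PBD velocity update, simple damping

# After simulation, edge lengths should sit close to the rest length.
stretch = [abs(np.linalg.norm(pos[i + 1] - pos[i]) - rest) for i in range(n - 1)]
```

The trade-off the course examines is visible even here: Jacobi projection is trivially parallel but needs under-relaxation and more iterations than a sequential Gauss-Seidel sweep, which is exactly the kind of accuracy-versus-throughput decision a real-time GPU solver must make.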
A consistent workflow is important to fully demonstrate the attractiveness of high-quality CG content, especially when such content uses high dynamic range (HDR) and wide color gamut (WCG) techniques. This course explains a practical workflow useful for both game developers and creators involved in HDR/WCG content.
Even though CG quality has improved significantly over the past years, final output quality is restricted by the luminance and color gamut limitations of conventional output devices such as televisions. Recently, HDR and WCG technology has relaxed these limitations, but outputting high-quality HDR images consistently on each device remains problematic, because consistent interpretation of both hardware behavior and software specifications is difficult. Therefore, it is necessary to carefully establish reliable standards for stable output on various devices.
For that purpose, we need a consistent theory-based approach for each aspect of the workflow (asset collecting and editing, interchangeable formats, encoding, preview environment, verification) and rendering pipeline (lighting, tone mapping, etc.). Using reliable standards enables us to gain robust outputs with high color reproducibility and high dynamic range accuracy.
This course shares a wide range of knowledge, from the basics of color science to the concrete solutions used in the production of Gran Turismo SPORT, a photorealistic racing game with high-quality HDR images. Participants can learn from real-world experience in developing HDR and WCG content.
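To make one of the pipeline stages mentioned above concrete, here is a hedged sketch of tone mapping: compressing scene-referred HDR luminance into display range with the extended Reinhard operator. This is a generic textbook operator, not the one used in Gran Turismo SPORT, and the `white` parameter (the luminance that maps to 1.0) is an invented choice.

```python
import numpy as np

def reinhard_tonemap(luminance, white=4.0):
    """Extended Reinhard operator: map [0, inf) scene luminance to [0, 1].

    Values at `white` map exactly to 1.0; higher values are clipped.
    """
    l = np.asarray(luminance, dtype=np.float64)
    mapped = l * (1.0 + l / (white * white)) / (1.0 + l)
    return np.clip(mapped, 0.0, 1.0)

# Scene-referred values: black, middle gray, unit white, the chosen
# white point, and a bright highlight far above it.
hdr = np.array([0.0, 0.18, 1.0, 4.0, 100.0])
sdr = reinhard_tonemap(hdr)
```

In a full HDR workflow this stage would be followed by the display encoding (e.g. a PQ or gamma transfer function), which is where the consistent standards discussed above matter most.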
Point patterns and stochastic structures lie at the heart of Monte Carlo based numerical integration schemes. Physically based rendering algorithms have largely benefited from these Monte Carlo based schemes, which inherently solve very high-dimensional light transport integrals. However, due to the underlying stochastic nature of the samples, the resultant images are corrupted with noise (unstructured aliasing, or variance). This also results in poor convergence rates that prohibit using these techniques in more interactive environments (e.g. games, virtual reality). With the advent of smart rendering techniques and powerful computing units (CPUs/GPUs), it is now possible to perform physically based rendering at interactive rates. However, much is left to understand regarding the underlying sampling structures and patterns, which are the primary cause of error in rendering.
This course surveys the most recent state-of-the-art frameworks developed to better understand the impact of sample structure on error and its convergence during Monte Carlo integration. It provides best practices and a set of tools for easy integration of such frameworks into sampling decisions in rendering. We revisit stochastic point processes, which offer a unified theory explaining stochastic structures and sampling patterns in a common principled framework. We show how this theory generalizes spectral tools developed over the years to analyze error and convergence rates, and how it allows for the analysis of more complex point patterns with adaptive density and correlations. At the end of the course, the audience will have a comprehensive understanding of both the theoretical and practical aspects of point processes that can guide them in choosing and designing sampling strategies for applications specific to Monte Carlo rendering. A codebase and web application for easy use of the introduced techniques will also be made available at https://github.com/sinbag/SamplingAnalysisWithCorrelations.
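The effect of sample correlations on integration error described above can be demonstrated with a small experiment. This is an illustrative sketch unrelated to the linked codebase: it compares independent uniform samples against jittered (stratified) samples, the simplest correlated point pattern, when integrating a smooth test function whose integral is known.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(np.pi * x) ** 2      # integral over [0, 1] is exactly 0.5
n, trials = 256, 200

def mc_rmse(sampler):
    """Root-mean-square error of the n-sample Monte Carlo estimator."""
    sq_errs = []
    for _ in range(trials):
        x = sampler()
        sq_errs.append((np.mean(f(x)) - 0.5) ** 2)
    return float(np.sqrt(np.mean(sq_errs)))

uniform = lambda: rng.random(n)                          # i.i.d. samples
jittered = lambda: (np.arange(n) + rng.random(n)) / n    # one per stratum

rmse_uniform = mc_rmse(uniform)
rmse_jittered = mc_rmse(jittered)
```

For a smooth integrand, jittering improves the convergence rate, not just the constant, so the gap widens as n grows; this is precisely the kind of structure-versus-error relationship the spectral and point-process tools in the course are designed to analyze.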
We want to start from scratch and explain what USD is and isn't.
Four "Paradigms" of Science
• Empirical Science
• Theoretical Science
• Computational Science
• Data Science