Horizon Forbidden West's graphics are the product of many separate features working together to produce the game's acclaimed visuals. Each feature has to run within strict performance and memory budgets to maintain a reliable framerate. In this presentation we'll break down some of those features and the goals and challenges of each. The audience will get a behind-the-scenes look at some of the real-time systems that drive the rich dynamism of the world of Horizon.
We present a tool that allows users to quickly author character poses. Authoring character poses is typically done by professional artists and is a time-consuming process that involves many manual manipulations. Our tool leverages both machine learning and a physics engine to enable users with no artistic experience to author natural-looking poses in a few seconds.
First, we trained a machine learning (ML) model to predict a full character pose, including individual fingers, from a set of sparse constraints. These constraints allow the user to control the final pose by specifying final joint positions, orientations, or a target that the joints should face. Our ML architecture allows the constraints to be given in any number and order. The model was trained on a large set of motion-capture data so that it predicts natural, realistic human poses.
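The order- and count-invariance of the constraint set can be illustrated with a minimal sketch. This is not the paper's architecture; the per-constraint feature width, hidden size, and mean pooling are illustrative assumptions showing one way a network can accept a variable-size, unordered set of constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights: 7 values per constraint (e.g. a joint index plus a target
# position/orientation encoding) mapped to a 16-dim hidden feature, then
# pooled into a 3-dim code. All sizes are illustrative.
W_in = rng.normal(size=(7, 16)) * 0.1
W_out = rng.normal(size=(16, 3)) * 0.1

def encode_constraints(constraints):
    """Encode a variable-size set of sparse pose constraints.

    Each row is one constraint; mean pooling over rows makes the result
    independent of the order and number of constraints.
    """
    hidden = np.tanh(constraints @ W_in)   # per-constraint features
    return hidden.mean(axis=0) @ W_out     # permutation-invariant pooling
```

In a full model this pooled code would be decoded into joint rotations for the whole skeleton; here we only demonstrate the invariance property.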
Second, we integrated our ML model with a physics solver so that the final pose also respects environmental constraints, such as collisions with other objects. This allows the user to quickly pose a character interacting with the environment, another character, or itself.
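A toy stand-in for this physics step can be sketched as a constraint projection: joints that penetrate an obstacle are pushed back to its surface. The spherical obstacle and the projection-along-the-normal rule are illustrative simplifications, not the actual solver.

```python
import numpy as np

def project_out_of_sphere(joints, center, radius):
    """Push predicted joint positions out of a penetrated sphere obstacle.

    joints : (J, 3) array of joint positions predicted by the ML model
    center : (3,) obstacle center;  radius : obstacle radius
    """
    offset = joints - center
    dist = np.linalg.norm(offset, axis=-1, keepdims=True)
    normal = offset / np.maximum(dist, 1e-9)      # outward surface normal
    inside = dist < radius
    # penetrating joints snap to the surface; the rest are left untouched
    return np.where(inside, center + normal * radius, joints)
```

In practice one would alternate between the ML prediction and such a physics resolve until the pose is both natural and penetration-free.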
Finally, we developed a user-friendly interface to control this tool. We believe that the combination of machine learning and physics lowers the barrier to entry for character animation.
In this paper, we describe a distributed collaborative system for presenting a real-time, live, computer-animated show featuring multiple characters and multiple interactive show elements, each controlled individually and remotely from a different physical location.
We extend our instant NeRF implementation [Müller et al. 2022] to allow training from an incremental stream of images and camera poses provided by a real-time Simultaneous Localization and Mapping (SLAM) system. Camera poses are refined end-to-end by back-propagating the gradients from NeRF training. Reconstruction quality is further improved by compensating for various camera properties, such as rolling shutter, non-linear lens distortion, and the variable exposure typical of digital cameras.
Static scenes can be scanned, the NeRF model trained, and the reconstruction verified in an interactive fashion, in under a minute.
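One of the camera properties mentioned above, non-linear lens distortion, can be sketched with a standard Brown-Conrady radial model. The coefficients `k1`, `k2` and the fixed-point inversion are a generic illustration, assuming two radial terms; they are not the abstract's specific parameterization, in which such per-camera parameters would be refined by back-propagation rather than held fixed.

```python
import numpy as np

def distort(xy, k1, k2):
    """Apply Brown-Conrady radial distortion to normalized image coords."""
    r2 = (xy ** 2).sum(axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d, k1, k2, iters=10):
    """Invert the distortion by fixed-point iteration, so per-pixel ray
    directions can be corrected before they are marched through the NeRF."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = (xy ** 2).sum(axis=-1, keepdims=True)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy
```

The fixed-point form converges quickly for the small coefficients typical of real lenses, which keeps the correction cheap enough for an interactive pipeline.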
The Hyper Drumhead is a novel digital musical instrument that allows for the visualization and manipulation of sound waves. At its core, a GPU-accelerated physical model computes the propagation of sound waves in two dimensions in real time, allowing for the audio-visual simulation of massive domains. Every time the musician touches the surface of the instrument, an excitation signal is injected into the simulation domain, triggering wave propagation in all directions. By drawing boundaries in the domain and modifying the acoustic parameters of the simulated medium, the musician may generate and modulate reflections and resonances, effectively shaping the timbre of the resulting sound. In this short paper, we describe the main components of the instrument, including the control interface and the underlying numerical simulation.
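The kind of simulation described above can be sketched as a leapfrog finite-difference scheme for the 2D wave equation. This is a minimal CPU NumPy sketch, not the instrument's GPU implementation; the grid size, Courant number, and rigid-wall boundary treatment are illustrative assumptions.

```python
import numpy as np

def step_wave_2d(u_prev, u_curr, courant2, walls):
    """One leapfrog FDTD step of the 2D wave equation.

    courant2 = (c * dt / dx)**2, kept below 0.5 for stability in 2D.
    `walls` marks cells the musician has drawn as rigid boundaries.
    """
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
           np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) - 4.0 * u_curr)
    u_next = 2.0 * u_curr - u_prev + courant2 * lap
    u_next[walls] = 0.0   # drawn boundaries clamp the field and reflect waves
    return u_next

n = 64
u_prev = np.zeros((n, n))
u_curr = np.zeros((n, n))
u_curr[n // 2, n // 2] = 1.0          # excitation injected at the touch point
walls = np.zeros((n, n), dtype=bool)
walls[:, 48] = True                   # a boundary line drawn by the musician

for _ in range(20):
    u_prev, u_curr = u_curr, step_wave_2d(u_prev, u_curr, 0.25, walls)
```

Reading out the field at a listening point each step yields the audio signal, while rendering the whole field gives the visualization; changing `courant2` or redrawing `walls` at runtime reshapes the reflections and resonances.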