SA '22: Proceedings of the SIGGRAPH Asia 2022 Real-Time Live!


Shader Park: Live-Coding Interactive & Procedural Shaders with JavaScript

Shader Park is an open-source JavaScript library and community for creating interactive 2D and 3D shaders. It features a live-coding environment that allows the community to share their creations. The library abstracts signed distance fields into a high-level imperative API (similar to Processing or p5.js), making procedural modeling much more accessible and productive. The unification of procedural geometry, animation, and materials into a single program makes prototyping faster and more flexible. A major focus of the project is code literacy and artistic experimentation, making computer graphics programming accessible to artists, designers, students, educators, and everyone else. The library also provides interactive documentation and a number of plugins for WebGL, Three.js, and TouchDesigner. In this demo we will cover the core features of Shader Park and live-code an interactive shader with it.
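
For a flavor of the API, the sketch below mixes geometry, animation, material, and interaction in one small program. It follows the style of Shader Park's published examples; the specific names used here (getSpace, noise, mouse, time, displace, sphere) are assumptions and may differ from the current release.

    // Illustrative sketch only; names follow Shader Park's published examples.
    let s = getSpace();                    // position of the point being evaluated
    let n = noise(s * 4 + time * 0.5);     // animated procedural noise
    color(vec3(0.2, 0.5, 0.9) + n * 0.3);  // material defined in the same program as the geometry
    rotateY(time * 0.3);                   // animation
    displace(mouse.x, mouse.y, 0);         // simple interactivity
    sphere(0.3 + n * 0.1);                 // signed-distance sphere perturbed by noise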

Moments in Nature

Paradise Creek is a small river running between two cities. Realizing that some of the original environment had been altered, we created a story in which herons and humans encounter each other in unexpected ways and the original river space can be rediscovered. The story unfolds as the audience observes the heron in flight. Telling the story of the heron becomes a collaborative effort between a team on stage and audience members in the theater. The participants on stage help find and rescue the red heron. In real time, viewers watch on screen as the interactive story unfolds inside the virtual river.

Real time technologies for Realistic Digital Humans: facial performance and hair simulation

We have identified the world's most extensive parameter space for the human head, based on over 4TB of 4D data acquired from multiple actors. Ziva's proprietary machine-learning processes can apply this data set to any number of secondary 3D heads, enabling them all to perform novel facial expressions in real time while preserving volume and staying within the natural range of human expression. Facial performances can then be augmented and tailored with Ziva expression controls, solving the costly limitations of scalability, realism, artist control, and speed. For this presentation, we will discuss and demonstrate how this innovation can improve the overall quality of RT3D faces for all productions while simplifying and accelerating the production workflow and enabling mass production of high-performance real-time characters. We will then illustrate how performance capture can be decoupled from asset production, enabling actor-nonspecific performance capture, by showing a single performance being applied to multiple faces of varying proportions, so that any performance can run on any head, all at state-of-the-art quality.

We will additionally highlight a new integrated hair solution for authoring, importing, simulating, and rendering strand-based hair in Unity. Built from the ground up with Unity users in mind, and evolved and hardened during the production of Enemies, the hair system is applicable not only to realistic digital humans but also to much more stylized content and games. Using a fast and flexible GPU-based solver that works on both strand and volume information, the system enables users to interactively set up 'Hair Instances' and interact with those instances as they are simulated and rendered in real time. We will concentrate on demonstrating the simulation part of the system, including the strand-based solver, volume-based quantities such as density and pressure, the fully configurable set of constraints, and the level-of-detail support available to artists.
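
As background for the strand-based solver mentioned above, a generic position-based dynamics step for a single strand might look like the following sketch. It is illustrative only (the function and its parameters are assumptions), not the Ziva/Unity hair system's actual API.

    // One simulation step for a single strand: Verlet integration followed by
    // iterative distance-constraint projection so each segment keeps its rest length.
    function stepStrand(positions, prevPositions, restLengths, dt, gravity = -9.81, iterations = 4) {
      // 1) Verlet integration: advance every particle except the pinned root (index 0).
      for (let i = 1; i < positions.length; i++) {
        const p = positions[i];
        const prev = prevPositions[i];
        const vx = p.x - prev.x, vy = p.y - prev.y, vz = p.z - prev.z;
        prevPositions[i] = { x: p.x, y: p.y, z: p.z };
        positions[i] = { x: p.x + vx, y: p.y + vy + gravity * dt * dt, z: p.z + vz };
      }
      // 2) Constraint projection: restore each segment's rest length so the strand
      //    stays approximately inextensible.
      for (let it = 0; it < iterations; it++) {
        for (let i = 0; i < positions.length - 1; i++) {
          const a = positions[i], b = positions[i + 1];
          const dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
          const len = Math.hypot(dx, dy, dz) || 1e-6;
          const diff = (len - restLengths[i]) / len;
          const wa = i === 0 ? 0 : 0.5;     // root particle is pinned to the scalp
          const wb = i === 0 ? 1 : 0.5;
          a.x += dx * diff * wa; a.y += dy * diff * wa; a.z += dz * diff * wa;
          b.x -= dx * diff * wb; b.y -= dy * diff * wb; b.z -= dz * diff * wb;
        }
      }
    }

In a production system like the one described, a step of this kind would run per strand in a GPU compute kernel, alongside the volume-based quantities (density, pressure) and additional constraint types mentioned above.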

ASAP: Auto-generating Storyboard and Previz

We present ASAP, a system that uses virtual humans to Automatically generate Storyboards And Pre-visualized scenes from movie scripts. In our ASAP system, virtual humans play the role of actors. To visualize a screenplay scene, our system parses the movie script text and then automatically generates the following verbal and non-verbal behaviors for the virtual humans: (1) co-speech gestures, (2) facial expressions, and (3) body movements. First, co-speech gestures are created from dialogue paragraphs using a text-to-gesture model trained with 2D videos and 3D motion-captured data. Next, for facial expressions, we interpret the actors' emotions in the parenthetical paragraphs and then adjust the virtual human's facial animation to reflect emotions such as anger and sadness. For body movements, our system extracts action entities from action paragraphs (e.g., subject, target, and action) and then combines sets of animations into animation sequences (e.g., a man's act of sitting on a bed). As its name suggests, ASAP can reduce the amount of time, money, and labor-intensive work needed in the early stages of filmmaking.
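
The action-paragraph step can be pictured with the hypothetical sketch below: (subject, action, target) entities are pulled from a sentence and mapped to animation clips. The pattern and clip names are placeholders, not ASAP's actual implementation.

    // Placeholder clip library; real systems would query an animation database.
    const CLIP_LIBRARY = { sit: 'Anim_SitDown', walk: 'Anim_WalkTo', open: 'Anim_OpenObject' };

    function parseActionSentence(sentence) {
      // naive pattern: "<article> <subject> <verb>s [on|to|toward] [a|the] <target>"
      const match = sentence.match(/^(A|The)\s+(\w+)\s+(\w+?)s\s+(?:on|to|toward)?\s*(?:a|the)?\s*(\w+)/i);
      if (!match) return null;
      const [, , subject, action, target] = match;
      return { subject, action: action.toLowerCase(), target };
    }

    function buildAnimationSequence(actionParagraph) {
      return actionParagraph
        .split('.')
        .map(s => parseActionSentence(s.trim()))
        .filter(Boolean)
        .map(e => ({ actor: e.subject, clip: CLIP_LIBRARY[e.action] ?? 'Anim_Idle', target: e.target }));
    }

    // "A man sits on a bed." -> [{ actor: 'man', clip: 'Anim_SitDown', target: 'bed' }]
    console.log(buildAnimationSequence('A man sits on a bed.'));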

Teleport to the Augmented Real-World with Live Interactive Effects (IFX)

Augmented telepresence provides rich communication for people at a distance through interactive, blended information between the virtual and real worlds [Rhee et al. 2017, 2020; Young et al. 2022]. We push the boundaries of augmented telepresence with a novel live media technology, including live capturing, modeling, blending, and interactive effects (IFX) to augment telepresence. Using our technology, people at a distance can connect and communicate with creative storytelling, augmented with novel IFX.

We achieve this with the following breakthroughs: 1) digitizing remote spaces and people in real time, 2) transmitting the digitized information across a network, and 3) augmenting remote telepresence using real-time visual effects and interactive storytelling, with live blending of 3D virtual assets into the digitized real world.

In this presentation, we will unveil several new technologies and novel IFX that can enrich telepresence, including:

• Real-time 360° RGBD video capturing: we will demonstrate capturing 360° RGBD videos using a 360° RGB camera and LiDAR sensor, including synchronization between the RGB and depth streams as well as depth map generation.

• IFX with live RGBD videos: we will demonstrate real-time blending of 3D virtual objects into the live 360° RGBD videos, showcasing real-time occlusion and collision handling (a minimal occlusion sketch follows this list).

• Six-degrees-of-freedom (6-DoF) tele-movement: we introduce our recent research [Chen et al. 2022] on volumetric environment capture and 6-DoF navigation. We will demonstrate real-time navigation (movement and rotation) in captured real surroundings (beyond room scale).
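
The occlusion handling referenced in the second bullet can be pictured as a per-pixel depth comparison between the captured and rendered scenes. The sketch below is illustrative only; the buffer layout and names are assumptions, not the presenters' implementation.

    // Keep a virtual fragment only where it is closer to the camera than the
    // captured real-world depth at that pixel (both depths along the view ray).
    function compositeFrame(realRGBA, realDepth, virtRGBA, virtDepth, out) {
      for (let i = 0; i < realDepth.length; i++) {
        const hasVirtual = virtRGBA[i * 4 + 3] > 0;          // alpha from the virtual render
        const virtualInFront = virtDepth[i] < realDepth[i];  // LiDAR-derived depth vs. renderer depth
        const src = hasVirtual && virtualInFront ? virtRGBA : realRGBA;
        out[i * 4]     = src[i * 4];
        out[i * 4 + 1] = src[i * 4 + 1];
        out[i * 4 + 2] = src[i * 4 + 2];
        out[i * 4 + 3] = 255;
      }
    }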

We will showcase applications (Figure 1) where we can virtually teleport to and explore within a live stream of the augmented real world and communicate remotely with live IFX.

Dynamic Ocean Explorer: Interactive XR Visualisation of Massive Volumetric Data for Ocean Science

Ocean Explorer is a head-mounted virtual and augmented reality application designed to help researchers quickly visualise and analyse massive volumetric ocean datasets. Using a simple gestural interface, researchers can directly manipulate ocean volumes with their hands, scrub through time, and interrogate the spatial dimensions of ocean phenomena with free-axis clipping.
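
For context, free-axis clipping typically means discarding volume samples on one side of an arbitrary, user-oriented plane. A minimal sketch, with the plane assumed to come from the hand gesture (not the application's actual code):

    // Keep a volume sample only if it lies on the positive side of the clip plane.
    function isSampleVisible(sample, planePoint, planeNormal) {
      const d = (sample.x - planePoint.x) * planeNormal.x
              + (sample.y - planePoint.y) * planeNormal.y
              + (sample.z - planePoint.z) * planeNormal.z;   // signed distance to the plane
      return d >= 0;                                         // samples behind the plane are clipped
    }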

Third Room: Decentralized Virtual Worlds on Matrix

Third Room is a web-based platform for virtual worlds built on top of the open, secure, and decentralized Matrix communication protocol. It uses the latest browser features, such as Atomics, SharedArrayBuffer, and WebGL2, to take full advantage of the hardware and deliver high-quality immersive worlds.
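
As background on the browser primitives named above, the hypothetical sketch below shows the common pattern of a worker writing simulation state into a SharedArrayBuffer while the main thread reads it with Atomics and renders. It illustrates the primitives only, not Third Room's actual architecture.

    // Requires a cross-origin-isolated page for SharedArrayBuffer to be available.

    // main thread
    const shared = new SharedArrayBuffer(1024 * 4);   // space for 1024 int32 values
    const state = new Int32Array(shared);
    const worker = new Worker('sim-worker.js');       // hypothetical worker script
    worker.postMessage(shared);                       // shared memory, not a copy

    function renderLoop() {
      const frame = Atomics.load(state, 0);           // frame counter published by the worker
      // ...read entity transforms from state[1..] for frame `frame` and draw with WebGL2...
      requestAnimationFrame(renderLoop);
    }
    requestAnimationFrame(renderLoop);

    // sim-worker.js
    // onmessage = ({ data }) => {
    //   const state = new Int32Array(data);
    //   setInterval(() => {
    //     // ...write updated entity transforms into state[1..]...
    //     Atomics.add(state, 0, 1);                   // publish the new frame
    //   }, 16);
    // };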

Creating Intimacy and Connection Through Live Theater in VR

We look at how to create an environment of intimacy and trust in an intimidating medium like VR. Using techniques from immersive theater, improv, and the stage, we are able to have audiences from around the world engage in vulnerable discourse and experience emotional journeys and connection in a strictly virtual world.

ICVFX Production with UE5.1

Westworld will showcase some of the ICVFX technology it has used in actual TV series and film production.

Scarecrow XR

Scarecrow XR is a one-of-a-kind immersive XR performance that uses cutting-edge metaverse platforms, motion and facial capture, haptics, and volumetric scanning.

It evolved from a location-based entertainment (LBE) version through metaverse and hybrid versions to fit various presentation conditions. This presentation integrates the haptic algorithm and dance performance developed for K'ARTS/POSTECH's "Ballet Metanique."

AI-driven Live Interactive Cinematic Experience: Aria Studios' Virtual Movie Pipeline

The 'AI-Driven Interactive Cinematic Experience' is a content production system that combines live action with real-time graphics and multiple artificial intelligence solutions to create conversational interactive entertainment. Our goal is to make interactive content in which viewers can interact with the fictional characters to affect the story plot, built on an AI-based story engine, audience intention and language analysis, and real-time visualization of the virtual set space and virtual characters according to the prompts of the story engine.

A viewer-response collection unit that receives voice and text responses generates interactive scenarios of virtual content that affect the branching story plot in real time, enabling natural story interaction and keeping viewers fully engaged, as if they were talking to a movie. The act of viewers changing the plot of the story by communicating with the main character in the drama suggests a new type of media entertainment format.
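
As an illustration of this branching mechanism, the hypothetical sketch below classifies a viewer utterance into an intent and selects the next story node. The intent labels, keyword rules, and scene identifiers are placeholders, not Aria Studios' actual story engine, which uses AI-based language analysis rather than keyword rules.

    // Placeholder story graph: each intent maps to the next scene and its prompt.
    const STORY_GRAPH = {
      confront: { sceneId: 'S12_confrontation', prompt: 'The hero confronts the stranger.' },
      comfort:  { sceneId: 'S13_reconciliation', prompt: 'The hero comforts the stranger.' },
      ignore:   { sceneId: 'S14_departure',      prompt: 'The hero walks away.' },
    };

    function classifyIntent(viewerUtterance) {
      // stand-in for the audience intention / language analysis module
      const text = viewerUtterance.toLowerCase();
      if (/(fight|angry|confront)/.test(text)) return 'confront';
      if (/(hug|help|comfort)/.test(text))     return 'comfort';
      return 'ignore';
    }

    function nextScene(viewerUtterance) {
      const intent = classifyIntent(viewerUtterance);
      return STORY_GRAPH[intent];   // fed to the real-time visualization as a scene prompt
    }

    console.log(nextScene('I think she should comfort him.'));  // -> S13_reconciliation branch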