This course explores OpenUSD’s transformative role in robotics simulation, providing a comprehensive framework for converting Universal Robot Description Format (URDF) assets into modular Universal Scene Description (USD) workflows. Participants learn to bridge industry-standard robotics specifications with OpenUSD’s interoperability features, enabling seamless integration of robotic components like UR10e arms and 2F-140 grippers into physics-accurate training environments. Techniques include joint...
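As a rough illustration of the kind of composition such a workflow enables, the sketch below uses the OpenUSD Python API to reference separately converted robot components onto a single stage. The asset file names are hypothetical placeholders for the outputs of a URDF-to-USD conversion step, not files shipped with the course.

```python
from pxr import Usd, UsdGeom

# Minimal sketch: compose separately converted robot components onto one stage.
# "ur10e.usd" and "robotiq_2f140.usd" are hypothetical placeholders for USD
# layers produced by a URDF-to-USD conversion step.
stage = Usd.Stage.CreateNew("robot_cell.usda")
UsdGeom.Xform.Define(stage, "/World")

arm = UsdGeom.Xform.Define(stage, "/World/UR10e").GetPrim()
arm.GetReferences().AddReference("ur10e.usd")

gripper = UsdGeom.Xform.Define(stage, "/World/UR10e/Gripper").GetPrim()
gripper.GetReferences().AddReference("robotiq_2f140.usd")

stage.GetRootLayer().Save()
```

Because each component stays in its own layer and is pulled in by reference, the arm and gripper assets can be versioned, swapped, or reused across training environments without editing the top-level scene.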
As deep learning has matured into a driver of unprecedented insight, its application has expanded across a wide range of industries and scientific domains. One area of growing interest is the use of data-driven models in digital twin systems, with the Earth emerging as a natural and consequential subject. Earth system modeling is more than a scientific curiosity — it plays a critical role in addressing global safety, resilience, and environmental planning — and NVIDIA’s Earth-2 platform represents one such effort. While the Earth-2 initiative focuses on Earth’s macro-scale physical processes, it draws on techniques that are broadly applicable across domains and provide a strong foundation for those interested in physical system modeling and machine learning in the natural sciences.
This workshop offers a hands-on exploration of the Earth-2 ecosystem, centered on the Earth2Studio Python library for AI-driven global weather forecasting. Participants will progress through three interactive exercises that take them directly into real-world forecasting workflows. Through the workshop, attendees will learn to generate ensemble forecasts, evaluate them with established metrics, and produce detailed, high-resolution predictions tailored to specific regional needs.
The course emphasizes the use of deep learning models and showcases their advantages over traditional Numerical Weather Prediction (NWP) methods. By working through examples like hurricane tracking and heatwave analysis, participants will build foundational knowledge around scalable, accurate, and cost-effective inference solutions. By the end of this workshop, participants will have the necessary tools and skills to apply cutting-edge weather modeling techniques and understand how to structure and validate applications surrounding them.
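As a simple illustration of ensemble evaluation, independent of the Earth2Studio API itself, the following NumPy sketch computes two common summary statistics on synthetic forecast data: ensemble-mean RMSE against a verification field and ensemble spread.

```python
import numpy as np

# Illustrative sketch (not the Earth2Studio API): score an ensemble forecast
# against a verification field with two common summary statistics.
rng = np.random.default_rng(0)
n_members, n_lat, n_lon = 16, 181, 360

truth = rng.normal(size=(n_lat, n_lon))                              # verification field
ensemble = truth + 0.5 * rng.normal(size=(n_members, n_lat, n_lon))  # perturbed members

ens_mean = ensemble.mean(axis=0)
rmse = np.sqrt(np.mean((ens_mean - truth) ** 2))                # ensemble-mean RMSE
spread = np.sqrt(np.mean(ensemble.var(axis=0, ddof=1)))         # ensemble spread

print(f"RMSE: {rmse:.3f}, spread: {spread:.3f}")
```

In a well-calibrated ensemble these two quantities should be comparable, which is one of the checks attendees apply when validating forecast workflows.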
Recent advances in 3D reconstruction and 3D deep learning have introduced new representations such as 3D Gaussian Splatting ([Kerbl et al. 2023]) and its variants ([Du et al. 2024; Gao et al. 2025; Liang et al. 2024; Moenne-Loccoz et al. 2024]). While promising, working with these representations poses practical challenges in rendering, visualization, and interactive simulation. This course introduces researchers and practitioners to the open-source tools built within NVIDIA Kaolin using NVIDIA’s Warp library for effectively working with and simulating 3D Gaussian Splats. We cover foundational features of these libraries and provide a hands-on deep dive into enabling elasto-dynamic physics simulation with contact, directly on these 3D Gaussian Splat objects, as well as other implicit and explicit representations, all in the same scene. Designed for accessibility, the course requires no prior background in Gaussian Splats or physics simulation. Attendees will follow interactive examples in Jupyter notebooks, with dedicated GPU resources provided to each participant. By the end of the course, participants will be equipped to integrate Kaolin and Warp into their workflows, enhancing their research and development capabilities in 3D learning and simulation.
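To give a flavor of the programming model, the sketch below shows a minimal Warp kernel that explicitly integrates particle-like positions on the GPU. It is a stand-in for the elasto-dynamic and contact forces a full simulator would apply to splat centers, not Kaolin's actual solver.

```python
import warp as wp

wp.init()

@wp.kernel
def integrate(x: wp.array(dtype=wp.vec3),
              v: wp.array(dtype=wp.vec3),
              dt: float):
    # Semi-implicit Euler step with gravity only; a placeholder for the
    # elasticity and contact forces a full simulator would evaluate per point.
    tid = wp.tid()
    v[tid] = v[tid] + wp.vec3(0.0, -9.81, 0.0) * dt
    x[tid] = x[tid] + v[tid] * dt

n = 1024
x = wp.zeros(n, dtype=wp.vec3)
v = wp.zeros(n, dtype=wp.vec3)
for _ in range(10):
    wp.launch(integrate, dim=n, inputs=[x, v, 1.0 / 60.0])
```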
This hands-on lab introduces digital twin technology for robotics, focusing on software-in-the-loop testing using OpenUSD, NVIDIA Isaac Sim, and the Robot Operating System (ROS) with a locomotion policy trained by NVIDIA Isaac Lab. As the robotics industry continues to evolve, the need for efficient and cost-effective testing methodologies becomes crucial. Digital twins offer a powerful solution, allowing developers to simulate and validate robotic systems in virtual environments before physical deployment.
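As a small illustration of the software-in-the-loop idea, the following minimal ROS 2 (rclpy) node publishes velocity commands of the kind a test harness might send to a simulated robot. The /cmd_vel topic name is an assumption and should be matched to the actual robot bridge used in the lab.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class VelocityCommander(Node):
    """Publishes a constant forward velocity to a simulated robot."""

    def __init__(self):
        super().__init__("velocity_commander")
        # "/cmd_vel" is an assumed topic name; match it to your robot's bridge.
        self.pub = self.create_publisher(Twist, "/cmd_vel", 10)
        self.timer = self.create_timer(0.1, self.tick)

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.3   # m/s forward
        msg.angular.z = 0.0
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(VelocityCommander())

if __name__ == "__main__":
    main()
```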
This hands-on lab introduces Slang, an open-source, open governance shading language hosted by Khronos that simplifies graphics development across platforms. Designed to tackle the growing complexity of shader code, Slang offers modern programming constructs while maintaining top performance on current GPUs.
Participants will get practical experience with Slang’s stand-out features: cross-platform capabilities for write-once, run-anywhere development, a module system that streamlines code organization, and generics and interfaces that eliminate preprocessor headaches.
We’ll cover integrating Slang with industry tools like RenderDoc and Visual Studio Code, and demonstrate converting existing GLSL and HLSL code to Slang. We’ll also briefly introduce advanced features like automatic differentiation, reflection, and bindless capabilities.
Whether you’re developing games, working in visualization, or researching new rendering techniques, you’ll gain practical skills to immediately improve your graphics workflow and prepare for next-generation rendering challenges.
This lab introduces a framework for developing multi-asset spatial configuration systems on Apple Vision Pro, integrating OpenUSD’s compositional workflows with NVIDIA Omniverse’s real-time simulation capabilities. Participants learn to build gesture-controlled SwiftUI interfaces that dynamically synchronize with physics-aware Omniverse environments, enabling real-time customization of 3D products across categories such as consumer goods and industrial components. The system employs a hybrid renderi...
MeshTorrent is a peer-to-peer platform for automated 3D content creation and exchange, inspired by BitTorrent-style file sharing. By merging AI-based text-to-3D generation with swarm-based distribution, MeshTorrent harnesses the combined bandwidth and storage resources of its users, enabling scalable and decentralized sharing of 3D assets. This paper describes the core design of MeshTorrent, including an AI workflow for generating fresh .glb files, metadata management via a distributed hash table, partial previews for quick inspection, and specialized extensions for 2D sprites (SpriteTorrent) and rigged character models (RigTorrent). Preliminary tests show faster content download times than single-host alternatives, reduced server costs, and robust resilience to network churn, advancing an open ecosystem for collaborative 3D model exchange.
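The paper's implementation is not reproduced here, but the BitTorrent-style mechanism it builds on can be sketched in a few lines: split a generated .glb into fixed-size pieces and hash each one, so peers can verify chunks fetched from untrusted sources before reassembly. The file name below is hypothetical.

```python
import hashlib

def hash_pieces(path: str, piece_size: int = 256 * 1024) -> list[str]:
    """Split an asset into fixed-size pieces and hash each one, BitTorrent-style.

    The piece hashes can be published as swarm metadata (e.g., in a DHT) so
    peers can verify chunks downloaded from untrusted sources.
    """
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(piece_size):
            hashes.append(hashlib.sha1(chunk).hexdigest())
    return hashes

# Example (hypothetical file name):
# print(hash_pieces("generated_asset.glb")[:3])
```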
Develop front-end experiences for the Apple Vision Pro, with NVIDIA Omniverse as the back-end server, to power captivating spatial experiences.
This innovative lab explores the integration of Apple Vision Pro with NVIDIA Omniverse to create immersive industrial digital twin experiences. Participants will learn to develop spatial applications that leverage the power of both platforms, combining Swift and SwiftUI for front-end development with Omniverse’s robust backend capabilities.
The lab guides attendees through setting up the development environment, including configuring Xcode for Vision Pro development and preparing Omniverse as a backend server. Participants will build a SwiftUI-based client application, implementing CloudXR framework components for Omniverse communication. They’ll then establish connectivity between Vision Pro and Omniverse, setting up Omniverse Kit for streaming 3D content.
Interactive features are a key focus, with participants developing a message bus system for client-server communication and creating functions for camera control and animation playback. The lab also covers enhancing user experience by adding metadata viewing capabilities for 3D objects and optimizing performance for real-time interaction.
Through hands-on exercises, attendees will build a functional Vision Pro client that connects to Omniverse, implement interactive features like bay camera selection and animation control, and customize the application to display metadata for selected 3D objects.
This lab represents a significant contribution to the SIGGRAPH community by bridging cutting-edge AR technology with powerful 3D content creation and streaming capabilities.
Flamenco is the Open Source render farm software developed by Blender Studio. It can be used for distributed rendering across multiple machines, but also as a single-machine queue runner. This hands-on class will briefly teach how to install and use it, and then focus on customizing it to your needs. This will cover custom job types, as well as modifications to Flamenco itself.
Spatial user interfaces (UI) and graphics are important components in three-dimensional environments found across many applications, including augmented reality (AR), mixed reality (MR), virtual reality (VR), and video games. As spatial computing enters the mainstream through products like the Apple Vision Pro, the design of intuitive, accessible 3D UI is critical. Creating UI in spatial applications that interact naturally with the physical and digital worlds requires engineering and design considerations that go beyond traditional 2D interfaces.
This course challenges participants to rethink 3D UI design, using machine learning to build intuitive and predictive interfaces, in an ambitiously comprehensive look at the challenges of developing UI/UX for the Apple Vision Pro. We begin with core design principles of 2D and 3D UI before diving deeper into 3D UI, focusing on how depth perception, spatial awareness, and natural gestures work together to create cohesive user experiences and interactions. Then, we apply these concepts to introduce participants to programming and designing with CoreML and SwiftUI. Finally, we examine how leveraging AI/ML enables personalization for individual needs, a key to revolutionizing the accessibility and future of spatial UI.
This lab will cover cloning the Academy Software Foundation (ASWF) Open Shading Language (OSL) repository, installing or building its software dependencies, and building the repository contents, including the testshade, testrender, and osltoy tools. Participants will then write custom OSL shaders, using osltoy to test and render them. A summary of a few OSL source code repositories will conclude the lab.
This paper describes the “Scrapyard Challenge: Classic Arcade Game Controller Redesign Workshop”, an interactive workshop that allows anyone to create a novel controller for classic arcade games using a custom-built interface board and Raspberry Pi [Raspberry Pi, 2025] computers running RetroPie software [Retropie, 2025]. The Scrapyard Challenge workshops are intense sessions, ranging in duration from several hours to a full day, that focus on interface design through the use of found objects and junk materials. Since they began in 2003, the Scrapyard Challenge workshops have been held in 17 countries across five continents: North America, South America, Europe, Australia, and Asia. The workshops began as a way of introducing interface design to novices without the need to learn electronics, by supplying a custom-built interface board into which people could simply “plug” homemade switches and analog controllers to create custom MIDI controllers, wearable devices, or public art installations. As game emulation has become more affordable, moving from standard computers such as Macs and PCs to microcomputers such as the Raspberry Pi, the workshop has evolved with these changes over its 20-year history and now allows participants to build game controllers for classic games and consoles such as Super Mario Brothers (Nintendo Entertainment System), Street Fighter (Multiple Arcade Machine Emulator, or MAME), Crash Bandicoot (PlayStation 1), and many other home arcade systems. This Labs workshop integrates games on up to 20 different systems, spanning 20 years of home gaming and giving participants thousands of potential games to design controllers for.
This hands-on session introduces a practical and accessible method for AI-powered visual generation in projection mapping. Participants will explore how artificial intelligence, real-time computer graphics, and interactive mapping can converge to empower creators—regardless of coding or technical background—to produce immersive visuals.
The class focuses on generating dynamic visual content using AI models trained on diverse artistic styles, and adapting it in real time to various 2D and 3D projection surfaces. Attendees will learn how to use intuitive interfaces to customize visual outputs and control shader-based rendering parameters for optimized performance and fluid interaction.
Technical insights will be shared on how real-time shader optimizations reduce computational load without compromising visual quality. Beyond technical execution, the session will include an open discussion on ethical AI use, transparency in dataset sourcing, and energy-conscious design practices.
Over the course of 90 minutes, participants will:
• Generate AI-based visuals from curated styles (non-real-time, pre-generated)
• Apply and manipulate visuals on physical surfaces using video mapping techniques
• Tune rendering performance with interactive shader controls
• Reflect on responsible and sustainable use of AI in creative workflows
This class transforms a complex creative pipeline into an accessible learning experience, offering concrete skills and critical insights into the future of AI-assisted digital art.
This course explores the creation of real-time interactive graphics on embedded systems, focusing on the integration of ossia score, Puara, and Raspberry Pi. Participants will learn to design and implement dynamic visual experiences using GPU shaders, including the use of Interactive Shader Format (ISF) and techniques for porting shaders from platforms such as Shadertoy. The course will cover mapping physical behaviours, captured through a Raspberry Pi’s inputs, directly to GPU shaders, enabling responsive and interactive visuals. Through hands-on exercises, attendees will create custom mappings using Puara-gestures, a tool for integrating gestural control into real-time graphics. The course will discuss the limitations of embedded systems, such as resolution constraints and the trade-offs at each stage of the rendering pipeline. Practical insights into direct rendering from an application to the GPU, bypassing compositors and display servers, will be provided, offering a comprehensive understanding of optimizing graphics performance on resource-constrained devices. Designed for artists, developers, and researchers, this course bridges the gap between creative expression and technical implementation, empowering participants to push the boundaries of real-time graphics on embedded platforms.
p5.js is a free and open-source JavaScript library for creative coding that prioritizes access. As a “batteries-included” toolkit, p5.js is used worldwide for teaching, performance, installation, collaboration, and experimentation. In this lab, we introduce participants to graphics in p5.js by creating an interactive postcard with a mix of 2D and 3D elements. This walkthrough includes an introduction to code-based animation, parameterized visuals, mouse and touch interactivity, and screen reader support in p5.js. Additionally, we will demonstrate new features in the latest p5.js 2.0 release, including improvements to typography and support for authoring shaders in JavaScript. We use an interactive postcard as an example because it shows how easy it is to bring these different parts of the creative coding toolkit together to create not only sketches, but complete web-based interactive artworks and high-resolution exports. For technical artists and creative technologists, p5.js offers a unique level of variety and control.
This hands-on workshop will focus on rendering Gaussian splats in real time on mobile or standalone VR devices. It is intended as an intermediate course. It will use the UnityGaussianSplatting renderer, first showing how it works, then adding optimizations on top of it to make it performant on mobile GPUs. Attendees should have programming experience, preferably in C# and HLSL, and a basic understanding of the Unity game engine. Attendees should have Unity 6 and Visual Studio Code pre-loaded on their devices if they intend to follow along. They are also encouraged to bring an Android phone or standalone VR device to run the final optimized renderer, but this is not essential.
Generative AI is transforming how we create and interact with 3D content—yet for many developers, integrating state-of-the-art (SOTA) models into existing 3D pipelines remains a significant hurdle. This 90-minute hands-on lab bridges that gap, offering a streamlined theoretical foundation followed by practical, open-source solutions for embedding generative AI into real-time or offline 3D applications.
We focus on two highly extensible workflows:
• Using ComfyUI to convert concept art into 3D assets, accelerating previsualization and asset generation.
• Leveraging video-to-video generative models to produce enhanced renders and motion-driven animation sequences.
Participants will walk away with modular, reproducible pipelines that are adaptable across engines and frameworks. From model discovery to local inference integration, this session provides the tools and insight to deploy generative AI wherever your 3D workflows live, from Blender/Maya to Unity/Unreal and beyond.
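As one hedged example of local inference integration, the sketch below queues a workflow exported from ComfyUI in API format against a locally running server. It assumes ComfyUI's default HTTP endpoint on port 8188, and the workflow file name is a placeholder.

```python
import json
import urllib.request

# Sketch: queue a saved ComfyUI workflow (exported in API format) against a
# local server. Assumes ComfyUI is running with its default HTTP API on port
# 8188; "concept_to_3d_api.json" is a hypothetical exported workflow file.
with open("concept_to_3d_api.json") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # response includes the queued prompt id
```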
This hands-on class focuses on progressively building a simple space traffic system, using proceduralism. The two main building blocks of this system are the spaceship generator and a guided entity simulation. The goal of the system is to achieve believable behavior, making the background of a scene feel alive.
The implementation of the setup makes use of the procedural generation framework Geometry Nodes built into Blender.
While the spaceship system will be used as a practical example, the concepts are more generally applicable to similar scenarios.
No previous experience with the Geometry Nodes system is required, though a basic familiarity with Blender is encouraged to follow along.
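As a small illustration, with hypothetical object and node-group names, the snippet below shows how a Geometry Nodes setup like the one built in class can be attached to a template object from Blender's Python console.

```python
import bpy

# Sketch: attach an existing Geometry Nodes tree to a template object.
# "Spaceship" and "SpaceTrafficSystem" are hypothetical names standing in for
# the generator object and the node group built during the class.
ship = bpy.data.objects["Spaceship"]
modifier = ship.modifiers.new(name="SpaceTraffic", type='NODES')
modifier.node_group = bpy.data.node_groups["SpaceTrafficSystem"]
```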
Climate simulation has become a critical tool for shaping architectural performance in response to rising global temperatures. This Labs hands-on session presents the digital twin workflow behind Hammock Tower, an architectural design for Paris in 2100, which leverages NVIDIA Omniverse, SimScale, Cesium, and Autodesk Forma to inform climate-resilient strategies for a projected +4 °C future. Participants will learn how to use environmental simulation to guide passive cooling strategies, validate design hypotheses, and visualize airflow at both building and urban scales. The session emphasizes translating site-specific data into actionable spatial interventions for sustainable architectural design. Project documentation is available at .
This is an intermediate-level course for attendees to gain a strong understanding of the basic principles of generative AI. The course will help build intuition around several topics with easy-to-understand explanations and examples from some of the prevalent algorithms and models, including autoencoders, CNNs, diffusion models, transformers, and NeRFs.
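To make one of these model families concrete, here is a minimal PyTorch autoencoder sketch: an encoder compresses a flattened image to a small latent code and a decoder reconstructs it, trained with a reconstruction loss.

```python
import torch
from torch import nn

# Minimal autoencoder sketch: compress a flattened 28x28 image to a small
# latent code and reconstruct it.
class AutoEncoder(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 784))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
x = torch.rand(32, 784)                      # a batch of fake flattened images
loss = nn.functional.mse_loss(model(x), x)   # reconstruction objective
loss.backward()
```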
Digital twins increasingly require integrating, viewing, and interacting with ever larger amounts of disparate data. 3D Tiles enables users to visualize geospatial data on a massive scale while balancing performance across devices. 3D Tiles likewise provides a method to aggregate diverse data, such as 3D models, photogrammetry, LiDAR, BIM/CAD models, and more, into an interoperable ecosystem based on open-source standards. Cesium is a leading platform for 3D geospatial data suitable for developers, technical artists, data visualizers, and non-technical stakeholders. Cesium provides users with a number of free tools to convert 3D models into 3D Tiles and to host 3D Tiles for streaming. In this hands-on lab, we’ll leverage the Cesium platform to create global-scale digital twins by creating 3D Tiles, uploading them to a server, and interacting with the data in various runtimes such as the web and game engines.
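As a small, spec-level illustration (not the Cesium tooling itself), the sketch below walks the tile hierarchy of a 3D Tiles tileset.json entry point and collects the content URIs it references.

```python
import json

def list_tile_content(tile: dict, uris: list[str]) -> list[str]:
    """Recursively collect content URIs from a 3D Tiles tile tree."""
    content = tile.get("content")
    if content and "uri" in content:
        uris.append(content["uri"])
    for child in tile.get("children", []):
        list_tile_content(child, uris)
    return uris

# "tileset.json" is the standard entry point of a 3D Tiles tileset.
with open("tileset.json") as f:
    tileset = json.load(f)

print(f"3D Tiles version: {tileset['asset']['version']}")
print(list_tile_content(tileset["root"], []))
```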
This class focuses on how the artist-centered solutions developed at Blender Studio over more than a decade of filmmaking can help non-technical artists work together seamlessly. The only tools required are Blender, Python and Docker.
This class focuses on extending Blender’s functionality through its powerful Python API. Starting from fundamental concepts such as operators, we will gradually increase the complexity of our solution, building up to a small but useful development tool.
Attendees should come prepared with Blender, Python knowledge, and a code editor to follow along and experiment in real time.
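As a minimal example of the operator concept the class starts from, the following sketch registers a simple operator that reports the active object's name; it follows the standard bpy registration pattern.

```python
import bpy

class OBJECT_OT_report_active(bpy.types.Operator):
    """Report the name of the active object (a minimal example operator)."""
    bl_idname = "object.report_active"
    bl_label = "Report Active Object"

    def execute(self, context):
        obj = context.active_object
        name = obj.name if obj else "None"
        self.report({'INFO'}, f"Active object: {name}")
        return {'FINISHED'}

def register():
    bpy.utils.register_class(OBJECT_OT_report_active)

def unregister():
    bpy.utils.unregister_class(OBJECT_OT_report_active)

if __name__ == "__main__":
    register()
```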
Quantum teleportation is a protocol that allows two people, conventionally named Alice and Bob, to transfer (not copy) quantum information. The transfer is performed seemingly instantaneously, no matter how far apart they are. It requires that Alice and Bob each hold one qubit of an entangled pair, and that Alice sends two classical bits to Bob by conventional means.
Note that this is a transfer of information, rather than anything physical. Quantum teleportation is not teleportation as usually seen in science fiction, because we can only send quantum information. There’s no way for Bob to turn that received information into matter, whether it’s a grumpy (but humane) doctor or an exploding warp core.
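To make the protocol concrete, the following self-contained NumPy sketch (not tied to any quantum SDK) simulates the statevector through each step: Alice's Bell-basis measurement, the two classical bits, and Bob's conditional corrections.

```python
import numpy as np

# Qubit order: q0 (Alice's unknown state) ⊗ q1 ⊗ q2, where q1/q2 are the
# shared entangled pair and q2 belongs to Bob.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

alpha, beta = 0.6, 0.8j                     # the unknown state |psi> = a|0> + b|1>
psi = np.array([alpha, beta])
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)                  # |psi> ⊗ (|00> + |11>)/sqrt(2)

# Alice: CNOT(q0 -> q1), then Hadamard on q0 (a Bell-basis measurement circuit).
state = kron(CNOT, I) @ state
state = kron(H, I, I) @ state

# Measure q0 and q1, obtaining the two classical bits m0, m1.
rng = np.random.default_rng()
probs = np.abs(state.reshape(2, 2, 2)) ** 2
p_m = probs.sum(axis=2).ravel()             # P(m0, m1)
outcome = rng.choice(4, p=p_m)
m0, m1 = outcome >> 1, outcome & 1

# Collapse to the measured branch; what remains is Bob's qubit q2.
branch = state.reshape(2, 2, 2)[m0, m1]
bob = branch / np.linalg.norm(branch)

# Bob applies X^m1 then Z^m0 based on the two classical bits he receives.
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

print("measured bits:", m0, m1)
print("Bob's qubit equals |psi>:", np.allclose(bob, psi))
```

Whatever bits Alice measures, the final check prints True: Bob's corrected qubit carries the original state, while Alice's copy is destroyed by the measurement.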
This Lab invites participants to explore how today’s Vulkan ecosystem tools and resources allow everyone to write real-time ray-traced effects with far less boilerplate than in the past. Building on a trimmed version of the open-source Khronos Vulkan Tutorial [Khronos 2025a] and the vk-bootstrap [Giessen 2025] and Vulkan-RAII [Khronos 2025c] helper libraries, we will incrementally assemble a renderer that combines rasterization with ray tracing to implement accurate shadows. The session balances bite-sized coding intervals with short concept discussions, so that every participant leaves with a runnable project and a mental map of where to dig deeper afterward.
Kids (and many adults) often fail to see engineers and scientists as creative in the same sense as artists, writers, and musicians. This is understandable because STEM classes typically teach basic skills and then ask students to apply these to solve known problems to get to the one, correct, known solution. Creativity is explicitly discouraged.
With paper animatronics, we try to break this paradigm, showing kids that technology provides powerful tools for creativity. Students tell the important stories of history, culture, science, or just about any subject via their papercraft which they bring to life with motion and synchronized sound. Their characters literally talk, with mouths moving appropriately as they speak. Kids get that storytelling and papercraft are creative tasks, and they quickly come to see the animatronics parts as simply additional things with which to be creative. Here, the story is the point, with the tech playing an important, but supporting role.
In this paper animatronics workshop, you will make your own storytelling paper robots using our latest easy-to-use, easy-to-afford animatronics kits. Building physical things is surprisingly fun and engages learner creativity in a very different way than screens do. Hopefully, this will spark your interest in bringing these sorts of technology-based, physical storytelling projects to schools in your community.