This course will cover the compact VDB volume data structure and the various tools available in the open-source library OpenVDB. Since its release in 2012, OpenVDB has become an industry standard and has been used in numerous VFX movies released over the past decade. It has also been adopted by most commercial software packages used in the movie industry, including Houdini, RenderMan, Arnold, RealFlow, Clarisse, Guerrilla Render, Maxwell Render, Modo, V-Ray, Octane Render, 3Delight, EmberGen, Blender, Chaos Phoenix, Corona, FumeFX, KeyShot, LightWave, ThinkingParticles, Eddy, Bifrost, Siemens NX, and Unreal Engine, and bindings exist for Mathematica. Recently, OpenVDB has also found use in new fields, including SLAM, autonomous driving, industrial design, 3D printing, medical imaging, rocket design, aerial surveillance, robotics, and many machine learning applications. Finally, OpenVDB was the first open-source project to be adopted by the Academy Software Foundation (ASWF) and the Linux Foundation (in 2018).
Physically based shading has transformed the way we approach production rendering and simplified the lives of artists in the process. By employing shading models that adhere to physical principles, one can readily create high quality, realistic materials that maintain their appearance under a variety of lighting environments, in contrast to the ad hoc models of yore. However, physically based shading is not a solved problem, and thus the aim of this course is to share the latest theory as well as lessons from production.
This three-hour course provides a detailed overview of grid-free Monte Carlo methods for solving partial differential equations (PDEs) based on the walk on spheres (WoS) algorithm, with a special emphasis on problems with high geometric complexity. PDEs are a basic building block of models and algorithms used throughout science, engineering, and visual computing. Yet despite decades of research, conventional PDE solvers struggle to capture the immense geometric complexity of the natural world. A perennial challenge is spatial discretization: traditionally, one must partition the domain into a high-quality volumetric mesh—a process that can be brittle, memory intensive, and difficult to parallelize. WoS makes a radical departure from this approach by reformulating the problem in terms of recursive integral equations that can be estimated using the Monte Carlo method, eliminating the need for spatial discretization. Since these equations strongly resemble those found in light transport theory, one can leverage deep knowledge from Monte Carlo rendering to develop new PDE solvers that share many of its advantages: no meshing, trivial parallelism, and the ability to evaluate the solution at any point without solving a global system of equations.
The course is divided into two parts. Part I will cover the basics of using WoS to solve fundamental PDEs like the Poisson equation. Topics include formulating the solution as an integral equation, generating samples via recursive random walks, and employing accelerated distance and ray intersection queries to efficiently handle complex geometries. Participants will also gain experience setting up demo applications involving data interpolation, heat transfer, and geometric optimization using the open-source “Zombie” library, which implements various grid-free Monte Carlo PDE solvers. Part II will feature a mini-panel of academic and industry contributors covering advanced topics including variance reduction, differentiable and multi-physics simulation, and applications in industrial design and robust geometry processing.
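To make the recursive estimator concrete, here is a minimal walk-on-spheres sketch for the Laplace equation on the unit disk with Dirichlet boundary data g(x, y) = x, whose harmonic extension is simply u(x, y) = x, so the estimate is easy to check. This is an illustrative sketch under our own assumptions, not the Zombie library's API.

```python
import math
import random

def walk_on_spheres(x, y, dist_to_boundary, boundary_value, eps=1e-4, max_steps=1000):
    """One WoS walk: jump to a uniform point on the largest empty sphere
    (here, circle) until within eps of the boundary, then return the
    boundary value at the stopping point."""
    for _ in range(max_steps):
        r = dist_to_boundary(x, y)
        if r < eps:
            break
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)
    return boundary_value(x, y)

def solve(x, y, dist, g, n_walks=20000):
    """Monte Carlo average over independent walks -- no mesh, no global solve."""
    return sum(walk_on_spheres(x, y, dist, g) for _ in range(n_walks)) / n_walks

# Unit disk: distance to the boundary is 1 - |p|; boundary data g(x, y) = x.
dist = lambda x, y: 1.0 - math.hypot(x, y)
g = lambda x, y: x

random.seed(0)
u = solve(0.3, 0.2, dist, g)  # should be close to the exact value 0.3
```

Note how the solution is evaluated at a single query point, independently of all others: this is the trivial parallelism and pointwise evaluation the course description refers to.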
For a beginner, walking into a SIGGRAPH conference can be a very intimidating experience. There is so much to see and do, everyone seems to be speaking an unfamiliar language, and they ooh and ahh over things you don't yet know how to appreciate. This course is for you! Rather than kicking new attendees off the dock and expecting them to swim at SIGGRAPH, the Whirlwind Introduction course's purpose is to give anyone who wants it a basic background in the concepts and terminology needed to get more from the different venues in the conference. It is like a “pre-course” in that it is more fundamental than any other introductory activity and should be attended before anything else in the conference program.
Like a semester-long graduate seminar on Eigenanalysis, Singular Value Decompositions, and Principal Component Analysis in Computer Graphics and Interactive Techniques, this course looks at matrix diagonalization and analysis through the lens of 13 technical papers selected by the lecturers. The lecturers will highlight trends, similarities, differences, and historical threads through the papers. Applications will range from image segmentation, to fast, robust physics simulation, to geometric modeling, to BRDF modeling. Note that we slightly abuse the term Eigenanalysis to include its generalizations, Singular Value Decomposition and Principal Component Analysis, as all three techniques rely on matrix diagonalization.
The course will also serve as a retrospective on the selected papers, placing them in historical context and highlighting significant contributions, as well as forgotten gems.
The lecturers have broad expertise across computer graphics and interactive techniques and have co-led the VANGOGH lab meeting at UMBC since 2015. They delivered a similarly structured course on Mathematical Optimization in Computer Graphics at SIGGRAPH 2024.
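The connection named above—that SVD and PCA both rest on matrix diagonalization—can be shown in a few lines of NumPy. The data here is synthetic and purely illustrative, not drawn from any of the selected papers:

```python
import numpy as np

# PCA of a small 2D point cloud via the SVD: center the data, factor it as
# U @ diag(S) @ Vt, and read the principal directions off the rows of Vt.
rng = np.random.default_rng(0)
points = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

centered = points - points.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# The eigenvalues of the sample covariance matrix are S**2 / (n - 1):
# the same diagonalization, reached without forming the covariance at all.
cov_eigvals = S**2 / (len(points) - 1)
```

Computing the PCA through the SVD of the centered data, rather than by eigendecomposing an explicitly formed covariance matrix, is also the numerically preferred route in practice.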
Work Graphs is a Direct3D 12 feature, added in 2024, that enables GPU compute shaders to dispatch other shaders directly, without CPU intervention. Thus, it can help improve GPU memory usage and runtime speed. In this course, we teach Work Graphs concepts through practical, relevant hands-on examples, with a focus on the non-trivial HLSL Work Graphs additions. After this class, participants should be able to assess when Work Graphs is useful for their tasks and be able to understand, explain, and apply it.
Generative artificial intelligence is taking the world by storm. It is a new technology that enables computers to learn from historical data and generate new content that feels original and human-like. Although generative AI has become a part of everyday life, many people still lack a clear understanding of how it works. Without a solid understanding of generative AI, it can be difficult for individuals to fully embrace this emerging technology. This course offers a comprehensive introduction to the concepts, technologies, and real-world impact of generative AI.
The course begins with a historical overview of artificial intelligence that traces its evolution from early rules-based logic to the deep learning breakthroughs that have enabled today's most powerful generative AI models. Participants will explore foundational concepts that include illustrations of neural network models to gain intuition behind how machines learn to generate text, images, and other types of content. The course delves into practical applications of generative AI across multiple modalities, including natural language, computer vision, audio, and more. Participants will also study high-level system architectures of an AI chatbot, the poster child of generative AI, to understand how these systems operate. Finally, the course will examine future development of generative AI, including the rise of agentic applications that can reason, plan, and interact autonomously.
Building on the content from the previous year’s course, the path of adoption is again the focus of the class. This iteration aims to guide students through more specific and advanced pillars of USD by highlighting some increasingly complex integration scenarios. These include the trends and challenges of adopting USD, innovative solutions for USD structure, as well as how DCCs like Maya are continuing to push USD integration within their architectures.
Using real-world examples, the instructors will present challenges faced in many integrations, common pitfalls that arise, and ultimately the concepts they proposed to solve them. Throughout this course, each presenter will provide insights and detailed learning materials on different aspects of USD, based on recent production experience across different facilities.
This course is not intended to present a definitive, “one-true” approach to using USD; rather, it seeks to provide insightful lessons on how productions are working with its current abilities and constraints, with the aim that these lessons be informative and inspiring to others putting USD into production.
By the end of the course, attendees will have acquired knowledge that will help them make informed decisions on why they would choose one approach over another when confronted with a USD challenge.
Cybersickness remains a persistent challenge in virtual and extended reality (VR/XR), affecting user comfort, adoption, and long-term engagement. This course provides developers, designers, and researchers with science-backed strategies for mitigating cybersickness in immersive environments. Attendees will explore theoretical foundations, practical mitigation techniques, and real-world case studies, including locomotion strategies such as HyperJump, leaning-based navigation, and UI/scene optimizations. Through interactive discussions, live polling, and collaborative exercises, attendees will gain a structured understanding of cybersickness causes, measurement, and prevention. The session will conclude with a structured “Do’s & Don’ts” guide for immediate application in VR/XR projects. Ideal for VR/XR developers, UX designers, and researchers, this course translates scientific insights into practical solutions, equipping attendees with the knowledge and tools to identify, assess, and mitigate cybersickness, thereby enhancing user comfort, engagement, and adoption in their XR projects.
This introductory course will present a systematic, comprehensive framework for thinking about visualization in terms of principles and design choices. It features a unified approach encompassing information visualization techniques for abstract data, scientific visualization techniques for spatial data, and visual analytics techniques for interweaving data transformation and analysis with interactive visual exploration. It emphasizes the careful validation of effectiveness and the consideration of function before form. It breaks down visualization design according to three questions: what data users need to see, why users need to carry out their tasks, and how the visual representations proposed can be constructed and manipulated. It will walk through the use of space and color to visually encode data in a view. Three major data types will be covered: tables, networks, and sampled spatial data. The course will also cover the four major families of strategies for handling complexity: deriving new data to show within a view, interactivity to change a view over time, faceting across multiple views that can be shown side by side, and reducing what is shown within a single view using aggregation and filtering. The emphasis of this course is the space of design choices; algorithms will not be covered. It is suitable for a broad audience, from beginners to more experienced designers. It does not assume any previous experience in programming, mathematics, human–computer interaction, or graphic design. The course format will be lectures interspersed with question and answer sessions.
Quantum computing is a radically new and exciting approach to programming. By exploiting the unusual behavior of quantum objects, this new technology invites us to re-imagine the computer graphics methods we know and love in revolutionary new ways. This course is math-free and requires no technical background.
The Fourier Transform is fundamental to computer graphics, explaining topics from aliasing and sampling to image compression and filtering. This friendly course explains the principles in words, pictures, and animation, rather than math. The concepts are the important thing. We show that they are comprehensible, useful, and beautiful.
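Aliasing, one of the topics named above, can even be demonstrated without pictures in a few lines of code (a hypothetical 1D example, not course material): a 9 Hz sine sampled at 8 Hz produces exactly the same samples as a 1 Hz sine.

```python
import numpy as np

# Aliasing in one dimension: sampling below the Nyquist rate makes a
# 9 Hz sine indistinguishable from a 1 Hz sine at the sample points.
fs = 8.0                      # sampling rate (Hz), too low for a 9 Hz signal
t = np.arange(0, 2, 1 / fs)   # two seconds of samples
high = np.sin(2 * np.pi * 9 * t)
low = np.sin(2 * np.pi * 1 * t)
# identical samples: the 9 Hz signal "wraps around" to 9 - 8 = 1 Hz
```

The two sample sequences agree to floating-point precision, which is exactly why a reconstruction from these samples recovers the wrong (lower) frequency.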
Over the last decade, advanced data-driven sampling algorithms, such as path guiding, have made their way from the scientific realm into production renderers [Vorba et al. 2019]. These algorithms enable the rendering of challenging lighting effects (e.g., complex indirect illumination, caustics, volumetric multi-scattering, and occluded direct illumination from multiple lights), which are crucial for generating high-fidelity images. The fact that these algorithms primarily focus on optimizing local importance sampling decisions makes it possible to integrate them into a path tracer, the de facto standard rendering algorithm used in production today ([Fascione et al. 2019], [Jakob et al. 2019]). The theory behind these algorithms has been presented and discussed on various occasions (e.g., in presentations or research papers), and their practical applications in production have been explored in the previous course on path guiding in production. Nevertheless, the implementation details and challenges associated with integrating them into a production renderer are usually unknown or not publicly discussed. This course aims to provide a deeper understanding of how specific guiding algorithms are integrated into and utilized in various production renderers, including Blender’s Cycles, Chaos’s V-Ray and Corona, SideFX’s Karma, and Disney Animation’s Hyperion [Burley et al. 2018]. The presented algorithms and integrations can be categorized into two main groups: the first aims to guide the entire sampling process by utilizing information about the total light transport of the scene, and the second focuses on guiding specific effects, such as caustics.
Wokeness is a problem in the United States. Specifically, it is argued, Americans are not nearly woke enough. And this lack of wokeness could be our undoing. Widespread social misbeliefs are not just dangerous for the people being misunderstood, they put everyone at risk, undermining public health, education, economic progress, and democracy itself. These distortions can be small and odd (e.g. questionable hygiene choices) or big and dangerous (e.g. vaccine skepticism, gun violence, systemic inequality, rising fascism).
In this context, woke dataviz is not simply a question of ethics or social justice, it is a matter of self-preservation. The way we visualize others can reinforce the harmful misbeliefs that put us all at risk. For example, bar charts of social disparities can subtly misdirect blame and promote harmful stereotypes. On the other hand, more transparent, expressive visualizations can interrupt these biases. Similar effects may be possible in other visual media, such as news photography, casting elves of color in prestige fantasy shows, or Shadowheart’s “jiggle physics.”
This course explores the surprising interplay between visual representation and social cognition. It looks at how visual rhetoric influences perception, how those perceptions support broader social narratives, and how those narratives, in turn, can shape our reality. It also branches out into related visual media, communication research, and political psychology to show how widespread — and unsettling — these effects can be. This course will also be practical, covering frameworks for unpacking the social implications of data design, and techniques for clear, constructive representations of the people and systems around us. Participants will leave with a sharper eye, a few added design tricks, and a justifiably smug attitude toward bar charts.
This course offers an exploration of some features of Vulkan 1.3 and 1.4, equipping developers with the tools and understanding needed to build modern Vulkan renderers. The course is divided into three focused parts that cover the essentials of modern Vulkan, synchronization techniques, and bindless rendering. Participants will gain exposure to real-world applications of modern Vulkan concepts and strategies to implement bindless on mobile architectures.
This course will introduce attendees to foundational concepts and methods in applying computational design and digital fabrication techniques to craft production. Instructors will cover methods from ceramics and textiles. Attendees will learn about 1) craft materials and manual methods for fabricating them, 2) compatible machining methods, and 3) general computational design and optimization techniques for computational fabrication. We will cover general approaches to developing domain-specific computational representations for craft production processes, suitable applications of material simulation and visualization, and methods for enforcing craft-specific constraints in computational design tools without limiting the exercise of craft skills and creative decision-making. In addition, we will introduce computational approaches to dynamically control digital fabrication machine behaviors in ways that align with manual craft production. Overall, we aim to illustrate the connections between methods from graphics research and computational fabrication while providing concrete examples of how the physical realities of craft production require flexible computational methods directly informed by material practice.
This course offers a thorough exploration of the role of randomness in generative AI, leveraging foundational knowledge from statistical physics, stochastic differential equations, and computer graphics. By connecting these disciplines, the course aims to provide participants with a deep understanding of how noise impacts generative modeling and introduce state-of-the-art techniques and applications of noise in AI. First, we revisit the mathematical concepts essential for understanding diffusion and the integral role of noise in diffusion-based generative modeling. In the second part of the course, we introduce the various types of noise studied within the computer graphics community and present their impact on rendering, texture synthesis, and content creation. In the last part, we will look at how different noise correlations and noise schedulers impact the expressive power of image and video generation models. By the end of the course, participants will gain an in-depth understanding of the mathematical constructs behind diffusion models and how noise correlations can play an important role in enhancing the diversity and expressiveness of these models. The audience will also learn to implement the noise models developed in the graphics literature and explore their impact on generative modeling. The course is aimed at students, researchers, and practitioners. All related material for the course can be found at https://diffusion-noise.mpi-inf.mpg.de/.
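As a small taste of the noise-correlation theme, the sketch below builds spatially correlated Gaussian noise by low-pass filtering white noise and renormalizing, so neighboring pixels become correlated while the unit marginal variance that diffusion models assume is preserved. This is an illustrative toy of our own, not one of the noise constructions presented in the course.

```python
import numpy as np

# White Gaussian noise vs. spatially correlated noise made by filtering it.
# Filtering correlates neighbors but shrinks the variance, so we renormalize
# to keep unit marginal variance.
rng = np.random.default_rng(0)
white = rng.standard_normal((256, 256))

# simple 3x3 box filter implemented with periodic shifts
blurred = sum(np.roll(np.roll(white, dy, axis=0), dx, axis=1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
correlated = blurred / blurred.std()  # renormalize to unit variance
```

For a 3x3 box filter the expected correlation between vertical neighbors is 6/9, since their filter footprints share six of nine cells, whereas adjacent white-noise pixels are uncorrelated.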
The past five years have seen rapid growth in techniques that replace physically based shading algorithms with learned neural approximations. These techniques compress data and code to provide compact, high-quality approximations of complex algorithms. Recently, these algorithms have become hardware-acceleratable through cross-platform APIs in Vulkan and Direct3D that expose AI acceleration hardware. Combined with modern differentiable shading languages, there is now a complete development toolchain for building and deploying neural shaders.
The course will take attendees on a theory-to-practice journey through the neural shading space, starting with a survey of the types of techniques and their applications. We will then teach the math behind neural shading, starting small and adding detail from there. After that, the course will cover hardware acceleration concepts and share tips on bringing models from training into a production C++ environment. We close by reviewing a few "full" neural graphics models and discuss how they advance the state-of-the-art in real-time graphics.
After completing this course, participants will understand neural shading fundamentals, and be able to build and deploy hardware-accelerated neural shading models in modern renderers.
This hands-on, three-hour workshop introduces participants to ComfyUI, a visual, node-based interface for generative AI workflows built for Stable Diffusion. As an open-source Python and JavaScript tool, ComfyUI can run both locally and in cloud environments, making it ideal for flexible deployment and experimentation. Through guided exercises and demonstrations, attendees gain practical experience with advanced generative models for image, video, and 3D creation, and learn how to integrate both open and proprietary tools into modular creative pipelines.
Computer graphics, often associated with films, games, and visual effects, has long been a powerful tool for addressing scientific challenges—from its origins in 3D visualization for medical imaging to its role in modern computational modeling and simulation. This course explores the deep and evolving relationship between computer graphics and science, highlighting past achievements, ongoing contributions, and open questions that remain. We show how core methods, such as geometric reasoning and physical modeling, provide inductive biases that help address challenges in both fields, especially in data-scarce settings. To that end, we aim to reframe graphics as a modeling language for science by bridging vocabulary gaps between the two communities. Designed for both newcomers and experts, Graphics4Science invites the graphics community to engage with science, tackle high-impact problems where graphics expertise can make a difference, and contribute to the future of scientific discovery. Additional details are available on the course website: https://graphics4science.github.io.
Level-of-detail (LOD) is a concept we intuitively experience in everyday life—whether it is how our brains filter visual information or how digital maps adjust as we zoom. At its core, LOD is about allocating detail where it matters most, striking a balance between efficiency and precision. While originally developed to accelerate rendering in computer graphics, modern LOD techniques have evolved into a powerful framework for constructing hierarchical shape representations and designing multilevel solvers in geometry processing and simulation.
This course begins by highlighting the role of LOD not just in visualization, but also in numerical computation. We introduce key methods for constructing hierarchical structures, then dive into the design of multilevel solvers—focusing on how to transfer quantities and signals across levels, with emphasis on surface self-parametrization and intrinsic prolongation. We conclude with applications that showcase how these hierarchical frameworks enable efficient, accurate and scalable solutions to problems in geometry processing and physical simulation.
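The transfer of quantities across levels can be made concrete with a toy two-grid solver for the 1D Poisson problem -u'' = f: pre-smooth with weighted Jacobi, restrict the residual to a coarse grid, solve there, prolong the correction back, and post-smooth. This uniform-grid sketch is our own illustration; the course treats surface hierarchies and intrinsic prolongation, which this does not attempt.

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2/3):
    """Weighted Jacobi smoothing for (2u_i - u_{i-1} - u_{i+1})/h^2 = f_i."""
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1] - 2*u[1:-1])
    return u

def prolong(uc):   # coarse -> fine: linear interpolation
    uf = np.zeros(2 * (len(uc) - 1) + 1)
    uf[::2] = uc
    uf[1::2] = 0.5 * (uc[:-1] + uc[1:])
    return uf

def restrict(rf):  # fine -> coarse: full weighting
    rc = np.zeros((len(rf) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * rf[1:-2:2] + 0.5 * rf[2:-1:2] + 0.25 * rf[3::2]
    return rc

n = 64                               # fine grid: n + 1 points on [0, 1]
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)     # exact solution: u = sin(pi x)
u = np.zeros(n + 1)
for _ in range(20):                  # two-grid cycles
    u = jacobi(u, f, h, sweeps=3)    # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
    rc = restrict(r)                 # move residual down a level
    m = len(rc) - 2                  # coarse interior size
    A = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / (2*h)**2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])[:, ]
    u += prolong(ec)                 # move correction up a level
    u = jacobi(u, f, h, sweeps=3)    # post-smooth

err = float(np.max(np.abs(u - np.sin(np.pi * x))))
```

The smoother removes high-frequency error cheaply on the fine grid, while the coarse solve handles the smooth component that Jacobi alone reduces very slowly; the remaining error is the O(h^2) discretization error of the fine grid.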
This course starts with the challenges of VR graphics, then introduces many optimization techniques that tackle those challenges from different angles. All of the techniques presented have been used in on-market VR devices. For each, we explain how it works and what benefit it brings to the user experience of VR graphics. The course concludes with a wrap-up of the content, together with further challenges that are not yet well addressed. The goal of this course is to improve mutual understanding between academia and industry, introduce the challenges of VR graphics and the state-of-the-art optimization techniques, and, finally, inspire and attract more research in this area in the future.