Esports poses a unique challenge for rendering research, with players regularly turning off even basic rendering techniques to reduce latency. In this workshop, three esports developers and three competing esports athletes will form an expert panel on esports rendering needs. The workshop will have three parts: a traditional panel session, with questions from a moderator and from the panel itself; an audience discussion session, with groups led by organizers and panel members producing questions and raising issues; and a closing panel session, with the panel addressing the questions raised by the audience.
XR technologies are beginning to reshape surgical navigation, rehabilitation, and education, but real-world adoption remains limited. This workshop and paper explore XR's translational challenges through a clinician-centered lens. Grounded in our experience developing an intraoperative XR system, we advocate for a biodesign-first approach. We highlight where XR succeeds, where it fails, and why deep clinical integration, not necessarily technical novelty, drives real impact. The workshop brings together clinicians, engineers, and researchers to examine real clinical use cases in robotic surgery, physical therapy, training, and remote care. Through breakout discussions, we'll critically assess where XR genuinely improves outcomes, and where it does not. This session is especially relevant to developers seeking real-world constraints, researchers looking to ground their tools in practice, and clinicians interested in shaping the next generation of medical interfaces. Attendees will leave with not only new collaborations but also a practical framework for translational XR: one that builds not for XR's sake, but for patient impact.
Human creativity has never been more challenged: with the advent of AI-based storytelling and creative tools, new forms of computational creativity are emerging (Lee 2022; Boden 1998), giving rise to rapid advances across animation, storytelling, and computer graphics. Storyboards can now be created within seconds through AI-based platforms, animations are prompted into existence, and image inputs allow digital doubles to take the lead in feature films. However, risks persist around authorship, accreditation, and royalties, as well as authenticity, individual human expression, and handcrafted digital artistry. Following brief presentations, this workshop invites participants to brainstorm their own responses to a rapidly evolving field. As a global forum for research excellence in computer graphics, ACM SIGGRAPH's Frontiers workshop brings together experts from diverse fields, providing a unique opportunity for public discourse around the ethics, challenges, and new realities the future of creativity entails.
A creator's play is never done. Our Hybrid Dance Xplorations workshop invites you to play with us as we adventurously explore virtual camera control, motion capture, generative AI, touchless or gesture-based interaction, and potentially clothing/costume design and simulation, in evolving configurations of our XR/AI sandbox for co-creation and hybrid performance. We will share previous and present work, including three use/play cases for movement-based experiences with contemporary dance and salsa (solo, pairs, and rueda). Presenters include local artists, researchers, and technologists. A highlight will be engaging group activities in which workshop attendees contribute to the ideas, laughter, and ambitions gathered to date.
Human communication has always evolved alongside the technologies that mediate it. The written word itself was once a revolutionary technology, enabling documentation and knowledge sharing on an unprecedented scale. Each invention spawns new communication norms, with the printing press, telegraph, telephone, text message, video call, and VR each inducing new linguistic patterns. Nowadays, with instantaneous communication just a tap or click away, long-form letters and emails are unusual, while shorter, more frequent communication has become the expectation. Emojis and GIFs allow us to carefully craft what our reactions communicate, bypassing body language cues in some digital spaces.
This 90-minute interactive workshop explores how design choices shape the use of emerging technology, and how that usage en masse may influence the future of communication while bridging past and future norms. Led by Ketki, a human-centered designer specializing in mixed reality, and Aubrie, a linguist and Responsible AI researcher, the session invites participants into a speculative dialogue on how these tools are reshaping the ways we connect, express ourselves, and understand each other.
Gaussian splats are a rapidly emerging method for the fast and efficient creation of photorealistic 3D visualizations and are particularly suitable for real-time applications. Today, a growing number of software solutions support the capture, visualization, editing, and compression of Gaussian splats. However, as different companies and organizations adopt varying formats, ecosystem fragmentation and a lack of interoperability become a real possibility. Timing is everything in standardization: premature efforts risk stifling innovation, while waiting too long can lead to fragmented proprietary technologies. The sweet spot is when a technology's utility is proven and the absence of standards starts to hinder adoption and growth.
By fostering collaboration among users, tool developers, and engine vendors, this workshop seeks to discuss the state of current technologies, formats, and use cases to guide the evolution of Gaussian splat interoperability, helping remove pain points, drive adoption, and prevent fragmentation.
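For readers less familiar with the data involved, the minimal sketch below (in Python, with illustrative field names rather than any particular format's schema) lists the per-splat attributes that most current Gaussian splat pipelines store and that any interchange format would need to represent, along with the usual way the anisotropic covariance is reconstructed from rotation and scale.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianSplat:
    """Illustrative per-splat record; field names are not taken from any standard."""
    position: np.ndarray   # (3,) world-space center of the Gaussian
    rotation: np.ndarray   # (4,) unit quaternion (w, x, y, z) orienting the covariance
    scale: np.ndarray      # (3,) per-axis extents (often stored in log space)
    opacity: float         # scalar alpha (often stored pre-activation)
    sh_coeffs: np.ndarray  # (k, 3) spherical-harmonics coefficients for view-dependent color

def covariance(splat: GaussianSplat) -> np.ndarray:
    """Reconstruct the 3x3 covariance Sigma = R S S^T R^T from rotation and scale."""
    w, x, y, z = splat.rotation
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    S = np.diag(splat.scale)
    return R @ S @ S.T @ R.T
```

Interchange questions largely concern how these same attributes are quantized, compressed, and laid out on disk, which is where formats currently diverge.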
The intersection of technology and healthcare offers rich opportunities for rethinking research, industry, and education in computer graphics and interactive techniques. This article presents the SIGGRAPH for Health workshop, which brings together diverse contributions from academia, medical practice, and art. These contributions highlight the complex and interdisciplinary nature of the subject and provide exciting precedents and inspiration for the future of the conference.
This course equips professionals with the Onboarding Generative AI Canvas, a structured framework designed to bridge the gap between the theoretical benefits of AI and its practical implementation. It addresses the critical need for actionable strategies by guiding individuals and teams to assess organizational readiness, mitigate risks (e.g., bias, privacy, workforce dynamics), and align generative AI tools with their specific workflows. Participants will walk away with a customized roadmap for integrating AI responsibly, a risk-aware mindset for navigating ethical and operational challenges, and the collaborative fluency to enable cross-departmental alignment, opening further opportunities for AI adoption without compromising long-term organizational goals or accountability.
This talk will showcase unprecedentedly natural behaviors in dynamic tasks performed on the Boston Dynamics Atlas humanoid robot, marking a major advance in closing the gap between human characters in graphics and physical humanoids in robotics. In this work, we employed a bottom-up approach to facilitate physical intelligence. Our primary focus was on streamlining the zero-shot sim-to-real transfer of policies trained to mimic stylized kinematic motions, either captured from humans or designed by animators. By carefully selecting components of our Reinforcement Learning (RL) framework and automating the deployment process on the hardware, we achieved high-quality motions while minimizing excessive domain randomization and avoiding the need for complicated reward shaping.
While this work focuses on embodied physical intelligence, the progress in the broader field of Artificial Intelligence (AI) has largely been driven by advances in perception and cognitive intelligence. The human cognitive imprint on the Internet has fueled the rise of Foundation Models, such as LLMs and VLMs, bringing us closer than ever to the holy grail of AI: creating an agent that understands the world and acts accordingly. Yet, despite these remarkable advances in the virtual realm, the "act" component of AI continues to lag behind. Embodied intelligence has proven challenging, as the physical agent must grapple with the complexity, uncertainty, and variability of the real world. While it is tempting to apply the same methodologies that revolutionized virtual domains, namely scaling up models and datasets, a crucial difference remains: unlike the rich, structured, and abundant data on the Internet, human motion data is sparse, incomplete, and typically lacks explicit action labels. To address this challenge, some researchers have turned to teleoperation to collect in-morphology motion data with action labels, leveraging pre-trained foundation models to bootstrap the training process.
Rather than immediately training a generalist model capable of performing a wide variety of tasks, our first milestone focused on transferring single-task policies trained in simulation to hardware in a zero-shot manner. Furthermore, we used only a motion dataset with state transitions and no action labels, avoiding reliance on teleoperation data. We drew upon data collected from humans through motion capture, videos, or animation. To maintain the motion style during deployment on physical hardware, we minimized excessive domain randomization, as it could compromise the preservation of subtle motion details on the hardware.
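As a rough illustration of how imitation can be driven by state-only reference data, the sketch below shows a tracking-style reward that compares the simulated state against a reference frame; no action labels are required. The weights, scales, and state layout (floating base occupying the first degrees of freedom) are illustrative assumptions, not the reward used in this work.

```python
import numpy as np

def tracking_reward(sim_qpos, sim_qvel, ref_qpos, ref_qvel,
                    w_pose=0.6, w_vel=0.2, w_root=0.2):
    """Illustrative state-only imitation reward: only reference states are
    compared against the simulated state, so no action labels are needed.
    Weights, scales, and dof layout are placeholder assumptions."""
    # Joint-pose term: exponential of negative squared joint-angle error
    # (the first 7 position dofs are assumed to be the floating base).
    pose_err = np.sum((sim_qpos[7:] - ref_qpos[7:]) ** 2)
    r_pose = np.exp(-2.0 * pose_err)

    # Joint-velocity term: encourages matching the reference dynamics.
    vel_err = np.sum((sim_qvel[6:] - ref_qvel[6:]) ** 2)
    r_vel = np.exp(-0.1 * vel_err)

    # Root-position term: keeps the character from drifting off the reference path.
    root_err = np.sum((sim_qpos[:3] - ref_qpos[:3]) ** 2)
    r_root = np.exp(-10.0 * root_err)

    return w_pose * r_pose + w_vel * r_vel + w_root * r_root
```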
Building on this foundation, we developed an automated pipeline for processing motion data and performing zero-shot sim-to-real transfer using RL, minimizing the need for human intervention. We then expanded this framework to support multi-task policies that can generalize across various behaviors. To synthesize human-like motions from high-level operator commands, we trained motion generation models using Diffusion Transformers along with motion data that we collected ourselves. The trained motion generation model is used during both training and inference to provide in-context motion references for the RL policy.
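One way to picture how a motion generator can provide in-context references is to append a short window of upcoming reference frames to the policy's observation, so the same tracking policy can follow whatever the generator produces at training or inference time. The sketch below illustrates that pattern; the shapes, stride, and horizon are assumptions for illustration and do not describe the interface used on Atlas.

```python
import numpy as np

def build_policy_observation(proprio, ref_motion, t, horizon=4, stride=5):
    """Illustrative reference-conditioned observation: the policy sees its own
    proprioceptive state plus a few upcoming frames of the reference motion.
    Shapes, horizon, and stride are placeholder assumptions.

    proprio:    (p,) proprioceptive state at the current step
    ref_motion: (T, d) reference motion produced by the generator
    t:          current time index into the reference clip
    """
    T = ref_motion.shape[0]
    # Sample future reference frames at fixed strides, clamped to the clip end.
    idx = np.clip(t + stride * np.arange(1, horizon + 1), 0, T - 1)
    future_refs = ref_motion[idx].reshape(-1)       # flatten (horizon, d) -> (horizon*d,)
    return np.concatenate([proprio, future_refs])   # single flat observation vector
```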
While this work focuses on bridging the gap at the level of embodied physical intelligence, full cognitive integration remains a broader challenge, and progress will come from converging bottom-up and top-down approaches.
We are in an era in which documenting and reflecting human identity and lived experience is being reframed through the lens of extended reality (XR) and artificial intelligence (AI), technologies that, when combined, will challenge many assumptions we hold. To glimpse this future, we draw on the curation of interactive and immersive projects featured at the Signals Creative Tech Expo and XR Lab from 2022 to 2025, learning from how creative technologists use immersive and generative artificial intelligence technologies to interrogate our senses, emotions, memories, identity, agency, surveillance, and interconnectedness across physical and virtual domains.
Machine learning (ML) for Earth Observation (EO) data is revolutionizing the speed and scope at which science and policy can operate — filling critical data gaps across fields such as ecology and development economics. Helping fuel this progress is a class of ML for EO models that distill global satellite data into compact, multi-purpose representations of the Earth. These “Earth embedding” models include image embeddings designed to capture the unique characteristics of satellite imagery and an emerging class of location encoders that serve as implicit neural representations of Earth’s data. These models are already unlocking new use cases and capabilities, with much research yet to be explored.
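To make the idea of a location encoder concrete, here is a minimal, illustrative sketch (not a reconstruction of any published model): longitude and latitude are expanded into sinusoidal features at several frequencies and passed through a small MLP to produce an embedding that downstream tasks can consume.

```python
import numpy as np

def positional_features(lon_deg, lat_deg, num_freqs=8):
    """Sinusoidal features of longitude/latitude at multiple frequencies;
    a common ingredient of implicit neural representations of geolocation."""
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    feats = []
    for k in range(num_freqs):
        f = 2.0 ** k
        feats += [np.sin(f * lon), np.cos(f * lon),
                  np.sin(f * lat), np.cos(f * lat)]
    return np.array(feats)

class TinyLocationEncoder:
    """Toy MLP mapping positional features to an 'Earth embedding'.
    Architecture and sizes are illustrative assumptions."""
    def __init__(self, in_dim, hidden=64, out_dim=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, out_dim))
        self.b2 = np.zeros(out_dim)

    def __call__(self, feats):
        h = np.maximum(0.0, feats @ self.W1 + self.b1)   # ReLU hidden layer
        return h @ self.W2 + self.b2                     # embedding vector

# Example: embed a single location (coordinates chosen arbitrarily).
feats = positional_features(lon_deg=-122.33, lat_deg=47.61)
encoder = TinyLocationEncoder(in_dim=feats.size)
embedding = encoder(feats)   # 16-dim vector usable as input to downstream tasks
```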
Display systems are undergoing a profound transformation—from fixed, planar rectangles to dynamic, spatially integrated, and immersive environments. This talk surveys the evolving landscape of display technologies and how they are reshaping the visual computing pipeline. Rather than focusing on individual devices or formats, we examine broader shifts that impact rendering, interaction, perception, and content design. Across domains—from personal headsets to public installations—emerging display modalities are challenging long-held assumptions about how images are created, presented, and experienced. The discussion is organized into five key areas: Augmented Reality, Virtual Reality, High Dynamic Range, Stereoscopic Displays, and Immersive Installations.
Legged robots, particularly humanoids, represent an emerging technology whose widespread acceptance depends on their ability to perform meaningful tasks at human cadence in the real world. These systems are inherently complex, and unlocking their full potential requires control strategies that coordinate whole-body motions. These complexities pose significant challenges for current motion control methods, underscoring the need for innovative approaches that extend beyond existing frameworks. Similar to how text data catalyzed progress in natural language processing, motion data from humans and animals holds the potential to usher in a new era of physical intelligence. Yet, unlike abundant and well-structured text corpora, human motion data is sparse, often incomplete, and typically lacks action labels, which limits the direct application of supervised learning techniques. To bridge this gap, researchers have in recent years integrated imitation learning with structured simulation-based methods such as reinforcement learning and trajectory optimization, an approach that shows promise in generalizing natural behaviors to robotic systems. At the same time, transferring behaviors from simulation to physical hardware has taken significant strides, particularly for specialized policies. These advancements have been primarily fueled by improvements in robotic hardware, physics simulations, and techniques for sim-to-real transfer. As a result, the field of robotics has evolved considerably. Today, robotics is shifting its focus from specialized policies toward broader challenges such as generalization, high-level control, and behavior sequencing, all while continuing efforts to bridge the gap between simulation and reality. This workshop gathers experts in 3D human/animal modeling, motion generation algorithms, and robotics to review recent progress and chart the next steps for advancing the field. It will serve as a unique forum where the latest advances in computer graphics, particularly in controlling simulated characters, intersect with cutting-edge developments in robotics, from hardware innovations to sim-to-real transfer. The workshop aims to drive innovation in these fields by encouraging deeper collaboration across disciplines and utilizing their complementary strengths.
This workshop aims to explore the evolution of subspace methods in physical simulation, tracing their origins from classical engineering formulations to cutting-edge neural techniques. By gathering leading researchers, students, and practitioners, the session will serve as a platform for cross-disciplinary dialogue, education, and community building around model reduction techniques in graphics and simulation.
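As a concrete, classical starting point for that discussion (an illustrative sketch, not a specific method from the workshop), the code below builds a linear subspace from displacement snapshots via the SVD and advances reduced linear elastodynamics in that basis; neural techniques replace or augment the basis and the reduced dynamics, but the overall structure is similar.

```python
import numpy as np

def build_subspace(snapshots, r):
    """POD/PCA-style basis: snapshots is (n_dofs, n_snapshots).
    Returns U with the r dominant left singular vectors as columns."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def reduced_step(q, q_dot, U, M, K, f_ext, dt):
    """One semi-implicit Euler step of linear elastodynamics in the subspace.
    q, q_dot are reduced coordinates (r,); M, K are full-space mass/stiffness.
    Reduced matrices are rebuilt here only to keep the sketch self-contained;
    in practice they are projected once, offline."""
    M_r = U.T @ M @ U          # reduced mass matrix (r x r)
    K_r = U.T @ K @ U          # reduced stiffness matrix (r x r)
    f_r = U.T @ f_ext          # reduced external force (r,)
    q_ddot = np.linalg.solve(M_r, f_r - K_r @ q)
    q_dot = q_dot + dt * q_ddot
    q = q + dt * q_dot
    return q, q_dot

# The full-space displacement is recovered as x = U @ q.
```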
This workshop aims to democratize 3D content creation, covering both static and dynamic content, by exploring recent advances in fast 3D reconstruction from real-world inputs such as videos and images. By focusing on object and scene reconstruction, volumetric video reconstruction, and generative 3D content creation, we bring together researchers and practitioners to discuss scalable pipelines that lower the barrier to immersive content creation. A key feature of this workshop is a hands-on demonstration: participants will experience 1) real-time streaming and live broadcasting of volumetric content, and 2) AIGC-generated VR content on VR headsets. The workshop seeks to bridge the gap between cutting-edge reconstruction research and real-world VR applications.