SA '19: SIGGRAPH Asia 2019 Courses


Immersive analytics

Welcome and Overview

Visualisation and Visual Analytics

Introduction to Immersive Analytics

Computing Beyond the Desktop


Multithreading in Pixar's animation tools

Why is multithreading important?

Trends in modern hardware

Stagnant clock rates

Increasing core counts

Special purpose hardware

New paradigms and patterns: Some non-intuitive
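The trends above are why multithreading matters: with stagnant clock rates, throughput must come from using more cores. As a purely illustrative sketch (not Pixar's code; the function names are hypothetical), here is the basic pattern of partitioning a frame's work across a pool of workers:

```python
# Illustrative sketch: split per-pixel work across worker threads.
# Note: in CPython the GIL limits CPU-bound threads, so production
# animation tools do this in C++ or with process pools instead.
from concurrent.futures import ThreadPoolExecutor

def shade_span(pixels):
    # Stand-in for per-pixel work (hypothetical shading function).
    return [p * 2 for p in pixels]

def shade_frame(pixels, workers=4):
    # Partition the frame into roughly equal chunks, one per worker.
    n = max(1, len(pixels) // workers)
    chunks = [pixels[i:i + n] for i in range(0, len(pixels), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves chunk order, so the frame reassembles correctly.
        results = pool.map(shade_span, chunks)
    return [p for chunk in results for p in chunk]
```

The key design point is that the partitioning is independent of the worker count doing the shading, which is what lets the same code scale as core counts increase.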

Cinematic scientific visualization: the art of communicating science

The Advanced Visualization Lab (AVL) is part of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. The AVL is led by Professor Donna Cox, who coined the term "Renaissance Team" in the belief that bringing together specialists of diverse backgrounds creates a team greater than the sum of its parts; the members of the AVL reflect that interdisciplinarity. We specialize in creating high-quality cinematic scientific visualizations of supercomputer simulations for public outreach.

A whirlwind introduction to computer graphics

Provide a background for the amazing things you will see at SIGGRAPH Asia 2019

Create an understanding of common computer graphics vocabulary

Help you understand the significance of the images and animations that you will see

Provide references for further study

Introduction to the Vulkan® computer graphics API

Vulkan is better at keeping the GPU busy than OpenGL is: OpenGL drivers must do a great deal of CPU work before handing work off to the GPU, whereas Vulkan shifts that preparation to the application, which can do it ahead of time. The result is that Vulkan lets you get more performance from the GPU you already have.

Extended reality practice in art & design creative education

As Jerald (2018) states, though virtual reality (VR) has existed for over 50 years, its use as a creative medium is relatively new. In the last four years, as part of the 'second wave of VR', the new affordability and accessibility of hardware and software for experiencing and creating VR have incited a surge of interest in the technology from the creative industries. Meanwhile, interest in and attempts to create VR projects have expanded into other forms of Extended Reality (XR) technology, such as Augmented Reality (AR) and Mixed Reality (MR).

As a group of educators and practitioners from creative disciplines, our focus is to create a fundamentals-of-XR education curriculum for undergraduates and/or postgraduates in schools of art and design who have little or no coding or software background.

We believe guiding students to approach VR as a creative medium is increasingly important. We also introduce the XR-ED Group (sponsored by the ACM SIGGRAPH Educators Forum), a collective of educators and practitioners interested in creating an XR curriculum and in sharing the work of students. The group first ran as a co-located event with VRCAI 2018 / SIGGRAPH Asia 2018 in Tokyo, and this year will run in Brisbane as a co-located event with VRCAI 2019 / SIGGRAPH Asia 2019.

State of the art on stylized fabrication

Digital fabrication devices are powerful tools for creating tangible reproductions of 3D digital models. Most available printing technologies aim at producing an accurate copy of a three-dimensional shape. However, fabrication technologies can also be used to create a stylistic representation of a digital shape. We refer to this class of methods as stylized fabrication methods. These methods abstract geometric and physical features of a given shape to create an unconventional representation, to produce an optical illusion, or to devise a particular interaction with the fabricated model. In this course, we classify and overview this broad and emerging class of approaches and also propose possible directions for future research.

Deep learning: a crash course

The topics for Part 1 of 4.

BxDF material acquisition, representation, and rendering for VR and design

Photorealistic, physically-based rendering of real-world environments with high-fidelity materials is important to a range of applications, including special effects, architectural modelling, cultural heritage, computer games, automotive design, and virtual reality (VR). Our perception of the world depends on lighting and on surface material characteristics, which determine how light is reflected, scattered, and absorbed. To reproduce appearance, we must therefore understand all the ways objects interact with light, and the acquisition and representation of materials has thus been an important part of computer graphics from its early days. Nevertheless, no material model or acquisition setup is without limitations in the variety of materials it can represent, and different approaches vary widely in compatibility and ease of use.

In this course, we describe the state of the art in material appearance acquisition and modelling, ranging from mathematical BSDFs to data-driven capture and representation of anisotropic materials, and volumetric/thread models for patterned fabrics. We further address the problem of material appearance constancy across different rendering platforms. We present two case studies in architectural and interior design. The first study demonstrates Yulio, a new platform for the creation, delivery, and visualization of acquired material models and reverse engineered cloth models in immersive VR experiences. The second study shows an end-to-end process of capture and data-driven BSDF representation using the physically-based Radiance system for lighting simulation and rendering.
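As a minimal illustration of the BSDF idea discussed above (our own sketch, not one of the course's models): the simplest physically-based BRDF is the Lambertian diffuse model, which reflects light equally in all directions and is evaluated together with the cosine term of the rendering equation.

```python
import math

def lambertian_brdf(albedo):
    # Ideal diffuse reflection: the BRDF is constant over the hemisphere.
    # Dividing by pi keeps the surface energy-conserving, since
    # integrating f_r * cos(theta) over the hemisphere yields the albedo.
    return albedo / math.pi

def reflected_radiance(albedo, light_radiance, cos_theta):
    # Single-light evaluation of the rendering-equation integrand:
    #   L_o = f_r * L_i * max(cos(theta), 0)
    # cos_theta is the cosine between surface normal and light direction;
    # the clamp zeroes out light arriving from below the surface.
    return lambertian_brdf(albedo) * light_radiance * max(cos_theta, 0.0)
```

The data-driven and volumetric models covered in the course replace this constant `f_r` with measured or fitted functions of the incoming and outgoing directions, but the evaluation pattern is the same.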

Creating your first augmented reality experience with ARCore

Approaches and challenges to virtual and augmented reality in health care and rehabilitation: a short course

An interactive introduction to WebGL

Explosion of interest in 3D graphics through a browser

makes use of local hardware

little loss of performance

no platform dependence

Answer three questions

Which API should I use?

What do I need to know?

How do I get started?

Data-driven photometric 3D modeling

Dataflow programming and processing for artists and beyond

We complement the last three editions of the course at SIGGRAPH Asia (2015, 2016, 2018) and SIGGRAPH (2017), making it more hands-on and including OpenISS. We explore rapid prototyping of interactive graphical applications for the stage and beyond using Jitter/Max and Processing with OpenGL and shaders, featuring connectivity with various devices. Such a rapid-prototyping environment is ideal for entertainment computing, as well as for artists and live performances using real-time interactive graphics. We share the expertise we developed in connecting real-time graphics with on-stage performance through the Illimitable Space System (ISS) v2 and its OpenISS core framework for creative near-realtime broadcasting, and in the use of AI and HCI techniques in art.

Eye tracking and virtual reality

Virtual Reality has the potential to transform the way we work, rest, and play. We are seeing use cases as diverse as education and pain management, with new applications being imagined every day. VR technology comes with new challenges, and many obstacles must be overcome to ensure a good user experience. Recently, many new Virtual Reality systems with integrated eye tracking have become available. This course presents timely, relevant information on how VR can leverage eye-tracking data to optimize the user experience and to alleviate usability issues in immersive virtual environments (VEs). Integrated eye tracking lets us determine where the viewer is focusing their attention. If we, as content creators and world builders, need the user to focus on another area of the VE, we can use techniques to attract attention to those regions, and because we continually track the user's gaze, we can confirm that we are doing so successfully. Advancing these approaches could make the VR experience more comfortable, safe, and effective for the user.
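The confirmation step described above can be sketched in a few lines. This is a hypothetical illustration (the function names and the hit-count criterion are our own, not from the course): given a stream of gaze samples in normalized screen coordinates, check whether enough of them landed inside the target region.

```python
def in_region(sample, region):
    # sample is an (x, y) gaze point; region is an axis-aligned
    # rectangle (x0, y0, x1, y1), all in normalized [0, 1] coordinates.
    (x, y), (x0, y0, x1, y1) = sample, region
    return x0 <= x <= x1 and y0 <= y <= y1

def attention_confirmed(samples, region, min_hits=3):
    # Hypothetical criterion: the attention cue is deemed successful
    # once at least min_hits gaze samples fall inside the target region.
    hits = sum(1 for s in samples if in_region(s, region))
    return hits >= min_hits
```

A real system would also weight samples by fixation duration and filter out saccades, but the core loop, compare tracked gaze against the region you tried to draw attention to, is the same.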

Creating and understanding 3D annotated scene meshes

Deep learning requires the availability of massive 3D datasets.

How to acquire 3D scenes efficiently?