SA '19: SIGGRAPH Asia 2019 XR


AR-ia: Volumetric Opera for Mobile Augmented Reality

Motivated by the recent availability of augmented and virtual reality platforms, we tackle the challenging problem of delivering immersive storytelling experiences on mobile devices. In particular, we present an end-to-end system that generates 3D assets enabling real-time rendering of an opera on high-end mobile phones. We call our system AR-ia, and in this paper we walk through its main components and technical challenges, showing how to deliver an immersive mixed reality experience in every user’s living room.

Come to the Table! Haere mai ki te tēpu!

Come to the Table! explores whether extended reality (XR) can create a bridge between indigenous people (Māori), descendants of European settlers (Pākehā), and people of other ethnicities by practicing social inclusion. The experience uses real-time depth-sensing technology and AR/VR displays to let participants view and take part in tabletop conversations with people from different cultural backgrounds in a playful, explorative, and powerful way.

Encounters: A Multiparticipant Audiovisual Art Experience with XR

“Encounters” is a multiparticipant audiovisual art experience built on a cross-reality (XR) system comprising HoloLens units, VIVE Trackers, and SteamVR. Participants fire virtual bullets or beams at physical objects, which then produce a physical sound and a corresponding virtual visual effect. This is achieved by placing a “Kuroko” unit, consisting of a VIVE Tracker, a Raspberry Pi, and a solenoid, beside a physical object. We prepared eight Kuroko units that participants can freely place anywhere in the physical space; they can then interact with the physical objects in that space by making sounds through the XR system. We believe that through this multiparticipant experience, participants not only encounter a new form of XR art expression but also rethink their relationships with other participants, physical objects, and the environment.

FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms

Supernumerary Robotic Arms (SRAs) can make physical activities easier, but they require cooperation with the operator. To improve this cooperation, we predict the operator’s intentions from his/her facial expressions (FEs) and propose mapping FEs to SRA commands (e.g., grab, release). To measure FEs, we use an optical sensor-based approach (here, with sensors mounted inside an HMD); the sensor data are fed to an SVM that classifies them into FEs. The SRAs can then carry out commands by predicting the operator’s FEs (and, arguably, the operator’s intention). We built a virtual reality environment (VE) with SRAs and a synchronizable avatar to investigate the most suitable mapping between FEs and SRA commands. At SIGGRAPH Asia 2019, users can manipulate virtual SRAs using their FEs.
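
As a rough illustration of the sensing pipeline described above, the sketch below classifies a per-frame vector of optical sensor readings into SRA commands with an SVM. The sensor dimensionality, command set, and training data are placeholders, not the authors’ actual setup.

```python
# Minimal sketch of the FE -> SRA command mapping, assuming the HMD exposes
# a fixed-length vector of optical sensor readings per frame (hypothetical).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

COMMANDS = ["idle", "grab", "release"]  # assumed command set

# X: (n_samples, n_sensors) sensor readings; y: command labels (placeholders)
X_train = np.random.rand(300, 16)
y_train = np.random.choice(len(COMMANDS), 300)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

def predict_command(sensor_frame: np.ndarray) -> str:
    """Map one frame of HMD sensor readings to an SRA command."""
    return COMMANDS[int(clf.predict(sensor_frame.reshape(1, -1))[0])]
```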

FreeMo: Extending Hand Tracking Experiences Through Capture Volume and User Freedom

Camera-based hand tracking can enable unprecedented interactive content with gripping impressions of embodiment and immersion. However, current systems are often restricted by the tracking volume of their sensors, which limits the experiences and content that can be created. We present two demonstrations of FreeMo, a hand tracking device that extends the interaction space of a Leap Motion by augmenting it with self-actuating capabilities to chase after the user’s palms. By doing so, we can offer content with significantly enhanced user freedom and immersion, as showcased by our applications: an interactive room-scale VR experience with hand tracking from head to lap, and a two-person competitive virtual air hockey game.
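
The abstract does not detail the actuation scheme; one minimal palm-chasing behaviour would be a proportional controller on a pan/tilt mount, as in this hypothetical sketch (axes follow the Leap Motion convention of y pointing away from the sensor):

```python
import math

def chase_palm(palm_x, palm_y, palm_z, kp=0.5):
    """Return pan/tilt corrections (radians) that re-centre the tracked palm
    in the sensor's field of view. A palm dead ahead yields zero correction.
    kp is an assumed proportional gain."""
    pan_error = math.atan2(palm_x, palm_y)   # left/right offset
    tilt_error = math.atan2(palm_z, palm_y)  # toward/away offset
    return kp * pan_error, kp * tilt_error
```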

Head Gaze Target Selection for Redirected Interaction

Haptic interaction in virtual reality poses an ongoing research challenge: how should touch sensations be delivered in applications? A simple solution is to have matching physical and virtual counterparts; for example, a physical switch provides perfect haptic feedback for a virtual switch with matching geometry. Redirection illusions take this further, allowing many virtual objects to be mapped to one physical object. In many systems that utilise redirection, the interaction sequence is predetermined. This prevents users from selecting their own targets, and a reset action is also required to provide an origin for the redirection. This paper overcomes these limitations with a novel application of head gaze that lets users determine their own sequence of interactions with a remapped physical-virtual interface. We also introduce a technique that provides an optimal mapping between physical and virtual components using multiple physical targets, removing the reset action.
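
One plausible reading of the multi-target mapping is that, for the gaze-selected virtual target, the system picks the physical target whose direction from the user’s hand deviates least, minimising the redirection warp. The sketch below illustrates that idea; the selection criterion and names are our assumptions, not the paper’s exact method.

```python
import numpy as np

def best_physical_target(virtual_target, physical_targets, hand_pos):
    """Return the physical target whose direction from the hand is closest
    to the direction of the virtual target (all 3D numpy arrays), so the
    warp applied to the user's reach is as small as possible."""
    v = virtual_target - hand_pos
    v = v / np.linalg.norm(v)
    best, best_cos = None, -1.0
    for p in physical_targets:
        d = p - hand_pos
        d = d / np.linalg.norm(d)
        c = float(np.dot(v, d))  # cosine of the redirection angle
        if c > best_cos:
            best, best_cos = p, c
    return best
```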

HyperDrum: Interactive Synchronous Drumming in Virtual Reality using Everyday Objects

Hyperscanning is a method for detecting whether brain-wave synchronicity exists between two or more individuals, typically arising from behavioral or social interaction. It is usually confined to neuroscience studies and is rarely used as an interaction or visual feedback mechanic. In this work, we propose HyperDrum, which leverages this cognitive synchronization to create a collaborative music production experience with immersive visualization in virtual reality. Participants wear head-mounted displays equipped with electroencephalography (EEG) sensors and create music together using a physical drum. As the melody becomes synchronized, we perform hyperscanning to evaluate the degree of synchronicity. The produced music and visualization reflect the synchronicity level while at the same time training the participants to create music together, enriching the experience and performance. HyperDrum’s goal is twofold: to blend cognitive neuroscience with creativity in VR, and to encourage connectivity between humans using both art and science.
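
The abstract does not name its synchrony metric; a common choice in hyperscanning work is the phase-locking value (PLV), sketched below for one EEG channel per participant. This is an assumption, not necessarily HyperDrum’s measure.

```python
import numpy as np
from scipy.signal import hilbert

def plv(sig_a, sig_b):
    """Phase-locking value between two equal-length EEG signals.
    Returns a value in [0, 1]; 1 means perfectly locked phases."""
    phase_a = np.angle(hilbert(sig_a))
    phase_b = np.angle(hilbert(sig_b))
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))
```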

I’m tired of demos: an adaptive MR remote collaborative platform

In AR/MR remote collaboration, a remote expert often has to demonstrate tasks for a local worker. To make this easier, we have developed a new adaptive MR remote collaboration architecture that enables a remote expert to guide a local user through physical assembly and training tasks. The remote user activates instructions through simple, intuitive interactions; clear instructions are then shown in both the AR (local) and VR (remote) views, enabling the local worker to operate a tool by following them. In the demonstration we use a hammering operation as the physical task, showing the benefits of the adaptive MR remote collaboration platform.

JumpinVR: Enhancing Jump Experience in a Limited Physical Space

We introduce a short virtual reality experience highlighting a use-case scenario of the distance-relocation technique in redirected jumping, which reduces the tracked-space requirements of spatial applications. In our demo, the player traverses a virtual factory by jumping between moving platforms, with jump distances scaled by a translation gain.
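
In essence, the virtual jump displacement is the physical displacement scaled horizontally by a gain, as in this illustrative sketch (the function and parameter names are ours):

```python
import numpy as np

def redirected_jump(takeoff_pos, physical_pos, gain=2.0):
    """Scale the horizontal jump displacement by `gain`, leaving height (y)
    unmodified, so a short physical jump covers a long virtual gap.
    Both positions are 3D numpy arrays (x, y, z) in tracking space."""
    offset = physical_pos - takeoff_pos
    offset[0] *= gain  # x
    offset[2] *= gain  # z
    return takeoff_pos + offset
```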

Light Me Up: An Augmented-Reality Projection System

Real-time facial projection mapping is a challenging problem due to its low-latency and high spatial augmentation accuracy requirements. We propose a new compact and inexpensive projector-camera system (ProCam), composed of off-the-shelf devices, that achieves dynamic facial projection mapping. A mini projector and a depth-sensing camera are coupled together to project content onto a user’s face. In one application, the camera tracks a person’s facial landmarks and simulated makeup is mapped onto the face; the makeup is created by defining different zones of interest on the face. Instead of using sophisticated hardware, we propose an affordable system that can be easily installed anywhere while still ensuring an immersive experience. No initialization phase is needed, the system can handle different face topologies, and users can keep their eyes open and enjoy the projection in a mirror.
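
As a rough illustration of the zones-of-interest idea, the sketch below fills a hypothetical cheek zone defined over tracked landmark indices and warps it into projector space. The landmark model, indices, and homography are placeholder assumptions, not the authors’ pipeline.

```python
import numpy as np
import cv2

# Hypothetical zone: a cheek blush region over a subset of landmark indices
# (e.g. from a 68-point face landmark model).
CHEEK_IDS = [1, 2, 3, 31, 39]

def makeup_overlay(frame, landmarks, color=(120, 60, 200), homography=None):
    """Fill a facial zone of interest in camera space, then warp it into
    projector space (homography from an assumed projector-camera calibration).
    `landmarks` is a list of (x, y) pixel coordinates."""
    zone = np.array([landmarks[i] for i in CHEEK_IDS], dtype=np.int32)
    overlay = np.zeros_like(frame)
    cv2.fillConvexPoly(overlay, zone, color)
    if homography is not None:
        overlay = cv2.warpPerspective(
            overlay, homography, (frame.shape[1], frame.shape[0]))
    return overlay  # sent to the projector each frame
```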

Live 6DoF Video Production with Stereo Camera

We propose a lightweight 6DoF video production pipeline that uses only one stereo camera as input. The subject can move freely in any direction (lateral and depth) as the stereo camera follows to keep the subject within the frame. DeepKeying, our own proprietary deep-learning-based keying method, makes live shooting easy with no need for a green screen. The live video stream is processed in real time to provide a 6DoF video experience. Our method enables a sense of immersion at production quality.
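
The proprietary pipeline is not public, but the core depth-from-stereo stage can be approximated with standard tooling, e.g. OpenCV semi-global block matching (the file names and parameters below are illustrative):

```python
import cv2

# Rectified left/right frames from the stereo rig (placeholder files).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                blockSize=5)
# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0

# Disparity converts to depth via the rig's focal length f and baseline b:
#   depth = f * b / disparity   (valid where disparity > 0)
```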

Lost City of Mer

Lost City of Mer is a virtual reality (VR) game experience, combined with a smartphone app, that immerses players in a fantasy undersea civilization devastated by ecological disaster caused by global warming. The project aims to harness the immersive and empathetic potential of VR to address climate change and create a sense of urgency in players with regard to their personal carbon footprint.

Players are invited to help rebuild the lost world of Mer and its devastated ecosystem in VR by re-establishing its unique flora and fauna, and fighting ongoing dangers and threats, with the aim of bringing back to life its mysterious Mer-people inhabitants. Guided by a solitary seal spirit named Athina – the last of its kind in a dying ocean – players try to save the Mer population from extinction. They tend to secret gardens of coral threatened by pollution, create habitats for Mer-people, and explore the destroyed civilization, in the process learning how their real-world actions impact the world around them.

The project was developed with the input of environmental scientists from Harvard University and Dartmouth College. The experience is based on real science, but told through fantasy, as it draws on the cross-cultural myth of the mermaid to appeal to people across the globe.

M-Hair: Extended Reality by Stimulating the Body Hair

M-Hair is a novel method for providing tactile feedback by stimulating only the body hair. Applying passive magnetic materials to the body hair makes it responsive to external magnetic fields, creating new opportunities for interaction, such as enriching media experiences, conveying emotional touch, or even relieving pain.

PhantomTouch: Creating an Extended Reality by the Illusion of Touch using a Shape-Memory Alloy Matrix

With the rise of VR applications, the ability to experience physical touch becomes increasingly important for immersion. In this paper, we propose PhantomTouch, a wearable forearm augmentation that recreates natural touch sensations by applying shear forces to the skin. In contrast to commonly used vibration-based haptics, our approach arranges lightweight, stretchable 3 × 3 cm plasters in a matrix on the skin. Each plaster is embedded with lines of shape-memory alloy (SMA) wire that control the shear forces.

The matrix arrangement of the plasters enables the illusion of a phantom touch, for instance, feeling a wrist grab or an arm stroke.

Physical e-Sports in VAIR Field system

In this study, we define physical e-sports, which require physical training, and show an example of such a sport created with a mobile virtual reality system, the VAIR Field. Unlike conventional e-sports, physical e-sports involve physical activity, making them a technological evolution of conventional sports. Our system uses extended reality technology without a head-mounted display and is safe for children to play, yet it demands physical exercise and physical ability. It is a new kind of sport that uses multiple mobile devices and virtual weapons, providing more than just visual reality and allowing multiple players to play at the same time. By superimposing the physical and virtual worlds, physical e-sports allow the body to move at full force.

Pumping Life: Embodied Virtual Companion for Enhancing Immersive Experience with Multisensory Feedback

With the advance of virtual reality (VR) head-mounted displays, virtual companions can appear more realistic and full of vitality, with breathing and facial expressions. However, users cannot interact physically with these companions because they lack a physical body. In this work, our goal is to give the virtual companion multisensory feedback in VR, allowing users to play with it physically in the immersive environment. We present Pumping Life, a dynamic flow system that enhances the virtual companion with multisensory feedback, utilizing water pumps and a heater to provide shape deformation and thermal feedback. To show the interactive gameplay enabled by our system, we deploy it in a teddy bear and design a VR role-playing game in which the player must collaborate with the teddy bear to complete a mission, perceiving the bear’s vitality and expressions through multiple tactile sensations.

SceneCam: Using AR to improve Multi-Camera Remote Collaboration

During multi-camera remote collaboration on physical tasks, as the name implies, multiple cameras capture different areas and perspectives of a task space. It can be challenging for the remote user to obtain the right view of the local user and to understand the spatial relationship between the disjoint views of the task-space areas. We present SceneCam, a prototype that uses AR to explore techniques for improving multi-camera remote collaboration, making optimal camera selection easier and faster for the remote user and making the spatial relationships between task-space areas explicit. To this end, SceneCam implements two camera selection techniques: nudging the remote user toward an optimal camera view of the local user, and automatically selecting that view. Furthermore, SceneCam provides the remote user with two focus-in-context views, one exocentric and one egocentric, that visualize the spatial relationship between the task-space areas and the local user.
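
The abstract leaves the optimality criterion open; one plausible rule, sketched below with hypothetical names, scores each camera by how directly it points at the local user and how frontally it sees them.

```python
import numpy as np

def select_camera(cameras, user_pos, user_forward):
    """cameras: list of (position, unit view_direction) pairs as 3D numpy
    arrays; user_forward is the local user's unit facing direction.
    Returns the index of the highest-scoring camera."""
    best_i, best_score = 0, -np.inf
    for i, (cam_pos, cam_dir) in enumerate(cameras):
        to_user = user_pos - cam_pos
        to_user = to_user / np.linalg.norm(to_user)
        aim = np.dot(cam_dir, to_user)            # camera points at the user
        frontal = np.dot(-to_user, user_forward)  # camera sees the user's front
        score = aim + frontal
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```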

SmartSim: Combination of Vibro-Vestibular Wheelchair and Curved Pedestal of Self-Gravitational Acceleration for Road Property and Motion Feedback

We developed a vehicle-ride simulation system for immersive virtual reality consisting of a wheelchair, which provides vibration and vestibular sensation, and a curved-surface pedestal for the wheelchair to run on, which exploits a gravitational component. Vehicle motion feedback systems often use a six-degrees-of-freedom motion platform to induce virtual vehicle acceleration on the user’s body; however, because motion platforms are typically complex and expensive, their use is limited to relatively large-scale systems. The proposed system can present a variety of road-property sensations as well as continuous vehicle acceleration using high-bandwidth wheel torque produced by two direct-current motors. Our unique combination of wheel and pedestal presents the vibration and vestibular sensations of vehicle acceleration with simple, lightweight, and low-cost equipment. In our demonstration, users can perceive various road properties, such as uneven surfaces, and the continuous acceleration of a car or roller coaster.

Super Size Hero

Super Size Hero is an immersive VR game for the HTC Vive that puts the player in the role of an overweight hero trying to save the day. A specially crafted, tracked fat suit that lets the player actively use his belly serves as the main gameplay mechanic. The game is high-score based: each round, the player must prevent a prison breakout or bank robbery by bouncing fleeing prisoners back into the prison and interrupting bank robbers, returning the money to the bank to earn as many points as possible. At the start of every level the player can choose one of three suits, each granting special abilities and a unique playstyle.

TouchVR: a Wearable Haptic Interface for VR Aimed at Delivering Multi-modal Stimuli at the User’s Palm

TouchVR is a novel wearable haptic interface that delivers multimodal tactile stimuli to the palm through the DeltaTouch haptic display, and vibrotactile feedback to the fingertips through vibration motors, for the Virtual Reality (VR) user. The DeltaTouch display can generate a 3D force vector at the contact point and present multimodal tactile sensations of weight, slippage, encounter, softness, and texture. The VR system consists of HTC Vive Pro base stations, a head-mounted display (HMD), and a Leap Motion controller for tracking the user’s hand motion. The MatrixTouch, BallFeel, RoboX, and AnimalFeel applications have been developed to demonstrate the capabilities of the proposed technology. This novel haptic interface can potentially bring a new level of immersion to VR and make it more interactive and tangible.

Upload Not Complete

Created by Taiwanese artists Hu Chin-Hsiang, Tsai Bing-Hua, and Chang Zhao-Qing, this piece combines hybrid reality, LED lights, wearable devices, and fans to build an installation that simulates uploading the human mind to digital space. Imagine an upload process during which you can see virtual objects in real space. You see the virtual objects and feel their influence (wind and vibration); after passing through an upwardly extending tunnel, the view enters a completely virtual space, yet you never know whether the upload is complete.

Who You Are is What You Tell: Effects of Perspectives on Virtual Reality Story Experiences

Virtual reality (VR) stories provide an immersive and interactive medium for presenting narrative content, yet it is challenging to present a story in a way that matches the producer’s intention. Several aspects of the VR environment may affect how the viewer experiences and perceives the content, including the character the viewer inhabits, the available interactive objects, and the areas of interest in the scene and their salience in attracting attention. This demonstration is part of a project investigating how a VR story can be presented (in other words, how the VR environment can be designed) to provide affordances that help the viewer perceive the story as the producer intended, without explicitly guiding the viewer toward particular interactions.