SIGGRAPH '22: ACM SIGGRAPH 2022 Emerging Technologies

Demonstration of Electrical Head Actuation: Enabling Interactive Systems to Directly Manipulate Head Orientation

We demonstrate a novel interface concept in which interactive systems directly manipulate the user’s head orientation. We implement this using electrical muscle stimulation (EMS) of the neck muscles, which turns the head around its yaw (left/right) and pitch (up/down) axes. At SIGGRAPH 2022 Emerging Technologies, we will demonstrate how this technology enables novel interactions via two example applications: (1) finding different visual targets in mixed reality while the system actuates the user’s head orientation to guide their point of view; (2) a VR roller coaster application where the user’s head nods up as the ride accelerates.
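
As a rough illustration of how such a system might close the loop, the sketch below maps a yaw/pitch error toward a visual target onto stimulation duty cycles for four neck-muscle channels. The channel names, gain, and safety clamp are our own assumptions for exposition, not the authors’ implementation:

    def yaw_pitch_to_ems(yaw_err_deg, pitch_err_deg, gain=0.02, max_duty=0.6):
        """Proportional mapping from head-orientation error to EMS duty cycles.

        Positive yaw_err_deg means the target is to the right; positive
        pitch_err_deg means it is above. Returns duty cycles in
        [0, max_duty] for four hypothetical neck-muscle channels.
        """
        clamp = lambda x: max(0.0, min(max_duty, x))
        return {
            # The left sternocleidomastoid rotates the head to the right,
            # and vice versa (contralateral rotation).
            "left_scm":  clamp(+gain * yaw_err_deg),
            "right_scm": clamp(-gain * yaw_err_deg),
            "extensors": clamp(+gain * pitch_err_deg),  # tilt head up
            "flexors":   clamp(-gain * pitch_err_deg),  # tilt head down
        }

    # Target 30 degrees to the right and 10 degrees below the current pose:
    print(yaw_pitch_to_ems(30.0, -10.0))
    # {'left_scm': 0.6, 'right_scm': 0.0, 'extensors': 0.0, 'flexors': 0.2}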

Demonstrating poimo as Inflatable, Inclusive Mobility Devices with a Soft Input Interface

In this demo, we showcase poimo, a series of POrtable and Inflatable MObility devices made of a balloon-like soft material called drop-stitch fabric. Poimo has two merits: (1) the inflatability of a soft, lightweight body, which lets users deflate it and carry it compactly, and (2) inclusiveness, enabling made-to-order design and fabrication of mobility devices for each user. As an alternative to conventional rigid controllers, we also introduce a soft, deformable input interface for navigating a sofa-type poimo (sketched below). Finally, we report two field studies: public users riding two types of poimo on a flat paved road, and the authors riding a sofa-type poimo in a traditional European environment with inclined stone-paved streets.
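
As a loose illustration of how a soft, deformable input surface could drive such a vehicle, the sketch below maps squeeze depth on two sides of the interface to throttle and steering. The sensor layout and mapping are our assumptions, not the authors’ design:

    def soft_input_to_drive(left_deform, right_deform):
        """Map normalized deformation readings in [0, 1] to drive commands.

        Squeezing both sides moves forward; squeezing one side harder
        steers toward that side. Purely illustrative.
        """
        throttle = 0.5 * (left_deform + right_deform)
        steering = right_deform - left_deform  # positive steers right
        return {"throttle": throttle, "steering": steering}

    print(soft_input_to_drive(0.2, 0.6))
    # {'throttle': 0.4, 'steering': 0.4}: gentle forward, turning right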

Dual Robot Avatar: Real-time Multispace Experience using Telepresence Robots and Walk Sensation Feedback including Viewpoint Sharing for Immersive Virtual Tours

Traveling to different places simultaneously is a dream for many people, but physical limits make it difficult to realize. Virtual reality technologies can help alleviate such limits. To the best of the authors’ knowledge, no study has attempted to operate multiple telepresence robots in remote places simultaneously while presenting walking-sensation feedback to the operator for an immersive multispace experience. In this study, we used two autonomous mobile robots, a dog-type and a wheel-type, whose direction of movement can be controlled by an operator (Fig. 1). The operator can choose, and re-choose, which space (or robot) to attend and can move the viewpoint using a head-mounted display (HMD) controller. A live video image with 4K resolution is transmitted to the HMD over a web real-time communication (WebRTC) network from a 360° camera mounted on top of each robot. The operator perceives viewpoint-movement feedback as a visual cue and as a vestibular feeling via waist motion and proprioception in the legs. Our system also allows viewpoint sharing, in which fifty users can enjoy omnidirectional viewing of the remote environments through HMDs without walk-like sensation feedback.
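
A minimal operator-side sketch of the attend/re-attend switching described above. The classes, names, and wrtc:// URLs are hypothetical stand-ins; the abstract does not detail the actual networking stack:

    from dataclasses import dataclass

    @dataclass
    class Robot:
        name: str
        stream_url: str  # WebRTC endpoint of the robot's 360-degree camera

    class DualAvatarSession:
        def __init__(self, robots):
            self.robots = {r.name: r for r in robots}
            self.active = None

        def attend(self, name):
            """Choose (or re-choose) which robot the operator embodies.

            Returns the stream URL the HMD should subscribe to; the same
            robot's motion telemetry would drive walk-sensation feedback.
            """
            self.active = self.robots[name]
            return self.active.stream_url

    session = DualAvatarSession([
        Robot("dog", "wrtc://dog-robot/360"),
        Robot("wheel", "wrtc://wheel-robot/360"),
    ])
    print(session.attend("dog"))    # operator attends the dog-type robot
    print(session.attend("wheel"))  # ...then switches to the wheel-type one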

HDR VR

The human visual system can resolve luminances from near zero to over a million candelas per square meter (nits), and can simultaneously resolve over 4 orders of magnitude without adaptation [Kunkel and Reinhard 2010]. While most traditional displays replicate only a fraction of this range, high-dynamic-range (HDR) displays aim to support ranges closer to perceptual limits [Reinhard et al. 2010] and have achieved widespread use in cinemas and home theaters. The perceptual impact of HDR, however, remains largely unexplored in the context of virtual reality (VR) displays, which are typically limited to peak luminance values below 200 nits [Mehrfard et al. 2019]. To address this, we present an HDR VR demonstrator whose display system is built entirely from off-the-shelf parts and is capable of peak luminances over 16,000 nits. We achieve this without reducing the field of view (FOV) or simultaneous contrast relative to commercially available VR headsets, while maintaining support for binocular and motion-parallax depth cues. Consequently, our prototype has the potential to achieve a higher degree of perceptual realism than existing direct-view devices such as HDR televisions and other high-luminance prototypes.
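
A quick back-of-the-envelope check of the quoted luminance figures (the 1.6-nit black level in the last line is our assumption purely for the arithmetic; the abstract does not state one):

    from math import log10

    peak_prototype = 16_000  # nits, this demonstrator
    peak_typical_vr = 200    # nits, typical VR headsets [Mehrfard et al. 2019]

    print(peak_prototype / peak_typical_vr)  # 80.0: an 80x brighter peak
    # Four orders of magnitude of simultaneous range correspond to 10^4:1
    # contrast; over a hypothetical 1.6-nit black level, a 16,000-nit peak
    # would span exactly that range:
    print(log10(peak_prototype / 1.6))       # 4.0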

ImageFlowing: Enhancing Emotional Expression by Reproducing the Vital Signs of the Photographer

ImageFlowing is a ‘living’ photograph that reproduces the vital signs of the photographer. Viewers can feel how the photographer felt through the photographer’s breathing, heartbeat, and skin temperature. We extend a two-dimensional picture into a multi-modal experience, aiming to create a tighter emotional link between the viewer and the photographer.
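
A minimal sketch of the replay mapping such a ‘living’ photograph implies, assuming vitals recorded at capture time drive viewer-side actuators; every channel name and the sample format here are hypothetical:

    vitals = {                 # recorded alongside the photograph
        "breathing_hz": 0.25,  # ~15 breaths per minute
        "heart_rate_bpm": 72,
        "skin_temp_c": 33.5,
    }

    def actuator_targets(v):
        """Translate recorded vitals into viewer-side actuator setpoints."""
        return {
            "breath_motor_hz": v["breathing_hz"],               # slow swell
            "vibrotactile_pulse_hz": v["heart_rate_bpm"] / 60,  # heartbeat taps
            "peltier_setpoint_c": v["skin_temp_c"],             # surface warmth
        }

    print(actuator_targets(vitals))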

Meta Avatar Robot Cafe: Linking Physical and Virtual Cybernetic Avatars to Provide Physical Augmentation for People with Disabilities

Meta Avatar Robot Cafe fuses cyberspace and physical space to create new encounters between people. It provides a place where people with disabilities who have difficulty going out can freely switch between their physical and virtual bodies, and convey their presence and warmth to each other.

Rapid Design of Articulated Objects: An Interactive Showcase

Designing articulated objects is challenging because, unlike with static objects, it requires complex decisions regarding the form, parts, rig, poses, and motion. We showcase a novel 3D sketching system for authoring concepts of articulated objects during the early stages of design, when such decisions are made. Designers can easily learn and use our system to produce compelling concepts rapidly, demonstrating that 3D sketching can bridge the gap between 2D sketching and 3D modeling, and be extended to designing articulated objects in films, animations, games, and products.

ReQTable: Square tabletop display that provides dual-sided mid-air images to each of four users

We propose “ReQTable”, an optical system that displays dual-sided mid-air images to each of four users with reduced stray light. Dual Slit Mirror Arrays (dual SMAs) can produce a mid-air image that multiple people can view simultaneously without wearing special equipment. Dual SMAs are rectangular and can, in theory, display mid-air images in the four surrounding directions. However, they also produce unwanted light, called stray light, which is especially noticeable when mid-air images are displayed in certain directions. Users find it difficult to distinguish stray light from the intended images, so they cannot concentrate on viewing only the mid-air images. To improve the user experience, it is necessary to suppress stray light. ReQTable displays mid-air images in four directions while reducing stray light.

Sense of Embodiment Inducement for People with Reduced Lower-body Mobility and Sensations with Partial-Visuomotor Stimulation

To induce a Sense of Embodiment (SoE) in a virtual 3D avatar during a virtual reality (VR) walking scenario, VR interfaces have employed visuotactile or visuomotor approaches. However, people with reduced lower-body mobility and sensation (PRLMS), who cannot feel or move their legs, find this task extremely challenging. Here, we propose an upper-body motion-tracking-based partial-visuomotor technique to induce SoE and positive feedback for PRLMS patients. We design partial-visuomotor stimulation consisting of two distinct inputs (button control and upper-body motion tracking) and two outputs (wheelchair motion and gait motion); the resulting conditions are enumerated below. A preliminary user study was conducted to explore subjective preference through qualitative feedback. From the qualitative results, we observed positive responses to partial-visuomotor stimulation regarding SoE in the asynchronous VR experience for PRLMS.
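
For concreteness, crossing the two inputs with the two outputs yields four stimulation conditions; the sketch below simply enumerates them (the condition labels are ours, not the authors’):

    from itertools import product

    inputs  = ["button_control", "upper_body_motion_tracking"]
    outputs = ["wheelchair_motion", "gait_motion"]

    # Crossing the two inputs with the two outputs yields the four
    # partial-visuomotor conditions a study like this would compare.
    for inp, out in product(inputs, outputs):
        print(f"input={inp} -> avatar output={out}")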

ThermoBlinds: Non-Contact, Highly Responsive Thermal Feedback for Thermal Interaction

We present ThermoBlinds, a non-contact, highly responsive thermal feedback device for thermal interactions. It provides responsive feedback by rapidly adjusting infrared irradiance with a shutter mechanism. The feedback can be synchronized with visual media content to reproduce thermal experiences. By providing thermal feedback in accordance with the user’s gaze, it is possible to create a visual media experience with a high-resolution thermal sensation. Additionally, it supports remote communication by providing thermal feedback based on the other person’s gaze.
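
A hedged sketch of the gaze-contingent control implied above: the shutter opens in proportion to how close the gaze is to a thermally ‘hot’ region of the displayed media. The region model, falloff, and all names are illustrative assumptions:

    def shutter_aperture(gaze_xy, hot_regions, falloff=0.1):
        """Return a shutter opening in [0, 1] based on where the user looks.

        gaze_xy:     normalized gaze point (x, y) on the displayed media
        hot_regions: list of ((x, y), radius, intensity) thermal hotspots
        """
        gx, gy = gaze_xy
        opening = 0.0
        for (cx, cy), radius, intensity in hot_regions:
            d = ((gx - cx) ** 2 + (gy - cy) ** 2) ** 0.5
            if d < radius + falloff:
                # Full intensity inside the hotspot, linear falloff at edge.
                w = 1.0 if d < radius else 1.0 - (d - radius) / falloff
                opening = max(opening, intensity * w)
        return opening

    # Looking directly at a campfire region in the scene opens the shutter.
    campfire = [((0.5, 0.4), 0.08, 0.9)]
    print(shutter_aperture((0.52, 0.41), campfire))  # 0.9 -> mostly open
    print(shutter_aperture((0.9, 0.9), campfire))    # 0.0 -> closed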

Waving Blanket: Dynamic Liquid Distribution for Multiple Tactile Feedback using Rewirable Piping System

Perceiving multiple tactile sensations in virtual reality (VR) is one of the keys to enabling a compelling, immersive experience. The haptic experience arises from different receptors across the human body. Although several haptic technologies can each produce a different tactile stimulus, a hybrid haptic system must combine the individual techniques, requiring a complex configuration. Our goal, therefore, is to provide several kinds of stimulation with one technique, reducing the effort of integrating haptic devices. This paper presents Waving Blanket, a dynamic liquid-distribution system for multiple tactile feedback that uses a water pump and air valve to transmit and allocate liquid in a pipe. We designed a virtual natural scene and developed a relaxation application called “Water Forest” with our haptic system to demonstrate its potential in combination with visual and auditory feedback. Additionally, a rewirable piping system is adopted to explore mechanisms for simulating vibration, pressure, weight, and weight-shifting feedback, as sketched below.
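
As a sketch of how one pump-and-valve mechanism could yield the four percepts named above, the hypothetical drive schedules below pulse, fill, hold, or slowly shuttle liquid in a pipe segment (all waveforms and constants are our assumptions, not the authors’ code):

    import math

    def drive_signal(percept, t):
        """Return (pump_duty in [0, 1], valve_open) at time t in seconds."""
        if percept == "vibration":        # rapidly pulse a small volume
            return (0.5 + 0.5 * math.sin(2 * math.pi * 20 * t), True)
        if percept == "pressure":         # fill the segment, then hold it
            return (1.0 if t < 1.0 else 0.2, False)
        if percept == "weight":           # fill once, then stop pumping
            return (1.0 if t < 2.0 else 0.0, False)
        if percept == "weight_shifting":  # slowly shuttle liquid to and fro
            return (0.6, math.sin(2 * math.pi * 0.2 * t) > 0)
        raise ValueError(percept)

    for t in (0.0, 0.5, 1.5, 3.0):
        print(t, drive_signal("weight_shifting", t))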

WonderScope: Practical Near-surface AR Device for Museum Exhibits

Mobile augmented reality (AR) applications have become essential tools for delivering additional information to museum visitors. However, interacting through a mobile screen can distract visitors from the exhibits. We propose WonderScope, a peripheral for mobile devices that enables practical near-surface AR interaction. Using a single small RFID tag on the exhibit as the origin, WonderScope detects the position and orientation of the device on the exhibit’s surface. It works on surfaces of various materials by fusing data from two types of displacement sensors and the accelerometer of an inertial measurement unit (IMU). The mobile application uses this data for spatial registration of digital content on the exhibit’s surface, making users feel as if they are seeing through or magnifying the exhibit.
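
A simplified sketch of the kind of dead-reckoning fusion the abstract describes: the RFID tag fixes the origin, the two displacement sensors supply motion deltas that are averaged, and the IMU rejects spurious motion and supplies heading. Sensor models, weights, and names are illustrative assumptions:

    import math

    def fuse_step(pose, optical_delta, mechanical_delta, accel_ok, heading):
        """Advance the on-surface pose (x, y) by one sensor reading.

        pose:             current (x, y) relative to the RFID-tagged origin
        optical_delta:    (dx, dy) from one displacement sensor
        mechanical_delta: (dx, dy) from the second displacement sensor
        accel_ok:         True if IMU acceleration is consistent with motion
        heading:          device yaw in radians, from the IMU
        """
        if not accel_ok:  # the IMU says the device did not actually move
            return pose
        # Average the two displacement sensors, then rotate into the
        # surface frame anchored at the RFID tag.
        dx = 0.5 * (optical_delta[0] + mechanical_delta[0])
        dy = 0.5 * (optical_delta[1] + mechanical_delta[1])
        c, s = math.cos(heading), math.sin(heading)
        return (pose[0] + c * dx - s * dy, pose[1] + s * dx + c * dy)

    pose = (0.0, 0.0)  # origin established by reading the RFID tag
    pose = fuse_step(pose, (1.0, 0.0), (0.9, 0.1), True, 0.0)
    print(pose)        # (0.95, 0.05)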