SA '21 Emerging Technologies: SIGGRAPH Asia 2021 Emerging Technologies

“Amazing Sketchbook the Ride”: Driving a Cart in a 3DCG Scene Created from a Hand-Drawn Sketch

Collaborative Avatar Platform for Collective Human Expertise

In this study, we developed a system in which two users are integrated into one actual robot arm and collaborate with each other. Dividing roles between the users allows flexible movements that are not limited by the range of motion and rotational freedom of the human arm, and enables movements that are impossible for a single person. Mixing the users' movements at an adjustable ratio allows an expert to support a beginner in making more stable movements, and induces collaboration between users with different perspectives. Additionally, we implemented tactile feedback to enable interaction between the users, and between the users and the robot. We investigate the effects of these mechanisms on usability and cognitive behavior. This system is expected to become a new method of collaboration in the cyber-physical society.
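
To illustrate the adjustable mixing described above, here is a minimal sketch (assumed 6-DoF pose representation and names, not the authors' implementation) of blending two users' hand poses into a single robot-arm target:

    # Blend two users' 6-DoF targets (x, y, z, roll, pitch, yaw) at a ratio.
    # Linear mixing of Euler angles is a simplification; a full system would
    # interpolate orientations with quaternion slerp.
    import numpy as np

    def blend_targets(pose_a: np.ndarray, pose_b: np.ndarray, ratio: float) -> np.ndarray:
        """ratio = 1.0 follows user A only; 0.5 averages both users."""
        return ratio * pose_a + (1.0 - ratio) * pose_b

    # Example: an expert (A) stabilizes a beginner (B) at a 70/30 mix.
    expert = np.array([0.40, 0.10, 0.30, 0.0, 1.2, 0.0])
    beginner = np.array([0.42, 0.08, 0.33, 0.1, 1.1, 0.1])
    target = blend_targets(expert, beginner, ratio=0.7)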

Depth-Aware Dynamic Projection Mapping using High-speed RGB and IR Projectors

In this paper, we propose a robust system for dynamic projection mapping (DPM). It consists of a high-speed 500-fps camera, a high-speed 24-bit 947-fps RGB projector, and a newly developed 8-bit 2,880-fps infrared (IR) projector. This configuration allows us to capture a depth map and project depth-aware images at a high frame rate. We also realize 0.4-ms markerless 3D pose tracking using the depth map by leveraging the small inter-frame motions afforded by high-frame-rate capture. We exploit the captured data to apply depth-aware DPM to an entire scene with low latency, in a setting where tracking-based and modelless mapping are combined and performed simultaneously.
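
The 0.4-ms tracking relies on the fact that, at high frame rates, the object barely moves between frames. The following sketch (our illustration, not the authors' code; the function name is hypothetical) shows the standard linearized point-to-plane alignment step that such a small-motion assumption enables:

    import numpy as np

    def small_motion_step(src_pts, dst_pts, dst_normals):
        """One linearized point-to-plane ICP step, valid for small rotations.

        src_pts: Nx3 model points; dst_pts, dst_normals: Nx3 matched depth
        points and their normals. Returns a small-angle rotation vector and
        a translation that move src_pts towards dst_pts.
        """
        A = np.hstack([np.cross(src_pts, dst_normals), dst_normals])  # Nx6
        b = np.einsum('ij,ij->i', dst_normals, dst_pts - src_pts)     # N
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x[:3], x[3:]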

DroneStick: Flying Joystick as a Novel Type of Interface

DroneStick is a novel hands-free method for smooth interaction between a human and a robotic system via one of its agents, without training or any additional handheld or wearable device or infrastructure. A flying joystick (DroneStick), part of a multi-robot system, is composed of a flying drone and a coiled wire with a vibration motor. By pulling on the coiled wire, the operator commands certain motions of the follower robotic system. DroneStick does not require the user to carry any equipment before or after the interaction. It provides useful feedback to the operator in the form of force transferred through the wire, translation/rotation of the flying joystick, and motor vibrations at the fingertips. This feedback allows users to interact intuitively with different kinds of robotic systems. A potential application is enhancing automated 'last mile' delivery, where a recipient gently guides a delivery drone/robot to the spot where a parcel is to be dropped.
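
As an illustration of the control mapping (the gain and threshold are assumptions, not published values), the wire pull can be read like a spring-loaded joystick deflection:

    import numpy as np

    DEADZONE_M = 0.05   # ignore small pulls so a slack wire commands nothing
    GAIN = 2.0          # commanded speed (m/s) per metre of pull

    def wire_to_velocity(anchor: np.ndarray, hand: np.ndarray) -> np.ndarray:
        """Map the wire pull (hand position relative to the drone's anchor
        point) to a 3D velocity command for the follower robotic system."""
        pull = hand - anchor
        extension = np.linalg.norm(pull)
        if extension < DEADZONE_M:
            return np.zeros(3)          # wire slack: stay in place
        direction = pull / extension
        return GAIN * (extension - DEADZONE_M) * direction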

Frisson Waves: Sharing Frisson to Create Collective Empathetic Experiences for Music Performances

Frisson is a mental experience accompanied by bodily reactions such as shivers, tingling skin, and goosebumps. However, this sensation cannot naturally be shared with others and is rarely used in live performances. We propose Frisson Waves, a real-time system to detect, trigger, and share frisson in a wave-like pattern during music performances. The system consists of a physiological sensing wristband for detecting frisson and a thermo-haptic neckband for inducing it. We aim to improve the connectedness of audience members and performers during music performances by sharing frisson.
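
The abstract does not specify the detector, so the sketch below assumes frisson is flagged from a rapid rise in electrodermal activity (a common arousal proxy for wrist-worn sensors) and that the wave effect is produced by delaying each neckband in proportion to its distance from the detected origin:

    import numpy as np

    def detect_frisson(eda, fs, rise_threshold=0.05):
        """Flag samples where the EDA slope (uS/s) exceeds a threshold.
        eda: skin-conductance samples (uS); fs: sampling rate (Hz)."""
        slope = np.gradient(eda) * fs
        return slope > rise_threshold

    def wave_delays(seat_positions, origin, speed_m_s=2.0):
        """Per-neckband trigger delay (s) so the thermo-haptic cue spreads
        outward from the frisson origin like a wave across the audience."""
        distances = np.linalg.norm(seat_positions - origin, axis=1)
        return distances / speed_m_s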

GroundFlow: Multiple Flows Feedback for Enhancing Immersive Experience on the Floor in the Wet Scenes

With haptic technology, users can experience enhanced immersion in virtual reality. Most haptic techniques focus on the upper body, such as the head, chest, and hands, to provide force feedback when interacting with the virtual world. Researchers have therefore been exploring techniques to simulate haptic feedback for walking around in virtual space, such as texture, height, vibration, shape, and resistance. However, those techniques cannot provide a genuine wet sensation in a virtual scene. We therefore present GroundFlow, a water-recirculation system that provides multiple-flow feedback on the floor in immersive virtual reality. Our demonstration implements a virtual excursion that lets users experience different water flows and their corresponding wet scenes.
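
The control scheme is not detailed in the abstract; as a speculative sketch, the virtual flow direction could be mapped to per-inlet pump rates so that water streams across the floor from the matching side:

    import numpy as np

    # Assumed layout: four floor inlets pushing water along +x, -x, +y, -y.
    INLET_DIRS = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)

    def pump_rates(scene_flow_xy, max_rate=1.0):
        """scene_flow_xy: 2D velocity of the virtual water at the user's feet.
        Returns a normalized pump rate per inlet (0..max_rate)."""
        speed = np.linalg.norm(scene_flow_xy)
        if speed == 0.0:
            return np.zeros(len(INLET_DIRS))
        alignment = INLET_DIRS @ (scene_flow_xy / speed)   # cosine per inlet
        return np.clip(alignment, 0.0, 1.0) * min(speed, 1.0) * max_rate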

HoloBurner: Mixed Reality Equipment for Learning Flame Color Reaction by using Aerial Imaging Display

The HoloBurner system is a mixed reality instrument for learning flame color reactions in chemistry experiments. It provides users with the sense and enjoyment of performing experiments in a laboratory. The flame color reaction is a phenomenon by which an element can be identified from its unique flame color, through which students learn that substances are composed of various elements. The HoloBurner system gives users the real feeling of operating a burner together with a realistic image of the flame on an aerial imaging display. Students can experience flame color reactions safely with the HoloBurner, in the same natural way as with real instruments.
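
The rendering pipeline is not described in the abstract; a minimal sketch of the core lookup, using textbook flame colors, might be:

    # Element symbol -> approximate flame color (R, G, B) used to tint the
    # virtual flame; values are illustrative, not the system's calibration.
    FLAME_COLORS = {
        "Li": (220, 20, 60),    # crimson
        "Na": (255, 200, 0),    # yellow
        "K":  (200, 160, 255),  # lilac
        "Cu": (0, 180, 140),    # blue-green
        "Sr": (230, 40, 40),    # red
        "Ba": (120, 220, 80),   # yellow-green
    }

    def flame_color(element: str) -> tuple:
        """Color for the aerial flame image of the chosen sample."""
        return FLAME_COLORS.get(element, (255, 140, 0))  # default burner flame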

Integration of stereoscopic laser-based geometry into 3D video using DLP Link synchronisation

Stereoscopic video projection using active shutter glasses is a mature technology employed in multi-person immersive virtual reality environments such as CAVE systems. On the other hand, non-stereoscopic laser projectors are popular in the entertainment industry because they can display graphics at greater distances and cover larger areas. Stereoscopic-capable laser-based vector graphics could therefore enhance video-based immersive 3D experiences thanks to their unique visual characteristics, including extremely high contrast and an arbitrarily extended gamut through the use of multiple laser sources. Their virtually infinite depth of field also allows for easy installation compared with video projectors, regardless of the size of the augmented space. In this work, we demonstrate a system integrating 3D laser-based vector graphics into a dynamic scene generated by a conventional stereoscopic video projection system.
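
Conceptually, DLP Link provides the eye-swap clock and the laser system must redraw eye-specific geometry on that clock. A schematic loop (all interfaces here are hypothetical, not a real laser-DAC API):

    EYE_LEFT, EYE_RIGHT = 0, 1

    def run_stereo_laser(dac, scene, sync):
        """dac.draw(points) outputs one vector frame; sync.wait_flip()
        blocks until the next DLP Link eye flip (assumed interfaces)."""
        eye = EYE_LEFT
        while True:
            sync.wait_flip()               # stay locked to the shutter glasses
            points = scene.project(eye)    # geometry rendered for this eye
            dac.draw(points)
            eye = EYE_RIGHT if eye == EYE_LEFT else EYE_LEFT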

LIPSYNC.AI: A.I. Driven Lips and Tongue Animations Using Articulatory Phonetic Descriptors and FACS Blendshapes

We present a solution for generating realistic lip and tongue animations, using a novel hybrid method that draws on both advances in deep learning and the theory behind speech and phonetics. Our solution generates highly accurate and natural animations of the jaw, lips, and tongue by using additional phonetic information during neural network training and by procedurally mapping the network's outputs directly to FACS [Prince et al. 2015] based blendshapes, in order to comply with animation industry standards.
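
The abstract states that network outputs are mapped procedurally to FACS-based blendshapes; the sketch below shows one plausible form of that mapping (descriptor and blendshape names are hypothetical):

    # Hypothetical articulatory descriptors -> weighted blendshape channels.
    DESCRIPTOR_TO_BLENDSHAPES = {
        "bilabial_closure": {"lipsTogether": 1.0},
        "lip_rounding":     {"lipPucker": 0.8, "jawOpen": 0.1},
        "tongue_raise":     {"tongueUp": 1.0},
        "jaw_lower":        {"jawOpen": 0.9},
    }

    def to_blendshapes(descriptors: dict) -> dict:
        """descriptors: {name: activation in 0..1} predicted per frame.
        Returns accumulated blendshape weights clamped to [0, 1]."""
        weights = {}
        for name, activation in descriptors.items():
            for shape, gain in DESCRIPTOR_TO_BLENDSHAPES.get(name, {}).items():
                weights[shape] = min(1.0, weights.get(shape, 0.0) + gain * activation)
        return weights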

Midair Haptic-Optic Display with Multi-Tactile Texture based on Presenting Vibration and Pressure Sensation by Ultrasound

Reproducing the tactile texture of a real object allows humans to interact in virtual reality space with a high sense of immersion. In this study, we develop a midair haptic-optic display that renders multi-tactile texture using focused ultrasound. With this system, users can touch aerial 3D images that have realistic texture without wearing any devices. To render realistic texture, our system simultaneously presents vibration and static pressure sensations, whereas previous ultrasound systems were limited to vibration alone. Combining the two sensations improves the realism of ultrasound-rendered tactile texture. In the demo, we present three textures of different roughness: cloth, wooden board, and gel.
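
One way to picture the combination (our simplification, not the authors' signal design): the ultrasound carrier's amplitude envelope holds a constant component, felt as static pressure, plus a low-frequency modulation, felt as vibration:

    import numpy as np

    def drive_envelope(t, pressure=0.6, vib_depth=0.4, vib_hz=200.0):
        """Amplitude envelope (0..1) for the ultrasound carrier. Rougher
        textures would use a larger vib_depth or a different vib_hz."""
        return np.clip(pressure + vib_depth * np.sin(2 * np.pi * vib_hz * t),
                       0.0, 1.0)

    t = np.arange(0.0, 0.02, 1.0 / 40_000)   # 20 ms sampled at 40 kHz
    envelope = drive_envelope(t)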

Multimodal Feedback Pen Shaped Interface and MR Application with Spatial Reality Display

A multimodal interface is essential to enrich the reality of drawing in a virtual 3D environment. We propose a pen-shaped interface capable of providing the following multimodal feedback: (1) linear motion force feedback to express the contact pressure of a virtual object, (2) rotational force feedback to simulate the friction of rubbing a virtual surface and the tangential contact force with a virtual object, (3) vibrotactile feedback, and (4) auditory feedback to express contact information and the texture of a virtual object. We developed a Mixed Reality (MR) interaction system combining the pen-shaped interface with a Spatial Reality Display. The system displays a virtual pen tip extending from the physical pen-shaped interface, which the user can use to draw in the virtual workspace and to interact with virtual objects. This demonstrates the advantage of our proposal by improving the realism of virtual-space interaction.
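
A minimal dispatch sketch for the four channels (device methods and gains are assumptions for illustration):

    def render_feedback(pen, contact):
        """pen: device handle with assumed actuator methods; contact:
        collision info (penetration depth in m, tangential speed in m/s,
        material name), or None when the pen tip is free."""
        if contact is None:
            return
        pen.set_linear_force(min(contact.depth * 50.0, 3.0))      # (1) pressure, N
        pen.set_rotational_force(0.2 * contact.tangential_speed)  # (2) friction
        pen.vibrate(amplitude=0.5, frequency_hz=250)              # (3) contact transient
        pen.play_audio(contact.material + "_scratch.wav")         # (4) texture sound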

Parallel Ping-Pong: Demonstrating Parallel Interaction through Multiple Bodies by a Single User

We explore “Parallel Interaction”, where a user controls several bodies concurrently. To demonstrate this type of interaction we developed “Parallel Ping-Pong”: the user plays ping-pong by controlling two robot arms with a Virtual Reality (VR) controller while looking from the viewpoint of one of the two robot arms through a Head Mounted Display (HMD). The HMD view switches between the two robot arms’ viewpoints according to the ping-pong ball’s position and direction. To preserve the sense of agency over the robot arms (which is hindered by system latency), the robot arms calculate the ball’s path and position themselves to hit it back, all while integrating the controller motion. Through this demonstration, we investigate a real-life design of Parallel Interaction with multiple bodies by a single user.
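
The viewpoint-switching rule can be sketched as follows (state names are assumed; the real system also uses the ball's predicted path):

    import numpy as np

    def select_viewpoint(ball_pos, ball_vel, arm_positions):
        """Return the index of the arm the ball is travelling towards;
        fall back to the nearest arm when the ball is nearly stationary."""
        if np.linalg.norm(ball_vel) < 1e-3:
            return int(np.argmin([np.linalg.norm(ball_pos - p)
                                  for p in arm_positions]))
        rates = [np.dot(ball_vel, (p - ball_pos) / np.linalg.norm(p - ball_pos))
                 for p in arm_positions]
        return int(np.argmax(rates))  # greatest approach rate wins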

Real-time Image-based Virtual Try-on with Measurement Garment

Virtual try-on technology that replaces the garment a customer is wearing with an arbitrary one can significantly improve the online clothes shopping experience [anonymous 2021; Han et al. 2018; Yang et al. 2020]. In this work, we present a real-time image-based virtual try-on system composed of two parts: photo-realistic clothed-person image synthesis, which lets customers experience the try-on result, and garment capturing, which lets retailers capture the rich deformations of the target garment. Distinguished from previous image-based virtual try-on work, we formulate the problem as a supervised image-to-image translation problem using a measurement garment, and we capture the training data with a custom actuated mannequin.
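
Framed as supervised image-to-image translation, one training step might look like the following PyTorch sketch (our schematic; the actual losses and architecture are not specified in the abstract):

    import torch

    def train_step(generator, optimizer, measurement_img, target_img):
        """measurement_img: customer in the measurement garment (B,3,H,W);
        target_img: the same pose wearing the target garment, captured on
        the actuated mannequin as ground truth."""
        optimizer.zero_grad()
        prediction = generator(measurement_img)
        # L1 stands in for the full objective (perceptual/GAN terms omitted).
        loss = torch.nn.functional.l1_loss(prediction, target_img)
        loss.backward()
        optimizer.step()
        return loss.item()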

Recognition of Gestures over Textiles with Acoustic Signatures

We demonstrate a method capable of turning textured surfaces (in particular, textile patches) into opportunistic input interfaces using a machine learning model pre-trained on acoustic signals generated by scratching different fabrics. A single short audio recording is then sufficient to characterize both the gesture and the texture substrate. The sensing method requires no intervention on the fabric (no special coating, additional sensors, or wires). It is passive (no acoustic or RF signal is injected) and works well with regular microphones, such as those embedded in smartphones. Our prototype yields 93.86% accuracy on simultaneous texture/gesture recognition over a test matrix of eight textures and eight gestures, as long as the microphone is close enough (e.g., under the fabric) or the patch is attached to a solid body that transmits sound waves. Preliminary results also show that the system recognizes the manipulation of Velcro straps and zippers, and the tapping or scratching of plastic clothing buttons, through the air when the microphone is within personal space. This research paves the way for a fruitful collaboration between wearables researchers and fashion designers, which could lead to an interaction dictionary for common textile patterns or guidelines for designing signature-robust stitched patches that do not compromise aesthetic elements.
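
As a hedged sketch of such a pipeline (the paper's architecture is unspecified; librosa and scikit-learn are assumed stand-ins), a fixed-length spectral feature plus a generic classifier is enough to convey the idea:

    import numpy as np
    import librosa                    # assumed available for audio features
    from sklearn.svm import SVC

    def features(recording, sr=16_000):
        """Log-mel spectrogram averaged over time -> fixed-length vector."""
        mel = librosa.feature.melspectrogram(y=recording, sr=sr, n_mels=64)
        return np.log(mel + 1e-6).mean(axis=1)

    # X: features of labelled scratch recordings; y: texture/gesture class ids.
    # clf = SVC().fit(X, y)
    # prediction = clf.predict([features(new_clip)])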

Self-Shape-Sensing Device with Flexible Mechanical Axes for Deformable Input Interface

We propose a novel device that senses its own shape, structured around flexible mechanical axes that allow deformation with a wide degree of freedom and enable tangible, intuitive control by hand. The device has a 5 x 5 axis structure consisting of spherical joints that rotate within ±45° and slide shafts whose lengths vary from 50 to 71 mm. We developed a signal-processing algorithm that runs on the device's microcontroller unit (MCU) and uses the built-in sensors not only to reconstruct deformations in real time but also to feed those reconstructions directly into 3D graphics applications. As a result, we achieved an intuitive and interactive 3D experience using volumetric displays.
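
A planar simplification of the reconstruction (not the authors' firmware): per-axis sensor readings are chained by forward kinematics into joint positions:

    import numpy as np

    def axis_points(rotations_deg, lengths_mm):
        """rotations_deg: bend angle read at each spherical joint (clamped
        to the +/-45 deg mechanical range); lengths_mm: measured length of
        each slide shaft (50-71 mm). Returns joint positions along one axis;
        a real reconstruction would do this in 3D for all 5 x 5 axes."""
        pts, pos, heading = [np.zeros(3)], np.zeros(3), 0.0
        for rot, length in zip(rotations_deg, lengths_mm):
            heading += np.radians(np.clip(rot, -45.0, 45.0))
            pos = pos + length * np.array([np.sin(heading), np.cos(heading), 0.0])
            pts.append(pos)
        return np.array(pts)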

Simultaneous Augmentation of Textures and Deformation Based on Dynamic Projection Mapping

In this paper, we exploit human perception characteristics and dynamic projection mapping techniques to overwrite both the texture and the apparent deformation of a real object. To keep the projection aligned with a moving object and to induce the deformation illusion, we developed a 1000-fps projector-camera system and demonstrate augmentation of the real world. In the demonstration, the audience will see a plaster figure turn into a colorful, flabby object.

The Aromatic Garden: Exploring new ways to interactively interpret narratives combining olfaction and vision including temporal change of scents using olfactory display

VWind: Virtual Wind Sensation to the Ear by Cross-Modal Effects of Audio-Visual, Thermal, and Vibrotactile Stimuli

In the field of virtual reality, wind displays have been researched to present wind sensations. However, since wind displays need physical wind sources, large heat-transfer mechanisms are required to produce hot or cold wind. We propose presenting a virtual wind sensation through cross-modal effects. We developed VWind, a wearable headphone-type device that applies vibrotactile and thermal stimuli to the ear in addition to visual scenarios and binaural sounds. In the demonstrations, users experience virtual wind sensations as if wind were blowing past their ears or as if they were exposed to freezing winds.
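
An illustrative mapping from virtual wind to the two stimuli (all constants are assumptions, not calibrated values):

    def wind_to_stimuli(wind_speed_ms, wind_temp_c, skin_temp_c=33.0):
        """Return (vibrotactile amplitude 0..1, target ear temperature in C)
        for a given virtual wind speed and temperature."""
        amp = min(wind_speed_ms / 15.0, 1.0)   # saturate around 15 m/s
        # Stronger wind pulls the skin further towards the wind temperature,
        # a wind-chill-like approximation.
        target = skin_temp_c + (wind_temp_c - skin_temp_c) * min(0.2 + amp, 1.0)
        return amp, target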

Weighted Walking: Propeller-based On-leg Force Simulation of Walking in Fluid Materials in VR