SIGGRAPH '19: ACM SIGGRAPH 2019 Emerging Technologies

360-degree transparent holographic screen display

We propose a technique for imparting a feeling of reality to 2D images. We have succeeded in fabricating a holographic screen with higher transparency and luminance than screens based on conventional technology. By combining the 360-degree transparent holographic screen display with sensing technology that uses multiple high-speed cameras, the observer gets the feeling that an object is "actually there". The holographic screen fuses the background with the displayed image, strengthening the impression that the image is floating, and the high-speed cameras generate a motion-parallax image that matches the observer's position in real time. As a result, the image appears to be located at the center of the cylinder.
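
As an illustration of the view-dependent step, the sketch below (Python; render_fn is a hypothetical stand-in for the actual renderer, and a z-up coordinate frame is assumed) computes the observer's direction around the cylinder and renders the matching parallax view:

    import numpy as np

    def render_parallax_view(observer_pos, screen_center, render_fn):
        # Direction from the cylinder's axis to the tracked observer (z up assumed).
        d = observer_pos - screen_center
        azimuth = np.arctan2(d[1], d[0])                    # angle around the cylinder
        elevation = np.arctan2(d[2], np.hypot(d[0], d[1]))  # height of the viewpoint
        # Render the object as seen from this direction so that it
        # appears fixed at the center of the cylinder.
        return render_fn(azimuth, elevation)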

A compact retinal scan near-eye display

We have developed a compact full-color laser beam scanning (LBS)-based augmented reality (AR) near-eye display. Using a unique relay optical system that adopts a novel holographic grating to compensate for the color dispersion of the holographic image combiner in front of the eye, we have achieved high resolution (1280x720), a large field of view (47 degrees diagonal), high transparency (over 85%), and hand-held miniaturization. The prototype carries only two connectors, USB Type-C and HDMI Type-D, which supply power and the HDMI video signal, respectively, from a small control box over two cables.

A design for optical cloaking display

In the graphics research context, optical cloaking that hides an object by relaying a light field is also a kind of display. Despite much interest in cloaking, large-scale cloaking has not been achieved without limitations and/or assumptions. To solve this problem, a computational design method is proposed for an optical cloaking display via viewpoint transformation. The method uses two kinds of passive optical elements that map a point to its plane-symmetric point. In the experiments, a novel passive display was developed that optically cloaks an object while transmitting the light field properly. Experimental results for a captured multi-viewpoint scene are presented.
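
The point-to-plane-symmetric-point mapping that these elements implement is a reflection across a plane; a minimal sketch in Python (the plane parameters in the example are illustrative):

    import numpy as np

    def plane_symmetric_point(p, plane_point, plane_normal):
        # Reflect p across the plane through plane_point with the given normal.
        n = plane_normal / np.linalg.norm(plane_normal)
        return p - 2.0 * np.dot(p - plane_point, n) * n

    # Example: reflecting (1, 2, 3) across the y-z plane flips the x coordinate.
    print(plane_symmetric_point(np.array([1.0, 2.0, 3.0]),
                                np.array([0.0, 0.0, 0.0]),
                                np.array([1.0, 0.0, 0.0])))   # -> [-1.  2.  3.]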

A stretch-sensing soft glove for interactive hand pose estimation

We present a stretch-sensing soft glove that interactively captures full hand poses with high accuracy and without requiring an external optical setup. Our device can be fabricated with simple tools available in most fabrication labs. The pose is reconstructed from a capacitive sensor array embedded in the glove. We propose a data representation that allows deep neural networks to exploit the spatial layout of the sensor itself. The network is trained only once, using an inexpensive off-the-shelf hand pose reconstruction system to gather the training data. Per-user calibration is then performed on the fly using only the glove.
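
One plausible realization of such a layout-aware representation is to arrange the capacitance readings as a small 2D grid and process them with convolutions. The sketch below uses PyTorch; the grid size, joint count, and output parameterization are assumptions, not the authors' exact architecture:

    import torch
    import torch.nn as nn

    class GlovePoseNet(nn.Module):
        # Treat the capacitive readings as a one-channel "image" so that
        # convolutions can exploit the sensor array's spatial layout.
        def __init__(self, grid_h=8, grid_w=8, n_joints=15):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.head = nn.Linear(64 * grid_h * grid_w, n_joints * 4)

        def forward(self, x):               # x: (batch, 1, grid_h, grid_w)
            z = self.features(x).flatten(1)
            return self.head(z)             # per-joint rotation parameters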

A transparent display with per-pixel color and opacity control

We propose a new display system that composites matted foreground animated graphics and video, with per-pixel controllable emitted color and transparency, over real-world dynamic objects seen through a transparent display. Multiple users can participate simultaneously without any glasses, trackers, or additional devices. The current prototype is deployed as a desktop-monitor-sized transparent display box assembled from commodity hardware components with the addition of a high-frame-rate controllable diffuser.
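
Per pixel, the perceived result behaves like alpha compositing over the live background. A minimal model of that behavior, assuming an ideal display with no scattering or cross-talk:

    import numpy as np

    def perceived_color(emitted, opacity, background):
        # Each pixel emits `emitted`, blocks a fraction `opacity` of the
        # real background behind it, and passes the remainder through.
        return emitted + (1.0 - opacity) * background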

Active textile tailoring

Active Textile Tailoring is a new process for creating smart textiles whose fibers change shape and structure in response to heat. This adaptive textile enables a new type of sizing customization and aesthetic patterning tailored to the preferences of individual customers. The system was developed in collaboration with MIT, Ministry of Supply, Hills Inc., and Iowa State University, with support from the federal non-profit Advanced Functional Fabrics of America (AFFOA).

AffectiveHMD: facial expression recognition in head mounted display using embedded photo reflective sensors

We propose a facial expression mapping technology between virtual avatars and head-mounted display (HMD) users. HMDs allow people to enjoy an immersive virtual reality (VR) experience, and a virtual avatar can serve as the user's representative in the virtual environment. However, synchronizing the virtual avatar's expressions with those of the HMD user is limited: a large portion of the user's face is occluded by the HMD, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using photo-reflective sensors. The sensors attached inside the HMD measure the reflection intensity between the sensors and the user's face. The intensity values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a classifier that estimates the user's facial expression. At SIGGRAPH 2019, users can enjoy two applications: facial expression synchronization with the avatar, and simple manipulation of a virtual environment through facial expressions.
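
As a sketch of the recognition step (the classifier type is our assumption; the abstract states only that a classifier is trained on the intensity values), reflection-intensity vectors can be mapped to the five expressions like so:

    import numpy as np
    from sklearn.svm import SVC

    EXPRESSIONS = ["Neutral", "Happy", "Angry", "Surprised", "Sad"]

    # Hypothetical recorded data: X is (n_samples, n_sensors) reflection
    # intensities, y holds the corresponding expression indices.
    X_train = np.load("intensities.npy")
    y_train = np.load("labels.npy")
    clf = SVC(kernel="rbf").fit(X_train, y_train)

    def classify(sensor_frame):
        # sensor_frame: 1D array of current reflection intensities.
        return EXPRESSIONS[int(clf.predict(sensor_frame.reshape(1, -1))[0])]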

Arque: artificial biomimicry-inspired tail for extending innate body functions

For most mammals and other vertebrates, the tail plays an important role, providing functions that expand mobility or serving as a limb for manipulation and gripping. In this work we propose Arque, an artificial biomimicry-inspired anthropomorphic tail that allows us to alter our body momentum for assistive and haptic-feedback applications. The proposed tail consists of adjacent joints with a spring-based structure that handles shearing and tangential forces and allows adjusting the length and weight of the tail. The internal structure is driven by four pneumatic artificial muscles that actuate the tail tip. We highlight potential applications for such a prosthetic tail as an extension of the human body: providing active momentum alteration in balancing situations, or altering body momentum in full-body haptic feedback scenarios.

ChicMR: immersive mixed reality system using video-see-thru HMD and 3D LiDAR scanner

In this paper, we propose a mixed-reality system that provides an immersive interaction space for both real and virtual objects. Using a custom-made video-see-thru (VST) HMD, the system streams high-resolution stereo images of what the user is currently looking at. The system also includes a rotating 2D LiDAR that captures the geometry of the user's surrounding space. Using a triangle-mesh approach, the system builds an interaction space from the point cloud data. Within this interaction space, the user experiences various interactions with both real and virtual objects in the mixed-reality environment. We demonstrate the feasibility and effectiveness of our system with a real-time application.

Demonstrating preemptive reaction: accelerating human reaction using electrical muscle stimulation without compromising agency

In this demonstration we enable preemptive force-feedback systems to speed up human reaction time without fully compromising the user's sense of agency. Typically, such haptic systems speed up human reaction time by means of electrical muscle stimulation (EMS) or mechanical actuation (exoskeletons), which, unfortunately, completely removes the user's sense of agency. We address this by actuating the user's body, using EMS, within a particular time window (160 ms after the visual stimulus), which we found to speed up reaction time by 80 ms while retaining a sense of agency. Here, we demonstrate this with two example applications: (1) taking a picture of a high-speed object in mid-flight, and (2) hitting a baseball with a toy gun.
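
The timing logic is simple; a minimal sketch (trigger_ems is a hypothetical driver call for the stimulation hardware):

    import time

    EMS_DELAY_S = 0.160   # stimulate 160 ms after the visual stimulus

    def on_visual_stimulus(trigger_ems):
        # Busy-wait for millisecond-level accuracy, then actuate the muscle;
        # stimulating inside this window preserves the sense of agency.
        t0 = time.monotonic()
        while time.monotonic() - t0 < EMS_DELAY_S:
            pass
        trigger_ems()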

Eigen zoetrope

The zoetrope, invented in 1833, is an animation display that produces the illusion of motion from a displayed sequence of pictures. The traditional zoetrope consists of a rotating disk and a strobe light. The disk carries a sequence of pictures pasted around it and is rotated rapidly, and the strobe light is flashed in synchronization with the rotation angle to display the animation. The zoetrope has been repeatedly improved, and many variants have appeared in the SIGGRAPH community: the interactive zoetrope [Smoot et al. 2010], ZoeMatrope [Miyashita et al. 2016], and the Magic Zoetrope [Yokota and Hashida 2018].
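
The synchronization rule amounts to flashing once per picture slot as the disk turns; a minimal sketch (the frame count and flash window are assumed):

    import math

    N_FRAMES = 24                       # pictures around the disk (assumed)
    SLOT = 2.0 * math.pi / N_FRAMES     # angular width of one picture

    def should_flash(disk_angle, window=0.01):
        # Fire the strobe each time the disk angle crosses a slot boundary,
        # freezing every picture at the same apparent position.
        return (disk_angle % SLOT) < window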

EyeHacker: gaze-based automatic reality manipulation

In this study, we introduce EyeHacker, an immersive virtual reality (VR) system that spatiotemporally mixes live and recorded/edited scenes based on measurements of the user's gaze. The system updates a transition risk in real time from the user's gaze information (i.e., the locus of attention) and the optical flow of the scenes. Scene transitions are allowed when the risk is below a threshold, which is modulated by the user's head movement (the faster the head movement, the higher the threshold). Using this algorithm and an experience scenario prepared in advance, visual reality can be manipulated without the user noticing (i.e., eye hacking). For example, consider a situation in which the objects around the user perpetually disappear and appear. Users often have a strange feeling that something is wrong, and sometimes even discover what happened, but only later; they cannot visually perceive the changes in real time. Further, with other variants of the risk algorithm, the system can implement a variety of experience scenarios, resulting in reality confusion.
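
One way the risk test could look (the gaze weighting and constants are our assumptions; the abstract specifies only that gaze, optical flow, and head speed enter the decision):

    import numpy as np

    def transition_risk(flow_mag, gaze_xy, sigma=80.0):
        # Weight the optical-flow magnitude map by proximity to the gaze
        # point: changes near the locus of attention are risky, peripheral
        # changes are not.
        h, w = flow_mag.shape
        ys, xs = np.mgrid[0:h, 0:w]
        weight = np.exp(-((xs - gaze_xy[0])**2 + (ys - gaze_xy[1])**2)
                        / (2.0 * sigma**2))
        return float((flow_mag * weight).sum())

    def allow_transition(risk, head_speed, base=1.0, gain=0.5):
        # Faster head movement raises the threshold, permitting bolder swaps.
        return risk < base + gain * head_speed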

GlideReality: a highly immersive VR system augmented by a novel multi-modal and multi-contact cutaneous wearable display

The user's palm plays a relevant role in the detection and manipulation of objects. The GlideReality system was designed to make the immersive virtual reality (VR) experience more interactive by providing multi-contact and multi-modal haptic stimuli on the user's palm through a novel wearable haptic display, LinkGlide. The proposed device, which consists of three five-bar inverted linkages generating three contact points, provides tactile stimuli to users in the VR environment. Force sensors located at each contact point provide feedback to the control system so that it can generate the required stimuli with a specific force. In addition, an impedance controller was developed to render the sensation of object stiffness. The system consists of an HTC Vive Pro headset for room-scale VR, trackers for the hand position, and the Unity 3D game engine for system integration and application development. GlideReality provides highly immersive interaction with virtual objects in three applications: BallFeel, PressFeel, and AnimalFeel. With this system, we can potentially achieve a highly immersive, more interactive VR experience.
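
The stiffness rendering follows the usual impedance-control pattern; a minimal sketch (the gains are illustrative):

    def impedance_force(penetration, velocity, stiffness, damping):
        # Virtual spring-damper: deeper penetration into the virtual object
        # increases the commanded contact force; damping stabilizes contact.
        f = stiffness * penetration - damping * velocity
        return max(f, 0.0)   # the linkage can only push on the palm, not pull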

Golf training system using sonification and virtual shadow

This paper proposes a golf training system that uses real-time audio-visual feedback. The system captures the user's motion with an optical motion tracking system and projects his/her posture as a virtual shadow on the ground. Unlike other golf training systems, ours lets the user keep his/her gaze on the ball. The model swing motion of an expert golfer is also overlaid so that the user can see the difference between his/her motion and the expert's. Moreover, the system provides audio feedback using sound image localization, which lets the user perceive the position and orientation of the club face, which is often out of the user's sight during the swing.
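
The virtual shadow amounts to projecting the tracked joints onto the floor along a chosen light direction; a minimal sketch (the light direction is illustrative):

    import numpy as np

    def shadow_point(joint, light_dir=np.array([0.3, -1.0, 0.0])):
        # Intersect the ray from the joint along light_dir with the ground
        # plane y = 0; the projected skeleton is drawn there in front of
        # the ball, so the user never has to look away from it.
        t = -joint[1] / light_dir[1]
        return joint + t * light_dir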

HAPTIC PLASTeR: soft, thin, light and flexible haptic display using DEA composed of slide-ring material for daily life

Recently, wearable haptic displays have been widely explored with the aim of enriching user experiences in applications such as virtual reality (VR) and telexistence. Most wearable haptic displays proposed so far are composed of rigid components such as motors, voice coil actuators, and speakers [Minamizawa et al. 2007]. In recent years, therefore, haptic displays composed of soft materials, such as dielectric elastomer actuators (DEAs), have been proposed [Koo et al. 2008; Park et al. 2015]. However, the polymers used in such DEAs suffer from hysteresis loss as a main physical limitation, which results in varying output displacement across actuation cycles. In addition, these DEAs require "pre-stretching", i.e., a strong force at the time of initial actuation. These properties demand specialized actuation mechanisms before DEAs can be widely used as haptic displays.

Liquid printed pneumatics

Liquid Printed Pneumatics is a project developed by the MIT Self-Assembly Lab and Swiss designer Christophe Guberan that focuses on the 3D printing of pneumatically activated objects. Rapid Liquid Printing (RLP), a new additive manufacturing process developed at the lab, is used to create shape-changing devices and objects.

MagniFinger: magnified perception by a fingertip probe microscope

We propose MagniFinger, a fingertip-worn microscopy device that augments the limited abilities of human visual and tactile sensory systems in micrometer-scale environments. MagniFinger makes use of the finger's dexterous motor skills to achieve precise and intuitive control while allowing the user to observe the desired position simply by placing a fingertip. To implement the fingertip-sized device and its tactile display, we have built a system comprising a ball lens, an image sensor, and a thin piezoelectric actuator. Vibration-based tactile feedback is displayed based on the luminance of a magnified image, providing the user with the feeling of touching the magnified world.
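
The luminance-to-vibration mapping can be as simple as driving the piezo amplitude with the mean brightness under the fingertip; a minimal sketch of that idea:

    import numpy as np

    def tactile_amplitude(image_patch, max_amp=1.0):
        # Map the mean luminance (8-bit) of the magnified image region under
        # the fingertip to the drive amplitude of the piezoelectric actuator.
        return max_amp * float(np.mean(image_patch)) / 255.0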

Matching prescription & visual acuity: towards AR for humans

An increasingly important requirement for usable near-eye displays is supporting users who rely on vision correction, such as that provided by glasses and contact lenses. Recent research indicates that over 20% of the world population is myopic, and this percentage is increasing [Holden et al. 2016]. Commercial prototypes have offered an additional prescription lens pair or a glasses-compatible design, but both approaches increase the size and weight of the device. Ideally, a user's prescription should be considered from the optical design stage to achieve the smallest form factor.

Melody slot machine

We developed an interactive music system called the "Melody Slot Machine," which provides the experience of manipulating a music performance. The melodies used in the system are divided into multiple segments, and each segment has multiple melody variations. By turning the dials manually, users can freely switch the variation of each segment. When the user pulls the slot lever, all segments spin and variations are selected at random. Since the performer, displayed as a hologram, moves in accordance with the selected melody variations, users enjoy the feeling of manipulating the performance.
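
The slot mechanic itself is a per-segment random draw; a minimal sketch (the segment and variation counts are assumed):

    import random

    N_SEGMENTS, N_VARIATIONS = 8, 3   # assumed counts

    def pull_lever():
        # The lever re-rolls every segment; turning a dial would instead
        # overwrite the variation of a single segment.
        return [random.randrange(N_VARIATIONS) for _ in range(N_SEGMENTS)]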

PickHits: hitting experience generation with throwing motion via a handheld mechanical device

Hitting a target produces a great feeling. We propose a system that generates this experience computationally. The system consists of external tracking cameras and a handheld device that holds and releases a thrown object. As a proof of concept, we built the system around two key elements: a low-latency release device and continuous model-based prediction. During the user's throwing motion, the ballistic trajectory of the held object is predicted in real time, and the object is released the moment the predicted trajectory coincides with the desired one. We found that we can generate a computational hitting experience within a limited-range space.
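
A sketch of the prediction-and-release test under a simple drag-free ballistic model (the tolerance and target are illustrative):

    import numpy as np

    G_Y = -9.81   # gravity (m/s^2); drag is neglected in this sketch

    def predicted_landing(p, v, ground_y=0.0):
        # Solve p_y + v_y*t + 0.5*g*t^2 = ground_y for the positive root.
        a, b, c = 0.5 * G_Y, v[1], p[1] - ground_y
        t = (-b - np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        return p + v * t + 0.5 * np.array([0.0, G_Y, 0.0]) * t * t

    def should_release(p, v, target, tol=0.05):
        # Open the gripper the moment the predicted impact point falls
        # within tolerance of the desired target.
        return np.linalg.norm(predicted_landing(p, v) - target) < tol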

PinocchioVR

In the Pinocchio fairy tale, a boy's nose extends when he lies. Inspired by this tale, we created the PinocchioVR system, which presents the feeling of the nose extending through a body ownership illusion. The illusion is created by pulling on the nose while the head-mounted display shows visuals of the nose growing. Our research exploring different combinations of haptics and visuals indicated that the minimum requirement for this body ownership illusion is the visuals plus the 'nose-pulling' haptic sensation. Thus, the PinocchioVR system consists of a head-mounted display with an integrated haptic nose-pulling mechanism. Furthermore, in this demonstration, we explore olfactory and vibrotactile sensations as multimodal effects that enable new experiences, such as interacting with faraway objects with your nose, extending the nose to smell food at a distance, and even feeling what it would be like to hang clothes on your elongated nose.

Shading atlas streaming demonstration

Streaming high-quality rendering for virtual reality applications requires minimizing perceived latency. Shading Atlas Streaming (SAS) [Mueller et al. 2018] is a novel object-space rendering framework suitable for streaming virtual reality content. SAS decouples server-side shading from client-side rendering, allowing the client to perform framerate upsampling and latency compensation autonomously for short periods of time. The shading information created by the server in object space is temporally coherent and can be efficiently compressed using standard MPEG encoding. SAS compares favorably to previous methods for remote image-based rendering in terms of image quality and network bandwidth efficiency. SAS also allows highly efficient parallel allocation in a virtualized-texture-like memory hierarchy, solving a common efficiency problem of object-space shading. With SAS, untethered virtual reality headsets can benefit from high-quality rendering without a latency penalty. Visitors will be able to try SAS by roaming the exhibit area wearing a Snapdragon 845-based headset connected via consumer WiFi.

ShapeSense: a 2D shape rendering VR device with moving surfaces that controls mass properties and air resistance

We introduce "ShapeSense," a VR device with moving surfaces for rendering various shape perceptions with a single device. Shape-Sense can simultaneously control the mass properties and air resistance in order to represent the target shape seen in VR. The results of user studies showed that our proposed device can reproduce various shape perceptions and is superior to the conventional device, which only considers mass properties for shape rendering.

Space walk: a combination of subtle redirected walking techniques integrated with gameplay and narration

Redirected walking (RDW) denotes a collection of techniques for immersive virtual environments (IVEs) in which users are unknowingly guided along paths in the real world that differ from the paths they perceive in the IVE. For this Emerging Technologies exhibit we present a playful virtual reality (VR) experience that combines several RDW techniques, such as bending gains, rotation gains, and impossible spaces, all subtly integrated with the gameplay and narration to fit the given environment. These perceptual tricks allow users to explore a virtual space station of 45 m² in a room-scale setup by natural walking alone.
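
As an example of the gain idea, a rotation gain remaps head rotation so the user physically under- or over-rotates without noticing; a minimal sketch (the gain value is illustrative):

    def redirect_rotation(real_yaw_delta_deg, gain=1.2):
        # With gain > 1 the virtual world turns faster than the head, so a
        # full virtual turn needs less physical rotation; kept small enough,
        # the mismatch stays below the user's detection threshold.
        return real_yaw_delta_deg * gain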

TeeVR: spatial template-based acquisition, modeling, and rendering of large-scale indoor spaces

Conventional image-based rendering has limited applicability to large-scale spaces. In this study, we demonstrate an efficient alternative. Our key approach is based on a spatial template (ST), which includes only architectural geometric primitives. The predictability of the ST improves the efficiency of acquisition, storage, and rendering, so our system can be applied to the modeling and rendering of larger indoor spaces.

TeleSight: enabling asymmetric collaboration in VR between HMD user and Non-HMD users

In this paper, we propose "TeleSight", proof-of-concept prototype enabling asymmetric collaboration in VR between HMD user and Non-HMD users. TeleSight provides two interaction layers for designing asymmetric interaction, a physical layer for direct tangible interaction with players in VR space and visual layer that visually expresses VR space. Each layer is constructed by an avatar robot that imitates the motion of the HMD user, and a projection system. Non-HMD users can understand what happens in the virtual world by each interaction layers. Also, Non-HMD users can interact tangibly with the players in VR space throughout avatar robot. TeleSight provides that cooperative co-located gameplay between HMD user and Non-HMD users through experience scenarios using two layers.

Transfantome: transformation into bodies of various scale and structure in multiple spaces

Transfantome is a novel robot interaction in which the user seamlessly switches between, or simultaneously uses, different "Bodies" of different sizes, structures, and positions. By combining multiple bodies, we expand our range of power, dexterity, and the places where we can act, enabling unexplored experiences and highly efficient work.

We map a virtual "Proxy Body" onto target physical "bodies" such as miniature robots or huge construction machines and control them through it. By moving and scaling the Proxy Body in a virtual environment that reproduces the target bodies and their surroundings, we can smoothly switch between bodies that differ in viewpoint, size, and structure. As a result, we can handle various bodies together intuitively. For example, by using a powerful giant robot together with a dextrous small one, it is possible to carry out work such as rescuing an injured person while removing heavy debris, efficiently and carefully.
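
The core of the proxy mapping is a similarity transform between body frames; a minimal sketch (the frames and scale factor are illustrative):

    import numpy as np

    def proxy_to_target(proxy_point, proxy_origin, target_origin, scale):
        # Map a point on the virtual Proxy Body into the frame of a target
        # body of a different size: re-center on the proxy origin, scale,
        # then re-anchor at the target body's origin.
        return target_origin + scale * (proxy_point - proxy_origin)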