SIGGRAPH '21 Emerging Technologies: Special Interest Group on Computer Graphics and Interactive Techniques Conference, Emerging Technologies


SESSION: Haptics

Demonstrating Touch&Fold: A Foldable Haptic Actuator for Rendering Touch in Mixed Reality

We propose a nail-mounted foldable haptic device that provides tactile feedback in mixed reality (MR) environments by pressing against the user's fingerpad when the user touches a virtual object. What is novel in our device is that it quickly tucks away when the user interacts with real-world objects. Its design allows it to fold back on top of the user's nail when not in use, keeping the fingerpad free to, for instance, manipulate handheld tools and other objects while in MR. To achieve this, we engineered a wireless, self-contained haptic device that measures 24×24×41 mm and weighs 9.5 g. Furthermore, our foldable end-effector features a linear resonant actuator, allowing it to render not only touch contacts (i.e., pressure) but also textures (i.e., vibrations). We demonstrate how our device renders contacts with MR surfaces, buttons, and low- and high-frequency textures.
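
A minimal sketch of the idea that one end-effector can render both contact and texture: a steady press level represents pressure, and a superimposed sinusoid from the linear resonant actuator represents texture. All names and parameter values here are hypothetical; the abstract does not specify the control scheme at this level of detail.

    import math

    def drive_signal(t, in_contact, texture_freq_hz=0.0, texture_amp=0.2):
        """Hypothetical actuator drive level in [0, 1].

        Contact is rendered as a constant press level; texture is
        rendered by superimposing a vibration on top of the press.
        """
        if not in_contact:
            return 0.0
        press = 0.7  # steady pressure against the fingerpad
        vibration = texture_amp * math.sin(2 * math.pi * texture_freq_hz * t)
        return max(0.0, min(1.0, press + vibration))

    # Example: sample 5 ms of a 250 Hz texture at a 10 kHz update rate.
    samples = [drive_signal(i / 10_000, True, texture_freq_hz=250.0)
               for i in range(50)]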

Demonstrating MagnetIO: Passive yet Interactive Soft Haptic Patches Anywhere

We demonstrate a new type of haptic actuator, which we call MagnetIO, that is composed of two parts: one battery-powered voice coil worn on the user's fingernail and any number of interactive soft patches that can be attached to any surface (everyday objects, the user's body, appliances, etc.). When the finger wearing our voice coil contacts one of the interactive patches, the device detects the patch's magnetic signature via a magnetometer and vibrates the patch, adding haptic feedback to otherwise input-only interactions. To allow these passive patches to vibrate, we make them from silicone with regions doped with polarized neodymium powder, resulting in soft, stretchable magnets. This stretchable form factor allows them to be wrapped around the user's body or everyday objects of various shapes. We demonstrate how these patches add haptic output to many situations, such as adding haptic buttons to the walls of one's home.
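
To illustrate the sensing side, the sketch below shows one plausible way a nail-worn device could decide it is touching a patch: threshold the magnitude of the magnetometer reading and start vibration playback. The threshold value and function names are assumptions for illustration only.

    import math

    PATCH_FIELD_THRESHOLD_UT = 80.0  # hypothetical threshold, microtesla

    def field_magnitude(bx, by, bz):
        """Magnitude of a 3-axis magnetometer reading (microtesla)."""
        return math.sqrt(bx * bx + by * by + bz * bz)

    def on_magnetometer_sample(bx, by, bz, play_vibration):
        """Trigger haptic playback when the finger is over a magnetic patch."""
        if field_magnitude(bx, by, bz) > PATCH_FIELD_THRESHOLD_UT:
            play_vibration()

    on_magnetometer_sample(60.0, 40.0, 50.0, lambda: print("vibrate patch"))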

Balanced Glass Design: A flavor-perception-changing system that controls the center of gravity

In this paper, we propose Balanced Glass Design, a system that changes flavor perception. The system consists of a glass-type device that shifts its center of gravity in response to the user's motion, allowing the user to drink a beverage with a virtual perception of weight through the drinking motion. We hypothesized that it is possible to intervene in the user's perception of flavor by displaying a virtual weight percept, and so conducted experiments on weight perception and demonstrations as a user study. This paper describes the system design, the results of the experiments, and comments obtained through the user study.
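
As a rough illustration of the actuation principle, the sketch below maps a target virtual weight and the current glass tilt to the position of a movable internal mass: heavier targets shift the mass farther from the grip, increasing the torque felt during the drinking motion. The linear mapping and all parameter values are assumptions, not the authors' control law.

    def mass_position_mm(tilt_deg, virtual_weight_g, travel_mm=60.0,
                         max_weight_g=200.0):
        """Hypothetical mapping from glass tilt and target virtual weight
        to the position of a movable mass along a rail (mm from grip)."""
        weight_ratio = min(virtual_weight_g / max_weight_g, 1.0)
        tilt_ratio = min(max(tilt_deg / 90.0, 0.0), 1.0)
        return travel_mm * weight_ratio * tilt_ratio

    print(mass_position_mm(tilt_deg=45.0, virtual_weight_g=150.0))  # 22.5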

SESSION: Taking Flight

Augmented reality representation of virtual user avatars moving in a virtual representation of the real world at their respective real world locations

In this work we present an augmented reality (AR) application that allows a user with an AR display to watch another user, who is flying an airplane in Microsoft Flight Simulator 2020 (MSFS), at their respective location in the real world. To do this, we take the location data of a virtual 3D airplane model in a virtual representation of the world from a user playing MSFS and stream it via a server to a mobile device. The mobile device user can then see the same 3D airplane model at exactly the real-world location that corresponds to the location of the virtual 3D airplane model in the virtual representation of the world. The mobile device user can also see the avatar's movement updated according to the 3D airplane's movement in the virtual world. We implemented the application on both a cellphone and a see-through headset.
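
The core of such a pipeline is converting the streamed geodetic pose (latitude, longitude, altitude) into an offset from the viewer that the AR device can anchor. A minimal sketch under a flat-earth approximation (adequate at city scale) is shown below; the authors' actual conversion and streaming code are not described, so every name here is illustrative.

    import math

    EARTH_RADIUS_M = 6_371_000.0

    def geodetic_to_enu(lat_deg, lon_deg, alt_m,
                        ref_lat_deg, ref_lon_deg, ref_alt_m):
        """Approximate East-North-Up offset (meters) of the airplane
        from the viewer's own GPS fix."""
        lat0 = math.radians(ref_lat_deg)
        east = math.radians(lon_deg - ref_lon_deg) * EARTH_RADIUS_M * math.cos(lat0)
        north = math.radians(lat_deg - ref_lat_deg) * EARTH_RADIUS_M
        up = alt_m - ref_alt_m
        return east, north, up

    # Airplane pose streamed from MSFS vs. the mobile viewer's position:
    print(geodetic_to_enu(47.6205, -122.3493, 500.0,
                          47.6097, -122.3331, 50.0))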

SwarmPlay: A Swarm of Nano-Quadcopters Playing Tic-tac-toe Board Game against a Human

We present a new paradigm of games, SwarmPlay, where each playing component is represented by an individual drone that has its own mobility and swarm intelligence to win against a human player. The motivation behind the research is to make games with machines tangible and interactive. Although some research on robotic players for board games already exists, e.g., for chess, the SwarmPlay technology has the potential to offer much more engagement and interaction with a human, as it proposes a multi-agent swarm instead of a single interactive robot. The proposed system consists of a robotic swarm, a workstation, computer vision (CV), and game-theory-based algorithms. A novel game algorithm was developed to provide a natural game experience to the user. A preliminary user study revealed that participants were highly engaged in the game with drones (69% gave the maximum score on a Likert scale) and found it less artificial compared to regular computer-based systems (77% gave the maximum score). The effect of the game's outcome on the user's perception of the game was analyzed and discussed. The user study revealed that SwarmPlay has the potential to be applied to a wider range of games, significantly improving human-drone interactivity.
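
For tic-tac-toe, the standard game-theoretic move selection is minimax over the game tree; the sketch below shows this for a 9-cell board, with the chosen cell index then dispatched to a free drone as its goal position. This is a textbook formulation offered for illustration; the authors' own algorithm may differ.

    def winner(board):
        """Return 'X', 'O', or None for a 9-cell board list."""
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
        for a, b, c in lines:
            if board[a] and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, move) for `player`; the drones play 'O'."""
        w = winner(board)
        if w:
            return (1 if w == 'O' else -1), None
        moves = [i for i, cell in enumerate(board) if not cell]
        if not moves:
            return 0, None  # draw
        results = []
        for m in moves:
            board[m] = player
            score, _ = minimax(board, 'X' if player == 'O' else 'O')
            board[m] = None
            results.append((score, m))
        return max(results) if player == 'O' else min(results)

    board = ['X', 'O', 'X', None, 'O', None, None, 'X', None]
    _, move = minimax(board, 'O')
    print(move)  # cell index a drone should fly to next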

DronePaint: Swarm Light Painting with DNN-based Gesture Recognition

We propose a novel human-swarm interaction system that allows the user to directly control a swarm of drones in a complex environment by drawing trajectories with a hand-gesture interface built on DNN-based gesture recognition.

The developed CV-based system allows the user to control the swarm behavior in real time through gestures and motions, without additional devices, providing convenient tools to change the swarm's shape and formation. Two types of interaction were proposed and implemented to adjust the swarm hierarchy: trajectory drawing and free-form trajectory-generation control.

The experimental results revealed a high accuracy of the gesture recognition system (99.75%), allowing the user to achieve relatively high precision in trajectory drawing (a mean error of 5.6 cm, compared to 3.1 cm for mouse drawing) over the three evaluated trajectory patterns. The proposed system can potentially be applied to complex-environment exploration, spray painting with drones, and interactive drone shows, allowing users to create their own art objects with drone swarms.
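
One practical detail in trajectory drawing is that hand speed varies, so the drawn path must be resampled into evenly spaced waypoints before being sent to the swarm. A minimal sketch of such resampling is below; the spacing and interface are assumptions, not the authors' implementation.

    import math

    def resample(points, spacing_m=0.25):
        """Resample a drawn 2D trajectory into waypoints spaced
        `spacing_m` apart, independent of drawing speed."""
        waypoints = [points[0]]
        carried = 0.0
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            seg = math.hypot(x1 - x0, y1 - y0)
            while carried + seg >= spacing_m:
                t = (spacing_m - carried) / seg
                x0, y0 = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
                seg -= spacing_m - carried
                carried = 0.0
                waypoints.append((x0, y0))
            carried += seg
        return waypoints

    print(resample([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)], spacing_m=0.5))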

SESSION: Speed and Precision

MetamorHockey: A Projection-based Virtual Air Hockey Platform Featuring Transformable Mallet Shapes

We propose a novel projection-based virtual air hockey system in which not only the puck but also the mallet is displayed as an image. Being a projected image, the mallet can freely “metamorphose” into different shapes, which expands the game design beyond the original air hockey. We discuss possible scenarios with a resizable mallet, with mallet shapes defined by drawing, and with a mallet whose collision conditions can be modified. A key challenge in implementation is to minimize latency, because the direct-manipulation nature of mallet positioning imposes a stricter latency demand than puck positioning. By using a high-speed camera and a high-speed projector running at 420 fps, tracking becomes quick enough that the projected mallet head feels like an integral part of the mallet held in the hand.
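
Beyond raw frame rate, the remaining pipeline delay can be hidden by extrapolating the tracked mallet position forward. A constant-velocity prediction, sketched below, is the simplest such compensation; the abstract does not state which predictor, if any, is used, so treat this as an assumption.

    def predict_position(prev_xy, curr_xy, dt_s, latency_s):
        """Extrapolate the tracked mallet position forward by the known
        camera-to-projector latency (constant-velocity model)."""
        vx = (curr_xy[0] - prev_xy[0]) / dt_s
        vy = (curr_xy[1] - prev_xy[1]) / dt_s
        return (curr_xy[0] + vx * latency_s, curr_xy[1] + vy * latency_s)

    # At 420 fps, dt is ~2.4 ms; even two frames of latency are worth predicting.
    dt = 1.0 / 420.0
    print(predict_position((0.100, 0.200), (0.102, 0.200), dt, latency_s=2 * dt))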

Gaming at Warp Speed: Improving Aiming with Late Warp

Latency can make all the difference in competitive online games. Late warp is a class of techniques used in VR that can reduce latency in first-person shooter (FPS) games as well. Prior work has demonstrated that these techniques can recover most of the player performance lost to computer or network latency. Inspired by work demonstrating the usefulness of late warp as a potential solution to FPS latency, we provide an interactive demonstration, playable in a web browser, that shows how much latency limits aiming performance and how late warp can help.
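
The essence of late warp is to sample the freshest aim input just before a finished frame is presented and shift the image by the rotation accumulated during rendering. A minimal sketch of that correction for a mouse-aimed FPS is below; the pixels-per-degree factor and names are illustrative, not the demonstrated system's code.

    def late_warp_shift(render_yaw_deg, latest_yaw_deg,
                        render_pitch_deg, latest_pitch_deg,
                        px_per_degree=20.0):
        """Pixel shift to apply to a finished frame so it matches the
        newest aim direction sampled just before scan-out."""
        dx = (latest_yaw_deg - render_yaw_deg) * px_per_degree
        dy = (latest_pitch_deg - render_pitch_deg) * px_per_degree
        return dx, dy

    # The mouse moved 0.4 degrees right while the frame was rendering:
    print(late_warp_shift(90.0, 90.4, 0.0, 0.0))  # (8.0, 0.0)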

Behind The Game: Implicit Spatio-Temporal Intervention in Inter-personal Remote Physical Interactions on Playing Air Hockey

When playing inter-personal sports games remotely, the time lag between user actions and feedback decreases the user’s performance and sense of agency. While computational assistance can improve performance, naive intervention that ignores the context also compromises the user’s sense of agency. We propose a context-aware assistance method that restores both user performance and the sense of agency, and we demonstrate the method using air hockey (a two-dimensional physical game) as a testbed. Our system includes a 2D plotter-like machine that controls the striker on half of the table surface, and a web application interface that enables manipulation of the striker from a remote location. Using our system, a remote player can play against a physical opponent from anywhere through a web browser. We designed the striker-control assistance to be context-aware by computationally predicting the puck’s trajectory from real-time captured video. With this assistance, the remote player exhibits improved performance without compromising their sense of agency, and both players can experience the excitement of the game.
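
The prediction step can be as simple as integrating the puck's tracked velocity forward while reflecting it off the side walls; the striker is then driven toward the predicted interception point ahead of the delayed remote command. The sketch below shows that idea only; the table dimensions and integration scheme are assumptions.

    def predict_puck(x, y, vx, vy, t, width=1.0, steps=1000):
        """Predict the puck position after t seconds on a frictionless
        table, reflecting velocity off the side walls (x in [0, width])."""
        dt = t / steps
        for _ in range(steps):
            x += vx * dt
            y += vy * dt
            if x < 0.0 or x > width:
                x = min(max(x, 0.0), width)
                vx = -vx
        return x, y

    # Where will the puck be in 0.5 s? Move the striker there in advance.
    print(predict_puck(0.5, 0.2, 1.2, 1.5, 0.5))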

SESSION: COVID Inspired Innovations

Health Greeter Kiosk: Tech-Enabled Signage to Encourage Face Mask Use and Social Distancing

COVID-19 has been the cause of a global health crisis over the last year. High transmission rates of the virus threaten to cause a wave of infections that has the potential to overwhelm hospitals, leaving infected individuals without treatment. The World Health Organization (WHO) endorses two primary preventative measures for reducing transmission rates: the usage of face masks and adherence to social distancing [World Health Organization 2021]. In order to increase population adherence to these measures, we designed the Health Greeter Kiosk, a form of digital signage. Traditional physical signage has been used throughout the pandemic to enforce COVID-19 mandates, but it lacks population engagement and can easily go unnoticed. We designed this kiosk with the intent to reinforce these COVID-19 prevention mandates while also considering the necessity of population engagement. Our kiosk encourages engagement by providing visual feedback based on analysis from our kiosk’s computer vision software. This software integrates real-time face mask and social distance detection on a low-budget computer, without the need for a GPU. Our kiosk also collects statistics relevant to the WHO mandates, which can be used to develop well-informed reopening strategies.
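
Once people are detected and projected to ground-plane coordinates (e.g., via a fixed camera homography), the social-distancing check itself is a cheap pairwise test that runs easily without a GPU. A sketch of that final step, with an assumed 2 m threshold, follows; the kiosk's actual detection models are not part of this illustration.

    import itertools
    import math

    def distancing_violations(ground_positions_m, min_distance_m=2.0):
        """Return index pairs of people standing closer than the
        recommended distance, given ground-plane positions in meters."""
        violations = []
        pairs = itertools.combinations(enumerate(ground_positions_m), 2)
        for (i, p), (j, q) in pairs:
            if math.dist(p, q) < min_distance_m:
                violations.append((i, j))
        return violations

    print(distancing_violations([(0.0, 0.0), (1.2, 0.5), (5.0, 5.0)]))  # [(0, 1)]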

Experiment Assisting System with Local Augmented Body (EASY-LAB) for Subject Experiments under the COVID-19 Pandemic

Since it is challenging to perceive space and objects through a video conferencing system, which communicates using only video and audio, running subject experiments has been difficult during the COVID-19 pandemic. We propose EASY-LAB, a system that allows an experimenter to actively observe and physically interact with a subject even from a remote location. The proposed system displays the image from a camera mounted on the end of a small 6-DOF robot arm on an HMD worn by the experimenter, allowing observation from an easy-to-see perspective. The experimenter can also instruct the subject using another robot arm fitted with a laser pointer. The robot’s joint angles are calculated via inverse kinematics from the experimenter’s head movements and then reflected in the actual robot. The Photon Unity Networking component was used for synchronization with the remote location. These devices are affordable, effortless to set up, and can be delivered to the subject’s home. Finally, the proposed system was evaluated with four subjects. As a preliminary result, the mean pointing error was 1.1 cm, and the operation time was reduced by 60% compared with a conventional video conferencing system. This result indicates EASY-LAB’s capability, at least in tasks that require pointing and observation from various angles. A statistical study with more subjects will be conducted in follow-up work.
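
To make the head-to-robot mapping concrete, the sketch below solves closed-form inverse kinematics for a 2-link planar arm, placing the camera at a target derived from the head pose. The real system uses a 6-DOF arm and a full IK solver; the link lengths and names here are assumptions that only illustrate the principle.

    import math

    def two_link_ik(x, y, l1=0.3, l2=0.3):
        """Joint angles (radians) placing a 2-link planar arm's end
        effector (the camera) at (x, y); elbow-down solution."""
        d2 = x * x + y * y
        cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        if abs(cos_elbow) > 1.0:
            raise ValueError("target out of reach")
        elbow = math.acos(cos_elbow)
        shoulder = math.atan2(y, x) - math.atan2(
            l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
        return shoulder, elbow

    # Head moved so the desired camera viewpoint is 0.4 m out, 0.2 m up:
    print(two_link_ik(0.4, 0.2))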

A sustainable society with a touchless solution using UbiMouse under the COVID-19 pandemic

This paper introduces new artificial intelligence software that is capable of controlling devices using fingers in the air. With UbiMouse, touch panels, restaurant ordering systems, ATM systems, and other devices commonly used by various people in public can become contact-less. Such touch-less devices are desirable, especially under the harsh conditions of COVID-19, to prevent infections mediated by shared touch devices. In addition, conventional touch devices cannot be used while wearing gloves, because gloves defeat their touch sensing; thus, in fields where gloves are worn, there is a demand for non-contact device operation. To satisfy these demands, we developed UbiMouse, AI software that allows the user to operate a device by moving their fingers toward it. UbiMouse uses a convolutional model to identify finger features from camera footage and a regression model to estimate the position of the detected finger. We demonstrate contact-free operation of UbiMouse: as the user moves a finger in the air, the mouse cursor is guided to the specified location with high accuracy.
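
Downstream of the two models, the estimated fingertip position still has to be turned into a stable cursor. A common approach, sketched below, maps the normalized detection to screen pixels with exponential smoothing to suppress jitter; the smoothing factor and class are illustrative assumptions, not UbiMouse internals.

    class CursorMapper:
        """Map a normalized fingertip position to screen pixels,
        smoothing exponentially to suppress detection jitter."""

        def __init__(self, screen_w, screen_h, alpha=0.35):
            self.screen_w, self.screen_h = screen_w, screen_h
            self.alpha = alpha          # smoothing factor in (0, 1]
            self.sx = self.sy = None    # smoothed cursor state

        def update(self, u, v):
            """u, v in [0, 1] from the finger-position regression model."""
            x, y = u * self.screen_w, v * self.screen_h
            if self.sx is None:
                self.sx, self.sy = x, y
            else:
                self.sx += self.alpha * (x - self.sx)
                self.sy += self.alpha * (y - self.sy)
            return round(self.sx), round(self.sy)

    mapper = CursorMapper(1920, 1080)
    print(mapper.update(0.50, 0.50))  # (960, 540)
    print(mapper.update(0.52, 0.50))  # eases toward the new reading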

SESSION: Display and Imaging

Reverse Pass-Through VR

We introduce reverse pass-through VR, wherein a three-dimensional view of the wearer’s eyes is presented to multiple outside viewers in a perspective-correct manner, with a prototype headset containing a world-facing light field display. This approach, in conjunction with existing video (forward) pass-through technology, enables more seamless interactions between people with and without headsets in social or professional contexts. Reverse pass-through VR ties together research in social telepresence and copresence, autostereoscopic displays, and facial capture to enable natural eye contact and other important non-verbal cues in a wider range of interaction scenarios.

Polarimetric Spatio-Temporal Light Transport Probing