VRST '21: Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology

SESSION: Papers 1: Tracking, Rendering, and Social Interaction

Avatar Tracking Control with Generations of Physically Natural Responses on Contact to Reduce Performers’ Loads

The real-time performance of motion-captured avatars in virtual space is becoming increasingly popular, especially in applications such as social virtual reality (VR), virtual performers (e.g., virtual YouTubers), and VR games. Such applications often involve contact between multiple avatars or between avatars and objects as part of communication or gameplay. However, most current applications do not resolve the effects of contact on avatars, so penetration or other unnatural behavior occurs. In reality, no contact with the performer's body takes place; nevertheless, the performer must act as if contact occurred. While physics simulation can resolve the contact issue, naive use of physics simulation causes tracking delay. We propose a novel avatar tracking controller with feedforward control. Our method enables quick, accurate tracking and flexible motion in response to contact. Furthermore, the technique frees avatar performers from the load of performing as if contact occurred. We implemented our method and experimentally evaluated the naturalness of the resulting motions and our approach's effectiveness in reducing performers' loads.
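
The abstract does not give the controller's equations; purely as a rough illustration of the feedforward-plus-feedback idea it describes, the sketch below tracks a mocap reference with a PD term plus an inertia-scaled feedforward of the reference acceleration. The gains `kp` and `kd` and the inertia matrix `M` are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def tracking_torque(q, dq, q_ref, dq_ref, ddq_ref, M, kp=400.0, kd=40.0):
    """PD feedback plus a feedforward term that anticipates the reference
    acceleration, so the avatar tracks tightly without pure-feedback delay
    while the physics engine remains free to resolve contact forces."""
    feedback = kp * (q_ref - q) + kd * (dq_ref - dq)   # corrects residual error
    feedforward = M @ ddq_ref                          # anticipates the motion
    return feedforward + feedback
```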

Fusing Semantic Segmentation and Object Detection for Visual SLAM in Dynamic Scenes

The assumption of static scenes limits the performance of traditional visual SLAM. Many existing solutions adopt deep learning methods or geometric constraints to handle dynamic scenes, but these schemes are either inefficient or lack robustness to some extent. In this paper, we propose a solution that combines object detection and semantic segmentation to obtain prior contours of potentially dynamic objects. With this prior information, geometric-constraint techniques help remove dynamic feature points. Finally, evaluation on public datasets demonstrates that our method improves the accuracy of pose estimation and the robustness of visual SLAM, with no loss of efficiency, in highly dynamic scenes.
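
As a hedged sketch of the combination outlined above (segmentation priors plus geometric checks), the snippet below drops feature matches that either fall inside a prior dynamic-object contour or violate the epipolar constraint of the static scene. The threshold and the exact fusion rule are assumptions, not the paper's.

```python
import numpy as np

def filter_dynamic_matches(matches, dynamic_mask, F, epi_thresh=1.0):
    """Keep only feature matches consistent with a static scene.
    matches: list of ((x1, y1), (x2, y2)) pixel pairs between two frames.
    dynamic_mask: boolean image, True inside segmented dynamic-object contours.
    F: 3x3 fundamental matrix estimated from tentatively static matches."""
    kept = []
    for (p1, p2) in matches:
        if dynamic_mask[int(p2[1]), int(p2[0])]:      # inside a prior contour
            continue
        line = F @ np.array([p1[0], p1[1], 1.0])      # epipolar line in frame 2
        dist = abs(line @ np.array([p2[0], p2[1], 1.0])) / np.hypot(line[0], line[1])
        if dist > epi_thresh:                         # moving point, off its line
            continue
        kept.append((p1, p2))
    return kept
```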

Impostor-based Rendering Acceleration for Virtual, Augmented, and Mixed Reality

This paper presents an image-based rendering approach to accelerate the rendering of virtual scenes containing a large number of complex, high-poly-count objects. Our approach replaces complex objects with impostors, light-weight image-based representations that cut geometry- and shading-related processing costs. In contrast to their classical implementation, our impostors are specifically designed for Virtual, Augmented, and Mixed Reality scenarios (XR for short), as they support stereoscopic rendering to provide correct depth perception. Motion parallax from typical head movements is compensated by a ray-marched parallax-correction step. Our approach recreates impostors dynamically at run time as needed for larger changes in view position. This recreation is decoupled from the actual rendering process, so its processing cost is distributed over multiple frames. This avoids unwanted frame drops or latency spikes, even for impostors of objects with complex geometry and many polygons. In addition to the significant performance benefit, our impostors compare favorably against the original mesh representation, as geometric and textural temporal aliasing artifacts are heavily suppressed.
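
The abstract leaves the refresh criterion unspecified; a plausible minimal trigger, sketched below under that assumption, regenerates an impostor once the viewpoint has swung far enough around the object that parallax correction can no longer hide the error. The 5-degree limit is illustrative only.

```python
import numpy as np

def impostor_needs_refresh(capture_pos, view_pos, obj_center, limit_deg=5.0):
    """True once the angle between the capture viewpoint and the current
    viewpoint, as seen from the object, exceeds the tolerated limit."""
    v0 = np.asarray(capture_pos) - np.asarray(obj_center)
    v1 = np.asarray(view_pos) - np.asarray(obj_center)
    c = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0))) > limit_deg
```

The actual recreation can then be queued on a background job, spreading its cost over several frames as the abstract describes.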

Inside-Out Instrument Tracking for Surgical Navigation in Augmented Reality

Surgical navigation requires tracking of instruments with respect to the patient. Conventionally, tracking is done with stationary cameras, and the navigation information is displayed on a stationary display. In contrast, an augmented reality (AR) headset can superimpose surgical navigation information directly in the surgeon's view. However, AR needs to track the headset, the instruments, and the patient, often by relying on stationary infrastructure. We show that 6DOF tracking can be achieved without any stationary, external system by purely utilizing the on-board stereo cameras of a HoloLens 2 to track the same retro-reflective marker spheres used by current optical navigation systems. Our implementation is based on two tracking pipelines that complement each other: one using conventional stereo vision techniques, the other relying on a single-constraint-at-a-time (SCAAT) extended Kalman filter. In a technical evaluation of our tracking approach, we show that a clinically relevant accuracy of 1.70 mm/1.11° and real-time performance are achievable. We further describe an example application of our system for untethered end-to-end surgical navigation.
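
The conventional stereo branch of such a pipeline typically triangulates marker centroids from the two calibrated cameras; as a stand-in, the textbook linear (DLT) triangulation is sketched below, with the SCAAT Kalman-filter branch omitted. `P1` and `P2` are assumed 3x4 camera projection matrices.

```python
import numpy as np

def triangulate_marker(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one retro-reflective marker centroid
    observed at pixel x1 in camera 1 and x2 in camera 2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # solution: smallest right singular vector
    X = Vt[-1]
    return X[:3] / X[3]              # homogeneous -> Euclidean 3D point
```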

Non-isomorphic Interaction Techniques for Controlling Avatar Facial Expressions in VR

The control of an avatar's facial expressions in virtual reality is mainly based on the automated recognition and transposition of the user's facial expressions. These isomorphic techniques are limited to what users can convey with their own faces and suffer from recognition issues. To overcome these limitations, non-isomorphic techniques rely on input devices to control the avatar's facial expressions. Such techniques must be designed to let users quickly and easily select and control an expression without disrupting a main task such as talking. We present the design of a set of new non-isomorphic interaction techniques for controlling an avatar's facial expressions in VR using a standard VR controller. These techniques were evaluated in two controlled experiments to inform the design of an interaction technique that combines the strengths of each approach. This combined technique was then evaluated in a final ecological study, which showed it can be used in contexts such as social applications.

Ubiq: A System to Build Flexible Social Virtual Reality Experiences

While they have long been a subject of academic study, social virtual reality (SVR) systems are now attracting increasingly large audiences on current consumer virtual reality systems. The design space of SVR systems is very large, and relatively little is known about how these systems should be constructed in order to be usable and efficient. In this paper we present Ubiq, a toolkit that focuses on facilitating the construction of SVR systems. We argue for the design strategy of Ubiq and its scope. Ubiq is built on the Unity platform. It provides core functionality of many SVR systems such as connection management, voice, avatars, etc. However, its design remains easy to extend. We demonstrate examples built on Ubiq and how it has been successfully used in classroom teaching. Ubiq is open source (Apache License) and thus enables several use cases that commercial systems cannot.

SESSION: Papers 2: Visualization

Investigating the Effect of Sensor Data Visualization Variances in Virtual Reality

This paper investigates the effect of real-time sensor data variances on humans performing straightforward assembly tasks in a Virtual Reality-based (VR-based) training system. The system transfers color and depth images and constructs colored point-cloud data to represent objects in real time. We vary parameters that affect sensor data acquisition and visualization for remotely operated robots in the real world, and observe the associated task performance. Experimental results from 12 participants, who performed a total of 95 VR-guided puzzle assembly tasks, demonstrate that the combination of low resolution and uncolored points has the most significant effect on participants' performance. Participants mentioned that they needed to rely on tactile feedback when perceptual feedback was minimal. The least significant parameter was the resolution of the data representation, which, when varied within the experimental bounds, resulted in only a 5% average change in completion time. Participants also indicated in surveys that they felt their performance improved and frustration was reduced when provided with color information about the scene.

Spatial Augmented Reality Visibility and Line-of-Sight Cues for Building Design

Despite the technological advances in building design, visualizing 3D building layouts can be especially difficult for novice and expert users alike, who must take into account design constraints including line-of-sight and visibility. Using CADwalk, a commercial building design tool that utilizes floor-facing projectors to show 1:1 scale building plans, this work presents and evaluates two floor-based visual cues for assessing line-of-sight and visibility. Additionally, we examine the impact of virtual cameras looking inside-out (from the user's location to objects of interest) and outside-in (from an object of interest's location back towards the user). Results show that the floor-based cues led participants to rate visibility more correctly, despite taking longer to complete the task. This is an effective tradeoff, given that accuracy is paramount in the final outcome, the building design.
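
A visibility query on a 1:1 floor plan can be reduced to a segment test on an occupancy grid; the sketch below is a generic line-of-sight check under that assumption, not CADwalk's implementation.

```python
import numpy as np

def has_line_of_sight(occupancy, viewer, target, samples=256):
    """March along the segment from viewer to target on a 2D grid of the
    floor plan (True = wall); any occupied cell blocks visibility."""
    viewer = np.asarray(viewer, float)
    target = np.asarray(target, float)
    for t in np.linspace(0.0, 1.0, samples):
        x, y = np.rint(viewer + t * (target - viewer)).astype(int)
        if occupancy[y, x]:
            return False
    return True
```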

ImNDT: Immersive Workspace for the Analysis of Multidimensional Material Data From Non-Destructive Testing

Large multidimensional volumetric data generated by non-destructive testing (NDT) techniques, e.g., X-ray computed tomography (XCT), can hardly be evaluated using standard 2D visualization techniques on desktop monitors. The analysis of fiber-reinforced polymers (FRPs) is currently a time-consuming and cognitively demanding task, as FRPs have a complex spatial structure consisting of several hundred thousand fibers, each with more than twenty extracted features. This paper presents ImNDT, a novel visualization system that offers material experts an immersive exploration of multidimensional secondary data of FRPs. Our system is based on a virtual reality (VR) head-mounted device (HMD) and enables fluid, natural exploration through embodied navigation, avoiding menus and manual mode switching. We developed immersive visualization and interaction methods tailored to the characterization of FRPs, such as a Model in Miniature, a similarity network, and a histo-book. An evaluation of our techniques with domain experts showed advantages in discovering structural patterns and similarities. Novices in particular can benefit strongly from our intuitive representation and spatial rendering.

Effects of Image Realism on the Stress Response in Virtual Reality

opticARe - Augmented Reality Mobile Patient Monitoring in Intensive Care Units

German Intensive Care Units (ICUs) are in crisis, struggling with a growing shortage of skilled workers that ultimately puts patients' safety at risk. To counteract this, researchers are increasingly seeking digital solutions that support healthcare professionals by making recurring critical care tasks more efficient and thus improving working conditions. In this regard, this paper evaluates the application of Augmented Reality (AR) to patient monitoring in critical care nursing. Based on an observational study, semi-structured interviews, and a quantitative analysis, mobile patient monitoring scenarios, present particularly during patient transport, were identified as an innovative context of use for AR in the field. Additionally, user requirements such as high wearability, hands-free operability, and clear data representation were derived from the study results. To validate these requirements and identify further ones, three prototypes differing in their data illustration format were subsequently developed and evaluated both quantitatively and qualitatively through an online survey. It became evident that future implementations of such a patient monitoring system ought to integrate a context-dependent data presentation in particular, as this combines high navigability with high availability of the required data. Identifying patient monitoring during patient transport as a potential context of use, and establishing a context-dependent design approach as favorable, constitute the two key contributions of this work and provide a foundation for future implementations of AR systems in the nursing domain and related contexts.

The Influence of in-VR Questionnaire Design on the User Experience

Researchers typically study the user experience in Virtual Reality (VR) by collecting sensory data or using questionnaires. While traditional formats present questionnaires through web-based survey tools (out-VR), recent studies investigate the effects of presenting questionnaires directly in the virtual environment (in-VR). An in-VR questionnaire can be defined as a user-interface object that allows interaction with questionnaires in VR without breaking immersion. Integrating questionnaires directly into the virtual environment, however, also raises design challenges.

While most previous research presents in-VR questionnaires as 2D panels in the virtual environment, we investigate how such traditional formats differ from presenting the questionnaire as an interactive object that is part of the environment. Accordingly, we evaluate and compare two different in-VR questionnaire designs and a traditional web-based form (out-VR) with respect to user experience, the effect on presence, questionnaire completion time, and user preferences. To this end, we developed an immersive questionnaire toolkit that provides a general solution for implementing in-VR questionnaires and exchanging data with popular survey services; the toolkit enables us to run our study both on-site and remotely. In a first small study, 16 users participated, either on-site or remotely, by completing the System Usability Scale, NASA TLX, and the iGroup Presence Questionnaire after a playful activity. The first results indicate no significant difference in usability or presence between the design layouts. We also found no significant difference in task load, except between the 2D and web-based layouts for mental demand and frustration, and in questionnaire completion time. The results further indicate that users generally prefer in-VR questionnaire designs over traditional ones. The study can be expanded with more participants to gain more conclusive results, and additional questionnaire design alternatives could help identify a more usable and accurate questionnaire design in VR.

SESSION: Papers 3: Applications

Towards Context-aware Automatic Haptic Effect Generation for Home Theatre Environments

The application of haptic technology in entertainment systems, such as Virtual Reality and 4D cinema, enables novel experiences for users and drives the demand for efficient haptic authoring systems. Here, we propose an automatic multimodal vibrotactile content creation pipeline that substantially improves the overall hapto-audiovisual (HAV) experience based on contextual audio and visual content from movies. Our algorithm is implemented on a low-cost system with nine actuators attached to a viewing chair and extracts significant features from video files to generate corresponding haptic stimuli. We implemented this pipeline and used the resulting system in a user study (n = 16), quantifying user experience according to the sense of immersion, preference, harmony, and discomfort. The results indicate that the haptic patterns generated by our algorithm complement the movie content and provide an immersive and enjoyable HAV user experience. This further suggests that the pipeline can facilitate the efficient creation of 4D effects and could therefore be applied to improve the viewing experience in home theatre environments.
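
The pipeline's contextual feature extraction is richer than can be shown here; as a minimal, hedged stand-in, the snippet below maps the soundtrack's short-time energy envelope to per-frame vibration amplitudes, which is the simplest audio-driven haptic mapping. Frame size and gain are assumptions.

```python
import numpy as np

def audio_to_vibration(samples, sample_rate, frame_ms=20, gain=1.0):
    """One normalised vibration amplitude per audio frame, derived from
    the frame's RMS energy."""
    hop = int(sample_rate * frame_ms / 1000)
    n = (len(samples) // hop) * hop
    frames = np.asarray(samples[:n], float).reshape(-1, hop)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    rms /= rms.max() + 1e-9                      # normalise to [0, 1]
    return np.clip(gain * rms, 0.0, 1.0)         # actuator drive per frame
```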

The Effect of Increased Body Motion in Virtual Reality on a Placement-Retrieval Task

Previous work has shown that increased effort and use of one's body can improve memory. When positioning windows in virtual reality, does using a larger volume, and one's legs to move around, improve the ability to later find those windows? The results of our experiment indicate there can be a modest benefit for spatial memory and retrieval time, but at the cost of more time initially spent positioning the windows.

VRGaitAnalytics: Visualizing Dual Task Cost for VR Gait Assessment

Among its many promising applications, Virtual Reality (VR) can simulate diverse real-life scenarios and therefore help experimenters assess individuals’ gait performance (i.e., walking) under controlled functional contexts. VR-based gait assessment may provide low-risk, reproducible and controlled virtual environments, enabling experimenters to investigate underlying causes for imbalance by manipulating experimental conditions such as multi-sensory loads, mental processing loads (cognitive load), and/or motor tasks. We present a low-cost novel VR gait assessment system that simulates virtual obstacles, visual, auditory, and cognitive loads while using motion tracking to assess participants’ walking performance. The system utilizes in-situ spatial visualization for trial playback and instantaneous outcome measures which enable experimenters and participants to observe and interpret their performance. The trial playback can visualize any moment in the trial with embodied graphic segments including the head, waist, and feet. It can also replay two trials at the same time frame for trial-to-trial comparison, which helps visualize the impact of different experimental conditions. The outcome measures, i.e., the metrics related to walking performance, are calculated in real-time and displayed as data graphs in VR. The system can help experimenters get specific gait information on balance performance beyond a typical clinical gait test, making it clinically relevant and potentially applicable to gait rehabilitation. We conducted a feasibility study with physical therapy students, research graduate students, and licensed physical therapists. They evaluated the system and provided feedback on the outcome measures, the spatial visualizations, and the potential use of the system in the clinic. The study results indicate that the system was feasible for gait assessment, and the immediate spatial visualization features were seen as clinically relevant and useful. Limitations and considerations for future work are discussed.
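
The abstract does not spell out its outcome measures, but the dual-task cost named in the title has a standard definition in the gait literature: the percentage decline of a gait measure when a concurrent cognitive task is added. Whether the system uses exactly this normalisation is not stated; a worked example of the standard form:

```python
def dual_task_cost(single_task_value, dual_task_value):
    """Classical dual-task cost (%): decline of a gait measure, e.g. gait
    speed, when a cognitive load is added to plain walking."""
    return 100.0 * (single_task_value - dual_task_value) / single_task_value

# Walking speed drops from 1.20 m/s (walking only) to 0.95 m/s (walking
# while counting backwards): a dual-task cost of about 20.8%.
print(dual_task_cost(1.20, 0.95))
```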

Investigating the Effects of Virtual Patients’ Nonsensical Responses on Users’ Facial Expressions in Mental Health Training Scenarios

This report investigates how clinician-participants react to virtual patients' sensical vs. nonsensical responses in a training simulation that aims to help clinicians acquire empathetic skills toward high-risk patients with symptoms of the Suicide Crisis Syndrome (SCS). Two suicidal virtual patients were developed, and clinician-participants' interactions with them were recorded. Their facial emotions were analysed at three key moments: after a baseline sensical response, after a nonsensical response, and after the following sensical response. We compared their basic facial emotions aggregated into Negative and Positive facial affective behaviors (FABs). We describe our study involving ten clinician-participants and the results of the facial expression analysis with Noldus FaceReader. Our results suggest that nonsensical responses from virtual humans have an overall impact on both Positive and Negative FABs, and may increase the percentage of time participants display Negative FABs when interacting with virtual humans. We discuss several aspects of the impact and importance of considering nonsensical responses in the context of virtual-human-based interactions.

Catching Jellies in Immersive Virtual Reality: A Comparative Teleoperation Study of ROVs in Underwater Capture Tasks

Remotely Operated Vehicles (ROVs) are essential to human-operated underwater expeditions in the deep sea. However, piloting an ROV to safely interact with live ecosystems is an expensive and cognitively demanding task, requiring extensive maneuvering and situational awareness. Immersive Virtual Reality (VR) Head-Mounted Displays (HMDs) could address some of these challenges. This paper investigates how VR HMDs influence operator performance through a novel telepresence system for piloting ROVs in real-time. We present an empirical user study [N=12] that examines common midwater creature capture tasks, comparing Stereoscopic-VR, Monoscopic-VR, and Desktop teleoperation conditions. Our findings indicate that Stereoscopic-VR can outperform Monoscopic-VR and Desktop teleoperation in ROV capture tasks, effectively doubling the efficacy of operators. We also found significant differences in presence, task load, usability, intrinsic motivation, and cybersickness. Our research points to new opportunities for combining VR with ROVs.

BreachMob: Detecting Vulnerabilities in Physical Environments Using Virtual Reality

BreachMob is a virtual reality (VR) tool that applies open design principles from information security to physical buildings and structures. BreachMob uses a detailed 3D digital model of a property owner's building. The model is then published as a virtual environment (VE), complete with all applicable security measures, and released to the public so that players can test the building's security and find potential vulnerabilities by completing specified objectives. Our paper contributes a new method of applying VR to crowdsource the detection of physical-environment vulnerabilities. We detail the technical realization of two BreachMob prototypes (a home and an airport), reflecting on static and dynamic vulnerabilities. Our design critique suggests that BreachMob promotes user immersion by allowing participants the freedom to behave in ways that align with the experience of breaching physical security protocols.

EntangleVR: A Visual Programming Interface for Virtual Reality Interactive Scene Generation

Entanglement is a unique phenomenon in quantum physics that describes a correlated relationship in the measurement of a group of spatially separated particles. In science fiction, game design, art, and philosophy, it has inspired numerous innovative works. We present EntangleVR, a novel method for creating entanglement-inspired virtual scenes, with the goal of simplifying the representation of this phenomenon in the design of interactive VR games and experiences. By providing a reactive visual programming interface, users can integrate entanglement into their designs without prior knowledge of quantum computing or quantum physics. Our system enables the fast creation of complex scenes composed of virtual objects with manipulable correlated behaviors.
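
As a toy analogy of the correlated behavior the abstract describes (not EntangleVR's node graph), two scene objects can share one lazily "collapsed" state, so that measuring either one fixes both:

```python
import random

class EntangledPair:
    """Two virtual objects whose measured states are perfectly correlated:
    reading either one collapses the shared state for both."""
    def __init__(self, outcomes=("red", "blue")):
        self.outcomes = outcomes
        self.collapsed = None          # undetermined until first measurement

    def measure(self):
        if self.collapsed is None:
            self.collapsed = random.choice(self.outcomes)
        return self.collapsed

pair = EntangledPair()
assert pair.measure() == pair.measure()   # both objects always agree
```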

SESSION: Papers 4: Interaction Design

Image-Based Texture Styling for Motion Effect Rendering

A motion platform provides vestibular stimuli that elicit sensations of self-motion and thereby improve immersion. A representative example is a 4D ride, which presents a video of POV shots together with motion effects synchronized with the camera motion in the video. Previous research has produced a few automatic motion-effect synthesis algorithms for POV shots. Although effective in generating gross motion effects, they do not consider fine features of the ground, such as a rough or bumpy road. In this paper, we propose an algorithm for styling gross motion effects using a texture image. Our algorithm transforms a texture image into a high-frequency style motion and merges it with the original motion while respecting both perceptual and device constraints. A user study demonstrated that texture styling can increase immersiveness, realism, and harmony.
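
A hedged sketch of the merge step only: sample a luminance profile from the texture along the travelled path (at the motion signal's rate), keep its high-frequency band, and add it onto the gross motion within device limits. The window size and gain are illustrative, and the paper's perceptual constraints are not modelled here.

```python
import numpy as np

def style_motion(gross_motion, texture_profile, style_gain=0.2, window=15):
    """Superimpose high-frequency 'road texture' detail onto a gross motion
    signal; a moving average removes the low band so only fine bumps remain."""
    kernel = np.ones(window) / window
    low = np.convolve(texture_profile, kernel, mode="same")
    high = texture_profile - low                     # high-frequency style
    styled = np.asarray(gross_motion) + style_gain * high
    return np.clip(styled, -1.0, 1.0)                # respect platform limits
```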

Virtual Rotations for Maneuvering in Immersive Virtual Environments

In virtual navigation, maneuvering around an object of interest is a common task which requires simultaneous changes in both rotation and translation. In this paper, we present Anchored Jumping, a teleportation technique for maneuvering that allows the explicit specification of a new viewing direction by selecting a point of interest as part of the target specification process. A first preliminary study showed that naïve Anchored Jumping can be improved by an automatic counter rotation that preserves the user’s relative orientation towards their point of interest. In our second, qualitative study, this extended technique was compared with two common approaches to specifying virtual rotations. Our results indicate that Anchored Jumping allows precise and comfortable maneuvering and is compatible with techniques that primarily support virtual exploration and search tasks. Equipped with a combination of such complementary techniques, seated users generally preferred virtual over physical rotations for indoor navigation.
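
In 2D (yaw only), the counter rotation motivated by the first study can be written compactly: after the jump, the user's yaw is chosen so that the anchor point keeps the same bearing in their body frame as before. A minimal sketch under that assumption, with positions as 2D arrays:

```python
import numpy as np

def anchored_jump(user_pos, user_yaw, target_pos, anchor):
    """Teleport to target_pos, returning the yaw that preserves the user's
    relative orientation towards the anchor (point of interest)."""
    def bearing(frm, to):
        d = np.asarray(to) - np.asarray(frm)
        return np.arctan2(d[1], d[0])
    rel = bearing(user_pos, anchor) - user_yaw   # anchor bearing in body frame
    new_yaw = bearing(target_pos, anchor) - rel  # keep that bearing after the jump
    return np.asarray(target_pos), new_yaw
```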

Using Gaze Behavior and Head Orientation for Implicit Identification in Virtual Reality

Identifying users of a Virtual Reality (VR) headset provides designers of VR content with the opportunity to adapt the user interface, set user-specific preferences, or adjust the level of difficulty for games or training applications. While most identification methods currently rely on explicit input, implicit user identification is less disruptive and does not impact the users' immersion. In this work, we introduce a biometric identification system that employs the user's gaze behavior as a unique, individual characteristic. In particular, we focus on the user's gaze behavior and head orientation while following a moving stimulus. We verify our approach in a user study. A hybrid post-hoc analysis yields an identification accuracy of up to 75% for an explainable machine learning algorithm and up to 100% for a deep learning approach. We conclude by discussing application scenarios in which our approach can be used to implicitly identify users.
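
The exact feature set is the paper's contribution; purely as an illustration of the kind of gaze-and-head descriptors such a system might compute per stimulus-following window, consider:

```python
import numpy as np

def pursuit_features(gaze_yaw, gaze_pitch, head_yaw, head_pitch):
    """Fixed-length descriptor of one stimulus-following window: per-signal
    angle and angular-velocity statistics, ready for any classifier."""
    feats = []
    for sig in (gaze_yaw, gaze_pitch, head_yaw, head_pitch):
        sig = np.asarray(sig, float)
        vel = np.diff(sig)
        feats += [sig.mean(), sig.std(), vel.mean(),
                  vel.std(), np.abs(vel).max()]
    return np.array(feats)
```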

InteractML: Making machine learning accessible for creative practitioners working with movement interaction in immersive media

Interactive Machine Learning offers a method for designing movement interaction that lets creators implement even complex movement designs in their immersive applications simply by performing them with their bodies. We introduce a new tool, InteractML, and an accompanying ideation method, which make movement interaction design faster, more adaptable, and more accessible to creators of varying experience and backgrounds, such as artists, dancers, and independent game developers. The tool is specifically tailored to non-experts: creators configure and train machine learning models via a node-based graph and a VR interface, requiring minimal programming. We aim to democratise machine learning for movement interaction in the development of a range of creative and immersive applications.

Research and Practice Recommendations for Mixed Reality Design – Different Perspectives from the Community

Over the last decades, various design guides have been created to maintain consistency and usability in interactive system development. However, in the case of spatial applications, practitioners from research and industry either have difficulty finding them or perceive such guides as lacking relevance, practicability, and applicability. This paper presents the current state of scientific research and industry practice by investigating currently used design recommendations for mixed reality (MR) system development. In a literature review, we analyzed and compared 875 design recommendations for MR applications elicited from 89 scientific papers and from documentation by six industry practitioners. In doing so, we identified differences regarding four key topics: focus on unique MR design challenges, abstraction regarding devices and ecosystems, level of detail and abstraction of content, and covered topics. Based on that, we contribute to MR design research by providing three factors behind perceived irrelevance and six main implications for design recommendations that are applicable in scientific and industry practice.

Qualitative Dimensions of Technology-Mediated Reflective Learning: The Case of VR Experience of Psychosis

Self-reflection is the evaluation of one's inferential processes, often triggered by complex social and emotional experiences whose ambiguity and unpredictability push one to re-interpret the experience and update existing knowledge. Using immersive Virtual Reality (VR), we aimed to support social and emotional learning (SEL) through reflection in psychology education. We used the case of psychosis, as it involves ambiguous perceptual experiences. Through a co-design workshop, we designed a VR prototype that simulates the perceptual, cognitive, affective, and social elements of psychotic experiences, followed by a user study with psychology students to evaluate the potential of this technology to support reflection. Our analyses suggest that technology-mediated reflection in SEL involves two dimensions: spontaneous perspective-taking and a shared state of affect. By exploring the subjective qualities of reflection along these dimensions, our work contributes to the literature on technology-supported learning and informs VR developers designing for reflection.

SESSION: Papers 5: Sensing Devices and Haptics

RotoWrist: Continuous Infrared Wrist Angle Tracking using a Wristband

We introduce RotoWrist, an infrared (IR) light based solution for continuously and reliably tracking the 2-degree-of-freedom (DoF) relative angle of the wrist with respect to the forearm using a wristband. The tracking system consists of eight time-of-flight (ToF) IR light modules distributed around a wristband. We developed a computationally simple tracking approach that reconstructs the orientation of the wrist without any runtime training, ensuring user independence. An evaluation study demonstrated that RotoWrist achieves a cross-user median tracking error of 5.9° in flexion/extension and 6.8° in radial/ulnar deviation with no calibration required, as measured against optical ground truth. We further demonstrate the performance of RotoWrist in a pointing task and compare it against ground-truth tracking.

PAIR: Phone as an Augmented Immersive Reality Controller

Immersive head-mounted augmented reality allows users to overlay 3D digital content on their view of the world. Current-generation devices primarily support interaction modalities such as gesture, gaze, and voice, which are readily available to most users yet lack precision and tactility, rendering them fatiguing for extended interactions. We propose using smartphones, which are also readily available, as companion devices complementing existing AR interaction modalities. We leverage user familiarity with smartphone interactions, coupled with their support for precise, tactile touch input, to unlock a broad range of interaction techniques and applications - for instance, turning the phone into an interior design palette, touch-enabled catapult or AR-rendered sword. We describe a prototype implementation of our interaction techniques using an off-the-shelf AR headset and smartphone, demonstrate applications, and report on the results of a positional accuracy study.

Mid-Air Thermo-Tactile Feedback using Ultrasound Haptic Display

This paper presents a mid-air thermo-tactile feedback system using an ultrasound haptic display. We design a proof-of-concept system with an open-top chamber, heat modules, and an ultrasound display. Our approach provides heated airflow along the path to the focused pressure point created by the ultrasound display, generating thermal and vibrotactile cues in mid-air simultaneously. We confirm that our system can generate thermo-tactile stimuli of up to 54.2°C with 3.43 mN when the ultrasonic haptic signal is set to 100 Hz with a 12 mm cue radius, and that it can hold the temperature stable (mean error = 0.25%). We measure the warm detection threshold (WDT) and the heat-pain detection threshold (HPDT): the mean WDT was 32.8°C (SD=1.12) and the mean HPDT was 44.6°C (SD=1.64), consistent with contact-based thermal thresholds. We also found that the accuracy of haptic pattern identification is similar for non-thermal (98.1%) and thermal (97.2%) conditions, showing no significant effect of high temperature. Finally, we confirmed that thermo-tactile feedback further enhances the user experience.

TangibleData: Interactive Data Visualization with Mid-Air Haptics

In this paper, we investigate the effects of mid-air haptics in interactive 3D data visualization. We build an interactive 3D data visualization tool that combines hand gestures and mid-air haptics, providing tangible interaction in VR through ultrasound haptic feedback on 3D data visualization. We consider two types of 3D visualization datasets and provide different data-encoding methods for haptic representations. Two user experiments were conducted to evaluate the effectiveness of our approach. The first experiment shows that adding a mid-air haptic modality can be beneficial regardless of noise conditions and useful for handling occlusion or discerning density and volume information. The second experiment further shows the strengths and weaknesses of direct-touch and indirect-touch modes. Our findings can shed light on designing and implementing tangible interaction for 3D data visualization with mid-air haptic feedback.

PneuMod: A Modular Haptic Device with Localized Pressure and Thermal Feedback

Humans have tactile sensory organs distributed all over the body. However, haptic devices are often only created for one part (e.g., hands, wrist, or face). We propose PneuMod, a wearable modular haptic device that can simultaneously and independently present pressure and thermal (warm and cold) cues to different parts of the body. The module in PneuMod is a pneumatically-actuated silicone bubble with an integrated Peltier device that can render thermo-pneumatic feedback through shapes, locations, patterns, and motion effects. The modules can be arranged with varying resolutions on fabric to create sleeves, headbands, leg wraps, and other forms that can be worn on multiple parts of the body. In this paper, we describe the system design, the module implementation, and applications for social touch interactions and in-game thermal and pressure feedback.

Ellipses Ring Marker for High-speed Finger Tracking

High-speed finger tracking is necessary for augmented reality and for human-machine cooperation without latency discomfort, but conventional markerless finger tracking methods are not fast enough and marker-based methods have low wearability. In this paper, we propose the ellipses ring marker (ERM), a finger-ring marker consisting of multiple ellipses, together with a high-speed image recognition algorithm for it. The finger-ring shape offers high wearing continuity, and its surface shape is suitable for observation from various viewing angles. The invariance of the ellipse under perspective projection enables accurate, low-latency posture estimation. We experimentally investigated the advantage in normal distribution, validated sufficient accuracy and computational cost in marker tracking, and demonstrated dynamic projection mapping onto a palm.

Enabling Robot-assisted Motion Capture with Human Scale Tracking Optimization

Motion tracking systems that suffer from viewpoint constraints, or whose marker data include unreliable states, have proven difficult to use despite their many benefits. We propose a technique inspired by active vision that uses a customized hill-climbing approach to control a robot-sensor setup, and apply it to a magnetic induction system capable of occlusion-free motion tracking. Our solution reduces the impact of displacement and orientation issues for markers, which inherently present a dead-angle range that disturbs usability and accuracy. The resulting interface successfully stabilizes previously unexploitable data, prevents sub-optimal states for up to hundreds of occurrences per recording, and achieves an approximately 40% decrease in tracking error.
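
The paper's hill climbing is customized; the plain template it builds on looks like the sketch below, where `score` is an assumed callback that rates tracking quality for a candidate robot-sensor pose (higher is better).

```python
import random

def hill_climb(score, pose, step=0.05, iters=200):
    """Greedy local search: perturb the pose randomly and keep any
    neighbour that improves the tracking-quality score."""
    best = score(pose)
    for _ in range(iters):
        candidate = [p + random.uniform(-step, step) for p in pose]
        s = score(candidate)
        if s > best:
            pose, best = candidate, s
    return pose, best
```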

SESSION: Papers 6: Perception

Presenting Sense of Loud Vocalization Using Vibratory Stimuli to the Larynx and Auditory Stimuli

In recent years, technologies related to virtual reality (VR) have continued to advance. As a method to enhance the VR experience, we focus on loud vocalization, which we believe can enable more interactive engagement with the VR environment. Because loud vocalization is thought to be closely related to stress reduction and a sense of exhilaration, stress reduction through VR with loud vocalization is also expected. However, loud vocalization itself has physical, mental, and social disadvantages. We therefore hypothesized that loud vocalization itself is not necessary for such benefits; rather, the sense of loud vocalization plays the important role. Accordingly, we focused on substituting the experience by presenting sensory stimuli. In this paper, we propose a way to present the sense of loud vocalization through vibratory stimuli to the larynx and auditory stimuli while users actually vocalize quietly. Our user study showed that the proposed method can extend the sense of vocalization and realize pseudo-loud vocalization, and that it can evoke a sense of exhilaration. By contrast, excessively strong vibratory stimuli spoil the sense of loud vocalization, so the vibration intensity should be chosen appropriately.

Absolute and Differential Thresholds of Motion Effects in Cardinal Directions

In this paper, we report both absolute and differential thresholds for motion in the six cardinal directions as comprehensively as possible. As with general 4D motion effects, we used sinusoidal motions with low intensity and large frequency as stimuli; hence, we could also compare the effectiveness of motion types in delivering motion effects. We found that the thresholds for the z-axis (up-down) were higher than those for the x-axis (front-back) and y-axis (left-right) for both kinds of thresholds, and that the type of motion significantly affected both. Furthermore, between differential thresholds and reference intensities we found a strong linear relationship for roll, yaw, and surge, and a comparatively weak linear relationship for the remaining motion types. Our results can inform the generation of motion effects for 4D content that accounts for human sensitivity to motion feedback.
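
The linear relationship reported between differential thresholds and reference intensities matches the classical Weber-law form, which for each motion type can be written as

```latex
\Delta I = k \, I + c
```

where \Delta I is the differential threshold at reference intensity I, k is the Weber fraction of that motion type, and c an intensity-independent offset. The strong fits for roll, yaw, and surge suggest near-constant Weber fractions for those axes; k and c here are per-type fit parameters, not values given in the abstract.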

Analysis of Detection Thresholds for Hand Redirection during Mid-Air Interactions in Virtual Reality

Avatars in virtual reality (VR) with fully articulated hands enable users to naturally interact with the virtual environment (VE). Interactions are often performed in a one-to-one mapping between the movements of the user’s real body, for instance, the hands, and the displayed body of the avatar. However, VR also allows manipulating this mapping to introduce non-isomorphic techniques. In this context, research on manipulations of virtual hand movements typically focuses on increasing the user’s interaction space to improve the overall efficiency of hand-based interactions.

In this paper, we investigate a hand retargeting method for decelerated hand movements. With this technique, users need to perform larger movements to reach for an object in the VE, which can be utilized, for example, in therapeutic applications. If these gain-based redirections of virtual hand movements are small enough, users become unable to reliably detect them due to the dominance of the visual sense. In a psychophysical experiment, we analyzed detection thresholds for six different motion paths in mid-air for both hands. We found significantly different detection thresholds between movement directions on each spatial axis. To verify our findings, we applied the identified gains in a playful application in a confirmatory study.
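
Gain-based redirection of this kind reduces to scaling the real hand's displacement from a movement origin before it drives the virtual hand; with per-axis gains below 1, users must reach farther in reality, as described above. A minimal sketch, where the gains are placeholders rather than the identified thresholds:

```python
import numpy as np

def redirect_hand(real_pos, origin, gains=(0.8, 0.8, 0.8)):
    """Virtual hand position under decelerated retargeting: the real
    displacement from the origin is scaled per spatial axis."""
    real_pos = np.asarray(real_pos, float)
    origin = np.asarray(origin, float)
    return origin + np.asarray(gains, float) * (real_pos - origin)
```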

Virtual Reality platform for functional magnetic resonance imaging in ecologically valid conditions

Functional magnetic resonance imaging (fMRI) is a key non-invasive technique for studying human brain activity. Its millimetric spatial resolution comes at the cost of several constraints: participants must remain static and experience artificial stimuli, making it difficult to generalize neuroscientific results to naturalistic, ecological conditions. Immersive Virtual Reality (VR) provides alternatives to such stimuli through simulation, but still requires active first-person exploration of the environment to evoke a strong sense of presence in the virtual environment. Here, we report how to compensate for the inability to move freely in VR by leveraging principles of embodiment for a virtual avatar, eventually evoking a strong sense of presence with minimal motion from the participant. We validated the functionality of the platform in a study in which healthy participants performed several basic research tasks in an MR-specific immersive virtual environment. Our results show that our approach can lead to a high sense of presence, strong body ownership, and a sense of agency for a virtual avatar, with low movement-related MRI artifacts. Moreover, to exemplify the versatility of the platform, we reproduced several behavioral and fMRI results in the perceptual, motor, and cognitive domains. We discuss how to leverage such technology for neuroscience research and provide recommendations on efficient ways to implement and develop it successfully.

Enhancing In-game Immersion Using BCI-controlled Mechanics

Thanks to multimodal approaches, virtual reality experiences are becoming increasingly immersive and entertaining. New control modalities, such as brain-computer interfaces (BCIs), enable players to engage in a game with both their bodies and their minds. In our work, we investigate the influence of BCI-driven mechanics on players' in-game immersion. We designed and implemented an escape-room-themed game that employs the player's mental states of focus and relaxation as input for selected game mechanics. Through a between-subject user study, we found that controlling the game with mental states enhances in-game immersion and attracts the player's engagement. At the same time, using BCIs did not impose additional cognitive workload. Our work contributes qualitative insights into the psychocognitive effects of using BCIs in gaming and describes immersive gaming experiences.

Perceived Realism of Pedestrian Crowds Trajectories in VR

Crowd simulation algorithms play an essential role in populating Virtual Reality (VR) environments with multiple autonomous humanoid agents. Generating plausible trajectories can be a significant computational cost for real-time graphics engines, especially on untethered and mobile devices such as portable VR headsets. Previous research explores the plausibility and realism of crowd simulations on desktop computers but fails to account for the impact they have on immersion. This study explores how the realism of crowd trajectories affects perceived immersion in VR. We do so by running a psychophysical experiment in which participants rate the realism of real and synthetic trajectory data, which show similar levels of perceived realism.

SESSION: Papers 7: Input Methods

Pressing a Button You Cannot See: Evaluating Visual Designs to Assist Persons with Low Vision through Augmented Reality

Partial vision loss occurs in several medical conditions and affects persons of all ages. It compromises many daily activities, such as reading, cutting vegetables, or identifying and accurately pressing buttons, e.g., on ticket machines or ATMs. Touchscreen interfaces pose a particular challenge because they lack haptic feedback from interface elements and often require people with impaired vision to rely on others for help. We propose a smartglasses-based solution to utilize the user’s residual vision. Together with visually-impaired individuals, we designed assistive augmentations for touchscreen interfaces and evaluated their suitability to guide attention towards interface elements and to increase the accuracy of manual inputs. We show that augmentations improve interaction performance and decrease cognitive load, particularly for unfamiliar interface layouts.

Flyables: Haptic Input Devices for Virtual Reality using Quadcopters

Virtual Reality (VR) has made its way into everyday life. While VR delivers an ever-increasing level of immersion, controls and their haptics are still limited. Current VR headsets come with dedicated controllers that are used to control every virtual interface element. However, the controller input mostly differs from the virtual interface. This reduces immersion. To provide a more realistic input, we present Flyables, a toolkit that provides matching haptics for virtual user interface elements using quadcopters. We took five common virtual UI elements and built their physical counterparts. We attached them to quadcopters to deliver on-demand haptic feedback. In a user study, we compared Flyables to controller-based VR input. While controllers still outperform Flyables in terms of precision and task completion time, we found that Flyables present a more natural and playful way to interact with VR environments. Based on the results from the study, we outline research challenges that could improve interaction with Flyables in the future.

Object Manipulations in VR Show Task- and Object-Dependent Modulation of Motor Patterns

Humans can perform object manipulations in VR despite missing haptic and acoustic information. Whether their movements under these artificial conditions still rely on motor programs based on natural experience, or are impoverished by the restrictions VR imposes, is unclear. We investigated whether reach-to-place and reach-to-grasp movements in VR can still be adapted to the task and to the specific properties of the objects being handled, or whether they reflect a stereotypic, task- and object-independent motor program. We analyzed reach-to-grasp and reach-to-place movements from participants performing an unconstrained "set-the-table" task involving a variety of objects in virtual reality. These actions were compared based on their kinematic features. We found significant differences in peak speed and in the duration of the deceleration phase, modulated by both the action and the manipulated object. The flexibility of natural human sensorimotor control is thus at least partially transferred to and exploited under impoverished VR conditions. We discuss possible explanations of this behavior and the implications for the design of object manipulations in VR.

Modeling Pointing for 3D Target Selection in VR

Virtual reality (VR) allows users to interact much as they do in the physical world, such as touching, moving, and pointing at objects. To select objects at a distance, most VR techniques cast a ray through one or two points located on the user's body (e.g., on the head and a finger) and place a cursor on that ray. However, previous studies show that such rays neither help users achieve optimal pointing accuracy nor correspond to how they would naturally point. We seek features that best describe natural pointing at distant targets. We collect motion data from seven locations on the hand, arm, and body while participants point at 27 targets across a virtual room. We evaluate pointing features and analyse feature sets for predicting pointing targets. Our analysis shows an 87% classification accuracy across the 27 targets for the best feature set and a mean distance of 23.56 cm in predicting pointing targets across the room. These feature sets can inform the design of more natural and effective VR pointing techniques for distant object selection.
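
For reference, the baseline this work measures against is the familiar two-point ray cast; a minimal version with nearest-ray target selection is sketched below (the study itself instead predicts targets from learned feature sets).

```python
import numpy as np

def eye_finger_ray(eye, fingertip):
    """Ray cast through two body points: origin at the eye, direction
    through the fingertip."""
    eye = np.asarray(eye, float)
    d = np.asarray(fingertip, float) - eye
    return eye, d / np.linalg.norm(d)

def closest_target(origin, direction, targets):
    """Pick the target with the smallest perpendicular distance to the
    (unit-direction) ray."""
    dists = [np.linalg.norm(np.cross(direction, np.asarray(t, float) - origin))
             for t in targets]
    return int(np.argmin(dists))
```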

Virtual Object Categorisation Methods: Towards a Richer Understanding of Object Grasping for Virtual Reality

Object categorisation methods have historically been used in the literature to understand and collect real objects into meaningful groups, and can be used to define human interaction patterns (i.e., grasping). When investigating grasping patterns for Virtual Reality (VR), researchers have used Zingg's methodology, which categorises objects based on shape and form. However, this methodology is limited and does not take into consideration other object attributes that might influence grasping interaction in VR. To address this, our work presents a study of three categorisation methods for virtual objects. We employ Zingg's object categorisation as a benchmark against existing real and virtual object interaction work and introduce two new categorisation methods that focus on virtual object equilibrium and virtual object component parts. We evaluate these categorisation methods using a dataset of 1872 grasps from a VR docking task on 16 virtual representations of real objects. For each virtual object categorisation method, we report findings on grasp patterns, showing differences in grasp classes, grasp type, and aperture. We conclude by detailing recommendations and future ideas on how these categorisation methods can be taken forward to inform a richer understanding of grasping in VR.

Actions, not gestures: contextualising embodied controller interactions in immersive virtual reality

Modern immersive virtual reality (IVR) often uses embodied controllers for interacting with virtual objects. However, it is not clear how we should conceptualise these interactions. They could be considered gestures, as there is no interaction with a physical object; or actions, given that there is object manipulation, even if virtual. This distinction is important, as the literature has shown that in the physical world, action-enabled and gesture-enabled learning produce distinct cognitive outcomes. This study attempts to understand whether sensorimotor-embodied interactions with objects in IVR can cognitively be considered actions or gestures. It does this by comparing verb-learning outcomes between two conditions: (1) where participants move the controllers without touching virtual objects (gesture condition); and (2) where participants move the controllers and manipulate virtual objects (action condition). We found that (1) users can have cognitively distinct outcomes in IVR depending on whether the interactions are actions or gestures, with actions providing stronger memorisation outcomes; and (2) embodied controller actions in IVR behave more like physical-world actions in terms of verb memorisation benefits.

SESSION: Poster and Demonstration Abstracts

3D Printing an Accessory Dock for XR Controllers and its Exemplary Use as XR Stylus

This article introduces the accessory dock, a 3D printed multi-purpose extension for consumer-grade XR controllers that enables flexible mounting of self-made and commercial accessories. The uniform design of our concept opens new opportunities for XR systems being used for more diverse purposes, e.g., researchers and practitioners could use and compare arbitrary XR controllers within their experiments while ensuring access to buttons and battery housing. As a first example, we present a stylus tip accessory to build an XR Stylus, which can be directly used with frameworks for handwriting, sketching, and UI interaction on physically aligned virtual surfaces. For new XR controllers, we provide instructions on how to adjust the accessory dock to the controller’s form factor. A video tutorial for the construction and the source files for 3D printing are publicly available for reuse, replication, and extension (https://go.uniwue.de/hci-otss-accessory-dock).

A Compact and Low-cost VR Tooth Drill Training System using Mobile HMD and Stylus Smartphone

Drilling teeth is an essential skill for dental learners. However, existing VR tooth-drill training simulators are physically large and costly, so they cannot be used at home or in the classroom for private study. This work presents a novel low-cost mobile VR dental simulator built from off-the-shelf devices: a mobile HMD and a stylus smartphone. In this system, a 3D-printed physical teeth prop is placed on an EMR stylus smartphone, whose stylus tracks the tip position of a physical drill. Unlike existing solutions using haptic/force devices, our approach involves physical contact between the prop and the drill tip, giving the user a natural sensation of tooth hardness. The smartphone stylus also enables significantly more accurate drill-position sensing around the teeth than the HMD's accompanying controllers. We developed VR software to simulate tooth drilling on this setup. This demo will show how our mobile simulator offers a realistic feeling of drilling teeth.

A Hat-shaped Pressure-Sensitive Multi-Touch Interface for Virtual Reality

We developed a hat-shaped touch interface for virtual reality viewpoint control. The hat is made of conductive fabric and thus is lightweight. The user can touch, drag, and push the surface, enabling three-dimensional viewpoint control.

A Perceptual Evaluation of the Ground Inclination with a Simple VR Walking Platform

We evaluate how realistically the inclination of the ground can be perceived with our simple VR walking platform. We first prepared seven maps with ground inclinations from -30 to 30 degrees in 10-degree steps. We then conducted a perception experiment on the feeling of inclination with both a treadmill and our proposed platform, along with a questionnaire evaluation of presence, fatigue, and exhilaration. The results show that our platform not only provides a feeling of presence equivalent to the treadmill but also lets users perceive upward and downward ground inclination.

A Pilot Study Examining the Unexpected Vection Hypothesis of Cybersickness.

The relationship between vection (illusory self-motion) and cybersickness is complex. This pilot study examined whether only unexpected vection provokes sickness during head-mounted display (HMD) based virtual reality (VR). 20 participants ran through the tutorial of Mission: ISS (an HMD VR app) until they experienced notable sickness (maximum exposure was 15 minutes). We found that: 1) cybersickness was positively related to vection strength; and 2) cybersickness appeared to be more likely to occur during unexpected vection. Given the implications of these findings, future studies should attempt to replicate them and confirm the unexpected vection hypothesis with larger sample sizes and rigorous experimental designs.

A sharing system for the annoyance of menstrual symptoms using electrical muscle stimulation and thermal stimulations

A System for Practicing Ball/Strike Judgment in VR Environment

The purpose of this study is to develop an easy-to-use ball/strike judgment practice system for inexperienced baseball umpires. The main idea is to provide a practice environment in a Virtual Reality (VR) space. With our system, users observe a pitched ball, make a ball/strike judgment, and review their judgment in the VR space. Since the whole process is completed in VR, users can practice judgments without needing a pitcher and catcher. We conducted a user investigation in which participants practiced with our system and then judged balls thrown by a pitching machine. The participants responded positively when asked about the usefulness of our system.

A Tangible Haptic Feedback Box for Mixed Reality Billiard in Tight Spaces

This paper presents a system for a simulated billiard game with two players and an emphasis on haptic feedback. We devised a feedback box that is responsible for generating the inputs and providing immediate haptic feedback to the user. The simulation runs as an AR application, and the player uses a real cue to hit a real ball. Although the haptic feedback is precise thanks to the use of a real billiard ball and cue, the input accuracy of the angle and impulse measurement is limited.

ALiSE: Non-wearable AR display through the looking glass, and what looks solid there

With augmented reality mirror displays that use a half-mirror, there is a difference in focal length between the mirror image and the AR image. The observer therefore perceives a mismatch in depth, which impairs usability. In this study, we developed an optical-reflection AR display, ALiSE (Augment Layer interweaved Semi-reflecting Existence), which enhances the depth perception of AR images by adding a gap zone, with the same depth as the target depth, between the display and the half-mirror. We conducted an experiment in which participants viewed 3D objects and performed virtual fitting using both existing video-synthesis AR and the proposed ALiSE method. In the questionnaire survey, although the comfort of wearing virtual objects fell below that of existing methods, we confirmed that presence and solidity were superior with the proposed method. The approach creates a sense of stereoscopic effect despite the 2D projection, because the projected object is reflected in the mirror together with the observer.

An Evaluation of Methods for Manipulating Virtual Objects at Different Scales

Immersive Virtual Reality enables users to experience 3D models and other virtual content in ways that cannot be achieved on a flat screen, and several modern Virtual Reality applications now give users the ability to include or create their own content and objects. With user-generated content, however, objects may come in all shapes and sizes, which necessitates object manipulation methods that are effective regardless of object size. In this work we evaluate two methods for manipulating virtual objects of varying sizes: World Pull enables the user to directly manipulate and scale the virtual environment, while Pivot Manipulation enables the user to rotate objects around a set of predefined pivot points. The methods were compared to a traditional six-degree-of-freedom manipulation method in a user study, and the results showed that World Pull performed better in terms of precision for small and large objects, while Pivot Manipulation performed better for large objects.

An Infant-Like Device that Reproduces Hugging Sensation with Multi-Channel Haptic Feedback

Proximity interaction, such as hugging, plays an essential role in building relationships between parents and children. However, parents and children cannot interact freely in the neonatal intensive care unit due to visiting restrictions imposed by COVID-19. In this study, we develop a system for pseudo-proximity interaction with a remote infant through a VR headset, using an infant-like device that reproduces the haptic features of the hugging sensation, such as weight, body temperature, breathing, softness, and an unstable neck.

An Interactive Flight Operation with 2-DOF Motion Platform

We propose an interactive flight operation with a 2-DOF motion platform that tilts according to the user’s posture and the VR environment. To realize flight like a hang glider, the system interactively controls the motion platform according to the user’s attitude. When the user, holding a planche-like horizontal posture, tilts their body back and forth or left and right, the virtual aircraft tilts in that direction and the motion platform performs corresponding rolling movements. In addition, since our balance-board-based motion platform swings through rolling motion, it realizes large swings safely and at low cost.

CAVE applications: from craft manufacturing to product line engineering

The product line engineering model is suitable for engineering related software products efficiently, taking advantage of their similarities while managing their differences. Our feature-driven software product line (SPL) solution based on this model allows different CAVE products to be instantiated from a set of core assets, driven by a set of common VR features, with minimal budget and time to market.

Conference Talk Training With a Virtual Audience System

This paper presents the first prototype of a virtual audience system (VAS) specifically designed as a training tool for conference talks. The system has been tailored for university seminars dedicated to the preparation and delivery of scientific talks. We describe the required features identified during the development process and summarize the preliminary feedback received from lecturers and students during the first deployment of the system in seminars for bachelor and doctoral students. Finally, we discuss future work and research directions. We believe our system architecture and features provide useful insights into the development and integration of VR-based educational tools into university curricula.

Content-rich and Expansive Virtual Environments Using Passive Props As World Anchors

In this paper, we present a system that allows developers to add passive haptic feedback into their virtual reality applications by making use of existing physical objects in the user’s real environment. Our approach has minimal dependence on procedural generation and does not limit the virtual space to the dimensions of the physical play-area.

Dealing with a Panic Attack: a Virtual Reality Training Module for Postgraduate Psychology Students

In this paper we present a virtual reality training simulator for postgraduate psychology students. The simulator features an interaction between a clinical psychologist (student) and a patient (virtual agent) suffering from Obsessive Compulsive Disorder (OCD). Our simulation focuses on the form of OCD treatment called “Exposure Therapy”. The traditional way of learning how to perform Exposure Therapy (ET) currently involves watching video recordings and discussing them in class. In our simulation we conduct an immersive exposure therapy session in VR. This session involves a live interaction with a patient who at one stage suffers a panic attack. Our hypothesis is that the immersive nature of the training session will affect the students’ decision-making so that they are more likely to cease the exposure task than students participating in a less immersive form of learning (watching a video recording). We also hypothesise that participating in an immersive VR training session is more effective than watching videos in terms of information retention.

Designing Obstacle Reminder for Safe AR Navigation

AR navigation is widely used on mobile devices. However, users sometimes become immersed in the navigation interface and overlook dangers in the real environment, so it is necessary to remind them of potential dangers and prevent accidents. Most existing work focuses on how to guide users effectively in AR, but little addresses the design of danger reminders. In this paper, we build a virtual AR navigation system and compare the user experience of different types of obstacle reminders. Furthermore, we compare the influence of color, motion, and appearance distance on reminder effectiveness. Results show that red and bi-color reminders are more noticeable than blue ones, and that motion such as a flickering effect enhances reminder effectiveness.

Double-Layered Cup-Shaped Device to Amplify Taste Sensation of Carbonation by the Electrical Stimulation on the Human Tongue

We show that electrical stimulation on the human tongue amplifies the taste sensation of carbonated beverages. We have developed a novel electric taste system with two components: a cup-shaped device and its stimulation circuit. The cup-shaped device has a double-layer structure. The circuit comprises a constant-current control circuit and a signal generator, which allow the electrical parameters to be adjusted. The double-layer structure keeps the device hygienic during electric-taste demonstrations, because the inner layer that touches the user’s mouth can be exchanged. The device is also inexpensive and easy to manufacture, so that many people can experience it.

Effect of Visual Feedback on Understanding Timbre with Shapes Based on Crossmodal Correspondences

Timbre is a crucial element in playing musical instruments, and it is difficult for beginners to learn it independently. Therefore, external feedback (FB) is required. However, conventional FB methods lack intuitiveness in visualization. In this study, we propose a novel FB method that adopts crossmodal correspondence to enhance the intuitive visualization of timbre with visual shapes. Based on the experiments, it was inferred that the FB based on crossmodal correspondence prevents dependence on FB and promotes learning.

Effects of User’s Gaze on the Unintended Positional Drift in Walk-in-Place

Walk-In-Place (WIP) is a technique in which users perform walking or jogging-like movements in a stationary place to move around in virtual environments (VEs). However, unintended positional drift (UPD) often occurs while performing WIP, weakening its benefit of keeping users in a fixed position in physical space. In this paper, we present a preliminary study exploring whether users’ gaze during WIP affects the direction of the UPD. Participants jogged in a VE five times; each time, we manipulated their gaze direction by displaying visual information in five different locations in their view. Although a correlation between gaze and UPD direction was not found, we report the results of this study, including the amount of observed drift and the preferred location of visual information, and discuss future research directions.

Efficient Mapping Technique under Various Spatial Changes for SLAM-based AR Services

Recently, many attempts have been made to apply real-time simultaneous localization and mapping (SLAM) technology to augmented reality (AR) applications. Such SLAM-based AR systems are generally implemented by augmenting virtual objects onto a diorama or three-dimensional sculpture. However, a new SLAM map needs to be generated if the space or lighting where the diorama is installed changes, which leads to the problem of updating the coordinate system each time a new map is generated. Updating the coordinate system means that the positions of the virtual objects placed in the AR space change as well. Therefore, we propose a SLAM map regeneration technique in which the existing coordinate system is maintained even when a new map is generated.
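For illustration only (the abstract does not give the alignment details), one standard way to keep the existing coordinate system when a new map is generated is to rigidly align matched anchor points from the new map onto the old one, for example with the Kabsch algorithm. A minimal Python sketch, assuming hypothetical `old_anchors`/`new_anchors` correspondences:

```python
import numpy as np

def align_maps(old_anchors: np.ndarray, new_anchors: np.ndarray):
    """Rigid (rotation + translation) alignment of corresponding anchor
    points from a newly built SLAM map onto the old map's coordinate
    system, via the Kabsch algorithm. Both arrays have shape (N, 3)."""
    mu_old, mu_new = old_anchors.mean(axis=0), new_anchors.mean(axis=0)
    H = (new_anchors - mu_new).T @ (old_anchors - mu_old)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_old - R @ mu_new
    return R, t  # maps a new-map point p into the old frame as R @ p + t
```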

Emotional Virtual Reality Stroop Task: Pilot Design

Anxiety-induction and assessment methods in Virtual Reality have been a topic of discussion in recent literature. The importance of the topic relates to the difficulty of obtaining accurate and timely measurements of anxiety without relying on self-report and breaking immersion. To this end, the current study utilises the emotional version of a well-established cognitive task, the Stroop Color-Word Task, and brings it to Virtual Reality. It consists of three levels: congruent, used as a control and corresponding to no anxiety; incongruent, corresponding to mild anxiety; and emotional, corresponding to severe anxiety. This pilot serves two functions. The first is to validate the effects of the task using biosignal measurements. The second is to use the biosignal information and the labels to train a machine-learning algorithm. The information collected in the pilot will be used to decide what types of signals and devices to use in the final product, as well as which algorithm and time frame are better suited to accurately determining the user’s anxiety level within Virtual Reality without breaking immersion.

Estimate the Difference Threshold for Curvature Gain of Redirected Walking

Redirected walking (RDW) allows users to navigate a large virtual world in a small physical space. If the applied redirection stays below the detection threshold, humans hardly notice it. However, some papers have reported that users perceived changes in curvature gain even when redirections smaller than the detection threshold were applied, meaning that the change in curvature gain itself can be perceived. In this paper, we therefore identified a threshold for the change in curvature gain, which was found to be 3.06°/m. Further experiments using different methods of varying the curvature gain will follow.
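As a hedged illustration of how such a threshold might be used in an RDW controller (this is not the authors’ implementation, and the interpretation of the °/m unit as a per-metre rate of gain change is an assumption), one could inject the usual per-step rotation while clamping how fast the gain itself varies:

```python
def apply_curvature(world_yaw_deg: float, gain_deg_per_m: float, step_m: float) -> float:
    """Standard curvature-gain injection: rotate the virtual world by
    `gain_deg_per_m` degrees for every metre the user physically walks."""
    return world_yaw_deg + gain_deg_per_m * step_m

def update_gain(gain: float, target_gain: float, step_m: float,
                max_change_deg_per_m: float = 3.06) -> float:
    """Illustrative use of the reported threshold: bound how much the
    gain may change per metre walked so the variation stays at or below
    the 3.06 deg/m change threshold."""
    max_delta = max_change_deg_per_m * step_m
    delta = min(max(target_gain - gain, -max_delta), max_delta)
    return gain + delta
```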

Evaluating the influence of interaction technology on procedural learning using Virtual Reality

Within the context of Industry 4.0, this paper studies the influence of interaction technology (Vive controllers and Knuckles) on procedural training for manufacturing assembly using Virtual Reality. We conducted an experiment with 24 volunteers separated into two groups: one using Vive controllers and the other using Knuckles. Our conclusions are based on two indicators: the time to complete all tasks and the number of manipulations. The study shows that, after a familiarization period, volunteers using Knuckles are faster than the other group, but for some very delicate tasks they need more manipulations to succeed.

Exploring Emotion Brushes for a Virtual Reality Painting Tool

We present emoPaint, a virtual reality application that allows users to create paintings with expressive emotion-based brushes and shapes. While previous systems have introduced painting in 3D space, emoPaint focuses on supporting emotional characteristics by allowing users to use brushes corresponding to specific emotions or to create their own emotion brushes and paint with the corresponding visual elements. Our system provides a variety of line textures, shape representations and color palettes for each emotion to enable users to control expression of emotions in their paintings. In this work we describe our implementation and illustrate paintings created using emoPaint.

Fishtank Sandbox: A Software Framework for Collaborative Usability Testing of Fish Tank Virtual Reality Interaction Techniques

Human-computer interaction researchers have long studied how we can interact efficiently with virtual objects in a virtual environment. However, many usability experiments do not share the same control parameters, and this lack of consistency makes comparing different interaction techniques difficult. In this article, we present a software framework for usability studies of fish tank virtual reality (FTVR) interaction techniques. The framework provides fixed control parameters (e.g., task, graphic settings, and measured parameters), lets other researchers incorporate their interaction techniques as add-ons, and enables individuals to participate in experiments over the internet. The article explores a new way for VR/AR researchers to approach usability experiments using the framework and discusses the challenges it brings.

Fluid3DGuides: A Technique for Structured 3D Drawing in VR

We propose Fluid3DGuides, a drawing guide technique to help users draw structured sketches more accurately in VR. The prototype system continuously infers visual guide lines based on the user’s instantaneous stroke-drawing intention and its potential constraint relationships with existing strokes. We evaluated our prototype through a pilot user study with six participants, comparing the proposed guide technique against a non-guide drawing condition. Participants gave positive comments on ease of use and drawing accuracy. They found that the technique could reduce the time and effort required to find the correct drawing perspective and obtain more accurate 3D structured sketches.

Force-Based Foot Gesture Navigation in Virtual Reality

Navigation is a primary interaction in virtual reality. Previous research has explored different forms of artificial locomotion techniques for navigation, including hand gestures and body motions. However, few studies have investigated force-based foot gestures as a locomotion technique. We present three force-based foot gestures (Foot Fly, Foot Step, and Foot Teleportation) for navigation in a virtual environment, relying on surface electromyography sensor readings from leg muscles. A pilot study comparing our techniques with controller-based techniques indicates that force-based foot gestures can provide a fun and engaging alternative. Of all six input techniques evaluated, Foot Fly was often the most preferred despite requiring more exertion than the Controller Fly technique.
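The abstract does not specify the gesture classifier; as a minimal sketch, a force-based trigger could threshold the RMS envelope of the sEMG signal against a resting baseline (the function name, window input, and factor k are hypothetical):

```python
import numpy as np

def detect_foot_press(emg_window: np.ndarray, baseline_rms: float, k: float = 3.0) -> bool:
    """Toy force-based gesture trigger (illustrative only): fire when the
    RMS of the latest sEMG window exceeds k times the resting baseline."""
    rms = float(np.sqrt(np.mean(np.square(emg_window))))
    return rms > k * baseline_rms
```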

Freehand Interaction in Virtual Reality: Bimanual Gestures for Cross-Workspace Interaction

This work presents the design and evaluation of three bimanual interaction modalities for cross-workspace interaction in virtual reality (VR), in which the user can move items between a personal workspace and a shared workspace. We conducted an empirical study to understand three modalities and their suitability for cross-workspace interaction in VR.

GazeMOOC: A Gaze Data Driven Visual Analytics System for MOOC with XR Content

MOOCs are widely used and have become even more popular since COVID-19. To improve learning outcomes, MOOCs are evolving with XR technologies such as avatars, virtual scenes, and virtual experiments. This paper proposes a novel visual analytics system, GazeMOOC, that can evaluate learners’ engagement in MOOCs with XR content. For the same MOOC content, the gaze data of all learners are recorded and clustered. By differentiating the gaze data of distracted and active learners, GazeMOOC helps evaluate both MOOC content and learners’ engagement.

HapticPanel: An Open System to Render Haptic Interfaces in Virtual Reality for Manufacturing Industry

Virtual Reality (VR) allows simulation of machine control panels without physical access to the machine, enabling easier and faster initial exploration, testing, and validation of machine panel designs. However, haptic feedback is indispensable if we want to interact with these simulated panels in a realistic manner. We present HapticPanel, an encountered-type haptic system that provides realistic haptic feedback for machine control panels in VR. To ensure a realistic manipulation of input elements, the user’s hand is continuously tracked during interaction with the virtual interface. Based on which virtual element the user intends to manipulate, a motorized panel with stepper motors moves a corresponding physical input element in front of the user’s hand, enabling realistic physical interaction.
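A minimal sketch of the encountered-type selection logic this implies (assumed, not the authors’ controller): extrapolate the tracked hand a short horizon ahead and pick the nearest virtual input element, whose physical counterpart the motorized panel would then move into place:

```python
import numpy as np

def choose_target_element(hand_pos, hand_vel, elements, horizon_s=0.3):
    """Predict the user's contact point by linear extrapolation of the
    tracked hand, then return the virtual input element closest to it.
    `elements` is a hypothetical list of dicts with a 3D "pos" entry."""
    predicted = np.asarray(hand_pos) + horizon_s * np.asarray(hand_vel)
    dists = [np.linalg.norm(predicted - np.asarray(e["pos"])) for e in elements]
    return elements[int(np.argmin(dists))]
```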

HoloKeys: Interactive Piano Education Using Augmented Reality and IoT

The rise of online learning poses unique challenges in music education, where live demonstration and musical synchronization are critical for student success. We present HoloKeys, a music education interface that allows instructors to play remotely located pianos using an augmented reality headset and Wi-Fi-enabled microcontrollers. This approach allows students to receive distance education that is more direct, immersive, and comprehensive than conventional video conferencing allows. HoloKeys enables remote students to observe live instructional demonstration on a physical keyboard in their immediate environment just as they would in traditional settings. HoloKeys consists of two separate components: an augmented reality user interface and a piano-playing apparatus. Our system aims to extend online music education beyond desktop platforms into the physical world, thereby addressing crucial obstacles encountered by educators and students transitioning into online education.

IMAGEimate - An End-to-End Pipeline to Create Realistic Animatable 3D Avatars from a Single Image Using Neural Networks

Current advances in image-based 3D human shape estimation and parametric human models enable the creation of realistic 3D virtual humans. We present a pipeline that takes advantage of these models and creates realistic, animatable 3D avatars from a single input image. The pipeline extracts shape and pose parameters from the input image and builds an implicit surface representation, which is then fitted onto a parametric human model. The fitted human model is animated to novel poses by extracting pose parameters from a motion capture dataset. We extend the pipeline to showcase realism and interaction by texture-painting the avatar using Substance Painter and embedding it in an AR scene using Adobe Aero.

Immersive Analytics: A User-Centered Perspective

Researchers have explored using VR and 3D data visualizations for analyzing and presenting data for several decades. Surveys of the literature in the field usually adopt a technical or systemic lens. We propose a survey of the Immersive Analytics literature from the user’s perspective that relates the purpose of the visualization to its technical qualities. We present our preliminary review to describe how device technologies, kinds of representation, collaborative features, and research design have been utilized to accomplish the purpose of the visualization. This poster demonstrates our preliminary investigation, inviting feedback from the VRST community. We hope the final version of our review will benefit designers, developers, and practitioners who want to implement immersive visualizations from a Human-Centered Design perspective, and help Immersive Analytics researchers better understand the gaps in the current literature.

Immersive Furnishing - Randomized Big Five Personality Traits Based Interior Layouts

In this work, we created an application that generates different apartment settings based on the Big Five personality model and investigated user perception and level of immersion in VR. The goal was to achieve a believable, immersive randomization of apartment interiors for open-world applications. The Big Five model is a modern personality theory that defines five central personality traits: extraversion, agreeableness, openness, conscientiousness, and neuroticism. For each apartment layout, we simulated a personality by randomizing the values of each of the five traits. We then calculated a series of derived traits from these five base traits and used both to influence the interior layout of the apartment. To test how much the personality-based interior layout system affected perceived personality, we set up a key-finding game and asked participants a series of questions about their perception of the apartment tenant after they found the keys on each floor. We found that participants’ perceptions of tenants’ personalities matched the Big Five personality values used to generate the apartment layout at a rate higher than chance.

Immersive Visual Interaction with Autonomous Multi-Vehicle Systems

With the emergence of multi-vehicular autonomous systems, such as AI-controlled fleets of fully autonomous vehicles, we need novel systems that provide tools for planning, executing, and reviewing missions while keeping humans in the loop during all phases. We therefore present an immersive visualization system for interacting with these systems at a higher cognitive level than piloting individual vehicles. Our system provides both desktop and VR modes for visual interaction with the robotic multi-vehicle AI system.

Incorporating Human Behavior in VR Compartmental Simulation Models

A novel strain of coronavirus has affected a large number of individuals worldwide, putting considerable stress on national health services and causing many deaths. Many control measures have been put in place across different countries with the aim of saving lives at the cost of personal freedom. Computer simulations have played a role in providing policy makers with critical information about the virus. However, despite their importance in applied epidemiology, general simulation models are difficult to validate because of how hard it is to predict and model human behavior. To this end, we propose a different approach: a virtual reality (VR) multi-agent virus propagation system in which a group of agents interacts with the user in a university setting. We created a VR digital twin replica of a building on the University of Derby campus to enhance the user’s immersion in our study. Our work integrates human behavior seamlessly into a simulation model, and we believe this approach is crucial for a deeper understanding of how to control the spread of a virus such as COVID-19.
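For context, the compartmental models referenced in the title reduce to a few coupled rates; a minimal Euler-stepped SIR sketch in Python (the VR system described above would replace the homogeneous-mixing assumption behind beta with directly observed agent/user contacts):

```python
def sir_step(s: float, i: float, r: float, beta: float, gamma: float, dt: float = 1.0):
    """One Euler step of the classic SIR compartmental model:
    susceptibles become infected at rate beta*S*I/N, and the
    infected recover at rate gamma*I."""
    n = s + i + r
    new_inf = beta * s * i / n * dt
    new_rec = gamma * i * dt
    return s - new_inf, i + new_inf - new_rec, r + new_rec
```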

Interactive Visualization of Deep Learning Models in an Immersive Environment

The development of deep learning (DL) models has become prevalent among software engineers. However, it is difficult for non-experts to analyze and understand their behavior. Hence, we propose an interactive visualization system for DL models in an immersive environment. Because an immersive environment offers unlimited display space and visualization of high-dimensional data, it enables comprehensive analysis of data propagation through the layers and comparison of multiple performance metrics. In this research, we implemented a prototype system, demonstrated it to machine learning engineers, and discussed the future benefits of visualizing DL models in an immersive environment. Our concept received positive feedback; however, we found that most engineers regard the visualization technology primarily as a novel introduction to the immersive environment.

Managing a Crisis in Virtual Reality - Tackling a Wildfire

In this paper we present a virtual reality application in which multiple users can observe and interact with a portion of geo-referenced terrain where a real wildfire took place. The application presents a layout with two maps: a three-dimensional view with terrain elevation and a conventional two-dimensional view. The VR users can control different layers (roads, waterways, etc.), control the wildfire’s playback, command vehicles to change positions, and paint the terrain to convey information to one another. This work explores how users interact with map visualizations and plan for a crisis management scenario within a virtual environment.

Miniature AR: Multi-view 6DOF Virtual Object Visualization for a Miniature Diorama

We describe a miniature diorama AR system called ‘Miniature AR’ that can be applied to a mechanical diorama, extending the content’s possibilities by overlaying virtual objects on the complex diorama structure. Previous AR research on dioramas is usually based on 2D planar recognition and cannot support multi-user experiences due to the lack of multi-device synchronization. To overcome these constraints, we present a new diorama AR system suitable for tiny, complex structures. The contributions of our work are i) the design of the diorama AR system, ii) AR space generation and 6DOF view-device tracking for the diorama, and iii) multi-view and event synchronization for multiple users. The utility of the approach has been demonstrated on a real diorama (a miniature ski slope) using mobile devices.

Multi-Componential Analysis of Emotions Using Virtual Reality

In this study, we propose a data-driven approach to investigate the emotional experience triggered by Virtual Reality (VR) games. We considered the full Component Process Model (CPM), which theorises emotional experience as a multi-process phenomenon. We validated the feasibility of the proposed approach through a pilot experiment and confirmed that VR games can be used to trigger a diverse range of emotions. Using hierarchical clustering, we showed a clear distinction between positive and negative emotions in the CPM space.
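A minimal sketch of the clustering step, assuming a hypothetical trial-by-component rating matrix X (this uses SciPy’s standard agglomerative API, not necessarily the authors’ exact pipeline):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical CPM feature matrix: one row per game/trial, columns are
# component-process ratings (appraisal, motivation, physiology, ...).
X = np.random.rand(40, 5)            # stand-in for real ratings
Z = linkage(X, method="ward")        # agglomerative hierarchical clustering
labels = fcluster(Z, t=2, criterion="maxclust")  # e.g. positive vs. negative
```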

Multi-View AR Streams for Interactive 3D Remote Teaching

In this work, we present a system that adds augmented reality interaction and 3D-space utilization to educational videoconferencing for a more engaging distance learning experience. We developed infrastructure and user interfaces that enable the use of an instructor’s physical 3D space as a teaching stage, promote student interaction, and take advantage of the flexibility of adding virtual content to the physical world. The system is implemented using hand-held mobile augmented reality to maximize device availability, scalability, and ready deployment, elevating traditional video lectures to immersive mixed reality experiences. We use multiple devices on the teacher’s end to provide different simultaneous views of a teaching space towards a better understanding of the 3D space.

Natural walking speed prediction in Virtual Reality while using target selection-based locomotion

Travelling speed plays an essential role in the overall user experience while navigating a virtual environment. Researchers have used various travelling speeds that match the user’s speed profile in order to give a natural walking experience. However, predicting a user’s instantaneous walking speed is challenging when there is no continuous input from the user. In target selection-based techniques, the user selects a target and is moved there automatically; these techniques lack naturalness due to their low interaction fidelity. In this work, we propose a mathematical model that dynamically computes a natural instantaneous walking speed while moving from one point to another in a virtual environment. We formulated our model with the help of user studies.
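The abstract does not give the model’s form; one plausible bell-shaped point-to-point speed profile, shown here purely for illustration, is the minimum-jerk speed curve, which accelerates from rest, peaks mid-travel, and decelerates to rest at the target:

```python
def nat_speed(t: float, distance_m: float, duration_s: float) -> float:
    """Minimum-jerk speed profile for a point-to-point movement.
    Integrates to `distance_m` over `duration_s`, with zero speed at both
    ends; an assumed illustrative shape, not the authors' fitted model."""
    s = min(max(t / duration_s, 0.0), 1.0)          # normalized time in [0, 1]
    return (distance_m / duration_s) * 30.0 * s**2 * (1.0 - s)**2
```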

Of Leaders and Directors: A visual model to describe and analyse persistent visual cues directing to single out-of-view targets

Researchers have devised many visual cues that can guide Virtual Reality (VR) and Augmented Reality (AR) users to out-of-view objects. This paper provides a classification of cues and tasks, and a visual model to describe and analyse such cues to support their design.

Preliminary analysis of visual cognition estimation in VR toward effective assistance timing for iterative visual search tasks

This research aims to develop a method to assist iterative visual search tasks, focusing on visual cognition to achieve effective assistance. As a first step toward this goal, we analyzed participants’ gaze behaviors when they visually recognized a target in a VR environment. The experiment considered the effect of visual cognition difficulty (VCD). Analysis results show that participants could visually recognize lower-VCD targets earlier. This suggests that VCD-based guidance may improve task performance.

Preservation and Reproduction of Real Soundscapes in Virtual Space for the "100 Best Soundscapes in Japan"

In this study, we developed a soundscape system that reproduces real soundscapes three-dimensionally in a virtual space, based on the "100 Best Soundscapes in Japan" selected by the Ministry of the Environment. Using the Unity game engine, we propose a method of embedding recorded sound sources into virtual objects placed in the virtual space and playing them back sequentially as the user walks or turns around. With this system, the same sound field as in the real space can be reproduced in the virtual space, and the approach can be applied in various places.

ProMVR - Protein Multiplayer Virtual Reality Tool

Due to the limitations of the Covid-19 pandemic, people need to work at home and hold meetings virtually, and virtual meeting tools have become popular. These tools allow users to see each other through screens and cameras, chat through voice and text, and share content and ideas through screen sharing. However, sharing protein models through virtual meetings is not easy, because viewing protein 3D (three-dimensional) structures on a 2D (two-dimensional) screen is difficult, and interactions with a protein are also limited. ProMVR is a tool the author developed to tackle this issue: protein designers face limitations when working in a traditional 2D or 3D environment and may find it hard to communicate their ideas to other designers. As a VR tool, ProMVR allows users to “jump into” a virtual environment, take a close look at protein models, and interact with them intuitively.

Recreating a Medieval Mill as a Virtual Learning Environment

Historic buildings shown in open-air museums often lack good accessibility, and visitors can rarely interact with them or with the displayed tools to learn about historical processes. Providing these buildings in Virtual Reality could be a valuable supplement for museums, offering accessible and interactive experiences. To investigate the effectiveness of this approach and to derive design guidelines, we developed an interactive virtual replica of a medieval mill. We present the design of the mill and the results of a preliminary usability evaluation.

Remote Visual Line-of-Sight: A Remote Platform for the Visualisation and Control of an Indoor Drone using Virtual Reality

The COVID-19 pandemic has created a distinct challenge for researchers and educators who must pilot drones and other UAVs while restricted to working remotely. We propose a Remote Visual Line-of-Sight system that leverages the advantages of Virtual Reality (VR) and motion capture to allow users to fly a real-world drone from a remote location. The system was developed while our researcher (the VR operator) was working remotely in Vietnam, with the enclosed real-world environment located in Australia. Our paper presents the system design and the challenges encountered during development.

Safety First: A Study of Users’ Perception of VR Adoption in Vehicles

The increasing ubiquity and mobility of VR devices has introduced novel use cases, one of which is using VR in dynamic, on-the-go environments. Hence, there is a need to examine the perceptual, cognitive, and behavioral aspects of both the driving experience and VR immersion, and how they influence each other. As an initial step towards this goal, we report on the results of an online survey that investigated users’ perceived safety of using VR in an autonomous vehicle (AV). The results show a mix of expected and surprising attitudes towards VR-in-the-car.

SpArc: A VR Animating Tool at Your Fingertips

3D animation is becoming a popular form of storytelling in many fields, bringing life to games, films, and advertising. However, the complexity of conventional 3D animation software poses steep learning curves for novices. Our work aims to lower such barriers by creating a simple yet immersive interface that users can easily interact with. Based on focus-group interviews, we identified key functionalities in animation workflows. The resulting tool, SpArc, is designed for two-handed setups and allows users to dive into animating without a complex rigging and skinning process or learning multiple menu interactions. Instead of a conventional horizontal slider, we designed a radial time slider to reduce arm fatigue and enhance the accuracy of keyframe selection. The demo will showcase this interactive 3D animation tool.

Study of Heart Rate Visualizations on a Virtual Smartwatch

In this paper, we present three visualizations showing heart rate (HR) data collected over time. Two of the visualizations present a summary chart (bar or radial), summarizing the amount of time spent per HR zone (i.e., low, moderate, or high intensity). We conducted a pilot study with five participants to evaluate the efficiency of the visualizations for monitoring the intensity of an activity while playing a tennis-like Virtual Reality game. Preliminary results show that participants performed better (with respect to time and accuracy) with, and preferred, the bar chart summary.
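The data behind such a summary chart reduces to seconds spent per zone; a minimal sketch, with illustrative zone boundaries that are not the study’s:

```python
def time_per_zone(samples, low=120, high=150):
    """Summarize (timestamp_s, hr_bpm) samples into seconds spent per
    intensity zone, i.e. the data a bar or radial summary chart would
    display. Each interval is attributed to its starting sample's zone."""
    zones = {"low": 0.0, "moderate": 0.0, "high": 0.0}
    for (t0, hr), (t1, _) in zip(samples, samples[1:]):
        key = "low" if hr < low else "moderate" if hr < high else "high"
        zones[key] += t1 - t0
    return zones
```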

Swaying Locomotion: A VR-based Locomotion System through Head Movements

Locomotion systems used in virtual reality (VR) content have a significant impact on the user experience. One of the most important qualities of a walking system in VR is whether it can provide a plausible walking sensation, as this is considered directly related to the user’s sense of presence. However, the joystick-based and teleportation-based locomotion systems commonly used today can hardly provide an appropriate sense of presence. To solve this problem, we present Swaying Locomotion, a novel VR locomotion system that uses head movements to support a user walking in a VR space while actually sitting in real space. Our user study suggests that Swaying Locomotion provides a better walking sensation than the traditional joystick-based approach.

Table-Based Interactive System for Augmenting Japanese Food Culture Experience

Washoku, traditional Japanese food culture, was recognized as a social custom embodying the Japanese spirit of respect for nature and was registered as a UNESCO Intangible Cultural Heritage in 2013. However, an actual meal conveys only gustatory and visual information, such as taste, ingredients, tableware, and arrangement, so it is difficult to become thoroughly familiar with the cultural characteristics of Japanese cuisine. This study realized a system that conveys these characteristics, such as the importance of seasonality and ingredients, by displaying the cultural background of the food in text form. The natural environment is projected onto the table by a projector, and the seasons progress as the meal advances. The food was created in consultation with a chef to suit the system. Users who participated in our survey and experienced the system understood that Japanese cuisine is supported by the richness of nature and the seasons, and that these also shape traditional events.

Technical Factors Affecting Augmented Reality User Experiences in Sports Spectating

The maturity of augmented reality (AR) technology and research now paves the way for dissemination of AR outside of the laboratory. However, it is still under-explored which factors are influencing the user experience of an AR application. In this poster, we describe some of the technical factors that could influence the user experience. We focus on a use-case in the field of on-site sports spectating with mobile AR. We present a study design which analyzes the influence of latency, registration accuracy, and jitter as factors on AR user experience.

The Application of Virtual Reality in Student Recruitment

In this paper we present details of a virtual tour and a game for a VR headset, designed to investigate an interactive and engaging approach to applying VR to student recruitment for an undergraduate course. The VR tour employs a floating menu to navigate through a set of 360° panoramic photographs of the teaching environment and uses hotspot interaction to display further information about the course. The VR game is a fast-paced shooting game in which the course information is embedded on cubes that the player needs to focus on and destroy. The game experience is expected to offer an engaging way to promote the course. This work in progress outlines the concept and development of the prototype and discusses the next stages of testing to evaluate the effectiveness of applying VR to undergraduate student recruitment.

The Effect of 2D Stylized Visualization of the Real World for Obstacle Avoidance and Safety in Virtual Reality System Usage

Using virtual reality systems with a head-mounted display can incur interaction difficulties and safety problems because the user’s view is isolated from the real-world operating space. One possible solution is to superimpose real-world objects or environment information onto the virtual scene, and a variety of such visualization methods have been proposed, all hoping to minimize the negative effects of introducing foreign elements into the original virtual scene. In this poster, we propose applying neural style transfer to blend the real-world operating environment into the style of the given virtual space, making the superimposed image as natural as possible and maintaining the sense of immersion with the least distraction. Our pilot experimental study showed that the stylization obscured the clear presentation of the environment and worsened or did not improve safe user performance, nor was it considered sufficiently natural.

UGRA in VR: A Virtual Reality Simulation for Training Anaesthetists

We present a virtual reality training simulator for medical interns practicing ultrasound-guided regional anaesthesia (UGRA). UGRA is a type of nerve block procedure performed commonly by critical care doctors such as anaesthetists, emergency medicine physicians, and paramedics. The procedure is complex and requires intense training: it is traditionally taught one-on-one by experts and is performed on simulated models long before attempting the procedure on live patients. We present our virtual reality application that allows this procedure to be trained in a simulated environment, making the training of future doctors performing UGRA safer and more cost-efficient than current approaches.

Using Hand Tracking and Voice Commands to Physically Align Virtual Surfaces in AR for Handwriting and Sketching with HoloLens 2

In this paper, we adapt an existing VR framework for handwriting and sketching on physically aligned virtual surfaces to AR environments using the Microsoft HoloLens 2. We demonstrate a multimodal input metaphor to control the framework’s calibration features using hand tracking and voice commands. Our technical evaluation of fingertip/surface accuracy and precision on physical tables and walls is in line with existing measurements on comparable hardware, albeit considerably lower compared to previous work using controller-based VR devices. We discuss design considerations and the benefits of our unified input metaphor suitable for controller tracking and hand tracking systems. We encourage extensions and replication by providing a publicly available reference implementation (https://go.uniwue.de/hci-otss-hololens).

Validating Social Distancing through Deep Learning and VR-Based Digital Twins

The Covid-19 pandemic resulted in a catastrophic loss to global economies, and social distancing was consistently found to be an effective means to curb the virus's spread. However, it is only effective when every individual partakes in it with equal alacrity. Past literature outlined scenarios where computer vision was used to detect people and to enforce social distancing automatically. We have created a Digital Twin (DT) of an existing laboratory space for remote monitoring of room occupancy and automatic detection of social distancing violations. To evaluate the proposed solution, we implemented a Convolutional Neural Network (CNN) model for detecting people, both in a limited-sized dataset of real humans and in a synthetic dataset of humanoid figures. Our computer vision models are validated on both real and synthetic data in terms of accurately detecting persons, posture, and the distances between people.
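A minimal sketch of the distancing check, assuming detections have already been projected to ground-plane coordinates in the digital twin (the threshold value and input format are illustrative):

```python
import itertools
import math

def violations(people, min_dist_m=1.5):
    """Flag pairs of detected people closer than the distancing threshold.
    `people` holds ground-plane (x, z) positions in metres, e.g. obtained
    by projecting CNN detections onto the digital twin's floor plane."""
    return [(i, j)
            for (i, p), (j, q) in itertools.combinations(enumerate(people), 2)
            if math.dist(p, q) < min_dist_m]
```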

Visual Transition of Avatars Improving Speech Comprehension in Noisy VR Environments

Comfortable communication in VR spaces requires good speech comprehension in environmental noise. Although there have been many reports on the interaction between vision and hearing, few studies use noisy VR spaces. In this study, sixteen Japanese men and women listened to sentences in a VR space with environmental noise, and we evaluated the effect of the avatar's visual stimulus on speech comprehension against the environmental noise using the up-and-down method. The results showed that the cocktail party effect also occurs with VR avatars and that it persists even after the avatar visually vanishes. In addition, the results suggest that the cocktail party effect is enhanced when the avatar's lips are correctly synchronized.
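For readers unfamiliar with the up-and-down method, a generic 1-up/1-down staircase over the signal-to-noise ratio converges on the 50%-intelligibility threshold; a minimal sketch, with generic parameters rather than the study’s:

```python
def up_down_threshold(trial, start_snr_db=0.0, step_db=2.0, n_reversals=8):
    """Simple 1-up/1-down staircase: `trial(snr)` returns True if the
    sentence was understood at that SNR. The SNR is lowered after a
    correct response and raised after an incorrect one; the threshold
    estimate is the mean SNR at direction reversals."""
    snr, last_correct, reversals = start_snr_db, None, []
    while len(reversals) < n_reversals:
        correct = trial(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)                 # direction just flipped
        snr += -step_db if correct else step_db   # harder after success
        last_correct = correct
    return sum(reversals) / len(reversals)
```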

Visualisation methods for patient monitoring in anaesthetic procedures using augmented reality

In health care, there are still many devices with poorly designed user interfaces that can lead to user errors. Especially in acute care, an error can lead to critical conditions in patients. Previous research has shown that the use of augmented reality can help to better monitor the condition of patients and to better detect unforeseen events. The system created in this work is intended to aid in the detection of changes in patient and equipment data in order to increase the detection of critical conditions or errors.

VR Rehearse & Perform - A platform for rehearsing in Virtual Reality

In this paper, we propose VR Rehearse & Perform - a Virtual Reality application for enhancing the rehearsal efforts of performers by providing them access to accurate recreations - both visual and acoustical - of iconic concert venues.

VRBT: A Non-pharmacological VR approach towards hypertension

Hypertension is a prevalent disease known to affect the vascular system, especially in people with poor living habits and lifestyles. Virtual reality (VR) is effective at engaging people, relieving their stress, and lifting their mood, but it has rarely been applied to managing blood pressure and hypertension. In this paper, we consider how hypertension can be addressed with VR devices and design a virtual reality river bathing therapy (VRBT) that combines traditional methods through sensory stimulation, audio interventions, and motor training.

XRSpectator: Immersive, Augmented Sports Spectating

In-stadium sports spectating delivers a unique social experience across a variety of sports. In contrast to broadcast delivery, however, it lacks real-time information augmentation, such as game statistics overlaid on screen. In an earlier iteration, we developed ARSpectator, a prototypical mobile system that can be brought to the stadium to experience both the live sport action and situated infographics spatially augmented into the scene. In some situations, however, it is difficult or impossible to go to the stadium, for instance because of limited stadium access during pandemics or when conducting controlled user studies. We address this by turning our ARSpectator system into an indirect augmented reality experience deployed on an immersive virtual reality head-mounted display: the live stadium experience is delivered via a surrounding 360° video recording while maintaining and extending the provision of interactive, situated infographics. With the XRSpectator demo prototype presented here, users can have an ARSpectator experience of a rugby game in our local stadium.