VRST 2023: 29th ACM Symposium on Virtual Reality Software and Technology

SESSION: Session 1: Locomotion & Games

Planning Locomotion Techniques for Virtual Reality Games

Locomotion is a fundamental component of many virtual reality (VR) games. However, few techniques have been designed with games’ demands in mind. In this paper, we propose two locomotion techniques for fast-paced VR games: Repeated Short-Range Teleports and Continuous Movement Pads. We conducted a user study with 27 participants comparing these techniques against Smooth Locomotion and Teleport in a game-like scenario. We found that Movement Pads can be a suitable alternative for games, with competitive performance on various criteria such as time, damage taken, usability, workload, and user preference. Repeated Short-Range Teleport, on the other hand, displayed lower usability and higher mental workload.

Versatile Mixed-method Locomotion under Free-hand and Controller-based Virtual Reality Interfaces

Locomotion systems that allow the user to interact with large virtual spaces require precise input, competing for the same inputs available for performing a task in the virtual world. Despite extensive research on hand-tracking input modalities, there is no widely adopted mechanism that offers general-purpose, high-precision locomotion across various applications. This research addresses this gap by proposing a design that combines teleportation with a grab-pull locomotion scheme to bridge the divide between long-distance and high-precision locomotion in both tracked-controller and free-hand environments. The implementation details for both environments are presented and evaluated through a user study. The study findings indicate that each locomotion mechanism holds value for different tasks, with grab-pull providing more benefit in scenarios where smaller, more precise positioning is required. Consistent with prior research, controller tracking was faster than hand tracking, but all participants were able to use the locomotion system successfully with both interfaces.
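As an aside for readers unfamiliar with grab-pull schemes, the following minimal sketch illustrates the core mechanism; the class, names, and gain parameter are our own illustration, not the authors’ implementation:

```python
import numpy as np

class GrabPullLocomotion:
    """Illustrative grab-pull locomotion: while grabbing, the user moves
    opposite to the hand's frame-to-frame displacement, as if dragging the
    world past themselves. Names and `gain` are assumptions, not the paper's."""

    def __init__(self, gain=1.0):
        self.gain = gain
        self.prev_hand = None  # hand position on the previous frame

    def update(self, grabbing, hand_pos, user_pos):
        hand = np.asarray(hand_pos, dtype=float)
        user = np.asarray(user_pos, dtype=float)
        if not grabbing or self.prev_hand is None:
            # Start (or end) of a grab: remember the anchor, do not move yet
            self.prev_hand = hand if grabbing else None
            return user
        delta = hand - self.prev_hand
        self.prev_hand = hand
        # Pulling the hand toward the body translates the user forward
        return user - self.gain * delta
```

A gain above 1.0 would let short arm motions cover larger distances, trading precision for reach.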

Exploring User Engagement in Immersive Virtual Reality Games through Multimodal Body Movements

User engagement in Virtual Reality (VR) games is crucial for creating immersive and captivating gaming experiences that meet players’ expectations. However, understanding and measuring engagement levels in VR games presents a challenge for game designers, as current methods, such as self-reports, may be limited in capturing the full extent of user engagement. Additionally, approaches based on biological signals to measure engagement in VR games present complications and challenges, including signal complexity, interpretation difficulties, and ethical concerns. This study explores body movements as a novel approach to measuring user engagement in VR gaming. We employ the E4, emteqPRO, and off-the-shelf IMUs to measure the body movements of diverse participants engaged in multiple VR games. Further, we examine the simultaneous occurrence of player motivation and physiological responses to explore potential associations with body movements. Our findings suggest that body movements hold promise as a reliable and objective indicator of user engagement, offering game designers valuable insights for creating more engaging and immersive experiences.

Cross-Reality Gaming: Comparing Competition and Collaboration in an Asymmetric Gaming Experience

Due to the level of immersion and differences in the user interface, there can be a very large discrepancy in user experience between users of immersive and non-immersive systems when playing games together. To investigate the impact of the cross-reality experience, which refers to the asymmetric use of eXtended Reality, we aim to understand the different affordances and experiences in an asymmetric setup, where one participant uses a desktop setup with a mouse and keyboard and the other uses a virtual reality (VR) headset and controller, in two different task modes: Competition and Collaboration. In our research, a pair of participants played a game in real time, one using the VR setup and one the desktop setup. In Competition mode, the two participants were asked to defeat each other. In Collaboration mode, the pair played as a team and were asked to defeat a pair of AI enemies. Our results show that the VR group reported a better gaming experience and perceptual responses than the desktop group regardless of game mode, but that the desktop group showed superior gaming performance in Competition mode.

SESSION: Session 2: Interaction I

Does One Keyboard Fit All? Comparison and Evaluation of Device-Free Augmented Reality Keyboard Designs

Virtual keyboard designs are widely discussed with the increasing prevalence of head-mounted and lightweight Mixed Reality devices. However, isolated design suggestions with distinct implementations may lack comparability in terms of performance, learnability, and user preference. We compare three promising device-free text-entry solutions for Augmented Reality (AR) on the Microsoft HoloLens 2. The virtual keyboards comprise dwell-based eye-gaze input, eye-gaze with pinch-gesture-commit input, and mid-air tap typing on virtual QWERTY keyboards. We conducted a controlled within-subjects lab experiment with 27 subjects, measuring typing performance, task load, usability, and preference across the three keyboards. Users state distinct preferences for the respective keyboards and weigh their advantages and disadvantages differently. Considering diverse usage scenarios, subjects would even prefer these input modes over speech or physical keyboard input. The results indicate that virtual keyboard design should be tailored to individual user preferences. This study thus provides essential insights into designing AR keyboards for heterogeneous user groups.

Exploring Augmented Reality for Situated Analytics with Many Movable Physical Referents

Situated analytics (SitA) uses visualization in the context of physical referents, typically by using augmented reality (AR). We want to pave the way toward studying SitA in more suitable and realistic settings. Toward this goal, we contribute a testbed to evaluate SitA based on a scenario in which participants play the role of a museum curator and need to organize an exhibition of music artifacts. We conducted two experiments: First, we evaluated an AR headset interface and the testbed itself in an exploratory manner. Second, we compared the AR headset to a tablet interface. We summarize the lessons learned as guidance for designing and evaluating SitA.

Exploring Users' Pointing Performance on Virtual and Physical Large Curved Displays

Large curved displays have emerged as a powerful platform for collaboration, data visualization, and entertainment. These displays provide highly immersive experiences, a wider field of view, and higher satisfaction levels. Yet, large curved displays are not commonly available due to their high costs. With the recent advancement of Head-Mounted Displays (HMDs), large curved displays can be simulated in Virtual Reality (VR) with minimal cost and space requirements. However, to consider the virtual display as an alternative to the physical display, it is necessary to uncover user performance differences (e.g., pointing speed and accuracy) between the two platforms. In this paper, we explore users’ pointing performance on both physical and virtual large curved displays. Specifically, in two studies, we investigate users’ performance across the two platforms for standard pointing factors such as target width and target amplitude, as well as users’ position relative to the screen. Results from the user studies reveal no significant difference in pointing performance between the two platforms when users are located at the same position relative to the screen. In addition, we observe that users’ pointing performance improves when they are located at the center of a semi-circular display compared to off-centered positions. We conclude by outlining design implications for pointing on large curved virtual displays. These findings show that large curved virtual displays are a viable alternative to physical displays for pointing tasks.
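For context, target width and amplitude are the standard Fitts’-law factors used in such pointing studies. The sketch below shows the conventional Shannon formulation of the index of difficulty and throughput; it is a generic illustration, not code from the paper:

```python
import math

def index_of_difficulty(amplitude, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(amplitude / width + 1.0)

def throughput(amplitude, width, movement_time_s):
    """Throughput in bits/s for one pointing condition."""
    return index_of_difficulty(amplitude, width) / movement_time_s

# Example: a 0.60 m reach to a 0.05 m target completed in 0.9 s
print(throughput(0.60, 0.05, 0.9))  # ~4.1 bits/s
```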

Re-investigating the Effect of the Vergence-Accommodation Conflict on 3D Pointing

The vergence-accommodation conflict (VAC) limits user performance in current Virtual Reality (VR) systems. In this paper, we investigate the effects of the VAC in a single-focal VR system using three experimental conditions: no VAC, a constant VAC, and a varying VAC. Previous work in this area had yielded conflicting results, so we decided to re-investigate this issue. Eighteen participants performed an ISO 9241-411 task in a study that closely replicates previous work, except that the task space was rotated 20 degrees downward to make the task less fatiguing to perform, which addresses a potential confound in previous work. We found that the varying VAC condition had worse performance than the other conditions, which indicates that the contrasting results in previous work were very likely due to biomechanical factors. We hope that our work contributes to the understanding of the influence of the VAC in VR systems and to potential strategies for improving user experience and performance in immersive virtual environments.

SESSION: Session 3: Interaction II

Dialogues For One: Single-User Content Creation Using Immersive Record and Replay

Non-player characters are an essential element of many 3D and virtual reality experiences, making them feel more lively and populated. Animations for non-player characters are often motion-captured using expensive hardware, and the post-processing steps are time-consuming, especially when capturing multiple people at once. Using record-and-replay techniques in virtual reality can offer a cheaper and easier form of motion capture, since the user is already tracked. We use immersive record and replay to enable a single user to create stacked recordings of themselves. We provide tools to help the user interact with their previously recorded self and, in doing so, allow them to create believable interactive scenarios with multiple characters that can be used to populate virtual environments. We created a small dialogue dataset with two amateur actors who used our tool to record dialogues alone and together in virtual reality. To evaluate whether stacked recordings are qualitatively comparable to conventional multi-user recordings and whether people could tell the difference between the two, we conducted two user studies, one online and one in virtual reality, with 89 participants in total. We found that participants could not tell the difference and even slightly preferred the stacked recordings.

Dynascape: Immersive Authoring of Real-World Dynamic Scenes with Spatially Tracked RGB-D Videos

In this paper, we present Dynascape, an immersive approach to the composition and playback of dynamic real-world scenes in mixed and virtual reality. We use spatially tracked RGB-D cameras to capture point cloud representations of arbitrary dynamic real-world scenes. Dynascape provides a suite of tools for the spatial and temporal editing and composition of such scenes, as well as fine control over their visual appearance. We also explore strategies for spatiotemporal navigation and different tools for the in situ authoring and viewing of mixed and virtual reality scenes. Dynascape is intended as a research platform for exploring the creative potential of dynamic point clouds captured with mobile, tracked RGB-D cameras. We believe our work represents a first attempt to author and play back spatially tracked RGB-D video in an immersive environment, opening up new possibilities for incorporating dynamic 3D scenes into virtual space.

Exploring Unimodal Notification Interaction and Display Methods in Augmented Reality

As we develop computing platforms for augmented reality (AR) head-mounted display (HMD) technologies for social and workplace environments, understanding how users interact with notifications in immersive environments has become crucial. We researched the effectiveness of, and user preferences for, different interaction modalities for notifications, along with two types of notification display methods. In our study, participants were immersed in a simulated cooking environment using an AR HMD, where they had to fulfill customer orders. During the cooking process, participants received notifications related to customer orders and ingredient updates. They were given three interaction modes for those notifications: voice commands, eye gaze with dwell, and hand gestures. To manage multiple notifications at once, we also investigated two different notification list displays, one attached to the user’s hand and one placed in the world. Results indicate that participants preferred using their hands to interact with notifications and having the list of notifications attached to their hands. Voice and gaze interaction was perceived as having lower usability than touch.

Intuitive User Interfaces for Real-Time Magnification in Augmented Reality

There are various reasons why humans desire to magnify portions of their visually perceived surroundings, e.g., because these are too far away or too small to see with the naked eye. Different technologies are used to facilitate magnification, from telescopes to microscopes, using monocular or binocular designs. In particular, modern digital cameras capable of optical and/or digital zoom are very flexible, as their high-resolution imagery can be presented to users in real time with displays and interfaces allowing control over the magnification. In this paper, we present a novel design space of intuitive augmented reality (AR) magnifications in which an AR head-mounted display is used to present real-time magnified camera imagery. We present a user study evaluating and comparing different visual presentation methods and AR interaction techniques. Our results show different advantages for unimanual, bimanual, and situated AR magnification window interfaces; near versus far vergence distances for the image presentation; and five different user interfaces for specifying the scaling factor of the imagery.

SESSION: Session 4: Displays & Perception

Retinal Homing Display: Head-Tracking Auto-stereoscopic Retinal Projection Display

This paper introduces Retinal Homing Display, which presents focus-free stereoscopic images via retinal projection, thus eliminating the need for the user to wear additional equipment. Traditional 3D displays, typically classified as either naked-eye stereoscopic or wearable, present inherent challenges: the former involves a compromise between resolution and accurate depth perception, while the latter imposes an additional burden on the user. Our proposed display employs optical and mechanical mechanisms to converge projector light at the user’s pupil center, simultaneously tracking eye movements. This lets the user perceive focus-free, high-resolution stereoscopic images without wearable equipment. We implemented a proof-of-concept system utilizing a robotic arm and a Dihedral Corner Reflector Array (DCRA), subsequently evaluating image quality and its eyebox. Finally, we discuss the limitations of the current prototype and outline potential directions for future research.

When Filters Escape the Smartphone: Exploring Acceptance and Concerns Regarding Augmented Expression of Social Identity for Everyday AR

Mass adoption of Everyday Augmented Reality (AR) glasses will enable pervasive augmentation of our expression of social identity through AR filters, transforming our perception of self and others. However, despite filters’ prominent and often problematic usage in social media, research has yet to reflect on the potential impact AR filters might have when brought into everyday life. Informed by our survey of 300 existing popular AR filters used on Snapchat, Instagram, and TikTok, we conducted an AR-in-VR user study where participants (N=24) were exposed to 18 filters across six categories. We evaluated the social acceptability of these augmentations around others and attitudes towards an individual’s augmented self. Our findings highlight 1) how users broadly respected another individual’s augmented self; 2) positive use cases, such as supporting the presentation of gender identity; and 3) tensions around applying AR filters to others (e.g., censorship, changing protected characteristics) and their impact on self-perception (e.g., perpetuating unrealistic beauty standards). We raise questions regarding the rights of individuals to augment and be augmented that provoke the need for further consideration of AR augmentations in society.

From Clocks to Pendulums: A Study on the Influence of External Moving Objects on Time Perception in Virtual Environments

This paper investigates the relationship between perceived object motion and the experience of time in virtual environments. We developed an application to measure how the motion properties of virtual objects and the degree of immersion and embodiment may affect the time experience. A first study (n = 145) was conducted remotely using an online video survey, while a second study (n = 60) was conducted under laboratory conditions in virtual reality (VR). Participants in both studies experienced seven different virtual objects in a randomized order and then answered questions about time experience. The VR study added an "embodiment" condition in which participants were either represented by a virtual full body or lacked any form of virtual body representation. In both studies, time was judged to pass faster when viewing oscillating motion in immersive and non-immersive settings and independently of the presence or absence of a virtual body. This trend was strongest when virtual pendulums were displayed. Both studies also found a significant inverse correlation between the passage of time and boredom. Our results support the development of applications that manipulate the perception of time in virtual environments for therapeutic use, for instance, for disorders such as depression, autism, and schizophrenia. Disturbances in the perception of time are known to be associated with these disorders.

SESSION: Session 5: Assistive & Gaze

Visual Hearing Aids: Artificial Visual Speech Stimuli for Audiovisual Speech Perception in Noise

Speech perception is optimal in quiet environments, but noise can impair comprehension and increase errors. In these situations, lip reading can help, but it is not always possible, such as during an audio call or when the speaker wears a face mask. One approach to improving speech perception in these situations is an artificial visual lip-reading aid. In this paper, we present a user study (N = 17) in which we compared three levels of audio stimulus visualization and two levels of modulating the visualization’s appearance based on the speech signal against two control conditions: an audio-only condition and a real human speaking. We measured participants’ speech reception thresholds (SRTs) to understand the effects of these visualizations on speech perception in noise. These thresholds indicate the decibel level of the speech signal necessary for a listener to receive the speech correctly 50% of the time. Additionally, we measured the usability of the approaches and the user experience. We found that the artificial visualizations improved participants’ speech reception compared to the audio-only baseline condition, but were significantly poorer than the real human condition. This suggests that such visualizations can improve speech perception when the speaker’s face is not available. However, we also discuss the limitations of current plug-and-play lip-sync software and abstract representations of the speaker in the context of speech perception.

Music Therapy in Virtual Reality for Autistic Children with Severe Learning Disabilities

Music Therapy (MT) has shown many benefits in helping autistic children, but some challenges remain due to children’s social anxiety and sensory issues. Yet, very few studies have investigated how Virtual Reality (VR) could help increase the accessibility of MT approaches. This paper presents an exploratory study investigating the use of VR to perform MT sessions for autistic children with severe learning disabilities and complex needs, evaluated in terms of acceptability, usability, and social communication. A collaborative MT approach was designed in close collaboration with music therapists from Denmark and psychologists from France, using head-mounted-display-based VR. Testing was conducted with thirteen children with various neurodevelopmental conditions and intellectual disabilities at a children’s day hospital in Paris. The results indicate positive acceptability and usability for these children, and suggest a positive effect of MT in VR on communication.

Gaze Assistance for Older Adults during Throwing in Virtual Reality and its Effects on Performance and Motivation

Initial motivation when starting exergaming is a key factor in enabling long-term engagement and adherence, especially among older adults. To increase the initial motivation of older adults in particular, we introduce the concept of diminishing gaze assistance (GA), assess its feasibility for virtual reality (VR) exergames, and investigate its effects on motor learning, performance, and motivation in older adult users. First, we conducted a focus group followed by a pre-study on the development of VR exergames for older adults and VR gaze assistance. The results informed the design and implementation of our gaze-assisted throwing exergame, which was then evaluated in a follow-up main study. Participants in the main study were randomly assigned to the GA or Motor (control) group and had to complete a VR throwing task in which they aimed and threw at three targets at varying angles. The GA group received declining gaze assistance: the ball trajectory was initially guided by their gaze (rather than their physical (motor) throwing) before guidance was gradually reduced until their physical (motor) throwing ability was solely responsible for hitting the target. Motivation and user experience were assessed using the Questionnaire on Current Motivation before and during the task, and the Short Scale of Intrinsic Motivation after the task. The results show that the GA was generally perceived positively. In particular, the initial confidence of the GA group was rated higher, and we observed evidence suggesting increased confidence throughout the trial.

GazeRayCursor: Facilitating Virtual Reality Target Selection by Blending Gaze and Controller Raycasting

Raycasting is a common method for target selection in virtual reality (VR). However, it results in selection ambiguity whenever a ray intersects multiple targets located at different depths. To resolve these ambiguities, we estimate object depth by projecting the closest intersection between the gaze and controller rays onto the controller ray. An evaluation of this method found that it significantly outperformed a previous eye-convergence depth estimation technique. Based on these results, we developed GazeRayCursor, a novel selection technique that enhances raycasting by leveraging gaze for object depth estimation. In a second study, we compared two variations of GazeRayCursor with RayCursor, a recent technique developed for a similar purpose, in a dense target environment. The results indicated that GazeRayCursor decreased selection time by 45.0% and reduced manual depth adjustments by a factor of 10. Our findings show that GazeRayCursor is an effective method for target disambiguation in VR selection without incurring extra effort.
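The depth-estimation step described above reduces to the classic closest-point computation between two skew lines, projected onto the controller ray. A minimal sketch follows; variable names are ours, not the paper’s:

```python
import numpy as np

def depth_on_controller_ray(p_ctrl, d_ctrl, p_gaze, d_gaze):
    """Return the parameter t along the controller ray that is closest to the
    gaze ray, so the cursor sits at p_ctrl + t * d_ctrl. Directions are
    assumed to be unit vectors. Illustrative only."""
    w = p_ctrl - p_gaze
    b = np.dot(d_ctrl, d_gaze)   # cosine of the angle between the rays
    d = np.dot(d_ctrl, w)
    e = np.dot(d_gaze, w)
    denom = 1.0 - b * b
    if abs(denom) < 1e-6:        # near-parallel rays: depth is ill-defined
        return None
    t = (b * e - d) / denom      # closest-point parameter on the controller ray
    return max(t, 0.0)           # clamp so the cursor stays in front of the hand
```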

SESSION: Session 6: Teaching & Collaboration

Evaluating Augmented Reality Communication: How Can We Teach Procedural Skill in AR?

Augmented reality (AR) has great potential for healthcare applications, especially remote medical training and supervision. In this paper, we analyze the use of an AR communication system to teach a medical procedure: the placement of a central venous catheter (CVC) under ultrasound guidance. We examine various AR communication and collaboration components, including gestural communication, volumetric information, annotations, augmented objects, and augmented screens. We compare how teaching in AR differs from teaching through videoconferencing-based communication. Our results include a detailed analysis of the medical training steps, in which we compare how verbal and visual communication differ between video and AR training. We identify procedural steps in which medical experts give visual instructions utilizing AR components. We examine the change in AR usage and interaction over time and recognize patterns between users. Moreover, we give AR design recommendations based on post-training interviews.

Hands-on DNA: Exploring the Impact of Virtual Reality on Teaching DNA Structure and Function

Molecular biology is a demanding subject, requiring students to master abstract, three-dimensional (3D) concepts across a range of spatial scales. Virtual reality (VR) is a medium that excels at portraying scale and 3D concepts, and it allows people to have tangible experiences of otherwise intangible subjects. This paper describes Hands-on DNA, a virtual reality learning experience for teaching undergraduate university students about the scale and structure of deoxyribonucleic acid (DNA), a central molecule in molecular biology. The intention of Hands-on DNA is to leverage the advantages of virtual reality against specific challenges faced in teaching molecular biology. We derive design requirements motivated by pedagogy, provide guidelines, and discuss lessons learned during development. Our user study shows that students perceive Hands-on DNA as a fun, engaging, and effective learning tool, and that it addresses some of the weaknesses in molecular biology education. Our results also suggest that new interaction techniques to support learning in VR need to be developed (e.g., for note taking) and that the increasing penetration of recreational VR raises students’ expectations and hence the risk of their being disappointed by VR learning tools.

Measuring and Comparing Collaborative Visualization Behaviors in Desktop and Augmented Reality Environments

Augmented reality (AR) provides a significant opportunity to improve collaboration between co-located team members jointly analyzing data visualizations, but rigorous studies are lacking. We present a novel method for qualitatively encoding the positions of co-located users collaborating with head-mounted displays (HMDs) to assist in reliably analyzing collaboration styles and behaviors. We then perform a user study on the collaborative behaviors of multiple co-located, synchronously collaborating users in AR to demonstrate this method in practice and to address the shortfall of such studies in the existing literature. Pairs of users performed analysis tasks on several data visualizations using both AR and traditional desktop displays. To provide a robust evaluation, we collected several types of data, including software logging of participant positioning, qualitative analysis of video recordings of participant sessions, and pre- and post-study questionnaires including the NASA TLX survey. Our results suggest that the independent viewports of AR headsets reduce the need to verbally communicate about navigating around the visualization and encourage face-to-face and non-verbal communication. Our novel positional encoding method also revealed that the overlap of task and communication spaces varies based on the needs of the collaborators.

Vicarious: Context-aware Viewpoints Selection for Mixed Reality Collaboration

Mixed-perspective experiences, combining egocentric (first-person) and exocentric (third-person) viewpoints, have been shown to improve collaboration in remote settings. Such experiences allow remote users to switch between different viewpoints to gain alternative perspectives of the remote space. However, existing systems lack seamless selection of, and transition between, the multiple perspectives that best fit the task at hand. To address this, we present a new approach called Vicarious, which simplifies and automates the selection between egocentric and exocentric viewpoints. Vicarious employs a context-aware method for dynamically switching to, or highlighting, the optimal viewpoint based on user actions and the current context. To evaluate the effectiveness of the viewpoint selection method, we conducted a user study (n = 27) using an asymmetric AR-VR setup in which users performed remote collaboration tasks under four conditions: No-view, Manual, Guided, and Automatic selection. The results showed that Guided and Automatic viewpoint selection improved users’ understanding of the task space and task performance and reduced cognitive load compared to Manual or No-view selection. The results also suggest that the asymmetric setup had minimal impact on spatial and social presence, except for differences in task load and preference. Based on these findings, we provide design implications for future research on mixed reality collaboration.

SESSION: Session 7: Technologies in the Wild

Ready Worker One? High-Res VR for the Home Office

Many employees prefer to work from home, yet struggle to squeeze their office into an already fully-utilized space. Virtual Reality (VR) seemingly offered a solution with its ability to transform even modest physical spaces into spacious, productive virtual offices, but hardware challenges—such as low resolution—have prevented this from becoming a reality. Now that hardware issues are being overcome, we are able to investigate the suitability of VR for daily work. To do so, we (1) studied the physical space that users typically dedicate to home offices and (2) conducted an exploratory study of users working in VR for one week. For (1) we used digital ethnography to study 430 self-published images of software developer workstations in the home, confirming that developers faced myriad space challenges. We used speculative design to re-envision these as VR workstations, eliminating many challenges. For (2) we asked 10 developers to work in their own home using VR for about two hours each day for four workdays, and then interviewed them. We found that working in VR improved focus and made mundane tasks more enjoyable. While some subjects reported issues—annoyances with the fit, weight, and umbilical cord of the headset—the vast majority of these issues seem to be addressable. Together, these studies show VR technology has the potential to address many key problems with home workstations, and, with continued improvements, may become an integral part of creating an effective workstation in the home.

UniteXR: Joint Exploration of a Real-World Museum and its Digital Twin

The combination of smartphone Augmented Reality (AR) and Virtual Reality (VR) makes it possible for on-site and remote users to simultaneously explore a physical space and its digital twin through an asymmetric Collaborative Virtual Environment (CVE). In this paper, we investigate two spatial awareness visualizations enabling joint exploration of a space by dyads consisting of a smartphone AR user and a head-mounted-display VR user. Our study revealed that both a mini-map-based method and an egocentric compass method with a path visualization enabled on-site visitors to locate and follow a virtual companion reliably and quickly. Furthermore, the embodiment of the AR user as an inverse-kinematics avatar allowed the use of natural gestures such as pointing and waving, which our study participants preferred over text messages. In an expert review in a museum and its digital twin, we observed overall high social presence for on-site AR and remote VR visitors and found that the visualizations and the avatar embodiment successfully facilitated their communication and collaboration.

Comparing Mixed Reality Agent Representations: Studies in the Lab and in the Wild

Mixed-reality systems provide a number of different ways of representing users to each other in collaborative scenarios. There is an obvious tension between using media such as video for remote users and representing them as avatars. This paper includes two experiments (total n = 80) on user trust when participants are exposed to two of three different user representations in an immersive virtual reality environment that also serves as a simulation of typical augmented reality scenarios: full-body video, head-and-shoulders video, and an animated 3D model. These representations acted as advisors in a trivia quiz. By evaluating trust through advisor selection and self-report, we found only minor differences between representations, but a strong effect of perceived advisor expertise. Unlike prior work, we did not find that the 3D model scored poorly on trust, perhaps as a result of greater congruence within an immersive context.

Dynamic Theater: Location-Based Immersive Dance Theater, Investigating User Guidance and Experience

Dynamic Theater explores the use of augmented reality (AR) in immersive theater as a platform for digital dance performances. The project presents a locomotion-based experience that allows for full spatial exploration. A large indoor AR theater space was designed to allow users to freely explore the augmented environment. The curated wide-area experience employs various guidance mechanisms to direct users to the main content zones. Results from our 20-person user study show how users experience the performance piece while using a guidance system. The importance of stage layout, guidance system, and dancer placement in immersive theater experiences is highlighted, as these cater to user preferences while enhancing the overall reception of digital content in wide-area AR. Observations from working with dancers and choreographers, as well as their experience and feedback, are also discussed.

SESSION: Session 8: Measuring Behaviour

Revisiting Consumed Endurance: A NICE Way to Quantify Shoulder Fatigue in Virtual Reality

Virtual Reality (VR) is increasingly being adopted in fitness, gaming, and workplace productivity applications for its natural interaction with body movement. A widely accepted method for quantifying the physical fatigue caused by VR interactions is through metrics such as Consumed Endurance (CE). Proposed in 2014, CE calculates the shoulder torque to infer endurance time (ET), i.e., the maximum amount of time a pose can be maintained, during mid-air interactions. This model remains widely cited but has not been closely examined beyond its initial evaluation, leaving untested assumptions about exertion from low-intensity interactions and about its basis in torque. In this paper, we present two VR studies in which we (1) collect a baseline dataset that replicates the foundation of CE and (2) extend the initial evaluation of a pointing task from a two-dimensional (2D) screen to a three-dimensional (3D) immersive environment. Our baseline dataset, collected with a high-precision tracking system, shows that the CE model overestimates ET for low-exertion interactions. Further, our studies reveal that a biomechanical model based only on torque cannot account for the additional exertion measured when the shoulder exceeds 90° elevation. Based on these findings, we propose a revised formulation of CE and highlight the need for a hybrid approach in future fatigue modelling.
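For context, a compact sketch of the CE computation follows. The constants are those of the Rohmert-style endurance curve as we read the original 2014 formulation; treat the code as an illustration rather than a reference implementation:

```python
def consumed_endurance(torque_nm, max_torque_nm, interaction_time_s):
    """Sketch of the CE model: shoulder torque as a percentage of maximum
    strength, a Rohmert-style endurance-time curve, and CE as the fraction
    of endurance time consumed by the interaction."""
    strength_pct = 100.0 * torque_nm / max_torque_nm
    if strength_pct <= 15.0:
        # Below ~15% of maximum strength the curve predicts unbounded
        # endurance; this is the low-exertion regime the paper re-examines.
        return 0.0
    endurance_time_s = 1236.5 / ((strength_pct - 15.0) ** 0.618) - 72.5
    return 100.0 * interaction_time_s / endurance_time_s  # CE in percent
```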

Cognitive Load Measurement with Physiological Sensors in Virtual Reality during Physical Activity

Many Virtual Reality (VR) experiences, such as learning tools, would benefit from utilising mental states such as cognitive load. Increases in cognitive load (CL) are often reflected in the alteration of physiological responses, such as pupil dilation (PD), electrodermal activity (EDA), heart rate (HR), and electroencephalography (EEG). However, the relationships between these physiological responses and cognitive load are usually measured while participants sit in front of a computer screen, whereas VR environments often require a high degree of physical movement. This physical activity can affect the measured signals, making it unclear how suitable these measures are for use in interactive VR.

We investigate the suitability of four physiological measures as correlates of cognitive load in interactive VR. Suitable measures must be robust enough to allow the learner to move within VR and be temporally responsive enough to be a useful metric for adaptation. We recorded PD, EDA, HR, and EEG data from nineteen participants during a sequence memory task at varying levels of cognitive load using VR, while in the standing position and using their dominant arm to play a game. We observed significant linear relationships between cognitive load and PD, EDA, and EEG frequency band power, but not HR. PD showed the most reliable relationship but has a slower response rate than EEG. Our results suggest the potential for use of PD, EDA, and EEG in this type of interactive VR environment, but additional studies will be needed to assess feasibility under conditions of greater movement.

Exploring the Stability of Behavioral Biometrics in Virtual Reality in a Remote Field Study: Towards Implicit and Continuous User Identification through Body Movements

Behavioral biometrics has recently become a viable alternative method for user identification in Virtual Reality (VR). Its ability to identify users based solely on their implicit interaction allows for high usability and removes the burden commonly associated with security mechanisms. However, little is known about the temporal stability of behavior (i.e., how behavior changes over time), as most previous works were evaluated in highly controlled lab environments over short periods. In this work, we present findings from a remote field study (N = 15) that elicited data over a period of eight weeks from a popular VR game. We found that people’s behavior does change over time, but that two-session identification is still possible with a mean F1-score of up to 71%, while an initial training yields 86%. However, we also see that performance can drop by more than 50 percentage points when testing with later sessions compared to the first session, particularly for smaller groups. Thus, our findings indicate that behavioral biometrics in VR are convenient for the user, but that behavioral variation over time must be accounted for to keep identification reliable.

Beyond Mirrors: Exploring Behavioral Changes through Comparative Avatar Design in VR Taiko Drumming

Most studies on the Proteus Effect, which examines how avatars can influence users’ behavior through evoked stereotypes, have manipulated only participants’ own avatars as the independent variable. In reality, however, there are numerous scenarios where individuals recognize their uniqueness by comparing themselves to others. Therefore, this study explores how recognizing one’s distinctiveness, by comparing one’s own avatar’s appearance with that of others, affects behavior. In our experiment, participants and non-player characters played the Japanese drum ‘Taiko’ together in a virtual environment. They used avatars dressed in suits or ‘Happi’, a traditional Japanese festival costume. The results demonstrated that the uniformity/distinctiveness and the type of avatar appearance jointly influenced the speed and amplitude of arm swings during the taiko performance. This finding provides valuable insights into the mechanisms of behavior change in settings where multiple avatars interact, such as social virtual reality, and aids in designing virtual spaces that foster appropriate interactions among individuals.

SESSION: Session 9: Haptics & Operation

Effect of Virtual Hand's Fingertip Deformation on the Stiffness Perceived Using Pseudo-Haptics

In this study, we used a novel pseudo-haptic presentation method in which perceived stiffness is visually altered by changing the fingertip shape of a virtual hand as users engage with objects in a VR environment. While past approaches have primarily focused on instigating such sensations through object deformation, we focus on how an individual’s fingertips deform upon making contact with an object. In Experiment 1, we determined how the shape deformation of a virtual hand’s fingertip affects the sense of body ownership, finding that the fingertip width can be increased by up to a factor of 2.25 without breaking the sense of ownership. In Experiment 2, subjects touched a virtual object in the VR space and evaluated its perceived stiffness. The results confirmed that when the deformation of the virtual hand’s fingertip shape was small, the object was perceived as hard, whereas when it was large, the object was perceived as soft. These results indicate that haptic presentation is possible without a haptic device that restricts user movement, which could broaden the range of natural interactions in VR spaces.
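One plausible way to drive such a deformation is to scale the rendered fingertip width with contact penetration, so that small deformation reads as a hard surface and large deformation as a soft one. In the sketch below, only the 2.25 ceiling comes from Experiment 1; the linear mapping and its softness parameter are hypothetical:

```python
def fingertip_width_scale(penetration_m, softness_per_m, max_scale=2.25):
    """Hypothetical mapping from penetration depth to rendered fingertip
    width. `max_scale` reflects the body-ownership limit from Experiment 1;
    `softness_per_m` (width gain per meter of penetration) is our invention."""
    scale = 1.0 + softness_per_m * max(penetration_m, 0.0)
    return min(scale, max_scale)

# A softer object deforms the fingertip more for the same penetration,
# which users should perceive as lower stiffness.
print(fingertip_width_scale(0.01, softness_per_m=20.0))   # stiff: 1.2
print(fingertip_width_scale(0.01, softness_per_m=150.0))  # soft: capped at 2.25
```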

Predicting Perceptual Haptic Attributes of Textured Surface from Tactile Data Based on Deep CNN-LSTM Network

This paper introduces a framework to predict the multi-dimensional haptic attribute values that humans use to recognize materials, using the physical tactile signals (acceleration) generated when a textured surface is stroked. To this end, two spaces are established: a haptic attribute space and a physical signal space. A five-dimensional haptic attribute space is established through human adjective-rating experiments with 25 real texture samples. The physical space is constructed using tool-based interaction data from the same 25 samples. A mapping between the two spaces is modeled using a newly designed CNN-LSTM deep learning network. Finally, a prediction algorithm is implemented that takes acceleration data and returns coordinates in the haptic attribute space. A quantitative evaluation inspected the reliability of the algorithm on unseen textures, showing that the model outperformed other similar models.
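To make the mapping concrete, here is an illustrative CNN-LSTM sketch in PyTorch; layer counts and sizes are placeholders of our own choosing, not the authors’ architecture:

```python
import torch
import torch.nn as nn

class CNNLSTMAttributePredictor(nn.Module):
    """Illustrative CNN-LSTM mapping a 1-D acceleration signal to coordinates
    in a five-dimensional haptic attribute space."""
    def __init__(self, n_attributes=5, hidden=64):
        super().__init__()
        # 1-D convolutions extract local temporal/spectral features
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
        )
        # The LSTM summarizes the feature sequence over the whole stroke
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_attributes)

    def forward(self, accel):             # accel: (batch, samples)
        x = self.cnn(accel.unsqueeze(1))  # -> (batch, channels, time)
        x = x.transpose(1, 2)             # -> (batch, time, channels)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])           # attribute-space coordinates
```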

Exploring Real-time Precision Feedback for AR-assisted Manual Adjustment in Mechanical Assembly

Augmented Reality (AR)-based manual assembly guidance can now direct the process of physical tasks, providing intuitive instructions and detailed information in real time. However, very few studies have explored AR manual adjustment tasks with precision requirements. In this paper, we develop an AR-assisted guidance system for manual adjustments with relatively high precision requirements. We first assessed the accuracy of a specially configured OptiTrack system to determine the threshold of precision requirements for our user study. We then evaluated the performance of Number-based and Bar-based precision feedback by comparing orientation assembly errors and task completion time, as well as usability, in the user study. We found that the orientation assembly errors in the Number-based and Bar-based interfaces were significantly lower than in the baseline condition, while there was no significant difference between the Number-based and Bar-based interfaces. Furthermore, the Number-based interface showed faster task completion times, lower workload, and higher usability than the Bar-based condition.

Exploring Visual Augmentations for Improving the Operation of a Hydraulic Excavator using Expert Operation Replay

Hydraulic excavators are widely used in construction work owing to their versatility. However, their operation is complex, and novice operators require extensive training. In this study, we propose a virtual reality (VR)-based training system with three types of visual augmentation that uses pre-recorded expert operations to support skill acquisition for handling a hydraulic excavator. To evaluate the effectiveness of the proposed visual augmentations in terms of skill improvement, we compared trainees’ scores before and after training with combinations of the visual augmentations. The results indicated that displaying the lever movement significantly improved the trajectory of the bucket tip, while ghost animation and slow motion did not show significant effects. Furthermore, showing the expert’s lever input and excavator movement in slow motion increased task completion time because of an aftereffect. Our findings not only provide design guidelines for VR-based excavator operation training but can also be applied to augmented reality (AR)/mixed reality (MR) systems supporting practical excavator operation.

SESSION: Session 10: Redirection

Redirected Placement: Evaluating the Redirection of Passive Props during Reach-to-Place in Virtual Reality

Hand redirection is an effective technique for providing users with haptic feedback in virtual reality (VR) when a disparity exists between virtual objects and their physical counterparts. Psychophysiological research has revealed distinct motion profiles for the different kinematic phases of hand-object interaction. In this paper, we propose Redirected Placement (RP), which determines the new placement of a physical prop by solving a constrained optimization problem. The visual illusion is applied during the "reach-to-place" kinematic phase in the proposed RP method, rather than the "reach-to-grasp" phase used in the typical Redirected Reach (RR) method. We conducted two experiments based on the proposed RP method. Our first experiment showed that detection thresholds are generally higher with the proposed method than with the RR method. The second experiment evaluated the embodiment experience under hand redirection with RR-only, RP-only, and RR&RP methods. The results show an enhanced sense of embodiment with the combined use of both RR and RP techniques. Our study further indicates that a 1:1 combination ratio of RR&RP resulted in the subjective experience closest to the baseline.

Instant Hand Redirection in Virtual Reality Through Electrical Muscle Stimulation-Triggered Eye Blinks

In this paper, we investigate the use of electrical muscle stimulation (EMS) to trigger eye blinks for instant hand redirection in virtual reality (VR). With the rapid development of VR technology and increasing user expectations for realistic experiences, maintaining a seamless match between real and virtual objects becomes crucial for immersive interactions. However, hand movements are fast and sometimes unpredictable, increasing the need for instantaneous redirection. We introduce EMS to the field of hand redirection in VR through precise stimulation of the eyelid muscles. By exploiting the phenomenon of change blindness during natural eye blinks, our novel stimulation model achieves instantaneous, imperceptible hand redirection without the need for eye tracking. We first empirically validate the efficiency of our EMS model in eliciting full eye closure. In a second experiment, we demonstrate the feasibility of using such a technique for seamless instantaneous displacement in VR and its particular impact on hand redirection. Among other factors, our analysis also delves into the under-explored domain of gender influence on hand redirection techniques, revealing significant gender-based performance disparities.

Redirecting Rays: Evaluation of Assistive Raycasting Techniques in Virtual Reality

Raycasting-based interaction techniques are widely used for object selection in immersive environments. Despite their intuitive use, they come with challenges due to small or distant objects, hand tremor, and tracking inaccuracies. Previous adaptations of raycasting, such as directly snapping the ray to the closest target, extruding the ray into a cone, or multi-step selection techniques, require additional time for users to become familiar with them. To address these issues, we propose three assistive techniques in which the visible selection ray is subtly redirected towards a target, with a proximity- and gain-based increase in the redirection amount. In a user study (N = 26), we compared these redirection techniques with a baseline condition in a Fitts’-law task and collected performance measures as well as comprehensive subjective feedback. The results indicate that the three redirection techniques are significantly faster and achieve higher effective throughput than the baseline condition. Participants retained a high sense of agency with all redirection techniques and reported significantly lower total workload compared to the baseline. The majority of participants preferred selection with assistive ray redirection and did not perceive it as distracting or intrusive. Our findings support that assistive redirected raycasting techniques can improve object selection performance and user experience in virtual environments.
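As one plausible reading of such a proximity- and gain-based scheme, the sketch below bends the visible ray toward the nearest target, with the blend weight growing as the true ray passes closer to it. All names and parameters are our own illustration, not the paper’s implementation:

```python
import numpy as np

def redirect_ray(origin, ray_dir, target_pos, max_gain=0.3, radius=0.5):
    """Bend the visible ray toward a target. The redirection gain rises
    linearly from 0 at `radius` meters of ray-target distance to `max_gain`
    when the unredirected ray passes through the target."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    to_target = target_pos - origin
    # Perpendicular distance from the target to the unredirected ray
    t = max(np.dot(to_target, ray_dir), 0.0)
    dist = np.linalg.norm(to_target - t * ray_dir)
    gain = max_gain * max(0.0, 1.0 - dist / radius)
    target_dir = to_target / np.linalg.norm(to_target)
    bent = (1.0 - gain) * ray_dir + gain * target_dir
    return bent / np.linalg.norm(bent)
```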

Stay Vigilant: The Threat of a Replication Crisis in VR Locomotion Research

The ability to reproduce previously published research findings is an important cornerstone of the scientific knowledge acquisition process. However, the exact details required to reproduce empirical experiments vary depending on the discipline. In this paper, we summarize key replication challenges as well as their specific consequences for VR locomotion research. We then present the results of a literature review on artificial locomotion techniques, in which we analyzed 61 papers published in the last five years with respect to their report of essential details required for reproduction. Our results indicate several issues in terms of the description of the experimental setup, the scientific rigor of the research process, and the generalizability of results, which altogether points towards a potential replication crisis in VR locomotion research. As a countermeasure, we provide guidelines to assist researchers with reporting future artificial locomotion experiments in a reproducible form.

SESSION: VRST 2023 Posters and Demos

A Pilot Study on the Impact of Discomfort Relief Measures on Virtual Reality Sickness and Immersion

While there are several theories of the causes of virtual reality (VR) sickness and pertinent methods suggested for its mitigation, it remains an important problem. One possible approach is to prescribe measures that relieve the immediate symptoms (rather than addressing the root causes). Recognizing that the severity of the sickness may affect individuals differently, we examined three methods: (1) reducing the weight of the headset (using a suspension mechanism); (2) refreshing the user with a gentle breeze (using a fan); and (3) accompanying the VR viewing experience with mindful breathing. We assess the relative sickness-reduction effect, if any, of these three measures through a comparative pilot experiment and individual case analysis. The preliminary results point rather to the importance of system usability and how it affects the relationship between perceived immersion and the extent of sickness. The initial proposition, that enhancing the user’s physical condition helps them better withstand VR sickness symptoms, could not be established.

A Virtualized Augmented Reality Simulation for Exploring Perceptual Incongruencies

When virtual and physical content is blended, certain incongruencies emerge from hardware limitations, inaccurate tracking, or differing appearances of virtual and physical content. They prevent us from perceiving virtual and physical content as one experience. Hence, it is crucial to investigate these issues to determine how they influence our experience. We present a virtualized augmented reality simulation that can systematically examine individual incongruencies or different configurations of them.

ActioNet: A Lightweight Architecture for Efficient Action Recognition

We present ActioNet, a groundbreaking lightweight neural network architecture optimized for action recognition tasks, particularly in resource-constrained environments such as drones and edge devices. Utilizing a strategically modified 3D U-Net encoder followed by fully connected layers for fine-grained classification, ActioNet manages to achieve a promising validation accuracy of 72%. This is accomplished with a notably compact model size of just 46MB, making it uniquely suitable for devices with limited computational capabilities. Although ActioNet may not surpass state-of-the-art models in terms of sheer accuracy, it distinguishes itself through its fast inference times and small footprint. These attributes make real-time action recognition not only feasible but also efficient in constrained operational settings. We argue that ActioNet serves as a meaningful contribution to the emerging field of efficient deep learning and provides a solid foundation for future advancements in lightweight action recognition models.

Audio-based Vibrotactile Feedback in Multimodal VR Interactions

While consumer-grade virtual reality (VR) hardware can deliver immersive audiovisual experiences, these systems often lack the ability to display realistic haptic feedback, or they incorporate cost-efficient vibrotactile actuators with very limited tactile capabilities. To overcome these limitations, we introduce an approach that uses audio-based vibrotactile actuators. Owing to their wide frequency response and multiple resonant frequencies, these can provide more tactile detail. In our implementation, every VR interaction uses standard audio clips to provide simultaneous auditory and tactile feedback, coupled with realistic physics simulations for the visual feedback. We evaluate our approach to assess its benefits for the user experience across various interaction scenarios in VR, comparing it to a simulated fixed-frequency actuator as a baseline. The results confirm the benefits of our approach in terms of user preference, perceived realism, comfort, sense of agency, and texture perception. Furthermore, multimodal feedback resulted in the best user experience.

Augmented Aroma: The Influence of Augmented Particles' Movement and Color on Emotion during Olfactory Perception

This study investigates the impact of visual augmentation on the olfactory system by analyzing users’ emotional responses. Augmented particles were presented using a HoloLens through five methods, involving adjustments in color and movement, alongside six odors. In experiments with 30 participants, we discovered that augmented particles can intensify or reduce emotional reactions depending on their colors and movement directions.

Combining embodiment and 360 video for teaching protection of civilians to military officers

This demo presents an innovative use of embodiment in combination with 360° video to support teaching the threat-based approach to the protection of civilians at a military university. To create a realistic and emotionally engaging XR experience while saving development time and costs, scanned 3D objects and avatars were integrated into 360° videos. The app also includes interactions with a virtual perpetrator and a collaborative map exercise, and it received positive feedback from end users.

Comparing Performance of Dry and Gel EEG Electrodes in VR using MI Paradigms

Brain–computer interfaces (BCIs) are an emerging technology with numerous applications. Electroencephalogram (EEG) motor imagery (MI) is among the most common BCI paradigms and has been used extensively in healthcare applications such as post-stroke rehabilitation. Using a Virtual Reality (VR) game, Push Me, we conducted a pilot study to compare MI accuracy with gel or active-dry EEG electrodes. The motivation was to (1) investigate the MI paradigm in a VR environment and (2) compare MI accuracy using active-dry and gel electrodes with different machine learning (ML) classifiers (SVM, KNN, and RF). The results indicate that while gel-based electrodes in combination with SVM achieved the highest accuracy, dry-electrode EEG caps achieved similar outcomes, especially with the SVM and KNN models.

Deforming Skin Illusion by Visual-tactile Stimulation

While previous studies have investigated virtual body embodiment in various forms, such as different colors, species, and transparency, little research has been conducted on elastic, non-rigid, or deforming bodies. Embodying such elastic bodies would enable novel bodily experiences, such as being an octopus, and support the development of tele-operation systems with deforming machines. As a first step, we focused on skin deformation and aimed to induce the illusory sensation of deforming skin on a realistic virtual body using wearable tactile devices. Participants experienced deforming-skin illusions when a part of the virtual arm’s skin was visually indented or stretched with synchronous tactile vibrations. The illusion was enhanced when the virtual arm’s location matched the real hand and when visual-tactile brush stroking was applied.

Design of Time-Continuous Emotion Rating Interfaces

We present a preliminary study on the design of visual interfaces that let users continuously rate their emotion while viewing VR content. The interfaces comprise two from the literature, a continuous adaptation of the popular SAM interface, and two novel designs. The designs were tested to discern which elements are intuitive or distracting. Study phases included initial impressions of the interface visuals, tuning of the interface control scheme, training by rating a list of emotion labels, and continuous rating of 360° video content. Results suggest that an interactive face icon (Smiley) is a promising design choice and motivate further evaluation of its possible benefits.

Directional Multimodal Flow to Help Mitigate VR Sickness

One way to alleviate VR sickness is to reduce the sensory mismatch between the visual and vestibular organs regarding motion perception. Mixing a motion trail in the direction opposite the original motion into the imagery has been suggested as one such method. However, as such visual feedback can be content-intrusive, we consider supplementing it with non-visual multimodal reverse flow. In particular, we devised methods to supply sound effects as if heard from the reverse direction, and vibration and air flow likewise. Our validation experiment showed that the multimodal feedback was effective in significantly reducing sickness, but its direction (reverse or not) did not have the hypothesized effect.

Diving Into The Twilight Zone VR for Marine Biology

Teaching students about underwater marine science is difficult because access to underwater environments is so limited; marine science is typically not taught until the tertiary level. We have developed a Virtual Reality experience for teaching marine science activities, focusing on high school students. Our education programme and VR tool can help the next generation of students learn about and become aware of marine science.

Early User Feedback on a VR Interface Draft for Interaction with a Multi-Robot System in Ship Hull Inspection

The use of multi-robot systems is a field that can benefit from VR by strengthening situational understanding and enabling seamless interaction with the actors involved. This work assesses the usability of a design for interacting with a multi-robot system for ship maintenance. Participants’ comments are also used as input for improving the design.

Earnormous: An educational VR game about how humans hear

We present a demo of Earnormous, a virtual reality game to teach about the human ear.

Effect of voice imitation using voice conversion by avatar on customer service in Virtual Environments

We investigate the impact of voice imitation on rapport building during customer service in a virtual environment (VE). We simulated a VE store and conducted a within-subject experiment covering three customer service scenarios with 16 participants. In the imitation condition, a machine learning model reproduced the participant’s linguistic content with a voice identity set midway between the participant and the salesperson. Among male participants, voice imitation significantly improved impressions of the salesperson. These findings can inform the design of interpersonal services in VEs through salesperson avatar voice design.

Effects of Source Location, Loudness, and Understanding of Speech on Interpersonal Distance in a Virtual Environment

Perception of ambient loudness affects feelings of interpersonal intimacy, while voice loudness modulates interpersonal distance. We tested whether interpersonal distance could be modulated by ambient loudness as well as by loud speech emanating from a virtual human. Participants were asked to walk toward a virtual human while hearing either ambient sound or the virtual human’s speech. We found that interpersonal distance to the virtual human increased only when louder speech was emitted from the virtual human’s head, irrespective of speech comprehension, while walking time increased only when the ambient sound was louder. This suggests that interpersonal distance is modulated by the virtual human’s voice but not by ambient loudness.

Effects of symmetrical avatar arm movements on the sense of ownership of both hands inverted in a mirror

In this study, we investigated whether symmetrical visual and tactile stimuli on the arm affect the sense of ownership over a mirrored avatar’s hand. In the experiment, participants caught a ball multiple times in a virtual space while we probed their sense of ownership over the user’s, the avatar’s, and the mirrored avatar’s hands. The results suggest that non-ambidextrous individuals were significantly more likely to recognize the mirror-image avatar’s hand as their own in scenarios that included symmetrical visual and tactile stimuli than in scenarios without such stimuli. This experiment may contribute to a better understanding of the relationship between left-right sensation and body ownership, and to the pursuit of immersive experiences in virtual environments.

Effects of Vibrotactile Feedback on Aim-and-Throw Tasks in Virtual Environments

Vibrotactile feedback has been widely used in virtual reality applications to provide the sense of touch. In this preliminary work, we investigated the effects of vibrotactile feedback on a dart throwing task in a virtual environment. The user study compared task performance and observed participants’ behavior in throwing tasks with and without vibrotactile feedback. The results showed that participants made larger body movements during the task when vibrotactile feedback was on, while the feedback did not affect task performance.

Emotional Enhancement Techniques in Online Music Concerts by Presenting Force Stimuli from Light Sticks

In this study, we attempt to enhance emotion in online live music concerts by focusing on how entrainment to the rhythm of music influences emotion. We propose a light-stick-type device that changes the audience’s light stick swinging motion and their perception of it by adding a force stimulus to that motion. Experimental results suggest that the proposed device promoted the user’s light stick swinging behavior and enhanced emotion. The results also suggest that presenting force stimuli in accordance with musical rhythms could be useful for conveying the presence of other people.

Enhancing VR Based Serious Games and Simulations Design: Bayesian Knowledge Tracing and Pattern-Based Approaches

This paper explores how Bayesian Knowledge Tracing (BKT) can be integrated with a pattern-based approach to enhance the development of virtual reality (VR) based serious games and simulations. These technologies allow for the prediction of user progress and the utilization of Artificial Intelligence (AI) methods to tailor difficulty levels based on individual needs. By combining BKT, pattern-based mechanics, and affective feedback, comprehensive data on user interactions, skills, and emotional states can be collected. This data enables the estimation of learners’ knowledge levels and the prediction of their progress.
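For readers unfamiliar with BKT, its core is a simple Bayesian update of the probability that a learner has mastered a skill, given each correct or incorrect response. The sketch below shows the standard textbook update; the parameter values are illustrative assumptions, not values from the paper.

```python
# Standard Bayesian Knowledge Tracing update (textbook form; parameter
# values below are illustrative, not taken from the paper).
def bkt_update(p_mastery: float, correct: bool,
               p_guess: float = 0.2,   # P(correct | not mastered)
               p_slip: float = 0.1,    # P(incorrect | mastered)
               p_learn: float = 0.15   # P(transition to mastered)
               ) -> float:
    # Posterior probability of mastery given the observed response.
    if correct:
        evidence = p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess
        posterior = p_mastery * (1 - p_slip) / evidence
    else:
        evidence = p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess)
        posterior = p_mastery * p_slip / evidence
    # Account for learning that may occur on this practice opportunity.
    return posterior + (1 - posterior) * p_learn

# Example: trace a learner's estimated mastery over a sequence of attempts.
p = 0.1  # prior probability of mastery
for outcome in [False, True, True, True]:
    p = bkt_update(p, outcome)
    print(f"P(mastered) = {p:.3f}")
```

In a VR serious game, the updated mastery estimate would then feed the difficulty-tailoring logic the abstract describes.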

Enhancing User Experience in VR using Wearable Olfactory System

This paper introduces a mounted olfactory device prototype that instantly emits and quickly switches scents. Earlier approaches, such as fixed olfactory devices, are limited by the time it takes for users to perceive a scent in Virtual Reality (VR), due to lingering scents dissipating in physical space, as well as by the lack of instantaneous scent switching. To evaluate its effectiveness, we conducted a pilot user study comparing mounted and fixed olfactory devices. The results show that the mounted olfactory device provides a better VR experience than the fixed olfactory device.

Estimating mechanical properties of soft objects using surface measurements from AR headsets

Physics-driven predictions of soft tissue mechanics are vital for various medical interventions. Insights into the mechanical properties of soft tissues are essential for obtaining personalised predictions from these models. This study provides a workflow to identify the material parameters of soft homogeneous materials under gravity loading using 3D surface geometrical measurements acquired from a wearable augmented reality (AR) headset’s depth camera. Preliminary results show that the parameter estimation procedure can successfully recover the ground-truth material parameter C1 of a cantilever beam using synthetic surface data. This workflow could be used for real-time navigational guidance during soft tissue treatment procedures.
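The general shape of such a parameter-identification workflow is an optimization loop that minimizes the mismatch between simulated and measured surface points. The editor-added sketch below illustrates this with a hypothetical, deliberately simplified forward model (`simulate_surface`); the actual workflow would call a finite-element simulation of the beam under gravity.

```python
# Outline of a material-parameter identification loop (editor's sketch with
# a hypothetical forward model; the paper's actual solver may differ).
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_surface(c1: float) -> np.ndarray:
    """Hypothetical forward model: predicted surface points (N x 3) of a
    cantilever beam sagging under gravity for material parameter C1."""
    # Stand-in analytic sag model so the sketch runs end to end:
    x = np.linspace(0.0, 1.0, 50)
    sag = -(x ** 2) / (c1 + 1e-9)  # softer material -> larger sag
    return np.column_stack([x, np.zeros_like(x), sag])

def objective(c1: float, measured: np.ndarray) -> float:
    """Mean squared distance between simulated and measured surface points."""
    return float(np.mean(np.sum((simulate_surface(c1) - measured) ** 2, axis=1)))

# Synthetic "measurement" from a known ground-truth C1 = 2.0, mimicking
# slightly noisy depth-camera surface data.
measured = simulate_surface(2.0) + np.random.normal(0, 1e-3, (50, 3))

result = minimize_scalar(lambda c1: objective(c1, measured),
                         bounds=(0.1, 10.0), method="bounded")
print(f"Recovered C1 ≈ {result.x:.3f}")  # should be close to 2.0
```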

Exploratory Study on the Reinstatement Effect Under 360-Degree Video-Based Virtual Environments

Episodic memory incorporates environmental contexts, and memory retrieval is aided by matching the retrieval context to the encoding context. This study tested whether similar context-dependency of memory could be confirmed in virtual environments. Participants learned words in a 360-degree video-based virtual environment depicting either natural or urban landscapes and immediately completed a test in the same virtual environment. After two days, half of the participants took a final test in the same context as on the initial day, whereas the other half took it in a different context. Surprisingly, participants tested in a different context exhibited significantly less forgetting than those tested in the same context, which contradicted our hypothesis.

Fabric Electrodes for Physiological Sensing in a VR HMD

This paper explores the development and testing of fabric electrodes to collect a range of physiological measures. The aim is to integrate these sensors into a Virtual Reality (VR) headset to collect physiological and muscular motion data that will help detect emotion, cognitive load, and facial expressions. As part of an ongoing project, we have already developed prototypes of the EMG and GSR sensors, and a head phantom has been built for testing and validating electrode performance.

Immersive Climate Narratives: Using Extended Reality to Raise Climate Change Awareness

Shadows of Tomorrow is an innovative public art installation utilizing LiDAR body tracking and augmented reality to promote climate change awareness. The installation projects depictions of real-world climate change scenarios from Australia, Kuwait, the United States, and Greenland on a large display. Shadows utilizes the Azure Kinect to capture and integrate audience member silhouettes as a simple user interface for the display. As audiences move in front of the display, their silhouettes reveal the impact of climate change on the projected environment. By bringing global climate change stories to local audiences, we emphasize the universal, yet highly localized, impacts of the climate crisis. These interactive visualizations encourage audiences to engage with and understand the stark realities of climate change in regions far removed from their own. Created for display in high-transit public areas like museums, airports, and city centers, Shadows of Tomorrow aims to create a global dialogue around our shared responsibility for climate action.

Immersive visualization for ecosystem services analysis

Ecosystem services are benefits provided to humans by ecosystems through their natural processes and conditions [7]. Interviews with land use scientists identified problems with the software currently applied to their ecosystem services analysis. A user-centred design process was adopted, and a visualization system, Immersive ESS Visualizer, is presented for visualizing data from ecosystem services analysis. Immersive ESS Visualizer is designed for both experts and non-experts and allows users to compare data visualized with multiple hand-manipulated maps. Users can glide over a landscape with draped data layers to analyse areas of interest. Immersive ESS Visualizer could augment the process of presenting ecosystem services analysis results.

Listen again: virtual reality based training for children with hearing impairments

Although hearing loss is treated with hearing technology and rehabilitation, children with hearing loss still face challenges; factors such as distance to the sound source and ambient noise pose the greatest difficulties. In the "Listen Again" project, a listening and spatial-awareness training application was co-designed with deaf and hard-of-hearing children who use cochlear implants. This paper presents quantitative and qualitative results from a two-month evaluation in which 22 children were asked to play with the VR solution three times a week.

Movement Creation by Choreographers with a Partially Self-controllable Human Body in VR

We developed a system that creates artistic dance movements by augmenting body movements with a VR device. Our proposed system controls a virtual dancer’s body in a virtual space in real time based on a VR device’s input. Whole-body movements different from the user’s own are created by changing the body parts to which the input movements are applied, processing and amplifying them, and synthesizing them through IK control. Six choreographers investigated the possibilities of creating dance movements in a virtual space. The results identified two types of interaction: manipulating the body of another person, and interacting as the body of another person. Both can be applied to choreographic creation.

Navigating in VR using free-hand gestures and embodied controllers: A comparative evaluation

While natural body-based movements are essential features for immersive VR (Virtual Reality) experiences, most of the available input techniques for navigation in VR involve hand-held controllers. While body-based input for VR navigation has previously been explored in HCI using external tracking devices, little to no work utilizes the built-in tracking functionality of predominant VR headsets (such as the Meta Quest 2) for gesture-based navigation in VR. This paper addresses this gap by proposing five free-hand gestures for 3D navigation in VR using the internal gesture-tracking functionality of the Quest 2 headset. Additionally, a qualitative and quantitative comparison between free-hand and controller-based navigation in VR is presented using a custom-designed task with 10 users. Overall, the findings from the task analysis indicate that while built-in tracking in VR headsets opens the door to inexpensive gesture-based VR navigation, mid-air hand gestures result in greater fatigue than controllers.

Open Video Game Library: Developing a Video Game Database for Use in Research and Experimentation

Video games are utilized in some studies for evaluation experiments and demonstrations. However, because commercial video games cannot be edited, optimizing them for specific experiments is challenging. Thus, researchers may have to develop their own video games individually, which requires effort and impedes the comparison of their study with other similar studies. To overcome these limitations, in this study, we propose a user-friendly "Open Video Game Library" that can be used by researchers for experiments. We conducted a survey of previous studies and designed video games to meet researcher needs. The library features characteristics such as GUI-based parameter editing, play log output, open-source availability, and support for operation on multiple devices, with the aim of becoming the de facto standard for video games used for research purposes.

Pain Distraction for Children Through VR- or Audio-haptic Soundscapes in Situ

In this pilot study, we compare two prototype applications developed in collaboration with Rigshospitalet, the main hospital in Denmark, to evaluate the effectiveness of a virtual reality (VR) versus an audio-haptic solution as a pain distraction tool for children aged 5 to 8 during needle-related medical procedures. Both prototypes share a narrative in which children help a farmer find hidden animals. The final prototype underwent in-situ testing at Rigshospitalet’s clinic for blood tests, where participants’ pain levels were assessed using the Wong-Baker FACES Scale [9] and the Visual Analogue Scale [5]. Participants reported reduced pain perception with both prototypes, with results skewing in favor of the VR prototype; however, when effective, the audio-haptic prototype produced similar reductions in pain perception. The study concludes that both VR- and audio-haptic-based distraction are viable methods, that each covers the needs of a distinct group of young children undergoing such procedures (those who need not to see the procedure, and those who do), and that both should be further developed and implemented in said medical procedures.

Performing Tasks in Virtual Reality: Interplay between Realism and Visual Imagery

The main aims of the presented study are to verify whether the amount of texturing in a virtual scene affects task performance and to test whether visual imagery changes the relationship between realism and task performance. An experimental study with three groups differing in visual realism was conducted (n=100). Participants took on the role of a marshaller and positioned a plane on the airport apron. Results indicate that texturing does not affect task performance. Visual imagery moderates the relationship between perceived realism and task performance: a high level of imagery interferes with high realism assessments, decreasing task performance.

Pigments of Imagination: An Interactive Virtual Reality Composition

Pigments of Imagination is an artistic interactive virtual reality experience based on an original fixed-media composition. It reimagines the popular music video in virtual space as a dynamic, emotionally engaging experience, exploring novel approaches to audiovisual reactivity and interactivity. In this piece, the user can interact with, directly affect, and build upon prior users’ interpolations of the environment’s sonic and visual qualities, allowing a narrative immersion that maintains a structured arc and conclusion while offering a unique experience with each use.

Reducing Sensing Errors in a Mixed Reality Musical Instrument

This paper describes the design and evaluation of Netz, a novel mixed reality musical instrument that leverages artificial intelligence for reducing errors in gesture interpretation by the system. We followed a participatory design approach over three months through regular sessions with a professional musician. We explain our design process and discuss technological sensing errors in mixed reality devices, which emerged during the design sessions. We investigate the use of interactive machine learning techniques to mitigate such errors. Results from statistical analyses indicate that a deep learning model based on interactive machine learning can significantly reduce the number of technological errors in a set of musical performance tasks with the mixed reality musical instrument. Based on our findings, we argue that the application of interactive machine learning techniques can be beneficial for embodied, hand-controlled musical instruments in the mixed reality domain.

ShadowPlayVR: Understanding Traditional Shadow Puppetry Performance Techniques Through Non-Intuitive Embodied Interactions

"ShadowPlayVR" is a virtual reality system designed to bring the intricate art of Chinese shadow puppetry into Virtual Reality (VR), focusing on the non-intuitive embodied interactions that emulate puppetry performance. By incorporating an immersive, experiential learning approach, ShadowPlayVR offers users a hands-on understanding of this art form. Preliminary testing reveals the significant role of contextual information in facilitating the understanding and mastery of these complex interactions. The work also showcases how VR can serve as a powerful tool to preserve and engage with traditional cultural heritage in a contemporary digital context.

Sickness Reduction in FPV Drone Control: Improved Effects of Reverse Optical Flow with Static Landmarks Only

Drones are controlled remotely through an on-board live camera, often using a headset to immersively situate the operator with a first-person view, free of external distraction. The highly dynamic imagery of drone piloting can elicit significant VR sickness. In this poster, we demonstrate mixing a reverse optical flow pattern into the drone piloting imagery to mitigate VR sickness, using only the features of the static landmarks in the scene. We compare this to applying no mitigation technique (baseline) and to mixing in an optical flow pattern derived from all object features. The results show that the suggested method was significantly more effective in reducing sickness than the one considering all object features, and was preferred for its improved visibility and controllability.

Simple and Practical Dual Rendering for Reducing Eye Fatigue from Vergence-Accommodation Conflict in Stereoscopic Viewing

Eye fatigue and unpleasant symptoms caused by the vergence-accommodation conflict (VAC) in stereoscopic rendering pose a substantial usability issue in virtual reality. To address this problem, we propose dual rendering of the stereoscopic imagery to alleviate this stress on the user. First, the scene is divided into two regions: a front proximal zone and everything behind it. The back layer is rendered first, covering the rear region with conventional stereoscopic viewing parameters. The remaining objects in the proximal zone are then rendered using viewing parameters adjusted to reduce the VAC. The validation experiment confirmed that the proposed approach significantly reduced overall eye fatigue without compromising the user’s depth perception when manipulating objects in the parameter-altered proximal zone.
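The two-pass structure described above can be outlined compactly: partition objects by depth, draw the rear region with conventional stereo parameters, then draw the proximal zone with VAC-reducing parameters. In the editor-added sketch below, the proximal-zone boundary and the choice of reduced eye separation as the adjusted parameter are illustrative assumptions, not the paper’s actual settings.

```python
# Schematic two-pass stereo render (editor's outline; the VAC-reducing
# adjustment shown, a scaled-down eye separation, is an illustrative
# assumption rather than the paper's parameter choice).
from dataclasses import dataclass

@dataclass
class StereoParams:
    eye_separation: float  # metres

PROXIMAL_LIMIT = 0.6           # metres; assumed boundary of the front zone
DEFAULT = StereoParams(0.064)  # conventional inter-pupillary distance
REDUCED = StereoParams(0.040)  # adjusted to relax vergence demand up close

def render_scene(objects, draw):
    far = [o for o in objects if o["depth"] > PROXIMAL_LIMIT]
    near = [o for o in objects if o["depth"] <= PROXIMAL_LIMIT]
    draw(far, DEFAULT)   # back layer first, conventional stereo
    draw(near, REDUCED)  # proximal zone second, VAC-reducing parameters

# Example with a stub draw call.
scene = [{"name": "wall", "depth": 3.0}, {"name": "tool", "depth": 0.3}]
render_scene(scene, lambda objs, p:
             print([o["name"] for o in objs], "at eye sep", p.eye_separation))
```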

Slingshot: A Novel Gesture Locomotion System for Fast-paced Gameplay in Virtual Reality

SpaceVR: Virtual Reality Space Science Outreach Experience

Teaching people about Space concepts is challenging with traditional textbooks and teaching methods, which lack engagement and interactivity and make it hard to motivate prospective students. We have developed SpaceVR, a VR application that provides high school students with an engaging outreach experience in Space Science. The application uses real images of the sun from NASA’s Solar Dynamics Observatory satellite to create an interactive digital Sun for students to explore. The project investigates whether adding gamification elements increases student engagement with Space Science outreach efforts.

Stress visualization in geometrically complex structures using Thermoelastic Stress Analysis and Augmented Reality

We present a framework for the visualization of mechanical stress using augmented reality (AR) combined with Thermoelastic Stress Analysis (TSA). The 2D stress images generated by TSA are converted to a 3D stress map using computer vision and then superimposed on the real object in AR. Our framework enables in-situ visualization of stress in geometrically complex structural components, which can assist in the design, manufacture, testing, and through-life sustainment of failure-critical engineering assets. We also discuss the challenges of such a TSA-AR combination and present a case study that demonstrates the performance and significance of our system.

Study of User Training Methods Using Onomatopoeia in Brain Computer Interfaces Based on Mental Imagery

Mental-imagery-based brain-computer interfaces (MI-BCIs) control external devices using specific thoughts or mental imagery. MI-BCIs are promising for many applications, but because they lack reliability, they are rarely used outside laboratories. User training is therefore critical: to control an MI-BCI, users must be able to stably generate specific EEG patterns. However, optimal training methods for acquiring this ability are still being investigated. Onomatopoeias are sensory expressions that symbolically describe sounds, scenes, and feelings. We used a multimodal training approach with visual and auditory imagery based on onomatopoeias and investigated its effects on user performance.

Study of Visual Guidance Cues in VR Field Trips at High Schools

We assess the effectiveness of attention guidance cues in an educational platform in local high schools with real students. Three eye-tracked visual cues, previously assessed for their ability to guide and restore attention, are compared against a no-cue baseline in a VR field trip of a virtual solar energy field. Students experienced four presentations on solar energy production, including in-world animations and teacher imagery; in three of these, the visual cues guided attention to the relevant object or teacher in the scene. Attention guidance using visual cues is commonly studied using “search and selection” style tasks, but has not been studied in the context of maintaining attention in real-world environments.

Temporal Foveated Rendering for VR on Modern Mobile Architecture

We introduce Temporal Foveated Rendering (TFR), a method for achieving GPU savings for VR content by reducing rendering frequency in the periphery of a fixed- or eye-tracked mobile VR headset that uses tiled rendering. TFR saves GPU compute by rendering a peripheral “outset” at half rate while maintaining full frame rate in a smaller “inset” centered at the gaze position. Judder in the peripheral outset is mitigated by applying asynchronous space warp, driven by System-on-Chip (SoC) derived motion vectors. The technique saves up to 17% more GPU compute on a mobile device than a spatial foveated rendering technique called Fixed Foveated Rendering (FFR).
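The scheduling at the heart of such a technique can be summarized in a short frame loop: the gaze-centered inset is rendered every frame, while the peripheral outset is re-rendered only on alternate frames and space-warped in between. The sketch below is an editor-added schematic with a stub renderer, not the authors’ implementation.

```python
# Schematic frame loop for temporal foveated rendering (editor's outline
# with a stub renderer; not the authors' implementation). The inset renders
# every frame; the outset renders on even frames and is space-warped on odd
# frames using motion vectors.
class StubRenderer:
    def render_inset(self, gaze):     return f"inset@{gaze}"
    def render_outset(self):          return "outset(full render)"
    def motion_vectors(self):         return "SoC-derived motion vectors"
    def space_warp(self, image, mv):  return f"warped({image})"
    def composite(self, inset, outset, gaze):
        print(f"display: {inset} + {outset}")

def run_frames(num_frames, gaze_positions, renderer):
    outset = None
    for frame in range(num_frames):
        gaze = gaze_positions[frame]
        inset = renderer.render_inset(gaze)    # full rate, follows gaze
        if frame % 2 == 0:
            outset = renderer.render_outset()  # half rate: real render
        else:                                  # reuse + warp to cut judder
            outset = renderer.space_warp(outset, renderer.motion_vectors())
        renderer.composite(inset, outset, gaze)

run_frames(4, [(0.5, 0.5)] * 4, StubRenderer())
```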

The Detectability of Saccadic Hand Offset in Virtual Reality

On the way towards novel hand redirection (HR) techniques that make use of change blindness, the next step is to take advantage of saccades for hiding body warping. A prerequisite for saccadic HR algorithms, however, is to know how much the user’s virtual hand can unnoticeably be offset during saccadic suppression. We contribute this knowledge by conducting a psychophysical experiment, which lays the groundwork for upcoming HR techniques by exploring the perceptual detection thresholds (DTs) of hand offsets injected during saccades. Our findings highlight the pivotal role of saccade direction for unnoticeable hand jumps and reveal that the largest offsets go unnoticed when the saccade and hand move in opposite directions. Based on the gathered perceptual data, we derived a model that considers the angle between the saccade and hand offset directions to predict the DTs of saccadic hand jumps.

The Effect of False but Stable Heart Rate Feedback via Sound and Vibration on VR User Experience

Vital signs tend to become destabilized and generally elevated when one’s physical condition is poor. For example, virtual reality (VR) sickness deteriorates physical condition and is often accompanied by an increased heart rate. Several studies have shown that false heart rate feedback can induce various altered perceptions, such as increased effort and anxiety. In this poster, we propose providing false but “stable” heart rate feedback through sound and vibration while the user navigates a sickness-inducing VR scene. We hypothesize that this feedback will calm the user down (even stabilizing the actual heart rate) and reduce unpleasant VR sickness symptoms. A pilot study compared three conditions for viewing sickness-eliciting VR content: (1) as is, (2) with false but stable heart rate feedback through sound, and (3) with false but stable heart rate feedback through vibration. Results showed that the level of sickness was significantly reduced by both the sound and the vibration feedback.

The Effect of Virtual Reality Level of Immersion on Spatial Learning Performance and Strategy Usage

The immersive and ecological nature of head-mounted display (HMD) virtual reality (VR) is increasingly utilized in studies of human spatial learning, and various aspects of VR’s impact on navigation have been examined. Nevertheless, the effect of the VR level of immersion on spatial learning strategy usage is yet to be determined. Here, we addressed this gap by comparing spatial learning properties and experience measures across three modality settings. We translated a classic spatial learning task from animals to humans in which three spatial learning strategies are observed (place/cue/response), and compared three conditions: wearing a VR headset while physically walking, wearing a VR headset while using a controller, and navigating a 2D screen display with a mouse and keyboard. We examined various learning properties and used presence questionnaires to analyze experience measures. Our results show that learning measures, including strategy usage, were affected by the VR level of immersion, suggesting that modality characteristics should be considered during VR task design.

The Effects of Customized Strategies for Reducing VR Sickness

The extent of virtual reality (VR) sickness varies widely among users, as each user has different sensitivities to its diverse causes. This can make prescribing a single reduction technique difficult and ineffective. In this poster, we present preliminary work on identifying the more effective and preferred sickness-reduction techniques for a given user under varied sickness-inducing motions. Based on user-specific information collected with VR content, a customized strategy is developed for each user and applied to the same content. We report experimental results testing the effectiveness of the customized reduction strategy, comparing it to a single fixed reduction method.

The Staircase Procedure Toolkit: Psychophysical Detection Threshold Experiments Made Easy

We propose a novel open-source software toolkit to support researchers in the domains of human-computer interaction (HCI) and virtual reality (VR) in conducting psychophysical experiments. Our toolkit is designed to work with the widely used Unity engine and is implemented in C# and Python. With the toolkit, researchers can easily set up, run, and analyze experiments to find perceptual detection thresholds using the adaptive weighted up/down method, also known as the staircase procedure. Besides being straightforward to integrate into Unity projects, the toolkit automatically stores experiment results, features a live plotter that visualizes answers in real time, and offers scripts that help researchers analyze the gathered data using statistical tests.
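As background, the weighted up/down staircase decreases the stimulus level after each detection and increases it after each miss, with asymmetric step sizes setting the convergence point: the expected drift is zero where p * step_down = (1 - p) * step_up. The editor-added sketch below is a minimal standalone implementation of this standard procedure, not the toolkit’s actual API.

```python
# Minimal weighted up/down staircase (standalone sketch, not the toolkit's
# API). With step_up = 3 * step_down, the procedure converges to the level
# detected 75% of the time, since p * step_down = (1 - p) * step_up there.
import random

def staircase(respond, start=1.0, step_down=0.05, step_up=0.15,
              max_reversals=8):
    intensity, direction, reversals = start, None, []
    while len(reversals) < max_reversals:
        detected = respond(intensity)
        new_direction = "down" if detected else "up"
        if direction is not None and new_direction != direction:
            reversals.append(intensity)  # record each direction change
        direction = new_direction
        intensity += -step_down if detected else step_up
        intensity = max(intensity, 0.0)  # keep the stimulus level physical
    # Estimate the threshold as the mean of the last six reversal points.
    return sum(reversals[-6:]) / len(reversals[-6:])

# Example: a simulated observer whose detection probability rises linearly,
# reaching 75% at intensity 0.6.
estimate = staircase(lambda x: random.random() < min(1.0, x / 0.8))
print(f"estimated 75% threshold ≈ {estimate:.2f}")
```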

Utilizing AR as a Tool for Assessing Accessibility in the Home

Home modification interventions can remedy deficiencies in the home environments of the growing number of older adults who want to age in place. Unfortunately, assessing a home environment is a difficult process, often requiring numerous measurements by a skilled practitioner. To address these gaps in practice, we used an iterative co-design process to develop a first prototype of a novel augmented reality home assessment tool (ARHAT) that can be used more rapidly by individuals both in and outside of health care, and that can be performed either on- or off-site. The aim of this work is to create a tool for the major stakeholders involved in supporting housing design and aging in place, thereby reaching and making a difference in the lives of more older adults.

Visually Augmenting Underfoot Tactile Perception in Augmented Virtuality

Underfoot haptics, a largely unexplored area, offers tactile information approaching the richness of hand-based interactions. Haptic feedback gives a sense of physicality to virtual environments, making for a more realistic and immersive experience. Augmented Virtuality makes it possible to render virtual materials on a physical object, or haptic proxy, without the user being aware of the object’s physical appearance, while still seeing their own body. In this research, we investigate how the visual appearance of physical objects can be altered virtually to affect their tactile perception. We developed an Augmented Virtuality system and conducted two tactile perception experiments with 18 participants. Specifically, we explore in a within-subjects experiment whether changing the visual appearance of materials affects a person’s underfoot tactile perception, and which tactile quality is most affected by the change. The study shows that a change in visual appearance significantly impacted the tactile perception of roughness.

VR Experiences of Pregnant Women During Antenatal Care

Pregnant women use a range of non-pharmacological pain relief methods to help manage and reduce pain intensity and to induce relaxation. We conducted a study with 18 pregnant women exploring VR experiences as a non-pharmacological method of pain relief, to determine the effect on pain intensity. The study identified several themes: evoking emotion, with sub-themes of memory and imagination; presence, with sub-themes of relatability, realism, immersion, interactivity, and narration; and escape and anchoring, which described how women envisaged using VR antenatally. This study provides a novel contribution to the field of VR and antenatal and labour care, which can help inform the design of VR experiences for pregnant women.

Waddle: using virtual penguin embodiment as a vehicle for empathy and informal learning

This paper presents Waddle, a virtual experience that promotes informal learning by embodying the user as an Adélie Penguin in a narrative-based virtual reality application that shares the story of the lives of these unique animals. We test the effects of this experience on informal learning and on empathy, an important component for fostering social engagement with ecology. The research demonstrates that the developed experience supports informal learning and virtual embodiment, and creates a positive change in empathy.

Walking-in-Flat-Place on Non-flat Virtual Environment can be Sickening!

It is well known that the Walking-in-Place (WIP) interface can significantly reduce VR sickness, in addition to promoting the sense of presence, immersion, and natural interaction. In this poster, we re-examine the conditions under which WIP effectively reduces VR sickness. In particular, we investigate and compare applying WIP to navigation on flat terrain versus up-and-down ramps with respect to the sickness reduction effect. We point out that naively designed WIP navigation content can actually worsen VR sickness due to the sensory and reality mismatch between the flat real operating environment and the inclined virtual terrain.

Whispering salesperson: perceptual illusion of interpersonal distance and ventriloquism effect in service of virtual environment by use of whisper voice

Opportunities to receive and provide services in virtual environments (VEs) continue to increase. In this study, we focused on voice in VEs and investigated the effects of a whisper voice, which is used in intimate relationships, on the interpersonal distance between salesperson and customer and on the effective range of the ventriloquism effect. The experiments showed two effects when the salesperson used a whisper voice: the interpersonal distance between them was significantly smaller than with a normal voice, and the effective range of the ventriloquism effect was significantly larger on the front side and significantly smaller on the back side compared with a normal voice.

XR for Improving Cardiac Catheter Ablation Procedure

Arrhythmia refers to abnormalities of the heart rhythm and is considered a life-threatening pathology. Catheter ablation is a minimally invasive procedure that provides the best therapeutic outcomes for curing arrhythmia. The procedure involves a series of intraoperative and training challenges that could potentially affect its outcome. This study examines how Extended Reality (XR) technologies (AR/VR) can be used to improve the cardiac catheter ablation procedure for electrophysiologists.