SUI '19: Symposium on Spatial User Interaction

Full Citation in the ACM Digital Library

SESSION: Session I: Multimodality

Pursuit Sensing: Extending Hand Tracking Space in Mobile VR Applications

Field-of-view limitations are one of the major persisting setbacks for camera-based motion tracking systems, and the need for flexible ways to improve capture volumes remains. We present Pursuit Sensing, a technique that considerably extends the tracking volume of a camera sensor through self-actuated reorientation using a customized gimbal, enabling a Leap Motion to dynamically follow the user’s hand position in mobile HMD scenarios. This technique provides accessibility and high hardware compatibility for both users and developers while remaining simple and inexpensive to implement. Our technical evaluation shows that the proposed solution successfully increases hand tracking volume by 142% in pitch and 44% in yaw compared to the camera’s base FOV, while featuring low latency and robustness against fast hand movements.
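
The abstract does not spell out the control scheme, so the following is only a rough illustration: keeping the tracked hand centered in the sensor’s field of view can be achieved with a simple proportional controller on the gimbal’s pitch and yaw. The gimbal interface, gain, and deadband below are hypothetical placeholders, not the authors’ implementation.

    import math

    # Minimal sketch (assumptions, not the authors' code): re-aim a two-axis
    # gimbal so a tracked hand stays near the center of the sensor's FOV.
    class Gimbal:
        def __init__(self):
            self.pitch = 0.0  # degrees
            self.yaw = 0.0    # degrees

        def command(self, pitch, yaw):
            self.pitch, self.yaw = pitch, yaw  # stand-in for a servo command

    def update_gimbal(hand_pos, gimbal, k_p=0.8, deadband_deg=2.0):
        """hand_pos: (x, y, z) of the hand in the sensor frame, in meters,
        with +z along the sensor's optical axis (an assumed convention)."""
        x, y, z = hand_pos
        yaw_error = math.degrees(math.atan2(x, z))    # horizontal offset angle
        pitch_error = math.degrees(math.atan2(y, z))  # vertical offset angle
        new_pitch, new_yaw = gimbal.pitch, gimbal.yaw
        if abs(pitch_error) > deadband_deg:           # deadband suppresses jitter
            new_pitch += k_p * pitch_error
        if abs(yaw_error) > deadband_deg:
            new_yaw += k_p * yaw_error
        gimbal.command(new_pitch, new_yaw)

Because the sensor itself moves, hand positions reported in the sensor frame would additionally need to be rotated back by the gimbal’s current pitch and yaw before being used in a head-fixed coordinate system.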

Minuet: Multimodal Interaction with an Internet of Things

A large number of Internet-of-Things (IoT) devices will soon populate our physical environments. Yet, IoT devices’ reliance on mobile applications and voice-only assistants as the primary interface limits their scalability and expressiveness. Building off of the classic ‘Put-That-There’ system, we contribute an exploration of the design space of voice + gesture interaction with spatially-distributed IoT devices. Our design space decomposes users’ IoT commands into two components—selection and interaction. We articulate how the permutations of voice and freehand gesture for these two components can complementarily afford interaction possibilities that go beyond current approaches. We instantiate this design space as a proof-of-concept sensing platform and demonstrate a series of novel IoT interaction scenarios, such as making ‘dumb’ objects smart, commanding robotic appliances, and resolving ambiguous pointing at cluttered devices.

Analysis of Peripheral Vision and Vibrotactile Feedback During Proximal Search Tasks with Dynamic Virtual Entities in Augmented Reality

A primary goal of augmented reality (AR) is to seamlessly embed virtual content into a real environment. There are many factors that can affect the perceived physicality and co-presence of virtual entities, including the hardware capabilities, the fidelity of the virtual behaviors, and sensory feedback associated with the interactions. In this paper, we present a study investigating participants’ perceptions and behaviors during a time-limited search task in close proximity with virtual entities in AR. In particular, we analyze the effects of (i) visual conflicts in the periphery of an optical see-through head-mounted display, a Microsoft HoloLens, (ii) overall lighting in the physical environment, and (iii) multimodal feedback based on vibrotactile transducers mounted on a physical platform. Our results show significant benefits of vibrotactile feedback and reduced peripheral lighting for spatial and social presence, and engagement. We discuss implications of these effects for AR applications.

SESSION: Session II: Virtual Reality & Avatars

Investigating the Effect of Distractor Interactivity for Redirected Walking in Virtual Reality

Due to the mismatch in size between a Virtual Environment and the physical space available, the use of alternative locomotion techniques becomes necessary. In small spaces, Redirected Walking methods provide limited benefits and approaches such as the use of distractors can provide an alternative. Distractors are virtual elements or characters that attempt to catch the attention of the user while the system subtly steers them away from physical boundaries. In this research we explicitly focused on understanding how different levels of interactivity affect user performance and behaviour. We developed three types of continuous redirecting distractors, with varying levels of interaction possibilities, called Looking, Touching, and Interacting. We compared them in a user study to a discrete reorientation technique, called Stop and Reset, in a task requiring users to traverse a 30 m path. While discrete reorientation is faster, continuous redirection through distractors was significantly less noticeable. Results suggest that more complex interaction is preferred and able to better captivate user attention for longer.

LIVE: The Human Role in Learning in Immersive Virtual Environments

This work studies the role of a human instructor within an immersive VR lesson. Our system allows the instructor to perform “contact teaching” by demonstrating concepts through interaction with the environment, and the student to experiment with interaction prompts. We conducted a between-subjects user study with two groups of students: one experienced the VR lesson while immersed together with an instructor; the other experienced the same contents demonstrated through animation sequences simulating the actions that the instructor would take. Results show that the Two-User version received significantly higher scores than the Single-User version in terms of overall preference, clarity, and helpfulness of the explanations. When immersed together with an instructor, users were more inclined to engage and progress further with the interaction prompts, than when the instructor was absent. Based on the analysis of videos and interviews, we identified design recommendations for future immersive VR educational experiences.

Blended Agents: Manipulation of Physical Objects within Mixed Reality Environments and Beyond

Mixed reality (MR) environments allow real users and virtual agents to coexist within the same virtually augmented physical space. While tracking of different body parts such as the user’s head and hands allows virtual objects to show plausible reactions to actions of the real user, virtual agents only have a very limited influence on their physical environment.

In this paper, we introduce the concept of blended agents, which are capable of manipulating physical object properties such as location and surface material. We present two prototypic implementations of virtual-physical interactions using robotic actuators and thermochromic ink. As these two interactions have considerably different characteristics, e.g., with regard to their persistence, explicability, and observability, we performed a user study to investigate their effects on subjective measures such as the agent’s perceived social and spatial presence. In the context of a golf scenario, participants interacted with a blended agent capable of virtual-physical manipulations such as hitting a golf ball and writing on physical paper. A statistical analysis of quantitative data did not yield any significant differences between blended agents and virtual agents without physical capabilities. However, qualitative feedback from the participants indicates that persistent manipulations improve both the perceived realism of the agent and the overall user experience.

SESSION: Session III: Displays

Extending Virtual Reality Display Wall Environments Using Augmented Reality

Two major form factors for virtual reality are head-mounted displays and large display environments such as the CAVE® and its LCD-based successor, CAVE2®. Each of these has distinct advantages and limitations based on how it is used. This work explores preserving the high resolution and sense of presence of CAVE2 environments in full stereoscopic mode by using a see-through augmented reality HMD to expand the user’s field of regard beyond the physical display walls. In our explorative study of a visual search task in a stereoscopic CAVE2, we found that adding the HoloLens to expand the field of regard did not hinder participants’ performance or accuracy, but promoted more physical navigation, which participants reported in post-study interviews aided their spatial awareness of the virtual environment.

Extramission: A Large Scale Interactive Virtual Environment Using Head Mounted Projectors and Retro-reflectors

We present Extramission, a method for creating large-scale interactive virtual environments. It consists of dual head-mounted pico projectors and retro-reflective materials. With high-accuracy retro-reflective materials, laser beams scanned onto the user’s retina produce a clear, focus-free image. In this retinal-scanning configuration, scanned images can be seen clearly even when the projector’s luminance is low, which helps avoid overlap between projected images. Because overlap is small, Extramission can provide multi-user virtual experiences by showing different images to each individual, and the dual pico projectors can provide each user with stereoscopic vision. Moreover, the tolerance for low luminance allows a larger distance between users and retro-reflectors, which is required for large-scale virtual experiences using head-mounted projectors. In this paper, we describe the principle and implementation of Extramission and evaluate its image display performance.

Effects of Dark Mode on Visual Fatigue and Acuity in Optical See-Through Head-Mounted Displays

Light-on-dark color schemes, so-called “Dark Mode,” are becoming more and more popular over a wide range of display technologies and application fields. Many people who have to look at computer screens for hours at a time, such as computer programmers and computer graphics artists, indicate a preference for switching colors on a computer screen from dark text on a light background to light text on a dark background due to perceived advantages related to visual comfort and acuity, specifically when working in low-light environments.

In this paper, we investigate the effects of dark mode color schemes in the field of optical see-through head-mounted displays (OST-HMDs), where the characteristic “additive” light model implies that bright graphics are visible but dark graphics are transparent. We describe a human-subject study in which we evaluated a normal and inverted color mode in front of different physical backgrounds and among different lighting conditions. Our results show that dark mode graphics on OST-HMDs have significant benefits for visual acuity, fatigue, and usability, while user preferences depend largely on the lighting in the physical environment. We discuss the implications of these effects on user interfaces and applications.

SESSION: Session IV: Augmented Reality Modelling & Gaze

Evaluating the Impact of Point Marking Precision on Situated Modeling Performance

Three-dimensional modeling in augmented reality allows the user to create or modify the geometry of virtual content registered to the real world. One way of correctly placing the model is by creating points over real-world features and deriving the model from those points. We investigate the impact of point marking techniques with different levels of precision on the performance of situated modeling, considering accuracy and ease of use. Results from a formal user study indicate that high-precision point marking techniques are needed to ensure the accuracy of the model, while ease of use is affected primarily by perceptual issues. In domains where correctness of the model is critical for user understanding and judgment, higher precision is needed to ensure the usefulness of the application.

Gaze Direction Visualization Techniques for Collaborative Wide-Area Model-Free Augmented Reality

In collaborative tasks, it is often important for users to understand their collaborator’s gaze direction or gaze target. Using an augmented reality (AR) display, a ray representing the collaborator’s gaze can be used to convey such information. In wide-area AR, however, a simplistic virtual ray may be ambiguous at large distances, due to the lack of occlusion cues when a model of the environment is unavailable. We describe two novel visualization techniques designed to improve gaze ray effectiveness by facilitating visual matching between rays and targets (Double Ray technique), and by providing spatial cues to help users understand ray orientation (Parallel Bars technique). In a controlled experiment performed in a simulated AR environment, we evaluated these gaze ray techniques on target identification tasks with varying levels of difficulty. The experiment found that, assuming reliable tracking and an accurate collaborator, the Double Ray technique is highly effective at reducing visual ambiguity, but that users found it difficult to use the spatial information provided by the Parallel Bars technique. We discuss the implications of these findings for the design of collaborative mobile AR systems for use in large outdoor areas.

Effects of Shared Gaze Parameters on Visual Target Identification Task Performance in Augmented Reality

Augmented reality (AR) technologies provide a shared platform for users to collaborate in a physical context involving both real and virtual content. To enhance the quality of interaction between AR users, researchers have proposed augmenting users’ interpersonal space with embodied cues such as their gaze direction. While beneficial in achieving improved interpersonal spatial communication, such shared gaze environments suffer from multiple types of eye tracking and networking errors that can reduce objective performance and subjective experience.

In this paper, we report a human-subject study conducted to understand the impact of accuracy, precision, latency, and dropout errors on users’ performance when using shared gaze cues to identify a target among a crowd of people. We simulated varying amounts of error and varying target distances, and measured participants’ objective performance through their response time and error rate, and their subjective experience and cognitive load through questionnaires. We found significant differences suggesting that the simulated error levels had stronger effects on participants’ performance than target distance, with accuracy and latency having a particularly high impact on participants’ error rate. We also observed that participants assessed their own performance as lower than it objectively was, and we discuss implications for practical shared gaze applications.
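
The four error types lend themselves to a compact simulation model; the sketch below uses an assumed parameterization for illustration and is not the exact error model used in the study.

    import collections
    import random

    # Minimal sketch: degrade a gaze signal with accuracy (constant bias),
    # precision (per-sample jitter), latency (delayed samples), and dropout.
    class GazeErrorSimulator:
        def __init__(self, offset_deg=2.0, jitter_deg=0.5,
                     latency_frames=6, dropout_prob=0.05):
            self.offset = offset_deg        # accuracy: constant angular bias
            self.jitter = jitter_deg        # precision: random angular noise
            self.dropout_prob = dropout_prob
            self.buffer = collections.deque(maxlen=latency_frames + 1)

        def process(self, yaw_deg, pitch_deg):
            """True gaze angles in; out comes the degraded sample a remote
            collaborator would see, or None when the sample is dropped."""
            noisy = (yaw_deg + self.offset + random.gauss(0.0, self.jitter),
                     pitch_deg + random.gauss(0.0, self.jitter))
            self.buffer.append(noisy)
            if random.random() < self.dropout_prob:
                return None                 # dropout: sample never arrives
            return self.buffer[0]           # oldest sample simulates latency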

SESSION: Session V: Perception & Accessibility

Interaction can hurt - Exploring gesture-based interaction for users with Chronic Pain

Chronic Pain is a universal disorder affecting millions of people, influencing even the most basic decisions in their lives. With the computer becoming such an integral part of our society and the interaction paradigm ever expanding, the need to explore potential computer interactions for people with Chronic Pain has only increased. In this paper we explore the use of gesture-based interaction as a medium with which these users can perform the basic operations of computer interaction. We show that, for gestural pointing and selection, modeling users’ interaction space and multimodal interaction performed best in terms of throughput.

Understanding the Effect of the Combination of Navigation Tools in Learning Spatial Knowledge

Spatial knowledge about the environment often helps people accomplish their navigation and wayfinding tasks more efficiently. Off-the-shelf mobile navigation applications often focus on guiding people between two locations, ignoring the importance of learning spatial knowledge. Drawing on theories and findings from the area of learning spatial knowledge, we investigated how background reference frames (RF) and navigational cues can be combined in navigation applications to help people acquire better spatial (route and survey) knowledge. We conducted two user studies in which participants used our custom-designed applications to navigate an indoor location. We found that having more navigational cues in a navigation application does not always assist users in acquiring better spatial knowledge; rather, these cues can be distracting in some specific setups. Users acquire better spatial knowledge only when the navigational cues complement each other in the interface design. We discuss the implications of designing navigation interfaces that assist users in learning spatial knowledge by combining navigational elements in a complementary way.

Effects of Depth Layer Switching between an Optical See-Through Head-Mounted Display and a Body-Proximate Display

Optical see-through head-mounted displays (OST HMDs) typically display virtual content at a fixed focal distance while users need to integrate this information with real-world information at different depth layers. This problem is pronounced in body-proximate multi-display systems, such as when an OST HMD is combined with a smartphone or smartwatch. While such joint systems open up a new design space, they also reduce users’ ability to integrate visual information. We quantify this cost by presenting the results of an experiment (n=24) that evaluates human performance in a visual search task across an OST HMD and a body-proximate display at 30 cm. The results reveal that task completion time increases significantly by approximately 50% and the error rate increases significantly by approximately 100% compared to visual search on a single depth layer. These results highlight a design trade-off when designing joint OST HMD-body proximate display systems.

SESSION: Poster Abstracts

“SkyMap”: World-Scale Immersive Spatial Display

To relate typical survey map features to the real world during navigation, users must make time-consuming, error-prone cognitive transformations in scale and rotation and make frequent realignments over time. In this paper, we introduce SkyMap, a novel immersive display method that presents a world-scaled and world-aligned map above the user, evoking a huge mirror in the sky. This approach, which we have implemented in a VR-based testbed, potentially reduces the cognitive effort associated with survey map use. We discuss first-hand observations and further areas of research. User evaluations comparing performance under various task scenarios are currently under way.

A Comparison of Stairs and Escalators in Virtual Reality

In this poster we present an in-progress study comparing the usage of simulated escalators with simulated stairs within a virtual reality (VR) environment. We found no existing research that examines the usage of escalators in VR. Past research into virtual stairs has examined how to better simulate stairs in a virtual environment (VE) by using external tools that allow an individual to more closely match real-world movements. With virtual stairs, the user moves forward horizontally while the virtual avatar moves horizontally and vertically. With escalators, the user may stand in place to move the same distance within the virtual space, while also closely mimicking the movements they would make on an actual escalator, without requiring additional tools. This experiment will test whether the advantage of escalators requiring less real-world movement is offset by other factors, such as nausea, general discomfort, and the participant’s sense of presence during the simulation.

V-Rod: Floor Interaction in VR

We present a novel cane-based device for interacting with floors in Virtual Reality (VR). We demonstrate its versatility and flexibility in several use-case scenarios such as gaming and menu interaction. Initial feedback from users points towards better control in spatial tasks and increased comfort for tasks that require the user’s arms to be raised or extended for long periods. By including a networked example, we are able to explore the asymmetrical aspect of VR interaction using the V-Rod. We demonstrate that the hardware and circuitry can deliver acceptable performance even for demanding applications. In addition, we propose that using a grounded, passive haptic device gives the user a better sense of balance, thereby decreasing the risk of VR sickness. VR Balance is a game intended to quantify the difference in comfort, intuitiveness, and accuracy when using or not using a grounded passive haptic device.

Virtual Window Manipulation Method for Head-mounted Display Using Smart Device

In this paper, we propose a virtual window manipulation method for information search while using a head-mounted display (HMD). Existing HMD operation methods have several issues, such as causing user fatigue and processing input tasks inefficiently, and these problems are difficult to solve simultaneously. We therefore propose combining a head-tracking cursor with a smart device: the head-tracking cursor is operated by swipe input on the smart device. We compared the operability of this new method with that of classic hand tracking in a user experiment. The results confirmed that the operability of the proposed method is high.

Adjustable Adaptation for Spatial Augmented Reality Workspaces

Many cases in which augmented reality would be useful in everyday life require the ability to access information on the go. This means that interfaces should support user movement and also adjust to different physical environments. Prior research has shown that spatial adaptation can reduce the effort required to manage windows when walking and moving to different spaces. We designed and implemented a unified interaction system for AR windows that allows users to quickly switch and fine-tune spatial adaptation. Our study indicates that a small number of adaptive behaviors is sufficient to facilitate information access in a variety of conditions.

One-Handed Interaction Technique for Single-Touch Gesture Input on Large Smartphones

We propose a one-handed interaction technique using a cursor controlled by touch pressure, enabling users to perform various single-touch gestures such as tap, swipe, drag, and double-tap on unreachable targets. In the proposed technique, cursor mode is started by swiping from the bezel. Touch-down and touch-up events occur at the cursor position when the user increases and decreases touch pressure, respectively. Since touch-down and touch-up are triggered differently but easily by adjusting the thumb’s touch pressure from low to high or vice versa, the user can perform single-touch gestures at the cursor position with the thumb. To investigate the performance of the proposed technique, we conducted a pilot study; the results showed that the proposed technique is promising as a one-handed interaction technique.
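
A minimal sketch of the event mapping described above, with assumed pressure thresholds (the actual values and event plumbing are not given in the abstract):

    # Sketch: map thumb pressure to touch-down / touch-up events at a remote
    # cursor position, using two thresholds (hysteresis) to avoid chattering.
    DOWN_THRESHOLD = 0.6   # normalized pressure that triggers touch-down (assumed)
    UP_THRESHOLD = 0.4     # normalized pressure that triggers touch-up (assumed)

    class PressureCursor:
        def __init__(self):
            self.pressed = False

        def update(self, pressure, cursor_pos, dispatch):
            """pressure: normalized 0..1 thumb pressure; cursor_pos: (x, y) on
            screen; dispatch: callback that receives synthetic touch events."""
            if not self.pressed and pressure >= DOWN_THRESHOLD:
                self.pressed = True
                dispatch("touch_down", cursor_pos)
            elif self.pressed and pressure <= UP_THRESHOLD:
                self.pressed = False
                dispatch("touch_up", cursor_pos)

    # Example: a brief pressure pulse at one position yields a tap.
    cursor, events = PressureCursor(), []
    for p in (0.1, 0.7, 0.7, 0.2):
        cursor.update(p, (900, 300), lambda kind, pos: events.append((kind, pos)))
    # events == [("touch_down", (900, 300)), ("touch_up", (900, 300))]

Under this mapping, a drag is a press, thumb movement (which moves the cursor while pressure stays high), then release.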

Exploring the Effects of Stereoscopic 3D on Gaming Experience Using Physiological Sensors

Past studies have shown that playing games in 3D stereo does not provide any significant performance benefits over using a 2D display. However, most previous studies used games that were not optimized for stereoscopic 3D viewing and relied on self-reported data (excitement level, sense of engagement, etc.) to measure user experience. We propose to study games that are optimized for stereoscopic 3D viewing and to use physiological sensors (an EEG and a heart rate monitor) to better gauge the user’s experience with these games. Our preliminary results reveal that stereo 3D does provide benefits in tasks where depth information is useful for the game task at hand. Additionally, participants in the 3D group had lower levels of stress and higher heart rates, indicating a higher sense of engagement and presence under stereoscopic 3D conditions.

Gaze Data Visualizations for Educational VR Applications

VR head-mounted displays (HMDs) with embedded eye trackers could enable better teacher-guided VR applications, since eye tracking can provide insights into students’ activities and behavior patterns. We present several techniques for visualizing students’ eye-gaze data to help a teacher gauge student attention levels. A teacher could then better guide students to focus on the object of interest in the VR environment if their attention drifts and they become distracted or confused.

Mixed-Reality Exhibition for Museum of Peace Corps Experiences using AHMED toolset

We present a mixed-reality exhibition for the Museum of Peace Corps Experiences designed using the Ad-Hoc Mixed-reality Exhibition Designer (AHMED) toolset. AHMED enables visitors to experience mixed-reality museum or art exhibitions created ad hoc at any location. The system democratizes access to exhibitions for populations that cannot visit them in person for reasons of disability, time constraints, travel restrictions, or socio-economic status.

SIGMA: Spatial Interaction Gaming for Movie- and Arena-goers

We present SIGMA, a mass interaction system for playing games in movie theatres and arenas. SIGMA uses players’ smartphones as spatial game controllers. The games for SIGMA use novel techniques for aggregating mass interactions, which we introduce using a “Little Red Riding Hood” interactive storybook as a case study.

Strafing Gain: A Novel Redirected Walking Technique

Redirected walking enables natural locomotion in virtual environments that are larger than the user’s real world space. However, in complex setups with physical obstacles, existing redirection techniques that were originally designed for empty spaces may be sub-optimal. This poster presents strafing gains, a novel redirected walking technique that can be used to shift the user laterally away from obstacles without disrupting their current orientation. In the future, we plan to conduct a study to identify perceptual detection thresholds and investigate new algorithms that can use strafing gains in combination with other existing redirection techniques to achieve superior obstacle avoidance in complex physical spaces.
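
As a rough illustration of the idea (not the authors’ formal definition), a strafing gain can be modeled as a small lateral offset injected into the virtual viewpoint in proportion to the user’s real forward movement, with orientation left untouched:

    import numpy as np

    # Sketch: per-frame strafing gain. Vector conventions and the gain value
    # are illustrative assumptions.
    def apply_strafing_gain(real_delta, forward, right, gain=0.1):
        """real_delta: real head translation this frame (3-vector, meters).
        forward, right: unit vectors of the user's current heading.
        Returns the translation to apply to the virtual viewpoint."""
        forward_dist = float(np.dot(real_delta, forward))  # forward component
        lateral_offset = gain * forward_dist * right       # injected sideways drift
        return real_delta + lateral_offset                 # orientation unchanged

In practice the gain would have to stay below the perceptual detection thresholds the planned study aims to identify, so that the injected drift remains unnoticeable to the user.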

Remote Robotic Arm Teleoperation through Virtual Reality

In this work, a spatial interface was designed and evaluated for enabling effective teleoperation of bi-manual robotic manipulators. Previous work in this area has investigated using immersive virtual reality systems to provide more natural, intuitive spatial control and viewing of the remote robot workspace. The current work builds upon this research through the design of the teleoperator interface and by additionally studying how varying the spatial interaction metaphor and the devices employed to control the robot impacts task performance. A user study was conducted with 33 novice teleoperators split into two groups by the interaction metaphor used to control the robot end-effectors, one group using a grabbing metaphor with tracked motion controllers (Oculus Touch) and the other using a driving metaphor with two fixed 6-axis controllers (3Dconnexion SpaceMouse). Results indicated that, despite the challenging task, both interfaces were highly effective for bimanual teleoperation, but that motion controls provided higher peak performance, likely due to faster gross movement planning.

Improving Usability, Efficiency, and Safety of UAV Path Planning through a Virtual Reality Interface

As the capability and complexity of UAVs continue to increase, specifying the complex 3D flight paths needed to instruct them becomes more complicated. Immersive interfaces, such as those afforded by virtual reality (VR), have several unique traits that may improve the user’s ability to perceive and specify 3D information. These traits include stereoscopic depth cues, which induce a sense of physical space, as well as six-degree-of-freedom (DoF) natural head-pose and gesture interactions. This work introduces an open-source platform for 3D aerial path planning in VR and compares it to existing UAV piloting interfaces. Our study found statistically significant improvements in safety over a manual control interface and in efficiency over a 2D touchscreen interface. The results illustrate that immersive interfaces provide a viable alternative to touchscreen interfaces for UAV path planning.

Preliminary Study of Screen Extension for Smartphone Using External Display

There are techniques for showing a smartphone’s content at a larger size on an external display. However, since smartphones are designed for mobility, seamless interaction is necessary for a smartphone to make the best use of an external display. We are currently exploring the feasibility of another technique, which we call Screen Extension. Our technique seamlessly adds display space to a smartphone using an external display, allowing users to take advantage of displays available in many places. To test search performance with Screen Extension, we conducted a pilot study, which suggested that Screen Extension helps users find content faster.

SESSION: Demo Abstracts

Visual Cues to Restore Student Attention based on Eye Gaze Drift, and Application to an Offshore Training System

Drifting student attention is a common problem in educational environments. We demonstrate 8 attention-restoring visual cues for display when eye tracking detects that student attention shifts away from critical objects. These cues include novel aspects and variations of standard cues that performed well in prior work on visual guidance. Our cues are integrated into an offshore training system on an oil rig. While students participate in training on the oil rig, we can compare our various cues in terms of performance and student preference, while also observing the impact of eye tracking. We demonstrate experiment software with which users can compare various cues and tune selected parameters for visual quality and effectiveness.

Object Manipulation by Absolute Pointing with a Smartphone Gyro Sensor

The purpose of this study is to enable people to operate the various computers around them using their own smartphones. Methods for operating computers around the home, such as Internet of Things (IoT) appliances, by voice are now widespread. However, operation by voice has problems: it is limited in the instruction patterns that can be expressed, and it cannot be used simultaneously by many users. To address these problems, we propose a method that determines the location pointed to by a user with a smartphone gyro sensor. This method achieves controller integration, multiple functions, and simultaneous use by multiple people.
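
A minimal sketch of how such absolute pointing could be resolved, assuming the phone’s orientation is available as a rotation matrix and device positions in the room are known; the names, axis convention, and tolerance are illustrative assumptions only:

    import numpy as np

    # Sketch: cast a ray along the phone's pointing axis and select the known
    # device closest to that ray (by angular distance).
    def pointed_device(phone_pos, phone_rotation, devices, max_angle_deg=10.0):
        """phone_pos: phone position in room coordinates (3-vector).
        phone_rotation: 3x3 rotation matrix from gyro/IMU sensor fusion.
        devices: dict mapping a device name to its 3D position."""
        ray = phone_rotation @ np.array([0.0, 0.0, -1.0])  # assumed local forward axis
        best, best_angle = None, max_angle_deg
        for name, pos in devices.items():
            to_device = np.asarray(pos, dtype=float) - np.asarray(phone_pos, dtype=float)
            to_device /= np.linalg.norm(to_device)
            angle = np.degrees(np.arccos(np.clip(np.dot(ray, to_device), -1.0, 1.0)))
            if angle < best_angle:
                best, best_angle = name, angle
        return best  # None if nothing lies within the angular tolerance

A disambiguation step (for example, a confirmation prompt) would be needed when several devices fall within the angular tolerance.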

An Adaptive Interface for Spatial Augmented Reality Workspaces

A promising feature of wearable augmented reality devices is the ability to easily access information on the go. However, designing AR interfaces that can support user movement and also adjust to different physical environments is a challenging task. We present an interaction system for AR windows that uses adaptation to automatically perform low-level window movement while allowing high-level user control.

A Viewpoint Control Method for 360° Media Using Helmet Touch Interface

We have developed a helmet touch interface for viewpoint control of 360° media. The user of this interface can control the camera in 360° media by touching the surface of the helmet. Touch is detected by two microcontrollers and 54 capacitive touch sensor points mounted on the interface surface.

Collaborative Interaction in Large Explorative Environments

Building collaborative VR applications for exploring and interacting with large or abstract spaces presents several problems. Given a large space and a potentially large number of possible interactions, it is expected that users will need a tool selection menu that will be easily accessible at any point in the environment. Given the collaborative nature, users will also want to be able to maintain awareness of each other within the environment and communicate about what they are seeing or doing. We present a demo that shows solutions to these problems developed in the context of a collaborative geological dataset viewer.

A Social Interaction Interface Supporting Affective Augmentation Based on Neuronal Data

In this demonstration we present a prototype for an avatar-mediated social interaction interface that supports the replication of head and eye movements in distributed virtual environments. In addition to retargeting these natural behaviors, the system is capable of augmenting the interaction through the visual presentation of affective states. We derive those states from neuronal data captured by electroencephalographic (EEG) sensing in combination with a machine-learning-driven classification of emotional states.