VRST '18: Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology

Full Citation in the ACM Digital Library

SESSION: UI & display

Sublime: a hands-free virtual reality menu navigation system using a high-frequency SSVEP-based brain-computer interface

In this work we present Sublime, a new concept of Steady-State Visually Evoked Potential (SSVEP) based Brain-Computer Interface (BCI) in which brain-computer communication occurs by capturing imperceptible visual stimuli integrated into the virtual scene, effortlessly conveying subliminal information to a computer. The technology was tested in a Virtual Reality (VR) environment, where the subject could navigate between the different menus by just gazing at them. The ratio between the stimuli frequencies and the refresh rate of the VR display creates an undesired perception of beats, for which different solutions are proposed. To inform the user of target activation, real-time feedback in the form of loading bars is incorporated under each selectable object. We conducted experiments with several subjects and, though the system is slower than a conventional joystick, users reported a satisfactory overall experience, in part due to the unexpected responsiveness of the system, as well as to the fact that virtual objects flickered at a rate that did not cause annoyance. Since the imperceptible visual stimuli can be integrated unobtrusively into any element of the virtual world, we conclude that the potential applications of Sublime are extensive, especially in situations where knowing the user's visual focus is relevant.

Audio-tactile proximity feedback for enhancing 3D manipulation

In the presence of conflicting or ambiguous visual cues in complex scenes, performing 3D selection and manipulation tasks can be challenging. To improve motor planning and coordination, we explore audio-tactile cues to inform the user about the presence of objects in hand proximity, e.g., to avoid unwanted object penetrations. We do so through a novel glove-based tactile interface, enhanced by audio cues. Through two user studies, we illustrate that proximity guidance cues improve spatial awareness, hand motions, and collision avoidance behaviors, and show how proximity cues in combination with collision and friction cues can significantly improve performance.

Tactile hand motion and pose guidance for 3D interaction

We present a novel forearm-and-glove tactile interface that can enhance 3D interaction by guiding hand motor planning and coordination. In particular, we aim to improve hand motion and pose actions related to selection and manipulation tasks. Through our user studies, we illustrate how tactile patterns can guide the user, by triggering hand pose and motion changes, for example to grasp (select) and manipulate (move) an object. We discuss the potential and limitations of the interface, and outline future work.

Standards-compliant HTTP adaptive streaming of static light fields

Static light fields are an effective technology to precisely visualize complex inanimate objects or scenes, synthetic and real-world alike, in Augmented, Mixed and Virtual Reality contexts. Such light fields are commonly sampled as a collection of 2D images. This sampling methodology inevitably gives rise to large data volumes, which in turn hampers real-time light field streaming over best-effort networks, particularly the Internet. This paper advocates the packaging of the source images of a static light field as a segmented video sequence so that the light field can then be interactively network-streamed in a quality-variant fashion using MPEG-DASH, the standardized HTTP Adaptive Streaming scheme adopted by leading video streaming services like YouTube and Netflix. We explain how we appropriate MPEG-DASH for the purpose of adaptive static light field streaming and present experimental results that prove the feasibility of our approach, not only from a networking but also from a rendering perspective. In particular, real-time rendering performance is achieved by leveraging video decoding hardware included in contemporary consumer-grade GPUs. Important trade-offs that impact performance are investigated and reported on, both network-wise (e.g., the applied sequencing order and segmentation scheme for the source images of the static light field) and rendering-wise (e.g., disk- versus GPU-caching of source images). By adopting a standardized transmission scheme and by exclusively relying on commodity graphics hardware, the net result of our work is an interoperable and broadly deployable network streaming solution for static light fields.
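
The packaging idea can be illustrated with a small sketch: views of the camera grid are linearized into a segment sequence, and each segment is fetched at a bitrate chosen from the measured throughput. The URL scheme, grid size, segment length and bitrate ladder below are illustrative assumptions, not the paper's actual configuration.

# Sketch: quality-adaptive retrieval of static light field views packaged as
# DASH-like video segments (all names and parameters are assumptions).

VIEWS_U, VIEWS_V = 17, 17          # camera grid of the light field
VIEWS_PER_SEGMENT = 17             # e.g. one grid row per video segment
REPRESENTATIONS = {                # available encodings (kbit/s -> name)
    500: "lf_500k", 2000: "lf_2000k", 8000: "lf_8000k",
}

def snake_order(u_count, v_count):
    """Linearize the 2D camera grid so neighboring views stay adjacent."""
    order = []
    for v in range(v_count):
        row = [(u, v) for u in range(u_count)]
        order.extend(row if v % 2 == 0 else reversed(row))
    return order

def pick_representation(throughput_kbps, safety=0.8):
    """Classic rate-based adaptation: highest bitrate below the safe throughput."""
    feasible = [r for r in REPRESENTATIONS if r <= throughput_kbps * safety]
    return REPRESENTATIONS[max(feasible)] if feasible else REPRESENTATIONS[min(REPRESENTATIONS)]

def segment_url(view_index, throughput_kbps):
    """Build the URL of the segment containing a given view."""
    seg = view_index // VIEWS_PER_SEGMENT
    rep = pick_representation(throughput_kbps)
    return f"https://example.com/lightfield/{rep}/segment_{seg:04d}.m4s"

if __name__ == "__main__":
    order = snake_order(VIEWS_U, VIEWS_V)
    target_view = order.index((8, 8))          # view the renderer needs next
    print(segment_url(target_view, throughput_kbps=5500))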

Design and implementation of a multi-person fish-tank virtual reality display

A mixed reality experience with a physical display that situates 3D virtual content within the real world has the potential to help people work and play with 3D information. However, almost all such "fish tank virtual reality" (FTVR) systems have been isolated to a single-person experience, making them unsuitable for collaborative tasks. In this paper, we present a display system that allows two people to have unobstructed 3D perspective views into a spherical display while still being able to see and talk to one another. We evaluated the system through qualitative observation at a four-day exhibition and found it was effective for providing a convincing, shared 3D experience.

VR safari park: a concept-based world building interface using blocks and world tree

We present a concept-based world building approach, realized in a system called VR Safari Park, which allows users to rapidly create and manipulate a world simulation. Conventional world building tools focus on the manipulation and arrangement of entities to set up the simulation, which is time-consuming as it requires frequent view and entity manipulations. Our approach focuses on a far simpler mechanic, where users add virtual blocks representing world entities (e.g. animals, terrain, weather, etc.) to a World Tree, which represents the simulation. In so doing, the World Tree provides a quick overview of the simulation, and users can easily set up scenarios without having to manually perform fine-grained manipulations on world entities. A preliminary user study found that the proposed interface is effective and usable for novice users without prior immersive VR experience.

Balloonygen: extended tabletop display embedded with balloon-like deformable spherical screen

Balloonygen, an extended tabletop display embedded with a balloon-like deformable spherical screen, is a display that can seamlessly present a spherical screen for three-dimensional content, such as omnidirectional images, within a conventional flat display. By continuously morphing between a two-dimensional tabletop shape and a three-dimensional spherical shape, it allows the benefits of a flat display and a spherical display to coexist and offers a smoother approach to information sharing. Balloonygen dynamically provides an appropriate way to display content by inflating a rubber membrane installed at the center of a tabletop display and morphing between the two- and three-dimensional shapes. In this study, through prototyping and the design of application scenarios, we discuss the advantages and disadvantages of this display and the interactions it makes possible.

SESSION: AR / MR

Comparison of the usability of a car infotainment system in a mixed reality environment and in a real car

Instead of installing new control modes for infotainment systems in a real vehicle for testing, it is an attractive idea (saving time and cost) to evaluate and develop these systems in a mixed reality (MR) environment. The central question of the study is whether the usability evaluation of a car infotainment system within an MR environment provides the same results as the evaluation of the system within a real car. For this purpose, a prototypical car infotainment system was built and integrated into a real car and into an MR environment. The MR environment represents the interior of the car and uses finger tracking and the real haptic control elements of the car's center console. Two test groups were assigned to the two different test environments. The study shows that usability is rated similarly in both environments, although readability and representation within the infotainment system are problematic.

Camera time warp: compensating latency in video see-through head-mounted-displays for reduced cybersickness effects

We introduce Camera Time Warp (CamWarp), a novel reprojection technique for video-see-through augmented reality, which reduces the registration error between captured real-world videos and rendered virtual images. Instead of rendering the image plane locked to the virtual camera, CamWarp renders the image plane at the real-world position it was captured at, and compensates for potential artifacts. We conducted two experiments to evaluate the effectiveness of CamWarp. In the first experiment participants were asked to report subjective discomfort while moving their head in a pattern inspired by the ISO 9241-9 Fitts' Law task at different speeds while the video feed was rendered at varying frame rates. The results show that the technique can significantly reduce subjective levels of discomfort and cybersickness symptoms for all tested configurations. In the second experiment participants were asked to move physical objects on a projected path as quickly and precisely as possible. Results show a positive effect of CamWarp on speed and accuracy.
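
The core idea of rendering the camera frame at the pose it was captured at, rather than locked to the current virtual camera, can be sketched as follows. This is a minimal illustration assuming a simple pinhole model and a right-handed, -Z-forward convention; the matrix layout, focal parameters and draw call are assumptions, not the authors' implementation.

# Sketch of a CamWarp-style composition step.
import numpy as np

def quad_model_matrix(captured_pose, depth=2.0, fov_y_deg=90.0, aspect=1.0):
    """4x4 model matrix of an image quad positioned 'depth' meters in front of
    the camera pose (4x4 world-from-camera) at which the frame was grabbed."""
    half_h = depth * np.tan(np.radians(fov_y_deg) / 2.0)
    half_w = half_h * aspect
    scale = np.diag([half_w, half_h, 1.0, 1.0])
    translate = np.eye(4)
    translate[2, 3] = -depth                      # quad sits in front of the camera (-Z forward)
    return captured_pose @ translate @ scale

def render_frame(current_view, current_proj, captured_pose, frame_texture):
    """Per-frame composition: the video quad uses the *captured* pose, while the
    virtual content uses the *current* pose, so head motion during the camera
    latency no longer drags the real-world image along with the viewpoint."""
    mvp_video = current_proj @ current_view @ quad_model_matrix(captured_pose)
    # draw_textured_quad(mvp_video, frame_texture)   # hypothetical draw call
    return mvp_video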

Performer vs. observer: whose comfort level should we consider when examining the social acceptability of input modalities for head-worn display?

The popularity of head-worn display (HWD) technologies such as Virtual Reality (VR) and Augmented Reality (AR) headsets is growing rapidly. To predict their commercial success, it is essential to understand the acceptability of these new technologies, along with the new methods used to interact with them. In this vein, the evaluation of the social acceptability of interactions with these technologies has received significant attention, particularly from the performer's (i.e., user's) viewpoint. However, little work has considered social acceptability concerns from the observers' (i.e., spectators') perspective. Although HWDs are designed to be personal devices, interacting with their interfaces is often quite noticeable, making them an ideal platform to contrast performer and observer perspectives on social acceptability. Through two studies, this paper contrasts performers' and observers' perspectives on the social acceptability of interactions with HWDs under different social contexts. Results indicate similarities as well as differences in acceptability, and advocate for the importance of including both perspectives when exploring the social acceptability of emerging technologies. We provide guidelines for understanding social acceptability specifically from the observers' perspective, thus complementing the practices currently used for understanding the acceptability of interacting with these devices.

Merging environments for shared spaces in mixed reality

In virtual reality, a real walking interface limits the extent of a virtual environment to our local walkable space. As local spaces are specific to each user, sharing a virtual environment with others for collaborative work or games becomes complicated. It is not clear which user's walkable space to prefer, or whether that space will be navigable for both users.

This paper presents a technique which allows users to interact in virtual reality while each has a different walkable space. With this method mappings are created between pairs of environments. Remote users are then placed in the local environment as determined by the corresponding mapping.

A user study was conducted with 38 participants. Pairs of participants were invited to collaborate on a virtual reality puzzle-solving task while in two different virtual rooms. An avatar representing the remote user was mapped into the local user's space. The results suggest that collaborative systems can be based on local representations that are actually quite different.

Perceived weight of a rod under augmented and diminished reality visual effects

In practice, we can use augmented reality (AR) and diminished reality (DR) in combination. However, to the best of our knowledge, there is no research validating cross-modal effects across AR and DR. Our research interest here is to investigate how continuous visual changes between AR and DR alter our weight sensation of an object. In this paper, we built a system that can continuously extend and reduce the visual extent of real objects using AR and DR renderings, to confirm that users can perceive things as heavier and lighter than they actually are, in the same manner as the size-weight illusion (SWI). In contrast to existing research in which either AR or DR visual effects were used, we validated one of the cross-modal effects in the context of continuous AR and DR visuo-haptic interaction. Regarding the weight sensation, we found that this cross-modal effect can be approximated by a continuous linear relationship between the weight and the length of real objects. Our experimental results suggest that the weight sensation is closely related to the position of the center of gravity (CoG), and that the perceived CoG positions lie within the object's extent under the examined conditions.

Tracking projection mosaicing by synchronized high-speed optical axis control

Projectors, as information display devices, have improved substantially, and achieving both a wide projection range and high resolution is desirable for following the dynamic human gaze. However, a fixed projector faces a trade-off between the angle of projection and resolution, given its limited pixels. Conventional methods with dynamic optical axis control do not exploit the potential speed of the devices. We propose tracking projection mosaicing with a high-speed projector and a high-speed optical axis controller for a randomly moving position, such as the gaze. We also propose a synchronization strategy based on queuing and alternating operations to reduce motion-based artifacts, which realizes high-quality static image projection during dynamic optical axis control. We have experimentally validated the geometric and temporal consistency of the proposed synchronization method and have demonstrated tracking projection mosaicing for the dynamically moving bright spot of a laser pointer.

Gaze navigation in the real world by changing visual appearance of objects using projector-camera system

This paper proposes a method for gaze navigation in the real world by projecting an image onto a real object and changing its appearance. In the proposed method, a camera captures an image of objects in the real world. Next, all pixels in the image except those in a specified region are slightly shifted to the left and right. The resulting image is then projected onto the original objects. As a result, the objects outside the specified region look blurred. We conducted user experiments and showed that users' gaze was guided to the specified region.
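
The described appearance change is simple enough to sketch directly: pixels outside the region of interest are averaged with copies of themselves shifted left and right, then the result is handed to the projector. The shift amount, the rectangular region format and the camera/projector calls are illustrative assumptions.

# Sketch: blur everything outside the target region before re-projection.
import numpy as np

def blur_outside_region(image, roi, shift=2):
    """image: HxWx3 uint8 array; roi: (x0, y0, x1, y1) kept sharp."""
    shifted_left = np.roll(image, -shift, axis=1)
    shifted_right = np.roll(image, shift, axis=1)
    blurred = (image.astype(np.uint16) + shifted_left + shifted_right) // 3
    out = blurred.astype(np.uint8)
    x0, y0, x1, y1 = roi
    out[y0:y1, x0:x1] = image[y0:y1, x0:x1]   # keep the target region unmodified
    return out

# Usage (hypothetical devices): frame = camera.read()
# projector.show(blur_outside_region(frame, (200, 150, 320, 240)))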

SESSION: Human & machine visual perception

Eyestrain impacts on learning job interview with a serious game in virtual reality: a randomized double-blinded study

Purpose: This study explores eyestrain and its possible impacts on learning performance and quality of experience using different apparatuses and imaging. Materials and Methods: 69 participants played a serious game simulating a job interview with a Samsung Gear VR Head-Mounted Display (HMD) or a computer screen. The study was conducted according to a double-blinded protocol. Participants were randomly assigned to 3 groups: PC, HMD biocular and HMD stereoscopy (S3D). Participants played the game twice, allowing between-group analyses. Eyestrain was assessed pre- and post-exposure on a chin-head rest with optometric measures. Learning traces were obtained in-game by registering response time and scores. Quality of experience was measured with questionnaires assessing Presence, Flow and Visual Comfort. Results: Eyestrain was significantly higher with HMDs than with the PC, based on the Punctum Proximum of accommodation and visual acuity variables, and tends to be higher with S3D. Learning was more efficient in the HMD conditions based on time for answering, but the stereoscopy group performed worse than the biocular imaging one. Quality of experience, as measured by visual discomfort, was better in the PC condition than with HMDs. Conclusion: Learning expected answers for a job interview is more efficient while using HMDs than a computer screen. However, eyestrain tends to be higher while using HMDs and S3D. The quality of experience was also negatively impacted with HMDs compared to the computer screen. Not using S3D, or lowering its impact, should be explored to provide a comfortable learning experience.

Keep my head on my shoulders!: why third-person is bad for navigation in VR

Head-Mounted Displays are useful for placing users in virtual reality (VR). They do this by totally occluding the physical world, including users' bodies. This can make self-awareness problematic. Indeed, researchers have shown that users' feeling of presence and spatial awareness are highly influenced by their virtual representations, and that self-embodied representations (avatars) of their anatomy can make the experience more engaging. On the other hand, recent user studies show a penchant towards a third-person view of one's own body to seemingly improve spatial awareness. However, due to its unnaturalness, we argue that a third-person perspective is not as effective or convenient as a first-person view for task execution in VR. In this paper, we investigate, through a user evaluation, how these perspectives affect task performance and embodiment, focusing on navigation tasks, namely walking while avoiding obstacles. For each perspective, we also compare three different levels of realism for the users' representation, specifically a stylized abstract avatar, a mesh-based generic human, and a real-time point-cloud rendering of the users' own body. Our results show that only when a third-person perspective is coupled with a realistic representation is a similar sense of embodiment and spatial awareness felt. In all other cases, a first-person perspective is still better suited for navigation tasks, regardless of representation.

MR video fusion: interactive 3D modeling and stitching on wide-baseline videos

A major challenge facing camera networks today is how to effectively organize and visualize videos in the presence of complicated network connections and an overwhelming, ever-increasing amount of data. Previous works focus on 2D stitching or dynamic projection onto 3D models, such as panoramas and Augmented Virtual Environments (AVE), and have not provided an ideal solution. We present a novel method for fusing multiple videos in a 3D environment, which produces highly comprehensive imagery and yields a spatio-temporally consistent scene. Users initially interact with a newly designed background model, called the video model, to register and stitch the videos' background frames offline. The method then fuses the offline results to render the videos in real time. We demonstrate our system on three real scenes, each of which contains dozens of wide-baseline videos. The experimental results show that our 3D modeling interface, developed with the presented model and method, can efficiently assist users in seamlessly integrating videos, with less operating complexity and a more accurate 3D environment than commercial off-the-shelf software. Our stitching method is much more robust to position, orientation and attribute differences among videos than state-of-the-art methods. More importantly, this study sheds light on how to use 3D techniques to solve 2D problems in realistic settings, and we validate the feasibility of doing so.

Dynamic HDR environment capture for mixed reality

Rendering accurate and convincing virtual content into mixed reality (MR) scenes requires detailed illumination information about the real environment. In existing MR systems, this information is often captured using light probes [1, 8, 9, 17, 19--21], or by reconstructing the real environment as a preprocess [31, 38, 54]. We present a method for capturing and updating an HDR radiance map of the real environment and tracking camera motion in real time using a self-contained camera system, without prior knowledge about the real scene. The method is capable of producing plausible results immediately and improving in quality as more of the scene is reconstructed. We demonstrate how this can be used to render convincing virtual objects whose illumination changes dynamically to reflect the changing real environment around them.

An evaluation of pupillary light response models for 2D screens and VR HMDs

Pupil diameter changes have been shown to be indicative of user engagement and cognitive load for various tasks and environments. However, it is still not the preferred physiological measure for applied settings. This reluctance to leverage the pupil as an index of user engagement stems from the problem that in scenarios where scene brightness cannot be controlled, the pupil light response confounds the cognitive-emotional response. What if we could predict the light response of an individual's pupil, thus creating the opportunity to factor it out of the measurement? In this work, we lay the groundwork for this research by evaluating three models of pupillary light response in 2D, and in a virtual reality (VR) environment. Our results show that either a linear or an exponential model can be fit to an individual participant with an easy-to-use calibration procedure. This work opens several new research directions in VR relating to performance analysis and inspires the use of eye tracking beyond gaze as a pointer and foveated rendering.
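
The two model families mentioned above (linear and exponential) can be fitted per participant from a handful of calibration samples, as in this minimal sketch. The calibration values, parameterizations and the correction helper are illustrative assumptions, not the paper's procedure.

# Sketch: fit linear and exponential pupillary light response models.
import numpy as np
from scipy.optimize import curve_fit

def linear_model(luminance, a, b):
    return a * luminance + b

def exponential_model(luminance, a, b, c):
    return a * np.exp(-b * luminance) + c

# Calibration: show uniform gray screens of known luminance and record the
# steady-state pupil diameter for each (values below are made up).
luminance = np.array([5.0, 20.0, 60.0, 120.0, 250.0])     # cd/m^2
pupil_mm = np.array([6.1, 5.2, 4.1, 3.4, 2.9])            # measured diameters

lin_params, _ = curve_fit(linear_model, luminance, pupil_mm)
exp_params, _ = curve_fit(exponential_model, luminance, pupil_mm, p0=(4.0, 0.01, 2.5))

def light_corrected_signal(measured_mm, scene_luminance, params=exp_params):
    """Subtract the predicted light-driven diameter so that what remains can be
    attributed to cognitive or emotional load rather than scene brightness."""
    return measured_mm - exponential_model(scene_luminance, *params)

print(light_corrected_signal(4.8, 60.0))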

Occlusion handling using semantic segmentation and visibility-based rendering for mixed reality

Real-time occlusion handling is a major problem in outdoor mixed reality systems because it requires great computational cost, mainly due to the complexity of the scene. Using only segmentation, it is difficult to accurately render a virtual object occluded by complex objects such as vegetation. In this paper, we propose a novel occlusion handling method for real-time mixed reality given a monocular image and an inaccurate depth map. We modify the intensity of the overlaid CG object based on the texture of the underlying real scene using visibility-based rendering. To determine the appropriate level of visibility, we use CNN-based semantic segmentation and assign labels to the real scene based on the complexity of object boundaries and texture. We then combine the segmentation results with the foreground probability map from the depth image to solve for the appropriate blending parameter for visibility-based rendering. Our results show improvement in handling occlusions under inaccurate foreground segmentation compared to existing blending-based methods.
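
One way to picture visibility-based blending is as a per-pixel alpha that combines the depth-derived foreground probability with a visibility level assigned to the semantic class of the underlying real pixel. The class visibility table and the specific combination rule below are rough assumptions for illustration, not the paper's actual formulation.

# Sketch: per-pixel blending of a rendered virtual object with the real image.
import numpy as np

CLASS_VISIBILITY = {"sky": 1.0, "road": 1.0, "building": 0.9, "vegetation": 0.4}

def blend(real_rgb, virtual_rgb, virtual_mask, fg_prob, labels):
    """real_rgb/virtual_rgb: HxWx3 float in [0,1]; virtual_mask: HxW bool;
    fg_prob: HxW probability that the real pixel lies in front of the object;
    labels: HxW array of semantic class names."""
    visibility = np.vectorize(lambda c: CLASS_VISIBILITY.get(c, 1.0))(labels)
    # The object is attenuated where the real scene is likely in front of it,
    # and softened further for classes with complex boundaries such as vegetation.
    alpha = virtual_mask * (1.0 - fg_prob) * visibility
    return alpha[..., None] * virtual_rgb + (1.0 - alpha[..., None]) * real_rgb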

SESSION: Presence

Co-presence and proxemics in shared walkable virtual environments with mixed colocation

The purpose of the experiment presented in this paper is to investigate co-presence and locomotory patterns in a walkable shared virtual environment. In particular, trajectories of users who use a walkable tracking space alone are compared to those of users who use the tracking space in pairs. Co-presence, in the sense of perceiving another person as being present in the same virtual space, is analyzed through subjective responses and behavioral markers. The results indicate that both perception and proxemics differ in relation to co-located and distributed players. The effect on perception is, however, mitigated if participants do not collide with the avatars of distributed co-players.

A longitudinal study of small group interaction in social virtual reality

Now that high-end consumer phones can support immersive virtual reality, we ask whether social virtual reality is a promising medium for supporting distributed groups of users. We undertook an exploratory in-the-wild study using Samsung Gear VR headsets to see how existing social groups that had become geographically dispersed could use VR for collaborative activities. The study showed a strong propensity for users to feel present and engaged with group members. Users were able to bring group behaviors into the virtual world. To overcome some technical limitations, they had to create novel forms of interaction. Overall, the study found that users experience a range of emotional states in VR that are broadly similar to those that they would experience face-to-face in the same groups. The study highlights the transferability of existing social group dynamics in VR interactions but suggests that more work would need to be done on avatar representations to support some intimate conversations.

Human upper-body inverse kinematics for increased embodiment in consumer-grade virtual reality

Having a virtual body can increase embodiment in virtual reality (VR) applications. However, consumer-grade VR falls short of delivering sufficient sensory information for full-body motion capture. Consequently, most current VR applications do not even show arms, although they are often in the field of view. We address this shortcoming with a novel human upper-body inverse kinematics algorithm specifically targeted at tracking from head and hand sensors only. We present heuristics for elbow positioning depending on the shoulder-to-hand distance and for avoiding unnatural joint limits. Our results show that our method increases accuracy compared to general inverse kinematics applied to human arms with the same tracking input. In a user study, participants preferred our method over displaying disembodied hands without arms, but also over a more expensive motion capture system. In particular, our study shows that virtual arms animated with our inverse kinematics system can be used for applications involving heavy arm movement. We demonstrate that our method can not only be used to increase embodiment, but can also support interaction involving arms or shoulders, such as holding up a shield.
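
The flavor of such an elbow heuristic can be sketched with standard two-bone inverse kinematics plus a swivel direction that depends on the shoulder-to-hand distance. All constants and the specific blending rule are illustrative assumptions, not the authors' tuned heuristics.

# Sketch: elbow placement from head/hand tracking via a distance-based swivel.
import numpy as np

UPPER_ARM, FOREARM = 0.30, 0.28        # segment lengths in meters (assumed)

def elbow_position(shoulder, hand, body_right, body_down):
    """Two-bone IK with a heuristic swivel: the closer the hand is to the
    shoulder, the more the elbow drops down; at full reach it moves outward."""
    d = np.linalg.norm(hand - shoulder)
    d = np.clip(d, 1e-4, UPPER_ARM + FOREARM - 1e-4)
    # Law of cosines: distance from the shoulder to the elbow's projection.
    a = (UPPER_ARM**2 - FOREARM**2 + d**2) / (2 * d)
    h = np.sqrt(max(UPPER_ARM**2 - a**2, 0.0))
    axis = (hand - shoulder) / d
    reach = d / (UPPER_ARM + FOREARM)
    swivel = (1.0 - reach) * body_down + reach * body_right
    swivel -= axis * np.dot(swivel, axis)            # project onto the elbow circle plane
    swivel /= np.linalg.norm(swivel) + 1e-9
    return shoulder + axis * a + swivel * h

# Example: right arm, hand held in front of the chest.
shoulder = np.array([0.2, 1.4, 0.0])
hand = np.array([0.25, 1.2, 0.35])
print(elbow_position(shoulder, hand,
                     body_right=np.array([1.0, 0.0, 0.0]),
                     body_down=np.array([0.0, -1.0, 0.0])))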

Immersion and coherence in a stressful virtual environment

We report on the design and results of two experiments investigating Slater's Place Illusion (PI) and Plausibility Illusion (Psi) in a virtual visual cliff environment. PI (the illusion of being in a place) and Psi (the illusion that the depicted events are actually happening) were proposed by Slater as orthogonal components of virtual experience which contribute to realistic response in a VE. To that end, we identified characteristics of a virtual reality experience that we expected to influence one or the other of PI and Psi. We designed two experiments in which each participant experienced a given VE in one of four conditions chosen from a 2×2 design: high or low levels of PI-eliciting characteristics (that is, immersion) and high or low levels of Psi-eliciting characteristics. Following Skarbez, we use the term "coherence" for those characteristics which contribute to Psi, parallel to the use of "immersion" for characteristics that contribute to PI. We collected both questionnaire-based and physiological metrics. Several existing presence questionnaires could not reliably distinguish the effects of PI from those of Psi. They did, however, indicate that high levels of PI-eliciting and Psi-eliciting characteristics together result in higher presence compared to any of the other three conditions. This suggests that "breaks in PI" and "breaks in Psi" belong to a broader category of "breaks in experience," any of which result in a degraded user experience. Participants' heart rates, however, responded markedly differently in the two Psi conditions; no such difference was observed across the PI conditions. This indicates that a VE that exhibits unusual or confusing behavior can cause stress in a user that affects physiological responses, and that one must take care to eliminate such confusing behaviors if one is using physiological measurement as a proxy for subjective experience in a VE.

The physical-virtual table: exploring the effects of a virtual human's physical influence on social interaction

In this paper, we investigate the effects of the physical influence of a virtual human (VH) in the context of face-to-face interaction in augmented reality (AR). In our study, participants played a tabletop game with a VH, in which each player takes a turn and moves their own token along the designated spots on the shared table. We compared two conditions as follows: the VH in the virtual condition moves a virtual token that can only be seen through AR glasses, while the VH in the physical condition moves a physical token as the participants do; therefore the VH's token can be seen even in the periphery of the AR glasses. For the physical condition, we designed an actuator system underneath the table. The actuator moves a magnet under the table which then moves the VH's physical token over the surface of the table. Our results indicate that participants felt higher co-presence with the VH in the physical condition, and participants assessed the VH as a more physical entity compared to the VH in the virtual condition. We further observed transference effects when participants attributed the VH's ability to move physical objects to other elements in the real world. Also, the VH's physical influence improved participants' overall experience with the VH. We discuss potential explanations for the findings and implications for future shared AR tabletop setups.

With a little help from a holographic friend: the OpenIMPRESS mixed reality telepresence toolkit for remote collaboration systems

Remote mixed reality (MR) collaboration systems allow for multimodal, real-time support from remote experts. We present our open toolkit that provides a flexible end-to-end solution for building such systems using off-the-shelf hardware. From related work, three core design aspects have been identified: 1) the independence of the viewpoint that the visitor (the remote expert) can take in relation to the position and viewpoint of the visitee, 2) the immersiveness of the presentation technology for visitor and visitee, and 3) the extent to which the visitor's body is represented in the visitee's environment. This paper describes the implementation of our system, which includes these aspects. In a study aimed at validating whether we implemented these core aspects to good effect, conducted with a collaborative puzzle application built with our toolkit, we examine how variations of these aspects contribute to usability, performance and social presence related metrics.

Training in IVR: investigating the effect of instructor design on social presence and performance of the VR user

We investigate instructor representations (IRs) in the context of virtual trainings with head-mounted displays (HMDs). Despite the recently increased industry and research focus on virtual training in immersive virtual reality (IVR), the effect of IRs on the performer (the VR user) has received little attention. We present the results of a study (N=33) evaluating the effect of three IRs - webcam, avatar and sound-only - on social presence (SP) and performance (PE) of the VR user during task completion. Our results show that the instructor representation has an effect on SP and that, contrary to our assumption based on prior work, it affects performance negatively.

SESSION: VR environment

Am I in the theater?: usability study of live performance based virtual reality

Duplicating the audience experience of an art performance with VR technology is a promising VR application, which is considered to provide a better viewer experience than conventional video. As various forms of art performance are recorded by panoramic cameras and broadcast on the Internet, the impact of this new VR-based medium on viewers needs to be systematically studied. In this work, a two-level usability framework is proposed, which combines the traditional concepts of presence with the quality evaluation of art performances, aiming to systematically study the usability of such VR applications. Both a conventional video and a panoramic video of a theatre performance were captured simultaneously, and were replayed to two groups of viewers in a cinematic setup and through an HMD, respectively. Psychological measurement methods, including a questionnaire and an interview, as well as psychophysical measurement methods, including EEG and motion capture, were used in the study. The results show that the VR application duplicates the live performance better by providing a higher sense of presence, higher engagement levels, and a stronger desire to see the live performance. For visually intensive performance content, the new VR-based medium can provide a better user experience. The future development of new media forms based on the panoramic video technique could benefit from this work.

Discrete scene rotation during blinks and its effect on redirected walking algorithms

Moving through a virtual environment (VE) by real walking is beneficial to user immersion, feeling of presence and wayfinding. However, the available physical spaces are of limited size and usually much smaller than the VE. One solution to this problem is using redirection techniques (RDTs). While the focus of existing research has been mostly on continuous RDTs, work on discrete RDTs is still limited.

In this paper, we present our research results on the discrete rotation of a virtual scene during walking. A study with 14 subjects was conducted to identify the detection threshold of the scene rotation in two conditions: during blinking and when the eyes are open. Results showed that, on average, users failed to detect a scene rotation of 9.1 degrees during blinking, as compared to 2.4 degrees when the eyes are open. Simulations were then performed to investigate the effects of incorporating discrete scene rotation during blinks into existing algorithms such as steer-to-center and steer-to-orbit when different predefined paths are followed. Results showed that, on average, the number of resets is reduced by 13%, and the minimum space required for encountering no reset is reduced by 20%. A reset technique was also proposed and shown to give better performance than the existing two-one turn reset technique.
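
The way such thresholds plug into a steering algorithm can be sketched as follows: whenever a blink is detected, a discrete scene rotation of up to the blink threshold is injected to steer the user toward the center of the tracked space; otherwise rotations stay below the eyes-open threshold. The steering logic itself is a simplified assumption, not the paper's steer-to-center implementation.

# Sketch: threshold-capped discrete rotation toward the tracked-space center.
import numpy as np

BLINK_THRESHOLD_DEG = 9.1       # average detection threshold during a blink
OPEN_EYES_THRESHOLD_DEG = 2.4   # average detection threshold with eyes open

def desired_correction_deg(user_pos, user_heading_deg, center=np.zeros(2)):
    """Signed angle between the user's heading and the direction to the center."""
    to_center = center - user_pos
    target = np.degrees(np.arctan2(to_center[1], to_center[0]))
    return (target - user_heading_deg + 180.0) % 360.0 - 180.0

def rotation_gain_deg(user_pos, user_heading_deg, blink_detected):
    """Scene rotation to apply for this frame/event, capped by detectability."""
    error = desired_correction_deg(user_pos, user_heading_deg)
    limit = BLINK_THRESHOLD_DEG if blink_detected else OPEN_EYES_THRESHOLD_DEG
    return float(np.clip(error, -limit, limit))

# Example: user at (1.5, 0.5) m heading along +x, blink just detected.
print(rotation_gain_deg(np.array([1.5, 0.5]), 0.0, blink_detected=True))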

The effect of chair type on users' viewing experience for 360-degree video

The consumption of 360-degree videos with head-mounted displays (HMDs) is increasing rapidly. A large number of HMD users watch 360-degree videos at home, often on non-swivel seats; however, videos are frequently designed to require the user to turn around. This work explores how the difference in users' chair type might influence their viewing experience. A between-subject experiment was conducted with 41 participants. Three chair conditions were used: fixed, half-swivel and full-swivel. A variety of measures were explored using eye-tracking, questionnaires, tasks and semi-structured interviews. Results suggest that the fixed and half-swivel chairs discouraged exploration for certain videos compared with the full-swivel chair. Additionally, participants in the fixed chair had worse spatial awareness and greater concern about missing something for certain videos than those in the full-swivel chair. No significant differences were found in terms of incidental memory, general engagement and simulator sickness among the three chair conditions. Furthermore, thematic analysis of post-experiment interviews revealed four themes regarding the restrictive chairs: physical discomfort, difficulty following moving objects, reduced orientation and guided attention. Based on the findings, practical implications, limitations and future work are discussed.

Data-driven modeling of group entitativity in virtual environments

We present a data-driven algorithm to model and predict the socio-emotional impact of groups on observers. Psychological research finds that highly entitative (i.e., cohesive and uniform) groups induce threat and unease in observers. Our algorithm models realistic trajectory-level behaviors to classify and map the motion-based entitativity of crowds. This mapping is based on a statistical scheme that dynamically learns pedestrian behavior and computes the resultant entitativity-induced emotion through group motion characteristics. We also present a novel interactive multi-agent simulation algorithm to model entitative groups and conduct a VR user study to validate the socio-emotional predictive power of our algorithm. We further show that model-generated high-entitativity groups do induce more negative emotions than low-entitativity groups.

Hybrid orbiting-to-photos in 3D reconstructed visual reality

Virtually navigating through photos from a 3D image-based reconstruction has recently become very popular in many applications. In this paper, we consider a particular virtual travel maneuver that is important for this type of virtual navigation---orbiting to photos that can see a point-of-interest (POI). The main challenge with this particular type of orbiting is how to give appropriate feedback to the user regarding the existence and information of each photo in 3D while allowing the user to manipulate three degrees-of-freedom (DoF) for orbiting around the POI. We present a hybrid approach that combines features from two baselines---proxy plane and thumbnail approaches. Experimental results indicate that users rated our hybrid approach more favorably for several qualitative questionnaire statements, and that the hybrid approach is preferred over both baselines for outdoor scenes.

Automatic transfer of musical mood into virtual environments

This paper presents a method that automatically transforms a virtual environment (VE) according to the mood of input music. We use machine learning to extract a mood from the music. We then select images exhibiting the mood and transfer their styles to the textures of objects in the VE photorealistically or artistically. Our user study results indicate that our method is effective in transferring valence-related aspects, but not arousal-related ones. Our method can still provide novel experiences in virtual reality and speed up the production of VEs by automating its procedure.

Step aside: an initial exploration of gestural input for lateral movement during walking-in-place locomotion

Walking-in-place (WIP) techniques provide users with a relatively natural way of walking in virtual reality. However, previous research has primarily focused on WIP during forward movement and tasks involving turning. Thus, little is known about what gestures to use in combination with WIP in order to enable sidestepping. This paper presents two user studies comparing three different types of gestures based on movement of the hip, leaning of the torso, and actual sidesteps. The first study focuses on purely lateral movement while the second involves both forward and lateral movement. The results of both studies suggest that leaning yielded significantly more natural walking experiences and this gesture also produced significantly less positional drift.

SESSION: Modality

Haptic around: multiple tactile sensations for immersive environment and interaction in virtual reality

In this paper, we present Haptic Around, a hybrid haptic feedback system which utilizes a fan, a hot air blower, a mist creator and a heat light to recreate multiple tactile sensations in virtual reality, enhancing the immersive environment and interaction. The system consists of a steerable haptic device rigged above the user's head and a handheld device that also provides haptic feedback, so that tactile sensations can be delivered to users simultaneously within a 2 m x 2 m space. The steerable haptic device can enhance the immersive environment by providing full-body experiences, such as heat in the desert or cold on a snowy mountain. Additionally, the handheld device can enhance immersive interaction by providing partial-body experiences, such as heating an iron or quenching a hot iron. With our system, users can perceive visual, auditory and haptic feedback while moving around in the virtual space and interacting with virtual objects. In our study, the results showed the potential of the hybrid haptic feedback system, with participants rating enjoyment, realism, quality and immersion higher than in the other condition.

Can we perceive changes in our moving speed: a comparison between directly and indirectly powering the locomotion in virtual environments

Many categories of the illusion of self-motion have been widely studied, often with the support of virtual reality. However, the effects of directly versus indirectly powering the movement on the ability to perceive changes in moving speed, and their relationship with sensory feedback, have not been investigated before. In this paper, we present the results of our user study on the difference in perceiving changes in moving speed between two movement techniques: "pedaling" and "throttling". We also explore the effects of different velocity gains, accelerations and airflow speeds, and their interactions with the movement techniques, on users' perception of speed changes, in addition to user performance and perception. We built a bike simulator that supports both movement techniques and provides sensory feedback. In general, "pedaling" made users more likely to perceive changes in moving speed than "throttling".

HFX studio: haptic editor for full-body immersive experiences

Current virtual reality systems enable users to explore virtual worlds, fully embodied in avatars. This new type of immersive experience requires specific authoring tools. The traditional ones used in the movie and video game industries have been modified to support immersive visual and audio content. However, few solutions exist for editing haptic content, especially when the user's whole body is involved. To tackle this issue we propose HFX Studio, a haptic editor based on haptic perceptual models. Three models, for pressure, vibration and temperature, were defined to allow the spatialization of haptic effects on the user's body. These effects can be designed directly on the body (egocentric approach), or specified as objects of the scene (allocentric approach). The perceptual models are also used to describe the capabilities of haptic devices. This way the created content is generic, and haptic feedback is rendered on the available devices. The concept has been implemented with the Unity® game engine, a tool already used in VR production. A qualitative pilot user study was conducted to analyze the usability of our tool with expert users. Results show that editing haptic feedback is intuitive for these users.
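
The device-independent authoring idea implies a data model roughly like the following: effects are described against perceptual modalities and body or scene anchors, devices declare what they can render, and a mapper pairs them at runtime. The class names, fields and assignment rule are illustrative assumptions, not the actual HFX Studio data model.

# Sketch: generic haptic effects mapped onto available devices.
from dataclasses import dataclass

@dataclass
class HapticEffect:
    modality: str          # "pressure" | "vibration" | "temperature"
    body_region: str       # e.g. "right_hand" (egocentric), or None
    scene_anchor: str      # e.g. "campfire" (allocentric), or None
    intensity: float       # normalized 0..1 against the perceptual model
    duration_s: float

@dataclass
class DeviceCapability:
    modality: str
    body_regions: tuple    # regions this device can actuate
    max_intensity: float

def assign(effect, devices):
    """Pick the first device able to render the effect, clamping its intensity."""
    for dev in devices:
        if dev.modality == effect.modality and effect.body_region in dev.body_regions:
            return dev, min(effect.intensity, dev.max_intensity)
    return None, 0.0

glove = DeviceCapability("vibration", ("left_hand", "right_hand"), 1.0)
vest = DeviceCapability("pressure", ("chest", "back"), 0.7)
effect = HapticEffect("vibration", "right_hand", None, 0.6, 0.25)
print(assign(effect, [glove, vest]))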

The impact of fear of the sea on working memory performance: a research based on virtual reality

The sea has been shown to cause fear in people when it reaches great depths, especially in those who have thalassophobia. Many people have to work at sea, yet almost no research on the influence of fear of the sea on cognition has been carried out. This study explores the impact of fear of the sea, induced by immersive virtual reality, on working memory, a cognitive system with limited capacity. Participants were required to complete an n-back working memory task at three difficulty levels in a non-emotional environment and an undersea environment, both presented in virtual reality. Pupil diameter changes were recorded along with task performance. In addition to reaction times and accuracy (correctly pressing a button in response to targets), the two task performance indices used in most research, commission errors (incorrectly pressing a button in response to non-targets) and omission errors (incorrectly failing to press a button in response to targets) were also differentiated. The results of the study indicated that the virtual undersea environment did induce fear. As for task performance, while performance on the low-level task did not differ much between the two environments, fear of the sea increased the accuracy of the medium-level n-back task but decreased that of the high-level n-back task. The result for omission errors was just the opposite, and commission errors increased at both levels of the task. The findings, including the positive role of a moderate level of fear of the sea in working memory task performance, are meaningful for future cognitive work at sea.

Investigating the reason for increased postural instability in virtual reality for persons with balance impairments

The objective of this study is to investigate how different visual components of Virtual Reality (VR), such as field of view, frame rate, and display resolution affect postural stability in VR. Although previous studies identified these visual components as some of the primary factors that differ significantly in VR from reality, the effect of each component on postural stability is yet unknown. While most people experience postural instability in VR, it is worse for people with balance impairments (BIs). This may be because they depend more on their visual cues to maintain postural stability. We conducted a study with ten people with balance impairments due to Multiple Sclerosis (MS) and seven people without balance impairments to investigate the effect of different visual components on postural stability. In each condition, we varied one of the visual components and kept all other components fixed. Each participant explored the virtual environment (VE) in a controlled fashion to make sure that the effect of the visual components was consistent for all participants. Results from our study suggest that for people with BIs, decreased field of view and decreased frame rate have significant negative effects on postural stability, but the effect of display resolution is inconclusive. However, for people without BIs, there were no significant differences for any of the visual components. Therefore, VR systems targeting people with balance impairments should focus on improving field of view and frame rate before improving display resolution.

In-pulse: inducing fear and pain in virtual experiences

Researchers have attempted to increase the realism of virtual reality (VR) applications in many ways. Combinations of visual, auditory and haptic feedback have successfully simulated experiences in VR; however, multimedia content may also stimulate emotions. In this paper, we pay particular attention to negative emotions that may be perceived in such experiences (e.g., fear). We hypothesized that volunteering, visual, mechanical, and electrical feedback may induce negative emotional feedback in users. In-Pulse is a novel system and approach to explore the potential of bringing this emotional feedback to users. We designed a head-mounted display (HMD) combined with mechanical and electrical muscle stimulation (EMS) actuators. A user study was performed to explore the effect of our approaches in combination with VR content. The results suggest that mechanical actuators and EMS can improve virtual experiences.

Investigating different modalities of directional cues for multi-task visual-searching scenario in virtual reality

In this study, we investigated and compared the effectiveness of visual, auditory, and vibrotactile directional cues on multiple simultaneous visual-searching tasks in an immersive virtual environment. Effectiveness was determined by the task-completion time, the range of head movement, the accuracy of the identification task, and the perceived workload. Our experiment showed that the on-head vibrotactile display can effectively guide users towards virtual visual targets, without affecting their performance on the other simultaneous tasks, in the immersive VR environment. These results can be applied to numerous applications (e.g. gaming, driving, and piloting) in which there are usually multiple simultaneous tasks, and the user experience and performance could be vulnerable.

Effects of haptic texture rendering modalities on realism

In haptics, two major modalities, force and vibration, are used to model real textures and recreate them in a virtual environment. This paper compares the perceptual advantages and disadvantages of the two approaches through a user study. In particular, the perceptual similarity of a virtual texture to a real texture is rated using five criteria: geometry, roughness, hardness, friction and overall similarity. These categorical comparisons allowed us to provide general guidelines for the appropriate use of the two approaches.

DEMONSTRATION SESSION: Demo abstracts

A lightweight and efficient system for tracking handheld objects in virtual reality

While the content of virtual reality (VR) has grown explosively in recent years, progress in designing user-friendly control interfaces in VR remains slow. The most commonly used devices, such as gamepads or controllers, have a fixed shape and weight, and thus cannot provide realistic haptic feedback when interacting with virtual objects in VR. In this work, we present a novel and lightweight tracking system for manipulating handheld objects in VR. Specifically, our system can effortlessly synchronize the 3D pose of arbitrary handheld objects between the real world and VR in real time. The tracking algorithm is simple, leveraging a Leap Motion and an IMU sensor to track the object's location and orientation, respectively. We demonstrate the effectiveness of our system with three VR applications that use a pencil, a ping-pong paddle, and a smartphone as control interfaces to provide users with a more immersive VR experience.

A low-cost motion platform with balance board

We propose a low-cost motion platform which reduces the load on the actuator by using a spherical body, like a balance board. In our method, by supporting the movable base with a spherical body, most of the load is transferred to the ground via the fixed base, and the center of gravity of the load is lowered by attaching the spherical support to the movable base. As a result, the moment that increases during rotation of the movable base is reduced, and the load applied to the actuator can be greatly reduced with a simple structure and without the need for complex control.

A low-cost omni-directional VR walking platform by thigh supporting and motion estimation

We propose a low-cost omni-directional VR walking platform based on thigh supporting and motion estimation. Specifically, the platform supports the user's thighs in the walking direction, and the user makes a stepping motion while leaning in the walking direction, making it possible to shift the center of gravity over the foot sole as in actual walking. Moreover, our platform estimates the foot movement, which is constrained by the thigh-supporting part, using load cells around the user's thighs, and renders the view in the HMD according to the estimated foot movement. As a result, our platform enables users to experience a more realistic walking sensation at low cost.

AR DeepCalorieCam V2: food calorie estimation with CNN and AR-based actual size estimation

In most cases, estimated calories are simply associated with the estimated food categories, or with the relative size compared to the standard size of each food category, which is usually provided by the user manually. In addition, in the case of calorie estimation based on the amount of a meal, a user conventionally needs to register a reference object of known size in advance and take the food photo with that registered reference object. In this demo, we propose a new approach to food calorie estimation with a CNN and Augmented Reality (AR)-based actual size estimation. Using the Apple ARKit framework, we can measure the actual size of the meal area by acquiring real-world coordinates as three-dimensional vectors; we implemented this in a demo app. As a result, by measuring the meal area directly, the size can be calculated more accurately than in the previous method, and the calorie estimation accuracy has improved.
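
The area-based estimate can be sketched as follows: the corner points of the meal region are obtained as real-world 3D coordinates (e.g., from AR hit tests), the polygon area is computed, and the result is scaled by a per-category calorie density. The densities, helper names and example values are illustrative assumptions, not the actual app's calibration.

# Sketch: calories from an AR-measured meal footprint.
import numpy as np

CALORIES_PER_CM2 = {"rice": 3.4, "salad": 0.8, "curry": 2.6}   # illustrative

def polygon_area_m2(points):
    """Area of a (roughly planar) 3D polygon given as an (N, 3) array in meters."""
    p = np.asarray(points, dtype=float)
    total = np.zeros(3)
    for i in range(1, len(p) - 1):
        total += np.cross(p[i] - p[0], p[i + 1] - p[0])
    return 0.5 * np.linalg.norm(total)

def estimate_calories(points, category):
    area_cm2 = polygon_area_m2(points) * 1e4
    return area_cm2 * CALORIES_PER_CM2[category]

# Example: four corners of a meal region measured on the table plane.
corners = [(0.00, 0.0, 0.00), (0.12, 0.0, 0.00), (0.12, 0.0, 0.12), (0.00, 0.0, 0.12)]
print(round(estimate_calories(corners, "rice")))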

Automatic 3D modeling of artwork and visualizing audio in an augmented reality environment

In recent years, traditional art museums have begun to use AR/VR technology to make visits more engaging and interactive. This paper details an application which provides features designed to be immediately engaging and educational to museum visitors within an AR view. The application superimposes an automatically generated 3D representation over a scanned artwork, along with the work's authorship, title, and date of creation. A GUI allows the user to exaggerate or decrease the depth scale of the 3D representation, as well as to search for related works of music. Given this music as audio input, the generated 3D model will act as an audio visualizer by changing depth scale based on input frequency.

Design-led 3D visualization of nanomedicines in virtual reality

Nanomedicines are a promising addition to the arsenal of new cancer therapies. During development, scientists must precisely track their distribution in the body, a task that can be severely limited by traditional 2D displays. With its stereoscopic capacity and real-time interactivity, virtual reality (VR) provides an encouraging platform to accurately visualize dynamic 3D volumetric data. In this research, we develop a prototype application to track nanomedicines in VR. This platform has the potential to enhance data assessment, comprehension and communication in preclinical research which may ultimately influence the paradigm of future clinical protocols.

EXG wearable human-machine interface for natural multimodal interaction in VR environment

Current assistive technologies are complicated, cumbersome, and not portable, and users still need to apply extensive fine motor control to operate them. Brain-Computer Interfaces (BCIs) could provide an alternative approach to solving these problems. However, current BCIs have low classification accuracy and require tedious human-learning procedures. The use of complicated Electroencephalogram (EEG) caps, where many electrodes must be attached to the user's head to identify imaginary motor commands, brings a lot of inconvenience. In this demonstration, we showcase EXGbuds, a compact, non-obtrusive, and comfortable wearable device with non-invasive biosensing technology. People can comfortably wear it for long hours without tiring. With our machine learning algorithms, we can identify various eye movements and facial expressions with over 95% accuracy, so that people with motor disabilities could have fun playing VR games totally hands-free.

Future-mine VR as narrative decision making tool

This work presents a narrative story of a Future Mine scenario that uses Virtual Reality as a medium to replace the traditional spreadsheet-based policy-making frameworks currently widely used in government agencies for decision making. The scenario envisions a user exploring an underground mine where extraction processes have been almost fully automated and the environment is constantly monitored by a variety of modern and futuristic sensors. Storytelling in VR is explored to present novel application scenarios for sensing technologies and to facilitate a better understanding of the context in which they will be used. Furthermore, the experience is translated into informed decision making.

GravityCup: a liquid-based haptics for simulating dynamic weight in virtual reality

During interaction in a virtual environment, haptic displays provide users with sensations such as vibration, texture simulation, and electrical muscle stimulation. However, as humans perceive object weights naturally in daily life, objects picked up in virtual reality feel unrealistically light. To create an immersive experience in virtual reality that includes weight sensation, we propose GravityCup, a liquid-based haptic feedback device that simulates realistic object weights and inertia when moving virtual handheld objects. In different scenarios, GravityCup uses liquid to provide users with a dynamic weight sensation experience that enhances interaction with handheld objects in virtual reality.

Hand motion prediction for just-in-time thermo-haptic feedback

This paper presents two innovative design solutions for thermal feedback displays in virtual environments. The first solution aims to eliminate or reduce the delay between the user's action and the onset of thermal feedback by using machine learning for user motion prediction. The second is the design of a compact but efficient water-cooling system needed to provide cold sensations using Peltier elements. The presented thermal display is wearable and battery-powered.

Immersive auditory display system 'sound cask': three-dimensional sound field reproduction system based on the boundary surface control principle

The sound cask was developed to realize a perfect 3D auditory display that recreates the 3D sound field around the listener's head exactly as in the primary sound field, based on the boundary surface control (BoSC) principle.

If we consider the sound pressure p(s) within a region V enclosed by a surface S, the Kirchhoff-Helmholtz integral equation is given by

p(r) = ∫_S [ G(r|s) ∂p(s)/∂n − p(s) ∂G(r|s)/∂n ] dS,   r ∈ V,

where n denotes the normal vector on S. In the conventional theory of sound field reproduction, the Green's function G and its gradient ∂G/∂n are interpreted as monopole and dipole sound sources, respectively. Developing such sources is technically impossible, so perfect sound field reproduction was long thought to be a hopeless idea [2]. However, the BoSC principle [4] presented another interpretation of the Kirchhoff-Helmholtz integral equation, in which p and ∂p/∂n are the acoustic pressure and particle velocity, and G and ∂G/∂n are their coefficients, respectively. Thus, by designing the inverse system of the acoustic characteristics of the reproduction room, perfect sound field reproduction can be realized without such acoustic problems. The basic concept of sound field reproduction based on the BoSC principle is shown in Fig. 1. To record sound in the primary field, the fullerene (C80)-shaped microphone array (BoSC microphone) shown in Fig. 2 was developed. The BoSC microphone is also used to measure the impulse responses between all combinations of loudspeakers and microphones in the secondary sound field (IRs in Fig. 1), from which the inverse filter matrix is calculated [5].
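
As an aside, the inverse filter calculation mentioned above can be illustrated with a small sketch. The following is a minimal, hypothetical Python example (not the authors' implementation) that derives a Tikhonov-regularized frequency-domain inverse filter matrix from measured loudspeaker-to-microphone impulse responses; the array shapes, FFT length, and regularization constant are assumptions.

    # Hypothetical sketch: regularized frequency-domain inversion of a measured
    # loudspeaker-to-microphone transfer matrix, as one plausible way to obtain
    # an inverse filter matrix. Shapes and the regularization value are assumptions.
    import numpy as np

    def inverse_filter_matrix(irs, n_fft=8192, beta=1e-3):
        """irs: array of shape (n_mics, n_speakers, ir_length) holding measured
        impulse responses. Returns inverse filters of shape
        (n_speakers, n_mics, n_fft) in the time domain."""
        n_mics, n_speakers, _ = irs.shape
        # Transfer functions per frequency bin: H[:, :, f] is (n_mics, n_speakers)
        H = np.fft.rfft(irs, n=n_fft, axis=2)
        n_bins = H.shape[2]
        F = np.zeros((n_speakers, n_mics, n_bins), dtype=complex)
        for f in range(n_bins):
            Hf = H[:, :, f]
            # Tikhonov-regularized pseudoinverse: F = (H^H H + beta I)^-1 H^H
            F[:, :, f] = np.linalg.solve(
                Hf.conj().T @ Hf + beta * np.eye(n_speakers), Hf.conj().T)
        # Back to time domain; a circular shift would normally be applied to
        # make the filters causal (omitted here for brevity).
        return np.fft.irfft(F, n=n_fft, axis=2)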

Compared with other sound reproduction methods such as wave field synthesis (WFS) systems [1] or ambisonics systems [3], the sound cask has practical advantages in the following respects:

(1) Sound image along the depth direction can be controlled even in the vicinity of the head of the listener.

(2) The whole system can be easily moved into any place.

(3) A theoretically assured combination with a recording system - an 80-channel fullerene-shaped microphone array in our case - can be constructed.

The main characteristic of the BoSC system is its ability to reproduce a sound field not at isolated points but over a three-dimensional region. A listener can freely move their head, and the system provides high performance in terms of reproducing spatial information such as sound localization and sound distance. Based on these features, we propose the design of a sound cask as an example of a more effective application of the BoSC system. In designing a sound field reproduction system based on the BoSC principle, a space design suitable for inverse filter calculation is important, since the quality of these filters directly affects the overall performance of the system.

As a practical and reasonable compromise among these conditions, 96 loudspeakers are allocated inside the sound cask. The figure shows a picture of the sound cask as actually designed. In particular, a higher-grade loudspeaker unit (FOSTEX FX120) was adopted in the current version of the sound cask after several listening tests.

Indoor AR navigation using tilesets

This paper demonstrates the methodology and findings of creating an augmented reality navigation app that uses tilesets for navigation. It describes how the app was created: vector data is uploaded to MapBox, accessed in Unity through the MapBox API and map editor, and the camera input is then overlaid with the navigation path layer. The application was tested by creating multiple arbitrary navigation scenarios and checking them for various factors. The main finding of this research is that this navigation solution works better than GPS-based indoor navigation.

Interactive virtual exhibition: creating custom virtual art galleries using web technologies

This paper presents an immersive 3D virtual reality application accessed through the web that allows users to create their own custom virtual art galleries. The application allows users to select paintings based on a time range or country and then it dynamically generates the 3D virtual exhibit. Various features about the exhibit can be customized, such as the floor texture and wall color. Users can also save their exhibit, so it can be shared with others.

Real-time virtual brain aneurysm clipping surgery

We propose a fast, interactive real-time 3DCG deformable simulation prototype for preoperative virtual practice of brain aneurysm clipping surgery, controlled by Position Based Dynamics (PBD). Blood vessels are reconstructed from their center lines, connected to the brain by automatically generated thin threads ("virtual trabeculae"), and colored according to their automatically estimated dominant regions.

Tap-tap menu: body touching for virtual interactive menus

Virtual and mixed reality make it possible to view and interact with virtual objects in 3D space. However, where to position menus in 3D space and how to interact with them are frequent problems. Existing studies developed methods of displaying a menu on the hand or arm. In this study, we propose a menu system that appears at various body parts. Placing the menu on the body enables the user to operate menus comfortably through kinesthesia and to perceive tactile feedback. Furthermore, by displaying the menu not only on the hands and arms but also on the upper legs and the abdomen, the menu display area can be expanded. In this study, we developed a modeling application and introduced the proposed menu design for that application.

TransFork: using olfactory device for augmented tasting experience with video see-through head-mounted display

When people eat, taste is very complex and easily influenced by other senses: visual, olfactory, and haptic cues, and even past experiences, can affect human perception, which in turn creates more taste possibilities. We present TransFork, an eating tool with olfactory feedback that augments the tasting experience with a video see-through head-mounted display. Additionally, we designed a recipe via preliminary experiments to find a taste conversion formula, which could enhance the flavor of foods and change the user's perception of the food. In this demonstration, we prepare a mini feast of bite-sized fruit; participants use the TransFork to eat food A while smelling the scent of food B stored in the aromatic box via airflow guiding. Before they deliver the food to their mouth, the head-mounted display overlays the color of food B on food A using the QR code on the aromatic box. With these augmented reality techniques and the recipe, the tasting experience can be augmented or enhanced, which is a promising and playful approach to eating.

Using mixed reality for promoting brand perception

Mixed reality offers an immersive and interactive experience through the use of head-mounted displays and in-air gestures. Visitors can discover additional content virtually, on top of existing physical items. For a small-scale exhibition at a cafe, we developed a Microsoft HoloLens application to create an interactive experience on top of a collection of historic physical items. Through public experiences of this exhibition, we received positive feedback on our system and found that it also helped to promote brand perception. In this demo, visitors can experience a mixed reality experience similar to the one shown at the exhibition.

Virtual reality environment to support activity in the real world: a case of working environment using microscope

This manuscript introduces a virtual reality (VR) environment to support research activity in the real world. We constructed a prototype to support intellectual activity in the field of life sciences using VR. In the prototype, users can operate a real microscope from the virtual space, along with other useful equipment such as huge displays, and analyze images carefully and intuitively using an immersive visualizer seamlessly integrated into the environment. We believe that our prototype is promising for expanding the potential of VR applications.

VirtualHaus: a collaborative mixed reality application with tangible interface

We present VirtualHaus, a collaborative mixed reality application allowing two participants to recreate Mozart's apartment as it used to be by interactively placing furniture. Each participant has a different role and therefore uses a different application: the visitor uses an immersive virtual reality application, while the supervisor uses an augmented reality application. The two applications are wirelessly synchronised and display the same information with distinct viewpoints and tools.

Visualizing and exploring OSGi-based software architectures in augmented reality

This demo presents an immersive augmented reality solution for visualizing OSGi-based software architectures. By employing an island metaphor, we map abstract software entities to tangible real-world objects. Using advanced input modalities, such as voice and gesture control, our approach allows for interactive exploration and examination of complex software systems.

VRTe do: the way of the virtual hand

We present a Virtual Reality training system for Karate kata based on motion capture and Virtual Reality technologies. The system is built as a game in which the player needs to learn and repeat different kata to progress and reach the next level. The levels are represented by obi (belts) of different colors, corresponding to real Karate obi. We capture the player's motion with a Kinect camera and enable interaction with game objects. A database is integrated into the game so that different players can use it to save and track their training progress.

POSTER SESSION: Poster abstracts

3D model augmentation using depth information in an AR environment

This paper proposes a method for augmenting a 3D CAD model onto an image using a mobile device. An image and its depth map of a target are obtained using a Phab 2 mobile device. The image is processed to extract line segments of the target. Next a rectangular planar region of the target is selected, which is then refined using the depth data. The chosen region is then compared with the CAD model, and a planar face in the CAD model that matches the selected region is obtained using various geometric properties. Using the matching planar faces, a pose of the camera is computed, which is then used for augmenting the CAD model onto the image correctly. The test results demonstrate that the method can be used for real fabrication in a complex environment.

A binocular stereo effect parameter calculator towards visual comfort

Binocular stereoscopic content may cause visual discomfort due to inappropriate parameter settings in production and display. In order to avoid this issue, the authors designed a calculator that provides optimal parameters for 3D stereoscopic shooting and screening. It could help 3D content creators adjust the stereo effect parameters.

A real-time golf-swing training system using sonification and sound image localization

There are real-time training systems that teach the correct golf swing form by providing visual feedback to users. However, real-time visual feedback requires users to watch a display during their motion, which leads to incorrect posture. This paper proposes a real-time golf-swing training system using sonification and sound image localization. The system provides real-time audio feedback based on the difference between pre-recorded model data and real-time user data, consisting of the roll, pitch, and yaw angles of the golf club shaft. The system also uses sound image localization so that the user hears the audio feedback from the direction of the club head. The user can thus perceive the current posture of the club without moving their gaze.
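
To make the feedback mapping concrete, the following is a hypothetical Python sketch (not the authors' system) of how an angular difference between model and live club-shaft data could drive a tone's pitch and loudness, with stereo panning standing in for sound image localization; all mappings and constants are invented for illustration.

    # Illustrative sketch: map the difference between model and live club-shaft
    # angles to audio parameters, and pan the sound toward the club-head side.
    import math

    def sonify_swing(model_angles, user_angles, yaw_deg):
        """model_angles / user_angles: (roll, pitch, yaw) in degrees.
        Returns (frequency_hz, amplitude, stereo_pan in [-1, 1])."""
        error = math.sqrt(sum((m - u) ** 2
                              for m, u in zip(model_angles, user_angles)))
        # Larger deviation -> higher pitch and louder tone (simple linear map).
        frequency_hz = 440.0 + 8.0 * error
        amplitude = min(1.0, error / 45.0)
        # Crude sound-image localization: pan follows the club yaw angle.
        pan = max(-1.0, min(1.0, math.sin(math.radians(yaw_deg))))
        return frequency_hz, amplitude, pan

    # Example: the user lags the model by 10 degrees of pitch, club pointing right.
    print(sonify_swing((0, 30, 90), (0, 20, 90), yaw_deg=90))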

Acquiring short range 4D light transport with synchronized projector camera system

Light interacts with a scene in various ways. For scene understanding, the light transport is useful because it describes the relationship between an incident light ray and the result of the interaction. Our goal is to acquire the 4D light transport between a projector and a camera, focusing on direct and short-range transport that includes the effects of diffuse reflections, subsurface scattering, and inter-reflections. Acquiring the full 4D light transport is challenging because it requires a large number of measurements. We propose an efficient method to acquire short-range light transport, which is dominant in general scenes, using a synchronized projector-camera system. We show the transport profiles of various materials, including uniform and heterogeneous subsurface scattering.

An AR system for artistic creativity education

Creativity and innovation training is the core of art education. Modern technology provides more effective tools to help students develop artistic creativity. In this paper, we propose employing augmented reality technology to assist artistic creativity education. We first analyze the inefficiency of traditional artistic creation training. We then introduce our AR-based smartphone app in technical detail and explain how it can accelerate artistic creativity training. We finally show three examples created with our AR app to demonstrate the effectiveness of the proposed method.

An evaluation of smartphone-based interaction in AR for constrained object manipulation

In Augmented Reality, interaction with the environment can be achieved with a number of different approaches. In current systems, the most common are hand and gesture inputs. However, experimental applications have also integrated smartphones as intuitive interaction devices and demonstrated great potential for different tasks. One particular task is constrained object manipulation, for which we conducted a user study comparing standard gesture-based approaches with touch-based interaction via smartphone. We found that the touch-based interface is significantly more efficient, although gestures are subjectively more accepted. From these results we draw conclusions on how smartphones can be used to realize modern interfaces in AR.

Analysis of the R-V dynamics illusion behavior in terms of auditory stimulation

The R-V Dynamics illusion is a phenomenon in which weight perception is changed by superimposing a CG case with a movable portion (CG) onto a real object using mixed reality technology. Previous studies confirmed that weight perception is affected by the size/volume of the CG, and that a virtual collision sound between the case and the movable portion could also be a cause of this illusion. However, those studies applied only one virtual collision sound. Therefore, in this study, we consider the influence of the physical characteristics conveyed by the virtual collision sound, such as the size and weight of the movable object. As a result, we confirmed that weight perception changes according to the virtual collision sound, and that participants perceived the real object as lighter when the virtual collision sound corresponded to a smaller and lighter object.

AR navigation solution using vector tiles

This study discusses the results and findings of an augmented reality navigation app created using vector data uploaded to online mapping software for indoor navigation. The main objective of this research is to address the issues of indoor navigation solutions that rely on GPS signals, as these signals are sparse inside buildings. The data was uploaded in the form of GeoJSON files to MapBox, which relayed the data to the app through an API in the form of tilesets. The application converted the tilesets into a miniaturized map, calculated the navigation path, and then overlaid that navigation line onto the floor via the camera.

Once the project setup was completed, multiple navigation paths were tested numerous times between different sync points and destination rooms. Their accuracy, ease of access, and several other factors, along with their issues, were recorded. The testing revealed that the navigation system was not only accurate despite the lack of GPS signal but also detected device motion precisely. Furthermore, the navigation system did not take much time to generate the navigation path, as the app processed the data tile by tile. The application was also able to accurately measure the ground plane along with the walls, overlaying the navigation line precisely. However, a few observations indicated that various factors affected the accuracy of the navigation, and testing revealed areas where major improvements can be made to both accuracy and ease of access.
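
For illustration, the following minimal Python sketch shows what an uploaded navigation path might look like as a GeoJSON LineString feature; the coordinates, property names, and file name are invented and not taken from the study.

    # Minimal sketch: write a navigation path as a GeoJSON LineString feature,
    # the kind of file that could be uploaded to MapBox as source data.
    import json

    nav_path = {
        "type": "FeatureCollection",
        "features": [{
            "type": "Feature",
            "properties": {"from": "sync_point_A", "to": "room_101"},
            "geometry": {
                "type": "LineString",
                # GeoJSON uses [longitude, latitude] ordering.
                "coordinates": [
                    [-83.0712, 42.3354],
                    [-83.0714, 42.3355],
                    [-83.0716, 42.3357],
                ],
            },
        }],
    }

    with open("indoor_nav_path.geojson", "w") as f:
        json.dump(nav_path, f, indent=2)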

Automatic 3D modeling of artwork and visualizing audio in an augmented reality environment

In recent years, traditional art museums have begun to use AR/VR technology to make visits more engaging and interactive. This paper details an application which provides features designed to be immediately engaging and educational to museum visitors within an AR view. The application superimposes an automatically generated 3D representation over a scanned artwork, along with the work's authorship, title, and date of creation. A GUI allows the user to exaggerate or decrease the depth scale of the 3D representation, as well as to search for related works of music. Given this music as audio input, the generated 3D model will act as an audio visualizer by changing depth scale based on input frequency.

Being them: presence of using non-human avatars in immersive virtual environment

This work examines the differences between the effects of using humanoid and non-humanoid avatars on the user's Illusion of Virtual Body Ownership (IVBO) and experience. We used three kinds of avatars: a bipedal one (human), a quadrupedal one (wolf), and one with serpentine motion (snake). The results show that users of non-humanoid avatars feel a stronger sense of change in their body. Users feel more proficient when using the humanoid avatar, but are more pleased with the non-humanoid avatars.

BoatAR: a multi-user augmented-reality platform for boat

Augmented Reality (AR) allows virtual objects to be projected while keeping an unblocked view of the physical world, which provides reference points and visibility of other people. The mixed scene offers an agile platform for communication and collaboration, especially for products that would be difficult or expensive to present otherwise. In the boating industry, high customization leaves dealers with high inventory costs, both financial and spatial. In this work, we present BoatAR, a multi-user AR boat configuration system designed to address these issues. A prototype system with a shared experience was implemented on HoloLens, demonstrated to a group of boat dealers, and received positive feedback. BoatAR provides an example of how a multi-user AR system could help in a conventional industry.

Chest compression simulator that presents vibrations at the moment of rib fracture: transition of learning effect of compression position over a month

In cardiopulmonary resuscitation (CPR), the compression position is considered to be one of the factors affecting survival rate. We developed a CPR simulator called RibFracture CPR, which notifies a trainee through haptics when they are compressing in an inappropriate position by simulating rib fracture. Our system is expected to help trainees to smoothly adjust themselves for an actual CPR situation once they have undergone an initial training phase using visual feedback equipment. In this study, we investigated the accuracy of compression position one month after subjects learned the correct compression position using RibFracture CPR, and we evaluated the sustainability of the learning effect.

Deep face rotation in the wild

Generating face images in various directions from a single image is useful for creating avatars in VR. In this paper, we introduce a new deep generative model that generates turnaround face images from an image via a latent code space with a parameter. The model was trained on a large-scale image dataset annotated with attributes but not including exact target images.

Designing dynamic aware interiors

We are pursuing a vision of reactive interior spaces that are aware of people's actions and transform according to changing needs. We envision furniture and walls that act as interactive displays and that shapeshift to the appropriate physical form, interactive visual content, and modality. This paper briefly describes our proposal based on our recent efforts toward realizing this vision.

Does automatic game difficulty level adjustment improve acrophobia therapy?: differences from baseline.

This paper presents the design and development of a Virtual Reality game for treating acrophobia, as well as a comparative study between the players' performance in the game, under two different conditions - one in which the difficulty levels are adjusted according to the subjects' biophysical data and one in which they are not. The results showed an improvement of the parameters correlated with fear level in the first experiment.

Dual-MR: interaction with mixed reality using smartphones

Mixed reality (MR) has changed how we see and interact with our world. While the current generation of MR head-mounted devices (HMDs) can generate high-quality visual content, interaction in most MR applications typically relies on in-air hand gestures, gaze, or voice. Although these interfaces are intuitive to learn, they can easily lead to inaccurate operations due to fatigue or environmental constraints. In this work, we present Dual-MR, a novel MR interaction system that i) synchronizes the MR viewpoints of the HMD and a handheld smartphone, and ii) enables precise, tactile, immersive, and user-friendly object-level manipulation through the smartphone's multi-touch input. In addition, Dual-MR allows multiple users to join the same MR coordinate system to facilitate collaboration in the same physical space, which further broadens its usability. A preliminary user study shows that our system clearly outperforms the conventional interface, which combines in-air hand gestures and gaze, in completion time for a series of 3D object manipulation tasks in MR.
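
As a rough illustration of object-level manipulation via multi-touch (not Dual-MR's actual implementation), the following Python sketch maps one-finger drags to translation and two-finger pinch/twist gestures to scale and rotation; the gesture mapping and gain values are assumptions.

    # Hedged sketch: convert phone touch input into transform updates for the
    # currently selected object. Gains and gesture semantics are invented.
    import math

    def apply_touch_gesture(obj, touches, prev_touches, pan_gain=0.002):
        """obj: dict with 'position' [x, y, z], 'yaw' (rad), and 'scale'.
        touches/prev_touches: list of (x, y) screen points for 1 or 2 fingers."""
        if len(touches) == 1 and len(prev_touches) == 1:
            dx = touches[0][0] - prev_touches[0][0]
            dy = touches[0][1] - prev_touches[0][1]
            obj["position"][0] += pan_gain * dx      # drag: translate in view plane
            obj["position"][1] -= pan_gain * dy
        elif len(touches) == 2 and len(prev_touches) == 2:
            def span(pts):
                return math.hypot(pts[0][0] - pts[1][0], pts[0][1] - pts[1][1])
            def angle(pts):
                return math.atan2(pts[1][1] - pts[0][1], pts[1][0] - pts[0][0])
            obj["scale"] *= span(touches) / max(span(prev_touches), 1e-6)  # pinch
            obj["yaw"] += angle(touches) - angle(prev_touches)             # twist
        return obj

    cube = {"position": [0.0, 0.0, 1.0], "yaw": 0.0, "scale": 1.0}
    cube = apply_touch_gesture(cube, [(110, 200)], [(100, 200)])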

Effect of accompanying onomatopoeia to interaction sound for altering user perception in virtual reality

Onomatopoeia refers to a word that phonetically imitates or resembles a sound, or depicts an event at hand. In languages like Korean and Japanese, it is used in everyday conversation to emphasize certain situations and enrich the prose. In this poster, we explore whether onomatopoeia, visualized and added to the usual sound feedback, can be leveraged to increase or alter the perceived realism of the sound feedback itself, and furthermore of the situation at hand in virtual reality. A pilot experiment compared the user's subjective perceived realism and experience under four conditions for presenting a simple physical interaction, accompanying it with: (1) just the "as-is" sound (baseline), (2) the "as-is" sound and onomatopoeia, (3) a representative sound sample (e.g., one for all collision conditions), and (4) a representative sound sample and onomatopoeia. Our pilot study found that onomatopoeia can alter and add to the perceived realism/naturalness of the virtual situation: the experiences with the representative sound plus onomatopoeia and with the "as-is" sound were rated similarly.

Effect of accompanying onomatopoeia with sound feedback toward presence and user experience in virtual reality

Onomatopoeia refers to a word that phonetically imitates a sound. It is often used in captions in comics or video as a way to dramatize, emphasize, exaggerate, and draw attention to the situation. In this paper we explore whether the use of onomatopoeia could bring about similar effects and improve the user experience in virtual reality. We present an experiment comparing users' subjective experiences and attentive performance in two virtual worlds, each configured in two test conditions: (1) sound feedback without onomatopoeia and (2) sound feedback with it. Our experiment found that moderate and strategic use of onomatopoeia can indeed help direct user attention, offer object affordance, and thereby enhance user experience and even the sense of presence and immersion.

Effect of change of head angle on visual horizontal plane

We have been investigating the visual horizontal plane when wearing an HMD. We previously investigated the change of the visual horizontal plane in the reclining position and the upright position, but it remains unclear whether the change in the subjective horizontal plane is due to the change of head angle or the change of posture. In this study, we investigate the effect of head angle on the subjective horizontal plane in the reclining and upright positions.

Effects of head-display lag on presence in the oculus rift

We measured presence and perceived scene stability in a virtual environment viewed with different head-to-display lag (i.e., system lag) on the Oculus Rift (CV1). System lag was added on top of the measured benchmark system latency (22.3 ms) for our visual scene rendered in OpenGL Shading Language (GLSL). Participants made active head oscillations in pitch at 1.0Hz while viewing displays. We found that perceived scene instability increased and presence decreased when increasing system lag, which we attribute to the effect of multisensory visual-vestibular interactions on the interpretation of the visual information presented.

Effects of low video latency between visual information and physical sensation in immersive environments

This study aims to investigate the impact on user performance when there is latency between the user's physical input to the system and the visual feedback. We developed a video latency control system that films the user's hand movements and controls the latency when displaying the video (standard deviation: 0.38 ms). The minimum latency of the system is 4.3 ms, which enables us to investigate performance in previously unexplored low-latency ranges. Using this system, we conducted experiments in which 20 subjects performed a pointing task based on Fitts' law to clarify the effect of video latency, particularly at low latencies. Experimental results showed that user performance begins to decrease when the latency exceeds 24.3 ms. This result can be applied to determine a standard limit for video latency in interactive video devices.
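
As background (not part of the paper), Fitts' law pointing tasks are commonly scored with the Shannon formulation of the index of difficulty and the resulting throughput, as in this small Python sketch.

    # Side sketch: standard Shannon formulation of Fitts' index of difficulty.
    import math

    def index_of_difficulty(distance, width):
        """ID in bits for a target of the given width at the given distance."""
        return math.log2(distance / width + 1.0)

    def throughput(distance, width, movement_time_s):
        """Throughput in bits per second for one pointing trial."""
        return index_of_difficulty(distance, width) / movement_time_s

    # Example: a 300 mm reach to a 30 mm target completed in 0.8 s.
    print(index_of_difficulty(300, 30), throughput(300, 30, 0.8))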

Estimation of distance between thumb and forefinger from hand dorsal image using deep learning

A three-dimensional virtual object can be manipulated by hand and finger movements with an optical hand-tracking device, which requires recognizing the posture of the hand. Conventional hand posture recognition is based on the three-dimensional coordinates of the fingertips and a skeletal model of the hand [1]. It is difficult for conventional methods to estimate the posture of the hand when a fingertip is hidden from the optical camera. This study therefore proposes estimating the hand posture from a hand-dorsal image, which can be captured even when the hand occludes its fingertips. A regression model that estimates the distance between the fingertips of the thumb and forefinger was constructed using a convolutional neural network (CNN) [2]. This work evaluated the root mean squared error (RMSE) of the estimation. The RMSE for a model trained and tested on the same day was less than 1.8 mm, which shows that the proposed method could be effective where self-occlusion is a problem. This study also evaluates the robustness of the learning model to time variation.
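
A minimal sketch of such a CNN regressor, assuming single-channel 64x64 dorsal-hand images and PyTorch; the architecture and sizes are illustrative and not the network evaluated in the paper.

    # Illustrative CNN regressor from a hand-dorsal image to one distance value.
    import torch
    import torch.nn as nn

    class DistanceRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                nn.Linear(64, 1),           # predicted distance in millimeters
            )

        def forward(self, x):
            return self.head(self.features(x))

    model = DistanceRegressor()
    images = torch.randn(8, 1, 64, 64)             # batch of dorsal-hand images
    target_mm = torch.rand(8, 1) * 100             # ground-truth distances
    loss = nn.MSELoss()(model(images), target_mm)  # RMSE is the square root of this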

Evaluating ray casting and two gaze-based pointing techniques for object selection in virtual reality

Selecting an object is a basic interaction task in virtual reality (VR) environments. Interaction techniques with gaze pointing have potential for this elementary task. There appears to be little empirical evidence concerning the benefits and drawbacks of these methods in VR. We ran an experiment studying three interaction techniques: ray casting, dwell time and gaze trigger, where gaze trigger was a combination of gaze pointing and controller selection. We studied user experience and interaction speed in a simple object selection task. The results indicated that ray casting outperforms both gaze-based methods while gaze trigger performs better than dwell time.

EXController: enhancing interaction capability for VR handheld controllers using real-time vision sensing

This paper presents EXController, a new controller-mounted finger posture recognition device specially designed for VR handheld controllers. We seek to provide additional input through real-time vision sensing by attaching a near-infrared (NIR) camera to the controller. We designed and implemented an exploratory prototype with an HTC Vive controller. The NIR camera is modified from a traditional webcam, and a data-driven Convolutional Neural Network (CNN) classifier is applied to its images. We designed 12 different finger gestures and trained the CNN classifier with a dataset from 20 subjects, achieving an average cross-subject accuracy of 86.17%, with approximately 92% or more on three of the finger postures and more than 89% on the top-4 accuracy postures. We also developed a Unity demo that shows matched finger animations, running at approximately 27 fps in real time.

Experience the dougong construction in virtual reality

Dougong is a unique element of traditional Chinese architecture. In universities, architecture students usually use videos, pictures, and even handmade crafts to learn the knowledge and culture of Dougong. However, making these complicated Dougong components by hand requires many facilities. To solve these problems, this paper builds a learning application using Virtual Reality (VR) technology in which students can master how to construct Dougong by interacting with virtual models. In addition to the learning module, the application creates a simulated scene showing students the great charm and design ideas of ancient Chinese buildings. Comparison experiments indicate that students learning via the VR-based application identify more Dougong components and their placement than those learning via conventional teaching.

Experimenting novel virtual-reality immersion strategy to alleviate cybersickness

Cybersickness, related to virtual reality (VR) experienced through head-mounted devices (HMDs), is also known as motion sickness in VR environments. Researchers and developers have been working to find appropriate technological means to alleviate this feeling of sickness. In this paper, we aim to further improve VR immersion via HMDs by strengthening the user's sense of presence in the virtual world along with engagement. Our results show that, with alternative presentation strategies in the same VR environment, cybersickness can be overcome, resulting in greater user acceptance of VR technology.

Extending recreational environments with a landscape-superimposed display using mixed reality

Herein, we describe a system that extends recreational experiences by overlaying a virtual landscape of a remote place onto the currently experienced real landscape using mixed reality (MR) technology and displaying avatars of other users. Many recreational activities can be performed outdoors; however, such activities usually involve traveling costs, preparation time, and schedule adjustments. To reduce the impact of these factors, we implemented a system that extends recreational environments, thereby allowing free movement through the manipulation of visual information using MR.

Gaze and body capture system under VR experiences

This paper proposes a novel system to simultaneously capture the gaze behavior and whole-body motion of a person experiencing 6-DOF VR content. The system consists of a VR goggle, eye-trackers attached to the goggle, and multiple Kinects. Measurements from these devices are all described in a consistent global coordinate system. Since the Kinects are robustly calibrated, whichever position and pose the user is in, their whole-body pose as well as gaze directions are correctly measured. Using this system, we can easily capture the gaze behavior as well as the body motion of people in any VR scene, which is helpful for physiological research.

Gigapixel virtual reality employing live superzoom cameras

We present a live gigapixel virtual reality system employing a 360° camera, a superzoom camera with a pan-tilt robotic head, and a head-mounted display (HMD). The system is capable of showing on-demand gigapixel-level subregions of 360° videos. Similar systems could be used to have live feed for foveated rendering HMDs.

Hamlet: directing virtual actors in computational live theater

We present "Hamlet", a prototype implementation of a virtual reality experience in which a player takes on a role of the theater director. The objective of the experience is to direct Adam, a virtual actor, to deliver the best possible performance of Hamlet's famous "To be, or not to be" soliloquy. The player interacts with Adam using voice commands, gestures, and body motion. Adam responds to acting directions, offers his own interpretations of the soliloquy, acquires the choreography from the player's body motion, and learns the scene blocking by following the player's pointing gestures.

Hands-free vibrotactile feedback for object selection tasks in virtual reality

Interactions between humans and virtual environments rely on timely and consistent sensory feedback, including haptic feedback. However, many questions remain open concerning the spatial location of haptics on the user's body in VR. We studied how simple vibrotactile collision feedback on two less-studied locations, the temples and the wrist, affects an object picking task in a VR environment. We compared visual feedback to three visual-haptic conditions, providing haptic feedback on the participants' (N=16) wrists, temples, or simultaneously on both locations. The results indicate that for continuous, hand-based object selection, the wrist is a more promising feedback location than the temples. Further, even a suboptimal feedback location may be better than no haptic collision feedback at all.

High color-fidelity display using a modified projector

A high color-fidelity display provides accurate spectral reproduction to reduce observer metamerism. In this poster, we implement a multispectral projection display using a modified projector. The modification only requires adding a lens array in the projection optical path to create multiple copies of the image and using color filters to create new primaries. To produce new primaries with high throughput and low correlation, we propose a volume-maximization-based filter selection approach. We also present an efficient multispectral rendering algorithm to compute the input values of each primary. Experiments show that our multispectral display can accurately approximate desired multispectral images and effectively reduce observer metamerism compared with the original three-primary projection display.
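
One plausible way to compute the per-primary input values (not necessarily the authors' algorithm) is a nonnegative least-squares fit of the target spectrum against the measured primary spectra, as in this hedged Python sketch; matrix shapes and values are placeholders.

    # Hedged sketch: solve for nonnegative drive values so that the mixture of
    # the measured primary spectra approximates a target spectrum.
    import numpy as np
    from scipy.optimize import nnls

    def primary_inputs(primaries, target_spectrum):
        """primaries: (n_wavelengths, n_primaries) spectra of the new primaries.
        target_spectrum: (n_wavelengths,) desired spectrum for one pixel.
        Returns nonnegative drive values, one per primary."""
        weights, _residual = nnls(primaries, target_spectrum)
        return np.clip(weights, 0.0, 1.0)   # clamp to the displayable range

    # Toy example with 31 wavelength samples and 6 primaries.
    rng = np.random.default_rng(0)
    P = rng.random((31, 6))
    t = P @ np.array([0.2, 0.0, 0.5, 0.1, 0.0, 0.3])
    print(primary_inputs(P, t))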

Illumination for 360 degree cameras

Additional illumination improves the capture of omnidirectional 360° video and images, especially for dark or high-contrast environments. There is no "behind" for 360° cameras, so the placement of lights is a problem. We explore ways to position lights on some 360° cameras, and propose two good locations.

Image compensation and stabilization for immersive 360-degree videos from capsule endoscopy

This paper describes image processing that can be used to develop immersive 360-degree videos from capsule endoscopy procedures. When viewed through a head-mounted display (HMD), doctors are able to inspect the human gastrointestinal tract as if they were inside the patient's body. Although the endoscopy capsule has two tiny fisheye cameras, the images captured by these cameras cannot be directly converted to equirectangular images, the basic format used to produce 360-degree videos. This study proposes a method to generate a pseudo-omnidirectional video from the original images and stabilizes the video to prevent virtual reality (VR) sickness.
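
For context, a common ingredient of such conversions is mapping equirectangular pixel directions into a fisheye image under an assumed lens model. The following Python sketch uses an equidistant fisheye model with invented parameters and is not the paper's algorithm.

    # Illustrative sketch: for each equirectangular pixel, compute where to
    # sample an equidistant-model fisheye image (camera facing +z).
    import numpy as np

    def equirect_to_fisheye_coords(width, height, fish_size, fov_deg=180.0):
        """Returns (u, v) fisheye sample coordinates for every equirectangular
        pixel, with NaN where the direction lies outside the fisheye's FoV."""
        lon = (np.arange(width) / width - 0.5) * 2.0 * np.pi       # -pi..pi
        lat = (0.5 - np.arange(height) / height) * np.pi           # pi/2..-pi/2
        lon, lat = np.meshgrid(lon, lat)
        # Unit direction for each pixel (x right, y up, z forward).
        x = np.cos(lat) * np.sin(lon)
        y = np.sin(lat)
        z = np.cos(lat) * np.cos(lon)
        theta = np.arccos(np.clip(z, -1.0, 1.0))                   # angle from axis
        phi = np.arctan2(y, x)
        max_theta = np.radians(fov_deg) / 2.0
        r = (theta / max_theta) * (fish_size / 2.0)                # equidistant model
        u = fish_size / 2.0 + r * np.cos(phi)
        v = fish_size / 2.0 - r * np.sin(phi)
        outside = theta > max_theta
        u[outside] = np.nan
        v[outside] = np.nan
        return u, v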

Impression estimation model and pattern search system based on style features and Kansei metric

In this study, we aimed to construct impression estimation models of clothing patterns based on style features and a Kansei metric. We first conducted a subjective evaluation experiment and a factor analysis, and quantified the visual impressions of flower patterns. We then used CNN style features as image features suitable for representing flower patterns. Next, with Lasso regression, we reduced the dimensionality based on the Kansei metric (impression evaluation) and modeled the relationship between visual impressions and image features. Furthermore, we implemented a pattern search system using the modeled relationship.
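
A minimal sketch of the modeling step as described, fitting a Lasso regression from image style features to impression (Kansei) scores with scikit-learn; the feature dimensionality, sample counts, and alpha are placeholders.

    # Sketch: sparse linear model from style features to an impression score.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    style_features = rng.random((120, 512))        # e.g. CNN style-feature summaries
    impression_scores = rng.random(120)            # factor scores from the survey

    model = Lasso(alpha=0.01).fit(style_features, impression_scores)
    selected = np.flatnonzero(model.coef_)         # features kept by the L1 penalty
    predicted = model.predict(style_features[:5])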

Interactive virtual exhibition: creating custom virtual art galleries using web technologies

This paper presents an immersive 3D virtual reality application accessed through the web that allows users to create their own custom virtual art galleries. The application allows users to select paintings based on a time range or country and then it dynamically generates the 3D virtual exhibit. Various features about the exhibit can be customized, such as the floor texture and wall color. Users can also save their exhibit, so it can be shared with others.

Investigation on the cutaneous/proprioceptive contribution to the force sensation induced by electrical stimulation above tendon

A method to present force sensation based on electrical stimulation of the tendon has been suggested, and the resulting sensation was considered to be due to the contribution of proprioceptors such as Golgi tendon organs. However, there was no clear evidence about the contributing receptors, and because the method uses percutaneous electrical stimulation, cutaneous receptors are also candidates. In this paper, we conducted experiments to determine whether the force sensation generated by this method is due to cutaneous sensation or proprioception, by changing the effective depth of electrical stimulation through the electrode spacing. The results showed that when the electrical stimulation could reach deep tissue receptors, the force sensation was felt more clearly, suggesting a possible contribution of the proprioceptors.

JamGesture: an improvisation support system based on physical gesture observed with smartphone

Physical gestures promote musical comprehension because they provide visual information about a musical performance for others. Melodic outlines especially have a high affinity with intuitive physical gestures. We propose methods for recognizing physical gestures using motion-sensing cameras and smartphone sensors, and we have developed an improvisation support system, JamGesture, by integrating a smartphone-based gesture recognition method with JamSketch, a system for melody generation based on melodic outlines. JamGesture enables users to improvise music by combining their intuitive physical gestures with the melody-generation function of JamSketch.

Learning-based word segmentation for reliable text document retrieval and augmentation

Imagine that one has access to part of a text document, say a page, and from that wants to identify the document to which it belongs. In such cases, there is a need to perform content-based document retrieval in a large database.

Let's guide a smart interface for VR HMD and leap motion

In this paper, we propose improvements to human-computer interaction in serious content using Leap Motion and a VR HMD. User immersion and interaction are at the core of VR content technology. Recently, advances in VR HMD technology have been attracting attention in the VR content market. Currently used interface methods are mostly uncomfortable for users, since the way of interaction stems from conventional interface layouts rather than considering the physical characteristics of VR HMDs. To address this, a HUD (Head-Up Display) has been used to optimize the interaction of serious VR content suitable for a VR HMD and Leap Motion. In addition, to validate the proposed study, we present a comparison of performance tests.

Low-cost VR collaborative system equipped with haptic feedback

In this paper, we present a low-cost virtual reality (VR) collaborative system equipped with haptic feedback. The system is composed of a Kinect sensor for body and gesture detection, a microcontroller and vibrators to simulate outside interactions, and a smartphone-powered cardboard viewer, all connected over a network implemented with the Unity 3D game engine.

Mathematical model for pop-up effect of ChromaDepth

ChromaDepth glasses are microprism glasses that produce a pop-up effect by combining refraction and diffraction. In this paper, we formulate the relation between the pop-up distance and viewing distance using a mathematical model. Using our model and optical measurements, we evaluate the human perception of the pop-up effect for ChromaDepth glasses using a large distant display.

Measuring physical exertion in virtual reality exercise games

We demonstrate a novel method of applying the capabilities of mobile virtual reality technology to the health sciences by measuring physical exertion in a VR exercise game. By measuring changes in heart rate in a thirteen-person user study, we find evidence to suggest that virtual reality exercise can induce a moderate to high level of physical exertion and produce an immersive and intriguing experience.
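
As a side note (not taken from the paper), exertion from heart-rate measurements is often expressed as a percentage of heart-rate reserve, as in this small Python sketch with invented example values.

    # Side sketch: percentage of heart-rate reserve (Karvonen method).
    def percent_heart_rate_reserve(hr_exercise, hr_rest, age_years):
        """Returns exertion as a fraction of heart-rate reserve (0..1)."""
        hr_max = 220 - age_years                  # widely used rough estimate
        return (hr_exercise - hr_rest) / (hr_max - hr_rest)

    # Example: resting 65 bpm, 150 bpm during the VR game, 25-year-old player.
    print(percent_heart_rate_reserve(150, 65, 25))   # ~0.65 -> moderate-to-high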

Motion recognition for automatic control of a block machine

A block machine has been proposed as an effective system to support attack practice in volleyball. A method for manipulating the system by tablet operation has been established, and use of the system has been shown to improve practice effectiveness. However, because manual operation is required, it has been reported that practice efficiency decreases due to errors between the block position and the attack position. Therefore, in order to operate the block machine automatically, we propose a method to acquire the player's position from monocular video in real time and predict the attack position.

Multi-view augmented reality with a drone

This paper presents some early results from an exploration into Augmented Reality (AR) applications in which users have access to controllable alternative viewing positions based on a camera-mounted unmanned aerial vehicle (UAV). These results include a system specification that defines and identifies the requirements of multi-view AR, and a demo application in which the user can switch between the traditional first-person view and a third-person view. While only an initial step in the investigation, the results illustrate practical applications for multi-view AR functionality. The paper concludes with a discussion of the next steps for the investigation.

Perceptual model optimized efficient foveated rendering

The higher resolution, wider FOV, and increasing frame rates of HMDs demand more VR computing resources. Foveated rendering is a key solution to these challenges. This paper introduces perceptual-model-optimized foveated rendering: tessellation levels and culling areas are adaptively adjusted based on visual sensitivity. We improve rendering performance while satisfying visual perception.
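
To illustrate the general idea of sensitivity-driven level selection (not the paper's perceptual model), the following Python sketch picks a tessellation level from the angular eccentricity between a surface patch and the gaze direction; thresholds and levels are invented.

    # Illustrative sketch: coarser tessellation with increasing eccentricity.
    import math

    def eccentricity_deg(gaze_dir, patch_dir):
        """Angle in degrees between two unit view-space direction vectors."""
        dot = max(-1.0, min(1.0, sum(g * p for g, p in zip(gaze_dir, patch_dir))))
        return math.degrees(math.acos(dot))

    def tessellation_level(ecc_deg, foveal_deg=5.0, mid_deg=20.0):
        if ecc_deg <= foveal_deg:
            return 64          # full detail inside the fovea
        if ecc_deg <= mid_deg:
            return 16          # reduced detail in the near periphery
        return 4               # coarse detail (or cull) in the far periphery

    print(tessellation_level(eccentricity_deg((0, 0, 1), (0.26, 0, 0.97))))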

PeriTextAR: utilizing peripheral vision for reading text on augmented reality smart glasses

Augmented Reality (AR) provides real-time information by superimposing virtual information onto the user's view of the real world. Our work is the first to explore how peripheral vision, instead of central vision, can be used to read text on AR and smart glasses. We present PeriTextAR, a multiword reading interface using rapid serial visual presentation (RSVP) [5]. This enables users to observe the real world using central vision while using peripheral vision to read virtual information. We first conducted a lab-based study to determine the effect of different text transformations by comparing reading efficiency among 3 capitalization schemes, 2 font faces, 2 text animation methods, and 3 different numbers of words for the RSVP paradigm. A second lab-based study investigated the performance of PeriTextAR against control text, and the results showed significantly better performance.
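
A minimal sketch of an RSVP presentation loop in Python, assuming a fixed words-per-minute rate and chunk size; these parameters are placeholders rather than the values studied in the paper.

    # Sketch: show a fixed number of words at a time for a duration derived
    # from the words-per-minute rate.
    import time

    def rsvp(text, words_per_chunk=2, wpm=250, show=print):
        words = text.split()
        seconds_per_chunk = 60.0 / wpm * words_per_chunk
        for i in range(0, len(words), words_per_chunk):
            show(" ".join(words[i:i + words_per_chunk]))
            time.sleep(seconds_per_chunk)

    rsvp("Augmented reality provides real-time information to the wearer")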

Phantazuma: the stage machinery enabling the audience members to watch different contents depending on their position by vision control film and pepper's ghost

In this research, we aim to create stage machinery enabling audience members of a stage performance to watch different content depending on their position. To achieve this goal, we combined a vision control film, whose transparency changes depending on the viewing angle, with the classic Pepper's ghost effect. The system thus enables audience members in the same theater to watch different scenes (live actors only, ghosts only, or both) depending on their position. This paper describes our research motivation, the design and implementation of the proposed system, and the operation results.

plARy: sound augmented reality system using video game background music

The authors of this paper explored the possibility of enhancing the interpretation of reality by synchronizing real-life situations with video game soundtracks. "plARy" is a music-based augmented reality application that immerses users in the world of video games by playing their soundtracks, enhancing the user's interpretation of the real world. By playing familiar game music according to the locations of individual users, the application lets them recall the scenes and emotions experienced while playing the game, based on their previous learning. The authors implemented a system that uses Apple iBeacon for proximity detection and evaluated it through an experiment. In their reviews, many participants answered that they felt they had imagined the world of the game and that the background music became associated with locations.

Preliminary study on angular properties of spatial awareness of human in virtual space

This manuscript describes an investigation into humans' spatial awareness in a virtual space. In the experiment, subjects watch a short video clip of moving through a curved passage and then fill in a questionnaire about how strongly the passage is curved. The results show that people perceive a curved path in virtual space as less curved than it actually is.

Prototyping impossible objects with VR

Impossible objects are three-dimensional objects that, when observed from a specific viewpoint, give the impression that such objects cannot exist in actual three-dimensional space. The purpose of the present study is to develop a system for prototyping impossible objects with VR, which can be used both for prototyping impossible objects and for evaluating how reliably the expected illusions occur when the real objects are observed with the naked eye, before fabricating the impossible objects with 3D printers. We implemented our prototyping system with Unity and the C# programming language for use with Oculus Go. The advantage of employing VR in prototyping impossible objects is that we can take the scale effect into account when evaluating how reliably the expected illusions occur.

Ramen spoon eraser: CNN-based photo transformation for improving attractiveness of ramen photos

In recent years, a large number of food photos have been posted globally on social networking services (SNS). To obtain many views or "likes", attractive photos should be posted. However, some casual foods are served with utensils on a plate or in a bowl at restaurants, which spoils the attractiveness of meal photos. Especially in Japan, where ramen is the most popular casual food, ramen is usually served with a ramen spoon in the bowl at ramen shops. This is a big problem for SNS photographers, because a ramen spoon soaked in a ramen bowl severely degrades the appearance of ramen photos. In this paper, we therefore propose an application called "ramen spoon eraser" that erases the spoon from ramen photos using a CNN-based image-to-image translation network. The application can automatically erase ramen spoons from ramen photos, which greatly improves their attractiveness. In the experiment, we train models in two ways as CNN-based image-to-image translation networks, using a dataset of ramen images with and without spoons collected from the Web.

Random-forest-based initializer for solving inverse problem in 3D motion tracking systems

Many motion tracking systems require solving an inverse problem to compute the tracking result from the original sensor measurements. For real-time motion tracking, typical solutions such as the Gauss-Newton method need an initial value to optimize the cost function through iterations. A powerful initializer is crucial for generating a proper initial value at every time instant, for achieving continuous accurate tracking, and for rapid tracking recovery even when tracking is temporarily interrupted. An improper initial value easily causes the optimization to diverge and does not always lead to reasonable solutions. Therefore, we propose a new random-forest-based initializer to obtain proper initial values for efficient real-time inverse problem computation. Our method trains a random-forest model with a large set of varied inputs and corresponding outputs and uses it as an initializer for runtime optimization. As an instance, we apply our initializer to IM3D [1], a real-time magnetic 3D motion tracking system with multiple tiny, identifiable, wireless, occlusion-free passive markers (LC coils).
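
The overall idea can be sketched as follows, assuming a toy forward model in place of IM3D's magnetic sensor model: a random forest learns the measurement-to-pose mapping offline, and its prediction seeds an iterative least-squares refinement at runtime.

    # Hedged sketch: random-forest initializer followed by least-squares refinement.
    # The forward model and data are synthetic stand-ins, not IM3D's.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from scipy.optimize import least_squares

    def forward_model(pose):
        """Toy sensor model: maps a 3-parameter pose to 8 measurements."""
        A = np.arange(24).reshape(8, 3) / 24.0
        return np.sin(A @ pose)

    # Train the initializer on many simulated (pose, measurement) pairs.
    rng = np.random.default_rng(0)
    poses = rng.uniform(-1.0, 1.0, size=(5000, 3))
    measurements = np.array([forward_model(p) for p in poses])
    initializer = RandomForestRegressor(n_estimators=50).fit(measurements, poses)

    # Runtime: predict an initial pose, then refine it against the measurement.
    observed = forward_model(np.array([0.3, -0.5, 0.8]))
    pose0 = initializer.predict(observed.reshape(1, -1))[0]
    refined = least_squares(lambda p: forward_model(p) - observed, pose0)
    print(pose0, refined.x)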

Realistic simulation of progressive vision diseases in virtual reality

People with a visual impairment perceive their surroundings differently than those with healthy vision. It can be difficult to understand how affected people perceive their surroundings, even for the affected themselves. We introduce a virtual reality (VR) platform capable of simulating the effects of common visual impairments. With this system we are able to create a realistic VR representation of actual visual fields obtained from a medical perimeter.

Real-time human motion forecasting using a RGB camera

We propose a real-time human motion forecasting system that visualizes the future pose in virtual reality using an RGB camera. Our system consists of three parts: 2D pose estimation from RGB frames using a residual neural network, 2D pose forecasting using a recurrent neural network, and 3D recovery from the predicted 2D pose using a residual linear network. To improve the learning of temporal features for prediction, we propose a method that uses lattice optical flow for joint movement estimation. After fitting the skeleton, a predicted 3D model of the target human is built 0.5 s in advance in a 30-fps video.

Resolving occlusion for 3D object manipulation with hands in mixed reality

Due to the need to interact with virtual objects, hand-object interaction has become an important element in mixed reality (MR) applications. In this paper, we propose a novel approach to handle the occlusion of augmented 3D object manipulation with hands by exploiting the nature of hand poses combined with tracking-based and model-based methods, achieving a complete mixed reality experience without the need for heavy computation, complex manual segmentation, or special gloves. The experimental results show a frame rate faster than real time and a high accuracy of rendered virtual appearances, and a user study verifies a more immersive experience compared to past approaches. We believe that the proposed method can improve a wide range of mixed reality applications that involve hand-object interactions.

Simultaneous independent information display at multiple depths using multiple projectors and patterns created by epipolar constraint and homography transformation

Generally, the image projected by a video projector is spatially invariant, and a projector cannot present different images at different depths. If independent images could be projected at different depths, there would be great potential for new AR and MR information presentation devices. Previous research presented a solution using multiple projectors; however, it suffers from insufficient contrast. In this paper, we propose a new algorithm to achieve high-contrast projection by restricting the projection pattern to symbolic information such as letters, which allows large freedom in optimizing patterns that present the same information. For efficiency, we also introduce an epipolar-constraint-based pattern optimization algorithm, which divides the original 2D problem into 1D problems. Our demonstration will show several words presented at different depths by static video projectors with high contrast.

Study on VR-based wheelchair simulator using vection-inducing movies and limited-motion patterns

We propose a VR-based wheelchair simulator using a combination of vection-inducing movies displayed on a head-mounted display and motions performable by an electric-powered wheelchair. The scenes of the movie change as if the wheelchair performs motions that are not actually performable. We developed a prototype system and conducted an evaluation task, confirming that our simulator can provide a richer experience for barrier simulations.

System of delivering virtual object to user in remote place by handing gestures

There are many means of communicating with a person in a remote place, such as sending text messages, making a phone call, or chatting by video. Virtual reality systems enable contact with a distant person through an avatar, yet such communication can still feel separated from reality. We therefore built a system that delivers virtual objects to a user in a remote place through the gesture of handing the objects over. Views of the remote and local spaces are projected on a wall using video chat, and each virtual object is handed over using an Augmented Reality (AR) marker. The system promotes communication by conveying a feeling of connection with the remote space.

Texture synthesis for stable planar tracking

We propose a texture synthesis method to enhance the trackability of a target planar object by embedding natural features into the object during its design process. To transform an input object into an easy-to-track object at design time, we extend an inpainting method to naturally embed features into the texture. First, a feature-poor region of the input object is extracted based on feature-distribution-based segmentation. Then, the region is filled using an inpainting method with a feature-rich region found in an object database. By using context-based region search, the inpainted region remains consistent with the object's context while the feature distribution improves.

The impact of camera height in cinematic virtual reality

When watching a 360° movie with a head-mounted display (HMD), the viewer feels as if they are inside the movie and can experience it in an immersive way. The viewer's head is in exactly the same place as the camera was when the scene was recorded. Viewing a movie through an HMD from the perspective of the camera can raise some challenges; for example, the heights of well-known objects can irritate the viewer if the camera height does not correspond to the viewer's physical eye height. The aim of this work is to study how the position of the camera influences presence, sickness, and the user experience of the viewer. To that end, we considered several watching postures as well as various camera heights. The results of our experiments suggest that differences between camera and eye heights are more readily accepted if the camera position is lower than the viewer's own eye height. Additionally, sitting postures are preferred and can be adapted to more easily than standing postures. These results can be applied to improve guidelines for 360° filmmakers.

Towards first person gamer modeling and the problem with game classification in user studies

Understanding gaming expertise is important in user studies. We present a study comprising 60 participants playing a First-Person Shooter game (Counter-Strike: Global Offensive). The study provides results for a keyboard model used to obtain an objective measurement of gamers' skill. We also show that there is no correlation between frequency questionnaires and user skill.

Towards unobtrusive obstacle detection and notification for VR

We present the results of a preliminary study of our planned system for detecting obstacles in the physical environment with an RGB-D sensor and signalling them unobtrusively using metaphors within the virtual environment (VE).

User-centric classification of virtual reality locomotion

Traveling in a virtual world while confined to the real world requires a virtual reality locomotion (VRL) method. VRL remains an open issue because of three fundamental challenges: sickness, presence, and fatigue. We propose a User-Centric Classification (UCC) of VRL methods based on a method's ability to address these challenges. UCC provides a framework to discuss and compare different VRL methods and to examine performance trade-offs. We designed and implemented a testbed to study several VRL methods, and initial results demonstrated the effectiveness of the UCC framework [1].

Using deep-neural-network to extend videos for head-mounted display experiences

Immersion is an important factor in video experiences, and various methods and video viewing systems have been proposed to enhance it. Among these devices, head-mounted displays (HMDs) are home-friendly and widely available, and they can provide an immersive video experience owing to their wide field of view (FoV) and the separation of users from the outside environment. They are often used for panoramic and stereoscopic VR videos, but the demand for viewing standard plane videos has increased in recent years. However, plane videos are usually viewed in a theater mode, which restricts the FoV, so the advantages of HMDs are not fully utilized. We therefore explored an effective method for viewing plane videos on an HMD, in combination with view augmentation by LEDs added to the HMD. We used a deep neural network (DNN) to generate images for peripheral vision and wide-FoV customization.
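The paper does not publish its network, but the idea of extending a plane video frame into the periphery can be sketched with a small convolutional encoder-decoder that takes the current frame and predicts a wider frame whose borders drive the peripheral display or LEDs; the layer sizes below are assumptions for illustration only.

```python
# Sketch only: an assumed encoder-decoder that widens a video frame.
import tensorflow as tf

def build_extender(in_size=256, out_size=384):
    inputs = tf.keras.layers.Input(shape=(in_size, in_size, 3))
    x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Resizing(out_size, out_size)(x)  # enlarge to peripheral FoV
    outputs = tf.keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)
```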

"Virtual ability simulation" to boost rehabilitation exercise performance and confidence for people with disability

The purpose of this paper is to investigate a concept called virtual ability simulation (VAS) for people with disabilities in a virtual reality (VR) environment. In a VAS, people with disabilities perform tasks that are made easier in the virtual environment (VE) than in the real world. We hypothesized that placing people with disabilities in a VAS would increase their confidence and enable more efficient task completion than without a VAS. To investigate this hypothesis, we conducted a within-subjects experiment in which participants performed a virtual task called "kick the ball" under two conditions: a no-gain condition (i.e., the same difficulty as in the real world) and a rotational-gain condition (i.e., physically easier than the real world but visually identical). The results of our study suggest that VAS increased participants' confidence, which in turn led them to perceive the same task as easier.
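A minimal sketch of how a rotational gain might be applied (our assumption about the mechanism, not the authors' implementation): the tracked leg-swing angle is amplified before driving the virtual leg, so a smaller physical kick produces the full virtual kick while the visual result looks unchanged.

```python
# Sketch only: map a physically performed swing to the displayed swing.
def virtual_swing_angle(tracked_angle_deg, gain=1.5, max_angle_deg=60.0):
    """Amplify the tracked swing angle by a rotational gain, with a cap."""
    amplified = tracked_angle_deg * gain
    return min(amplified, max_angle_deg)

# Example: a 30-degree physical swing is displayed as a 45-degree virtual swing.
print(virtual_swing_angle(30.0))  # -> 45.0
```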

Virtual gaze: exploring use of gaze as rich interaction method with virtual agent in interactive virtual reality content

Nonverbal cues, especially eye gaze, play an important role in our daily communication, not only as an indicator of interest but also as a way to convey information to another party. In this work, we propose a simulation of human eye gaze in Virtual Reality content to improve the immersion of interaction between the user and a virtual agent. We developed eye-tracking-integrated interactive narrative content focused on the player's interaction with a gaze-aware virtual agent that can react to the player's gaze, simulating real human-to-human communication in a VR environment, and conducted an initial study to measure users' reactions.

Virtual reality interactivity in a museum environment

We present research based on the needs of museum interaction between visitors and artwork. Using 360° footage, we explored the possibilities of augmented and virtual environments. Adding user-interaction features to enhance the surroundings helped us achieve the kind of immersion a museum can elicit. Museums are struggling to connect with the younger generation; therefore, incorporating virtual reality technology not only adds an exciting element for regular visitors but also entices new visitors. Our goal was to find ways to use virtual reality to do exactly this, enhancing and expanding the impact of these thought-provoking spaces.

Visualization of neural networks in virtual reality using Unreal Engine

Many applications today use deep learning to provide intelligent behavior. Understanding and explaining how deep learning models arrive at certain decisions can be hard, as the models are often opaque. We propose a visualization of convolutional neural networks in Virtual Reality (VR). The interactive application shows the internal processes and allows users to inspect the results. Large networks can be visualized in real time using special rendering techniques.

Visualization of software components and dependency graphs in virtual reality

We present a visualization of component-based software architectures in Virtual Reality (VR) for understanding complex software systems. We describe how all relevant data for the visualization are obtained by mining the whole source tree down to the source-code level. The data are stored in a graph database for further analysis and visualization. The software visualization uses an island metaphor. Storing the data in a graph database makes it easy to query for different aspects of the software architecture.
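To illustrate the kind of query such a graph database enables, here is a minimal sketch assuming a Neo4j database and a hypothetical schema with (:Component)-[:DEPENDS_ON]->(:Component) relationships (the paper does not specify its database product or schema); it computes the dependency fan-in of each component, which could, for example, drive the size of an island in the visualization.

```python
# Sketch only: query component fan-in from an assumed Neo4j schema.
from neo4j import GraphDatabase

QUERY = """
MATCH (c:Component)<-[:DEPENDS_ON]-(dependent:Component)
RETURN c.name AS component, count(dependent) AS fan_in
ORDER BY fan_in DESC
"""

def component_fan_in(uri="bolt://localhost:7687", user="neo4j", password="secret"):
    driver = GraphDatabase.driver(uri, auth=(user, password))
    with driver.session() as session:
        return [(r["component"], r["fan_in"]) for r in session.run(QUERY)]
```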

VR sickness measurement with EEG using DNN algorithm

Recently, VR technology has been developing rapidly and attracting public attention. However, VR sickness remains an unsolved problem in the VR experience. VR sickness is presumed to be caused by crosstalk between the sensory and cognitive systems [1]. However, since there is no objective way to measure these systems, VR sickness is difficult to measure. In this paper, we collect EEG data while participants experience VR videos and propose a deep neural network (DNN) algorithm that measures VR sickness from the electroencephalogram (EEG) data. Experiments were conducted to find an appropriate EEG data preprocessing method and a DNN structure suitable for this task, and an accuracy of 99.12% was obtained in our study.
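For orientation only, here is a minimal sketch of the kind of fully connected classifier such a pipeline might use; the feature dimension, layer sizes, and two-class output are assumptions on our part, not the paper's reported preprocessing or network structure.

```python
# Sketch only: a small DNN that classifies preprocessed EEG feature vectors
# into sickness / no-sickness.
import tensorflow as tf

def build_eeg_classifier(num_features=310, num_classes=2):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_features,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```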

Walking into ancient paintings with virtual candles

Taking a famous Chinese painting as a case study, this paper presents a virtual exhibition platform. Through the platform, users can walk into the scenes depicted in the painting while holding virtual candles, explore scenes brought to life by attached actor performances, and see every detail of the artwork. The scenes' light, shading, and shadows change in real time with the candles, just as real scenes would. To support real-time candle-moving and light-changing interaction, our implementation renders the light effects offline at densely sampled user positions and extracts the light, shading, and shadows as masks; during online processing, the system merges the artwork with the masks selected according to the candle positions. The system, novel in both design and technique, has been partially deployed in the Palace Museum (Beijing).
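A minimal sketch of the online step under simplifying assumptions (nearest-sample lookup and multiplicative masks; the exhibition system's actual blending is not specified here): the candle position selects a precomputed mask, which is then combined with the base artwork.

```python
# Sketch only: relight the artwork from precomputed candle-position masks.
import numpy as np

def relight(base_artwork, mask_grid, sample_positions, candle_pos):
    """base_artwork: HxWx3 float image; mask_grid: list of HxW float masks,
    one per sampled candle position; sample_positions: Nx2 array of those
    positions; candle_pos: current (x, y) candle position."""
    distances = np.linalg.norm(sample_positions - np.asarray(candle_pos), axis=1)
    nearest = int(np.argmin(distances))
    mask = mask_grid[nearest][..., None]  # broadcast over the RGB channels
    return np.clip(base_artwork * mask, 0.0, 1.0)
```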

XRCreator: interactive construction of immersive data-driven stories

Immersive data-driven storytelling, which uses interactive immersive visualizations to present insights from data, is a compelling use case for VR and AR environments. We present XRCreator, an authoring system for creating immersive data-driven stories. The cross-platform nature of our React-inspired system architecture enables collaboration among VR, AR, and web users, both in authoring and in experiencing immersive data-driven stories.