VRCAI '19 - The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry


SECTION: Technical Papers

SESSION: Session 1

Reducing Latency in a Collaborative Augmented Reality Service

We show how inter-device location tracking latency can be reduced in an Augmented Reality (AR) service that uses Microsoft’s HoloLens (HL) devices for multi-user collaboration. Specifically, we have built a collaborative AR system for a research greenhouse that allows multiple users to work collaboratively to process and record information about individual plants in the greenhouse. In this system, we have combined the HL “world tracking” functionality with marker-based tracking to develop a one-for-all-shared-experience (OFALL-SE) dynamic object localization service. We compare this OFALL-SE service with the traditional Local Anchor Transfer (LAT) method for managing shared experiences and show that the latency of data transmission between the server and users can be dramatically reduced. Our results indicate that OFALL-SE can support near-real-time collaboration when sharing the physical locations of the plants among users in a greenhouse.

The Impact of Remote User’s Role in a Mixed Reality Mixed Presence System

Research has shown that Mixed Presence (MP) systems are a valuable collaboration tool. However, current research into MP systems is limited to a handful of tabletop and Virtual Reality (VR) systems, with no exploration of Head-Mounted Display (HMD) based Mixed Reality (MR) solutions. In this paper, we present a prototype HMD-based MR Mixed Presence system that we designed and developed. We conducted a user study to investigate how different role assignments affect coordination and engagement in a group task with two local users using MR HMDs and one remote user with a desktop-based Augmented Reality (AR) interface. The results indicated that the role of coordinator significantly increased the remote user’s engagement, with increased usage of visual communication cues. This is further supported by the mental effort and perceived dominance reported by users. Feedback from users also indicated that visual communication cues are valuable for remote users in MP systems.

Within a Virtual Crowd: Exploring Human Movement Behavior during Immersive Virtual Crowd Interaction

This paper presents an exploratory study aimed at investigating the movement behavior of participants when immersed within a virtual crowd. Specifically, a crosswalk scenario was created in which a virtual crowd was scripted to cross the road once the traffic light turned green. Participants were also instructed to walk across the road to the opposite sidewalk. During that time, participant movement behavior was assessed using objective measurements (time, speed, and deviation). Five density conditions (no density, low density, medium density, high density, and extreme density) were developed to investigate which had the greatest effect on the movement behavior of the participants. The results indicated that the extreme density condition of the virtual crowd did indeed alter the movement behavior of participants to a significant degree. Given that density had the greatest effect on the movement behavior of participants, a follow-up study was conducted that built on the density findings and explored whether the speed and direction of the virtual crowd can also affect participants. This was achieved by examining five speed conditions and six directional conditions. The follow-up study provided some evidence that, during an extreme density condition, the speed of the crowd also affects the movement behavior of participants. However, no alteration in human movement behavior was observed when examining the direction of the virtual crowd. Implications for future research are discussed.

Investigating the use of Different Visual Cues to Improve Social Presence within a 360 Mixed Reality Remote Collaboration

In this paper, we investigate the social aspects and effects of various visual cues in a 360 panorama-based Mixed Reality remote collaboration system. We conducted a series of user studies using a prototype system to compare the effects of different combinations of visual cues, including hand gestures, ray pointing, and drawing, in a remote collaborative task. We found that adding ray pointing and drawing can enhance the social aspects of a remote collaborative experience and can also lower the mental effort required for the task. We discuss the findings and suggest directions for future research.

How to VizSki: Visualizing Captured Skier Motion in a VR Ski Training Simulator

Alpine ski training is restricted by environmental requirements and by the incremental and cyclical ways in which movement and form are taught. Therefore, we propose a virtual reality ski training system based on an indoor ski simulator. The system uses two trackers to capture the motion of the skis so that users can control them on the virtual ski slope. For training, we captured the motion of professional athletes and replay it to users to help them improve their skills. In two studies, we explore the utility of visual cues in helping users effectively learn the motion patterns of the pro skier. In addition, we look at the impact of feedback on this learning process. The work provides a basis for developing and understanding the possibilities and limitations of VR ski simulators, which have the potential to support skiers in their learning process.

Extended Narrow Band Weighted MultiFLIP for Two-Phase Liquid Simulation

Physically-based fluid simulation has been studied for many years in computer graphics. MultiFLIP is a powerful method for simulating two-phase liquid phenomena such as bubbles and the “glugging” effect of pouring water, which cannot be produced by the traditional Fluid Implicit Particle (FLIP) method. In contrast to FLIP, where only the liquid phase is involved, MultiFLIP samples two separate grid velocities for the gas and liquid volumes. However, MultiFLIP produces some abnormal phenomena, such as small liquid droplets being carried around by gas. This abnormality arises because MultiFLIP uses the same weights for both gas and liquid when blending the velocities near the interface for the divergence-free projection. To address this problem, we present a novel velocity coupling method, which uses different masses for gas and liquid particles when interpolating particle velocities onto the Eulerian grid. In addition, we apply a transition function to the MultiFLIP method so that the two-phase liquid simulation can switch between a particle-based simulation and a grid-based simulation, which aims to reduce the number of particles and smooth the liquid-gas interface in calm areas. Experiments show that our techniques conserve the kinetic energy and tiny details of the gas-liquid interface better, as well as reduce the number of gas and liquid particles.
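
The velocity-coupling idea above can be illustrated with a small sketch. This is not the authors' implementation; the interpolation kernel, grid layout, and mass values below are placeholder assumptions, and only the blending at a single grid node is shown.

```python
import numpy as np

def blend_node_velocity(particle_velocities, particle_weights, particle_masses,
                        use_mass=True):
    """Blend nearby particle velocities into one grid-node velocity.

    particle_weights: kernel weights from particle-to-node distance.
    particle_masses:  per-particle mass (gas particles much lighter than liquid).
    With use_mass=False this reduces to the equal-weight blending that lets
    light gas particles drag small liquid droplets around near the interface.
    """
    v = np.asarray(particle_velocities, dtype=float)   # shape (N, 3)
    w = np.asarray(particle_weights, dtype=float)      # shape (N,)
    m = np.asarray(particle_masses, dtype=float) if use_mass else np.ones_like(w)
    coeff = w * m
    return (coeff[:, None] * v).sum(axis=0) / coeff.sum()

# Toy example: one heavy liquid particle and one light gas particle near a node.
v_node = blend_node_velocity(
    particle_velocities=[[0.0, -1.0, 0.0],   # liquid falling
                         [0.0,  2.0, 0.0]],  # gas rising
    particle_weights=[0.5, 0.5],
    particle_masses=[1.0, 0.001])            # placeholder density ratio
print(v_node)  # dominated by the liquid particle, unlike the equal-weight case
```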

Motion Volume: Visualization of Human Motion Manifolds

The understanding of human motion is important in many areas such as sports, dance, and animation. In this paper, we propose a method for visualizing the manifold of human motions. A motion manifold is defined by a set of motions in a specific motion form. Our method visualizes the ranges of time-varying positions and orientations of a body part by generating volumetric shapes that represent them. It selects representative keyposes from the keyposes of all input motions to visualize the range of poses at each key timing. A geometrical volume that contains the trajectories of all input motions is generated for each body part. In addition, a geometrical volume that contains the orientations from all input motions is generated for a sample point on the trajectory. The user can understand the motion manifold by viewing these motion volumes. In this paper, we present experimental examples for a tennis shot form.

SESSION: Session 2

Using Augmented Reality to Improve Productivity and Safety for Heavy Machinery Operators: State of the Art

The machinery used in industrial applications such as agriculture, construction, and forestry is increasingly equipped with digital tools that aim to aid the operator in completing tasks, improving productivity, and enhancing safety. In addition, as machines become increasingly connected, there are even more opportunities to integrate external information sources. This situation creates a challenge in mediating the information to the operator. One approach that could be used to address this challenge is augmented reality, which enables system-generated information to be combined with the user’s perception of the environment. It has the potential to enhance the operator’s awareness of the machine, the surroundings, and the operation that needs to be performed. In this paper, we review the current literature to present the state of the art and discuss the possible benefits and uses of augmented reality in heavy machinery.

A Data-Driven Optimisation Approach to Urban Multi-Site Selection for Public Services and Retails

Urban life depends on public services and retail, whose site locations matter to residents’ convenience. We introduce a novel approach to systematic multi-site selection for public services and retail in an urban context. It takes as input a set of data about an urban area and generates an optimal configuration of two-dimensional locations for public service and retail sites. We achieve this goal using data-driven optimisation coupled with deep learning. The proposed approach can cost-efficiently generate a multi-site location plan that considers representative site selection criteria, including coverage, dispersion, and accessibility. It also complies with the local plan and the predicted suitability regarding land-use zoning.
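
The abstract does not state the objective being optimised; purely as an illustration of scoring a candidate multi-site configuration on coverage, dispersion, and accessibility, a toy score might look as follows. All weights, radii, and data shapes here are assumptions, and a real system would add the land-use and suitability constraints learned from the urban data.

```python
import numpy as np

def score_configuration(sites, residents, w_cov=1.0, w_disp=0.5, w_acc=0.5,
                        cover_radius=800.0):
    """Toy multi-criteria score for a candidate set of site locations.

    sites:     (k, 2) candidate site coordinates in metres, k >= 2.
    residents: (n, 2) resident locations.
    Higher is better.
    """
    sites = np.asarray(sites, dtype=float)
    residents = np.asarray(residents, dtype=float)
    d = np.linalg.norm(residents[:, None, :] - sites[None, :, :], axis=2)  # (n, k)
    nearest = d.min(axis=1)
    coverage = (nearest <= cover_radius).mean()       # share of residents covered
    accessibility = 1.0 / (1.0 + nearest.mean())      # shorter average trips score higher
    pair = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=2)
    dispersion = pair[np.triu_indices(len(sites), k=1)].min() / cover_radius
    return w_cov * coverage + w_disp * dispersion + w_acc * accessibility

# Score one candidate configuration over randomly placed residents.
residents = np.random.rand(1000, 2) * 5000.0
print(score_configuration([[1000, 1000], [4000, 4000]], residents))
```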

A Bowl-Shaped Display for Controlling Remote Vehicles

This paper proposes a bowl-shaped hemispherical display for observing omnidirectional images. This display type has many advantages over conventional flat 2D displays, in particular when it is used for controlling remote vehicles. First, it allows users to observe an azimuthal equidistant view of omnidirectional images by looking at it from above. Second, it provides a first-person view when looking into the inside of the hemispherical surface from diagonally above. Third, it provides a pseudo third-person view, as if watching the remote vehicle from behind, by observing both the inside and outside at the same time from obliquely above. These characteristics address the problem of blind angles around the remote vehicle. We conduct a VR-based user study to compare the bowl-shaped display with an equirectangular projection on a 2D display and a first-person view used in head-mounted displays. Based on the insights gained in the study, we present a real-world implementation and describe the uniqueness, advantages, and shortcomings of our method.

Computational Design and Fabrication of Customized Gamepads

SESSION: Session 3

3D Human Avatar Digitization from a Single Image

With the development of AR/VR technologies, a reliable and straightforward way to digitize the three-dimensional human body is in high demand. Most existing methods use complex equipment and sophisticated algorithms, which is impractical for everyday users. In this paper, we propose a pipeline that reconstructs a 3D human avatar at a glance. Our approach simultaneously reconstructs the three-dimensional human geometry and a whole-body texture map with only a single RGB image as input. We first segment the human body from the image and then obtain an initial body geometry by fitting the segment to a parametric model. Next, we warp the initial geometry to the final shape by applying a silhouette-based dense correspondence. Finally, to infer the invisible backside texture from a frontal image, we propose a network we call InferGAN. Comprehensive experiments demonstrate that our solution is robust and effective on both public data and our own captured data. Our human avatars can be easily rigged and animated using MoCap data. We developed a mobile application that demonstrates this capability in AR/VR settings.

Recycling a Landmark Dataset for Real-time Facial Capture and Animation with Low Cost HMD Integrated Cameras

Preparing datasets for training real-time face tracking algorithms for HMDs is costly. Manually annotated facial landmarks are available for regular photography datasets, but introspectively mounted cameras for VR face tracking have requirements incompatible with these existing datasets. Such requirements include operating ergonomically at close range with wide-angle lenses, low-latency short exposures, and near-infrared sensors. In order to train a suitable face solver without the cost of producing new training data, we automatically repurpose an existing landmark dataset to these specialist HMD camera intrinsics using a radial warp reprojection. Our method separates training into local regions of the source photos, i.e., the mouth and eyes, for more accurate local correspondence to the camera locations mounted underneath and inside the fully functioning HMD. We combine per-camera solved landmarks to yield a live animated avatar driven by the user’s facial expressions. Critical robustness is achieved with measures for mouth region segmentation, blink detection, and pupil tracking. We quantify results against the unprocessed training dataset and provide empirical comparisons with commercial face trackers.
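
As a rough sketch of repurposing landmarks via a radial warp, one could remap annotated 2D landmark coordinates as below. The distortion model and coefficients are generic placeholders, not the paper's HMD camera calibration.

```python
import numpy as np

def radial_warp_landmarks(landmarks_px, src_size, dst_size, k1=-0.25, k2=0.05):
    """Warp 2D landmarks from a conventional photo into a wide-angle crop.

    landmarks_px: (N, 2) pixel coordinates in the source image.
    k1, k2:       placeholder radial distortion coefficients of the target lens.
    """
    pts = np.asarray(landmarks_px, dtype=float)
    w, h = src_size
    center = np.array([w / 2.0, h / 2.0])
    norm = (pts - center) / center                      # roughly in [-1, 1]
    r2 = (norm ** 2).sum(axis=1, keepdims=True)
    warped = norm * (1.0 + k1 * r2 + k2 * r2 ** 2)      # simple radial model
    dw, dh = dst_size
    dst_center = np.array([dw / 2.0, dh / 2.0])
    return warped * dst_center + dst_center

# Example: move two mouth landmarks from a 512x512 photo into a 640x400 crop.
print(radial_warp_landmarks([[256, 400], [300, 410]], (512, 512), (640, 400)))
```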

Real-time Virtual Object Insertion for Moving 360° Videos

We propose an approach for real-time insertion of virtual objects into pre-recorded moving-camera 360° video. First, we reconstruct camera motion and sparse scene content via structure from motion on stitched equirectangular video. Then, to plausibly reproduce real-world lighting conditions for virtual objects, we use inverse tone mapping to recover high dynamic range environment maps that vary spatially along the camera path. We implement our approach in the Unity rendering engine for real-time virtual object insertion via differential rendering, with dynamic lighting, image-based shadowing, and user interaction. This expands the use and flexibility of 360° video for interactive computer graphics and visual effects applications.
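
The abstract does not specify the inverse tone mapping operator used; the following minimal sketch uses a simple gamma-expansion stand-in to show how an LDR panorama frame could be expanded into a rough HDR environment map for lighting virtual objects. The gamma and peak-luminance values are assumptions.

```python
import numpy as np

def inverse_tone_map(ldr_rgb, gamma=2.2, peak_luminance=6.0):
    """Expand an LDR frame (values in [0, 1]) into a rough HDR radiance map.

    Linearize the display-encoded frame, then boost bright pixels so that
    light sources in the panorama contribute plausibly intense lighting.
    """
    ldr = np.clip(np.asarray(ldr_rgb, dtype=float), 0.0, 1.0)
    linear = ldr ** gamma                                   # undo display gamma
    luminance = linear.mean(axis=-1, keepdims=True)
    boost = 1.0 + (peak_luminance - 1.0) * luminance ** 2   # expand highlights
    return linear * boost

# Example: a nearly saturated pixel is pushed well above 1.0, a dark one is not.
print(inverse_tone_map([[0.98, 0.97, 0.95], [0.2, 0.2, 0.2]]))
```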

Dynamic Occlusion Handling for Real-Time AR Applications

Augmented reality (AR) allows computer-generated graphics to be overlaid on images or video captured by a camera in real time. This technology is often used to enhance perception by providing extra information or simply by enriching the experience of the user. AR offers significant potential in many applications such as industry, medicine, education, and entertainment. However, for AR to achieve its maximum potential and become fully accepted, the real and virtual objects within the user’s environment must become seamlessly integrated. Three main types of problems arise when we try to achieve this effect: illumination issues, tracking difficulties, and occlusion problems. In this work we present an algorithm to handle AR occlusions in real time. Our approach uses raw depth information of the scene to perform a rough foreground/background segmentation. We use this information, as well as details from the color data, to estimate a blending coefficient and combine the virtual objects with the real objects into a single image. After experimenting with different scenes, we show that our approach is able to produce consistent and aesthetically pleasing occlusions between virtual and real objects at a low computational cost. Furthermore, we explore different alternatives for improving the quality of the final results while overcoming limitations of previous methods.
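
A minimal compositing sketch of the depth-based blending coefficient idea follows; the soft-threshold width and the handling of missing virtual depth are assumptions, not the authors' exact pipeline (which additionally uses color information).

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth,
                             softness=0.05):
    """Blend virtual and real pixels using a depth-based coefficient.

    alpha -> 1 where the virtual surface is clearly in front of the real one,
    alpha -> 0 where it is clearly behind, with a soft transition of width
    `softness` (in depth units) to hide noisy depth edges.
    """
    diff = real_depth - virt_depth                         # > 0: virtual is closer
    alpha = np.clip(diff / softness + 0.5, 0.0, 1.0)       # soft step
    alpha = np.where(np.isfinite(virt_depth), alpha, 0.0)  # no virtual geometry here
    return alpha[..., None] * virt_rgb + (1.0 - alpha[..., None]) * real_rgb

# Tiny 2x2 example: black real scene, white virtual object, mixed depth order.
real_rgb, real_depth = np.zeros((2, 2, 3)), np.full((2, 2), 1.0)
virt_rgb = np.ones((2, 2, 3))
virt_depth = np.array([[0.8, 1.2], [np.inf, 0.98]])
print(composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth))
```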

SESSION: Session 4

Analysis of VR Sickness and Gait Parameters During Non-Isometric Virtual Walking with Large Translational Gain

The combination of room-scale virtual reality and non-isometric virtual walking techniques is promising: the former provides a comfortable and natural VR experience, while the latter relaxes the constraint of the physical space surrounding the user. In the last few decades, many non-isometric virtual walking techniques have been proposed to enable unconstrained walking without disrupting the sense of presence in the VR environment. Nevertheless, many works have reported the occurrence of VR sickness near the detection threshold or after prolonged use. There is a knowledge gap regarding the level of VR sickness and gait performance for amplified non-isometric virtual walking well beyond the detection threshold. This paper presents an experiment with 17 participants that investigated VR sickness and gait parameters during non-isometric virtual walking at large and detectable translational gain levels. The results showed that the translational gain level had a significant effect on the reported sickness score, gait parameters, and center-of-mass displacements. Surprisingly, participants who did not experience motion sickness symptoms at the end of the experiment adapted well to the non-isometric virtual walking and even showed improved performance at a large gain level of 10x.

Conformal Redirected Walking for Shared Indoor Spaces

In this work we present a redirected walking scheme suitable for shared spaces in a virtual reality environment. We show that our redirected walking works for the case of two physical spaces (a host and a guest) being merged into a single virtual host space. The redirection is based on warping the guest space into the host space using a conformal mapping that preserves local shape and features. We compare our technique with state-of-the-art indoor redirection schemes and show its efficiency. We found our method to have better task performance, higher social presence, and less simulator sickness.
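
The abstract does not describe how the conformal map between the two room layouts is constructed. Purely to illustrate what conformality provides (angles and local shape preserved, with a position-dependent rotation applied to the view), the sketch below warps a guest-space point with a complex analytic power map; the choice of map and the exponent are illustrative assumptions.

```python
import numpy as np

def conformal_warp(guest_xy, alpha=0.75):
    """Map a guest-space point into host space with f(z) = z**alpha.

    Any analytic map with non-vanishing derivative is conformal, so local
    shape is preserved; alpha < 1 fans a wedge of the guest space out into a
    wider host wedge. The heading offset is the rotation induced by f'(z),
    i.e., the extra rotation the redirected view would apply at this point.
    """
    z = complex(guest_xy[0], guest_xy[1])
    w = z ** alpha                                      # warped position in host space
    deriv = alpha * z ** (alpha - 1.0) if z != 0 else complex(1.0, 0.0)
    heading_offset = np.angle(deriv)                    # local rotation in radians
    return (w.real, w.imag), heading_offset

# Example: a point two metres ahead and one to the side in the guest room.
print(conformal_warp((2.0, 1.0)))
```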

Unlimited Corridor: A Visuo-haptic Redirection System

The Unlimited Corridor is a virtual reality system that enables users to walk in an ostensibly straight direction around a virtual corridor within a small tracked space. Unlike other redirected walking systems, the Unlimited Corridor allows users to keep walking around without interruptions or resetting phases. This is made possible by combining a redirected walking technique with visuo-haptic interaction and a path planning algorithm. The Unlimited Corridor produces passive haptic feedback using semi-circular handrails; that is, when users grip a straight handrail in the virtual environment, they simultaneously grip a corresponding curved handrail in the physical world. These stimuli enable visuo-haptic interaction, with the user perceiving the gripped handrail as straight, and this sensation enhances the effects of redirected walking. Furthermore, we developed an algorithm that dynamically modifies the amount of distortion to allow a user to walk ostensibly straight and turn at intersections in any direction. We evaluated the Unlimited Corridor using a virtual space of approximately 400 m² in a physical space of approximately 60 m². According to a user study, the median value of the straightness sensation of walking when users grip the handrails (5.13) was significantly larger than that of the sensation felt without gripping the handrails (3.38).

SESSION: Session 5

LINACVR: VR Simulation for Radiation Therapy Education

La Petite Fee Cosmo

This study aims to investigate the effectiveness of combining an interactive game and the concept of productive failure (PF) to nurture innovative teaching and learning. The study also aims to promote innovative approaches to improving students’ learning experience with data structure concepts taught in computer science disciplines, especially linked list concepts. A 2D bridge-building puzzle game, “La Petite Fee Cosmo”, was developed to assist students in understanding the underlying concepts of the linked list and to foster creative usage of the various functionalities of a linked list in diverse situations. To evaluate the potential impact of the interactive game and the implications of productive failure on student learning, a pre-test, post-test, and delayed test were developed and used in the evaluation process. Further, the technology acceptance model (TAM) was applied to examine the factors that influence the adoption of the productive failure approach in learning data structures.

For group 1, the game that they played served as the equivalent of the pre-test taken by group 2. Both the game and the quiz contain exactly the same challenges. The game was built to analyse and infer the player's knowledge, skill level, and improvement. A player profile manager and a gameplay analytics framework were developed to support saving data on the PC for multiple users. The analytics system logs all player performance statistics for each stage, and a new log file is created for every replay of the same level; these systems interface directly with the operating system's file system for data storage. The singleton pattern was also used to develop these systems. Each time a new player logs into the game, a file with a unique id is generated, and the gameplay analytics system relies on the player's unique id to match its analytics file to a player. The post-test was conducted to measure improvement after the lecture. The delayed test, on the other hand, was conducted to evaluate how well students retain the knowledge over a certain period for the respective group and to verify the results.
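
A minimal sketch of the per-player, per-replay analytics logging described above; the file layout, field names, and singleton access pattern are illustrative assumptions rather than the actual implementation.

```python
import json
import time
import uuid
from pathlib import Path

class GameplayAnalytics:
    """Singleton-style logger: one profile per player, one log file per replay."""
    _instance = None

    def __new__(cls, root="analytics"):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.root = Path(root)
            cls._instance.root.mkdir(exist_ok=True)
        return cls._instance

    def register_player(self, name):
        """Create a new profile with the unique id that analytics files key on."""
        player_id = uuid.uuid4().hex
        (self.root / f"{player_id}.profile.json").write_text(
            json.dumps({"name": name, "id": player_id}))
        return player_id

    def log_replay(self, player_id, level, stats):
        """Write one new file per replay of a level, keyed by the player's id."""
        record = {"level": level, "time": time.time(), **stats}
        path = self.root / f"{player_id}_level{level}_{uuid.uuid4().hex[:8]}.json"
        path.write_text(json.dumps(record))

# Usage: register a player, then record one replay of level 3.
analytics = GameplayAnalytics()
pid = analytics.register_player("student_01")
analytics.log_replay(pid, level=3, stats={"moves": 42, "completed": True})
```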

Dealing with Clutter in Augmented Museum Environments

Augmented Reality (AR) can be used in museum and exhibition spaces to extend the available information space. However, AR scenes in such settings can become cluttered when exhibits are displayed close to one another. To investigate this problem, we have implemented and evaluated four AR headset interaction techniques for the Microsoft HoloLens that are based on the idea of Focus+Context (F+C) visualisation [Kalkofen et al. 2007]. These four techniques were made up of all combinations of interaction and response dimensions where the interaction was triggered by either “walk” (approaching an exhibit) or “gaze” (scanning/looking at an exhibit) and the AR holograms responded dynamically in either a “scale” or “frame” representation. We measured the efficiency and accuracy of these four techniques in a user study that examined their performance in an abstracted exhibition setting when undertaking two different tasks (“seeking” and “counting”). The results of this study indicated that the “scale” representation was more effective at reducing clutter than the “frame” representation, and that there was a user preference for the “gaze-scale” technique.

SESSION: Session 6

Data Presentation with Haptic Glyphs: A Pilot Study

This paper describes the feasibility of glyph-based data presentation and discusses related research. On the basis of these research findings, a software application has been designed and developed for multivariate data presentation in a multimodal virtual environment (VE). In a multimodal VE that incorporates a number of senses, i.e., visual, auditory, and touch or haptic, data is tangible along with its visual presentation. Variables are represented as haptic glyphs of different shapes, sizes, and other physical properties such as friction. Audio feedback helps further exploration of the data. The result of this pilot study demonstrates that glyphs can be successfully used for presenting multivariate data in a multimodal VE. This glyph-based multimodal presentation makes information available to blind and visually impaired (VI) users in a semantic-aware environment. A multimodal VE also enriches the experience of sighted users.

An Augmented Reality Application for Mobile Visualization of GIS-Referenced Landscape Planning Projects

We introduce an augmented reality application that allows the representation of planned real-world objects (e.g. wind turbines or power poles) at their actual geographic location. The application uses GPS for positioning, which is then supplemented by augmented reality feature tracking to obtain a constant and stable positional and rotational reading. In addition to the visualization, we use sensor data gathered on the fly to identify foreground objects. Thus, for practical scenarios, our application depicts images with mostly correct occlusion between real and virtual objects. The application will be used to support urban and landscape planners in their process, especially for the purpose of public information and acceptance. It provides an advantage over current planning processes, where the representation of objects is limited to positions on maps, miniature models, or at best a photo montage where the object is placed into a still camera image.

A Multi-User 360-Video Streaming System for Wireless Network

With the rapid development of Virtual Reality technology and its hardware, 360-degree video is becoming a new form of media that is arousing public interest. In the past few years, many 360-degree video delivery schemes have been proposed, but there is no standard solution that can fully overcome the difficulties caused by network latency and bandwidth limits. In this paper, we consider the context of a wireless network consisting of a base station and several users. We propose a 360-degree video delivery and streaming scheme which serves multiple users simultaneously while optimizing global bandwidth consumption. The system predicts the head movement of users using a machine learning algorithm and extracts the visible portion of the video frame for transmission. The core contribution of the scheme is that it recognizes the conjoint viewport of multiple users and then optimizes global bandwidth consumption by arranging the transmission of the conjoint viewport over the public channel of the wireless network. The results show that the proposed scheme can effectively reduce the global bandwidth consumption of the network with a relatively simple configuration.
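
The conjoint-viewport idea can be sketched as follows; the tiling of the equirectangular frame, the viewport prediction itself, and the broadcast/unicast split rule are assumptions used only for illustration.

```python
from collections import Counter

def split_broadcast_unicast(predicted_viewports):
    """Split predicted per-user viewport tiles into broadcast and unicast sets.

    predicted_viewports: dict user_id -> set of tile indices the user is
    predicted to look at in the next segment. Tiles wanted by more than one
    user (the conjoint viewport) are sent once over the public channel; the
    remaining tiles are sent per user.
    """
    counts = Counter(t for tiles in predicted_viewports.values() for t in tiles)
    broadcast = {t for t, c in counts.items() if c > 1}
    unicast = {u: tiles - broadcast for u, tiles in predicted_viewports.items()}
    return broadcast, unicast

# Three users looking at overlapping parts of an equirectangular tile grid.
views = {"u1": {4, 5, 6}, "u2": {5, 6, 7}, "u3": {6, 7, 8}}
shared, per_user = split_broadcast_unicast(views)
print(shared)    # {5, 6, 7} sent once on the public channel
print(per_user)  # leftover tiles sent individually to each user
```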

SESSION: Session 7

Combination of Mechanical and Electrical Stimulation for an Intense and Realistic Tactile Sensation

Naturalistic tactile sensations can be elicited by mechanical stimuli because mechanical stimulation reproduces natural physical phenomena. However, a mechanical stimulation that is too strong may cause injury. Although electrical stimulation can elicit strong tactile sensations without damaging the skin, it is inferior in terms of naturalness. We propose and validate a haptic method for presenting naturalistic and intense sensations by combining electrical and mechanical stimulation. We validate the proposed method by verifying whether both enhancement of the subjective strength of mechanical stimulation through electrical stimulation and elimination of the bizarre sensation of electrical stimulation through mechanical stimulation can be achieved. We find that the proposed method increases subjective intensity by 25% on average across participants compared with the mechanical stimulus alone and decreases the bizarre sensation compared with the electrical stimulus alone. The method can be used to enhance the experience of virtual-reality content but has room for improvement, especially in terms of intensity enhancement.

HypAR: Situated Mineralogy Exploration in Augmented Reality

Hyperspectral imaging, as a fast and cost-effective method of mapping the composition of geological materials in context, is a key enabler for scientific discoveries in the geosciences. Being able to do this in situ in a real-world context, possibly in real time, would have profound implications for geology and minerals exploration. This work addresses this important issue by developing an augmented reality application called HypAR that enables in-situ, interactive exploration of mineralogy spatially co-located and embedded with rock surfaces. User-centred design is deployed to ensure the utility and validity of the system. We describe the requirements analysis and design process for HypAR. We present a prototype using the Microsoft HoloLens that was implemented for a rock wall containing a wide range of minerals and materials from significant geological localities of Western Australia. We briefly discuss several use cases for which HypAR and extensions thereof may prove useful to geoscientists and other end users who have to make effective, informed decisions about the mineralogy of rock surfaces.

FUROSHIKI: Augmented Reality Media That Conveys Japanese Traditional Culture

Furoshiki, a traditional Japanese wrapping cloth, has long been part of everyday life in Japan and is still widely used today. In this paper, we propose a system that engages users and exposes them to Japanese wrapping culture and the attitude of handling things with care, by augmenting the act of “wrapping” with a traditional furoshiki using image-recognition and motion-image projection techniques.

SESSION: Session 8

Head-Fingers-Arms: Physically-Coupled and Decoupled Multimodal Interaction Designs in Mobile VR

This paper proposes a novel bimanual finger-based gesture called X-Fingers that provides interactive 2D spatial input control using vision-based techniques in mobile VR. This finger-based input modality can be coordinated with the movement of the user's arms or head to provide an additional input modality. Incorporating the arms or the head provides physically-coupled and physically-decoupled multimodal interactions, respectively. Given these two design options, we conducted user studies to understand how the nature of the physical coupling of interactions influences the user's performance and preferences, with tasks requiring varying degrees of coordination between the modalities. Our results show that physically-decoupled interaction designs are preferred when the degree of coordination within the multimodal interaction is high.

FEETICHE: FEET Input for Contactless Hand gEsture Interaction

Foot input has been proposed to support hand gestures in many interactive contexts; however, little attention has been given to contactless 3D object manipulation. This is important since many applications, notably sterile surgical theaters, require contactless operation. However, relying solely on hand gestures makes it difficult to specify precise interactions, since hand movements are difficult to segment into command and interaction modes. The unfortunate results range from unintended activations to noisy interactions and misrecognized commands. In this paper, we present FEETICHE, a novel set of multi-modal interactions combining hand and foot input for supporting contactless 3D manipulation tasks while standing in front of large displays, driven by foot tapping and heel rotation. We use depth-sensing cameras to capture both hand and feet gestures, and develop a simple yet robust motion capture method to track dominant foot input. Through two experiments, we assess how well foot gestures support mode switching and how this frees the hands to perform accurate manipulation tasks. Results indicate that users effectively rely on foot gestures to improve mode switching and reveal improved accuracy in both rotation and translation tasks.

Visualizing and Interacting with Hierarchical Menus in Immersive Augmented Reality

Graphical User Interfaces (GUIs) have long been used as a way to inform the user of the large number of available actions and options. GUIs in desktop applications traditionally appear in the form of two-dimensional hierarchical menus due to the limited screen real estate, the spatial restrictions imposed by the hardware (e.g., 2D), and the available input modalities (e.g., mouse/keyboard point-and-click, touch, dwell time, etc.). In immersive Augmented Reality (AR), there are no such restrictions and the available input modalities are different (i.e., hand gestures, head pointing, or voice recognition), yet the majority of applications in AR still use the same type of GUIs as desktop applications.

In this paper we focus on identifying the most efficient combination of (hierarchical menu type, input modality) to use in immersive applications using AR headsets. We report on the results of a within-subjects study with 25 participants who performed a number of tasks using four combinations of the most popular hierarchical menu types with the most popular input modalities in AR, namely: (drop-down menu, hand gestures), (drop-down menu, voice), (radial menu, hand gestures), and (radial menu, head pointing). Results show that the majority of the participants (15 of 25, 60%) achieved faster performance using the hierarchical radial menu with head-pointing control. Furthermore, the participants clearly indicated the radial menu with head-pointing control as the most preferred interaction technique due to its limited physical demand, as opposed to the current de facto interaction technique in AR, i.e., hand gestures, which after prolonged use become physically demanding, leading to arm fatigue known as ’gorilla arms’.

SESSION: Session 9

Augmented Reality-Based Procedural Task Training Application for Less Privileged Children and Autistic Individuals

In this work, we evaluate the applicability of Augmented Reality applications for enhancing learning experiences for children from less privileged backgrounds, with a focus on the autistic population. Such an intervention can prove very useful to children with reduced cognitive development. In our evaluation, we compare the AR mode of instruction for procedural task training, using tangram puzzles, with a live demonstration and a desktop-based application. First, we performed a within-subjects user study with neurotypical children in the age group of 9-12 years. We asked the children to independently solve a tangram puzzle after being trained through the different modes of instruction. Second, we used the same instruction modes to train autistic participants. Our findings indicate that during training, children took the longest time to interact with the desktop-based instruction and the shortest time to interact with the live demonstration mode. Children also took the longest time to independently solve the tangram puzzle in the desktop mode. We also found that autistic participants could use AR-based instructions but required more time to go through the training.

Immersive applications: what if users are in the autism spectrum?

Domain experts generally agree on the potential of VR-based treatment aimed at improving emotional competencies and social skills in individuals with Autism Spectrum Disorder (ASD). This raises the need for evaluation frameworks for Immersive Virtual Environments appropriate for this specific user population. In this paper we sketch the results of a study focused on engagement, with the multifold objective of (1) evaluating two headsets differing in the degree to which the physical world is cut out from the view (Oculus Rift and HoloLens) to study their general appropriateness with respect to possible VR-based treatment for people with ASD, (2) reasoning about the interpretation of selected measured engagement aspects for typically developing people and people with ASD, and (3) outlining the nature of possible VR-based ASD-oriented applications based on the two headsets.

Don’t Panic: Recursive Interactions in a Miniature Metaworld

Metaworld is a new recursive interaction paradigm for virtual reality, where a miniature display (or 3D map) of the virtual world is presented to the user as a miniature model that itself lives inside the virtual world. The miniature model is interactive, and every action that occurs in the miniature world similarly occurs in the greater virtual world and vice versa. We implemented the metaworld concept in the virtual reality application MetaCity, a city-designing sandbox where users can reach into a miniature model and move the cars and skyscrapers. Design considerations for how to display and interact with the miniature model are presented, and a technical implementation of the miniature world is described. The metaworld concept was informally and playfully tested in MetaCity, which revealed a number of novel interactions that enable the user to navigate quickly through large spaces, re-scale objects in the world, and manipulate the very fabric of the world itself. These interactions are discussed within the context of four major categories: Experiential Planning, Interdimensional Transformations, Power of the Gods, and Self Manipulation.

SESSION: Session 10

Grammar of VR Storytelling: Narrative Immersion and Experiential Fidelity in VR Cinema

As the grammar of VR storytelling evolves, we must look beyond the technical capabilities of the medium and the associated perceptual immersion in order to better understand the effect of narrative on users. This paper presents a qualitative analysis of the experience of VR Cinema and the experiencer’s connection to the narrative. The study attempts to illustrate the significance of narrative immersion with respect to the 360° medium of storytelling in VR. In addition, we examine how the various elements of such a narrative lead to experiential fidelity. We believe that the insights gathered will help VR filmmakers create effective narrative experiences.

Situated Storytelling with SLAM Enabled Augmented Reality

This paper addresses the feasibility of situated storytelling using Simultaneous Localisation and Mapping (SLAM) enabled augmented reality (AR) on a mobile phone. We specifically focus on storytelling in the heritage context, as it provides a rich environment for stories to be told in. We conducted expert interviews with several museum and heritage sites to identify major themes for storytelling in the heritage context. These themes informed the development of an AR-based storytelling application for a mobile phone. We evaluated the application in a user study and gained further insight into the factors that users appreciate in AR-based storytelling. From these insights we derive several high-level design guidelines that may inform future system development for situated storytelling, especially in the heritage context.

The Artistic Approach to Virtual Reality

Virtual Reality technologies have been challenging the way in which humans interact with computers since their inception. When viewed through an artistic lens, these interactions reveal a shift in the roles that content creators and end users fulfill. VR technologies inherently demand that the user participates in the creation of the content while incorporated into the experience. This realization has dramatic implications for media and artistic works: the traditional role of the content creator has been to dictate and frame the view through which the user interacts with the content, but with VR works much of the creator's role has been stripped away and transferred to the viewer. This breaking of the traditional roles, accompanied by the transition away from “the rectangle,” the flat rectangular plane which acts as a “canvas” for media and artistic works, requires a new approach to VR works.

SECTION: Poster Abstracts

An End-to-End Augmented Reality Solution to Support Aquaculture Farmers with Data Collection, Storage, and Analysis

Augmented reality (AR) is being rapidly adopted by industries such as logistics, manufacturing, and the military. However, one extremely under-explored yet significant area is the primary production industry, and as a major source of food and nutrition, seafood production has always been a priority for many countries. Aquaculture farming is a highly dynamic, unpredictable, and labour-intensive process. In this paper, we discuss the challenges in aquaculture farm operation based on our field studies with leading Australian fisheries. We also propose an "AR + Cloud" system design to tackle delayed in-situ water quality data collection and querying, as well as aquaculture pond stress monitoring and analysis.

An evaluation of augmented reality music notation

We conducted a focus group study of a prototype application to test the opportunities and limitations of augmented reality music notation for musical performance and rehearsal.

An Eye-Tracking Dataset for Visual Attention Modelling in a Virtual Museum Context

Predicting the user’s visual attention enables a virtual reality (VR) environment to provide a context-aware and interactive user experience. Researchers have attempted to understand visual attention using eye-tracking data in a 2D plane. In this poster, we propose the first 3D eye-tracking dataset for visual attention modelling in the context of a virtual museum. It comprises about 7 million records and may facilitate visual attention modelling in a 3D VR space.

Beyond Reality

In virtual reality (VR), a new language of sound design is emerging. As directors grapple to find solutions to some of the inherent problems of telling a story in VR—for instance, the audience's ability to control the field of view—sound designers are playing a new role in subconsciously guiding the audience's attention and, consequently, are framing the narrative. However, developing a new language of sound design requires time for creative experimentation, and in direct opposition to this, a typical VR workflow often features compressed project timelines, software difficulties, and budgetary constraints. VR sound research offers little guidance to sound designers: decades of research have focused on high fidelity and realistic sound representation in the name of presence and uninterrupted immersion [McRoberts, 2018], largely ignoring the potential contribution of cinematic sound design practices that use creative sound to guide an audience's emotion. Angela McArthur, Rebecca Stewart, and Mark Sandler go as far as to argue that unrealistic and creative sound design may be crucial for an audience's emotional engagement in virtual reality [McArthur et al., 2017].

To contribute to the new language of sound for VR, and with reference to the literature, this practice-led research explores cinematic sound practices and principles within 360-film through the production of a 5-minute 360-film entitled Afraid of the Dark. The research is supported by a contextual survey including unpublished interviews with the sound designers of three 360-films that had the budget and time to experiment with cinematic sound practices, namely “Under the Canopy” with sound design by Joel Douek, “My Africa” with sound design by Roland Heap, and the Emmy award-winning “Collisions” with sound design by Oscar-nominated Tom Myers from Skywalker Sound. Additional insights are included from an unpublished interview with an experienced team of 360-film sound designers from “Cutting Edge” in Brisbane, Australia: Mike Lange, Michael Thomas, and Heath Plumb.

The findings detail the benefits of thinking about sound from the beginning of pre-production, the practical considerations of on-set sound recording, and differing approaches to realistic representation and creative design for documentary in the sound studio. Additionally, the research contributes a low-budget workflow for creating spatial sound for 360-film as well as a template for an ambisonic location sound report.

Containerisation as a method for supporting multiple VR visualisation platforms from a single data source

This paper discusses a proof-of-concept context-aware container server for exposing multiple VR devices to a single data source. The data source was a real-time streamed reconstruction of a combat simulation generated in NetLogo. The devices included a mobile phone, tablet, PC, data wall, and HMD with dataglove interaction. Each device had its own specific requirements and user restrictions. Initial testing of this system suggests it is an efficient method for supporting diverse user needs whilst maintaining data integrity and synchronicity. The overall server architecture is discussed, as well as future directions for this research.

Creation and Live Performance of Dance and Music Based on a Body-part Motion Synthesis System

We developed a Body-part Motion Synthesis System (BMSS), which allows users to create choreography by synthesizing body-part motions and to simulate them in 3D animation. To explore the possibilities of using BMSS for creative activities, two dances with different concepts were created and performed by a dancer and a musician. We confirmed that BMSS might be able to generate effective choreographic motions for dance and to support their creation easily and quickly. Moreover, creation using BMSS might fuel new collaboration or interaction between dancers and musicians.

eEyes – an Integrated Aid System for the Blind and People with Low Vision

eEyes is an integrated aid system for people with low vision. It is a wearable device that can be attached to glasses and consists of a wearable unit and a self-developed standalone computing unit. eEyes employs Natural User Interface (NUI) technology such as hand gesture recognition: a user can interact with eEyes through static hand gestures (open hand, pointing, v sign, etc.). The hand gesture recognition was developed using skin-color-based techniques. This NUI technology provides users with enhanced usability of eEyes's services. eEyes's capabilities can also be applied to other fields such as mobile VR and AR interaction.

Generation of Turning Walking Sensation by a Vestibular Display

This paper describes a new method to generate a turning walking sensation by vestibular stimulation with an initial (bias) inclination of the motion seat. Our vestibular display can move a seat in 3 degrees of freedom: lifting, roll, and pitch rotation. All of these motions are important for generating a walking sensation. We investigated the intensity of the turning sensation (i.e., the sensation of self-direction changing), and the sensation of straight walking and left/right-turning walking under different initial roll angle conditions. The results of the user study showed that our method could generate a turning walking sensation at about 44 to 52% of that of real walking, and a straight walking sensation at 57%.

HoloCity – exploring the use of augmented reality cityscapes for collaborative understanding of high-volume urban sensor data

This research presents an application for visualizing real-world cityscapes and massive transport network performance data sets in Augmented Reality (AR) using the Microsoft HoloLens or any equivalent hardware. This runs in tandem with numerous emerging applications in the growing worldwide Smart Cities movement and industry. Specifically, this application seeks to address visualization of both real-time and aggregated city data feeds, such as weather, traffic, and social media feeds. The software is developed in extensible ways and is able to overlay various historic and live data sets coming from multiple sources.

Advances in computer graphics, data processing and visualization now allow us to tie these visual tools in with much more detailed, longitudinal, massive performance data sets to support comprehensive and useful forms of visual analytics for city planners, decision makers and citizens. Further, it allows us to show these in new interfaces such as the HoloLens and other head-mounted displays to enable collaboration and more natural mappings with the real world.

Using this toolkit, the visualization technology allows a novel approach to exploring hundreds of millions of data points in order to find insights, trends, and patterns over significant periods of time and geographic space. Our development focuses on open data sets, which maximizes applicability to assessing the performance of city networks worldwide. The city of Sydney, Australia is used as our initial application, showcasing a real-world example that enables analysis of the transport network's performance over the past twelve months.

Immersive Analytics using Augmented Reality for Computational Fluid Dynamics Simulations

This project aims to apply multi-sensory visual analytics to a Computational Fluid Dynamics (CFD) dataset using augmented and mixed reality interfaces. An initial application was developed for the HoloLens that allows users to interact with the CFD data using gestures, enabling control over the position, rotation, and scale of the data and sampling, as well as voice commands that provide a range of functionalities such as changing a parameter or rendering a different view. This project leads to a more engaging and immersive experience of data analysis, generalised for CFD simulations. The application also makes it possible to explore CFD datasets in fully collaborative ways, allowing engineers, scientists, and end users to understand the underlying physics and behaviour of fluid flows together.

Integrating Mixed Reality and Internet of Things as an Assistive Technology for Elderly People Living in a Smart Home

In the last few decades, there has been a significant increase in demand for Wearable Assistive Technologies (WATs) that help to overcome functional limitations of individuals. Although advances in Computer Graphics (CG), Computer Vision (CV), and Artificial Intelligence (AI) have the potential to address a wide range of human needs, fully integrated systems that consider age-related changes in elderly people are still uncommon. In this work, we present a WAT that follows interaction design guidelines to ensure reliability, usability, and suitability for everyday use. The WAT enables elderly people to improve interactions with Mixed Reality (MR) and Internet of Things (IoT) technologies. It aids and assists elderly people in daily activities such as analysing the environment, recognising and searching for objects, wayfinding, and navigation. We believe that this technology can help blind, low-vision, or hearing-impaired independent elderly people to improve their quality of life while maintaining their independence.

LUI: A multimodal, intelligent interface for large displays

On large screen displays, using conventional keyboard and mouse input is difficult because small mouse movements often do not scale well with the size of the display and individual elements on screen. We propose LUI, or Large User Interface, which increases the range of dynamic surface area of interactions possible on such a display. Our model leverages real-time continuous feedback of free-handed gestures and voice to control extensible applications such as photos, videos, and 3D models. Utilizing a single stereo-camera and voice assistant, LUI does not require exhaustive calibration or a multitude of sensors to operate, and it can be easily installed and deployed on any large screen surfaces. In a user study, participants found LUI efficient and easily learnable with minimal instruction, and preferred it to more conventional interfaces. This multimodal interface can also be deployed in augmented or virtual reality spaces and autonomous vehicle displays.

Master of Disaster

To be prepared for flooding events, disaster response personnel have to be trained to execute developed action plans. We present a collaborative operator-trainee setup for a flood response training system that connects an interactive flood simulation with a VR client, allowing the trainee to steer the remote simulation from within the virtual environment, deploy protection measures, and evaluate the results of different simulation runs. An operator supervises and assists the trainee from a linked desktop application.

Mixing realities for sketch retrieval in Virtual Reality

Users within a virtual environment often need support in designing the environment around them, which includes finding relevant content while remaining immersed. We focus on the familiar sketch-based interaction to support the process of content placement and specifically investigate how interactions from a tablet or desktop translate into the virtual environment. To understand sketching interaction within a virtual environment, we compare different methods of sketch interaction, i.e., 3D mid-air sketching, 2D sketching on a virtual tablet, 2D sketching on a fixed virtual whiteboard, and 2D sketching on a real tablet. The user remains immersed within the environment, queries a database containing detailed 3D models, and places them into the virtual environment. Our results show that 3D mid-air sketching is considered a more intuitive method to search a collection of models, while the addition of physical devices creates confusion due to the complications of their inclusion within a virtual environment. While we pose our work as a retrieval problem for 3D models of chairs, our results are extendable to other sketching tasks for virtual environments.

Negative Space: Workspace Awareness in 3D Face-to-Face Remote Collaboration

Face-to-face telepresence promotes the sense of ”being there” and can improve collaboration by allowing immediate understanding of remote people’s nonverbal cues. Several approaches have successfully explored interactions with 2D content using a see-through whiteboard metaphor. However, with 3D content, there is a decrease in awareness due to ambiguities arising from participants’ opposing points of view. In this paper, we investigate how people and content should be presented for discussing 3D renderings within face-to-face collaborative sessions. To this end, we performed a user evaluation to compare four different conditions, in which we varied the reflections of both the workspace and the remote people’s representation. Results suggest that remote collaboration benefits more from workspace consistency than from the fidelity of people’s representation. We contribute a novel design space, the Negative Space, for remote face-to-face collaboration focusing on 3D content.

Optimizing Augmented Reality Outcomes in a Gamified Place Experience Application through Design Science Research

Nearly ubiquitous smartphone use invites research and development of augmented reality experiences that promote knowledge and understanding. However, there is a lack of design science research dissemination about developing these solutions. This paper adds to the information systems body of knowledge by presenting the second iteration of a Design Science Research Methodology artefact and the process of its development, in the form of a gamified place experience application about indigenous art, focusing on the optimization of AR integration and user interface enhancements. In testing the usability, we illustrate how the application was optimized for successful outcomes. The qualitative analysis revealed a high level of usability of the mobile application, leading to further testing of its efficacy in creating a Sense of Place where the art is curated and displayed.

Painting with Movement

The transdisciplinary exchange between art and technology has grown over the last decade. The application of augmented reality and virtual reality to other areas has opened doors for hybrid projects and, consequently, new experimental ideas. Taking this as motivation, we propose a new application concept that allows someone to walk through a three-dimensional space and see their body movement within it. Currently, a user can choose which visual effect is used to draw the resulting movement (e.g. continuous/dashed line). The model has been extended so that the visual effect and shape are automatically generated according to movement type, speed, amplitude, and intention. Our technological process includes real-time human body detection, real-time movement visualization, and movement tracking history. This project has a core focus on dance and performance, though we consider the framework suitable for anyone interested in body movement and artwork. In this sense, the proposed application tracks body movement inside a three-dimensional physical space using only a smartphone camera. Our main objective is to record the sequence of movements of a dance, or of someone moving in space, to further analyze their movements and the way they moved in space. Through this idea, we have created an application that records the movement of a user and represents that record in a visual composition of simple elements. The possibility for the user to see the visual tracking of the choreography or performance allows a clear observation of the space traveled by the dancer, the range of motion, and the accuracy of the symmetry that the body should or should not have in each step. In this article, the main concepts of the project are presented, as well as multiple applications to real-life scenarios.

Safe Walking in VR

Common natural walking techniques for navigating virtual environments have constraints that make them difficult to use in cramped home environments. Natural walking requires unobstructed, open space so that users can roam around without fear of stumbling over obstacles while immersed in a virtual world. In this work, we propose a new virtual locomotion technique, CWIP-AVR, that allows people to take advantage of the available physical space and empowers them to use natural walking to navigate the virtual world. To inform users about real-world hazards, our approach uses augmented virtual reality visual indicators. A user evaluation suggests that CWIP-AVR allows people to navigate safely while switching flexibly between locomotion modes and maintaining an adequate degree of immersion.
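To make the idea of hazard indicators concrete, here is a minimal, hypothetical sketch of a proximity check that escalates a visual warning as the tracked user approaches known obstacle positions. The thresholds, obstacle list and indicator names are illustrative assumptions, not CWIP-AVR’s actual design.

```python
import numpy as np

# Illustrative obstacle positions in room-scale coordinates (metres).
OBSTACLES = np.array([[1.2, 0.0, 0.5],
                      [-0.8, 0.0, 1.9]])

def hazard_level(head_pos, warn_at=1.0, stop_at=0.4):
    """Return which visual indicator to show for the current head position."""
    dists = np.linalg.norm(OBSTACLES - head_pos, axis=1)
    nearest = dists.min()
    if nearest < stop_at:
        return "show full-screen barrier indicator"
    if nearest < warn_at:
        return "show translucent obstacle outline"
    return "no indicator"

print(hazard_level(np.array([1.0, 0.0, 0.3])))
```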

SnapChart: an Augmented Reality Analytics Toolkit to Enhance Interactivity in a Collaborative Environment

Collaborative Immersive Analytics (IA) enables multiple people to explore the same dataset using immersive technologies such as Augmented Reality (AR) or Virtual Reality (VR). In this poster, we describe a system that uses AR to provide situated 3D visualisations in a practical, agile collaborative setting. Through a preliminary user study, we found that our system helps users accept the concept of IA while enhancing engagement and interactivity during AR collaboration.

Superimposition of a 3D Scan with a Part for Metrology Applications on a Hololens-Based Platform

This paper presents an approach for visualizing the 3D scan of a part superimposed on the part itself using the HoloLens platform. Computer vision is used to align the scan with the part.
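The abstract does not detail the alignment step, but a common way to register a scan to a reference model is iterative closest point (ICP). The following hedged sketch uses Open3D, with the file names, distance threshold and identity initialization as assumptions rather than the authors’ actual method.

```python
import numpy as np
import open3d as o3d

# Load the 3D scan (source) and a point cloud of the reference part (target).
# File names are placeholders.
scan = o3d.io.read_point_cloud("scan.ply")
part = o3d.io.read_point_cloud("part_model.ply")

# Coarse initial guess (identity here; in practice it could come from markers
# or a manual placement step on the HoloLens).
init = np.eye(4)

# Refine the alignment with point-to-point ICP.
result = o3d.pipelines.registration.registration_icp(
    scan, part, 0.01, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", result.fitness)
print("transform to apply to the scan before rendering:")
print(result.transformation)
```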

The next step of digital laboratories: Connecting real and virtual world

Lab-based learning, especially the opportunity for students to gain practical experience, is a requirement for natural-science and engineering education. We want to overcome the logistical problems of real laboratories by creating a precise digital twin of an RFID measuring chamber. Students will be able to perform different kinds of experiments directly in virtual reality. To diminish the disadvantage of a purely virtual experience, a process log of the experiment is created and later carried out asynchronously by a robotic arm. The students receive the measurement data and a webcam recording of the real-world experiment as feedback. We hope to combine the benefits of virtualization and remote-controlled labs to overcome most of the typical problems inherent to lab-based learning. The effect on learning outcomes will be evaluated over a period of two years.
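As a hypothetical illustration of such a process log, the sketch below records VR experiment steps in a simple JSON format and stubs out the asynchronous replay that a robot-side worker could perform. The schema, field names and actions are invented for illustration and are not the project’s actual format.

```python
import json
import time

# Hypothetical process log recorded in VR: each step names an action the
# robotic arm should later reproduce in the real measuring chamber.
process_log = [
    {"step": 1, "action": "place_tag", "tag_id": "RFID-042", "position_mm": [120, 80, 40]},
    {"step": 2, "action": "set_reader_power_dbm", "value": 27},
    {"step": 3, "action": "run_measurement", "duration_s": 10},
]

with open("experiment_log.json", "w") as f:
    json.dump(process_log, f, indent=2)

# Asynchronous replay stub: a lab-side worker would read the log, drive the
# robotic arm and reader, then return measurement data plus a webcam recording.
def replay(log_path):
    with open(log_path) as f:
        for step in json.load(f):
            print("executing", step)  # replace with real robot/reader commands
            time.sleep(0.1)

replay("experiment_log.json")
```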

Virtual Avatar Automatically Enhances Human Perspective Taking

Reasoning about other people’s minds is important for successful social communication. Visual perspective taking is a basic ability for inferring another’s mind: the process of judging what a certain object looks like, and how it appears, from another person’s viewpoint. Recent research has shown that understanding how a target appears to another person occurs spontaneously; however, the details of this process are not yet clear. We aimed to investigate whether visual perspective taking occurs in a virtual environment and how fast the process is. In an experiment, participants judged the direction of a visual stimulus from the viewpoint of a humanoid avatar and responded using a joystick. To probe the time course of perspective taking, we manipulated the interval between the appearance of the humanoid avatar and the presentation of the target visual stimulus. We found that the presence of the humanoid avatar improves performance in the visual perspective-taking task at short intervals. This result suggests that visual perspective taking is enhanced by a virtual avatar on a very fast time scale.

Virtual Avatars as a tool for audience engagement

Modern motion capture tools can be used to animate sophisticated digital characters in real time. Through these virtual avatars, human performers can communicate with a live audience, creating a promising new area of application for public engagement. This study describes a social experiment in which a real-time multimedia setup was used to facilitate an interaction between a digital character and visitors at a public venue. The technical implementation featured some innovative elements, such as using the iPhone TrueDepth camera as part of the performance capture pipeline. The study examined public reactions during the experiment in order to explore the empathic potential of virtual avatars and assess their ability to engage a live audience.

SECTION: Demo Abstracts

Embodied Weather: Promoting Public Understanding of Extreme Weather Through Immersive Multi-Sensory Virtual Reality

Extended Reality for Midwifery Learning: MR VR Demonstration

This demonstration presents the development of a Mixed Reality (MR) and Virtual Reality (VR) research project for midwifery student learning, and a novel approach to showing extended reality content in an educational setting. The Road to Birth (RTB) visualises the changes that occur in the female body during pregnancy and in the five days immediately after birth (postpartum) in a detailed 3D setting. In the Base Anatomy studio, users can observe the base anatomical layers of an adult female. In the Pregnancy Timeline, they can scroll through the weeks of gestation to see the development of the baby and the anatomical changes of the mother throughout pregnancy and postpartum. Finally, users can learn about the different possible birthing positions in Birth Considerations. During the demo, users can experience the system in either MR or VR.

From Lab to Field: Demonstrating Mixed Reality Prototypes for Augmented Sports Experiences

Traditional sports event data have no direct spatial relationship to what spectators see when attending a live sports event. The idea of our work is to address this gap and ultimately give spectators insights into the game by embedding sports statistics into their field of view using mobile Augmented Reality.

Research on live sports events comes with several challenges, such as tracking and visualisation, as well as only limited opportunities to test and study new features during live games on-site. In this work, we developed a set of prototypes that allow dedicated features of an AR sports spectator experience to be researched off-site in the lab before testing them live on the field.
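The core technical step implied here is mapping tracked field positions into the spectator’s camera view so that statistics can be drawn in place. The following minimal pinhole-projection sketch illustrates that mapping; the intrinsics, camera pose and player data are placeholder assumptions, since in practice they would come from the AR framework and a live tracking feed.

```python
import numpy as np

# Example camera intrinsics and pose; a mobile AR framework would supply these.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # rotation from world (field) to camera space
t = np.array([0.0, -1.5, 20.0])      # camera position relative to the field origin

def project(world_point):
    """Project a 3D field coordinate (metres) to a pixel in the spectator's view."""
    cam = R @ world_point + t
    u, v, w = K @ cam
    return u / w, v / w

player_pos = np.array([3.0, 0.0, 35.0])   # hypothetical position from a tracking feed
x, y = project(player_pos)
print(f"draw 'Player 10: 7.2 km run' at pixel ({x:.0f}, {y:.0f})")
```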

Möbiusschleife: Beyond the Bounds of a Closed-Loop VR System

We propose a VR system, “Möbiusschleife”, that lets the VR and real worlds interact directly and enlarges the boundary of the VR player’s field. Conventional VR systems generally adopt a head-mounted display (HMD) to present experiences altered from reality, but external audiences cannot share them. In this paper, we present a virtual “window” that provides bidirectional face-to-face interaction between the VR and real worlds using a touch display and a webcam. We also prepare a simple stereoscopic display that lets the VR player and objects appear as if part of the VR world leaps into the real world. Lastly, we integrate all of the above as a simple chat application with a virtual character.

Molecular Genomics Education Through Gamified Cell Exploration in Virtual Reality

Cell Explorer is an interactive Virtual Reality (VR) journey into the nucleus of a human cell, created to engage and educate visitors about the important genomic research being carried out at the Garvan Institute of Medical Research. Its innovative approach is characterized by the fusion of multiple resolutions of scientific data with interaction and game-design techniques and 3D computer animation. Cell Explorer was created through an iterative design process, during which various tensions related to technical constraints, scientific integrity, aesthetics and user experience needed to be negotiated. The prototype is an educational tool that offers users a unique experience to facilitate a more meaningful engagement with scientific data and to learn about concepts related to human genomics.

Multi–Modal High–End Visualization System

This paper describes a production-grade software toolkit for shared multi-modal visualization systems developed by the Expanded Perception and Interaction Centre. Our High-End Visualization System (HEVS) can be used as a framework to enable content to run transparently on a wide range of platforms with fewer compatibility issues and dependencies on commercial software. Content can be transferred more easily from large screens (including cluster-driven systems) such as CAVE-like platforms, hemispherical domes, and projected cylindrical displays through to multi-wall displays and VR or AR HMDs. This common framework provides a unifying approach to visual analytics and visualization. In addition to supporting multi-modal displays, multiple platforms can be connected to create multi-user collaborative experiences across remotely located labs. We aim to demonstrate multiple projects developed with HEVS that have been deployed to various multi-modal display devices.

Multi-User Immersive Virtual Reality Prototype for Collaborative Visualization of Microscopy Image Data

This paper examines the creation of the Multi-User Cell Arena (MUCA), a novel 3D data visualization work emphasizing virtual co-location via customizable VR avatars. The work explores the utility of building collaborative virtual reality spaces using consumer VR hardware and the importance of virtual user embodiment. The workflow from raw microbiological scan data to VR representation is documented, noting the unique design challenges of multi-user spaces. Related work in the field is compared, and the limitations of the mesh-cache shell workflow are discussed along with possible future improvements to the multi-user avatar system.
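One plausible step in a workflow from raw scan data to a VR-ready representation is isosurface extraction. The sketch below uses scikit-image’s marching cubes and writes a minimal OBJ mesh; it is a generic illustration under assumed file names and thresholds, not the MUCA mesh-cache shell workflow itself.

```python
import numpy as np
from skimage import measure

# Load a 3D microscopy intensity volume (z, y, x); file name is a placeholder.
volume = np.load("cell_stack.npy")

# Extract an isosurface mesh at an illustrative threshold.
verts, faces, normals, _ = measure.marching_cubes(volume, level=volume.mean())

# Write a minimal Wavefront OBJ that a game engine can import for VR display.
with open("cell_surface.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for face in faces + 1:            # OBJ face indices are 1-based
        f.write(f"f {face[0]} {face[1]} {face[2]}\n")
```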

Relive History: VR time travel at the world heritage site

The Relive History VR project provides users with a highly detailed 3D scan of the World Heritage site of the Ayutthaya historical park in Thailand. It brings together large-scale 3D scanning technology, VR, and an educational, virtual-tourism-based experience to help users explore heritage sites in a new and innovative way. The main contribution of this project is the ability of a user to go back in time to see and experience a particular site and the culture associated with it. The reconstruction of the original structures was modelled based on studies by historical researchers, and the reconstructed model can be shown transparently overlaid on top of the scan of the site’s current state to better compare the architectural details. Users can interact with the ancient people to complete given missions and learn about the old culture along the way. We expect that our audiences will enjoy the experience, learn the value of the World Heritage site, and help to preserve it.

Sodeisha Sculptural Ceramics: Digitalization and VR Interaction

This demonstration presents the development of a virtual reality (VR) research project for the VR interaction and digitization of “Sodeisha Sculptural Ceramics”, a transmedia approach that showcases photogrammetry-scanned Japanese ceramic artworks in an educational and public VR exhibition setting. The early prototype involved the photogrammetry scanning of 10 sculptural ceramic works of art created by the innovative post-war Japanese artist group known as ‘Sodeisha’. Newcastle Art Gallery holds one of the largest collections of Sodeisha ceramics outside Japan and recently featured the collection in a large-scale exhibition titled SODEISHA: connected to Australia, from March to May 2019. The audience used controllers to interact with objects in the virtual environment, with the option of seeing a pair of VR hands or full VR arms.

This Land AR: an Australian Music and Sound XR Installation

This demonstration presents the development of an Augmented Reality (AR) Indigenous music and sound installation, an extended reality (XR) interactive audio experience for augmenting sound elements in a public exhibition setting. It is a transmedia initiative forming part of a music project, This Land, which connected contributors and musicians and involved traditional to contemporary vocal and instrumental sounds. The This Land project embraces cultural and social perspectives and related contemporary discourses within the Australian context. As augmented reality is being explored as an ongoing study for the project, a number of conventional printed wall designs (posters and photographic exhibits) were enhanced with augmented music and sound elements. The This Land project commenced as an artistic performative event built on many years of collaboration between staff and students from the School of Creative Industries and the Wollotuka Institute at the University of Newcastle (UON). Its vision embraces issues of Indigenisation, decolonisation, reciprocity and language revitalisation. A portable version of This Land AR will be used for the demonstration, allowing users to experience features of the prototype system in a public setting.

Till We Meet Again: A Cinévoqué Experience

Virtual Reality movies, with their possibility of immersing viewers in 360° spaces, have both inherent challenges and advantages associated with creating and experiencing them. While the grammar of storytelling in traditional media is well established, filmmakers cannot use it effectively in VR because of its immersive nature: viewers may end up looking elsewhere and miss important parts of the story. Taking this into account, our framework Cinévoqué leverages the unique features of this immersive medium to create seamless movie experiences in which the narrative alters itself in response to the viewer’s passive interactions, without making them aware of the changes. In our demo, we present Till We Meet Again, a VR film that uses our framework to provide different storylines that evolve seamlessly for each viewer.
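As a hedged sketch of how passive interaction can drive branching without explicit input, the snippet below picks the next storyline from the viewer’s average gaze yaw near a decision point. The thresholds, branch names and decision rule are illustrative assumptions, not Cinévoqué’s actual mechanism.

```python
# Minimal sketch of passive branching: at a decision point, inspect where the
# viewer has been looking and pick the storyline that keeps the transition
# seamless. All values are illustrative placeholders.
def choose_branch(gaze_yaw_samples, decision_regions, default="default"):
    """gaze_yaw_samples: head-yaw angles (degrees) recorded near the decision point.
    decision_regions: {branch_name: (min_yaw, max_yaw)}."""
    mean_yaw = sum(gaze_yaw_samples) / len(gaze_yaw_samples)
    for branch, (lo, hi) in decision_regions.items():
        if lo <= mean_yaw <= hi:
            return branch
    return default

branches = {"follow_character_A": (-90, 0), "follow_character_B": (0, 90)}
print(choose_branch([12.5, 20.0, 18.3], branches))  # -> follow_character_B
```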

Water Bodies: VR Interactive Narrative and Gameplay for Social Impact

In partnership with UN Environment, we produced an interactive and immersive virtual reality (VR) experience that takes participants on a journey through the human stomach to raise awareness of the microplastics we unknowingly consume in our daily lives. The creative practice and research address UN Sustainable Development Goal #12: Responsible Consumption and Production. Through novel, exciting and engaging interactive narrative and gameplay mechanics, Water Bodies delivers a serious underlying message that encourages participants to question their use of plastic and to drive towards a positive environmental impact.