A user’s movement is one of the most important properties affecting user experience in a virtual reality (VR) environment. However, little research has examined backward movements, and inadequate support of such movements can lead to dizziness and disengagement in a VR program. In this paper, we investigate the possibility of detecting forward and backward movements from three different body positions (i.e., head, waist, and feet) by conducting a user study. Our machine-learning model detects forward and backward movements with up to 93% accuracy, with slightly varying performance across participants. We detail the analysis of our model through the lenses of body position, integration, and sampling rate.
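As a minimal sketch of how such a detector might be structured, the snippet below classifies windowed accelerometer data from the three body positions with a generic scikit-learn classifier. The window length, summary features, and choice of model are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: classify forward vs. backward movement from windowed
# accelerometer streams captured at the head, waist, and feet.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, window=50):
    """Split an (N, 3) accelerometer stream into fixed-size windows and
    summarize each window with simple mean/std statistics."""
    n = len(signal) // window
    feats = []
    for i in range(n):
        w = signal[i * window:(i + 1) * window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

def build_dataset(head_acc, waist_acc, feet_acc):
    """Concatenate per-position window features into one feature matrix."""
    return np.hstack([window_features(s) for s in (head_acc, waist_acc, feet_acc)])

# X_train = build_dataset(head_acc, waist_acc, feet_acc)   # labels: 0 = backward, 1 = forward
# clf = RandomForestClassifier().fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```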
The Sense of Embodiment in Virtual Reality is one of the key components in providing users with a convincing experience. Our contribution to a better understanding of this phenomenon focuses on the analysis of users’ motor reaction to an alien finger movement. We assess quantitatively that the view of an alien movement (i.e., a movement of the self-avatar caused by an alien will) induces a finger posture variation, which we refer to as the Induced Finger Movements Effect. This only happens in the case of embodiment, while in a disembodied setup the effect disappears. The principle of this investigation is being tested as a basis for neuro-rehabilitation, built on the concept of inducing movements in post-stroke hemiplegic patients.
We developed an annotation tool—Label360 to solve the distortion and instance matching issues across different viewing aspects in spherical image annotations. A post-processing algorithm was introduced to generate distortion-free annotations on equirectangular images. Two experiments were conducted to examine the consistency of annotations using Label360 and to compare labeling efficiency with LabelMe. Our tool obtained a mean intersection over union (mIoU) of 0.92 in the consistency test and has 1.45x the annotation speed of LabelMe. This demonstrates that Label360 is efficient for annotating instance-aware semantic segmentation labels on spherical images.
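For reference, the mIoU used in such a consistency test can be computed per matched annotation mask and averaged; a minimal sketch assuming binary masks as NumPy arrays:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over union of two boolean masks of equal shape."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

def mean_iou(masks_a, masks_b):
    """Mean IoU over matched pairs of annotation masks."""
    return float(np.mean([iou(a, b) for a, b in zip(masks_a, masks_b)]))
```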
We developed a mid-air touch Mixed Reality application that combines hand-tracking sensing and haptic feedback on a desktop display. We evaluated three hand interaction techniques, 1) Touch, 2) Mid-Air Gesture Touch, and 3) Mid-Air Haptic Touch, through preliminary user testing with ten adults. Results suggest that users’ willingness to use self-service devices in public places increases in the post-COVID-19 world, while their concerns about coming into contact with the virus decrease. However, before large-scale deployment of this technology, accuracy and user experience design need to be improved.
It is said that a four-leaf clover brings good luck, but finding one is challenging because of its rarity and the difficulty of distinguishing it from overlapping common clovers with the naked eye. We present a novel system that helps people find four-leaf clovers using augmented reality (AR) and artificial intelligence (AI) technologies. The system detects a four-leaf clover in a picture taken by the head-mounted camera and points out where it is in the real world by placing a virtual sign. The goal of this study is to explore natural information representation that does not counteract the user’s search, and how much technological intervention affects the user’s perceived luckiness.
We present a hardware and software framework in which the direction of a sound source is used to interact with the simulation of a snake robot. We present a gamification idea (similar to hide and seek) of how the direction of sound can be used to develop interactive simulations in robotics, in particular exploiting the bio-inspired idea of a snake's reactive locomotion to sound. We use multiple microphones to calculate the direction of sound coming from the source in near real time and make the simulation respond to it. Since a biological snake moves away from a sound source when it senses vibrations, we bio-mimic this behavior in a simulated snake robot. This idea can be used for developing games that react to multiple people interacting with a computer based on sound direction input. To our knowledge, the interface presented in this paper is the first of its kind.
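One common way to estimate the direction of a sound source from a pair of microphones is the time difference of arrival (TDOA); the sketch below assumes two microphones a known distance apart and a far-field source, and is an illustration of the general technique rather than the paper's actual array geometry or method.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def estimate_direction(sig_left, sig_right, fs, mic_distance):
    """Estimate the azimuth (degrees) of a sound source from two microphone
    signals using the peak of their cross-correlation (TDOA)."""
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)       # delay in samples
    delay = lag / fs                                    # delay in seconds
    # Far-field approximation: sin(theta) = delay * c / d
    sin_theta = np.clip(delay * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# angle = estimate_direction(left_channel, right_channel, fs=48000, mic_distance=0.1)
# The simulated snake robot can then be steered away from the estimated angle.
```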
Wearable haptics compatible with Virtual Reality applications for remote interaction still rely heavily on cumbersome wired systems that restrict user movement. We present an innovative, component-based, wireless embedded system on a glove that provides hand motion capture and tactile feedback, operating without the need for a finger-tracking device such as the Leap Motion. The user’s hand is tracked by an infrared pass-filter camera that detects three infrared LEDs attached to a 3D-printed base on the glove. The system is built from inexpensive parts, making it ideal for prototyping and customization and resulting in a scalable and upgradable system.
In recent virtual reality systems, users can experience various types of content beyond audiovisual viewing through precise interaction with the virtual world, using HMDs that project the user's real-world actions into the virtual world. Recently, virtual reality technology has also been actively applied in the field of the arts. This paper proposes a novel immersive virtual 3D body painting system that provides the various drawing tools and paint effects used in conventional body painting, supporting the production of high-quality works through the stages of concept design and pre-production. We analyzed the drawing effects of the airbrush and painting brush in collaboration with body painting experts and provide high-quality drawing effects through GPU-based real-time rendering. Our system also provides users with the management functions, such as save/load and undo, that they need to create works in virtual reality.
In the growth process of children, playing with dolls contributes to the formation of the self, creativity, and social development. In ordinary dollhouse play, children move the dolls and imagine the world of the dollhouse. In this study, we installed distance and pressure sensors in a dollhouse and projected interactive images, driven by the measured values, onto it using projection mapping. This makes it possible to expand doll play by generating images corresponding to the movement of the dolls and hands in real time and reflecting them in the dollhouse.
Social interaction is important when an audience participates in a live performance. With strong immersion and a high degree of freedom (DoF), virtual reality (VR) is often used to present performance events such as virtual concerts. A live performance is also a space with the potential for shared social realities, which can hardly be emulated by remote VR. This paper proposes new approaches to designing social interactions that arouse social awareness among the audience in a VR performance. These interactions can be implemented in three different modes, covering both individual expression and group effects. The project brings new possibilities for future VR performance experiences.
We performed a particle-based simulation of the heart using aorta and left ventricle models generated from real CT data. In the simulation, blood flow from the left ventricle to the aorta is visualized, and the pressure changes in the aorta and the left ventricle are compared to real data. The left ventricular pressure increases, and when the aortic valve opens, blood flows from the left ventricle to the aorta. In the isovolumetric contraction period, the aortic valve interlocks with the mitral valve. We tried two types of forces for the isovolumetric contraction and performed the blood flow simulation from the left ventricle to the aorta with the valves interlocking and the rigidity of the aortic valve reduced. As a result of the simulation, the pressure changes in the aorta and the left ventricle closely match the real data.
In this work, we present the Digital Twin of the Australian Square Kilometre Array Pathfinder (ASKAP), an extended reality framework for telescope monitoring. Currently, most immersive visualisation tools developed in astronomy focus primarily on educational aspects of astronomical data or concepts. We extend this paradigm, allowing complex operational network controls with the aim of combining telescope monitoring, processing, and observational data in the same framework.
We study the influence of AR instruction on intention sharing between the task designer and the task executor in AR manual assembly. We propose a visual representation (SHARIdeas) to support intention sharing and evaluate the effectiveness of the new instruction.
Control algorithms developed for autonomous and electric vehicles undergo limited trials because of the high cost of using actual vehicles. This study constructs a low-cost virtual environment for developing such control algorithms using open data and a game engine. Specifically, a large-scale three-dimensional urban road model with elevation differences is generated. This model is connected with hardware-in-the-loop simulations (HILS) as a vehicle running model to realize a practical system. This study reveals the need to automate the model development process in light of its high cost.
We present an MR system for experiencing human body scaling in the virtual world with enhanced tactile stimuli. The device a user grabs has an outer surface based on a creased pattern that scales in all directions; hence, it can be scaled by controlling the air inside it using an air pump with control circuits. We demonstrate MR content in which a user experiences scaling in a virtual space with simultaneous tactile stimuli from the scaling device.
This short paper presents a method called Variable Rate Ray Tracing to reduce the performance cost of hardware-accelerated ray tracing for Virtual Reality with minimal quality loss. The method is applied at the ray generation stage of the ray tracing pipeline to vary the ray tracing rate based on scene-specific information. It uses three different control policies to effectively reduce the number of rays generated per second for various needs. In our benchmarks, the method improves frames per second (FPS) by more than 30% on current mainstream graphics hardware and virtual reality devices.
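To illustrate the general idea of varying the ray rate at the generation stage (not the paper's actual control policies), the sketch below assigns fewer rays per pixel to screen tiles far from a gaze point, a foveation-style policy; all parameters and the policy itself are assumptions for illustration.

```python
import numpy as np

def ray_rate_map(width, height, gaze_xy, full_rate=4, tile=8):
    """Assign a rays-per-pixel rate to each screen tile, dropping the rate
    with distance from the gaze point. Illustrative policy only."""
    ty, tx = height // tile, width // tile
    ys, xs = np.mgrid[0:ty, 0:tx]
    centers_x = (xs + 0.5) * tile
    centers_y = (ys + 0.5) * tile
    dist = np.hypot(centers_x - gaze_xy[0], centers_y - gaze_xy[1])
    dist /= dist.max()
    # Full rate near the gaze, one ray per pixel in the periphery.
    rates = np.clip(np.round(full_rate * (1.0 - dist)), 1, full_rate)
    return rates.astype(int)

# rates = ray_rate_map(2160, 2160, gaze_xy=(1080, 1080))
```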
Papertracker is an interactive educational platform that engages audiences of all ages with music and technology. The focus is on providing fun and inexpensive challenges that promote creative problem solving, collaborative work, and programming using a system of placeable tiles.
The Virtual Reality (VR) headset has become promising equipment for immersive storytelling. However, we know little about users while they are experiencing VR content. Sometimes users miss the narration because they are looking around, which makes designing a compelling VR story a challenge. With the advancement of electroencephalography (EEG) in VR, a story’s rhythm or structure could dynamically change based on the audience’s brain waves to create a personal dramatic moment. In this paper, we conduct a preliminary study to investigate the potential use of a consumer-level brainwave headset and explore virtual character interaction to enhance immersive storytelling.
In a push away from the functional and utilitarian intentions behind the ‘smart city’, creative movements such as ‘playable cities’ have emerged, often using contemporary ubiquitous locative mobile technologies as a means of more meaningfully connecting people and public places. Drawing on recent studies that explore three key concepts, ‘game cartography’, ‘emotional cartography’, and ‘performative cartography’, this paper describes three projects by the author that advocate for the capacity of cartographic interfaces in location-based mobile media experiences to share human-based, ‘experiential data’ relating to physical spaces through playful, insightful, and creative interventions with public spaces. We present the term ‘playable cartography’ as a means of describing emerging creative practices around the use of interactive cartographic interfaces in contemporary locative mobile media-based experiences.
Recently, methods for generating 3D models from images have been proposed. These methods can generate a realistic 3D human model from an image by focusing on the body shape. However, it may be difficult for them to generate a user-desired 3D character model from a character illustration. In character illustrations, each character has unique features, especially in the lengths of its body parts. To reflect these unique features in 3D models, we propose in this paper a novel interactive 3D model generation method from character illustrations. Our method modifies 3D models interactively to match the user’s intentions, based on constraints of input poses and symmetrical bone lengths. Experimental results show the effectiveness of our method.
We present a novel approach to reconstruct a high-fidelity geometric human face model from a single RGB image. The main idea is to add details to a coarse 3D Morphable Model (3DMM)-based model in a self-supervised way. Our observation is that most facial details, such as wrinkles, are driven by expression and by intrinsic facial characteristics, which we refer to here as the facial attribute. To this end, we propose an expression-related detail recovery scheme and a facial attribute representation.
Leaps in technology are increasingly making the prospect of using biological structures as part of digital models and artwork a tangible reality. In this work, a new method for 3D modelling and animation, heavily inspired by natural biological processes, is proposed. The proposed approach differs from classic assembly or printing methodologies and offers a novel growth-based solution for the design and modelling of 3D structures. To facilitate the needs of growth-based modelling, new terms and graphic primitives such as stem-voxels, muscle-voxels, bone-voxels, and digital-DNA are introduced. The core production rules of a novel context-free grammar were implemented, allowing 3D model designers to build the digital-DNA of a 3D model that the introduced parser interprets into a full 3D structure. The obtained 3D models support animation using the muscle-voxels, are able to observe the environment using photoreceptor-voxels, and interact with a level of intelligence based on neural networks built with nerve-voxels. The proposed solution was evaluated with a variety of volumetric models, demonstrating strong potential and impact across many applications and offering a new tool for 3D modelling systems.
Star charts and planetariums can be used to identify objects in the sky. However, these present the sky as a surface viewed from Earth, so we cannot understand the relative distances between celestial objects in three dimensions. Therefore, in this study, we developed a system for perceiving such distances by placing the stars in a virtual space. The system allows users to move freely through a large-scale space using a game engine and the Hipparcos catalog. Users can intuitively perceive the relative distances of stars by understanding the three-dimensional configuration of the stars and constellations as seen from Earth.
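Placing catalog stars in a 3D scene amounts to converting right ascension, declination, and parallax into Cartesian coordinates; a minimal sketch, assuming Hipparcos parallaxes given in milliarcseconds and Earth at the origin (the catalog fields and units used by the system itself are not detailed in the abstract):

```python
import numpy as np

def star_position(ra_deg, dec_deg, parallax_mas):
    """Convert right ascension/declination (degrees) and parallax (mas)
    into Cartesian coordinates in parsecs, with Earth at the origin."""
    distance_pc = 1000.0 / parallax_mas               # parallax in mas -> distance in parsecs
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    x = distance_pc * np.cos(dec) * np.cos(ra)
    y = distance_pc * np.cos(dec) * np.sin(ra)
    z = distance_pc * np.sin(dec)
    return x, y, z

# Example: Sirius (RA ~101.287 deg, Dec ~-16.716 deg, parallax ~379.2 mas, ~2.6 pc away)
# print(star_position(101.287, -16.716, 379.2))
```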
In this paper, we address the problem of 3D point cloud upsampling: given a set of points, the objective is to obtain a denser point cloud representation. We achieve this by proposing a deep learning architecture that, along with consuming point clouds directly, also accepts associated auxiliary information such as normals and colors and upsamples them accordingly. We design a novel feature loss function to train this model. We demonstrate our work on the ModelNet dataset and show consistent improvements over existing methods.
In this paper, we propose a method to refine sparse point clouds of complex structures generated by Structure from Motion in order to achieve improved visual fidelity of ancient Indian heritage sites. We compare our results with the state-of-the-art upsampling networks.
Specular highlight removal is a challenging task. We present a novel data-driven approach for automatic specular highlight removal from a single image. To this end, we build a new dataset of real-world images for specular highlight removal with corresponding ground-truth diffuse images. Based on this dataset, we also present a specular highlight removal network that uses detected specular reflection information as guidance. Experimental evaluations indicate that the proposed approach outperforms recent state-of-the-art methods.
In this paper, we propose a robust low-cost motion capture (mocap) system with sparse sensors. Although sensors combining an accelerometer, magnetometer, and gyroscope are cost-effective and provide measured positions and rotations, they potentially suffer from noise, drift, and data-loss issues over time. The resulting character obtained from a sensor-based low-cost mocap system is thus generally not satisfactory. We address these issues with a novel deep learning framework that consists of two networks, a motion estimator and a sensor data generator. When the aforementioned issues occur, the motion estimator is fed newly synthesized sensor data, obtained from the measured and predicted data by the sensor data generator, until the issues are resolved. Otherwise, the motion estimator receives the measured sensor data to accurately and continuously reconstruct new character poses. In our examples, we show that our system outperforms the previous approach without the sensor data generator, and we believe it can be considered a handy and robust mocap system.
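A hypothetical sketch of the switching logic described above, assuming two pre-trained networks (`motion_estimator`, `sensor_generator`) and a simple validity check on each incoming sensor frame; the names and the check are illustrative, not the authors' implementation.

```python
import numpy as np

def is_valid(frame, max_norm=50.0):
    """Illustrative check: reject frames with dropped values or implausible magnitude."""
    return not np.any(np.isnan(frame)) and np.linalg.norm(frame) < max_norm

def reconstruct_pose(sensor_frame, history, motion_estimator, sensor_generator):
    """Feed measured sensor data when it is reliable; otherwise substitute
    synthesized sensor data predicted from recent history."""
    if is_valid(sensor_frame):
        frame = sensor_frame                         # measured data is usable
    else:
        frame = sensor_generator.predict(history)    # synthesized replacement
    history.append(frame)
    return motion_estimator.predict(history)         # new character pose
```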
Motion Matching is a computationally expensive animation selection process in which a motion capture database is regularly searched to identify the best frame of animation to play. This study presents a process for computing these calculations in parallel using multiple GPU threads. The described work is shown to greatly reduce the computational time of CPU-based Motion Matching within the Unreal Engine.
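At its core, the Motion Matching search is a nearest-neighbor query over per-frame pose and trajectory features; the sketch below shows that computation in vectorized NumPy (rather than GPU threads) purely to illustrate what the parallelization targets, with the feature layout and weights as assumptions.

```python
import numpy as np

def best_frame(query, database, weights):
    """Return the index of the database frame whose feature vector is closest
    to the query under a weighted squared distance.
    database: (num_frames, num_features); query, weights: (num_features,)."""
    diff = database - query                       # broadcast over all frames
    costs = (weights * diff * diff).sum(axis=1)   # one cost per frame
    return int(np.argmin(costs))                  # frame of animation to play

# On the GPU, each thread would evaluate the cost for one (or a block of) frames,
# followed by a parallel reduction to find the minimum-cost frame.
```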
We propose a system for free-viewpoint facial re-enactment from a casual video capture of a target subject. Our system can render and re-enact the subject consistently in all the captured views. Furthermore, our system also enables interactive free-viewpoint facial re-enactment of the target from novel views. The re-enactment of the target subject is driven by an expression sequence of a source subject, which is captured using a custom app running on an iPhone X. Our system handles large pose variations in the target subject while keeping the re-enactment consistent. We demonstrate the efficacy of our system by showing various applications.
Virtual reality (VR) would benefit from more end-to-end systems centered around a casual capturing procedure, high-quality visual results, and representations that are viewable on multiple platforms. We present an end-to-end system designed for casual creation of real-world VR content using a smartphone. We use an AR app to capture a linear light field of a real-world object by recording a video sweep around the object. We predict multiplane images for a subset of input viewpoints, from which we extract high-quality textured geometry that is used for real-time image-based rendering suitable for VR. The round-trip time of our system, from guided capture to interactive display, is typically 1–2 minutes per scene.
Image-based rendering methods that support visually pleasing specular surface reflections require accurate surface geometry and a large number of input images. Recent advances in neural scene representations show excellent visual quality while requiring only imperfect mesh proxies or no surface-based proxies at all. However, while these learned models provide state-of-the-art visual quality, their inference time is usually too slow for interactive applications. Using a casually captured circular video sweep as input, we extend Deferred Neural Rendering to extrapolate smooth viewpoints around specular objects such as a car.