We propose a method, based on an animator’s technique, for creating animations of objects fluttering in the wind, such as hair and flags. As a preliminary study, we analyzed how fluttering objects are depicted in hand-drawn animation and confirmed that there is a traditional technique commonly used by professional animators. In the case of hair, for example, the tip of the hair is often moved in the shape of a figure eight, and the rest of the hair bundle is animated as if a wave caused by this movement were propagating along the hair. Based on this observation, we developed a system to reproduce this technique digitally. In this system, the user sketches the trajectories of a few control points on a hair bone, and their motion is propagated to the whole hair bundle to represent the waving behavior. During this process, the user can interactively adjust two parameters: swing speed and wave-propagation delay. As a system evaluation, we conducted a user test in which several subjects reproduced a sample animation using our system.
In this paper, we propose a novel method for denoising human motion using a bidirectional recurrent neural network (BRNN) with an attention mechanism. Corrupted motion captured from a single 3D depth-sensor camera is automatically corrected on a well-established smooth motion manifold. Incorporating an attention mechanism into the BRNN achieves better optimization results and higher accuracy than other deep learning frameworks, because a higher weight is selectively given to the more important input poses at specific frames when encoding the input motion. The results show that our approach efficiently handles various types of motion and noise. We also experiment with different features to find the best one, and believe our method is well suited for use in motion capture applications as a post-processing step after capturing human motion.
We propose a novel method for the pose-selection process in the motion matching technique. Our method decreases the computational cost of a motion matching system at runtime by limiting the number of poses to be searched, while also reducing the frequency of unwanted pose matches compared with preceding methods. We build a table that stores poses classified by their trajectories in the motion data and use it to return a subset of the entire pose set as the runtime search space.
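A minimal sketch of such a trajectory-bucketed lookup, assuming a simple grid quantization of the trajectory features (the abstract does not specify the actual classification scheme or cell size):

```python
import numpy as np

def trajectory_key(traj, cell=0.25):
    # Quantize a flattened trajectory sample into a grid-cell tuple usable as a dict key.
    return tuple(np.floor(np.asarray(traj) / cell).astype(int))

def build_pose_table(trajectories, cell=0.25):
    # Offline: map each quantized trajectory to the indices of poses that follow it.
    table = {}
    for pose_idx, traj in enumerate(trajectories):
        table.setdefault(trajectory_key(traj, cell), []).append(pose_idx)
    return table

def candidate_poses(table, query_traj, cell=0.25):
    # Runtime: return the reduced search space for motion matching.
    return table.get(trajectory_key(query_traj, cell), [])
```

The full pose-distance search would then run only over the returned subset rather than the whole database.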
Conventionally, to make wound props, an artist first carves a wound sculpture from oil clay, makes a wound mold by pouring silicone or plaster over the finished sculpture, and then pours silicone into the mold to produce the prop. This conventional approach takes a lot of time and effort to learn how to handle materials such as oil clay or silicone and to acquire wound-sculpting techniques. Recently, many users have tried to create wound molds using 3D modeling software and 3D printers, but tasks such as 3D wound modeling or preparing a 3D model for printing are difficult for non-experts. This paper presents a simple and rapid way for users to create a wound mold model from a wound image and print it with a 3D printer. Our method provides easy-to-use capabilities for wound mold production, so that makeup artists who are not familiar with 3D modeling can easily create molds using the software.
We present a novel local shape descriptor, the wavelet energy decomposition signature (WEDS), for robustly matching non-rigid 3D shapes with different resolutions. The local shape descriptors are generated by decomposing the Dirichlet energy on the input triangular mesh. Our approach can be applied directly or used as the input to learning-based approaches. Experimental results show that the proposed WEDS achieves promising results on shape-matching tasks, even between shapes with incompatible structures.
Wire sculpture is a unique art form in which an artist represents a 3D form using 1D wires. However, designing novel wire sculptures is difficult and has been limited to talented experts. We introduce a computer-assisted framework for manually creating 3D wire bending art from given 3D models. Our method extracts a set of 3D contour curves from several viewpoints as the target design of the wire sculpture. Next, we generate grooves on the surface of the given model and print it with a 3D printer as a support structure. By fitting wires directly into the grooves and assembling them, users can fabricate 3D wire bending art by hand easily and quickly.
Computing clipped Voronoi diagrams in a 3D volume is a challenging problem. In this poster, we propose an efficient GPU implementation to tackle it. Discretizing the 3D volume into a tetrahedral mesh, the main idea of our approach is to use the four planes of each tetrahedron (tet for short) to clip the Voronoi cells, instead of using the bisecting planes of the Voronoi cells to clip the tets as in previous approaches. This strategy drastically reduces computational complexity. Our approach outperforms the state-of-the-art CPU method by up to one order of magnitude.
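The poster does not give implementation details, but one common preparatory step for such tet-based clipping can be sketched as follows: tets whose four vertices share a nearest seed lie entirely inside one Voronoi cell, while the rest straddle a boundary and are the ones whose face planes must clip cells. This CPU sketch is illustrative only (the actual method runs on the GPU):

```python
import numpy as np

def nearest_seed(p, seeds):
    # Index of the Voronoi seed closest to point p.
    return int(np.argmin(np.linalg.norm(seeds - p, axis=1)))

def classify_tets(tet_vertices, seeds):
    """Classify each tetrahedron (a 4x3 vertex array) as interior to one
    Voronoi cell (all four vertices share a nearest seed) or as a boundary
    tet that straddles cells and requires plane clipping."""
    interior, boundary = {}, []
    for t, verts in enumerate(tet_vertices):
        owners = {nearest_seed(v, seeds) for v in verts}
        if len(owners) == 1:
            interior.setdefault(owners.pop(), []).append(t)
        else:
            boundary.append(t)
    return interior, boundary
```

Interior tets can be assigned to their cell wholesale; only boundary tets need per-plane clipping work.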
In this poster, we propose a refined scheme and system that realizes multi-directional 3D printing with the same strength as traditional unidirectional 3D printing. With the introduction of a 10.6 μm CO2 laser, the printing system can heat the interfaces of already-printed components and increase intermolecular penetrating diffusion while fabricating the base layers of the next component, thereby augmenting the interfacial bonding strength between components. Tensile tests demonstrate that the interfacial bonding strength can be increased by more than 27%, reaching the strength of integrally printed parts. The improved printing system makes multi-directional 3D printing with strength retention possible.
In this study, we developed Code Weaver, a tool for learning basic programming concepts designed for elementary-school-aged and younger children. The tool’s tangible user interface is programmed by directly combining physical parts by hand. In this way, we attempt to resolve several typical obstacles small children encounter when learning programming languages, such as text input via a keyboard, strict syntax requirements, and the difficulties of group learning with multiple participants.
Proprioception, or body awareness, is an essential sense that aids in the neural control of movement. Proprioceptive impairments are commonly found in people with neurological conditions such as stroke and Parkinson’s disease, and such impairments are known to impact the patient’s quality of life. Robot-aided proprioceptive training has been proposed and tested to improve sensorimotor performance. However, such robot-aided exercises are implemented much like many physical rehabilitation exercises, requiring task-specific and repetitive movements from patients. The monotonous nature of such repetitive exercises can reduce patient motivation, thereby impacting treatment adherence and therapy gains. Gamification of exercises can make physical rehabilitation more engaging and rewarding. In this work, we discuss our ongoing efforts to develop a game that accompanies a robot-aided wrist proprioceptive training exercise.
This study proposes an interface device for augmenting multi-sensory reality based on a visually unified experience, with a high level of consistency between the real and virtual worlds, using video see-through mixed reality (MR). When the user puts on an MR head-mounted display (HMD) and holds a box-shaped device, virtual objects are displayed within the box, and vibrations and reaction forces are presented synchronously based on the dynamics of those objects. Inside the device, multiple built-in actuators using solenoids and eccentric motors produce actions controlled in synchrony with the motion of the objects. Furthermore, the user can also hear the sound emitted from the virtual objects via 3D sound localization.
China’s intangible cultural heritage of glove puppetry is a thousand-year-old performance art. However, this ancient cultural and artistic treasure faces difficulties of preservation and inheritance. Our approach is to integrate robotics with glove puppetry by designing and developing the glove puppetry robot HinHRob. It simulates the performances of puppeteers and interacts with the audience in real time. The robot can not only perform alongside professional puppeteers but also attract interest from audiences of different cultural backgrounds and ages. This creative approach of combining technology with intangible cultural heritage may open a new mode of transmission for the ancient legacy of glove puppetry.
Weight feeling is important in musical instrument training and physical workout training, but it is difficult to convey accurate information about weight feeling to a trainee through visual or verbal instruction. This study measures the weight feeling on muscles when playing a piano keyboard or doing push-ups using a wearable device. Muscle-deformation data are measured by a photo-sensor array wrapped around the forearm and fed to a trained Support Vector Regression (SVR) model that estimates the weight feeling. In our experiment, when estimating weights up to 2000 g, the correlation coefficient between the measured and estimated values was 0.911, while the RMSE and MAE were 236 g and 150 g, respectively. In future work, we want to apply this technique across many arm postures.
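The regression step could be sketched as below with scikit-learn’s SVR; the 16-channel sensor layout, the synthetic data, and the hyperparameters are all illustrative assumptions, not the authors’ setup:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Hypothetical stand-in data: each row is one photo-sensor-array reading
# (assumed 16 channels of muscle-deformation values); labels are weights in grams.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))
y_train = X_train @ rng.normal(size=16) * 100 + 1000  # synthetic sensor-to-weight relation

# Train an RBF-kernel SVR and evaluate with the same metrics the abstract reports.
model = SVR(kernel="rbf", C=100.0, epsilon=10.0)
model.fit(X_train, y_train)

y_pred = model.predict(X_train)
mae = mean_absolute_error(y_train, y_pred)
rmse = mean_squared_error(y_train, y_pred) ** 0.5  # RMSE is always >= MAE
```

On real data, the features would be the calibrated photo-sensor readings and the labels the ground-truth weights up to 2000 g.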
Kinesthetic sensation is important for creating presence in virtual reality applications. One possible way of presenting kinesthetic sensation with compact equipment is the kinesthetic illusion, an illusion of position and movement of one’s own body generated by vibration. However, the kinesthetic illusions observed so far involve neither large nor rapid movement. To resolve this issue, we propose simultaneous stimulation of numerous tendons and muscles related to arm movement. Our investigation of the chest, lower arm, and upper arm finds that the intensity of the illusion changes when multiple points are stimulated.
We propose a system that automatically generates layouts for magazines and other media requiring graphic design. When images or texts are input as the content to be placed, the system automatically generates an appropriate layout in consideration of both content and design. Layout generation proceeds by randomized processing in accordance with a rule set of minimum conditions that any layout must satisfy (the minimum-condition rule set), producing a large number of candidates. An evaluation of the candidates’ appearance, style, design, and composition is combined with an evaluation of their diversity, and the top candidates of the combined evaluation are returned. This automation makes layout creation by users such as graphic designers much more efficient, and also allows the user to choose from a wide range of ideas to create attractive layouts.
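The generate-then-score pipeline can be sketched as follows; the on-page rule, the toy aesthetic score, and the block representation are illustrative stand-ins for the paper’s actual rule set and evaluation terms:

```python
import random

def generate_candidates(contents, n=100, seed=0):
    """Randomly place content blocks (w, h) on a unit page, keeping only layouts
    that satisfy a minimum-condition rule (here: every block stays on the page)."""
    rng = random.Random(seed)
    candidates = []
    while len(candidates) < n:
        layout, ok = [], True
        for w, h in contents:
            x, y = rng.random(), rng.random()
            if x + w > 1 or y + h > 1:   # minimum condition violated: reject layout
                ok = False
                break
            layout.append((x, y, w, h))
        if ok:
            candidates.append(layout)
    return candidates

def score(layout):
    # Toy composition score: reward covered page area, penalize pairwise overlap.
    area = sum(w * h for _, _, w, h in layout)
    overlap = 0.0
    for i, (x1, y1, w1, h1) in enumerate(layout):
        for x2, y2, w2, h2 in layout[i + 1:]:
            ox = max(0.0, min(x1 + w1, x2 + w2) - max(x1, x2))
            oy = max(0.0, min(y1 + h1, y2 + h2) - max(y1, y2))
            overlap += ox * oy
    return area - 10.0 * overlap

def top_k(candidates, k=3):
    # Return the best-scoring candidates for the user to choose from.
    return sorted(candidates, key=score, reverse=True)[:k]
```

A diversity term, as described above, would additionally penalize candidates that are near-duplicates of higher-ranked ones.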
The human ability to forecast motions and trajectories is one of the most important skills in many sports. With the development of deep learning and computer vision, it is becoming possible to do the same with real-time computing. In this paper, we present a real-time table tennis forecasting system using a long short-term pose prediction network. Our system can predict the trajectory of a serve before the ping-pong ball is even hit, based on the previous and present motions of a player captured with only a single RGB camera. The system can be used either to train beginners’ prediction skills or to help practitioners develop a concealed serve.
We propose a novel search engine that recommends combinations of furniture preferred by a user based on image features. In recent years, research on furniture search engines has attracted attention with the development of deep learning techniques. However, existing search engines mainly focus on retrieving similar furniture (items), and few studies have considered interior combinations. Even techniques that do consider combinations do not take into account each user’s preferences: they make recommendations based on the text data attached to images and lack a mechanism for judging individual preferences such as the shape and color of furniture. Thus, in this study, we propose a method that recommends items matching a selected item for each individual by analyzing images the user selects and automatically creating a rule for combining furniture based on the proposed features.
We present a portable multi-camera system for recording panoramic light field video content. The proposed system captures wide-baseline (0.8 m), high-resolution (>15 pixels per degree), large-field-of-view (>220°) light fields at 30 frames per second. The array contains 47 time-synchronized cameras distributed on the surface of a hemispherical, 0.92 m diameter plastic dome. We use commercially available action sports cameras (Yi 4k) mounted inside the dome with 3D-printed brackets. The dome, mounts, triggering hardware, and cameras are inexpensive, and the array itself is easy to fabricate. Using modern view interpolation algorithms, we can render objects as close as 33 cm to the surface of the array.
In this paper, a compact imaging system is developed to enable simultaneous acquisition of spectral and depth information in real time at high resolution. We achieve this using only two commercial cameras and an efficient deep-learning-based computational reconstruction algorithm. For the first time, this work allows 5D information (3D space + 1D spectrum + 1D time) of a target scene to be captured with a miniaturized apparatus and without active illumination.
We present an approach for obtaining high-quality focus-stacking images. The key idea is to integrate the multi-view structure-from-motion (SfM) algorithm with the focus-stacking process: we perform focus-bracketed shooting at multiple viewpoints, generate depth maps for all viewpoints using the SfM algorithm, and compute the focus stack using the depth maps and local sharpness. By using the depth maps, we achieve focus-stacking results with fewer artifacts around object boundaries and without halo artifacts, which are difficult to avoid with the previous sharpest-pixel and pyramid approaches. To illustrate the feasibility of our approach, we performed focus stacking of small objects such as insects and flowers.
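The depth-guided fusion step can be sketched as follows: for each pixel, pick the bracketed exposure whose focus distance is closest to the SfM depth. This sketch omits the local-sharpness tie-breaking and boundary handling described above, and the array layout is an assumption:

```python
import numpy as np

def focus_stack(stack, depth, focus_depths):
    """Fuse a focus-bracketed stack using a per-pixel depth map.

    stack        : list/array of images, shape (n_slices, H, W)
    depth        : per-pixel depth map, shape (H, W)
    focus_depths : focus distance of each slice, length n_slices
    """
    stack = np.asarray(stack, dtype=float)
    # For each pixel, index of the slice focused nearest to that pixel's depth.
    idx = np.argmin(
        np.abs(depth[None] - np.asarray(focus_depths)[:, None, None]), axis=0
    )
    rows, cols = np.indices(depth.shape)
    return stack[idx, rows, cols]   # gather the chosen slice per pixel
```

In the full method, the depth map comes from SfM across viewpoints, which is what suppresses halo artifacts near object boundaries.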
We propose a novel fundus imaging system using a dihedral corner reflector array (DCRA), an optical component that works like a lens but has neither a focal length nor an optical axis. A DCRA transfers a light source to a plane-symmetric point. Conventionally, this feature has been used in many display applications in the field of computer graphics, such as virtual retinal displays and three-dimensional displays. As a sensing application, by contrast, we use a DCRA to place a virtual camera in or on an eyeball to capture the fundus. The proposed system has three features: (1) robustness to eye movement, (2) wavelength independence, and (3) a simple optical system. In our experiments, the proposed system achieves a large eyebox of 8 mm. The proposed system could potentially be applied to at-home preventive medicine in daily life.
Rendering a scene is the most repeated and resource-intensive task at the core of a VFX facility. Each render can consume significant compute resources, which are expensive and finite. The number of renders (iterations) required to final a shot varies with the creative and technical complexity of the scene, so a reliable advance estimate of render time could prove useful for budgeting and scheduling. In this poster, we present a novel approach to estimating the render time of a scene using a machine learning model built on previous renders from a show. Each input vector for training is encoded from direct constituents of a scene, such as assets, looks, and lights, and render parameters such as the number of samples and the resolution. Renders are categorized into two buckets, less than an hour and greater than an hour, and two models are built for estimation. The estimated results for test scenes are verified against actual render times to measure accuracy.
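The feature encoding and two-bucket prediction could be sketched as below. The field names, the k-nearest-neighbor stand-in model, and the data are all illustrative assumptions; the poster does not specify its feature schema or model family:

```python
import numpy as np

def encode_scene(scene):
    """Encode direct scene constituents into a fixed-length feature vector.
    Field names here are hypothetical, not the authors' actual schema."""
    return np.array([
        scene["num_assets"],
        scene["num_lights"],
        scene["samples_per_pixel"],
        scene["resolution_x"] * scene["resolution_y"] / 1e6,  # megapixels
    ], dtype=float)

def train_bucket_model(scenes, render_minutes):
    # Normalize features and label each past render by its bucket (> 1 hour?).
    X = np.stack([encode_scene(s) for s in scenes])
    y = np.array([m > 60 for m in render_minutes])
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9
    return (X - mu) / sigma, y, mu, sigma

def predict_bucket(Xn, y, mu, sigma, scene, k=3):
    # k-NN vote in normalized feature space (a stand-in for the trained models).
    q = (encode_scene(scene) - mu) / sigma
    nearest = np.argsort(np.linalg.norm(Xn - q, axis=1))[:k]
    return bool(y[nearest].mean() > 0.5)
```

The same encoded vectors would feed whichever classifier each of the two buckets uses in practice.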
In virtual reality (VR) and augmented reality (AR) systems, latency is one of the most important causes of simulator sickness. Latency is difficult to limit in traditional renderers, which sample time rigidly as a series of frames, each representing a single moment in time depicted with a fixed amount of latency. Previous researchers proposed adaptive frameless rendering (AFR), which abandons frames in order to sample space and time flexibly and reduce latency; however, their prototype was neither parallel nor interactive. We implement AFR in NVIDIA OptiX, a concurrent, real-time ray tracing API that takes advantage of NVIDIA GPUs, including their latest RTX ray tracing components. With proper tuning, our prototype prioritizes temporal detail when scenes are dynamic (producing rapidly updated, blurry imagery) and spatial detail when scenes are static (producing more slowly updated, sharp imagery). The result is parallel, interactive, low-latency imagery that should reduce simulator sickness.
We propose a compute-shader-based point cloud rasterizer with up to 10 times the performance of classic point-based rendering with the GL_POINT primitive. In addition, our rasterizer offers 5-byte depth-buffer precision with a uniform or customizable distribution, and we show that it is possible to implement a high-quality splatting method that blends overlapping fragments together while still maintaining higher frame rates than the traditional approach.
In outdoor scenes, the inhomogeneity of the atmosphere and the ground effect have a great impact on sound propagation, but these two effects are usually ignored by previous methods. We propose an improved FDTD-PE method to simulate sound propagation in 3D outdoor scenes. In the simulation, the ground and atmosphere are modeled as porous and inhomogeneous media, respectively. The scene is decomposed into a number of two-dimensional vertical ground planes in a three-dimensional cylindrical coordinate system, and each plane is further divided into a near-source region and a far-source region, solved by the FDTD solver and the parabolic equation (PE) solver, respectively. We validated our method in various outdoor scenes, and the results indicate that it accurately simulates outdoor sound propagation at considerably higher speed and with lower storage cost.
Conventional light-field methods of producing 3D images with circular parallax on a tabletop surface require several hundred projectors. Our novel approach produces a similar light field using only one-tenth the number of projectors. In our method, two cylindrical mirrors are inserted into the projection light paths. By appropriately folding the paths with these mirrors, we can form any viewpoint image in an annular viewing area from a group of rays sourced from all projectors arranged on a circle.
The focus of this research is to introduce the concept of Training Task Complexity (TT) into the design of Virtual Immersive Training (VIT) systems. In this report, we describe the design parameters, the experiment design, and initial results.
In this paper, we propose a method of presenting an aerial image at any point in the same three-dimensional space as the user. In existing methods, presenting an image at an arbitrary point in 3D space is difficult because the presentation position is fixed on the device. In this study, therefore, particles with light-scattering properties are delivered to an arbitrary point in space and used as a screen. Specifically, we focus on vortex rings, which can stably transport particles, and generate an aerial screen by colliding vortex rings ejected from air cannons at multiple points in the air. In a prototyping experiment, we generated a screen at a specified point in space and confirmed that aerial image presentation by projection was possible. We also identified potential issues.
We validated an optical method for measuring the display lag of modern head-mounted displays (HMDs). The method uses a high-speed digital camera to track landmarks rendered on the display panel of the Oculus Rift CV1 and S models. Using an Nvidia GeForce RTX 2080 graphics adapter, we found that the minimum estimated baseline latency of both the Oculus CV1 and S was extremely short (∼2 ms). Variability in lag was low, even when the lag was systematically inflated. Cybersickness was induced even at the small baseline lag and increased as the lag was inflated. These findings indicate that the Oculus Rift CV1 and S are capable of extremely low baseline display lag for angular head rotation, which appears to account for their low levels of reported cybersickness.
We propose a pop-up digital tabletop system that seamlessly integrates two-dimensional (2D) and three-dimensional (3D) representations of content in a digital tabletop environment. By combining a digital tabletop display of 2D content with a light-field display, we can visualize part of the 2D content in 3D. Users of our system can get an overview of the content in its 2D representation, then observe its details in the 3D visualization. The feasibility of our system is demonstrated with two applications: one for browsing cityscapes and the other for viewing insect specimens.
In the present study, we propose a stealth projection method that visually removes the ProCam system in dynamic projection mapping (DPM). In recent years, DPM has been actively studied as a way to change the appearance of moving and deforming objects by image projection. Various objects, such as hand-held items, clothes, the human body, and the face, serve as projection targets, and the expressive possibilities have continuously expanded. However, realizing this requires high-speed, multiplexed, special-purpose projection systems, and target objects end up closely surrounded by equipment. In DPM, which seamlessly connects the real and virtual worlds, such complex devices are an unnecessary visual distraction and should be removed to further exploit the potential of DPM. We therefore propose a stealth projection method in which the ProCam system cannot be seen, combining high-speed tracking with a single IR camera and all-around projection that applies aerial image display technology.
Higher education is embracing digital transformation at a relatively slow adoption rate, with only a few fragmented solutions showcasing the capabilities of new immersive technologies such as Virtual Reality (VR). One may argue that deployment costs and the substantial design knowledge required are critical stagnation factors in creating effective Virtual Immersive Educational (VIE) systems. We attempt to address these impediments with a cost-effective and user-friendly VIE system. In this paper, we briefly report the main elements of this design, initial results, and lessons learned.
Parallel coordinates is a well-known technique for visual analysis of high-dimensional data. Although it is effective for interactive discovery of patterns in subsets of dimensions and data records, it has scalability issues for large datasets. In particular, the amount of visual information potentially shown in a parallel coordinates plot grows combinatorially with the number of dimensions. Choosing the right ordering of axes is crucial, and a poor choice can lead to visual noise and a cluttered plot, causing the user to overlook a significant pattern or leave some dimensions unexplored. In this work, we demonstrate how eye tracking can help an analyst efficiently and effectively reorder the axes in a parallel coordinates plot. Implicit input from an inexpensive eye tracker assists the system in finding unexplored dimensions. Using this information, the system guides the user either visually or automatically to find further appropriate orderings of the axes.
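One way such implicit gaze input could drive reordering is sketched below: fixations are binned to their nearest axis, and neglected axes are interleaved next to heavily viewed ones. The binning and interleaving policy are illustrative assumptions, not the paper’s exact algorithm:

```python
from collections import Counter

def axis_attention(fixations_x, axis_positions):
    """Assign each gaze fixation (screen x-coordinate) to its nearest
    parallel-coordinates axis and count fixations per axis."""
    counts = Counter()
    for fx in fixations_x:
        nearest = min(range(len(axis_positions)),
                      key=lambda i: abs(axis_positions[i] - fx))
        counts[nearest] += 1
    return counts

def suggest_reordering(counts, n_axes):
    """Interleave explored (most-fixated) and unexplored axes so neglected
    dimensions land next to the ones the analyst is already studying."""
    by_attention = sorted(range(n_axes), key=lambda i: -counts.get(i, 0))
    explored, unexplored = by_attention[:n_axes // 2], by_attention[n_axes // 2:]
    order = []
    for e, u in zip(explored, unexplored):
        order += [e, u]
    order += explored[len(unexplored):] + unexplored[len(explored):]
    return order
```

The same attention counts could instead drive purely visual guidance, highlighting unexplored axes rather than reordering them automatically.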
In this paper, we propose a method for perceiving the internal structure of volumetric data using midair haptics. In this method, we render haptic stimuli using a Gaussian mixture model that approximates the internal structure of the volumetric data. The user’s hand is tracked by a sensor and represented in a virtual space, so users can touch the volumetric data with a virtual hand. The focal points of the ultrasound phased arrays that present the sense of touch are determined from the position of the user’s hand and the contact point of the virtual hand on the volumetric data. These haptic cues allow the user to directly perceive the sensation of touching the inside of the volumetric data. Our proposal is a solution to the occlusion problem in volumetric data visualization.
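Mapping the mixture to a stimulus could look like the sketch below: evaluate the fitted GMM at the contact point and convert the density to a clamped focal-point amplitude. The isotropic parameterization and the gain mapping are illustrative assumptions:

```python
import math

def gmm_intensity(p, components):
    """Evaluate a 3D isotropic Gaussian mixture at point p.
    `components` is a list of (weight, mean_xyz, sigma) tuples approximating
    the volume's internal structure (a hypothetical parameterization)."""
    total = 0.0
    for w, mu, sigma in components:
        d2 = sum((pi - mi) ** 2 for pi, mi in zip(p, mu))
        norm = (2 * math.pi * sigma ** 2) ** 1.5   # 3D Gaussian normalization
        total += w * math.exp(-d2 / (2 * sigma ** 2)) / norm
    return total

def ultrasound_amplitude(p, components, gain=1.0, max_amp=1.0):
    # Map mixture density at the contact point to a clamped focal-point amplitude.
    return min(max_amp, gain * gmm_intensity(p, components))
```

Denser internal structure near the virtual contact point thus yields a stronger midair stimulus at the corresponding focal point.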
3D computer-animated representations of complex biological systems and environments are often vastly oversimplified. There are a number of key reasons: to highlight a distinct biological mechanism of interest; technical limitations of hardware and software computer graphics (CG) capabilities; and a lack of data regarding cellular environments. This oversimplification perpetuates a naive understanding of fundamental cellular dynamics and topologies.
This work attempts to address these challenges through the development of a first-person interactive virtual environment that more authentically depicts molecular scales, densities and interactions in real-time. Driven by a collaboration between scientists, CG developers and 3D computer artists, Nanoscapes utilizes the latest CG advances in real-time pipelines to construct a cinematic 3D environment that better communicates the complexity associated with the cellular surface and nanomedicine delivery to the cell.