SIGGRAPH '18: ACM SIGGRAPH 2018 Posters

SESSION: Art & design

Automatic generation of artworks using virtual photoelastic material

Photoelasticity is a polarization-related phenomenon, defined as the change in the birefringence of a transparent material when internal force is applied. Interference fringes appear when the material is irradiated with polarized light and viewed through a polarizer. In this study, we apply the concept of photoelasticity to generative art. Assuming a virtual stress distribution in a two-dimensional material, our method automatically generates artworks exhibiting photoelastic fringes. A GPU-based acceleration of the current implementation is also discussed.
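
As a rough illustration of the idea, the sketch below renders dark-field polariscope fringes for a toy virtual stress field; the Gaussian stress blobs and all scaling constants are our own assumptions, not the authors' stress model.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy virtual stress field: a superposition of Gaussian "stress concentrations".
# These blobs and the constants are illustrative assumptions.
H, W = 512, 512
y, x = np.mgrid[0:H, 0:W]

def stress_blob(cx, cy, s):
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * s ** 2))

sigma_diff = stress_blob(180, 260, 60) + 0.7 * stress_blob(360, 220, 40)

# Standard dark-field polariscope relation: intensity ~ sin^2(pi * N), where the
# fringe order N is proportional to the principal stress difference.
fringe_order = 8.0 * sigma_diff
intensity = np.sin(np.pi * fringe_order) ** 2

plt.imshow(intensity, cmap="twilight")
plt.axis("off")
plt.savefig("virtual_photoelasticity.png", dpi=150, bbox_inches="tight")
```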

Beckett in VR: exploring narrative using free viewpoint video

This poster describes a reinterpretation of Samuel Beckett's theatrical text Play for virtual reality (VR). It is an aesthetic reflection on practice that follows up on a technical project description submitted to ISMAR 2017 [O'Dwyer et al. 2017]. Actors are captured in a green screen environment using free-viewpoint video (FVV) techniques, and the scene is built in a game engine, complete with binaural spatial audio and six degrees of freedom of movement. The project explores how ludic qualities in the original text help elicit the conversational and interactive specificities of the digital medium. The work affirms the potential for interactive narrative in VR, opens new experiences of the text, and highlights the reorganisation of the author-audience dynamic.

Computation of skinning weight using spline interface

Among many approaches for object and character deformation, closed-form skinning methods, such as Linear Blend Skinning (LBS) and Dual Quaternion Skinning (DQS), are widely used as they are fast and intuitive. The quality of these skinning methods highly depends on specifying appropriate skinning weights to vertices, which requires the intensive efforts of professional artists in production animation.
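
For context, a minimal linear blend skinning sketch is shown below: each vertex is deformed by the weight-blended bone transforms, v' = Σᵢ wᵢ Tᵢ v, which is why poor weights directly produce artifacts such as collapsing joints. The two-bone rig and the weight ramp are illustrative assumptions.

```python
import numpy as np

# Minimal LBS sketch: v' = sum_i w_i * T_i * v, on a toy two-bone chain.
def lbs(vertices, bone_transforms, weights):
    """vertices: (N,3); bone_transforms: (B,4,4); weights: (N,B), rows sum to 1."""
    v_h = np.hstack([vertices, np.ones((len(vertices), 1))])   # homogeneous coords
    per_bone = np.einsum("bij,nj->nbi", bone_transforms, v_h)  # every bone deforms every vertex
    blended = np.einsum("nb,nbi->ni", weights, per_bone)       # blend with skinning weights
    return blended[:, :3]

def rot_z(theta, pivot):
    T = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = pivot - T[:3, :3] @ pivot   # rotate about the joint, not the origin
    return T

verts = np.array([[x, 0.0, 0.0] for x in np.linspace(0, 2, 9)])
bones = np.stack([np.eye(4), rot_z(np.pi / 4, np.array([1.0, 0.0, 0.0]))])
w1 = np.clip(verts[:, 0] - 0.5, 0, 1)      # smooth handoff from bone 0 to bone 1
weights = np.stack([1 - w1, w1], axis=1)
print(lbs(verts, bones, weights).round(3))
```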

Creative use of signal processing and MARF in ISSv2 and beyond

Illimitable Space System (ISS) is a real-time, interactive, configurable toolbox for artists to create interactive visual effects in theatre performances and documentaries through user inputs such as gestures and voice. Kinect has been the primary input device for motion and video data capture. In this work, in addition to the existing motion-based visual and geometric data processing facilities in ISSv2, we describe our efforts to incorporate audio processing with the help of the Modular Audio Recognition Framework (MARF). The combination of computer vision and audio processing to interpret both music and human motion and create imagery in real time is both artistically interesting and technically challenging. With these additional modules, ISSv2 can support interactive performance authoring that employs visual tracking and signal processing to create trackable human-shaped animations in real time. The new modules are incorporated into the Processing sketchbook and language framework used by ISSv2. We verify their effects through live demonstrations, which are briefly described.

CRISPR/Cas9-NHEJ: action in the nucleus

CRISPR/Cas9-NHEJ: Action in the Nucleus (2017) is derived from an interdisciplinary creative process. This paper discusses the creation of this 210° scientific visualization, the usage of data from the worldwide Protein Data Bank, and the audio-visual presentation in an interactive dome setup. Since the topic is significant for the future of humanity, immersive experiences should be considered to convey tacit knowledge of gene-editing processes to make them approachable for the general public.

Design method of digitally fabricated spring glass pen

In this study, we propose a method to develop a spring glass dip pen using a 3D printer and to reproduce different types of writing feel. There have been several studies on pens that change the feel of writing. For example, EV-Pen [Wang et al. 2016] and haptic pens [Lee et al. 2004] change the feel of writing using vibration. Our proposed method, however, does not use vibration to reproduce the tactile sensation of softness.

El faro: developing a digital illustration of hull wreckage 15,400 feet below the surface of the Atlantic ocean

The National Transportation Safety Board (NTSB) is an independent agency charged with determining the probable cause of transportation accidents and promoting transportation safety. We collect a large volume of highly complex and diverse data, which is often integrated into digital illustrations to help explain the accident events, probable cause, and relevant safety issues. One major accident investigation in which digital illustrations were key was our investigation into the sinking of the cargo ship SS El Faro in 2015. (Fig. 1)

Ephemeral sandscapes: using robotics to generate temporal landscapes

Using commercially available parts, Ephemeral Sandscaper produces complex layered landscapes by semi-randomly selecting predefined elements and sculpting them onto a material field with compelling implications for Soft Architecture.

Fractal anatomy: imaging internal and ambient structures

Fractal shapes reflect the behavior of complex natural systems, but can be generated by simple mathematical equations. Images of 3D fractals almost exclusively depict opaque surfaces, and use reflected light and shadows to simulate a physical realization of these virtual objects. But rich inner detail can be revealed by reinterpreting the fractal as a volume and considering material transparency, light absorption and refraction. This work explores the range of images made possible by employing volume rendering techniques inspired by medical image visualization.

I am afraid: voice as sonic sculpture

We present a multi-user networked VR application, I Am Afraid, which uses voice as an interface to create sonic objects in a virtual environment. Words are spoken and added to the environment as three-dimensional textual objects. Other vocalizations are rendered as abstract shapes. The sculptural elements embed the sound of the voice that initiated their creation, and can be played as instruments via user-controlled interactions such as scrubbing, shaking, or looping. Multiple users can simultaneously be in the environment, mixing their voices in an evolving, dynamic, sound sculpture. I Am Afraid has been used for fun, performance, and therapeutic purposes.

Interactive projection mappings in a Japanese traditional house

We introduce interactive projection mappings in a traditional Japanese house. Traditional Japanese houses often use sliding doors and windows called shoji. A shoji is a panel of paper stretched over a wooden frame, and it can serve as a projection screen. We created two types of interactive projection mappings on shoji (Figure 1(a)(b)). Another characteristic of traditional Japanese houses is tatami: straw-mat flooring. We also created an interactive projection mapping on tatami flooring (Figure 1(c)).

Learning to move in crowd

The main goal of crowd simulation is to generate realistic movements of agents. Reproducing the mechanism of seeing the environment, understanding the current situation, and deciding where to step is crucial to simulating crowd movements. We formulate this walking process using deep reinforcement learning and run experiments on several typical scenarios.

Painting with DEGAS: (digitally extrapolated graphics via algorithmic strokes)

Most photograph-to-oil-painting algorithms are based on the techniques described by [Litwinowicz 1997] and improved upon by [Hertzmann 2001]. They are essentially fully automated processes that place strokes at random, choosing stroke orientations to follow local gradient normals. These strokes are built up over several layers of decreasing stroke size - painting broad strokes and then filling in details. Outside of the initially chosen parameters and some masking considerations, the user has little agency in how the algorithm chooses stroke placement, nor the ability to make direct changes or touch-ups until every stroke is laid down on the canvas - essentially it behaves like an "image filter" applied as one would adjust contrast or add texture.
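
A compact sketch of that baseline loop follows; the stroke stamping is deliberately crude (square stamps along a line), and the layer sizes and stroke counts are assumptions, but it shows the random placement, gradient-normal orientation, and coarse-to-fine layering described above.

```python
import numpy as np

def paint(image, brush_sizes=(16, 8, 4), strokes_per_layer=4000, length=3.0, seed=0):
    """Baseline painterly rendering: random strokes along gradient normals."""
    h, w, _ = image.shape
    canvas = np.tile(image.mean(axis=(0, 1)), (h, w, 1))   # start from mean color
    gy, gx = np.gradient(image.mean(axis=2))
    rng = np.random.default_rng(seed)
    for r in brush_sizes:                                  # coarse-to-fine layers
        for _ in range(strokes_per_layer):
            cy, cx = int(rng.integers(h)), int(rng.integers(w))
            dy, dx = gx[cy, cx], -gy[cy, cx]               # normal to the gradient
            n = np.hypot(dx, dy) + 1e-8
            dy, dx = dy / n, dx / n
            color = image[cy, cx]
            for t in np.linspace(-length * r, length * r, int(2 * length * r)):
                py, px = int(cy + t * dy), int(cx + t * dx)
                if 0 <= py < h and 0 <= px < w:            # stamp a small square
                    canvas[max(0, py - r // 4):py + r // 4 + 1,
                           max(0, px - r // 4):px + r // 4 + 1] = color
    return canvas
```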

Perceptual-based CNN model for watercolor mixing prediction

In this poster, we propose a model to predict the mixture of watercolor pigments using convolutional neural networks (CNN). With a watercolor dataset, we train our model to minimize a loss function of sRGB differences. Under the color-difference metric ΔE_Lab, our model achieves ΔE_Lab < 5 on 88.7% of the test set, meaning the difference cannot easily be detected by the human eye. In addition, we found an interesting phenomenon: even if the reflectance curve of the predicted color is not as smooth as the ground-truth curve, the RGB color is still close to the ground truth.
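
For reference, a minimal sketch of the ΔE metric being reported (CIE76 under a D65 white point) is shown below; the example colors are arbitrary.

```python
import numpy as np

# Delta-E (CIE76): convert sRGB to CIELAB under D65, then take the Euclidean distance.
M = np.array([[0.4124, 0.3576, 0.1805],     # linear sRGB -> XYZ (D65)
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.95047, 1.0, 1.08883])   # D65 reference white

def srgb_to_lab(rgb):
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M @ lin / WHITE
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def delta_e(rgb1, rgb2):
    return np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2))

# Delta-E below ~5 is hard to spot by eye; e.g. two near-identical greens:
print(delta_e([0.30, 0.55, 0.25], [0.31, 0.56, 0.25]))
```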

Progressive-CRF-net: single image radiometric calibration using stacked CNNs

A camera is a good instrument for measuring scene radiance. However, to please the human eye, the resulting image brightness is not linear in the scene radiance, so recovering the mapping function between scene radiance and image brightness is very important. We propose a Progressive-CRF-net for radiometric calibration. By stacking multiple networks and using pre-trained weights, this approach reduces training time and reaches better performance than previous work. Our experiments show a significant improvement in terms of PSNR and SSIM.

Small trees, big data: augmented reality model of air quality data via the chinese art of "artificial" tray planting

Our prototype app, Pocket Penjing, built using Unity3D, takes its name from the Chinese "penjing". These tray plantings of miniature trees pre-date bonsai, often including miniature benches or figures to allude to people's relationship with the tree. App users choose a species, then create and name their tree. Swiping rotates a 3D globe showing flagged locations, where each flag represents a live online air-quality monitoring station whose data stream the app can scrape. Data is pulled in from the selected station and the AR window loads. The AR tree grows in real-time 3D; its L-system form is determined by the selected live air-quality data, as sketched below. We used this prototype as the basis of a two-part formative participatory design workshop with 63 participants.
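
A minimal sketch of that growth mechanism: an L-system whose iteration count is driven by an air-quality value. The grammar and the AQI-to-iterations mapping are illustrative assumptions, not the app's actual rules.

```python
# L-system growth driven by an air-quality value (illustrative assumptions).
RULES = {"F": "FF", "X": "F[+X]F[-X]+X"}   # a classic branching grammar

def expand(axiom: str, iterations: int) -> str:
    s = axiom
    for _ in range(iterations):
        s = "".join(RULES.get(c, c) for c in s)
    return s

def tree_string(aqi: float) -> str:
    # Cleaner air (lower AQI) -> more iterations -> a fuller tree.
    iterations = max(1, 5 - int(aqi // 50))
    return expand("X", iterations)

print(len(tree_string(aqi=30)), len(tree_string(aqi=180)))
```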

Stitch: an interactive design system for hand-sewn embroidery

This paper presents a system that aims to assist with the design of hand-sewn embroidery. With our system, a user can edit his/her design until he/she is satisfied with the simulated embroidery. We demonstrate the effectiveness of our approach, showing that visually pleasing results can be generated with minimal effort.

Syntropic counterpoints: art of AI sense or machine made context art

The project Syntropic Counterpoints is conceptualized as a series of discussions between artificial-intelligence clones of historical figures on topics we want to expose to AI interpretation. The project is an artistic response to the approaching technological singularity and the emerging implementation of artificial intelligence in every aspect of everyday life, which is changing the landscape of social interaction forever. With this project we intend to raise questions such as: Are we using AI to make humans smarter, or to create a new living entity equal to us? How will this reflect on human society and its present planetary supremacy? Can we share the world and accept equality with a new AI living entity? What could be the consequences of that decision? We also try to point to AI's limitations and to examine the cultural, creative, historical and social benefits we can gain by using AI.

The stereoscopic art installation eccentric spaces

Stereoscopic techniques help to perceive spatial depth; hence they are used to create realistic representations of the three-dimensional world. They can also be used to manipulate spatial dimensions and create alternative spaces that challenge our understanding of visual reality. The video installation Eccentric Spaces is part of an art project that combines stereoscopic live action video with a Holobench-type display to depict alternative spaces that appear physically real.

Wall mounted level: a cooperative mixed reality game about reconciliation

Wall Mounted Level is a cooperative mixed-reality game that leverages multimodal interactions to support its narrative of 'reconciliation'. In it, players control their digitally projected characters and navigate them across a hand-drawn physical sculpture as they collaborate towards a shared goal: finding one another. The digital and physical characteristics of the game are further reflected in the ways players interact with it, making use of digital input devices and physical 'touch'. The abstract and poster discuss the design choices made for creating the varying modes of engagement and the motivation behind player collaboration in Wall Mounted Level.

SESSION: Augmented & virtual realities

Exploiting the limitations of spatio-temporal vision for more efficient VR rendering

Ever-higher virtual reality (VR) display resolutions and good-quality anti-aliasing make rendering in VR prohibitively expensive. Generating these complex frames 90 times per second in a binocular setup demands substantial computational power. Wireless transmission of the frames from the GPU to the VR headset poses another challenge, requiring high-bandwidth dedicated links.

Improving the realism of mixed reality through physical simulation

We present a new way of adding augmented information based on the computation of the physical equations that truly govern the behavior of objects. In computer graphics, it is common to make major simplifications to solve this type of equation in real time, often obtaining behaviors that differ markedly from reality. However, using model order reduction (MOR) techniques, we are able to pre-compute a parametric solution that is only evaluated in the visualization stage, greatly reducing the computation time in this online phase. We also present several examples that support our method, showing computational fluid dynamics (CFD) examples and deformable solids with nonlinear material behaviors. Since it is a mixed-reality implementation, we decided to create an interactive poster that allows the visualization of augmented reality videos using augmented reality techniques, which we call (AR)².

Interactive teaching aids design for essentials of anatomy and physiology: using bones and muscles as example

Learning the essentials of anatomy and physiology [R. Richardson et al. 2018] helps students understand the connection between the bones and muscles of the human body. In the past, teaching relied on books, pictures, videos, or fixed bone models. This kind of teaching may suit students over 15, but for students under 15 it is hard to sustain interest or study time. Models that can be assembled during class help: as Alison James [A. James et al. 2014] noted, building with LEGO helps us think about the 3D shape of an object, and it can also increase students' interest in learning.

Make-a-face: a hands-free, non-intrusive device for tongue/mouth/cheek input using EMG

Current devices aim to be more hands-free by providing users with the means to interact with them using other forms of input, such as voice, which can be intrusive. We propose Make-a-Face, a wearable device that allows the user to perform tongue, mouth, or cheek gestures via a mask-shaped device that senses muscle movement on the lower half of the face. The significance of this approach is threefold: 1) it allows a more non-intrusive approach to interaction; 2) we designed both the hardware and software from the ground up to accommodate the sensor electrodes; and 3) we propose several use-case scenarios ranging from smartphones to interactions with virtual reality (VR) content.

On-site example-based material appearance digitization

We present a novel example-based material appearance modeling method for digital content creation. The proposed method requires a single HDR photograph of an exemplar object made of the desired material under known environmental illumination. While conventional methods for appearance modeling require the object shape to be known, our method requires neither prior knowledge of the exemplar's shape nor shape recovery, which improves robustness and simplifies on-site appearance acquisition by non-expert users.

tARget: limbs movement guidance for learning physical activities with a video see-through head-mounted display

In an aging society, people are paying more attention to maintaining good exercise habits. Advances in technology make it possible to learn various kinds of exercises using multimedia equipment, for example, by watching instruction videos. However, it is difficult for users to learn accurate movements due to the lack of feedback.

The virtual schoolyard: attention training in virtual reality for children with attentional disorders

This work presents a virtual reality simulation for training different attentional abilities in children and adolescents. In an interdisciplinary project between psychology and computer science, we developed four mini-games that are used during therapy sessions to address different aspects of attentional disorders. First experiments show that the immersive, game-like application is well received by children. Our tool is also currently part of a treatment program in an ongoing clinical study.

Training assistant: strengthen your tactical nous with proficient virtual basketball players

Tactic training plays a crucial role in basketball offensive plays. With the aid of virtual reality, we propose a framework to improve the effectiveness and experience of tactic learning. The framework consists of a tactic input device and a wireless VR interaction system, which allow the user to conveniently input target tactics and practice in a high-fidelity setting. With the assistance of our VR training system, the user can vividly experience how the tactics are executed by viewing from a specific player's viewing direction. Additionally, tactic movement guidance, action hints on how to attack aggressively, and virtual defenders are rendered in our system to make the training more realistic. Using the proposed framework, players can strengthen their tactical nous and improve the efficiency of tactic training.

Transcalibur: dynamic 2D haptic shape illusion of virtual object by weight moving VR controller

We introduce a VR controller with a dynamically moving weight for 2D haptic shape rendering using a haptic shape illusion. This allows users to perceive the feeling of various shapes in virtual space with a single controller. In this paper, we describe the mechanical design of a prototype device that drives a weight over a 2D planar area to alter the mass properties of the hand-held controller. In our experiments, the system succeeded in providing shape perception over a wide range. We discuss the limitations and further capabilities of our device.

SESSION: Display & rendering

A process to create dynamic landscape paintings using barycentric shading with control paintings

In this work, we present a process that uses a barycentric shading method to create dynamic landscape paintings that change based on the time of day. Our process allows creating dynamic paintings for any time of day from only a limited number of control paintings. As a proof of concept, we used landscape paintings by Edgar Payne, one of the leading landscape painters of the American West. His specific style, which blends Impressionism with the style of other painters of the American West, is particularly appropriate for demonstrating the power of our barycentric shading method.
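
The core blending step can be sketched as below, assuming three control paintings keyed to times of day and simple hat-function barycentric weights; the actual control paintings and weight construction in the poster may differ.

```python
import numpy as np

# Blend control paintings with barycentric weights driven by time of day.
# The three control times and the hat-function weighting are illustrative
# assumptions, not the authors' exact barycentric shading formulation.
CONTROL_TIMES = np.array([6.0, 12.0, 19.0])   # dawn, noon, dusk paintings

def weights_for(hour: float) -> np.ndarray:
    # Piecewise-linear barycentric weights over the 1D "time" axis.
    w = np.maximum(0.0, 1.0 - np.abs(CONTROL_TIMES - hour) / 6.0)
    return w / w.sum()

def dynamic_painting(controls: np.ndarray, hour: float) -> np.ndarray:
    """controls: (3, H, W, 3) stack of control paintings."""
    return np.tensordot(weights_for(hour), controls, axes=1)

paintings = np.random.rand(3, 64, 64, 3)       # stand-ins for the control images
frame = dynamic_painting(paintings, hour=9.5)  # morning blend of dawn and noon
```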

Aerial 3D display using a symmetrical mirror structure

In the present study, we propose a new wide-viewing-angle aerial display that can present aerial three-dimensional images to the surroundings. Aerial imaging has a strong visual impact, and significant efforts have been made to realize it. In recent years, attention has been drawn to optical elements that realize retroreflective transmission, which make it possible to easily project such images into the air. However, the viewing angle of the aerial image is narrow, and it is difficult for multiple people to observe aerial images simultaneously or from all angles. Therefore, in the present study, by symmetrically arranging mirrors at the end of the retroreflective optical transfer system, the maximum viewing angle of the aerial image is enlarged and observation from the entire circumference becomes possible.

Aerial 3D/2D composite display: depth-fused 3D for the central user and 2D for surrounding audiences

This paper proposes a novel optical system that shows an aerial 3D image to a user in front of the display and its 2D image to surrounding viewers. Our optics forms two layered aerial images that are visible within a limited viewing area; outside it, only the rear aerial 2D image is visible. The viewing area is controlled by the area of a retro-reflector in AIRR (Aerial Imaging by Retro-Reflection). The central user perceives depth in the aerial screen based on the DFD (depth-fused 3D) display principle.

Automated acquisition and real-time rendering of spatially varying optical material behavior

We created a fully automatic system for acquiring the spatially varying optical material behavior of real object surfaces under a hemisphere of individual incident light directions. The resulting measured material model is flexibly applicable to arbitrary 3D model geometries, can be photorealistically rendered and interacted with in real time, and is not constrained to isotropic materials.

Conservative Z-prepass for frustum-traced irregular Z-buffers

This paper presents a pipeline to accelerate frustum-traced irregular z-buffers (IZBs). The IZB proposed by Wyman et al. is used to render accurate hard shadows for real-time applications such as video games, but it is expensive compared to shadow mapping. To improve hard-shadow performance, we use a two-pass visibility test, integrating a conservative shadow map into the IZB pipeline. This paper also presents a more precise implementation of the conservative shadow map than previous implementations. In our experiments at 4K screen resolution, the two-pass visibility test improves the performance of the hard-shadow computation by more than a factor of two on average, though there is still room for optimization.

General primitives for smooth coloring of vector graphics

We propose a novel and intuitive method for coloring vector graphics which is easy to use and creates richly colored artwork with very little effort. Further, it preserves the underlying geometry of the vector graphic primitives, thereby making it easy to perform subsequent edits. Our method builds upon the concepts of shape coverage, color and opacity, and is thus applicable to all vector graphics constructs, including non-convex paths and text. Furthermore, our method is highly performant and provides real-time results irrespective of the number of coloring primitives used.

Improving incompressible SPH simulation efficiency by integrating density-invariant and divergence-free conditions

Our method shortens fluid simulation time by coupling the density-invariant and divergence-free conditions, and it achieves the same simulation quality as other methods. Further, we regard the displacement of particles as the only basic variable of the continuity equation, which improves the stability of the fluid to a certain extent.

Lighting condition adaptive tone mapping method

We propose an adaptive tone mapping method for displaying HDR images according to ambient light conditions. To compensate for the loss of perceived luminance in brighter viewing conditions, we enhance the HDR image with an algorithm based on the Naka-Rushton model. Changes of the HVS response under different adaptation levels are considered, and we match the response under the ambient conditions to the plateau response to the original HDR scene. The enhanced HDR image is then tone mapped through a curve constructed from the original image's luminance histogram to produce visually pleasing images under the given viewing conditions.
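
A minimal sketch of the Naka-Rushton building block follows: R(L) = Lⁿ/(Lⁿ + σⁿ). Coupling the semi-saturation σ to the ambient luminance is the adaptive step; the specific coupling and exponent below are illustrative assumptions, not the poster's calibration.

```python
import numpy as np

def naka_rushton(L, sigma, n=0.73):
    """Naka-Rushton response R(L) = L^n / (L^n + sigma^n); n is a typical
    photoreceptor-model exponent, assumed here."""
    Ln = np.power(L, n)
    return Ln / (Ln + np.power(sigma, n))

def adaptive_tonemap(hdr_luminance, ambient_cd_m2):
    # Brighter surroundings -> higher semi-saturation -> compensating boost.
    # This coupling is an illustrative assumption.
    sigma = np.median(hdr_luminance) * (1.0 + 0.1 * np.log1p(ambient_cd_m2))
    return naka_rushton(hdr_luminance, sigma)

hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(256, 256))  # toy HDR image
dim, bright = adaptive_tonemap(hdr, 10.0), adaptive_tonemap(hdr, 500.0)
```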

MegaParallax: 360° panoramas with motion parallax

Capturing 360° panoramas has become straightforward now that this functionality is implemented on every phone. However, it remains difficult to capture immersive 360° panoramas with motion parallax, which provide different views for different viewpoints. Alternatives such as omnidirectional stereo panoramas provide different views for each eye (binocular disparity), but do not support motion parallax, while Casual 3D Photography [Hedman et al. 2017] reconstructs textured 3D geometry that provides motion parallax but suffers from reconstruction artefacts. We propose a new image-based approach for capturing and rendering high-quality 360° panoramas with motion parallax. We use novel-view synthesis with flow-based blending to turn a standard monoscopic video into an enriched 360° panoramic experience that can be explored in real time. Our approach makes it possible for casual consumers to capture and view high-quality 360° panoramas with motion parallax.

Practical acquisition and rendering of common spatially varying holographic surfaces

We present a novel approach to measure the appearance of commonly found spatially varying holographic surfaces. Such surfaces are made of one-dimensional diffraction gratings that vary in orientation and periodicity over a sample to create impressive visual effects. Our method recovers the orientation and periodicity maps using only flash illumination and a DSLR camera. We present real-time renderings under environmental illumination using the measured maps that match the observed appearance.

Practical measurement-based spectral rendering of human skin

Realistic appearance modeling of human skin is an important research topic with a variety of applications in computer graphics. Various diffusion-based BSSRDF models [Jensen et al. 2001; Donner and Jensen 2005; Donner and Jensen 2006] have been introduced in graphics to efficiently simulate subsurface scattering in skin, including its layered structure. These models, however, assume homogeneous subsurface scattering parameters and produce spatial color variation using an albedo map. In this work, we build upon the spectral scattering model of [Donner and Jensen 2006] and target a practical measurement-based rendering approach for such a spectral BSSRDF. The model assumes that scattering in the two primary layers of skin (epidermis and dermis) can be modeled with relative melanin and hemoglobin chromophore concentrations, respectively. To drive this model for realistic rendering, we employ measurements of skin patches using an off-the-shelf Miravex Antera 3D camera, which provides spatially varying maps of these chromophore concentrations as well as the corresponding 3D surface geometry (see Figure 1) using a custom imaging setup.

Progressive real-time rendering of unprocessed point clouds

Rendering tens of millions of points in real time usually requires either high-end graphics cards, or the use of spatial acceleration structures. We introduce a method to progressively display as many points as the GPU memory can hold in real time by reprojecting what was visible and randomly adding additional points to uniformly converge towards the full result within a few frames.

Our method greatly limits the number of points that have to be rendered each frame, and it converges quickly and in a visually pleasing way, which makes it suitable even for notebooks with low-end GPUs. The data structure consists of a randomly shuffled array of points that is incrementally generated on the fly while points are being loaded.

Due to this, it can be used to directly view point clouds in common sequential formats such as LAS or LAZ while they are being loaded, without the need to generate spatial acceleration structures in advance, as long as the data fits into GPU memory.
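
A CPU-side sketch of the shuffled-array idea: shuffle once while loading, then add one fixed-size batch of points per frame so the image converges uniformly after points/batch frames. Reprojection of previously shaded pixels, done on the GPU in the actual method, is elided; the batch size and data are placeholders.

```python
import numpy as np

class ProgressivePoints:
    """Randomly shuffled point array consumed in fixed-size per-frame batches."""
    def __init__(self, points: np.ndarray, batch_size: int = 1_000_000):
        rng = np.random.default_rng(42)
        self.order = rng.permutation(len(points))   # shuffle once while loading
        self.points = points
        self.batch_size = batch_size
        self.cursor = 0

    def next_batch(self) -> np.ndarray:
        """Points to add this frame; empty once the cloud has fully converged."""
        sel = self.order[self.cursor:self.cursor + self.batch_size]
        self.cursor = min(self.cursor + self.batch_size, len(self.order))
        return self.points[sel]

cloud = np.random.rand(5_000_000, 3).astype(np.float32)   # stand-in for LAS data
stream = ProgressivePoints(cloud)
while (batch := stream.next_batch()).size:
    pass  # upload `batch` to the GPU and splat it on top of the reprojected frame
```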

Realistic post-processing of rendered 3D scenes

In this work, we show realistic post-processing of rendered 3D scenes based on the CycleWGAN generative adversarial network. We propose to use the CycleGAN architecture and a Wasserstein loss function with an additional identity component in order to transfer graphics from Grand Theft Auto V to an older game in the series, Grand Theft Auto: San Andreas. We aim to demonstrate the application of modern style transfer and unpaired image-to-image translation methods to graphics improvement using deep neural networks with an adversarial loss.

Retinal resolution display technology brings impact to VR industry

Currently, virtual reality head-mounted displays have several problems that need to be overcome, such as insufficient display resolution, latency, and the vergence-accommodation conflict. When the resolution is not high enough, the virtual image appears grainy or exhibits the screen-door effect. These problems give VR users an imperfect image-quality experience and prevent a good sense of immersion, so it is necessary to solve the problem of insufficient display resolution. INT TECH Co. is working towards this goal and has made very good progress.

Which BSSRDF model is better for heterogeneous materials?

We present an improved method for rendering heterogeneous translucent materials with existing BSSRDF models. In general BSSRDF models, the optical properties of the target object are constant. Sone et al. proposed a method that combines existing BSSRDF models to render heterogeneous materials. However, that method generates brighter and more blurred images compared with correctly simulated ones. We have experimented with various BSSRDF models in this method and rendered heterogeneous materials. As a result, the image rendered with the better dipole model is the closest to the result of Monte Carlo simulation. By incorporating the better dipole model into the method proposed by Sone et al., we can render more realistic images of heterogeneous materials.

SESSION: Hardware interfaces

3D facial geometry analysis and estimation using embedded optical sensors on smart eyewear

Facial performance capture projects a performer's facial expression onto a computer graphics model for animation production. Retro-reflective markers and cameras are widely used for performance capture. To capture expressions, we need to place markers on the performer's face and calibrate the intrinsic and extrinsic parameters of the cameras in advance; the measurable space is then limited to the calibrated area. In this study, we propose a system to capture facial performance using smart eyewear with photo-reflective sensors and machine learning techniques. We also show the result of a principal component analysis of facial geometry used to determine a good set of estimation parameters.

Development of an open source motion capture system

Motion capture (MoCap) has been one of the leading and most useful tools within the field of animation for capturing fluid and detailed motion. However, it can be quite expensive for animators, game developers and educators on a tight budget. By using Raspberry Pi Zeros with NoIR cameras and IR LED light rings, the cost of a four-camera system can potentially be reduced to less than 1000 USD. The research described should lead to an effective and useful system, able to detect multiple markers, record their coordinates, and keep track of them as they move. With a setup of three or more cameras, one can triangulate the marker data on a low-cost host computer. All software and hardware designs will be disseminated open source, providing anyone interested in MoCap, whether for hobbyist, semi-professional, or educational purposes, a system for a fraction of the typical cost.
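
The triangulation step mentioned above can be sketched with the classic linear (DLT) method: each calibrated camera contributes two linear constraints per detected marker. The camera matrices below are toy values, not the Raspberry Pi rig's calibration.

```python
import numpy as np

def triangulate(proj_mats, pixels):
    """Linear (DLT) triangulation: stack u*P3-P1 and v*P3-P2 per view, solve by SVD."""
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]                     # de-homogenize

# Two toy cameras observing the marker at (0.2, 0.1, 3.0); intrinsics and poses
# are illustrative assumptions.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # 0.5 m baseline
X_true = np.array([0.2, 0.1, 3.0, 1.0])
px = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate([P1, P2], px))            # ~ [0.2, 0.1, 3.0]
```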

Learning optimal lighting patterns for efficient SVBRDF acquisition

Digitally acquiring high-quality material appearance from the real world is challenging, with applications in visual effects, e-commerce and entertainment. One popular class of existing work is based on hand-derived illumination multiplexing [Ghosh et al. 2009], using hundreds of patterns in the most general case [Chen et al. 2014].

Make your own retinal projector: retinal near-eye displays via metamaterials

Retinal projection is required for xR applications that deliver immersive visual experiences throughout the day. If general-purpose retinal projection methods could be realized at low cost, not only could images be displayed on the retina using less energy, but the weight of the projection unit could also be cut from AR goggles. Several retinal projection methods have been proposed. Maxwellian-optics-based retinal projection was proposed in the 1990s [Kollin 1993]; laser scanning [Liao and Tsai 2009] and laser projection using spatial light modulators (SLM) or holographic optical elements have also been explored [Jang et al. 2017]. In the commercial field, a QD Laser device with a viewing angle of 26 degrees is available. However, as the lens and iris of the eyeball sit in front of the retina, retinal projection is generally fraught with narrow viewing angles and small-eyebox problems. Due to these problems, retinal projection displays remain a rare commodity, because their optical schematics are difficult to design.

Solar projector

The sun is the most universal, powerful and familiar energy source available on the planet. Every organism and plant has evolved in response to the energy brought by the sun, and humanity is no exception. We have invented many artificial lights since Edison's light bulb; in recent years, LEDs are one of the most representative examples, and displays and projectors using LEDs are still being actively developed. However, it is difficult to reproduce ideal light with the high brightness and broad spectrum of sunlight. Furthermore, considering energy sustainability and environmental contamination in the manufacturing process, artificial light cannot surpass sunlight. Against this backdrop, projects that utilize sunlight have been actively carried out around the world. Concentrating solar power (CSP) plants generate electricity using the heat of sunlight to turn turbines [Müller-Steinhagen and Trieb 2004]; [Koizumi 2017] is an aerial image presentation system using the sun as a light source; and digital sundials use the shadow of sunlight to display digital time [Scharstein et al. 1996]. These projects attempt to use direct sunlight without any conversion and minimize energy loss.

3D content creation exploiting 2D character animation

While 3D animation is constantly increasing in popularity, 2D is still largely in use in animation production. In fact, 2D has two main advantages. The first is economic, as it is more rapid to produce, having one dimension fewer to consider. The second is important for artists, as 2D characters usually have highly distinctive traits, which are lost in a 3D transposition. An iconic example is Mickey Mouse, whose ears appear circular no matter which way he is facing.

Automatic display zoom for people suffering from Presbyopia

Human eyes have an accommodation function that adjusts focus for different viewing distances. However, it weakens with age: when you move paper closer to read small letters, it is out of focus, and when you move it away to bring it into focus, the letters are too small to read. This condition is called presbyopia. People with presbyopia face the same problem when using a smartphone or tablet; although they can magnify the display with the pinch gesture, it is a bother. A method for automatic display zoom, to see detail and an overview, was proposed in [Satake et al. 2016]. This method measures the distance between the face and the screen to judge whether the user wants to see detail or an overview: moving the device close zooms in for detail, and moving it away zooms out for an overview. In this paper, we improve and apply this method for presbyopia. First we observe and analyze the behavior of presbyopic people trying to read small letters; then we propose a suitable zooming function, for example, one in which the screen also zooms in when moved away if the person has presbyopia.
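
The proposed behavior can be sketched as a distance-to-zoom rule in which the far branch flips for presbyopic users; all thresholds and gains below are illustrative assumptions.

```python
# Distance-to-zoom rule: close -> zoom in (detail); far -> zoom out (overview),
# except in presbyopia mode, where moving the device away also zooms in so
# small text stays legible. Thresholds and gains are assumed values.
NEAR_CM, FAR_CM = 25.0, 45.0   # assumed comfortable-reading band

def zoom_factor(face_distance_cm: float, presbyopia: bool) -> float:
    if face_distance_cm < NEAR_CM:                   # pulled close: wants detail
        return 1.0 + (NEAR_CM - face_distance_cm) * 0.05
    if face_distance_cm > FAR_CM:
        if presbyopia:                               # moved away to focus: still zoom in
            return 1.0 + (face_distance_cm - FAR_CM) * 0.04
        return max(0.5, 1.0 - (face_distance_cm - FAR_CM) * 0.02)  # overview
    return 1.0                                       # neutral band: no change

print(zoom_factor(55, presbyopia=False), zoom_factor(55, presbyopia=True))
```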

Collaborative animation production from students' perspective: creating short 3D CG films through international team-work

Massive Collaborative Animation Projects (MCAP) was founded in 2016 by Dr. William Joel (Western Connecticut State University) to test students' collaborative abilities and provide experience that will allow them to grow professionally and academically. The MCAP 1 production is a children's ghost story designed to test the massive collaborative structure. The goal of MCAP 2 is to create an animation for use in planetariums worldwide. Currently, there are nearly one hundred student contributors from universities in Alaska, California, Colorado, Connecticut, Japan, Michigan, South Korea, and Taiwan.

Curved support structures and meshes with spherical vertex stars

The computation and construction of curved beams along freeform skins pose many challenges. We show how to use surfaces of constant mean curvature (CMC) to compute beam networks with beneficial properties, both aesthetically and from a fabrication perspective. To explore variations of such networks we introduce a new discretization of CMC surfaces as quadrilateral meshes with spherical vertex stars and right node angles. The computed non-CMC surface variations can be seen as a path in design space - exploring possible solutions in a neighborhood, or represent an actual erection sequence exploiting elastic material behavior.

Volume: 3D reconstruction of history for immersive platforms

This paper presents Volume, a software toolkit that enables users to experiment with expressive reconstructions of archival and/or historical materials as volumetric renderings. Making use of contemporary deep learning methods, Volume re-imagines 2D images as volumetric 3D assets. These assets can then be incorporated into virtual, augmented and mixed reality experiences.

SESSION: Research

3D-mesh cutting based on fracture photographs

We propose a new approach to 3D mesh fracturing for the fields of animation and game production. Using machine learning and computer vision to analyze real fractures, we produced a solution capable of creating realistic fractures in real time.

A seamless texture color adjustment method for large-scale terrain reconstruction

We present a technique to generate realistic, high-quality, seamless textures suitable for reconstructing large-scale 3D terrains. We focus on adjusting color differences caused by camera variation and illumination transitions in texture reconstruction pipelines. Seams between separately processed areas are also an important concern in large terrain models. The proposed technique corrects these problems by normalizing texture colors and interpolating texture adjustment colors.

BOLCOF: base optimization for middle layer completion of 3D-printed objects without failure

3D printing can fail before the printing process completes due to shaking, errors in printer settings, or the shape of the support material and 3D model. In such cases it is difficult to restart printing from the last printed layer on conventional 3D printers, as registration between the nozzle and the partially printed part is lost. To restart printing from a middle layer, Wu et al. [Wu et al. 2017] proposed a method of printing while rotating the base of the 3D printer. However, this approach requires time for the two objects to bond after segmentation, and methods for adhesion between parts are limited. Wu et al. [Wu et al. 2016] also proposed printing 3D models at any angle through 5-axis rotation of the printer base, but the manufacturing cost of this approach is relatively high. We therefore propose a system that prints 3D models onto an existing object by utilizing an infrared depth camera. Our method makes it possible to attach a 3D-printed object to a free-form object in the middle of printing by recognizing its shape with the depth camera.

Deep motion transfer without big data

This paper presents a novel motion transfer algorithm that copies content motion into a specific style character. The input consists of two motions. One is a content motion such as walking or running, and the other is movement style such as zombie or Krall. The algorithm automatically generates the synthesized motion such as walking zombie, walking Krall, running zombie, or running Krall. In order to obtain natural results, the method adopts the generative power of deep neural networks. Compared to previous neural approaches, the proposed algorithm shows better quality, runs extremely fast, does not require big data, and supports user-controllable style weights.

Depth assisted full resolution network for single image-based view synthesis

Synthesizing images of novel viewpoints is widely investigated in computer vision and graphics. Most work on this topic uses multi-view images to synthesize viewpoints in between. In this paper, we consider extrapolation, and we take a step further by extrapolating from a single input image. This task is very challenging for two major reasons. First, some parts of the scene may not be observed from the input viewpoint but are required for novel ones. Second, 3D information is lacking for single-view input but is crucial for determining pixel movements between viewpoints. Although the task is very challenging, we observe that human brains are able to imagine novel viewpoints: in our daily lives, they have learned to understand the depth order of objects in a scene [Chen et al. 2016] and to infer what the scene looks like from another viewpoint.

Depth from focus for 3D reconstruction by iteratively building uniformly focused image set

Depth estimation from a set of differently focused images has been a practical approach to 3D reconstruction with existing color cameras. In this paper, we propose a depth-from-focus (DFF) method for accurate depth estimation using a single commodity color camera. We iteratively investigate the appearance changes in the spatial and frequency domains across the focused image frames. To achieve sub-frame accuracy in depth estimation, the optimal location of the in-focus frame is estimated by fitting a parameterized polynomial curve to the dissimilarity measurements of each pixel. Quantitative and qualitative evaluations on various test image sets show the promising performance of the proposed method.
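
A minimal sketch of the DFF core: evaluate a per-pixel focus measure over the stack, then fit a parabola through the three samples around each pixel's peak for a fractional (sub-frame) index. The Laplacian-energy measure below is a common choice standing in for the authors' dissimilarity measurements.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack):
    """stack: (F, H, W) grayscale frames ordered by focus distance."""
    F = stack.shape[0]
    # Per-pixel focus measure: locally averaged Laplacian energy (an assumption).
    measure = np.stack([uniform_filter(laplace(f) ** 2, size=9) for f in stack])
    k = np.clip(measure.argmax(axis=0), 1, F - 2)          # best discrete frame
    iy, ix = np.indices(k.shape)
    fm1, f0, fp1 = measure[k - 1, iy, ix], measure[k, iy, ix], measure[k + 1, iy, ix]
    # Sub-frame peak: vertex of the parabola through the three samples.
    denom = fm1 - 2 * f0 + fp1
    offset = np.where(np.abs(denom) > 1e-12, 0.5 * (fm1 - fp1) / denom, 0.0)
    return k + np.clip(offset, -0.5, 0.5)                  # fractional frame index

depth = depth_from_focus(np.random.rand(12, 64, 64))       # toy focal stack
```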

Efficient multispectral facial capture with monochrome cameras

We propose a variant of polarized gradient illumination facial scanning which uses monochrome instead of color cameras to achieve more efficient and higher-resolution results. In typical polarized gradient facial scanning, sub-millimeter geometric detail is acquired by photographing the subject in eight or more polarized spherical gradient lighting conditions made with white LEDs, and RGB cameras are used to acquire color texture maps of the subject's appearance. In our approach, we replace the color cameras and white LEDs with monochrome cameras and multispectral colored LEDs, leveraging the fact that color images can be formed from successive monochrome images recorded under different illumination colors. While a naive extension of the scanning process to this setup would multiply the number of images by the number of color channels, we show that the surface detail maps can be estimated directly from monochrome imagery, so that only n additional photographs are required, where n is the number of added spectral channels. We also introduce a new multispectral optical flow approach to align images across spectral channels in the presence of slight subject motion. Lastly, for the case where a capture system's white light sources are polarized and its multispectral colored LEDs are not, we introduce the technique of multispectral polarization promotion, where we estimate the cross- and parallel-polarized monochrome images for each spectral channel from their corresponding images under a full sphere of even, unpolarized illumination. We demonstrate that this technique allows us to efficiently acquire a full-color (or even multispectral) facial scan using monochrome cameras, unpolarized multispectral colored LEDs, and polarized white LEDs.

Evaluation of stretched thread lengths in spinnability simulations

In this paper, we report an evaluation of stretched thread lengths in spinnability simulations. There are many previous studies of viscoelastic fluids; however, few represent "spinnability", the property of a material being stretched thin and long. Although some studies represented the thread-forming property, they did not evaluate the stretched length of the material. We, too, previously tried to represent the spinnability of viscoelastic fluids, but the simulation results did not resemble the real material. Therefore, we perform spinnability simulations with three kinds of models and evaluate stretched thread lengths by comparing the simulation results with data from the literature.

Food texture manipulation by face deformation

Food texture plays an important role in the experience of food. Researchers have proposed various methods to manipulate the perception of food texture using auditory and physical stimulation. In this paper, we demonstrate a system that presents visually modified mastication movements in real time to manipulate the perception of food texture, because visual stimuli work efficiently to enrich other food-related perceptions, and showing someone their deformed posture changes somatosensory perception. The results of our experiments suggest that adding real-time feedback of facial deformation when participants open their mouths can increase the perceived chewiness of food. Moreover, perceptions of hardness and adhesiveness improved when participants saw their modified face or listened to their unmodified chewing sound, while both perceptions decreased when participants were presented with both stimuli. These results indicate a contrast effect.

From visible to printable: thin surface with implicit interior structures

Converting a surface-based object into a thin-surface solid representation is an essential problem for additive manufacturing. This paper proposes a simple way to thicken surfaces into thin solids based on implicit modelling techniques. With the proposed technique, any surface-based object can be converted into a 3D-printing-friendly form that seamlessly combines both the geometric shape and its interior material structures in a single representation.

Gesture recognition using leap motion: a comparison between machine learning algorithms

In this paper we compare the effectiveness of various machine learning algorithms for real-time hand gesture recognition, in order to find the optimal way to identify static hand gestures, as well as the optimal sample size for the training step of the algorithms.

In our framework, Leap Motion and Unity were used to extract the data, which was then used to train models in Python with scikit-learn. Utilizing normalized information about the hands and fingers, we achieved a hit rate of 97% using the decision tree classifier.
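
The training step reads as a standard scikit-learn pipeline; a minimal sketch under assumed feature sizes is shown below (the random stand-in data scores near chance, whereas the paper reports 97% on real Leap Motion features).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Random features stand in for the normalized hand/finger data exported from
# Unity; sizes are assumptions.
rng = np.random.default_rng(0)
n_samples, n_features, n_gestures = 3000, 15, 5
X = rng.normal(size=(n_samples, n_features))       # e.g. fingertip offsets
y = rng.integers(0, n_gestures, size=n_samples)    # gesture labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"hit rate: {clf.score(X_test, y_test):.2%}")
```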

Improving regularity of the centroidal voronoi tessellation

We present a novel method for valence optimization of the centroidal Voronoi tessellation (CVT). We first identify three commonly appearing atomic configurations of locally irregular Voronoi cells, and then design specific atomic operations for each configuration to improve regularity within the CVT framework.
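
As background for what the method optimizes, a CVT itself can be approximated by Lloyd-style relaxation; the Monte Carlo variant below (a k-means step on dense uniform samples) is a common stand-in for the underlying CVT computation, not the authors' valence operations.

```python
import numpy as np

def lloyd_cvt(sites, iters=30, n_samples=50_000, seed=0):
    """Approximate a 2D CVT in the unit square by Monte Carlo Lloyd relaxation."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        samples = rng.random((n_samples, 2))
        # Nearest site defines each sample's Voronoi cell.
        d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
        owner = d2.argmin(axis=1)
        for i in range(len(sites)):                 # move sites to cell centroids
            cell = samples[owner == i]
            if len(cell):
                sites[i] = cell.mean(axis=0)
    return sites

sites = lloyd_cvt(np.random.default_rng(1).random((50, 2)))
```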

Interactive dance performance evaluation using timing and accuracy similarity

This paper presents a method for evaluating how well a learner mimics a teacher's dance. We estimate the human skeletons, extract dance features such as torso, first-, and second-degree features, and compute a similarity score between the teacher's and the learner's dance sequences in terms of timing and pose accuracy. To validate the proposed evaluation method, we conducted several experiments on a large K-Pop dance database. The proposed method achieved 98% concordance with experts' evaluations of dance performance.
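
One plausible reading of the timing-and-accuracy score is sketched below: pose accuracy as mean per-frame cosine similarity, timing from the best alignment lag. The lag search and the equal weighting are assumptions, not the paper's exact formulation.

```python
import numpy as np

def pose_similarity(a, b):
    """a, b: (T, D) per-frame pose feature vectors (e.g., joint angle sets)."""
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8
    return (num / den).mean()

def dance_score(teacher, learner, max_lag=15):
    best_sim, best_lag = -1.0, 0
    for lag in range(-max_lag, max_lag + 1):       # search over timing offsets
        ta = teacher[max(0, -lag):len(teacher) - max(0, lag)]
        le = learner[max(0, lag):len(learner) - max(0, -lag)]
        n = min(len(ta), len(le))
        sim = pose_similarity(ta[:n], le[:n])
        if sim > best_sim:
            best_sim, best_lag = sim, lag
    timing = 1.0 - abs(best_lag) / max_lag         # 1.0 = perfectly in sync
    return 0.5 * best_sim + 0.5 * timing           # assumed equal weighting

score = dance_score(np.random.rand(300, 24), np.random.rand(300, 24))
```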

Large-scale fabrication with interior zometool structure

In recent years, personalized fabrication has attracted much attention due to the spread of consumer-level 3D printers. However, consumer 3D printers still suffer from shortcomings such as long production times and limited output sizes, which are undesirable for large-scale rapid prototyping. We propose a hybrid 3D fabrication method that combines 3D printing with a Zometool structure for time- and cost-effective fabrication of large objects. The key to our approach is utilizing a compact, sturdy and re-usable internal structure (Zometool) to infill fabrications, replacing time- and material-consuming 3D-printed infill. Unlike the laser-cut shapes used in [Song et al. 2016], our inner structure is reusable. As a result, we can significantly reduce cost and time by printing only thin 3D external shells.

RSGAN: face swapping and editing using face and hair representation in latent spaces

This abstract introduces a generative neural network for face swapping and editing face images. We refer to this network as a "region-separative generative adversarial network" (RSGAN). In existing deep generative models such as variational autoencoders (VAE) and generative adversarial networks (GAN), the training data must represent what the generative models synthesize. For example, image inpainting is achieved by training on images with and without holes. However, it is difficult or even impossible to prepare a dataset that includes face images both before and after face swapping, because the faces of real people cannot be swapped without surgical operations. We tackle this problem by training the network so that it synthesizes a natural face image from an arbitrary pair of face and hair appearances. In addition to face swapping, the proposed network can be applied to other editing applications, such as visual attribute editing and random face-part synthesis.

Simulation of emergent rippling on growing thin-shells

Many thin tissues, such as leaves and flower petals, exhibit rippling and buckling patterns along their edges as they grow (Figure 1). Experiments with plastic materials have replicated the rippling patterns found in nature and shown that such patterns exhibit a fractal quality of ripples upon ripples - a so-called "buckling cascade" [Eran et al. 2004]. Such patterns are influenced by many physical mechanisms, including stress forces, physical properties of materials (e.g., stiffness), and space constraints [Prusinkiewicz and Barbier de Reuille 2010]. Physics-based computer animation that produces emergent rippling patterns on thin surfaces can improve the realism of virtual flowers and leaves, and can also help to explain which physical mechanisms are most important for controlling the morphology of tissues with buckling cascades.

Skin+: programmable skin as a visuo-tactile interface

Wearable technologies have long supported and augmented our body and sensory functions. Skin+ introduces a novel bidirectional on-skin interface that serves not only as haptic feedback to oneself but also as a visual display that mediates touch sensation to others. In this paper, we describe the design of Skin+ and its usability in a variety of applications. We use a shape-changing auxetic structure to build this programmable, coherent visuo-tactile interface; the combination of shape-memory alloy with an auxetic structure enables a lightweight haptic device that can be worn seamlessly on the skin.

Towards a stochastic depth maps estimation for textureless and quite specular surfaces

The human brain constantly solves enormous and challenging optimization problems in vision. Thanks to the formidable meta-heuristics engine the brain is equipped with, in addition to the widespread associative inputs from the other senses that act as excellent initial guesses for a heuristic algorithm, the produced solutions are close to optimal. In the same spirit, we address the problem of computing the depth and normal maps of a scene under natural but unknown illumination, utilizing particle swarm optimization (PSO) to maximize a sophisticated photo-consistency function. For each output pixel, the swarm is initialized with good guesses, starting with SIFT features as well as the optimal solutions (depth, normal) found previously during the optimization. This leads to significantly better accuracy and robustness on textureless or highly specular surfaces.
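
A minimal PSO sketch follows; the toy quadratic objective stands in for the photo-consistency function over (depth, normal), and the inertia/attraction constants are common textbook values, not the paper's settings.

```python
import numpy as np

def pso(objective, bounds, n_particles=40, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Maximize `objective` over box `bounds = (lo, hi)` with a particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_val.argmax()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmax()].copy()
    return g

# Toy stand-in for photo-consistency: a smooth peak at depth 2.0, tilt (0.1, -0.2).
photo_consistency = lambda p: -((p[0] - 2) ** 2 + (p[1] - 0.1) ** 2 + (p[2] + 0.2) ** 2)
print(pso(photo_consistency, (np.array([0.0, -1.0, -1.0]), np.array([5.0, 1.0, 1.0]))))
```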

Visual microscope for massive genomics datasets, expanded perception and interaction

An innovative, fully interactive, ultra-high-resolution navigation tool has been developed to browse and analyze gene expression levels from human cancer cells, acting as a visual microscope on the data. The tool uses high-performance visualization and computer graphics technology to enable genome scientists to observe the evolution of regulatory elements across time and gain valuable insights from their datasets as never before.

Withering fruits: vegetable matter decay and fungus growth

We propose a parametrised method for recreating drying and decaying vegetable matter from the fruit category, taking into account the biological characteristics of the decaying fruit. The simulation addresses three main phenomena: mould propagation, volume shrinking, and fungus growth on the fruit's surface. The spread of decay is achieved using a Reaction-Diffusion method, a Finite Element Method is used for shrinking and wrinkling of the fruit shell, and the spread of the fruit's fungal infection is described by a Diffusion Limited Aggregation algorithm. We extend existing fruit-decay approaches, improving the shrinking behaviour of decaying fruits and adding independent fungal growth. Our approach integrates a user interface for artist directability and fine control of the simulation parameters.
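
The reaction-diffusion ingredient can be sketched with a standard Gray-Scott system; the feed/kill rates below are generic pattern-forming values and only a stand-in for the paper's mould-propagation parameters.

```python
import numpy as np

def laplacian(Z):
    """Five-point Laplacian on a periodic 2D grid."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    """Classic Gray-Scott reaction-diffusion; V spreads from a seeded patch."""
    U, V = np.ones((n, n)), np.zeros((n, n))
    U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5     # seed an initial "infection" patch
    V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25
    for _ in range(steps):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + f * (1 - U)
        V += Dv * laplacian(V) + uvv - (f + k) * V
    return V                                   # V maps to decayed area coverage

decay_field = gray_scott()
```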