SIGGRAPH '21 Posters: Special Interest Group on Computer Graphics and Interactive Techniques Conference Posters

SESSION: Art & Design

Animated Futurist Sculpting as Dynamic Implicit Shapes

In this work, we present an approach for obtaining futurist sculptures. Our approach is inspired by the works of Italian Futurist artists such as Umberto Boccioni. Futurism, as an art movement, aims to define forms that are a product of time yet permanent in space. We have developed a methodology to produce a set of futurist sculptures from any animation of an object defined as a triangular mesh. Each produced futurist sculpture is a still frame of what can be rendered as a sculpture animation. Our method is based on converting a given polygonal mesh and its motion into an implicit shape in 4D space, consisting of three spatial dimensions and one temporal dimension. To create each specific futurist sculpture, we compute a subset of this 4D implicit shape over a given time interval. The resulting immersion of the 4D structure into the 3D spatial domain provides the desired futurist sculpture for that time interval. The most important aspect of our methodology is the conversion of the animated polygonal mesh into a 4D implicit shape. We first convert the polygonal mesh into a set of particles, each of which can have its own color. All points within a given distance of a particle's trajectory form an implicitly defined swept volume [Kim et al. 2004]. These swept volumes appear similar to the extrusion of a circle along a curve, but they are guaranteed to be free of artifacts caused by intersections.
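A minimal sketch of the swept-volume field, assuming each trajectory is sampled at discrete time steps; the function names and the union-of-tubes formulation are illustrative rather than the authors' exact implementation:

```python
import numpy as np

def swept_volume_field(p, trajectory, radius):
    """Implicit field for the swept volume of one particle.

    p          : (3,) query point in space
    trajectory : (T, 3) sampled positions of the particle over a time interval
    radius     : thickness of the swept tube around the trajectory

    Negative values lie inside the implicitly defined swept volume.
    """
    d = np.linalg.norm(trajectory - p, axis=1).min()  # distance to trajectory
    return d - radius

def sculpture_field(p, trajectories, radius):
    """Union of all particle tubes: the sculpture for the chosen interval."""
    return min(swept_volume_field(p, t, radius) for t in trajectories)
```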

Caricature Creation with Conformal Mapping in Complex Domain

Caricature is an art form based on the exaggeration of features [Akleman 1997; Akleman et al. 2000; Akleman and Reisch 2004; Brennan 1985; Klare et al. 2012; Liang et al. 2002]. An important property of feature exaggeration is that it is not deformation: by deforming features we can obtain funny-looking portraits, but the resulting features will not look exaggerated. In this work, we present an approach for extreme exaggeration of facial features to obtain a caricature effect. Our approach is based on the well-known conformal property of analytic maps in the complex domain: any analytic map with a non-vanishing derivative is angle-preserving, which is crucial for caricature generation. Without angle preservation, maps can produce deformations that look funny or grotesque, but not like caricature. We have developed a particular mapping in the complex domain and show that we can obtain a wide variety of faces starting from any illustration (or photograph) of a human face.
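The abstract does not disclose the particular mapping used; as a hypothetical illustration of the principle, any power map f(z) = z^alpha is conformal away from the origin and exaggerates features non-uniformly around a chosen center:

```python
import numpy as np

def conformal_exaggerate(points, center, alpha=1.5):
    """Apply a hypothetical conformal map f(z) = z**alpha to 2D face points.

    points : (N, 2) facial landmark or contour coordinates
    center : (2,) expansion center (e.g., midpoint between the eyes)
    alpha  : exponent > 1 pushes features outward non-uniformly

    f is analytic, hence angle-preserving away from z = 0 (and the
    principal branch cut for non-integer alpha).
    """
    z = (points[:, 0] - center[0]) + 1j * (points[:, 1] - center[1])
    w = z ** alpha                      # conformal away from the origin
    return np.stack([w.real + center[0], w.imag + center[1]], axis=1)
```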

Dynamic Projection Mapping for Silkworms

Goshuin 2.0: Construction of the World’s Largest Goshuin Dataset and Automatic Generation System of Goshuin with Neural Style Transfer

Integrating Abstract Expressionism with 3D lighting within the Light-in-Space Movement

Interactive DPM for Thin Plants with the Latency Measurement

Procedural real time live drawing animation

This work presents a real-time method for creating a procedural drawing animation from a simple image and a set of parameters. The resulting animation, based on a GPU particle simulation, respects the structure and dynamics of the input image to draw and move brushes. Our work can be helpful both for creating live drawing animations and, more generally, for creating stylized image reproductions. The set of parameters allows the user to achieve a wide range of artistic styles.
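A CPU sketch of the idea, with illustrative steering rules (the actual method runs as a GPU particle simulation driven by user parameters): particles drift along image-gradient tangents so brush strokes follow the image structure.

```python
import numpy as np

def step_particles(pos, vel, image, dt=1.0, speed=2.0):
    """Advance drawing particles along the structure of a grayscale image.

    pos, vel : (N, 2) particle positions (x, y) and velocities, pixel units
    image    : (H, W) grayscale input image guiding the brush strokes
    """
    grow, gcol = np.gradient(image)                  # d/dy, d/dx
    col = np.clip(pos[:, 0].astype(int), 0, image.shape[1] - 1)
    row = np.clip(pos[:, 1].astype(int), 0, image.shape[0] - 1)
    gx, gy = gcol[row, col], grow[row, col]
    tangent = np.stack([-gy, gx], axis=1)            # follow edges, not cross them
    norm = np.linalg.norm(tangent, axis=1, keepdims=True) + 1e-8
    vel = 0.9 * vel + 0.1 * speed * tangent / norm   # inertial smoothing
    return pos + dt * vel, vel
```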

WhiteStone: A Tangible interactive device for revitalizing Qiang language and culture

The Qiang are an ancient ethnic minority in western China; however, the number of people who can speak the Qiang language is decreasing due to the lack of a written script. Although the protection of intangible cultural heritage has been widely discussed, there is still a dearth of interaction design for the Qiang people's language and culture. This research aims to determine the efficacy of tangible interactive games in encouraging Qiang people's interest in learning the Qiang language and increasing their cultural awareness. To better understand the current state and challenges of the Qiang language and culture, we conducted a three-day field investigation in Qiang villages. Based on the field study's key findings, we created “WhiteStone,” a tangible interactive projection device based on a heroic epic of the Qiang. This poster examines the design opportunities for tangible interactive games to revitalize the Qiang language and culture.

SESSION: Augmented & Virtual Realities

A Hybrid 2D-3D Tangible Interface for Virtual Reality

Virtual Reality (VR) controllers are widely used for easy object selection and manipulation as a primary 3D input method in the virtual environment. Mobile devices with touchscreens like smartphones or tablets provide precise 2D tangible inputs. This research combines a VR controller and a touch-based smartphone to create a novel hybrid 2D-3D interface for enhanced VR interaction. We present the interface design and its implementation and also demonstrate four featured scenarios with the hybrid interface.

An Examination of View-Settings for Long Texts in VR Reading

To reduce the burden of reading long texts in VR, this study identifies view settings that provide better readability and less fatigue. As view settings, we focus on font type, font color, font size, and the viewing distance from the text. Our results show the relationships among these view settings, readability, and fatigue for long texts in VR reading.

Applying Virtual Reality for Systematic Gaze Pattern Evaluation in Simulated Retinitis Pigmentosa Patients

We apply virtual reality headsets with eye-tracking capabilities to evaluate new training methods for patients living with loss of peripheral vision (“tunnel vision”). We show that systematic gaze patterns, taught in a virtual reality environment, significantly increase the participants' effectively perceived visual area and reduce the number of obstacle collisions in navigation tasks.

Feedback of Rotational Sensation Experienced by Body for Immersive Telepresence

In our previous study, we proposed a telepresence system that transfers the riding sensation of a vehicle (a Segway) to assist collaborative tasks. The system provides a local expert who remotely attends the task with not only a view of the remote environment captured by a camera but also the vestibular sensation accompanying the camera's movement. In this study, we examined rotational feedback delivered by a rotary seat when the camera rotates. The measured intensity adjustment showed that the angular acceleration of the rotary seat was about half that of the camera rotation. Further, simulator sickness questionnaire scores showed that in-phase rotation of the seat with the camera is appropriate for suppressing virtual reality sickness. This indicates that the required vestibular intensity is quite low compared with the visual cue shown on the head-mounted display, which allows a designer to develop a sensation-feedback device with a low-strength actuator.

Holo-Box: Level-of-Detail Glanceable Interfaces for Augmented Reality

Glanceable interfaces are Augmented Reality (AR) User Interfaces (UIs) for information retrieval “at a glance,” relying on eye gaze for implicit input. While they provide rapid information retrieval, they often occlude a large part of the real world, and this is compounded as the amount of virtual information increases. Interacting with complex glanceable interfaces also often results in unintentional eye-gaze interactions and selections due to the Midas Touch problem. In this work, we present Holo-box, an AR UI design that combines compact 2D glanceable interfaces with 3D virtual “Holo-boxes.” The glanceable 2D interface provides compact information at a glance, while the Holo-box supports explicit input such as hand tracking, activated only when necessary. This sidesteps the Midas Touch problem and yields level-of-detail (LOD) AR glanceable UIs. We test our proposed system inside a real-world machine shop, providing on-demand virtual information while minimizing unintentional occlusion of the real world.

Pockets: User-Assigned Menus Based on Physical Buttons for Virtual Environments

We present Pockets, a simple means of organizing and carrying 3D tools and other objects in virtual environments. Previous work has used 3D tools with visually obvious affordances in immersive virtual environments instead of more traditional menus; however, these applications still require a 2D menu for selecting the 3D tools. Pockets instead uses a belt with physical buttons to which objects can be assigned. The Pockets design not only lets users rely on muscle memory to store and retrieve objects, making tool use more efficient, but also solves the occlusion problem associated with state-of-the-art approaches such as 2D menus tied to the body or to world space.

PushToSki - An Indoor Ski Training System Using Haptic Feedback

Haptic feedback is an intuitive way of improving required postures in sports without making the trainee turn their head toward visual cues, which could worsen their overall body pose. However, such feedback has not been available for a dynamic sport like alpine skiing, which is why we propose a virtual reality ski training system that uses vibration as a haptic feedback method. Our system combines a commercially available indoor ski simulator and several trackers that capture the user's motion with a set of vibration motors that provide direct haptic feedback to the user. Our system can therefore give haptic feedback even while the trainee is moving on the simulator.

SESSION: Displays

Adaptive Radiometric Compensation on Deforming Surfaces

In this research, we propose an adaptive radiometric compensation method for continuously deforming projection surfaces that uses only a projector and a camera. Radiometric compensation has been widely studied as a technique for making various objects usable as screens by canceling out the influence of the color and pattern of the projection target. However, since the inter-pixel correspondence between the projector and the camera must be maintained continuously, projection targets have so far been limited to stationary objects. We therefore propose a method to estimate the inter-pixel correspondence in real time using only an ordinary projector and camera. The method greatly expands the scope of projection mapping by applying radiometric compensation to deforming cloth, making it usable as a screen.
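A minimal sketch of per-pixel radiometric compensation under the common linear model camera = albedo * projector + ambient; the paper's actual contribution, maintaining the projector-camera pixel correspondence on a deforming surface, is omitted here.

```python
import numpy as np

def compensate(target, albedo, ambient):
    """Projector input that makes a colored/patterned surface show `target`.

    Assumes the linear model: camera = albedo * projector + ambient,
    with all images (H, W, 3) in [0, 1] and already pixel-aligned.
    """
    comp = (target - ambient) / np.maximum(albedo, 1e-3)
    return np.clip(comp, 0.0, 1.0)  # out-of-gamut values are clipped
```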

Enabling Reflective & Refractive Depth Representation in Computer-Generated Holography

Light-field Projection for Tangible Projection Mapping

In the present study, we propose a novel form of projection mapping that uses a 3D light-field image as the light source. In recent years, spatial augmented reality has evolved into dynamic projection mapping, which extends the target to moving objects. However, it typically requires multiple projection and measurement devices, which causes various problems such as increased psychological pressure on users and a reduced production effect. Therefore, based on the concept of stealth projection, which hides the projection device using aerial-imaging technology, we propose a dynamic projection mapping method using a 3D light-field image generated in real time according to the position and orientation of the target object. As a result, a simple light-field projector consisting of an LCD panel and a lenticular lens provides projection mapping for moving objects while visually hiding the projection devices.

Omnidirectional display that presents information to the ambient environment with optical transparency

Real-time Projection of Lip Animation onto Face Masks using OmniProcam

This paper describes OmniProcam, a system that enables 360-degree horizontal projection through a fisheye lens with a coaxial procam, in which the optical axes of the camera and projector are exactly aligned. Combined with 2D marker recognition, the OmniProcam can display images onto screens at arbitrary positions in 3D space. As an example application, we developed a system that projects lip animation onto the user's face mask for better communication in physical meetings during the current COVID-19 situation. The system recognizes the user's speech, generates the lip animation using Lipsync, and projects the animation onto the user's face mask.

TeraFoils: Design and Rapid Fabrication Techniques for Binary Holographic Structures in the Terahertz Region

In this paper, we introduce TeraFoils, a method for designing and fabricating material-based structures that act as binary holograms in the terahertz region. We outline the design, fabrication, imaging, and data-processing steps for embedding information inside physical objects, and explore a method for creating holographic structures with silver-foiled paper: a sheet on which silver foil adheres wherever toner has been printed, produced with a home-use laser printer and an electric iron. Wave-propagation calculations were performed to design a binary-amplitude hologram. Following the designed pattern, we fabricated silver-foiled binary holograms for the sub-terahertz range (0.1 THz) and confirmed their function using a two-dimensional THz sensor.
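One standard way to realize such wave-propagation calculations is the angular spectrum method; the sketch below backpropagates a target field and thresholds it into a binary-amplitude pattern. All parameters are illustrative (0.1 THz gives a wavelength of 3 mm), and the thresholding rule is an assumption, not necessarily the paper's design procedure.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field by distance z (angular spectrum method)."""
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=pitch)
    fx, fy = np.meshgrid(f, f)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - fx**2 - fy**2, 0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

wavelength = 3e-3          # 0.1 THz
pitch = 1e-3               # foil-dot pitch (illustrative)
target = np.zeros((256, 256), complex)
target[96:160, 96:160] = 1.0                              # desired sub-THz image
back = angular_spectrum(target, wavelength, pitch, -0.1)  # backpropagate 10 cm
binary = (back.real > 0).astype(float)                    # binary-amplitude pattern
```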

Wide Angular Range Dynamic Projection Mapping Method Applied to the Projection on a Flying Drone

In this study, we propose a method for realizing dynamic projection mapping on a target moving at high speed within a wide angular range around the projection equipment, using a high-speed gaze-control system; we implemented and evaluated the method. We also combined the proposed system with a teleconferencing system and conducted an experiment in which a drone was used as an avatar robot for communication with remote locations.

SESSION: Image Processing & Computer Vision

Cross Sample Similarity for Stable Training of GAN

Recently, attention networks that find similarities across non-local areas within a 2D image have shown outstanding improvements in multi-class generation tasks for GANs. However, they frequently exhibit unstable training and sometimes fall into mode collapse. We propose a cross-sample similarity loss that penalizes features that are similar across fake samples but rarely observed in real samples. The proposed method improves FID scores over baseline methods on CelebA and LSUN, and decreases mode collapse on CIFAR-10 [Krizhevsky 2009].
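The abstract does not give the exact loss; a plausible PyTorch sketch penalizes pairwise feature similarities among fake samples whenever they exceed the level observed among real samples:

```python
import torch
import torch.nn.functional as F

def cross_sample_similarity_loss(feat_fake, feat_real):
    """Penalize fake-sample features that are more mutually similar than
    real-sample features are (a hypothetical form of the proposed loss).

    feat_fake, feat_real : (B, D) features from a discriminator layer
    """
    f = F.normalize(feat_fake, dim=1)
    r = F.normalize(feat_real, dim=1)
    sim_fake = f @ f.t()              # (B, B) cosine similarities among fakes
    sim_real = r @ r.t()
    b = f.size(0)
    off = ~torch.eye(b, dtype=torch.bool, device=f.device)  # skip self-pairs
    # Hinge: only similarities exceeding the real-data level are penalized.
    return F.relu(sim_fake[off] - sim_real[off]).mean()
```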

PanoSynthVR: View Synthesis From A Single Input Panorama with Multi-Cylinder Images

We introduce a method to automatically convert a single panoramic input into a multi-cylinder image representation that supports real-time, free-viewpoint view synthesis for virtual reality. We apply an existing convolutional neural network trained on pinhole images to a cylindrical panorama with wrap padding to ensure agreement between the left and right edges. The network outputs a stack of semi-transparent panoramas at varying depths which can be easily rendered and composited with over blending. Initial experiments show that the method produces convincing parallax and cleaner object boundaries than a textured mesh representation.
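A minimal sketch of the back-to-front “over” compositing of the semi-transparent panorama stack; the layer layout is assumed for illustration.

```python
import numpy as np

def composite_over(layers):
    """Composite a depth-sorted stack of RGBA panoramas, back to front.

    layers : (L, H, W, 4) with RGB and alpha in [0, 1];
             layers[0] is the farthest cylinder, layers[-1] the nearest.
    """
    out = np.zeros(layers.shape[1:3] + (3,))
    for layer in layers:                      # far -> near
        rgb, a = layer[..., :3], layer[..., 3:4]
        out = a * rgb + (1 - a) * out         # standard "over" operator
    return out
```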

Real-time sports video analysis for video content viewing with haptic information

Screenshots from Screen Photography

Screenshots are a frequently used tool in daily life, yet screenshot-capturing techniques are rarely discussed in computer graphics and image processing research. Capturing a screenshot is not always as easy as it seems. First, the target device must have screenshot software installed or featured in its operating system. Second, the user must have input access to control the screenshot software on the target device. Third, the target device must have Internet access or another hardware interface (such as a USB port) so the user can transfer the screenshots out. When these requirements are not met, people often use their smartphones to photograph the screen as a substitute for screenshots. This allows direct sharing of the screen content, but the fidelity of the captured content is clearly inferior to software screenshots. Might we achieve a computer graphics solution that directly converts a screen photograph into a screenshot that looks as if it had been taken in software?

Silent Speech and Emotion Recognition from Vocal Tract Shape Dynamics in Real-Time MRI

We propose a novel deep neural network-based learning framework that understands acoustic information in the variable-length sequences of vocal tract shaping during speech production, captured by real-time magnetic resonance imaging (rtMRI), and translates it into text. In our experiments, it achieved a 40.6% PER at the sentence level, considerably better than existing models. We also analyzed variations in the geometry of articulation in each sub-region of the vocal tract with respect to different emotions and genders. The results suggest that each sub-region's distortion is affected by both emotion and gender.

View Synthesis In Casually Captured Scenes Using a Cylindrical Neural Radiance Field With Exposure Compensation

We extend Neural Radiance Fields (NeRF) with a cylindrical parameterization that enables rendering photorealistic novel views of 360° outward-facing scenes. We further introduce a learned exposure-compensation parameter to account for the varying exposure across training images that may occur when casually capturing a scene. We evaluate our method on a variety of 360° casually captured scenes.
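A sketch of the two ingredients, under stated assumptions: an outward-facing cylindrical ray parameterization (exact convention assumed) and a per-image learned exposure scalar applied to rendered colors.

```python
import numpy as np

def cylinder_rays(width, height, v_range=1.0):
    """Rays for a 360-degree outward-facing cylindrical image plane."""
    theta = np.linspace(-np.pi, np.pi, width, endpoint=False)
    v = np.linspace(-v_range / 2, v_range / 2, height)
    t, vv = np.meshgrid(theta, v)
    dirs = np.stack([np.cos(t), np.sin(t), vv], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    origins = np.zeros_like(dirs)           # all rays leave the cylinder axis
    return origins, dirs

def expose(rendered_rgb, log_exposure):
    """Learned exposure compensation: one latent scalar per training image."""
    return rendered_rgb * np.exp(log_exposure)
```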

SESSION: Modeling & Geometry

Creating Crowd Characters Through Procedural Deformation

Developable Surface Segmentation For CAD Models

We present a novel method for segmenting CAD models into developable patches by detecting curve-like features in the Gauss images of the corresponding patches. A region-growing approach is employed to detect planar and curved developable patches. The Gauss image of each segmented patch is constrained to be curve-like via principal component analysis and Pearson correlation analysis. Experimental results demonstrate that our approach produces high-quality segmentations on CAD models containing all kinds of developable surfaces.
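A sketch of the curve-likeness test on a patch's Gauss image (its unit normals): for planar, cylindrical, and conical patches the normal cloud is (nearly) planar on the sphere, so the smallest PCA eigenvalue is near zero. The threshold is illustrative, and general developables additionally need the paper's correlation analysis.

```python
import numpy as np

def is_developable(normals, tol=1e-3):
    """Test whether a patch's Gauss image is curve-like via PCA.

    normals : (N, 3) unit normals sampled on the patch.
    A developable surface has zero Gaussian curvature, so its normals
    sweep out a curve (not an area) on the unit sphere.
    """
    centered = normals - normals.mean(axis=0)
    cov = centered.T @ centered / len(normals)
    eigvals = np.sort(np.linalg.eigvalsh(cov))    # ascending
    return eigvals[0] < tol                       # thin in one direction
```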

OpenMfx: An API for cross-software non-destructible mesh effects

Quantum Nodes: Quantum Computing Applied to 3D Modeling

Quantum Nodes is a Blender add-on that integrates quantum algorithms into the 3D creation process. Our work focuses on allowing users both to experiment with new forms of creation and to approach the concepts of quantum computing through 3D creation.

SESSION: Rendering

Foveated Monte-Carlo Denoising

In this work, we propose a temporally stable denoising system that is capable of reconstructing Monte Carlo (MC) renderings in a foveated manner. We develop a multi-scale convolutional neural network that starts at a base (downsampled) resolution and denoises progressively higher resolutions. Our network learns to use the lower resolutions and the previous frames to denoise each foveal layer. We demonstrate how this architecture produces accurate denoised results at a much lower computational cost.
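A schematic of the coarse-to-fine pass in PyTorch, with hypothetical per-scale modules; the actual network additionally reuses previous frames for temporal stability, which is omitted here.

```python
import torch
import torch.nn.functional as F

def denoise_foveated(noisy_pyramid, denoisers):
    """Coarse-to-fine foveated denoising sketch.

    noisy_pyramid : list of (1, C, H_i, W_i) tensors, coarsest first
    denoisers     : one small CNN per scale (hypothetical modules)
    """
    out = denoisers[0](noisy_pyramid[0])            # base resolution
    for noisy, net in zip(noisy_pyramid[1:], denoisers[1:]):
        up = F.interpolate(out, size=noisy.shape[-2:], mode='bilinear',
                           align_corners=False)
        out = net(torch.cat([noisy, up], dim=1))    # refine the finer layer
    return out
```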

Global Illumination-Aware Color Remapping with Fidelity for Texture Values

Our aim is to convert an object's appearance to an arbitrary color while accounting for light scattering in the entire scene, often called global illumination. Existing stylization methods convert the color of an object using a 1D texture in 3D computer graphics to reproduce the typical style of illustrations and cel animation. However, they cannot express global illumination effects such as color bleeding and soft shadows. We propose a method that computes the global illumination and converts the shading to an arbitrary color. It consists of subpath tracing from the eye to the object and radiance estimation on the object. The radiance is stored and later used to convert its color. The method reproduces reflections of the converted color in other objects. As a result, we can convert the color of illumination effects such as soft shadows and refractions.
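A sketch of the remapping step, assuming the stored radiance is reduced to a luminance value and looked up in a user-supplied 1D color ramp; the paper's exact mapping may differ.

```python
import numpy as np

def remap_radiance(radiance, ramp):
    """Convert stored radiance to an arbitrary color via a 1D texture.

    radiance : (N, 3) linear RGB radiance estimated on the object
    ramp     : (K, 3) 1D color texture (dark -> bright stylized colors)
    """
    intensity = radiance @ np.array([0.2126, 0.7152, 0.0722])  # luminance
    t = np.clip(intensity, 0.0, 1.0) * (len(ramp) - 1)
    i = t.astype(int)
    frac = (t - i)[:, None]
    j = np.minimum(i + 1, len(ramp) - 1)
    return (1 - frac) * ramp[i] + frac * ramp[j]    # linear texture lookup
```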

Multi-scale Computational Visualization of Angle-Dependent and Roughness-Sensitive Plasmonic Structural Coloration

Non-photorealistic ray tracing with paint and toon shading

We present a modification to traditional ray tracing that stylistically renders a scene with cartoon and painterly styles. Previous methods rely on post-processing, materials, or textures to achieve a non-photorealistic look. Our method uses a ray tracer to combine cel animation art styles with complex lighting effects, such as reflections, refractions, and global illumination. The ray tracer collects information about objects and their properties to dynamically switch between cartoon and painterly rendering styles. The renderer generates the styles by shooting additional rays for each pixel and collecting information such as normals, distance, slope, object identifiers, and light gradients from neighboring areas of the image. The resulting algorithm produces images with visual and artistic characteristics that allow artists to take advantage of rendering techniques that are not commonly supported in production ray tracers.
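A sketch of the cel-shading ingredient: quantizing the Lambertian term into flat bands inside a hit-shading routine. The band count is illustrative, and the painterly branch and style-switching logic are beyond this example.

```python
import numpy as np

def toon_diffuse(normal, light_dir, base_color, bands=4):
    """Quantize Lambertian shading into flat cartoon bands.

    Called from the ray tracer's hit-shading routine; reflection and
    refraction rays are traced as usual and shaded the same way.
    """
    ndotl = max(float(np.dot(normal, light_dir)), 0.0)
    level = np.floor(ndotl * bands) / bands          # discrete shading band
    return base_color * level
```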

Procedural Shading for Rendering the Appearance of Feathers

The appearance of a real-world feather is the result of light interactions with complex, patterned structures of varying scale; however, these have not yet been modeled for accurate rendering of feathers in computer graphics. Previously published works have presented simplified curve models for feather appearance. Using imaging from real feathers, we suggest why these approaches are not sufficient and provide motivation for building an appearance model specific to feathers. In that vein we demonstrate a new technique that takes into account the substructures of feathers during shading calculations to produce a more accurate far-field appearance.

Reflectance Estimation for Free-viewpoint Video

We present a method to infer physically based material properties for free-viewpoint video. Given a multi-camera image feed and reconstructed geometry, our method infers material properties such as albedo, surface normal, metallic, and roughness maps. We use a physically based, differentiable renderer to generate candidate images, which are compared against the image feed. Our method searches for material textures that minimise an image-space loss metric between candidate renders and the ground-truth image feed. Our method produces results that approximate state-of-the-art reflectance capture and yields texture maps that are compatible with common real-time and offline shading models.
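A schematic of the search loop, assuming a differentiable `render` callable (a placeholder for the actual renderer) and an L1 image loss; the paper's renderer, loss, and full set of maps may differ.

```python
import torch

def fit_materials(render, gt_images, shape, iters=500, lr=1e-2):
    """Fit PBR texture maps by gradient descent through a renderer.

    render    : differentiable function (albedo, roughness) -> images
                (placeholder for the actual differentiable renderer)
    gt_images : (V, H, W, 3) ground-truth multi-camera frames as a tensor
    shape     : (H_tex, W_tex) texture resolution
    """
    albedo = torch.full(shape + (3,), 0.5, requires_grad=True)
    roughness = torch.full(shape + (1,), 0.5, requires_grad=True)
    opt = torch.optim.Adam([albedo, roughness], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        pred = render(albedo.clamp(0, 1), roughness.clamp(0, 1))
        loss = (pred - gt_images).abs().mean()      # image-space L1 loss
        loss.backward()
        opt.step()
    return albedo.detach(), roughness.detach()
```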

SESSION: Scientific Visualization

GPGPU Accelerated Flow Diagrams

SESSION: Simulation & Animation

Bowing-Net: Motion Generation for String Instruments Based on Bowing Information

This paper presents a deep-learning-based method that generates body motion for string-instrument performance from raw audio. In contrast to prior methods, which aim to predict joint positions from audio, we first estimate the information that dictates the bowing dynamics, such as the bow direction and the played string. The final body motion is then determined from this information following a conversion rule. By adopting the bowing information as the target domain, not only is learning the mapping more feasible, but the produced results also have bowing dynamics that are consistent with the given audio. We confirmed through extensive experiments that our results are superior to those of existing methods.

Scalable Visual Simulation of Ductile and Brittle Fracture

Fracture of solid objects produces debris. Modelling the physics that produces the broken fragments from the original solid requires an increase in the number of degrees of freedom, which causes a huge increase in computational cost for the FEM-based methods used to model such phenomena. We present a graph-based FEM method that tackles this issue by relabeling the edges of the graph induced in a volumetric mesh using a damage variable. We reformulate the system dynamics for this relabeled graph in order to simulate the fracture mechanics with FEM without an explosion in computational cost. Our method therefore requires no remeshing of the volumetric mesh used for computation, which makes it very scalable to high-resolution meshes. We demonstrate that the method can simulate both brittle and ductile fracture.
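A sketch of the core bookkeeping: each edge of the element graph carries a damage variable, and exceeding a threshold relabels the edge as broken instead of remeshing. The accumulation rule, names, and criterion are illustrative, not the authors' exact formulation.

```python
def update_damage(edges, strains, dt, rate=5.0, threshold=1.0):
    """Relabel graph edges as broken once accumulated damage crosses a threshold.

    edges   : list of dicts {"nodes": (i, j), "damage": float, "broken": bool}
    strains : per-edge strain measure from the current FEM solve
    """
    for edge, strain in zip(edges, strains):
        if edge["broken"]:
            continue                       # broken edges carry no force
        edge["damage"] += rate * max(strain, 0.0) * dt
        if edge["damage"] >= threshold:
            edge["broken"] = True          # fracture without remeshing
    return edges
```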

Text-Based Motion Synthesis with a Hierarchical Two-Stream RNN

We present a learning-based method for generating animated 3D pose sequences depicting multiple sequential or superimposed actions provided in long, compositional sentences. We propose a hierarchical two-stream sequential model to explore a finer joint-level mapping between natural language sentences and the corresponding 3D pose sequences of the motions. We learn two manifold representations of the motion: one each for the upper-body and the lower-body movements. We evaluate our proposed model on the publicly available KIT Motion-Language Dataset containing 3D pose data with human-annotated sentences. Experimental results show that our model advances the state of the art in text-based motion synthesis by a margin of 50% in objective evaluations.

Why you walk like that: Inferring Body Conditions from Single Gait Cycle

Gait is a key barometer for analyzing human body conditions. We propose a personalized gait-analysis framework that can diagnose possible musculoskeletal disorders from a single gait cycle. Our framework is built on a gait manifold that reveals the principal kinematic characteristics of the temporal pose sequence. Body parameters such as muscle, skeleton, and joint limits for an arbitrary gait cycle can be approximated by measuring similarity in the small latent space. We present a physical gait simulator to enrich the gait space, paired with the corresponding body conditions.

SESSION: User Interfaces & Systems

A Suggestive Interface for Designing Dance Formations

Balanced Glass Design: A flavor perception changing system by controlling the center-of-gravity

We propose Balanced Glass Design, a flavor-perception-changing system. The system consists of a glass-type device that shifts its center of gravity in response to the user's motion, allowing the user to drink a beverage with a virtual perception of weight during the drinking motion. We hypothesized that it is possible to intervene in the user's perception of flavor by presenting virtual weight perception, and so conducted demonstrations as a user study. In this paper, we describe the system design and the comments obtained through the user study.