ACM Transactions on Graphics (TOG): Vol. 38, No. 6, 2019

SESSION: Light transport

MIS compensation: optimizing sampling techniques in multiple importance sampling

Multiple importance sampling (MIS) has become an indispensable tool in Monte Carlo rendering, widely accepted as a near-optimal solution for combining different sampling techniques. But an MIS combination, using the common balance or power heuristics, often results in an overly defensive estimator, leading to high variance. We show that by generalizing the MIS framework, variance can be substantially reduced. Specifically, we optimize one of the combined sampling techniques so as to decrease the overall variance of the resulting MIS estimator. We apply the approach to the computation of direct illumination due to an HDR environment map and to the computation of global illumination using a path guiding algorithm. The implementation can be as simple as subtracting a constant value from the tabulated sampling density, done entirely in a preprocessing step. This produces a consistent noise reduction in all our tests with no negative influence on run time, no artifacts or bias, and no failure cases.
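The preprocessing step mentioned above can be illustrated with a minimal sketch: subtract a constant from a tabulated sampling density, clamp negative entries, and renormalize. The function name, the choice of constant, and the clamping are illustrative assumptions here, not the paper's exact formulation.

```python
import numpy as np

def compensate_tabulated_pdf(pdf, c):
    """Illustrative preprocessing: subtract a constant from a tabulated
    sampling density, clamp negative entries at zero, and renormalize so
    the result is again a valid discrete probability distribution."""
    compensated = np.maximum(pdf - c, 0.0)
    total = compensated.sum()
    if total <= 0.0:          # degenerate choice of c; keep the original density
        return pdf
    return compensated / total

# Hypothetical usage on a small luminance-based environment-map table
env_pdf = np.array([0.05, 0.10, 0.60, 0.25])
print(compensate_tabulated_pdf(env_pdf, 0.05))
```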

Variance-aware multiple importance sampling

Many existing Monte Carlo methods rely on multiple importance sampling (MIS) to achieve robustness and versatility. Typically, the balance or power heuristics are used, mostly thanks to the seemingly strong guarantees on their variance. We show that these MIS heuristics are oblivious to the effect of certain variance reduction techniques like stratification. This shortcoming is particularly pronounced when unstratified and stratified techniques are combined (e.g., in a bidirectional path tracer). We propose to enhance the balance heuristic by injecting variance estimates of individual techniques, to reduce the variance of the combined estimator in such cases. Our method is simple to implement and introduces little overhead.
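For context, the standard balance heuristic referenced above weights each technique in proportion to its sampling density and sample count (textbook form below, not the paper's modified heuristic); the proposed method injects per-technique variance estimates into this weighting.

```latex
% Standard balance heuristic for combining sampling techniques p_1, ..., p_T,
% with n_t samples drawn from technique t (textbook form):
w_t(x) \;=\; \frac{n_t\, p_t(x)}{\sum_{s=1}^{T} n_s\, p_s(x)}
```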

Selectively metropolised Monte Carlo light transport simulation

Light transport is a complex problem with many solutions. Practitioners are now faced with the difficult task of choosing which rendering algorithm to use for any given scene. Simple Monte Carlo methods, such as path tracing, work well for the majority of lighting scenarios, but introduce excessive variance when they encounter transport they cannot sample (such as caustics). More sophisticated rendering algorithms, such as bidirectional path tracing, handle a larger class of light transport robustly, but have a high computational overhead that makes them inefficient for scenes that are not dominated by difficult transport. The underlying problem is that rendering algorithms can only be executed indiscriminately on all transport, even though they may only offer improvement for a subset of paths. In this paper, we introduce a new scheme for selectively combining different Monte Carlo rendering algorithms. We use a simple transport method (e.g. path tracing) as the base, and treat high variance "fireflies" as seeds for a Markov chain that locally uses a Metropolised version of a more sophisticated transport method for exploration, removing the firefly in an unbiased manner. We use a weighting scheme inspired by multiple importance sampling to partition the integrand into regions the base method can sample well and those it cannot, and only use Metropolis for the latter. This constrains the Markov chain to paths where it offers improvement, and keeps it away from regions already handled well by the base estimator. Combined with stratified initialization, short chain lengths and careful allocation of samples, this vastly reduces non-uniform noise and temporal flickering artifacts normally encountered with a global application of Metropolis methods. Through careful design choices, we ensure our algorithm never performs much worse than the base estimator alone, and usually performs significantly better, thereby reducing the need to experiment with different algorithms for each scene.

Integral formulations of volumetric transmittance

Computing the light attenuation between two given points is an essential yet expensive task in volumetric light transport simulation. Existing unbiased transmittance estimators are all based on "null-scattering" random walks enabled by augmenting the media with fictitious matter. This formulation prevents the use of traditional Monte Carlo estimator variance analysis, thus the efficiency of such methods is understood from a mostly empirical perspective. In this paper, we present several novel integral formulations of volumetric transmittance in which existing estimators arise as direct Monte Carlo estimators. Breaking from physical intuition, we show that the null-scattering concept is not strictly required for unbiased transmittance estimation, but is a form of control variates for effectively reducing variance. Our formulations bring new insight into the problem and the efficiency of existing estimators. They also provide a framework for devising new types of transmittance estimators with distinct and complementary performance tradeoffs, as well as a clear recipe for applying sample stratification.
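For reference, the quantity being estimated is the standard Beer-Lambert transmittance along a ray segment; the notation below (distance d, extinction coefficient sigma_t) is assumed here rather than taken from the paper.

```latex
% Transmittance between two points separated by distance d along a ray x(s),
% with spatially varying extinction coefficient \sigma_t (standard definition):
T(d) \;=\; \exp\!\left(-\int_{0}^{d} \sigma_t\big(\mathbf{x}(s)\big)\,\mathrm{d}s\right)
```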

SESSION: Geometry brekkie

ZoomOut: spectral upsampling for efficient shape correspondence

We present a simple and efficient method for refining maps or correspondences by iterative upsampling in the spectral domain that can be implemented in a few lines of code. Our main observation is that high quality maps can be obtained even if the input correspondences are noisy or are encoded by a small number of coefficients in a spectral basis. We show how this approach can be used in conjunction with existing initialization techniques across a range of application scenarios, including symmetry detection, map refinement across complete shapes, non-rigid partial shape matching and function transfer. In each application we demonstrate an improvement with respect to both the quality of the results and the computational speed compared to the best competing methods, with up to two orders of magnitude speed-up in some applications. We also demonstrate that our method is both robust to noisy input and is scalable with respect to shape complexity. Finally, we present a theoretical justification for our approach, shedding light on structural properties of functional maps.
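A minimal sketch of the spectral upsampling loop, assuming precomputed Laplace-Beltrami eigenvectors for both shapes and a small initial functional map; function and variable names are illustrative, and the least-squares projection via a pseudoinverse is a simplification of what a production implementation would use.

```python
import numpy as np
from scipy.spatial import cKDTree

def spectral_upsample(evecs_src, evecs_tgt, C_init, n_iters):
    """Illustrative spectral-upsampling refinement of a functional map.

    evecs_src: (n_src, K) Laplace-Beltrami eigenvectors of the source shape
    evecs_tgt: (n_tgt, K) Laplace-Beltrami eigenvectors of the target shape
    C_init:   (k0, k0) initial functional map (source basis -> target basis)
    """
    C = C_init
    k = C.shape[0]
    for _ in range(n_iters):
        # Functional map -> pointwise map: match rows of (Phi_tgt @ C)
        # against rows of Phi_src in the k-dimensional spectral embedding.
        tree = cKDTree(evecs_src[:, :k])
        _, p2p = tree.query(evecs_tgt[:, :k] @ C)   # p2p[j]: source vertex matched to target vertex j
        # Pointwise map -> functional map, re-encoded with one more eigenfunction (upsampling).
        k += 1
        C = np.linalg.pinv(evecs_tgt[:, :k]) @ evecs_src[p2p, :k]
    return C, p2p
```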

Distortion-minimizing injective maps between surfaces

The problem of discrete surface parametrization, i.e. mapping a mesh to a planar domain, has been investigated extensively. We address the more general problem of mapping between surfaces. In particular, we provide a formulation that yields a map between two disk-topology meshes, which is continuous and injective by construction and which locally minimizes intrinsic distortion. A common approach is to express such a map as the composition of two maps via a simple intermediate domain such as the plane, and to independently optimize the individual maps. However, even if both individual maps are of minimal distortion, there is potentially high distortion in the composed map. In contrast to many previous works, we minimize distortion in an end-to-end manner, directly optimizing the quality of the composed map. This setting poses additional challenges due to the discrete nature of both the source and the target domain. We propose a formulation that, despite the combinatorial aspects of the problem, allows for a purely continuous optimization. Further, our approach addresses the non-smooth nature of discrete distortion measures in this context which hinders straightforward application of off-the-shelf optimization techniques. We demonstrate that, despite the challenges inherent to the more involved setting, discrete surface-to-surface maps can be optimized effectively.

X-CAD: optimizing CAD models with extended finite elements

We propose a novel generic shape optimization method for CAD models based on the eXtended Finite Element Method (XFEM). Our method works directly on the intersection between the model and a regular simulation grid, without the need to mesh or remesh, thus removing a bottleneck of classical shape optimization strategies. This is made possible by a novel hierarchical integration scheme that accurately integrates finite element quantities with sub-element precision. For optimization, we efficiently compute analytical shape derivatives of the entire framework, from model intersection to integration rule generation and XFEM simulation. Moreover, we describe a differentiable projection of shape parameters onto a constraint manifold spanned by user-specified shape preservation, consistency, and manufacturability constraints. We demonstrate the utility of our approach by optimizing mass distribution, strength-to-weight ratio, and inverse elastic shape design objectives directly on parameterized 3D CAD models.

Repairing man-made meshes via visual driven global optimization with minimum intrusion

3D mesh models created by human users and shared through online platforms and datasets have flourished in recent years. While the creators have generally spent great effort modeling visually appealing shapes with both large-scale structures and intricate details, a majority of the meshes are unfortunately flawed, with duplicate faces, mis-oriented regions, disconnected patches, etc., due to multiple factors involving both human error and software inconsistencies. All these artifacts have severely limited the possible low-level and high-level processing tasks that can be applied to the rich datasets. In this work, we present a novel approach to fix these man-made meshes such that the outputs are guaranteed to be oriented manifold meshes that preserve the original structures, big and small, as much as possible. Our key observation is that the models all visually look meaningful, which leads to our strategy of repairing the flaws while always preserving the visual quality. We apply local refinements and removals only where necessary to achieve minimal intrusion of the original meshes, and global adjustments through robust optimization to ensure the outputs are valid manifold meshes with optimal connections. We test the approach on large-scale 3D datasets, and obtain quality meshes that are more readily usable for further geometry processing tasks.

SceneGit: a practical system for diffing and merging 3D environments

Version control systems are the foundation of collaborative workflows for text documents. For 3D environments, though, version control is still an open problem due to the heterogeneous data of 3D scenes and their size. In this paper, we present a practical version control system for 3D scenes composed of shapes, materials, textures, and animations, combined together in scene graphs. We version objects at their finest granularity, to make repositories smaller and to allow artists to work concurrently on the same object. Since, for some scene data, computing an optimal set of changes between versions is not computationally feasible, version control systems use heuristics. Compared to prior work, we propose heuristics that are efficient, robust, and independent of the application. We test our system on a variety of large scenes edited with different workflows, and show that our approach can handle all cases well while remaining efficient as scene size increases. Compared to prior work, we are significantly faster and more robust. A user study confirms that our system aids collaboration.

SESSION: Accelerated physics

Accelerated complex-step finite difference for expedient deformable simulation

In deformable simulation, an important computing task is to calculate the gradient and derivative of the strain energy function in order to infer the corresponding internal force and tangent stiffness matrix. The standard numerical routine is the finite difference method, which evaluates the target function multiple times under a small real-valued perturbation. Unfortunately, the subtractive cancellation prevents us from setting this perturbation sufficiently small, and the regular finite difference is doomed for problems requiring high-accuracy derivative evaluation. In this paper, we graft a new finite difference scheme, namely the complex-step finite difference (CSFD), onto physics-based animation. CSFD is based on the complex Taylor series expansion, which avoids subtractions in first-order derivative approximation. As a result, one can use a very small perturbation to calculate the numerical derivative that is as accurate as its analytic counterpart. We accelerate the original CSFD method so that it is also as efficient as the analytic derivative. This is achieved by discarding high-order error terms, decoupling real and imaginary calculations, replacing costly functions based on the theory of equivalent infinitesimal, and isolating the propagation of the perturbation in composite/nesting functions. CSFD can be further augmented with multicomplex Taylor expansion and the Cauchy-Riemann formula to handle higher-order derivatives and tensor-valued functions. We demonstrate the accuracy, convenience, and efficiency of this new numerical routine in the context of deformable simulation - one can easily deploy a robust simulator for general hyperelastic materials, including user-crafted ones that cater to specific needs in different applications. Higher-order derivatives of the energy can be readily computed to construct modal derivative bases for reduced real-time simulation. Inverse simulation problems can also be conveniently solved using gradient/Hessian-based optimization procedures.
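The complex-step idea itself can be demonstrated in a few lines: for an analytic function, the first derivative is recovered from the imaginary part of a complex perturbation, with no subtractive cancellation, so the step size can be made extremely small. This is a generic numerical illustration, not the paper's accelerated implementation.

```python
import numpy as np

def csfd_derivative(f, x, h=1e-30):
    """First derivative via the complex-step finite difference:
    f'(x) ~= Im(f(x + i*h)) / h. No subtraction occurs, so h can be tiny."""
    return np.imag(f(x + 1j * h)) / h

# Example: derivative of exp(x) * sin(x) at x = 0.7,
# whose analytic derivative is exp(x) * (sin(x) + cos(x)).
f = lambda x: np.exp(x) * np.sin(x)
print(csfd_derivative(f, 0.7))
print(np.exp(0.7) * (np.sin(0.7) + np.cos(0.7)))   # matches to machine precision
```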

Material-adapted refinable basis functions for elasticity simulation

In this paper, we introduce a hierarchical construction of material-adapted refinable basis functions and associated wavelets to offer efficient coarse-graining of linear elastic objects. While spectral methods rely on global basis functions to restrict the number of degrees of freedom, our basis functions are locally supported; yet, unlike typical polynomial basis functions, they are adapted to the material inhomogeneity of the elastic object to better capture its physical properties and behavior. In particular, they share spectral approximation properties with eigenfunctions, offering a good compromise between computational complexity and accuracy. Their construction involves only linear algebra and follows a fine-to-coarse approach, leading to a block-diagonalization of the stiffness matrix where each block corresponds to an intermediate scale space of the elastic object. Once this hierarchy has been precomputed, we can simulate an object at runtime on very coarse resolution grids and still capture the correct physical behavior, with orders of magnitude speedup compared to a fine simulation. We show on a variety of heterogeneous materials that our approach outperforms all previous coarse-graining methods for elasticity.

A scalable galerkin multigrid method for real-time simulation of deformable objects

We propose a simple yet efficient multigrid scheme to simulate high-resolution deformable objects in their full spaces at interactive frame rates. The point of departure of our method is the Galerkin projection, which is simple to construct. However, a naïve Galerkin multigrid does not scale well for large and irregular grids because it trades off matrix sparsity for smaller linear systems, which eventually stops improving performance. Given that observation, we design a special projection criterion based on skinning-space coordinates with piecewise constant weights, to make our Galerkin multigrid method scale for high-resolution meshes without suffering from dense linear solves. The use of skinning-space coordinates enables us to reduce the grid resolution more aggressively, and our piecewise constant weights further ensure that the linear solves remain reasonably sparse. Our projection matrices also help us manage the multi-level linear systems efficiently. Therefore, our method can be applied to different optimization schemes such as Newton's method and Projective Dynamics, pushing the resolution of a real-time simulation orders of magnitude higher. Our final GPU implementation outperforms other state-of-the-art GPU deformable body simulators, enabling us to simulate large deformable objects with hundreds of thousands of degrees of freedom in real time.
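For reference, the Galerkin projection mentioned above builds each coarse-level system from the fine-level one via a prolongation matrix (generic textbook form below; the paper's contribution lies in how the prolongation is chosen from piecewise-constant skinning weights).

```latex
% Galerkin coarsening of a fine-level system A x = b with prolongation U,
% where the fine unknowns are approximated as x \approx U x_c:
A_c \;=\; U^{\top} A\, U, \qquad b_c \;=\; U^{\top} b
```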

Accelerating ADMM for efficient simulation and optimization

The alternating direction method of multipliers (ADMM) is a popular approach for solving optimization problems that are potentially non-smooth and with hard constraints. It has been applied to various computer graphics applications, including physical simulation, geometry processing, and image processing. However, ADMM can take a long time to converge to a solution of high accuracy. Moreover, many computer graphics tasks involve non-convex optimization, and there is often no convergence guarantee for ADMM on such problems since it was originally designed for convex optimization. In this paper, we propose a method to speed up ADMM using Anderson acceleration, an established technique for accelerating fixed-point iterations. We show that in the general case, ADMM is a fixed-point iteration of the second primal variable and the dual variable, and Anderson acceleration can be directly applied. Additionally, when the problem has a separable target function and satisfies certain conditions, ADMM becomes a fixed-point iteration of only one variable, which further reduces the computational overhead of Anderson acceleration. Moreover, we analyze a particular non-convex problem structure that is common in computer graphics, and prove the convergence of ADMM on such problems under mild assumptions. We apply our acceleration technique on a variety of optimization problems in computer graphics, with notable improvement on their convergence speed.
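For context, the scaled-form ADMM iteration being accelerated has the standard shape below (textbook form for minimizing f(x) + g(z) subject to Ax + Bz = c); as stated above, the paper views it as a fixed-point iteration of the second primal variable z and the dual variable u and applies Anderson acceleration to that iteration.

```latex
% Scaled-form ADMM for  min_{x,z}  f(x) + g(z)  subject to  Ax + Bz = c:
x^{k+1} = \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k} \rVert^{2}
z^{k+1} = \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k} \rVert^{2}
u^{k+1} = u^{k} + Ax^{k+1} + Bz^{k+1} - c
```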

SESSION: Photography in the field

Handheld mobile photography in very low light

Taking photographs in low light using a mobile phone is challenging and rarely produces pleasing results. Aside from the physical limits imposed by read noise and photon shot noise, these cameras are typically handheld, have small apertures and sensors, use mass-produced analog electronics that cannot easily be cooled, and are commonly used to photograph subjects that move, like children and pets. In this paper we describe a system for capturing clean, sharp, colorful photographs in light as low as 0.3 lux, where human vision becomes monochromatic and indistinct. To permit handheld photography without flash illumination, we capture, align, and combine multiple frames. Our system employs "motion metering", which uses an estimate of motion magnitudes (whether due to handshake or moving objects) to identify the number of frames and the per-frame exposure times that together minimize both noise and motion blur in a captured burst. We combine these frames using robust alignment and merging techniques that are specialized for high-noise imagery. To ensure accurate colors in such low light, we employ a learning-based auto white balancing algorithm. To prevent the photographs from looking like they were shot in daylight, we use tone mapping techniques inspired by illusionistic painting: increasing contrast, crushing shadows to black, and surrounding the scene with darkness. All of these processes are performed using the limited computational resources of a mobile device. Our system can be used by novice photographers to produce shareable pictures in a few seconds based on a single shutter press, even in environments so dim that humans cannot see clearly.

Learning efficient illumination multiplexing for joint capture of reflectance and shape

We propose a novel framework that automatically learns the lighting patterns for efficient, joint acquisition of unknown reflectance and shape. The core of our framework is a deep neural network, with a shared linear encoder that directly corresponds to the lighting patterns used in physical acquisition, as well as non-linear decoders that output per-pixel normal and diffuse / specular information from photographs. We exploit the diffuse and normal information from multiple views to reconstruct a detailed 3D shape, and then fit BRDF parameters to the diffuse / specular information, producing texture maps as reflectance results. We demonstrate the effectiveness of the framework with physical objects that vary considerably in reflectance and shape, acquired with as few as 16–32 lighting patterns that correspond to 7–15 seconds of per-view acquisition time. Our framework is useful for optimizing the efficiency in both novel and existing setups, as it can automatically adapt to various factors, including the geometry / the lighting layout of the device and the properties of appearance.

Blind image super-resolution with spatially variant degradations

Existing deep learning approaches to single image super-resolution have achieved impressive results but mostly assume a setting with fixed pairs of high resolution and low resolution images. However, to robustly address realistic upscaling scenarios where the relation between high resolution and low resolution images is unknown, blind image super-resolution is required. To this end, we propose a solution that relies on three components: First, we use a degradation aware SR network to synthesize the HR image given a low resolution image and the corresponding blur kernel. Second, we train a kernel discriminator to analyze the generated high resolution image in order to predict errors present due to providing an incorrect blur kernel to the generator. Finally, we present an optimization procedure that is able to recover both the degradation kernel and the high resolution image by minimizing the error predicted by our kernel discriminator. We also show how to extend our approach to spatially variant degradations that typically arise in visual effects pipelines when compositing content from different sources and how to enable both local and global user interaction in the upscaling process.

Hierarchical and view-invariant light field segmentation by maximizing entropy rate on 4D ray graphs

Image segmentation is an important first step of many image processing, computer graphics, and computer vision pipelines. Unfortunately, it remains difficult to automatically and robustly segment cluttered scenes, or scenes in which multiple objects have similar color and texture. In these scenarios, light fields offer much richer cues that can be used efficiently to drastically improve the quality and robustness of segmentations.

In this paper we introduce a new light field segmentation method that respects texture appearance, depth consistency, as well as occlusion, and creates well-shaped segments that are robust under view point changes. Furthermore, our segmentation is hierarchical, i.e. with a single optimization, a whole hierarchy of segmentations with different numbers of regions is available. All this is achieved with a submodular objective function that allows for efficient greedy optimization. Finally, we introduce a new tree-array type data structure, i.e. a disjoint tree, to efficiently perform submodular optimization on very large graphs. This approach is of interest beyond our specific application of light field segmentation.

We demonstrate the efficacy of our method on a number of synthetic and real data sets, and show how the obtained segmentations can be used for applications in image processing and graphics.

Document rectification and illumination correction using a patch-based CNN

We propose a novel learning method to rectify document images with various distortion types from a single input image. As opposed to previous learning-based methods, our approach seeks to first learn the distortion flow on input image patches rather than the entire image. We then present a robust technique to stitch the patch results into the rectified document by processing in the gradient domain. Furthermore, we propose a second network to correct the uneven illumination, further improving the readability and OCR accuracy. Due to the less complex distortion present on the smaller image patches, our patch-based approach followed by stitching and illumination correction can significantly improve the overall accuracy in both the synthetic and real datasets.

SESSION: Network

Curve-pleated structures

In this paper we study pleated structures generated by folding paper along curved creases. We discuss their properties and the special case of principal pleated structures. A discrete version of pleated structures is particularly interesting because of the rich geometric properties of the principal case, where we are able to establish a series of analogies between the smooth and discrete situations, as well as several equivalent characterizations of the principal property. These include being a conical mesh, and being flat-foldable. This structure-preserving discretization is the basis of computation and design. We propose a new method for designing pleated structures and reconstructing reference shapes as pleated structures: we first gain an overview of possible crease patterns by establishing a connection to pseudogeodesics, and then initialize and optimize a quad mesh so as to become a discrete pleated structure. We conclude by showing applications in design and reconstruction, including cases with combinatorial singularities. Our work is relevant to fabrication in so far as the offset properties of principal pleated structures allow us to construct curved sculptures of finite thickness.

Modeling curved folding with freeform deformations

We present a computational framework for interactive design and exploration of curved folded surfaces. In current practice, such surfaces are typically created manually using physical paper, and hence our objective is to lay the foundations for the digitalization of curved folded surface design. Our main contribution is a discrete binary characterization for folds between discrete developable surfaces, accompanied by an algorithm to simultaneously fold creases and smoothly bend planar sheets. We complement our algorithm with essential building blocks for curved folding deformations: objectives to control dihedral angles and mountain-valley assignments. We apply our machinery to build the first interactive freeform editing tool capable of modeling bending and folding of complicated crease patterns.

Checkerboard patterns with black rectangles

Checkerboard patterns with black rectangles can be derived from quad meshes with orthogonal diagonals. First, we present an initial theoretical analysis of these quad meshes. The analysis reveals many possible applications in geometry processing and also motivates the numerical optimization for aesthetic and functional checkerboard pattern design. Second, we describe an optimization algorithm that transforms initial 2D and 3D quad meshes into quad meshes with orthogonal diagonals. Third, we present a 2D checkerboard pattern design framework based on integer programming inspired by the logo design of the 2020 Olympic games. Our results show a variety of 2D and 3D checkerboard patterns that can be derived from 2D or 3D quad meshes with orthogonal diagonals.

Chebyshev nets from commuting PolyVector fields

We propose a method for computing global Chebyshev nets on triangular meshes. We formulate the corresponding global parameterization problem in terms of commuting PolyVector fields, and design an efficient optimization method to solve it. We compute, for the first time, Chebyshev nets with automatically-placed singularities, and demonstrate the realizability of our approach using real material.

Discrete geodesic parallel coordinates

Geodesic parallel coordinates are orthogonal nets on surfaces where one of the two families of parameter lines consists of geodesic curves. We describe a discrete version of these special surface parameterizations and show that they are very useful for specific applications, most of which are related to the design and fabrication of surfaces in architecture. With the new discrete surface model, it is easy to control strip widths between neighboring geodesics. This facilitates tasks such as cladding a surface with strips of originally straight flat material or designing geodesic gridshells and timber rib shells. It is also possible to model nearly developable surfaces. These are characterized by geodesic strips with almost constant strip widths and are used for generating shapes that can be manufactured from materials which allow for some stretching or shrinking like felt, leather, or thin wooden boards. Most importantly, we show how to constrain the strip width parameters to model a class of intrinsically symmetric surfaces. These surfaces are isometric to surfaces of revolution and can be covered with doubly-curved panels that are produced with only a few molds when working with flexible materials like metal sheets.

SESSION: Learning from video

Colorblind-shareable videos by synthesizing temporal-coherent polynomial coefficients

To share the same visual content between people with color vision deficiencies (CVD) and people with normal vision, attempts have been made to allocate the two visual experiences of a binocular display (wearing and not wearing glasses) to CVD and normal-vision audiences. However, existing approaches only work for still images. Although state-of-the-art temporal filtering techniques can be applied to smooth the per-frame generated content, they may fail to maintain the multiple binocular constraints needed in our applications, and even worse, sometimes introduce color inconsistency (same color regions map to different colors). In this paper, we propose to train a neural network to predict the temporally coherent polynomial coefficients in the domain of global color decomposition. This indirect formulation solves the color inconsistency problem. Our key challenge is to design a neural network to predict the temporally coherent coefficients, while maintaining all required binocular constraints. Our method is evaluated on various videos and all metrics confirm that it outperforms all existing solutions.

Animating landscape: self-supervised learning of decoupled motion and appearance for single-image video synthesis

Automatic generation of a high-quality video from a single image remains a challenging task despite the recent advances in deep generative models. This paper proposes a method that can create a high-resolution, long-term animation using convolutional neural networks (CNNs) from a single landscape image where we mainly focus on skies and waters. Our key observation is that the motion (e.g., moving clouds) and appearance (e.g., time-varying colors in the sky) in natural scenes have different time scales. We thus learn them separately and predict them with decoupled control while handling future uncertainty in both predictions by introducing latent codes. Unlike previous methods that infer output frames directly, our CNNs predict spatially-smooth intermediate data, i.e., for motion, flow fields for warping, and for appearance, color transfer maps, via self-supervised learning, i.e., without explicitly-provided ground truth. These intermediate data are applied not to each previous output frame, but to the input image only once for each output frame. This design is crucial to alleviate error accumulation in long-term predictions, which is the essential problem in previous recurrent approaches. The output frames can be looped like a cinemagraph, and also be controlled directly by specifying latent codes or indirectly via visual annotations. We demonstrate the effectiveness of our method through comparisons with the state of the art in video prediction as well as appearance manipulation. Resultant videos, codes, and datasets will be available at http://www.cgg.cs.tsukuba.ac.jp/~endo/projects/AnimatingLandscape.

DeepRemaster: temporal source-reference attention networks for comprehensive video enhancement

The remastering of vintage film comprises a diversity of sub-tasks, including super-resolution, noise removal, and contrast enhancement, which aim to restore the deteriorated film medium to its original state. Additionally, due to the technical limitations of the time, most vintage film is either recorded in black and white or has low-quality colors, for which colorization becomes necessary. In this work, we propose a single framework to tackle the entire remastering task semi-interactively. Our work is based on temporal convolutional neural networks with attention mechanisms trained on videos with data-driven deterioration simulation. Our proposed source-reference attention allows the model to handle an arbitrary number of reference color images to colorize long videos without the need for segmentation while maintaining temporal consistency. Quantitative analysis shows that our framework outperforms existing approaches, and that, in contrast to existing approaches, the performance of our framework increases with longer videos and more reference color images.

Write-a-video: computational video montage from themed text

We present Write-A-Video, a tool for the creation of video montage using mostly text-editing. Given an input themed text and a related video repository either from online websites or personal albums, the tool allows novice users to generate a video montage much more easily than current video editing tools. The resulting video illustrates the given narrative, provides diverse visual content, and follows cinematographic guidelines. The process involves three simple steps: (1) the user provides input, mostly in the form of editing the text, (2) the tool automatically searches for semantically matching candidate shots from the video repository, and (3) an optimization method assembles the video montage. Visual-semantic matching between segmented text and shots is performed by cascaded keyword matching and visual-semantic embedding, which achieve better accuracy than alternative solutions. The video assembly is formulated as a hybrid optimization problem over a graph of shots, considering temporal constraints, cinematography metrics such as camera movement and tone, and user-specified cinematography idioms. Using our system, users without video editing experience are able to generate appealing videos.

Neural style-preserving visual dubbing

Dubbing is a technique for translating video content from one language to another. However, state-of-the-art visual dubbing techniques directly copy facial expressions from source to target actors without considering identity-specific idiosyncrasies such as a unique type of smile. We present a style-preserving visual dubbing approach from single video inputs, which maintains the signature style of target actors when modifying facial expressions, including mouth motions, to match foreign languages. At the heart of our approach is the concept of motion style, in particular for facial expressions, i.e., the person-specific expression change that is yet another essential factor beyond visual accuracy in face editing applications. Our method is based on a recurrent generative adversarial network that captures the spatiotemporal co-activation of facial expressions, and enables generating and modifying the facial expressions of the target actor while preserving their style. We train our model with unsynchronized source and target videos in an unsupervised manner using cycle-consistency and mouth expression losses, and synthesize photorealistic video frames using a layered neural face renderer. Our approach generates temporally coherent results, and handles dynamic backgrounds. Our results show that our dubbing approach maintains the idiosyncratic style of the target actor better than previous approaches, even for widely differing source and target actors.

SESSION: Composing & decomposing geometry

Mandoline: robust cut-cell generation for arbitrary triangle meshes

Although geometry arising "in the wild" most often comes in the form of a surface representation, a plethora of geometrical and physical applications require the construction of volumetric embeddings either of the geometry itself or the domain surrounding it. Cartesian cut-cell-based mesh generation provides an attractive solution in which volumetric elements are constructed from the intersection of the input surface geometry with a uniform or adaptive hexahedral grid. This choice, especially common in computational fluid dynamics, has the potential to efficiently generate accurate, surface-conforming cells; unfortunately, current solutions are often slow, fragile, or cannot handle many common topological situations. We therefore propose a novel, robust cut-cell construction technique for triangle surface meshes that explicitly computes the precise geometry of the intersection cells, even on meshes that are open or non-manifold. Its fundamental geometric primitive is the intersection of an arbitrary segment with an axis-aligned plane. Beginning from the set of intersection points between triangle mesh edges and grid planes, our bottom-up approach robustly determines cut-edges, cut-faces, and finally cut-cells, in a manner designed to guarantee topological correctness. We demonstrate its effectiveness and speed on a wide range of input meshes and grid resolutions, and make the code available as open source.
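The fundamental primitive named above, intersecting a segment with an axis-aligned plane, reduces to a one-dimensional computation along the plane's axis. The sketch below is a plain floating-point illustration with assumed names; a robust cut-cell pipeline additionally needs careful handling of degenerate and exactly-on-plane cases.

```python
def segment_axis_plane_intersection(p0, p1, axis, plane_coord, eps=1e-12):
    """Intersect the segment p0-p1 with the axis-aligned plane x[axis] == plane_coord.
    Returns the intersection point as a list of 3 floats, or None if the segment
    does not cross the plane (including the degenerate parallel case)."""
    a, b = p0[axis], p1[axis]
    denom = b - a
    if abs(denom) < eps:              # segment (nearly) parallel to the plane
        return None
    t = (plane_coord - a) / denom     # parametric position of the crossing
    if t < 0.0 or t > 1.0:            # crossing lies outside the segment
        return None
    return [p0[i] + t * (p1[i] - p0[i]) for i in range(3)]

# Example: a segment crossing the plane x = 0.5
print(segment_axis_plane_intersection([0.0, 0.0, 0.0], [1.0, 2.0, 0.0], 0, 0.5))
```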

QuadMixer: layout preserving blending of quadrilateral meshes

We propose QuadMixer, a novel interactive technique to compose quad mesh components preserving the majority of the original layouts. Quad Layout is a crucial property for many applications since it conveys important information that would otherwise be destroyed by techniques that aim only at preserving shape.

Our technique keeps untouched all the quads in the patches which are not involved in the blending. We first perform robust boolean operations on the corresponding triangle meshes. Then we use this result to identify and build new surface patches for small regions neighboring the intersection curves. These blending patches are carefully quadrangulated respecting boundary constraints and stitched back to the untouched parts of the original models. The resulting mesh preserves the designed edge flow that, by construction, is captured and incorporated to the new quads as much as possible. We present our technique in an interactive tool to show its usability and robustness.

3D hodge decompositions of edge- and face-based vector fields

We present a compendium of Hodge decompositions of vector fields on tetrahedral meshes embedded in the 3D Euclidean space. After describing the foundations of the Hodge decomposition in the continuous setting, we describe how to implement a five-component orthogonal decomposition that generically splits, for a variety of boundary conditions, any given discrete vector field expressed as discrete differential forms into two potential fields, as well as three additional harmonic components that arise from the topology or boundary of the domain. The resulting decomposition is proper and mimetic, in the sense that the theoretical dualities on the kernel spaces of vector Laplacians valid in the continuous case (including correspondences to cohomology and homology groups) are exactly preserved in the discrete realm. Such a decomposition only involves simple linear algebra with symmetric matrices, and can thus serve as a basic computational tool for vector field analysis in graphics, electromagnetics, fluid dynamics and elasticity.
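For orientation, the classical three-term Helmholtz-Hodge decomposition of a vector field on a 3D domain is shown below in standard continuous form; the five-component decomposition described above further splits the harmonic part into components arising from the boundary and the topology of the domain.

```latex
% Classical Helmholtz-Hodge decomposition of a vector field v on a 3D domain:
\mathbf{v} \;=\; \nabla \varphi \;+\; \nabla \times \mathbf{A} \;+\; \mathbf{h},
\qquad \nabla \times \mathbf{h} = \mathbf{0}, \quad \nabla \cdot \mathbf{h} = 0
```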

Bounded distortion tetrahedral metric interpolation

We present a method for volumetric shape interpolation with unique shape preserving features. The inputs to our algorithm are two or more 3-manifolds, immersed into R3 and discretized as tetrahedral meshes with shared connectivity. The output is a continuum of shapes that naturally blends the input shapes, while striving to preserve the geometric character of the input. Our approach relies on the fact that the space of metrics with bounded isometric and angular distortion is convex [Chien et al. 2016b]. We show that for high dimensional manifolds, the bounded distortion metrics form a positive semidefinite cone product space. Our method can be seen as a generalization of the bounded distortion interpolation technique of [Chen et al. 2013] from planar shapes immersed in R2 to solids in R3. The convexity of the space implies that a linear blend of the (squared) edge lengths of the input tetrahedral meshes is a simple yet powerful and natural choice. Linearly blending flat metrics results in a new metric which is, in general, not flat, and cannot be immersed into three-dimensional space. Nonetheless, the amount of curvature that is introduced in the process tends to be very low in practical settings. We further design an extremely robust nonconvex optimization procedure that efficiently flattens the metric. The flattening procedure strives to preserve the low distortion exhibited in the blended metric while guaranteeing the validity of the metric, resulting in a locally injective map with bounded distortion. Our method leads to volumetric interpolation with superb quality, demonstrating significant improvement over the state of the art and qualitative properties which were obtained so far only in interpolating manifolds of lower dimensions.
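The linear blend of squared edge lengths mentioned above can be written per edge for two inputs as below (notation assumed here); with more than two inputs it becomes a convex combination of the squared edge lengths.

```latex
% Linear blend of squared edge lengths between input meshes 0 and 1, per edge e:
\ell_e^{2}(t) \;=\; (1 - t)\,\bigl(\ell_e^{(0)}\bigr)^{2} \;+\; t\,\bigl(\ell_e^{(1)}\bigr)^{2},
\qquad t \in [0, 1]
```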

SESSION: Synthesis in the arvo

Deep face normalization

From angling smiles to duck faces, all kinds of facial expressions can be seen in selfies, portraits, and Internet pictures. These photos are taken with various camera types, and under a vast range of angles and lighting conditions. We present a deep learning framework that can fully normalize unconstrained face images, i.e., remove perspective distortions, relight to an evenly lit environment, and predict a frontal and neutral face. Our method can produce a high resolution image while preserving important facial details and the likeness of the subject, along with the original background. We divide this ill-posed problem into three consecutive normalization steps, each using a different generative adversarial network that acts as an image generator. Perspective distortion removal is performed using a dense flow field predictor. A uniformly illuminated face is obtained using a lighting translation network, and the facial expression is neutralized using a generalized facial expression synthesis framework combined with a regression network based on deep features for facial recognition. We introduce new data representations for conditional inference, as well as training methods for supervised learning to ensure that different expressions of the same person can yield not only a plausible but also a similar neutral face. We demonstrate our results on a wide range of challenging images collected in the wild. Key applications of our method range from robust image-based 3D avatar creation and portrait manipulation to facial enhancement and reconstruction tasks for crime investigation. We also found, through an extensive user study, that our normalization results can hardly be distinguished from ground-truth ones if the viewer is not familiar with the person.

3D Ken Burns effect from a single image

The Ken Burns effect allows animating still images with a virtual camera scan and zoom. Adding parallax, which results in the 3D Ken Burns effect, enables significantly more compelling results. Creating such effects manually is time-consuming and demands sophisticated editing skills. Existing automatic methods, however, require multiple input images from varying viewpoints. In this paper, we introduce a framework that synthesizes the 3D Ken Burns effect from a single image, supporting both a fully automatic mode and an interactive mode with the user controlling the camera. Our framework first leverages a depth prediction pipeline, which estimates scene depth that is suitable for view synthesis tasks. To address the limitations of existing depth estimation methods such as geometric distortions, semantic distortions, and inaccurate depth boundaries, we develop a semantic-aware neural network for depth prediction, couple its estimate with a segmentation-based depth adjustment process, and employ a refinement neural network that facilitates accurate depth predictions at object boundaries. According to this depth estimate, our framework then maps the input image to a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding camera positions. To address disocclusions while maintaining geometrically and temporally coherent synthesis results, we utilize context-aware color- and depth-inpainting to fill in the missing information in the extreme views of the camera path, thus extending the scene geometry of the point cloud. Experiments with a wide variety of image content show that our method enables realistic synthesis results. Our study demonstrates that our system allows users to achieve better results while requiring little effort compared to existing solutions for the 3D Ken Burns effect creation.

Artistic glyph image synthesis via one-stage few-shot learning

Automatic generation of artistic glyph images is a challenging task that attracts many research interests. Previous methods either are specifically designed for shape synthesis or focus on texture transfer. In this paper, we propose a novel model, AGIS-Net, to transfer both shape and texture styles in one-stage with only a few stylized samples. To achieve this goal, we first disentangle the representations for content and style by using two encoders, ensuring the multi-content and multi-style generation. Then we utilize two collaboratively working decoders to generate the glyph shape image and its texture image simultaneously. In addition, we introduce a local texture refinement loss to further improve the quality of the synthesized textures. In this manner, our one-stage model is much more efficient and effective than other multi-stage stacked methods. We also propose a large-scale dataset with Chinese glyph images in various shape and texture styles, rendered from 35 professionally designed artistic fonts with 7,326 characters and 2,460 synthetic artistic fonts with 639 characters, to validate the effectiveness and extensibility of our method. Extensive experiments on both English and Chinese artistic glyph image datasets demonstrate the superiority of our model in generating high-quality stylized glyph images against other state-of-the-art methods.

A novel framework for inverse procedural texture modeling

Procedural textures are powerful tools that have been used in graphics for decades. In contrast to the alternative exemplar-based texture synthesis techniques, procedural textures provide user control and fast texture generation with low-storage cost and unlimited texture resolution. However, creating procedural models for complex textures requires a time-consuming process of selecting a combination of procedures and parameters. We present an example-based framework to automatically select procedural models and estimate parameters. In our framework, we consider textures categorized by commonly used high level classes. For each high level class we build a data-driven inverse modeling system based on an extensive collection of real-world textures and procedural texture models in the form of node graphs. We use unsupervised learning on collected real-world images in a texture class to learn sub-classes. We then classify the output of each of the collected procedural models into these sub-classes. For each of the collected models we train a convolutional neural network (CNN) to learn the parameters to produce a specific output texture. To use our framework, a user provides an exemplar texture image within a high level class. The system first classifies the texture into a sub-class, and selects the procedural models that produce output in that sub-class. The pre-trained CNNs of the selected models are used to estimate the parameters of the texture example. With the predicted parameters, the system can generate appropriate procedural textures for the user. The user can easily edit the textures by adjusting the node graph parameters. In a last optional step, style transfer augmentation can be applied to the fitted procedural textures to recover details lost in the procedural modeling process. We demonstrate our framework for four high level classes and show that our inverse modeling system can produce high-quality procedural textures for both structural and non-structural textures.

Comic-guided speech synthesis

We introduce a novel approach for synthesizing realistic speeches for comics. Using a comic page as input, our approach synthesizes speeches for each comic character following the reading flow. It adopts a cascading strategy to synthesize speeches in two stages: Comic Visual Analysis and Comic Speech Synthesis. In the first stage, the input comic page is analyzed to identify the gender and age of the characters, as well as texts each character speaks and corresponding emotion. Guided by this analysis, in the second stage, our approach synthesizes realistic speeches for each character, which are consistent with the visual observations. Our experiments show that the proposed approach can synthesize realistic and lively speeches for different types of comics. Perceptual studies performed on the synthesis results of multiple sample comics validate the efficacy of our approach.

SESSION: Fluids aflow

Transport-based neural style transfer for smoke simulations

Artistically controlling fluids has always been a challenging task. Optimization techniques rely on approximating simulation states towards target velocity or density field configurations, which are often handcrafted by artists to indirectly control smoke dynamics. Patch synthesis techniques transfer image textures or simulation features to a target flow field. However, these are either limited to adding structural patterns or augmenting coarse flows with turbulent structures, and hence cannot capture the full spectrum of different styles and semantically complex structures. In this paper, we propose the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric smoke data. Our method is able to transfer features from natural images to smoke simulations, enabling general content-aware manipulations ranging from simple patterns to intricate motifs. The proposed algorithm is physically inspired, since it computes the density transport from a source input smoke to a desired target configuration. Our transport-based approach allows direct control over the divergence of the stylization velocity field by optimizing incompressible and irrotational potentials that transport smoke towards stylization. Temporal consistency is ensured by transporting and aligning subsequent stylized velocities, and 3D reconstructions are computed by seamlessly merging stylizations from different camera viewpoints.

Consistent shepard interpolation for SPH-based fluid animation

We present a novel technique to correct errors introduced by the discretization of a fluid body when animating it with smoothed particle hydrodynamics (SPH). Our approach is based on the Shepard correction, which reduces the interpolation errors from irregularly spaced data. With Shepard correction, the smoothing kernel function is normalized using the weighted sum of the kernel function values in the neighborhood. To compute the correction factor, densities of neighboring particles are needed, which themselves are computed with the uncorrected kernel. This results in an inconsistent formulation and an error-prone correction of the kernel. As a consequence, the density computation may be inaccurate, thus the pressure forces are erroneous and may cause instabilities in the simulation process. We present a consistent formulation by using the corrected densities to compute the exact kernel correction factor and, thereby, increase the accuracy of the simulation. Employing our method, a smooth density distribution is achieved, i.e., the noise in the density field is reduced by orders of magnitude. To show that our method is independent of the SPH variant, we evaluate our technique on weakly compressible SPH and on divergence-free SPH. Incorporating the corrected density into the correction process, the problem cannot be stated explicitly anymore. We propose an efficient and easy-to-implement algorithm to solve the implicit problem by applying the power method. Additionally, we demonstrate how our model can be applied to improve the density distribution on rigid bodies when using a well-known rigid-fluid coupling approach.
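For reference, the standard Shepard normalization referenced above (computed with uncorrected densities) takes the form below, with notation assumed here; the paper's contribution is to use the corrected densities inside the normalization itself, which turns this into an implicit problem solved with the power method.

```latex
% Standard Shepard-corrected kernel and density for particle i with neighbors j, k,
% where V_k = m_k / \rho_k is the neighbor volume from the uncorrected densities:
\tilde{W}_{ij} \;=\; \frac{W_{ij}}{\sum_{k} V_k\, W_{ik}},
\qquad
\rho_i \;=\; \sum_{j} m_j\, \tilde{W}_{ij}
```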

A multi-scale model for coupling strands with shear-dependent liquid

We propose a framework for simulating the complex dynamics of strands interacting with compressible, shear-dependent liquids, such as oil paint, mud, cream, melted chocolate, and pasta sauce. Our framework contains three main components: the strands modeled as discrete rods, the bulk liquid represented as a continuum (material point method), and a reduced-dimensional flow of liquid on the surface of the strands with detailed elastoviscoplastic behavior. These three components are tightly coupled together. To enable discrete strands interacting with continuum-based liquid, we develop models that account for the volume change of the liquid as it passes through strands and the momentum exchange between the strands and the liquid. We also develop an extended constraint-based collision handling method that supports cohesion between strands. Furthermore, we present a principled method to preserve the total momentum of a strand and its surface flow, as well as an analytic plastic flow approach for Herschel-Bulkley fluid that enables stable semi-implicit integration at larger time steps. We explore a series of challenging scenarios, involving splashing, shaking, and agitating the liquid which causes the strands to stick together and become entangled.
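The Herschel-Bulkley model mentioned above is the standard constitutive relation between shear stress and shear rate with a yield stress and a power law; the symbols below are the conventional ones and are assumed here rather than taken from the paper.

```latex
% Herschel-Bulkley constitutive relation (flow occurs only above the yield stress \tau_y;
% K is the consistency index, n the flow index, \dot{\gamma} the shear rate):
\tau \;=\; \tau_y + K\,\dot{\gamma}^{\,n} \quad \text{for } \tau > \tau_y,
\qquad \dot{\gamma} = 0 \ \text{otherwise}
```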

The reduced immersed method for real-time fluid-elastic solid interaction and contact simulation

We introduce the Reduced Immersed Method (RIM) for the real-time simulation of two-way coupled incompressible fluids and elastic solids and the interaction of multiple deformables with (self-)collisions. Our framework is based on a novel discretization of the immersed boundary equations of motion, which model fluid and deformables as a single incompressible medium and their interaction as a unified system on a fixed domain combining Eulerian and Lagrangian terms. One advantage for real-time simulations resulting from this modeling is that two-way coupling phenomena can be faithfully simulated while avoiding costly calculations such as tracking the deforming fluid-solid interfaces and the associated fluid boundary conditions. Our discretization enables the combination of a PIC/FLIP fluid solver with a reduced-order Lagrangian elasticity solver. Crucial for the performance of RIM is the efficient transfer of information between the elasticity and the fluid solver and the synchronization of the Lagrangian and Eulerian settings. We introduce the concept of twin subspaces that enables an efficient reduced-order modeling of the transfer. Our experiments demonstrate that RIM handles complex meshes and highly resolved fluids for large time steps at high framerates on off-the-shelf hardware, even in the presence of high velocities and rapid user interaction. Furthermore, it extends reduced-order elasticity solvers such as Hyper-Reduced Projective Dynamics with natural collision handling.

A thermomechanical material point method for baking and cooking

We present a Material Point Method for visual simulation of baking breads, cookies, pancakes and similar materials that consist of dough or batter (mixtures of water, flour, eggs, fat, sugar and leavening agents). We develop a novel thermomechanical model using mixture theory to resolve interactions between individual water, gas and dough species. Heat transfer with thermal expansion is used to model thermal variations in material properties. Water-based mass transfer is resolved through the porous mixture, gas represents carbon dioxide produced by leavening agents in the baking process and dough is modeled as a viscoelastoplastic solid to represent its varied and complex rheological properties. Water content in the mixture reduces during the baking process according to Fick's Law which contributes to drying and cracking of crust at the material boundary. Carbon dioxide gas produced by leavening agents during baking creates internal pressure that causes rising. The viscoelastoplastic model for the dough is temperature dependent and is used to model melting and solidification. We discretize the governing equations using a novel Material Point Method designed to track the solid phase of the mixture.
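Fick's law, as referenced above, states that the diffusive mass flux is proportional to the negative concentration gradient; the standard form and the resulting transport equation are shown below, with D the diffusion coefficient and c the water concentration (symbols assumed here).

```latex
% Fick's law of diffusion and the resulting transport equation for concentration c:
\mathbf{J} \;=\; -D\,\nabla c,
\qquad
\frac{\partial c}{\partial t} \;=\; \nabla \cdot \bigl( D\, \nabla c \bigr)
```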

SESSION: Building knowledge

Design and structural optimization of topological interlocking assemblies

We study assemblies of convex rigid blocks regularly arranged to approximate a given freeform surface. Our designs rely solely on the geometric arrangement of blocks to form a stable assembly, neither requiring explicit connectors or complex joints, nor relying on friction between blocks. The convexity of the blocks simplifies fabrication, as they can be easily cut from different materials such as stone, wood, or foam. However, designing stable assemblies is challenging, since adjacent pairs of blocks are restricted in their relative motion only in the direction orthogonal to a single common planar interface surface. We show that despite this weak interaction, structurally stable, and in some cases, globally interlocking assemblies can be found for a variety of freeform designs. Our optimization algorithm is based on a theoretical link between static equilibrium conditions and a geometric, global interlocking property of the assembly---that an assembly is globally interlocking if and only if the equilibrium conditions are satisfied for arbitrary external forces and torques. Inspired by this connection, we define a measure of stability that spans from single-load equilibrium to global interlocking, motivated by tilt analysis experiments used in structural engineering. We use this measure to optimize the geometry of blocks to achieve a static equilibrium for a maximal cone of directions, as opposed to considering only self-load scenarios with a single gravity direction. In the limit, this optimization can achieve globally interlocking structures. We show how different geometric patterns give rise to a variety of design options and validate our results with physical prototypes.
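As a rough sketch of the equilibrium conditions referred to above (our own generic formulation, not necessarily the paper's exact variables or constraints): each block i must balance its interface forces against the external load, with contact forces restricted to compression at the planar interfaces, and interlocking asks for feasibility under arbitrary external loads.

```latex
% Per-block force and torque balance: f_k are contact forces applied at
% interface points x_k, c_i is the block's reference point, n_k the interface
% normal; the last constraint keeps contacts compression-only.
\sum_{k \in C(i)} \mathbf{f}_k + \mathbf{f}^{\mathrm{ext}}_i = \mathbf{0},
\qquad
\sum_{k \in C(i)} (\mathbf{x}_k - \mathbf{c}_i) \times \mathbf{f}_k + \boldsymbol{\tau}^{\mathrm{ext}}_i = \mathbf{0},
\qquad
\mathbf{f}_k \cdot \mathbf{n}_k \ge 0
```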

Extrusion-based ceramics printing with strictly-continuous deposition

We propose a method for integrated tool path planning and support structure generation tailored to the specific constraints of extrusion-based ceramics printing. Existing path generation methods for thermoplastic materials rely on transfer moves to navigate between different print paths in a given layer. However, when printing with clay, these transfer moves can lead to severe artifacts and failure. Our method eliminates transfer moves altogether by generating deposition paths that are continuous within and across layers. Our algorithm is implemented as a sequential top-down pass through the layer stack. In each layer, we detect points that require support, connect support points and model paths, and optimize the shape of the resulting continuous path with respect to length, smoothness, and distance to the model. For each of these subproblems, we propose dedicated solutions that take into account the fabrication constraints imposed by printable clay. We evaluate our method on a set of examples with multiple disconnected components and challenging support requirements. Comparisons to existing path generation methods designed for thermoplastic materials show that our method substantially improves print quality and often makes the difference between success and failure.

Carpentry compiler

Traditional manufacturing workflows strongly decouple design and fabrication phases. As a result, fabrication-related objectives such as manufacturing time and precision are difficult to optimize in the design space, and vice versa. This paper presents HL-HELM, a high-level, domain-specific language for expressing abstract, parametric fabrication plans; it also introduces LL-HELM, a low-level language for expressing concrete fabrication plans that take into account the physical constraints of available manufacturing processes. We present a new compiler that supports the real-time, unoptimized translation of high-level, geometric fabrication operations into concrete, tool-specific fabrication instructions; this gives users immediate feedback on the physical feasibility of plans as they design them. HELM offers novel optimizations to improve accuracy and reduce fabrication time as well as material costs. Finally, optimized low-level plans can be interpreted as step-by-step instructions for users to actually fabricate a physical product. We provide a variety of example fabrication plans in the carpentry domain that are designed using our high-level language, show how the compiler translates and optimizes these plans to generate concrete low-level instructions, and present the final physical products fabricated in wood.

Computational LEGO Technic design

SESSION: Geometry with style

Cubic stylization

We present a 3D stylization algorithm that can turn an input shape into the style of a cube while maintaining the content of the original shape. The key insight is that cubic style sculptures can be captured by the as-rigid-as-possible energy with an ℓ1-regularization on rotated surface normals. Minimizing this energy naturally leads to a detail-preserving, cubic geometry. Our optimization can be solved efficiently without any mesh surgery. Our method serves as a non-realistic modeling tool where one can incorporate many artistic controls to create stylized geometries.
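Concretely, the energy alluded to above pairs the as-rigid-as-possible (ARAP) term with an ℓ1 penalty on rotated unit normals, roughly as written below; this is our recollection of the formulation in our own notation, so treat the exact weighting as indicative.

```latex
% V' are deformed vertices, R_i per-vertex rotations, d_ij and d'_ij rest and
% deformed edge vectors, w_ij cotangent weights, a_i a vertex area weight,
% \hat{n}_i the rest unit normal, and \lambda controls the amount of "cubeness".
\min_{V',\,\{R_i\}} \;\;
\sum_i \sum_{j \in \mathcal{N}(i)} w_{ij}\,\big\lVert R_i d_{ij} - d'_{ij} \big\rVert_2^2
\;+\; \lambda \sum_i a_i \,\big\lVert R_i \hat{n}_i \big\rVert_1
```

The ℓ1 term is minimized when the rotated normals become axis-aligned, which pushes the surface toward cube-like facets while the ARAP term preserves local detail.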

LOGAN: unpaired shape transform in latent overcomplete space

We introduce LOGAN, a deep neural network aimed at learning general-purpose shape transforms from unpaired domains. The network is trained on two sets of shapes, e.g., tables and chairs, while there is neither a pairing between shapes from the domains as supervision nor any point-wise correspondence between any shapes. Once trained, LOGAN takes a shape from one domain and transforms it into the other. Our network consists of an autoencoder to encode shapes from the two input domains into a common latent space, where the latent codes concatenate multi-scale shape features, resulting in an overcomplete representation. The translator is based on a generative adversarial network (GAN), operating in the latent space, where an adversarial loss enforces cross-domain translation while a feature preservation loss ensures that the right shape features are preserved for a natural shape transform. We conduct ablation studies to validate each of our key network designs and demonstrate superior capabilities in unpaired shape transforms on a variety of examples over baselines and state-of-the-art approaches. We show that LOGAN is able to learn what shape features to preserve during shape translation, either local or non-local, whether content or style, depending solely on the input domains for training.

Orometry-based terrain analysis and synthesis

Mountainous digital terrains are an important element of many virtual environments and find application in games, film, simulation and training. Unfortunately, while existing synthesis methods produce locally plausible results, they often fail to respect global structure. This is exacerbated by a dearth of automated metrics for assessing terrain properties at a macro level.

We address these issues by building on techniques from orometry, a field that involves the measurement of mountains and other relief features. First, we construct a sparse metric computed on the peaks and saddles of a mountain range and show that, when used for classification, this is capable of robustly distinguishing between different mountain ranges. Second, we present a synthesis method that takes a coarse elevation map as input and builds a graph of peaks and saddles respecting a given orometric distribution. This is then expanded into a fully continuous elevation function by deriving a consistent river network and shaping the valley slopes. In terms of authoring, users provide various control maps and are also able to edit, reposition, insert and remove terrain features all while retaining the characteristics of a selected mountain range.

The result is a terrain analysis and synthesis method that considers and incorporates orometric properties, and is, on the basis of our perceptual study, more visually plausible than existing terrain generation methods.

Multi-theme generative adversarial terrain amplification

Achieving highly detailed terrain models spanning vast areas is crucial to modern computer graphics. The usual pipeline for obtaining such terrains amplifies a low-resolution terrain to refine its details according to a desired theme, which is a time-consuming and labor-intensive process. Recently, data-driven methods, such as the sparse construction tree, have provided a promising direction to equip the artist with better control over the theme.

These methods learn to amplify terrain details by using an exemplar of high-resolution detailed terrains to transfer the theme. In this paper, we propose Generative Adversarial Terrain Amplification (GATA) that achieves better local/global coherence compared to the existing data-driven methods while providing even more ways to control the theme. GATA is comprised of two key ingredients. The first one is a novel embedding of themes into vectors of real numbers to achieve a single tool for multi-theme amplification. The theme component can leverage existing LIDAR data to generate similar terrain features. It can also generate new fictional themes by tuning the embedding vector or even encoding a new example terrain into an embedding. The second one is an adversarially trained model that, conditioned on an embedding and a low-resolution terrain, generates a high-resolution terrain adhering to the desired theme. The proposed integral approach reduces the need for unnecessary manual adjustments, can speed up the development, and brings the model quality to a new level. Our implementation of the proposed method has proved successful in large-scale terrain authoring for an open-world game.

SESSION: Watch your language

Taichi: a language for high-performance computation on spatially sparse data structures

3D visual computing data are often spatially sparse. To exploit such sparsity, people have developed hierarchical sparse data structures, such as multi-level sparse voxel grids, particles, and 3D hash tables. However, developing and using these high-performance sparse data structures is challenging, due to their intrinsic complexity and overhead. We propose Taichi, a new data-oriented programming language for efficiently authoring, accessing, and maintaining such data structures. The language offers a high-level, data structure-agnostic interface for writing computation code. The user independently specifies the data structure. We provide several elementary components with different sparsity properties that can be arbitrarily composed to create a wide range of multi-level sparse data structures. This decoupling of data structures from computation makes it easy to experiment with different data structures without changing computation code, and allows users to write computation as if they are working with a dense array. Our compiler then uses the semantics of the data structure and index analysis to automatically optimize for locality, remove redundant operations for coherent accesses, maintain sparsity and memory allocations, and generate efficient parallel and vectorized instructions for CPUs and GPUs.

Our approach yields competitive performance on common computational kernels such as stencil applications, neighbor lookups, and particle scattering. We demonstrate our language by implementing simulation, rendering, and vision tasks including a material point method simulation, finite element analysis, a multigrid Poisson solver for pressure projection, volumetric path tracing, and 3D convolution on sparse grids. Our computation-data structure decoupling allows us to quickly experiment with different data arrangements, and to develop high-performance data structures tailored for specific computational tasks. With 1/10th as many lines of code, we achieve 4.55× higher performance on average, compared to hand-optimized reference implementations.
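To give a flavor of the data-structure/computation decoupling, here is a minimal sketch written against the publicly released Taichi Python front end; the names used below (e.g., ti.field) come from the present-day open-source API rather than the paper, so take it as an illustration of the idea rather than the paper's exact syntax.

```python
# Minimal sketch: the field x is declared once, its sparse layout is described
# separately, and the kernels would be unchanged under a different layout.
import taichi as ti

ti.init(arch=ti.cpu)

x = ti.field(dtype=ti.f32)                 # computation sees a "dense-looking" field
total = ti.field(dtype=ti.f32, shape=())   # 0-D field holding the reduction result

# Layout: a sparse pointer level of 64x64 blocks, each a dense 16x16 tile.
# Swapping this line for a purely dense layout requires no kernel changes.
ti.root.pointer(ti.ij, 64).dense(ti.ij, 16).place(x)

@ti.kernel
def fill_diagonal(n: ti.i32):
    for i in range(n):
        x[i, i] = 1.0          # writing activates the enclosing sparse block

@ti.kernel
def reduce_sum():
    for i, j in x:             # struct-for visits only active elements
        total[None] += x[i, j]

fill_diagonal(256)
reduce_sum()
print(total[None])             # expected: 256.0
```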

Staged metaprogramming for shader system development

The shader system for a modern game engine comprises much more than just compilation of source code to executable kernels. Shaders must also be exposed to art tools, interfaced with engine code, and specialized for performance. Engines typically address each of these tasks in an ad hoc fashion, without a unifying abstraction. The alternative of developing a more powerful compiler framework is prohibitive for most engines.

In this paper, we identify staged metaprogramming as a unifying abstraction and implementation strategy to develop a powerful shader system with modest effort. By using a multi-stage language to perform metaprogramming at compile time, engine-specific code can consume, analyze, transform, and generate shader code that will execute at runtime. Staged metaprogramming reduces the effort required to implement a shader system that provides earlier error detection, avoids repeat declarations of shader parameters, and explores opportunities to improve performance.

To demonstrate the value of this approach, we design and implement a shader system, called Selos, built using staged metaprogramming. In our system, shader and application code are written in the same language and can share types and functions. We implement a design space exploration framework for Selos that investigates static versus dynamic composition of shader features, exploring the impact of shader specialization in a deferred renderer. Staged metaprogramming allows Selos to provide compelling features with a simple implementation.

Mitsuba 2: a retargetable forward and inverse renderer

Modern rendering systems are confronted with a dauntingly large and growing set of requirements: in their pursuit of realism, physically based techniques must increasingly account for intricate properties of light, such as its spectral composition or polarization. To reduce prohibitive rendering times, vectorized renderers exploit coherence via instruction-level parallelism on CPUs and GPUs. Differentiable rendering algorithms propagate derivatives through a simulation to optimize an objective function, e.g., to reconstruct a scene from reference images. Catering to such diverse use cases is challenging and has led to numerous purpose-built systems---partly, because retrofitting features of this complexity onto an existing renderer involves an error-prone and infeasibly intrusive transformation of elementary data structures, interfaces between components, and their implementations (in other words, everything).

We propose Mitsuba 2, a versatile renderer that is intrinsically retargetable to various applications including the ones listed above. Mitsuba 2 is implemented in modern C++ and leverages template metaprogramming to replace types and instrument the control flow of components such as BSDFs, volumes, emitters, and rendering algorithms. At compile time, it automatically transforms arithmetic, data structures, and function dispatch, turning generic algorithms into a variety of efficient implementations without the tedium of manual redesign. Possible transformations include changing the representation of color, generating a "wide" renderer that operates on bundles of light paths, just-in-time compilation to create computational kernels that run on the GPU, and forward/reverse-mode automatic differentiation. Transformations can be chained, which further enriches the space of algorithms derived from a single generic implementation.

We demonstrate the effectiveness and simplicity of our approach on several applications that would be very challenging to create without assistance: a rendering algorithm based on coherent MCMC exploration, a caustic design method for gradient-index optics, and a technique for reconstructing heterogeneous media in the presence of multiple scattering.

Automatically translating image processing libraries to halide

This paper presents Dexter, a new tool that automatically translates image processing functions from a low-level general-purpose language to a high-level domain-specific language (DSL), allowing them to leverage cross-platform optimizations enabled by DSLs. Rather than building a classical syntax-driven compiler to do this translation, Dexter leverages recent advances in program synthesis and program verification, along with a new domain-specific synthesis algorithm, to translate C++ image processing code to the Halide DSL, while guaranteeing semantic equivalence. This new synthesis algorithm scales and generalizes to much larger and more complex functions than prior work, including the ability to handle tiling, conditionals, and multi-stage pipelines in the original low-level code. To demonstrate the effectiveness of our approach, we evaluate Dexter using real-world image processing functions from Adobe Photoshop, a widely used multi-platform image processing program. Our results show that Dexter can translate 264 out of 353 functions in our test set, with the original implementations ranging from 20 to 150 lines of code. By leveraging Halide's advanced auto-scheduling capabilities, we get median speedups of 7.03× and 4.52× for Dexter-translated functions as compared to the original implementations on Intel and ARM architectures, respectively.

SESSION: Learning to move

Learning predict-and-simulate policies from unorganized human motion data

The goal of this research is to create physically simulated biped characters equipped with a rich repertoire of motor skills. The user can control the characters interactively by modulating their control objectives. The characters can interact physically with each other and with the environment. We present a novel network-based algorithm that learns control policies from unorganized, minimally-labeled human motion data. The network architecture for interactive character animation incorporates an RNN-based motion generator into a DRL-based controller for physics simulation and control. The motion generator guides forward dynamics simulation by feeding a sequence of future motion frames to track. The rich future prediction facilitates policy learning from large training data sets. We will demonstrate the effectiveness of our approach with biped characters that learn a variety of dynamic motor skills from large, unorganized data and react to unexpected perturbation beyond the scope of the training data.

DReCon: data-driven responsive control of physics-based characters

Interactive control of self-balancing, physically simulated humanoids is a long standing problem in the field of real-time character animation. While physical simulation guarantees realistic interactions in the virtual world, simulated characters can appear unnatural if they perform unusual movements in order to maintain balance. Therefore, obtaining a high level of responsiveness to user control, runtime performance, and diversity has often been overlooked in exchange for motion quality. Recent work in the field of deep reinforcement learning has shown that training physically simulated characters to follow motion capture clips can yield high quality tracking results. We propose a two-step approach for building responsive simulated character controllers from unstructured motion capture data. First, meaningful features from the data such as movement direction, heading direction, speed, and locomotion style, are interactively specified and drive a kinematic character controller implemented using motion matching. Second, reinforcement learning is used to train a simulated character controller that is general enough to track the entire distribution of motion that can be generated by the kinematic controller. Our design emphasizes responsiveness to user input, visual quality, and low runtime cost for application in video-games.

Learning body shape variation in physics-based characters

Recently, deep reinforcement learning (DRL) has attracted great attention in designing controllers for physics-based characters. Despite the recent success of DRL, the learned controller is viable for a single character. Changes in body size and proportions require learning controllers from scratch. In this paper, we present a new method of learning parametric controllers for body shape variation. A single parametric controller enables us to simulate and control various characters having different heights, weights, and body proportions. The users are allowed to create new characters through body shape parameters, and they can control the characters immediately. Our characters can also change their body shapes on the fly during simulation. The key to the success of our approach includes the adaptive sampling of body shapes that tackles the challenges in learning parametric controllers, which relies on the marginal value function that measures control capabilities of body shapes. We demonstrate parametric controllers for various physically simulated characters such as bipeds, quadrupeds, and underwater animals.

SoftCon: simulation and control of soft-bodied animals with biomimetic actuators

We present a novel and general framework for the design and control of underwater soft-bodied animals. The whole body of an animal consisting of soft tissues is modeled by tetrahedral and triangular FEM meshes. The contraction of muscles embedded in the soft tissues actuates the body and limbs to move. We present a novel muscle excitation model that mimics the anatomy of muscular hydrostats and their muscle excitation patterns. Our deep reinforcement learning algorithm equipped with the muscle excitation model successfully learned the control policy of soft-bodied animals, which can be physically simulated in real-time, controlled interactively, and resilient to external perturbations. We demonstrate the effectiveness of our approach with various simulated animals including octopuses, lampreys, starfishes, stingrays and cuttlefishes. They learn diverse behaviors such as swimming, grasping, and escaping from a bottle. We also implemented a simple user interface system that allows the user to easily create their creatures.

Neural state machine for character-scene interactions

We propose Neural State Machine, a novel data-driven framework to guide characters to achieve goal-driven actions with precise scene interactions. Even a seemingly simple task such as sitting on a chair is notoriously hard to model with supervised learning. This difficulty is because such a task involves complex planning with periodic and non-periodic motions reacting to the scene geometry to precisely position and orient the character. Our proposed deep auto-regressive framework enables modeling of multi-modal scene interaction behaviors purely from data. Given high-level instructions such as the goal location and the action to be launched there, our system computes a series of movements and transitions to reach the goal in the desired state. To allow characters to adapt to a wide range of geometry such as different shapes of furniture and obstacles, we incorporate an efficient data augmentation scheme to randomly switch the 3D geometry while maintaining the context of the original motion. To increase the precision to reach the goal during runtime, we introduce a control scheme that combines egocentric inference and goal-centric inference. We demonstrate the versatility of our model with various scene interaction tasks such as sitting on a chair, avoiding obstacles, opening and entering through a door, and picking and carrying objects generated in real-time just from a single model.

SESSION: Thoughts on display

Reducing simulator sickness with perceptual camera control

Virtual reality provides an immersive environment but can induce cybersickness due to the discrepancy between visual and vestibular cues. To avoid this problem, the movement of the virtual camera needs to match the motion of the user in the real world. Unfortunately, this is usually difficult due to the mismatch between the size of the virtual environments and the space available to the users in the physical domain. The resulting constraints on the camera movement significantly hamper the adoption of virtual-reality headsets in many scenarios and make the design of the virtual environments very challenging. In this work, we study how the characteristics of the virtual camera movement (e.g., translational acceleration and rotational velocity) and the composition of the virtual environment (e.g., scene depth) contribute to perceived discomfort. Based on the results from our user experiments, we devise a computational model for predicting the magnitude of the discomfort for a given scene and camera trajectory. We further apply our model to a new path planning method that optimizes the input motion trajectory to reduce perceptual sickness. We evaluate the effectiveness of our method in improving perceptual comfort in a series of user studies targeting different applications. The results indicate that our method can reduce the perceived discomfort while maintaining the fidelity of the original navigation, and perform better than simpler alternatives.

DiCE: dichoptic contrast enhancement for VR and stereo displays

In stereoscopic displays, such as those used in VR/AR headsets, our eyes are presented with two different views. The disparity between the views is typically used to convey depth cues, but it could be also used to enhance image appearance. We devise a novel technique that takes advantage of binocular fusion to boost perceived local contrast and visual quality of images. Since the technique is based on fixed tone curves, it has negligible computational cost and it is well suited for real-time applications, such as VR rendering. To control the trade-off between contrast gain and binocular rivalry, we conduct a series of experiments to explain the factors that dominate rivalry perception in a dichoptic presentation where two images of different contrasts are displayed. With this new finding, we can effectively enhance contrast and control rivalry in mono- and stereoscopic images, and in VR rendering, as confirmed in validation experiments.

DeepFovea: neural reconstruction for foveated rendering and video compression using learned statistics of natural videos

In order to provide an immersive visual experience, modern displays require head mounting, high image resolution, low latency, as well as high refresh rate. This poses a challenging computational problem. On the other hand, the human visual system can consume only a tiny fraction of this video stream due to the drastic acuity loss in the peripheral vision. Foveated rendering and compression can save computations by reducing the image quality in the peripheral vision. However, this can cause noticeable artifacts in the periphery, or, if done conservatively, would provide only modest savings. In this work, we explore a novel foveated reconstruction method that employs the recent advances in generative adversarial neural networks. We reconstruct a plausible peripheral video from a small fraction of pixels provided every frame. The reconstruction is done by finding the closest matching video to this sparse input stream of pixels on the learned manifold of natural videos. Our method is more efficient than the state-of-the-art foveated rendering, while providing the visual experience with no noticeable quality degradation. We conducted a user study to validate our reconstruction method and compare it against existing foveated rendering and video compression techniques. Our method is fast enough to drive gaze-contingent head-mounted displays in real time on modern hardware. We plan to publish the trained network to establish a new quality bar for foveated rendering and compression as well as encourage follow-up research.

Wirtinger holography for near-eye displays

Near-eye displays using holographic projection are emerging as an exciting display approach for virtual and augmented reality at high resolution without complex optical setups --- shifting optical complexity to computation. While precise phase modulation hardware is becoming available, phase retrieval algorithms are still in their infancy, and holographic display approaches resort to heuristic encoding methods or iterative methods relying on various relaxations.

In this work, we depart from such existing approximations and solve the phase retrieval problem for a hologram of a scene at a single depth at a given time by revisiting complex Wirtinger derivatives, also extending our framework to render 3D volumetric scenes. Using Wirtinger derivatives allows us to pose the phase retrieval problem as a quadratic problem which can be minimized with first-order optimization methods. The proposed Wirtinger Holography is flexible and facilitates the use of different loss functions, including learned perceptual losses parametrized by deep neural networks, as well as stochastic optimization methods. We validate this framework by demonstrating holographic reconstructions with an order of magnitude lower error, both in simulation and on an experimental hardware prototype.
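For readers unfamiliar with Wirtinger calculus, the derivatives it introduces and the plain gradient-descent update they enable are shown below; these are standard definitions in our notation, whereas the paper builds its full quadratic phase-retrieval formulation and more advanced first-order solvers on top of them.

```latex
% Wirtinger derivatives of f with respect to z = x + iy and its conjugate;
% for a real-valued loss L, the steepest-descent direction is the conjugate
% derivative, giving the first-order update on the right.
\frac{\partial f}{\partial z} = \frac{1}{2}\!\left(\frac{\partial f}{\partial x} - i\,\frac{\partial f}{\partial y}\right),
\qquad
\frac{\partial f}{\partial \bar z} = \frac{1}{2}\!\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right),
\qquad
z^{(k+1)} = z^{(k)} - \alpha\,\frac{\partial L}{\partial \bar z}\bigg|_{z = z^{(k)}}
```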

Holographic near-eye displays based on overlap-add stereograms

Holographic near-eye displays are a key enabling technology for virtual and augmented reality (VR/AR) applications. Holographic stereograms (HS) are a method of encoding a light field into a hologram, which enables them to natively support view-dependent lighting effects. However, existing HS algorithms require the choice of a hogel size, forcing a tradeoff between spatial and angular resolution. Based on the fact that the short-time Fourier transform (STFT) connects a hologram to its observable light field, we develop the overlap-add stereogram (OLAS) as the correct method of "inverting" the light field into a hologram via the STFT. The OLAS makes more efficient use of the information contained within the light field than previous HS algorithms, exhibiting better image quality at a range of distances and hogel sizes. Most remarkably, the OLAS does not degrade spatial resolution with increasing hogel size, overcoming the spatio-angular resolution tradeoff that previous HS algorithms face. Importantly, the optimal hogel size of previous methods typically varies with the depth of every object in a scene, making the OLAS not only a hogel size-invariant method, but also nearly scene independent. We demonstrate the performance of the OLAS both in simulation and on a prototype near-eye display system, showing focusing capabilities and view-dependent effects.
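As background for the overlap-add construction, the one-dimensional STFT and its overlap-add inverse can be written as follows; these are standard signal-processing identities in our notation, with the hologram case applying the same machinery over the spatial and angular dimensions of the light field.

```latex
% STFT of u with window w and hop size H; each inverse-transformed frame u_m
% equals u[n] w[n - mH], so summing frames recovers u up to the constant C,
% provided the window satisfies the constant-overlap-add condition.
U(m,\omega) = \sum_{n} u[n]\, w[n - mH]\, e^{-i\omega n},
\qquad
u_m[n] = \frac{1}{2\pi}\int_{-\pi}^{\pi} U(m,\omega)\, e^{i\omega n}\,\mathrm{d}\omega,
\qquad
u[n] = \frac{1}{C}\sum_{m} u_m[n],
\quad C = \sum_{m} w[n - mH]
```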

SESSION: Light hardware

Tomographic projector: large scale volumetric display with uniform viewing experiences

Over the past century, as displays have evolved, people have demanded more realistic and immersive experiences in theaters. Here, we present a tomographic projector for a volumetric display system that accommodates large audiences while providing a uniform experience. The tomographic projector combines a high-speed digital micromirror device with three spatial light modulators to refresh projection images at 7200 Hz. With synchronization of the tomographic projector and wearable focus-tunable eyepieces, the presented system can reconstruct 60 focal planes for volumetric representation right in front of audiences. We demonstrate a proof of concept of the proposed system by implementing a miniaturized theater environment. Experimentally, we show that this system has a wide expressible depth range, with focus cues from 25 cm to optical infinity within sufficient tolerance, while preserving high resolution and contrast. We also confirm, through simulation and experiment, that the proposed system provides a uniform experience across a wide viewing zone. Additionally, the tomographic projector can equalize the vergence state, which in a conventional stereoscopic 3D theater varies with viewing position and interpupillary distance. The study concludes with a thorough discussion of the challenges and research issues surrounding tomographic projectors.

An integrated 6DoF video camera and system design

Designing a fully integrated 360° video camera supporting 6DoF head motion parallax requires overcoming many technical hurdles, including camera placement, optical design, sensor resolution, system calibration, real-time video capture, depth reconstruction, and real-time novel view synthesis. While there is a large body of work describing various system components, such as multi-view depth estimation, our paper is the first to describe a complete, reproducible system that considers the challenges arising when designing, building, and deploying a full end-to-end 6DoF video camera and playback environment. Our system includes a computational imaging software pipeline supporting online markerless calibration, high-quality reconstruction, and real-time streaming and rendering. Most of our exposition is based on a professional 16-camera configuration, which will be commercially available to film producers. However, our software pipeline is generic and can handle a variety of camera geometries and configurations. The entire calibration and reconstruction software pipeline, along with example datasets, is open-sourced to encourage follow-up research in high-quality 6DoF video reconstruction and rendering.

The relightables: volumetric performance capture of humans with realistic relighting

We present "The Relightables", a volumetric capture system for photorealistic and high quality relightable full-body performance capture. While significant progress has been made on volumetric capture systems, focusing on 3D geometric reconstruction with high resolution textures, much less work has been done to recover photometric properties needed for relighting. Results from such systems lack high-frequency details and the subject's shading is prebaked into the texture. In contrast, a large body of work has addressed relightable acquisition for image-based approaches, which photograph the subject under a set of basis lighting conditions and recombine the images to show the subject as they would appear in a target lighting environment. However, to date, these approaches have not been adapted for use in the context of a high-resolution volumetric capture system. Our method combines this ability to realistically relight humans for arbitrary environments, with the benefits of free-viewpoint volumetric capture and new levels of geometric accuracy for dynamic performances. Our subjects are recorded inside a custom geodesic sphere outfitted with 331 custom color LED lights, an array of high-resolution cameras, and a set of custom high-resolution depth sensors. Our system innovates in multiple areas: First, we designed a novel active depth sensor to capture 12.4 MP depth maps, which we describe in detail. Second, we show how to design a hybrid geometric and machine learning reconstruction pipeline to process the high resolution input and output a volumetric video. Third, we generate temporally consistent reflectance maps for dynamic performers by leveraging the information contained in two alternating color gradient illumination images acquired at 60Hz. Multiple experiments, comparisons, and applications show that The Relightables significantly improves upon the level of realism in placing volumetrically captured human performances into arbitrary CG scenes.

Modeling endpoint distribution of pointing selection tasks in virtual reality environments

Understanding the endpoint distribution of pointing selection tasks can reveal the underlying patterns of how users tend to acquire a target, which is one of the most essential and pervasive tasks in interactive systems. It could further aid designers in creating new graphical user interfaces and interaction techniques that are optimized for accuracy, efficiency, and ease of use. Previous research has explored the modeling of endpoint distribution outside of virtual reality (VR) systems, which has been shown to be useful in predicting selection accuracy and guiding the design of new interaction techniques. This work aims at developing an endpoint distribution model of selection tasks for VR systems, which has resulted in EDModel, a novel model that can be used to predict the endpoint distribution of pointing selection tasks in VR environments. The development of EDModel is based on two user studies that have explored how factors such as target size, movement amplitude, and target depth affect the endpoint distribution. The model is built from the collected data and its generalizability is subsequently tested in complex scenarios with more relaxed conditions. Three applications of EDModel inspired by previous research are evaluated to show the broad applicability and usefulness of the model: correcting the bias in Fitts's law, predicting selection accuracy, and enhancing pointing selection techniques. Overall, EDModel can achieve high prediction accuracy and can be adapted to different types of applications in VR.
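For context, the Shannon formulation of Fitts's law that endpoint-distribution models typically plug into is given below; this is standard HCI background rather than the paper's contribution, and the effective-width correction shown is the commonly used one, which may differ in detail from EDModel's.

```latex
% Movement time MT as a function of amplitude A and target width W, with
% empirically fitted constants a and b; when endpoints scatter with standard
% deviation \sigma, W is often replaced by the effective width W_e.
MT = a + b\,\log_2\!\left(\frac{A}{W} + 1\right),
\qquad
W_e \approx 4.133\,\sigma
```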

Learned large field-of-view imaging with thin-plate optics

Typical camera optics consist of a system of individual elements that are designed to compensate for the aberrations of a single lens. Recent computational cameras shift some of this correction task from the optics to post-capture processing, reducing the imaging optics to only a few optical elements. However, these systems only achieve reasonable image quality by limiting the field of view (FOV) to a few degrees --- effectively ignoring severe off-axis aberrations with blur sizes of several hundred pixels.

In this paper, we propose a lens design and learned reconstruction architecture that lift this limitation and provide an order of magnitude increase in field of view using only a single thin-plate lens element. Specifically, we design a lens to produce spatially shift-invariant point spread functions, over the full FOV, that are tailored to the proposed reconstruction architecture. We achieve this with a mixture PSF, consisting of a peak and a low-pass component, which provides residual contrast instead of a small spot size as in traditional lens designs. To perform the reconstruction, we train a deep network on captured data from a display lab setup, eliminating the need for manual acquisition of training data in the field. We assess the proposed method in simulation and experimentally with a prototype camera system.

We compare our system against existing single-element designs, including an aspherical lens and a pinhole, and we compare against a complex multielement lens, validating high-quality large field-of-view (i.e. 53°) imaging performance using only a single thin-plate element.

SESSION: Looking & sounding great

Learning an intrinsic garment space for interactive authoring of garment animation

Authoring dynamic garment shapes for character animation on body motion is one of the fundamental steps in the CG industry. Established workflows are either time- and labor-consuming (i.e., manual editing on dense frames with controllers), or lack keyframe-level control (i.e., physically-based simulation). Not surprisingly, garment authoring remains a bottleneck in many production pipelines. Instead, we present a deep-learning-based approach for semi-automatic authoring of garment animation, wherein the user provides the desired garment shape in a selection of keyframes, while our system infers a latent representation for its motion-independent intrinsic parameters (e.g., gravity, cloth materials, etc.). Given new character motions, the latent representation allows us to automatically generate a plausible garment animation at interactive rates. Having factored out character motion, the learned intrinsic garment space enables smooth transition between keyframes on a new motion sequence. Technically, we learn an intrinsic garment space with a motion-driven autoencoder network, where the encoder maps the garment shapes to the intrinsic space under the condition of body motions, while the decoder acts as a differentiable simulator to generate garment shapes according to changes in character body motion and intrinsic parameters. We evaluate our approach qualitatively and quantitatively on common garment types. Experiments demonstrate our system can significantly improve current garment authoring workflows via an interactive user interface. Compared with the standard CG pipeline, our system significantly reduces the ratio of required keyframes from 20% to 1 -- 2%.

Biomimetic eye modeling & deep neuromuscular oculomotor control

We present a novel, biomimetic model of the eye for realistic virtual human animation. We also introduce a deep learning approach to oculomotor control that is compatible with our biomechanical eye model. Our eye model consists of the following functional components: (i) submodels of the 6 extraocular muscles that actuate realistic eye movements, (ii) an iris submodel, actuated by pupillary muscles, that accommodates to incoming light intensity, (iii) a corneal submodel and a deformable, ciliary-muscle-actuated lens submodel, which refract incoming light rays for focal accommodation, and (iv) a retina with a multitude of photoreceptors arranged in a biomimetic, foveated distribution. The light intensity captured by the photoreceptors is computed using ray tracing from the photoreceptor positions through the finite aperture pupil into the 3D virtual environment, and the visual information from the retina is output via an optic nerve vector. Our oculomotor control system includes a foveation controller implemented as a locally-connected, irregular Deep Neural Network (DNN), or "LiNet", that conforms to the nonuniform retinal photoreceptor distribution, and a neuromuscular motor controller implemented as a fully-connected DNN, plus auxiliary Shallow Neural Networks (SNNs) that control the accommodation of the pupil and lens. The DNNs are trained offline through deep learning from data synthesized by the eye model itself. Once trained, the oculomotor control system operates robustly and efficiently online. It innervates the intraocular muscles to perform illumination and focal accommodation and the extraocular muscles to produce natural eye movements in order to foveate and pursue moving visual targets. We additionally demonstrate the operation of our eye model (binocularly) within our recently introduced sensorimotor control framework involving an anatomically-accurate biomechanical human musculoskeletal model.

Acoustic texture rendering for extended sources in complex scenes

Extended stochastic sources, like falling rain or a flowing waterway, provide an immersive ambience in virtual environments. In complex scenes, the rendered sound should vary naturally with listener position, differing not only in overall loudness but also in texture, to capture the indistinct murmur of a faraway brook versus the bright babbling of one up close. Modeling an ambient sound as a collection of random events such as individual raindrop impacts or water bubble oscillations, this variation can be seen as a change in the statistical distribution of events heard by the listener: the arrival rate of nearby, louder events relative to more distant or occluded, quieter ones. Reverberation and edge diffraction from scene geometry multiply and mix events more extensively compared to an empty scene and introduce salient spatial variation in texture. We formalize the notion of acoustic texture by introducing the event loudness density (ELD), which relates the rapidity of received events to their loudness. To model spatial variation in texture, the ELD is made a function of listener location in the scene. We show that this ELD field can be extracted from a single wave simulation for each extended source and rendered flexibly using a granular synthesis pipeline, with grains derived procedurally or from recordings. Our system yields believable, real-time changes in acoustic texture as the listener moves, driven by sound propagation in the scene.
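To make the granular-synthesis idea concrete, here is a purely hypothetical sketch in which events in each loudness bin arrive as a Poisson process whose rate plays the role of an ELD sample; all names and the exact parameterization are our own assumptions, not the paper's pipeline.

```python
# Hypothetical granular-synthesis sketch: each loudness bin contributes grains
# at a Poisson rate (events/second), scaled by that bin's loudness.
import numpy as np

def render_texture(rates, gains, grains, duration, sr=48000, rng=None):
    """rates[i]: event rate of bin i; gains[i]: loudness of bin i;
    grains: list of mono numpy arrays used as grain waveforms."""
    rng = rng or np.random.default_rng(0)
    out = np.zeros(int(duration * sr))
    for rate, gain in zip(rates, gains):
        n_events = rng.poisson(rate * duration)
        starts = rng.integers(0, len(out), size=n_events)
        for s in starts:
            g = gain * grains[rng.integers(len(grains))]
            e = min(len(out), s + len(g))
            out[s:e] += g[: e - s]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    grains = [np.hanning(256) * rng.standard_normal(256) for _ in range(8)]
    audio = render_texture(rates=[20.0, 5.0, 1.0], gains=[0.1, 0.3, 1.0],
                           grains=grains, duration=2.0, rng=rng)
```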

SESSION: Samples & speckles

GradNet: unsupervised deep screened Poisson reconstruction for gradient-domain rendering

Monte Carlo (MC) methods for light transport simulation are flexible and general but typically suffer from high variance and slow convergence. Gradient-domain rendering alleviates this problem by additionally generating image gradients and reformulating rendering as a screened Poisson image reconstruction problem. To improve the quality and performance of the reconstruction, we propose a novel and practical deep learning based approach in this paper. The core of our approach is a multi-branch auto-encoder, termed GradNet, which end-to-end learns a mapping from a noisy input image and its corresponding image gradients to a high-quality image with low variance. Once trained, our network is fast to evaluate and does not require manual parameter tweaking. Due to the difficulty in preparing ground-truth images for training, we design and train our network in a completely unsupervised manner by learning directly from the input data. This is the first solution incorporating unsupervised deep learning into the gradient-domain rendering framework. The loss function is defined as an energy function including a data fidelity term and a gradient fidelity term. To further reduce the noise of the reconstructed image, the loss function is reinforced by adding a regularizer constructed from selected rendering-specific features. We demonstrate that our method improves the reconstruction quality for a diverse set of scenes, and reconstructing a high-resolution image takes far less than one second on a recent GPU.
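Schematically, the screened Poisson energy with its data and gradient fidelity terms looks as follows; this is the generic form in our notation, and the paper augments it with the feature-based regularizer mentioned above.

```latex
% I is the reconstructed image, \tilde{I} the noisy primal image, G the sampled
% image gradients, and \alpha balances data fidelity against gradient fidelity.
E(I) = \alpha\,\lVert I - \tilde{I} \rVert_2^2 + \lVert \nabla I - G \rVert_2^2
```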

Adversarial Monte Carlo denoising with conditioned auxiliary feature modulation

Denoising Monte Carlo rendering with a very low sample rate remains a major challenge in the photo-realistic rendering research. Many previous works, including regression-based and learning-based methods, have been explored to achieve better rendering quality with less computational cost. However, most of these methods rely on handcrafted optimization objectives, which lead to artifacts such as blurs and unfaithful details. In this paper, we present an adversarial approach for denoising Monte Carlo rendering. Our key insight is that generative adversarial networks can help denoiser networks to produce more realistic high-frequency details and global illumination by learning the distribution from a set of high-quality Monte Carlo path tracing images. We also adapt a novel feature modulation method to utilize auxiliary features better, including normal, albedo and depth. Compared to previous state-of-the-art methods, our approach produces a better reconstruction of the Monte Carlo integral from a few samples, performs more robustly at different sample rates, and takes only a second for megapixel images.

Learning generative models for rendering specular microgeometry

Rendering specular material appearance is a core problem of computer graphics. While smooth analytical material models are widely used, the high-frequency structure of real specular highlights requires considering discrete, finite microgeometry. Instead of explicit modeling and simulation of the surface microstructure (which was explored in previous work), we propose a novel direction: learning the high-frequency directional patterns from synthetic or measured examples, by training a generative adversarial network (GAN). A key challenge in applying GAN synthesis to spatially varying BRDFs is evaluating the reflectance for a single location and direction without the cost of evaluating the whole hemisphere. We resolve this using a novel method for partial evaluation of the generator network. We are also able to control large-scale spatial texture using a conditional GAN approach. The benefits of our approach include the ability to synthesize spatially large results without repetition, support for learning from measured data, and evaluation performance independent of the complexity of the dataset synthesis or measurement.

Deep point correlation design

Designing point patterns with desired properties can require substantial effort, both in hand-crafting code and in mathematical derivation. Retaining these properties in multiple dimensions or for a substantial number of points can be challenging and computationally expensive. Tackling those two issues, we suggest automatically generating scalable point patterns from design goals using deep learning. We phrase pattern generation as a deep composition of weighted distance-based unstructured filters. Deep point pattern design means to optimize over the space of all such compositions according to a user-provided point correlation loss, a small program which measures a pattern's fidelity with respect to its spatial or spectral statistics, linear or non-linear (e.g., radial) projections, or any arbitrary combination thereof. Our analysis shows that we can emulate a large set of existing patterns (blue, green, step, projective, stair, etc.-noise), generalize them to countless new combinations in a systematic way, and leverage existing error estimation formulations to generate novel point patterns for a user-provided class of integrand functions. Our point patterns scale favorably to multiple dimensions and numbers of points: we demonstrate nearly 10k points in 10-D produced in one second on one GPU. All the resources (source code and the pre-trained networks) can be found at https://sampling.mpi-inf.mpg.de/deepsampling.html.

SESSION: Differentiable rendering

A differential theory of radiative transfer

Physics-based differentiable rendering is the task of estimating the derivatives of radiometric measures with respect to scene parameters. The ability to compute these derivatives is necessary for enabling gradient-based optimization in a diverse array of applications: from solving analysis-by-synthesis problems to training machine learning pipelines incorporating forward rendering processes. Unfortunately, physics-based differentiable rendering remains challenging, due to the complex and typically nonlinear relation between pixel intensities and scene parameters.

We introduce a differential theory of radiative transfer, which shows how individual components of the radiative transfer equation (RTE) can be differentiated with respect to arbitrary differentiable changes of a scene. Our theory encompasses the same generality as the standard RTE, allowing differentiation while accurately handling a large range of light transport phenomena such as volumetric absorption and scattering, anisotropic phase functions, and heterogeneity. To numerically estimate the derivatives given by our theory, we introduce an unbiased Monte Carlo estimator supporting arbitrary surface and volumetric configurations. Our technique differentiates path contributions symbolically and uses additional boundary integrals to capture geometric discontinuities such as visibility changes.

We validate our method by comparing our derivative estimations to those generated using the finite-difference method. Furthermore, we use a few synthetic examples inspired by real-world applications in inverse rendering, non-line-of-sight (NLOS) and biomedical imaging, and design, to demonstrate the practical usefulness of our technique.
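The interior-plus-boundary split used by the estimator can be illustrated by the generic rule for differentiating an integral whose domain depends on a scene parameter π (a schematic Reynolds-transport statement in our notation, not the paper's full differential form of the RTE).

```latex
% The first term differentiates path contributions symbolically; the second,
% a boundary integral, accounts for discontinuities (e.g., visibility changes)
% moving with velocity v along the outward normal n of the domain boundary.
\frac{\partial}{\partial \pi} \int_{\Omega(\pi)} f(x,\pi)\,\mathrm{d}x
= \int_{\Omega(\pi)} \frac{\partial f(x,\pi)}{\partial \pi}\,\mathrm{d}x
+ \int_{\partial\Omega(\pi)} f(x,\pi)\,\big(v(x)\cdot n(x)\big)\,\mathrm{d}A(x)
```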

Reparameterizing discontinuous integrands for differentiable rendering

Differentiable rendering has recently opened the door to a number of challenging inverse problems involving photorealistic images, such as computational material design and scattering-aware reconstruction of geometry and materials from photographs. Differentiable rendering algorithms strive to estimate partial derivatives of pixels in a rendered image with respect to scene parameters, which is difficult because visibility changes are inherently non-differentiable.

We propose a new technique for differentiating path-traced images with respect to scene parameters that affect visibility, including the position of cameras, light sources, and vertices in triangle meshes. Our algorithm computes the gradients of illumination integrals by applying changes of variables that remove or strongly reduce the dependence of the position of discontinuities on differentiable scene parameters. The underlying parameterization is created on the fly for each integral and enables accurate gradient estimates using standard Monte Carlo sampling in conjunction with automatic differentiation. Importantly, our approach does not rely on sampling silhouette edges, which has been a bottleneck in previous work and tends to produce high-variance gradients when important edges are found with insufficient probability in scenes with complex visibility and high-resolution geometry. We show that our method only requires a few samples to produce gradients with low bias and variance for challenging cases such as glossy reflections and shadows. Finally, we use our differentiable path tracer to reconstruct the 3D geometry and materials of several real-world objects from a set of reference photographs.
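The change-of-variables idea can be summarized as below (our notation, with T a differentiable reparameterization and p the sampling density); the key requirement is that the discontinuities of the transformed integrand stay essentially fixed as θ varies, so the derivative can be moved under the integral and estimated by ordinary Monte Carlo sampling with automatic differentiation.

```latex
% Reparameterized integral and its Monte Carlo gradient estimate.
I(\theta) = \int_{\Omega} f(x,\theta)\,\mathrm{d}x
          = \int_{\Omega} f\big(T(y,\theta),\theta\big)\,\big|\det J_T(y,\theta)\big|\,\mathrm{d}y,
\qquad
\frac{\partial I}{\partial \theta} \approx \frac{1}{N}\sum_{k=1}^{N}
\frac{\partial}{\partial \theta}\!\left[\frac{f\big(T(y_k,\theta),\theta\big)\,\big|\det J_T(y_k,\theta)\big|}{p(y_k)}\right]
```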

Non-linear sphere tracing for rendering deformed signed distance fields

Signed distance fields (SDFs) are a powerful implicit representation for modeling solids, volumes and surfaces. Their infinite resolution, controllable continuity and robust constructive solid geometry operations, coupled with smooth blending, enable powerful and intuitive sculpting tools for creating complex SDF models. SDF metric properties also admit efficient surface rendering with sphere tracing. Unfortunately, SDFs remain incompatible with many popular direct deformation techniques which re-position a surface via its explicit representation. Linear blend skinning used in character articulation, for example, directly displaces each vertex of a triangle mesh. To overcome this limitation, we propose a variant of sphere tracing for directly rendering deformed SDFs. We show that this problem reduces to integrating a non-linear ordinary differential equation. We propose an efficient numerical solution, with controllable error, which first automatically computes an initial value along each cast ray before walking conservatively along a curved ray in the undeformed space according to the signed distance. Importantly, our approach does not require knowledge, computation or even global existence of the inverse deformation, which allows us to readily apply many existing forward deformations. We demonstrate our method's effectiveness for interactive rendering of a variety of popular deformation techniques that were, to date, limited to explicit surfaces.
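For orientation, classic sphere tracing of an undeformed SDF is summarized in the reference sketch below; the paper's contribution is to replace the straight ray with a curved ray obtained by integrating an ODE in the undeformed space, while keeping the same conservative, distance-bounded stepping. This is a generic illustration, not the authors' code.

```python
# Classic sphere tracing: step along a straight ray by the signed distance,
# which conservatively bounds the distance to the nearest surface.
import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=128, eps=1e-4, t_max=100.0):
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)              # safe step size: distance to the closest surface
        if d < eps:
            return p            # hit
        t += d
        if t > t_max:
            break
    return None                 # miss

if __name__ == "__main__":
    # Unit sphere centered at (0, 0, 3); a ray from the origin along +z hits it at z = 2.
    sphere = lambda p: np.linalg.norm(p - np.array([0.0, 0.0, 3.0])) - 1.0
    print(sphere_trace(sphere, np.zeros(3), np.array([0.0, 0.0, 1.0])))
```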

Differentiable surface splatting for point-based geometry processing

We propose Differentiable Surface Splatting (DSS), a high-fidelity differentiable renderer for point clouds. Gradients for point locations and normals are carefully designed to handle discontinuities of the rendering function. Regularization terms are introduced to ensure uniform distribution of the points on the underlying surface. We demonstrate applications of DSS to inverse rendering for geometry synthesis and denoising, where large scale topological changes, as well as small scale detail modifications, are accurately and robustly handled without requiring explicit connectivity, outperforming state-of-the-art techniques. The data and code are at https://github.com/yifita/DSS.

The camera offset space: real-time potentially visible set computations for streaming rendering

Potential visibility has historically been important whenever rendering performance was insufficient. With the rise of virtual reality, rendering power may once again be insufficient, e.g., for integrated graphics of head-mounted displays. To tackle the issue of efficient potential visibility computations on modern graphics hardware, we introduce the camera offset space (COS). In contrast to how traditional visibility computations work---where one determines which pixels are covered by an object under all potential viewpoints---the COS describes under which camera movements a sample location is covered by a triangle. In this way, the COS opens up a new set of possibilities for visibility computations. By evaluating the pairwise relations of triangles in the COS, we show how to efficiently determine occluded triangles. Constructing the COS for all pixels of a rendered view leads to a complete potentially visible set (PVS) for complex scenes. By fusing triangles to larger occluders, including locations between pixel centers, and considering camera rotations, we describe an exact PVS algorithm that includes all viewing directions inside a view cell. Implementing the COS is a combination of real-time rendering and compute steps. We provide the first GPU PVS implementation that works without preprocessing, on-the-fly, on unconnected triangles. This opens the door to a new approach to rendering for virtual reality head-mounted displays and server-client settings for streaming 3D applications such as video games.

SESSION: Hairy & sketchy geometry

OpenSketch: a richly-annotated dataset of product design sketches

Product designers extensively use sketches to create and communicate 3D shapes and thus form an ideal audience for sketch-based modeling, non-photorealistic rendering and sketch filtering. However, sketching requires significant expertise and time, making design sketches a scarce resource for the research community. We introduce OpenSketch, a dataset of product design sketches aimed at offering a rich source of information for a variety of computer-aided design tasks. OpenSketch contains more than 400 sketches representing 12 man-made objects drawn by 7 to 15 product designers of varying expertise. We provided participants with front, side and top views of these objects, and instructed them to draw from two novel perspective viewpoints. This drawing task forces designers to construct the shape from their mental vision rather than directly copy what they see. They achieved this by employing a variety of sketching techniques and methods not observed in prior datasets. Together with industrial design teachers, we distilled a taxonomy of line types and used it to label each stroke of the 214 sketches drawn from one of the two viewpoints. While some of these lines have long been known in computer graphics, others remain to be reproduced algorithmically or exploited for shape inference. In addition, we asked participants to produce clean presentation drawings from each of their sketches, resulting in aligned pairs of drawings in different styles. Finally, we registered each sketch to its reference 3D model by annotating sparse correspondences. We provide an analysis of our annotated sketches, which reveals systematic drawing strategies over time and shapes, as well as a positive correlation between the presence of construction lines and accuracy. Our sketches, in combination with the provided annotations, form challenging benchmarks for existing algorithms as well as a great source of inspiration for future developments. We illustrate the versatility of our data by using it to test a 3D reconstruction deep network trained on synthetic drawings, as well as to train a filtering network to convert concept sketches into presentation drawings. We distribute our dataset under the Creative Commons CC0 license: https://ns.inria.fr/d3/OpenSketch.

Language-based colorization of scene sketches

Natural, touchless, and engaging, language-based inputs have proven effective for tasks ranging from image generation to literacy education for children. This paper presents the first language-based system for interactive colorization of scene sketches, based on semantic comprehension. The proposed system is built upon deep neural networks trained on a large-scale repository of scene sketches and cartoon-style color images with text descriptions. Given a scene sketch, our system allows users, via language-based instructions, to interactively localize and colorize specific foreground object instances to meet various colorization requirements in a progressive way. We demonstrate the effectiveness of our approach via comprehensive experimental results, including alternative studies, comparisons with state-of-the-art methods, and generalization user studies. Given the unique characteristics of language-based input, we envision combining our interface with a traditional scribble-based interface into a practical multimodal colorization system, benefiting various applications. The dataset and source code can be found at https://github.com/SketchyScene/SketchySceneColorization.

Data-driven interior plan generation for residential buildings

We propose a novel data-driven technique for automatically and efficiently generating floor plans for residential buildings with given boundaries. Central to this method is a two-stage approach that imitates the human design process by locating rooms first and then walls while adapting to the input building boundary. Based on the observation that a living room is present in almost all floor plans, our learning network begins by positioning the living room and then iteratively generates the other rooms. Walls are then determined by an encoder-decoder network and refined into vector representations using dedicated rules. To effectively train our networks, we construct RPLAN, a manually collected, large-scale, densely annotated dataset of floor plans from real residential buildings. Extensive experiments, including formative user studies and comparisons, are conducted to illustrate the feasibility and efficacy of our proposed approach. By comparing the plausibility of different floor plans, we have observed that our method substantially outperforms existing methods, and in many cases our floor plans are comparable to human-created ones.
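
A structural sketch of the two-stage process might look as follows, with `predict_room`, `predict_walls`, and `vectorize_walls` as hypothetical placeholders for the paper's trained networks and rule-based refinement.

```python
def generate_floor_plan(boundary_mask, predict_room, predict_walls,
                        vectorize_walls, max_rooms=8):
    """Hypothetical sketch of the two-stage loop described above."""
    rooms = []
    # Stage 1: locate rooms one by one (living room first), each prediction
    # conditioned on the boundary and the rooms already placed.
    for _ in range(max_rooms):
        room = predict_room(boundary_mask, rooms)   # e.g. (type, center, size)
        if room is None:                            # network signals "done"
            break
        rooms.append(room)
    # Stage 2: rasterized wall prediction, then rule-based vectorization.
    wall_mask = predict_walls(boundary_mask, rooms)
    return rooms, vectorize_walls(wall_mask)
```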

Dynamic hair modeling from monocular videos using deep neural networks

We introduce a deep-learning-based framework for modeling dynamic hair from monocular videos, which can be captured by a commodity video camera or downloaded from the Internet. The framework consists mainly of two neural networks: HairSpatNet, for inferring 3D spatial features of hair geometry from 2D image features, and HairTempNet, for extracting temporal features of hair motion from video frames. The spatial features are represented as 3D occupancy fields depicting the hair volume shapes and 3D orientation fields indicating the hair growing directions. The temporal features are represented as bidirectional 3D warping fields, describing the forward and backward motions of hair strands across adjacent frames. Both HairSpatNet and HairTempNet are trained with synthetic hair data. The spatial and temporal features predicted by the networks are subsequently used for growing hair strands with both spatial and temporal consistency. Experiments demonstrate that our method is capable of constructing plausible dynamic hair models that closely resemble the input video, and compares favorably to previous single-view techniques.
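
As a toy illustration of applying a volumetric warping field to strands, the sketch below trilinearly samples a displacement field at each strand vertex and advects it; the bidirectional fields predicted by HairTempNet and the consistency-aware strand growing are of course far more involved, and the grid, coordinates, and field values here are made up for the example.

```python
import numpy as np

def trilinear(field, p):
    """Trilinearly sample a (D, H, W, 3) displacement field at a point given
    in voxel coordinates (ordered to match the field axes)."""
    p = np.clip(p, 0, np.array(field.shape[:3]) - 1.001)
    i0 = np.floor(p).astype(int)
    f = p - i0
    out = np.zeros(3)
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                w = ((dz * f[0] + (1 - dz) * (1 - f[0])) *
                     (dy * f[1] + (1 - dy) * (1 - f[1])) *
                     (dx * f[2] + (1 - dx) * (1 - f[2])))
                out += w * field[i0[0] + dz, i0[1] + dy, i0[2] + dx]
    return out

def advect_strand(strand, warp):
    """Move each strand vertex by the locally interpolated warp vector."""
    return np.array([p + trilinear(warp, p) for p in strand])

warp = np.zeros((16, 16, 16, 3)); warp[..., 0] = 0.5   # uniform drift along axis 0
strand = np.array([[2.0, 8.0, 8.0], [3.0, 8.0, 8.0]])
print(advect_strand(strand, warp))                     # each vertex shifted by 0.5
```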

SESSION: Data-driven dynamics

Real2Sim: visco-elastic parameter estimation from dynamic motion

This paper presents a method for optimizing visco-elastic material parameters of a finite element simulation to best approximate the dynamic motion of real-world soft objects. We compute the gradient with respect to the material parameters of a least-squares error objective function using either direct sensitivity analysis or an adjoint state method. We then optimize the material parameters such that the simulated motion matches real-world observations as closely as possible. In this way, we can directly build a useful simulation model that captures the visco-elastic behaviour of the specimen of interest. We demonstrate the effectiveness of our method on various examples such as numerical coarsening, custom-designed objective functions, and of course real-world flexible elastic objects made of foam or 3D printed lattice structures, including a demo application in soft robotics.
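
The overall fitting loop can be sketched in a few lines. The example below stands in a damped oscillator for the finite element simulation and uses finite-difference gradients in place of the paper's sensitivity or adjoint gradients; the parameter values, bounds, and L-BFGS-B optimizer are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(params, n_steps=200, dt=0.01):
    """Toy stand-in for the FEM simulation: a damped oscillator whose
    stiffness k and damping c play the role of visco-elastic parameters."""
    k, c = params
    x, v, traj = 1.0, 0.0, []
    for _ in range(n_steps):
        v += dt * (-k * x - c * v)
        x += dt * v
        traj.append(x)
    return np.array(traj)

def loss(params, observed):
    # Least-squares mismatch between simulated and observed motion.
    return np.sum((simulate(params) - observed) ** 2)

def fd_grad(params, observed, h=1e-5):
    # Finite-difference gradient; the paper instead uses direct sensitivity
    # analysis or an adjoint method, which scale to many more parameters.
    g = np.zeros_like(params)
    for i in range(len(params)):
        e = np.zeros_like(params); e[i] = h
        g[i] = (loss(params + e, observed) - loss(params - e, observed)) / (2 * h)
    return g

observed = simulate(np.array([4.0, 0.3]))        # synthetic "captured" motion
res = minimize(loss, x0=np.array([1.0, 1.0]), args=(observed,),
               jac=fd_grad, method="L-BFGS-B", bounds=[(0.01, 50.0)] * 2)
print(res.x)                                     # typically close to [4.0, 0.3]
```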

Video-guided real-to-virtual parameter transfer for viscous fluids

In physically-based simulation, it is essential to choose appropriate material parameters to generate desirable simulation results. In many cases, however, choosing appropriate material parameters is challenging and often requires tedious trial-and-error tuning. In this paper, we propose a real-to-virtual parameter transfer framework that identifies material parameters of viscous fluids from example video data captured from real-world phenomena. Our method first extracts positional data of the fluid and then uses the extracted data as a reference to identify the viscosity parameters, combining forward viscous fluid simulations and parameter optimization in an iterative process. We evaluate our method on a range of synthetic and real-world examples, and demonstrate that it can identify the hidden physical variables and viscosity parameters. The recovered physical variables and parameters can then be used in novel scenarios to generate viscous fluid behaviors visually consistent with the example videos.

Fluid carving: intelligent resizing for fluid simulation data

We present a method for intelligently resizing fluid simulation data using seam carving methods. While advances in post-processing techniques have allowed artists greater control over content late in the production process, this technology has largely remained confined to image processing. Our fluid carving system allows fluid simulation post-processing by performing content-aware non-uniform scaling on baked-out fluid simulation data. Specifically, we extend video seam carving techniques to 4-dimensional animated fluid volume data with a graph cut energy function based on mean curvature and kinetic energy. To reduce the complexity of performing graph cuts on 4D data, we provide a new graph construction formulation that greatly reduces the run-time and memory consumption, which are otherwise prohibitively expensive. We demonstrate that our system is useful for post-production fluid simulation changes and editable fluid FX libraries.
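
The 2D image case conveys the core seam-carving idea that the paper lifts to 4D fluid volumes (with a graph-cut energy rather than the dynamic program below, and with mean-curvature and kinetic-energy terms rather than the simple gradient-magnitude energy used here). This is a minimal sketch, not the paper's formulation.

```python
import numpy as np

def carve_vertical_seam(energy):
    """Find the minimum-energy vertical seam in a 2D energy map via
    dynamic programming (the classic image-resizing case)."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            j = lo + int(np.argmin(cost[y - 1, lo:hi]))
            back[y, x] = j
            cost[y, x] += cost[y - 1, j]
    # Trace the cheapest seam back from the bottom row.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        seam[y] = back[y + 1, seam[y + 1]]
    return seam

def remove_seam(img, seam):
    """Delete one pixel per row along the seam, shrinking the width by one."""
    return np.array([np.delete(img[y], seam[y]) for y in range(img.shape[0])])

img = np.random.rand(6, 8)
energy = np.abs(np.gradient(img)[1])    # simple gradient-magnitude energy
print(remove_seam(img, carve_vertical_seam(energy)).shape)   # (6, 7)
```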

ScalarFlow: a large-scale volumetric data set of real-world scalar transport flows for computer animation and machine learning

In this paper, we present ScalarFlow, a first large-scale data set of reconstructions of real-world smoke plumes. In addition, we propose a framework for accurate physics-based reconstructions from a small number of video streams. Central components of our framework are a novel estimation of unseen inflow regions and an efficient optimization scheme constrained by a simulation to capture real-world fluids. Our data set includes a large number of complex, natural, buoyancy-driven flows that transition to turbulence and contain observable scalar transport processes. As such, the ScalarFlow data set is tailored towards computer graphics, vision, and learning applications. The published data set contains volumetric reconstructions of velocity and density as well as the corresponding input image sequences with calibration data, code, and instructions on how to reproduce the commodity-hardware capture setup. We further demonstrate one of the many potential applications: a first perceptual evaluation study, which reveals that recreating even part of the natural complexity contained in the captured data would require large simulation resolutions for regular solvers.

SESSION: Geometry off the deep end

RPM-Net: recurrent prediction of motion and parts from point cloud

We introduce RPM-Net, a deep learning-based approach which simultaneously infers movable parts and hallucinates their motions from a single, un-segmented, and possibly partial, 3D point cloud shape. RPM-Net is a novel Recurrent Neural Network (RNN), composed of an encoder-decoder pair with interleaved Long Short-Term Memory (LSTM) components, which together predict a temporal sequence of pointwise displacements for the input point cloud. At the same time, the displacements allow the network to learn movable parts, resulting in a motion-based shape segmentation. Recursive applications of RPM-Net on the obtained parts can predict finer-level part motions, resulting in a hierarchical object segmentation. Furthermore, we develop a separate network to estimate part mobilities, e.g., per-part motion parameters, from the segmented motion sequence. Both networks learn deep predictive models from a training set that exemplifies a variety of mobilities for diverse objects. We show results of simultaneous motion and part predictions from synthetic and real scans of 3D objects exhibiting a variety of part mobilities, possibly involving multiple movable parts.
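
A heavily simplified stand-in for this kind of architecture is sketched below: a PointNet-style global feature feeds an LSTM, whose per-step output is decoded into pointwise displacements. Segmentation, recursive refinement, and mobility estimation from the paper are omitted, and all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class DisplacementRNN(nn.Module):
    """Much-simplified sketch in the spirit of a recurrent displacement
    predictor: encode the point cloud, roll an LSTM over time, and decode a
    per-point displacement at each step."""
    def __init__(self, feat=128, steps=8):
        super().__init__()
        self.steps = steps
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, feat))
        self.lstm = nn.LSTM(feat, feat, batch_first=True)
        self.decode = nn.Sequential(nn.Linear(feat + 3, 64), nn.ReLU(),
                                    nn.Linear(64, 3))

    def forward(self, pts):                          # pts: (B, N, 3)
        f = self.point_mlp(pts).max(dim=1).values    # PointNet-style global feature
        seq, _ = self.lstm(f[:, None].repeat(1, self.steps, 1))   # (B, T, feat)
        frames, cur = [], pts
        for t in range(self.steps):
            g = seq[:, t:t + 1].expand(-1, pts.shape[1], -1)      # broadcast to points
            cur = cur + self.decode(torch.cat([g, cur], dim=-1))  # displace points
            frames.append(cur)
        return torch.stack(frames, dim=1)            # (B, T, N, 3)

out = DisplacementRNN()(torch.rand(2, 1024, 3))
print(out.shape)                                     # torch.Size([2, 8, 1024, 3])
```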

Learning adaptive hierarchical cuboid abstractions of 3D shape collections

Abstracting man-made 3D objects as assemblies of primitives, i.e., shape abstraction, is an important task in 3D shape understanding and analysis. In this paper, we propose an unsupervised learning method for automatically constructing compact and expressive shape abstractions of 3D objects in a class. The key idea of our approach is an adaptive hierarchical cuboid representation that abstracts a 3D shape with a set of parametric cuboids adaptively selected from a hierarchical, multi-level cuboid representation shared by all objects in the class. The adaptive hierarchical cuboid abstraction offers a compact representation for modeling varying shape structures and their coherence at different abstraction levels. Based on this representation, we design a convolutional neural network (CNN) for predicting the parameters of each cuboid in the hierarchical cuboid representation and the adaptive selection mask of cuboids for each input 3D shape. To train the CNN from an unlabeled 3D shape collection, we propose a set of novel loss functions that maximize the approximation quality and compactness of the adaptive hierarchical cuboid abstraction, and present a progressive training scheme to refine the cuboid parameters and the cuboid selection mask effectively.

We evaluate the effectiveness of our approach on various 3D shape collections and demonstrate its advantages over the existing cuboid abstraction approach. We also illustrate applications of the resulting adaptive cuboid representations in various shape analysis and manipulation tasks.
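
To make the loss design concrete, the sketch below scores a candidate abstraction with a toy version of the two competing terms: approximation quality (distance from surface samples to the nearest selected cuboid, via a standard box SDF) and compactness (a small charge per selected cuboid). Cuboid rotations, the hierarchy, and the network that predicts parameters and masks are omitted; the weights and example data are made up.

```python
import numpy as np

def box_sdf(p, center, half_size):
    """Signed distance from p to an axis-aligned cuboid (rotation omitted
    for brevity; the paper's cuboids also carry an orientation)."""
    q = np.abs(p - center) - half_size
    return np.linalg.norm(np.maximum(q, 0.0)) + min(q.max(), 0.0)

def abstraction_loss(points, cuboids, selection_mask, compactness=0.01):
    """Toy objective: mean distance from each surface sample to the closest
    *selected* cuboid, plus a charge per selected cuboid."""
    selected = [c for c, keep in zip(cuboids, selection_mask) if keep]
    if not selected:
        return float("inf")
    d = [min(abs(box_sdf(p, c, h)) for c, h in selected) for p in points]
    return float(np.mean(d)) + compactness * sum(selection_mask)

# Surface samples of a unit cube: snap one random coordinate to a face.
pts = np.random.rand(256, 3)
idx = np.random.randint(3, size=256)
pts[np.arange(256), idx] = np.round(pts[np.arange(256), idx])

cuboids = [(np.array([0.5, 0.5, 0.5]), np.array([0.5, 0.5, 0.5])),      # whole cube
           (np.array([0.5, 0.5, 0.5]), np.array([0.25, 0.25, 0.25]))]   # too small
print(abstraction_loss(pts, cuboids, [1, 0]))   # ~0.01: the first cuboid fits well
```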

StructureNet: hierarchical graph networks for 3D shape generation

The ability to generate novel, diverse, and realistic 3D shapes along with associated part semantics and structure is central to many applications requiring high-quality 3D assets or large volumes of realistic training data. A key challenge towards this goal is how to accommodate diverse shape variations, including both continuous deformations of parts as well as structural or discrete alterations which add to, remove from, or modify the shape constituents and compositional structure. Such object structure can typically be organized into a hierarchy of constituent object parts and relationships, represented as a hierarchy of n-ary graphs. We introduce StructureNet, a hierarchical graph network which (i) can directly encode shapes represented as such n-ary graphs, (ii) can be robustly trained on large and complex shape families, and (iii) can be used to generate a great diversity of realistic structured shape geometries. Technically, we accomplish this by drawing inspiration from recent advances in graph neural networks to propose an order-invariant encoding of n-ary graphs, considering jointly both part geometry and inter-part relations during network training. We extensively evaluate the quality of the learned latent spaces for various shape families and show significant advantages over baseline and competing methods. The learned latent spaces enable several structure-aware geometry processing applications, including shape generation and interpolation, shape editing, and shape structure discovery directly from unannotated images, point clouds, or partial scans.
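
The order-invariant idea can be sketched as a recursive encoder that sums (rather than concatenates) child codes, so the result does not depend on child ordering. The sketch below omits inter-part relation edges, the decoder, and the variational training that StructureNet uses; the dimensions and the chair example are arbitrary.

```python
import torch
import torch.nn as nn

class Node:
    """A part hierarchy node: a geometry feature plus any number of children."""
    def __init__(self, geom, children=()):
        self.geom, self.children = geom, list(children)

class HierEncoder(nn.Module):
    """Sketch of an order-invariant recursive encoder: a node's code mixes its
    own geometry feature with a summed (permutation-invariant) encoding of its
    children's codes."""
    def __init__(self, geom_dim=32, code_dim=64):
        super().__init__()
        self.leaf = nn.Linear(geom_dim, code_dim)
        self.child = nn.Linear(code_dim, code_dim)
        self.merge = nn.Sequential(nn.Linear(geom_dim + code_dim, code_dim),
                                   nn.Tanh())

    def forward(self, node):
        if not node.children:
            return torch.tanh(self.leaf(node.geom))
        # Sum over children: invariant to their order and count.
        kids = torch.stack([self.child(self.forward(c)) for c in node.children])
        return self.merge(torch.cat([node.geom, kids.sum(0)], dim=-1))

enc = HierEncoder()
chair = Node(torch.rand(32), [Node(torch.rand(32)) for _ in range(4)])  # root + 4 parts
print(enc(chair).shape)                                                 # torch.Size([64])
```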

SDM-NET: deep generative network for structured deformable mesh

We introduce SDM-NET, a deep generative neural network which produces structured deformable meshes. Specifically, the network is trained to generate a spatial arrangement of closed, deformable mesh parts which respects the global part structure of a shape collection, e.g., chairs, airplanes, etc. Our key observation is that while the overall structure of a 3D shape can be complex, the shape can usually be decomposed into a set of parts, each homeomorphic to a box, and the finer-scale geometry of the part can be recovered by deforming the box. The architecture of SDM-NET is that of a two-level variational autoencoder (VAE). At the part level, a PartVAE learns a deformable model of part geometries. At the structural level, we train a Structured Parts VAE (SP-VAE), which jointly learns the part structure of a shape collection and the part geometries, ensuring coherence between global shape structure and surface details. Through extensive experiments and comparisons with state-of-the-art deep generative models of shapes, we demonstrate the superiority of SDM-NET in generating meshes with high visual quality, flexible topology, and meaningful structures, benefiting shape interpolation and other subsequent modeling tasks.
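
The VAE building block that such a two-level design stacks can be sketched minimally as below; the real PartVAE and SP-VAE operate on deformable mesh parts and structured part layouts rather than the generic feature vectors used here, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal VAE with the reparameterization trick, standing in for the
    kind of module used at both the part and structure levels."""
    def __init__(self, x_dim=256, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, z_dim), nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    rec = F.mse_loss(recon, x, reduction="sum")                   # reconstruction
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) # KL to prior
    return rec + kld

x = torch.rand(8, 256)                    # stand-in per-part feature vectors
recon, mu, logvar = TinyVAE()(x)
print(vae_loss(x, recon, mu, logvar).item())
```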