ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH 2015, Volume 34 Issue 4, August 2015

SESSION: Computational illumination

Homogeneous codes for energy-efficient illumination and imaging

Programmable coding of light between a source and a sensor has led to several important results in computational illumination, imaging and display. Little is known, however, about how to utilize energy most effectively, especially for applications in live imaging. In this paper, we derive a novel framework to maximize energy efficiency by "homogeneous matrix factorization" that respects the physical constraints of many coding mechanisms (DMDs/LCDs, lasers, etc.). We demonstrate energy-efficient imaging using two prototypes based on DMD and laser illumination. For our DMD-based prototype, we use fast local optimization to derive codes that yield brighter images with fewer artifacts in many transport probing tasks. Our second prototype uses a novel combination of a low-power laser projector and a rolling shutter camera. We use this prototype to demonstrate never-seen-before capabilities such as (1) capturing live structured-light video of very bright scenes---even a light bulb that has been turned on; (2) capturing epipolar-only and indirect-only live video with optimal energy efficiency; (3) using a low-power projector to reconstruct 3D objects in challenging conditions such as strong indirect light, strong ambient light, and smoke; and (4) recording live video from a projector's---rather than the camera's---point of view.

Doppler time-of-flight imaging

Over the last few years, depth cameras have become increasingly popular for a range of applications, including human-computer interaction and gaming, augmented reality, machine vision, and medical imaging. Many of the commercially-available devices use the time-of-flight principle, where active illumination is temporally coded and analyzed in the camera to estimate a per-pixel depth map of the scene. In this paper, we propose a fundamentally new imaging modality for all time-of-flight (ToF) cameras: per-pixel radial velocity measurement. The proposed technique exploits the Doppler effect of objects in motion, which shifts the temporal illumination frequency before it reaches the camera. Using carefully coded illumination and modulation frequencies of the ToF camera, object velocities directly map to measured pixel intensities. We show that a slight modification of our imaging system allows for color, depth, and velocity information to be captured simultaneously. Combining the optical flow computed on the RGB frames with the measured metric radial velocity allows us to further estimate the full 3D metric velocity field of the scene. The proposed technique has applications in many computer graphics and vision problems, for example motion tracking, segmentation, recognition, and motion deblurring.
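
To make the scale of the effect concrete, the sketch below (Python, illustrative only; the modulation frequency and the co-located source/sensor assumption are mine, not taken from the paper) estimates the Doppler shift of a ToF illumination signal for a few radial velocities.

```python
# Illustrative back-of-the-envelope sketch (not the authors' code): the Doppler
# effect shifts the temporal modulation frequency of ToF illumination reflected
# off a moving object, and Doppler ToF turns that tiny shift into pixel intensity.
C = 3.0e8  # speed of light [m/s]

def doppler_shift(radial_velocity_mps, modulation_freq_hz):
    """Approximate shift for a co-located source/sensor: the round trip
    doubles the effect, so delta_f ~= 2 * (v / c) * f_mod."""
    return 2.0 * radial_velocity_mps / C * modulation_freq_hz

if __name__ == "__main__":
    f_mod = 30e6                      # 30 MHz modulation frequency (assumed)
    for v in (0.1, 1.0, 10.0):        # radial speeds in m/s toward the camera
        df = doppler_shift(v, f_mod)
        print(f"v = {v:5.1f} m/s  ->  Doppler shift ~ {df:8.3f} Hz")
    # The shift is a fraction of a hertz to a few hertz out of tens of MHz;
    # carefully chosen illumination and sensor modulation frequencies are what
    # make such a tiny shift measurable as a per-pixel intensity.
```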

Micron-scale light transport decomposition using interferometry

We present a computational imaging system, inspired by the optical coherence tomography (OCT) framework, that uses interferometry to produce decompositions of light transport in small scenes or volumes. The system decomposes transport according to various attributes of the paths that photons travel through the scene, including where on the source the paths originate, their pathlengths from source to camera through the scene, their wavelength, and their polarization. Since it uses interference, the system can achieve high pathlength resolutions, with the ability to distinguish paths whose lengths differ by as little as ten microns. We describe how to construct and optimize an optical assembly for this technique, and we build a prototype to measure and visualize three-dimensional shape, direct and indirect reflection components, and properties of scattering, refractive/dispersive, and birefringent materials.

SESSION: Geometry field trip

Integrable PolyVector fields

We present a framework for designing curl-free tangent vector fields on discrete surfaces. Such vector fields are gradients of locally-defined scalar functions, and this property is beneficial for creating surface parameterizations, since the gradients of the parameterization coordinate functions are then exactly aligned with the designed fields. We introduce a novel definition for discrete curl between unordered sets of vectors (PolyVectors), and devise a curl-eliminating continuous optimization that is independent of the matchings between them. Our algorithm naturally places the singularities required to satisfy the user-provided alignment constraints, and our fields are the gradients of an inversion-free parameterization by design.

Stripe patterns on surfaces

Stripe patterns are ubiquitous in nature, describing macroscopic phenomena such as stripes on plants and animals, down to material impurities on the atomic scale. We propose a method for synthesizing stripe patterns on triangulated surfaces, where singularities are automatically inserted in order to achieve user-specified orientation and line spacing. Patterns are characterized as global minimizers of a convex-quadratic energy which is well-defined in the smooth setting. Computation amounts to finding the principal eigenvector of a symmetric positive-definite matrix with the same sparsity as the standard graph Laplacian. The resulting patterns are globally continuous, and can be applied to a variety of tasks in design and texture synthesis.
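
As a rough illustration of the computational core described above, the following Python sketch extracts the smallest-eigenvalue eigenvector of a sparse symmetric positive-definite matrix; the matrix used here is a toy 1D chain Laplacian, not the paper's actual energy matrix.

```python
# Minimal sketch: the pattern is obtained as the eigenvector associated with the
# smallest eigenvalue of a sparse SPD matrix with graph-Laplacian sparsity.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Stand-in SPD matrix (a 1D chain Laplacian with Dirichlet-style boundary rows);
# NOT the paper's energy matrix, which is assembled on a triangulated surface.
A = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csc")

# Principal eigenvector = eigenvector of the smallest eigenvalue; shift-invert
# around zero makes the small end of the spectrum converge quickly.
vals, vecs = spla.eigsh(A, k=1, sigma=0.0, which="LM")
psi = vecs[:, 0]
print("smallest eigenvalue:", vals[0], " |psi| =", np.linalg.norm(psi))

# In the paper the unknown interleaves two components (a complex value) per
# vertex; the per-vertex argument of that value gives the stripe phase from
# which texture coordinates and singularities are recovered.
```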

Frame field generation through metric customization

This paper presents a new technique for frame field generation. As generic frame fields (with arbitrary anisotropy, orientation, and sizing) can be regarded as cross fields in a specific Riemannian metric, we tackle frame field design by first computing a discrete metric on the input surface that is compatible with a sparse or dense set of input constraints. The final frame field is then found by computing an optimal cross field in this customized metric. We propose frame field design constraints on alignment, size, and skewness at arbitrary locations on the mesh as well as along feature curves, offering much improved flexibility over previous approaches. We demonstrate the advantages of our frame field generation through the automatic quadrangulation of man-made and organic shapes with controllable anisotropy, robust handling of narrow surface strips, and precise feature alignment. We also extend our technique to the design of n-vector fields.

SESSION: Modeling, controlling & suturing humans

Computational bodybuilding: anatomically-based modeling of human bodies

We propose a method to create a wide range of human body shapes from a single input 3D anatomy template. Our approach is inspired by biological processes responsible for human body growth. In particular, we simulate growth of skeletal muscles and subcutaneous fat using physics-based models which combine growth and elasticity. Together with a tool to edit proportions of the bones, our method allows us to achieve a desired shape of the human body by directly controlling hypertrophy (or atrophy) of every muscle and enlargement of fat tissues. We achieve near-interactive run times by utilizing a special quasi-statics solver (Projective Dynamics) and by crafting a volumetric discretization which results in accurate deformations without an excessive number of degrees of freedom. Our system is intuitive to use and the resulting human body models are ready for simulation using existing physics-based animation methods, because we deform not only the surface, but also the entire volumetric model.

Biomechanical simulation and control of hands and tendinous systems

The tendons of the hand and other biomechanical systems form a complex network of sheaths, pulleys, and branches. By modeling these anatomical structures, we obtain realistic simulations of coordination and dynamics that were previously not possible. First, we introduce an Eulerian-on-Lagrangian discretization of tendon strands, with a new selective quasistatic formulation that eliminates unnecessary degrees of freedom in the longitudinal direction while maintaining the dynamic behavior in transverse directions. This formulation also allows us to take larger time steps. Second, we introduce two control methods for biomechanical systems: a general-purpose learning-based approach that requires no prior knowledge of the system, and an approach that uses data extracted from the simulator. We use various examples to compare the performance of these controllers.

GRIDiron: an interactive authoring and cognitive training foundation for reconstructive plastic surgery procedures

We present an interactive simulation framework for authoring surgical procedures of soft tissue manipulation using physics-based simulation to animate the flesh. This interactive authoring tool can be used by clinical educators to craft three-dimensional illustrations of the intricate maneuvers involved in craniofacial repairs, in contrast to two-dimensional sketches and still photographs which are the medium used to describe these procedures in the traditional surgical curriculum. Our virtual environment also allows surgeons-in-training to develop cognitive skills for craniofacial surgery by experimenting with different approaches to reconstructive challenges, adapting stock techniques to flesh regions with nonstandard shape, and reaching preliminary predictions about the feasibility of a given repair plan. We use a Cartesian grid-based embedded discretization of nonlinear elasticity to maximize regularity, and expose opportunities for aggressive multithreading and SIMD accelerations. Using a grid-based approach facilitates performance and scalability, but constrains our ability to capture the topology of thin surgical incisions. We circumvent this restriction by hybridizing the grid-based discretization with an explicit hexahedral mesh representation in regions where the embedding mesh necessitates overlap or nonmanifold connectivity. Finally, we detail how the front-end of our system can run on lightweight clients, while the core simulation capability can be hosted on a dedicated server and delivered as a network service.

SESSION: Face reality

Detailed spatio-temporal reconstruction of eyelids

In recent years we have seen numerous improvements in 3D scanning and tracking of human faces, greatly advancing the creation of digital doubles for film and video games. However, despite the high-resolution quality of the reconstruction approaches available, current methods are unable to capture one of the most important regions of the face - the eye region. In this work we present the first method for detailed spatio-temporal reconstruction of eyelids. Tracking and reconstructing eyelids is extremely challenging, as this region exhibits very complex and unique skin deformation where skin is folded under while opening the eye. Furthermore, eyelids are often only partially visible and obstructed due to self-occlusion and eyelashes. Our approach is to combine a geometric deformation model with image data, leveraging multi-view stereo, optical flow, contour tracking and wrinkle detection from local skin appearance. Our deformation model serves as a prior that enables reconstruction of eyelids even under strong self-occlusions caused by rolling and folding skin as the eye opens and closes. The output is a person-specific, time-varying eyelid reconstruction with anatomically plausible deformations. Our high-resolution detailed eyelids couple naturally with current facial performance capture approaches. As a result, our method can greatly increase the fidelity of facial capture and the creation of digital doubles.

Dynamic 3D avatar creation from hand-held video input

We present a complete pipeline for creating fully rigged, personalized 3D facial avatars from hand-held video. Our system faithfully recovers facial expression dynamics of the user by adapting a blendshape template to an image sequence of recorded expressions using an optimization that integrates feature tracking, optical flow, and shape from shading. Fine-scale details such as wrinkles are captured separately in normal maps and ambient occlusion maps. From this user- and expression-specific data, we learn a regressor for on-the-fly detail synthesis during animation to enhance the perceptual realism of the avatars. Our system demonstrates that the use of appropriate reconstruction priors yields compelling face rigs even with a minimalistic acquisition system and limited user assistance. This facilitates a range of new applications in computer animation and consumer-level online communication based on personalized avatars. We present real-time application demos to validate our method.

Real-time high-fidelity facial performance capture

We present the first real-time high-fidelity facial capture method. The core idea is to enhance a global real-time face tracker, which provides a low-resolution face mesh, with local regressors that add in medium-scale details, such as expression wrinkles. Our main observation is that although wrinkles appear in different scales and at different locations on the face, they are locally very self-similar and their visual appearance is a direct consequence of their local shape. We therefore train local regressors from high-resolution capture data in order to predict the local geometry from local appearance at runtime. We propose an automatic way to detect and align the local patches required to train the regressors and run them efficiently in real-time. Our formulation is particularly designed to enhance the low-resolution global tracker with exactly the missing expression frequencies, avoiding superimposing spatial frequencies in the result. Our system is generic and can be applied to any real-time tracker that uses a global prior, e.g. blend-shapes. Once trained, our online capture approach can be applied to any new user without additional training, resulting in high-fidelity facial performance reconstruction with person-specific wrinkle details from a monocular video camera in real-time.

Facial performance sensing head-mounted display

There are currently no solutions for enabling direct face-to-face interaction between virtual reality (VR) users wearing head-mounted displays (HMDs). The main challenge is that the headset obstructs a significant portion of a user's face, preventing effective facial capture with traditional techniques. To advance virtual reality as a next-generation communication platform, we develop a novel HMD that enables 3D facial performance-driven animation in real-time. Our wearable system uses ultra-thin flexible electronic materials that are mounted on the foam liner of the headset to measure surface strain signals corresponding to upper face expressions. These strain signals are combined with a head-mounted RGB-D camera to enhance the tracking in the mouth region and to account for inaccurate HMD placement. To map the input signals to a 3D face model, we perform a single-instance offline training session for each person. For reusable and accurate online operation, we propose a short calibration step to readjust the Gaussian mixture distribution of the mapping before each use. The resulting animations are visually on par with cutting-edge depth sensor-driven facial performance capture systems and hence, are suitable for social interactions in virtual worlds.

SESSION: Rendering complex appearance

The SGGX microflake distribution

We introduce the Symmetric GGX (SGGX) distribution to represent spatially-varying properties of anisotropic microflake participating media. Our key theoretical insight is to represent a microflake distribution by the projected area of the microflakes. We use the projected area to parameterize the shape of an ellipsoid, from which we recover a distribution of normals. The representation based on the projected area allows for robust linear interpolation and prefiltering, and thanks to its geometric interpretation, we derive closed form expressions for all operations used in the microflake framework. We also incorporate microflakes with diffuse reflectance in our theoretical framework.

This allows us to model the appearance of rough diffuse materials in addition to rough specular materials. Finally, we use the idea of sampling the distribution of visible normals to design a perfect importance sampling technique for our SGGX microflake phase functions. It is analytic, deterministic, simple to implement, and one order of magnitude faster than previous work.
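
For concreteness, the sketch below evaluates the two closed-form SGGX quantities the abstract alludes to: the projected area and the normal distribution, both parameterized by a 3x3 symmetric positive-definite matrix S. The formulas are transcribed from the paper as I recall them and the example matrix is an assumption, so verify against the publication before relying on them.

```python
# Illustrative SGGX evaluation sketch (verify formulas against the paper):
#   projected area   sigma(w) = sqrt(w^T S w)
#   normal distribution D(m)  = 1 / (pi * sqrt(det S) * (m^T S^{-1} m)^2)
import numpy as np

def sggx_projected_area(S, w):
    """Projected area of the microflakes along unit direction w."""
    return np.sqrt(w @ S @ w)

def sggx_ndf(S, m):
    """Distribution of microflake normals evaluated at unit normal m."""
    S_inv = np.linalg.inv(S)
    return 1.0 / (np.pi * np.sqrt(np.linalg.det(S)) * (m @ S_inv @ m) ** 2)

if __name__ == "__main__":
    # A flake distribution stretched along z (an assumed example matrix).
    S = np.diag([0.1, 0.1, 1.0])
    z = np.array([0.0, 0.0, 1.0])
    x = np.array([1.0, 0.0, 0.0])
    print("sigma(z) =", sggx_projected_area(S, z))
    print("sigma(x) =", sggx_projected_area(S, x))
    print("D(z)     =", sggx_ndf(S, z))
```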

Multi-scale modeling and rendering of granular materials

We address the problem of modeling and rendering granular materials---such as large structures made of sand, snow, or sugar---where an aggregate object is composed of many randomly oriented, but discernible grains. These materials pose a particular challenge as the complex scattering properties of individual grains, and their packing arrangement, can have a dramatic effect on the large-scale appearance of the aggregate object. We propose a multi-scale modeling and rendering framework that adapts to the structure of scattered light at different scales. We rely on path tracing the individual grains only at the finest scale, and---by decoupling individual grains from their arrangement---we develop a modular approach for simulating longer-scale light transport. We model light interactions within and across grains as separate processes and leverage this decomposition to derive parameters for classical radiative transport, including standard volumetric path tracing and a diffusion method that can quickly summarize the large scale transport due to many grain interactions. We require only a one-time precomputation per exemplar grain, which we can then reuse for arbitrary aggregate shapes and a continuum of different packing rates and scales of grains. We demonstrate our method on scenes containing mixtures of tens of millions of individual, complex, specular grains that would be otherwise infeasible to render with standard techniques.

SESSION: Wave-particle fluidity

Power particles: an incompressible fluid solver based on power diagrams

This paper introduces a new particle-based approach to incompressible fluid simulation. We depart from previous Lagrangian methods by considering fluid particles no longer purely as material points, but also as volumetric parcels that partition the fluid domain. The fluid motion is described as a time series of well-shaped power diagrams (hence the name power particles), offering evenly spaced particles and accurate pressure computations. As a result, we circumvent the typical excess damping arising from kernel-based evaluations of internal forces or density without having recourse to auxiliary Eulerian grids. The versatility of our solver is demonstrated by the simulation of multiphase flows and free surfaces.

The affine particle-in-cell method

Hybrid Lagrangian/Eulerian simulation is commonplace in computer graphics for fluids and other materials undergoing large deformation. In these methods, particles are used to resolve transport and topological change, while a background Eulerian grid is used for computing mechanical forces and collision responses. Particle-in-Cell (PIC) techniques, particularly the Fluid Implicit Particle (FLIP) variants, have become the norm in computer graphics calculations. While these approaches have proven very powerful, they do suffer from some well-known limitations. The original PIC is stable, but highly dissipative, while FLIP, designed to remove this dissipation, is noisier and, at times, unstable. We present a novel technique designed to retain the stability of the original PIC, without suffering from the noise and instability of FLIP. Our primary observation is that the dissipation in the original PIC results from a loss of information when transferring between grid and particle representations. We prevent this loss of information by augmenting each particle with a locally affine, rather than locally constant, description of the velocity. We show that this not only stably removes the dissipation of PIC, but that it also allows for exact conservation of angular momentum across the transfers between particles and grid.
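
A minimal 2D sketch of the transfer scheme described above follows: each particle carries a velocity and an affine matrix, the particle-to-grid transfer uses the locally affine velocity, and the grid-to-particle transfer reconstructs the affine matrix from weighted node velocities. The tent weights and dictionary-based data layout are illustrative stand-ins, not the paper's implementation.

```python
# Sketch of APIC-style transfers in 2D (illustrative, not production MPM/FLIP code).
import numpy as np

def p2g(particles, nodes, weight):
    """Particle-to-grid momentum transfer using v_p + C_p (x_i - x_p)."""
    mom = np.zeros((len(nodes), 2))
    mass = np.zeros(len(nodes))
    for p in particles:
        for i, xi in enumerate(nodes):
            w = weight(xi - p["x"])
            if w == 0.0:
                continue
            mass[i] += w * p["m"]
            mom[i] += w * p["m"] * (p["v"] + p["C"] @ (xi - p["x"]))
    vel = np.divide(mom, mass[:, None], out=np.zeros_like(mom),
                    where=mass[:, None] > 0)
    return vel, mass

def g2p(particle, nodes, node_vel, weight):
    """Grid-to-particle transfer: recover v_p and the affine matrix C_p."""
    v = np.zeros(2)
    B = np.zeros((2, 2))
    D = np.zeros((2, 2))
    for xi, vi in zip(nodes, node_vel):
        w = weight(xi - particle["x"])
        d = xi - particle["x"]
        v += w * vi
        B += w * np.outer(vi, d)
        D += w * np.outer(d, d)
    particle["v"] = v
    particle["C"] = B @ np.linalg.inv(D)   # C_p = B_p D_p^{-1}
    return particle

if __name__ == "__main__":
    tent = lambda d: max(0.0, 1.0 - abs(d[0])) * max(0.0, 1.0 - abs(d[1]))
    nodes = [np.array(c, float) for c in [(0, 0), (1, 0), (0, 1), (1, 1)]]
    p = {"x": np.array([0.3, 0.4]), "v": np.array([1.0, 0.0]),
         "C": np.zeros((2, 2)), "m": 1.0}
    node_vel, _ = p2g([p], nodes, tent)
    p = g2p(p, nodes, node_vel, tent)
    print("v_p:", p["v"], "\nC_p:\n", p["C"])   # round trip preserves the state
```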

Restoring the missing vorticity in advection-projection fluid solvers

Most visual effects fluid solvers use a time-splitting approach where velocity is first advected in the flow, then projected to be incompressible with pressure. Even if a highly accurate advection scheme is used, the self-advection step typically transfers some kinetic energy from divergence-free modes into divergent modes, which are then projected out by pressure, losing energy noticeably for large time steps. Instead of taking smaller time steps or using significantly more complex time integration, we propose a new scheme called IVOCK (Integrated Vorticity of Convective Kinematics) which cheaply captures much of what is lost in self-advection by identifying it as a violation of the vorticity equation. We measure vorticity on the grid before and after advection, taking into account vortex stretching, and use a cheap multigrid V-cycle approximation to a vector potential whose curl will correct the vorticity error. IVOCK works independently of the advection scheme (we present examples with various semi-Lagrangian methods and FLIP), works independently of how boundary conditions are applied (it just corrects error in advection, leaving pressure etc. to take care of boundaries and other forces), and is independent of other solver parameters (we provide smoke, fire, and water examples). For 10-25% extra computation time per step, much larger steps can be used while producing detailed vortical structures and convincing turbulence that are lost without correction.
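
The following 2D periodic-domain sketch illustrates the correction step described above, with an FFT Poisson solve standing in for the paper's multigrid V-cycle and with vortex stretching omitted (it vanishes in 2D); it is a sketch of the idea, not the authors' code.

```python
# IVOCK-style vorticity correction, 2D periodic sketch (illustrative only).
import numpy as np

def curl_2d(u, v, dx):
    """Scalar vorticity dv/dx - du/dy on a periodic grid (central differences)."""
    dvdx = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dx)
    dudy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
    return dvdx - dudy

def solve_poisson_periodic(rhs, dx):
    """Solve lap(psi) = rhs with periodic boundaries via FFT."""
    n, m = rhs.shape
    kx = np.fft.fftfreq(m, d=dx) * 2 * np.pi
    ky = np.fft.fftfreq(n, d=dx) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    denom = -(KX ** 2 + KY ** 2)
    denom[0, 0] = 1.0                    # avoid division by zero for the mean mode
    psi_hat = np.fft.fft2(rhs) / denom
    psi_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(psi_hat))

def ivock_correct(u_adv, v_adv, omega_before, dx):
    """Add back the vorticity that self-advection lost (no stretching in 2D)."""
    omega_err = omega_before - curl_2d(u_adv, v_adv, dx)
    psi = solve_poisson_periodic(-omega_err, dx)
    dpsidy = (np.roll(psi, -1, axis=0) - np.roll(psi, 1, axis=0)) / (2 * dx)
    dpsidx = (np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / (2 * dx)
    return u_adv + dpsidy, v_adv - dpsidx

if __name__ == "__main__":
    n, dx = 64, 1.0 / 64
    y, x = np.mgrid[0:n, 0:n] * dx
    u = -0.1 * np.sin(2 * np.pi * y)          # a simple periodic shear flow
    v = np.zeros_like(u)
    omega0 = curl_2d(u, v, dx)
    u_bad, v_bad = 0.9 * u, 0.9 * v           # pretend advection damped the flow
    u_fix, v_fix = ivock_correct(u_bad, v_bad, omega0, dx)
    print("max vorticity error before/after:",
          np.abs(curl_2d(u_bad, v_bad, dx) - omega0).max(),
          np.abs(curl_2d(u_fix, v_fix, dx) - omega0).max())
```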

A stream function solver for liquid simulations

This paper presents a liquid simulation technique that enforces the incompressibility condition using a stream function solve instead of a pressure projection. Previous methods have used stream function techniques for the simulation of detailed single-phase flows, but a formulation for liquid simulation has proved elusive in part due to the free surface boundary conditions. In this paper, we introduce a stream function approach to liquid simulations with novel boundary conditions for free surfaces, solid obstacles, and solid-fluid coupling.

Although our approach increases the dimension of the linear system necessary to enforce incompressibility, it provides interesting and surprising benefits. First, the resulting flow is guaranteed to be divergence-free regardless of the accuracy of the solve. Second, our free-surface boundary conditions guarantee divergence-free motion even in the un-simulated air phase, which enables two-phase flow simulation by only computing a single phase. We implemented this method using a variant of FLIP simulation which only samples particles within a narrow band of the liquid surface, and we illustrate the effectiveness of our method for detailed two-phase flow simulations with complex boundaries, detailed bubble interactions, and two-way solid-fluid coupling.

SESSION: Simsquishal geometry

Dihedral angle-based maps of tetrahedral meshes

We present a geometric representation of a tetrahedral mesh that is solely based on dihedral angles. We first show that the shape of a tetrahedral mesh is completely defined by its dihedral angles. This proof leads to a set of angular constraints that must be satisfied for an immersion to exist in R^3. This formulation lets us easily specify conditions to avoid inverted tetrahedra and multiply-covered vertices, thus leading to locally injective maps. We then present a constrained optimization method that modifies input angles when they do not satisfy constraints. Additionally, we develop a fast spectral reconstruction method to robustly recover positions from dihedral angles. We demonstrate the applicability of our representation with examples of volume parameterization, shape interpolation, mesh optimization, connectivity shapes, and mesh compression.

Conformal mesh deformations with Möbius transformations

We establish a framework to design triangular and circular polygonal meshes by using face-based compatible Möbius transformations. Embracing the viewpoint of surfaces from circles, we characterize discrete conformality for such meshes, in which the invariants are circles, cross-ratios, and mutual intersection angles. Such transformations are important in practice for editing meshes without distortions or loss of details. In addition, they are of substantial theoretical interest in discrete differential geometry. Our framework allows for handle-based deformations, and interpolation between given meshes with controlled conformal error.

Close-to-conformal deformations of volumes

Conformal deformations are infinitesimal scale-rotations, which can be parameterized by quaternions. The condition that such a quaternion field gives rise to a conformal deformation is nonlinear and in any case only admits Möbius transformations as solutions. We propose a particular decoupling of scaling and rotation which allows us to find near to conformal deformations as minimizers of a quadratic, convex Dirichlet energy. Applied to tetrahedral meshes we find deformations with low quasiconformal distortion as the principal eigenvector of a (quaternionic) Laplace matrix. The resulting algorithms can be implemented with highly optimized standard linear algebra libraries and yield deformations comparable in quality to far more expensive approaches.

Linear subspace design for real-time shape deformation

We propose a method to design linear deformation subspaces, unifying linear blend skinning and generalized barycentric coordinates. Deformation subspaces cut down the time complexity of variational shape deformation methods and physics-based animation (reduced-order physics). Our subspaces feature many desirable properties: interpolation, smoothness, shape-awareness, locality, and both constant and linear precision. We achieve these by minimizing a quadratic deformation energy, built via a discrete Laplacian inducing linear precision on the domain boundary. Our main advantage is speed: subspace bases are solutions to a sparse linear system, computed interactively even for generously tessellated domains. Users may seamlessly switch between applying transformations at handles and editing the subspace by adding, removing or relocating control handles. The combination of fast computation and good properties means that designing the right subspace is now just as creative as manipulating handles. This paradigm shift in handle-based deformation opens new opportunities to explore the space of shape deformations.
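
The sketch below shows the "subspace weights from one sparse linear solve" structure in its simplest form, using a toy bi-Laplacian energy on a 1D chain with value constraints at the handles; the paper's actual energy additionally enforces linear precision on the domain boundary, which this toy omits.

```python
# Illustrative handle-weight computation by constrained quadratic minimization.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def handle_weights(A, handles):
    """Minimize w^T A w per handle subject to w[handles] = identity rows."""
    n = A.shape[0]
    free = np.setdiff1d(np.arange(n), handles)
    A = A.tocsc()
    Aff = A[free][:, free].tocsc()
    Afh = A[free][:, handles]
    W = np.zeros((n, len(handles)))
    W[handles, np.arange(len(handles))] = 1.0
    solve = spla.factorized(Aff)            # one factorization, reused per handle
    rhs = -(Afh @ W[handles])
    for k in range(len(handles)):
        W[free, k] = solve(rhs[:, k])
    return W

if __name__ == "__main__":
    n = 100
    main = np.r_[1.0, 2.0 * np.ones(n - 2), 1.0]     # graph Laplacian of a path
    L = sp.diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1])
    A = (L @ L).tocsc()                               # bi-Laplacian stand-in energy
    W = handle_weights(A, handles=np.array([0, 49, 99]))
    print("partition of unity check:", W[25].sum(), W[75].sum())
```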

SESSION: VR, display & interaction

eyeSelfie: self directed eye alignment using reciprocal eye box imaging

Eye alignment with an optical system is critical in many modern devices, such as those for biometrics, gaze tracking, head-mounted displays, and health. We address alignment in the context of the most difficult challenge: retinal imaging. Alignment in retinal imaging, even when conducted by a physician, is very challenging due to the precise alignment requirements and the lack of direct control over the user's eye gaze. Self-imaging of the retina is nearly impossible.

We frame this problem as a user-interface (UI) challenge. We can create a better UI by controlling the eye box of a projected cue. Our key concept is to exploit the reciprocity "If you see me, I see you" to develop near-eye alignment displays. Two technical aspects are critical: (a) the tightness of the eye box and (b) the comfort of discovering the eye box. We demonstrate that previous pupil-forming display architectures are not adequate to address alignment in depth. We then analyze two ray-based designs to determine efficacious fixation patterns. These ray-based displays and a sequence of user steps allow lateral (x, y) and depth (z) alignment to deal with image centering and focus. We show a highly portable prototype and demonstrate its effectiveness through a user study.

Optimal presentation of imagery with focus cues on multi-plane displays

We present a technique for displaying three-dimensional imagery of general scenes with nearly correct focus cues on multi-plane displays. These displays present an additive combination of images at a discrete set of optical distances, allowing the viewer to focus at different distances in the simulated scene. Our proposed technique extends the capabilities of multi-plane displays to general scenes with occlusions and non-Lambertian effects by using a model of defocus in the eye of the viewer. Requiring no explicit knowledge of the scene geometry, our technique uses an optimization algorithm to compute the images to be displayed on the presentation planes so that the retinal images when accommodating to different distances match the corresponding retinal images of the input scene as closely as possible. We demonstrate the utility of the technique using imagery acquired from both synthetic and real-world scenes, and analyze the system's characteristics including bounds on achievable resolution.

The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues

Over the last few years, virtual reality (VR) has re-emerged as a technology that is now feasible at low cost via inexpensive cellphone components. In particular, advances of high-resolution micro displays, low-latency orientation trackers, and modern GPUs facilitate immersive experiences at low cost. One of the remaining challenges to further improve visual comfort in VR experiences is the vergence-accommodation conflict inherent to all stereoscopic displays. Accurate reproduction of all depth cues is crucial for visual comfort. By combining well-known stereoscopic display principles with emerging factored light field technology, we present the first wearable VR display supporting high image resolution as well as focus cues. A light field is presented to each eye, which provides more natural viewing experiences than conventional near-eye displays. Since the eye box is just slightly larger than the pupil size, rank-1 light field factorizations are sufficient to produce correct or nearly-correct focus cues; no time-multiplexed image display or gaze tracking is required. We analyze lens distortions in 4D light field space and correct them using the afforded high-dimensional image formation. We also demonstrate significant improvements in resolution and retinal blur quality over related near-eye displays. Finally, we analyze diffraction limits of these types of displays.
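
To illustrate the rank-1 factorization mentioned above: with two stacked attenuation layers, each ray's value is the product of the front-layer and back-layer pixels it crosses, so the target light field, flattened to a matrix indexed by those two pixels, is approximated by an outer product. The alternating nonnegative least-squares solver below is a generic stand-in, not necessarily the paper's solver.

```python
# Illustrative rank-1 nonnegative factorization L ~= f g^T for a two-layer display.
import numpy as np

def rank1_factor(L, iters=100, eps=1e-12):
    """Alternating nonnegative least squares for a rank-1 approximation."""
    f = np.ones(L.shape[0])
    g = np.ones(L.shape[1])
    for _ in range(iters):
        f = np.clip(L @ g / (g @ g + eps), 0.0, None)
        g = np.clip(L.T @ f / (f @ f + eps), 0.0, None)
    return f, g

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = np.outer(rng.random(32), rng.random(48))      # a rank-1 target light field
    L = truth + 0.01 * rng.random(truth.shape)            # plus a little noise
    f, g = rank1_factor(L)
    print("relative error:",
          np.linalg.norm(L - np.outer(f, g)) / np.linalg.norm(L))
```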

SESSION: Let's do the time warp

Decomposing time-lapse paintings into layers

The creation of a painting, in the physical world or digitally, is a process that occurs over time. Later strokes cover earlier strokes, and strokes painted at a similar time are likely to be part of the same object. In the final painting, this temporal history is lost, and a static arrangement of color is all that remains; as a result, the rich literature on interacting with image editing history cannot be applied. To enable these interactions, we present a set of techniques to decompose a time-lapse video of a painting (defined generally to include pencils, markers, etc.) into a sequence of translucent "stroke" images. We present translucency-maximizing solutions for recovering physical (Kubelka and Munk layering) or digital (Porter and Duff "over" blending operation) paint parameters from before/after image pairs. We also present a pipeline for processing real-world videos of paintings capable of handling long-term occlusions, such as the painter's hand and its shadow, color shifts, and noise.

Time-lapse mining from internet photos

We introduce an approach for synthesizing time-lapse videos of popular landmarks from large community photo collections. The approach is completely automated and leverages the vast quantity of photos available online. First, we cluster 86 million photos into landmarks and popular viewpoints. Then, we sort the photos by date and warp each photo onto a common viewpoint. Finally, we stabilize the appearance of the sequence to compensate for lighting effects and minimize flicker. Our resulting time-lapses show diverse changes in the world's most popular sites, like glaciers shrinking, skyscrapers being constructed, and waterfalls changing course.

Real-time hyperlapse creation via optimal frame selection

Long videos can be played much faster than real-time by recording only one frame per second or by dropping all but one frame each second, i.e., by creating a timelapse. Unstable hand-held moving videos can be stabilized with a number of recently described methods. Unfortunately, creating a stabilized timelapse, or hyperlapse, cannot be achieved through a simple combination of these two methods. Two hyperlapse methods have been previously demonstrated: one with high computational complexity and one requiring special sensors. We present an algorithm for creating hyperlapse videos that can handle significant high-frequency camera motion and runs in real-time on HD video. Our approach does not require sensor data, thus can be run on videos captured on any camera. We optimally select frames from the input video that best match a desired target speed-up while also resulting in the smoothest possible camera motion. We evaluate our approach using several input videos from a range of cameras and compare these results to existing methods.
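
A compact sketch of optimal frame selection by dynamic programming, in the spirit of the method above, is shown below; the pairwise motion cost is a hypothetical placeholder (the paper estimates it from frame-to-frame alignment), and only the DP structure is meant to be illustrative.

```python
# Illustrative dynamic-programming frame selection for a hyperlapse.
import numpy as np

def pick_frames(n_frames, target_speedup, motion_cost, window=None, lam=1.0):
    """Choose a frame path minimizing motion cost plus a speed-up penalty."""
    window = window or 2 * target_speedup
    INF = float("inf")
    best = np.full(n_frames, INF)
    prev = np.full(n_frames, -1, dtype=int)
    best[0] = 0.0
    for j in range(1, n_frames):
        for i in range(max(0, j - window), j):
            c = motion_cost(i, j) + lam * abs((j - i) - target_speedup)
            if best[i] + c < best[j]:
                best[j] = best[i] + c
                prev[j] = i
    path, j = [], n_frames - 1
    while j >= 0:                      # backtrack from the last frame to the first
        path.append(j)
        j = prev[j]
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shake = rng.random(300)                      # stand-in per-frame "shakiness"
    cost = lambda i, j: shake[i:j].mean()        # hypothetical motion cost
    print(pick_frames(300, target_speedup=8, motion_cost=cost)[:10])
```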

SESSION: Meshing around

Isotopic approximation within a tolerance volume

We introduce in this paper an algorithm that generates from an input tolerance volume a surface triangle mesh guaranteed to be within the tolerance, intersection-free, and topologically correct. A pliant meshing algorithm is used to capture the topology and discover the anisotropy in the input tolerance volume in order to generate a concise output. We first refine a 3D Delaunay triangulation over the tolerance volume while maintaining a piecewise-linear function on this triangulation, until an isosurface of this function matches the topology sought after. We then embed the isosurface into the 3D triangulation via mutual tessellation, and simplify it while preserving the topology. Our approach extends to surfaces with boundaries and to non-manifold surfaces. We demonstrate the versatility and efficacy of our approach on a variety of data sets and tolerance volumes.

Data-driven interactive quadrangulation

We propose an interactive quadrangulation method based on a large collection of patterns that are learned from models manually designed by artists. The patterns are distilled into compact quadrangulation rules and stored in a database. At run-time, the user draws strokes to define patches and desired edge flows, and the system queries the database to extract fitting patterns to tessellate the sketches' interiors. The quadrangulation patterns are general and can be applied to tessellate large regions while controlling the positions of the singularities and the edge flow. We demonstrate the effectiveness of our algorithm through a series of live retopology sessions and an informal user study with three professional artists.

Convolutional Wasserstein distances: efficient optimal transportation on geometric domains

This paper introduces a new class of algorithms for optimization problems involving optimal transportation over geometric domains. Our main contribution is to show that optimal transportation can be made tractable over large domains used in graphics, such as images and triangle meshes, improving performance by orders of magnitude compared to previous work. To this end, we approximate optimal transportation distances using entropic regularization. The resulting objective contains a geodesic distance-based kernel that can be approximated with the heat kernel. This approach leads to simple iterative numerical schemes with linear convergence, in which each iteration only requires Gaussian convolution or the solution of a sparse, pre-factored linear system. We demonstrate the versatility and efficiency of our method on tasks including reflectance interpolation, color transfer, and geometry processing.
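
The numeric core suggested by the abstract can be sketched as Sinkhorn scaling iterations in which every kernel application is a Gaussian blur over the image grid (standing in for the heat kernel); the sketch below is illustrative and omits mesh domains and distance extraction.

```python
# Illustrative entropic-OT scaling iterations with convolutional kernel application.
import numpy as np
from scipy.ndimage import gaussian_filter

def sinkhorn_blur(mu, nu, sigma=4.0, iters=200, eps=1e-30):
    """Return scalings (v, w) such that v * blur(w) ~= mu and w * blur(v) ~= nu;
    the blur width plays the role of the entropic regularization (heat time)."""
    K = lambda a: gaussian_filter(a, sigma)
    v = np.ones_like(mu)
    w = np.ones_like(nu)
    for _ in range(iters):
        v = mu / (K(w) + eps)
        w = nu / (K(v) + eps)
    return v, w

if __name__ == "__main__":
    n = 64
    yy, xx = np.mgrid[0:n, 0:n]
    mu = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 50.0)
    nu = np.exp(-((xx - 48) ** 2 + (yy - 40) ** 2) / 50.0)
    mu /= mu.sum(); nu /= nu.sum()
    v, w = sinkhorn_blur(mu, nu)
    # Marginal check: the recovered coupling should reproduce mu.
    print("marginal error:", np.abs(v * gaussian_filter(w, 4.0) - mu).max())
```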

SESSION: Video processing

Sampling based scene-space video processing

Many compelling video processing effects can be achieved if per-pixel depth information and 3D camera calibrations are known. However, the success of such methods is highly dependent on the accuracy of this "scene-space" information. We present a novel, sampling-based framework for processing video that enables high-quality scene-space video effects in the presence of inevitable errors in depth and camera pose estimation. Instead of trying to improve the explicit 3D scene representation, the key idea of our method is to exploit the high redundancy of approximate scene information that arises due to most scene points being visible multiple times across many frames of video. Based on this observation, we propose a novel pixel gathering and filtering approach. The gathering step is general and collects pixel samples in scene-space, while the filtering step is application-specific and computes a desired output video from the gathered sample sets. Our approach is easily parallelizable and has been implemented on GPU, allowing us to take full advantage of large volumes of video data and facilitating practical runtimes on HD video using a standard desktop computer. Our generic scene-space formulation is able to comprehensively describe a multitude of video processing applications such as denoising, deblurring, super resolution, object removal, computational shutter functions, and other scene-space camera effects. We present results for various casually captured, hand-held, moving, compressed, monocular videos depicting challenging scenes recorded in uncontrolled environments.

audeosynth: music-driven video montage

We introduce music-driven video montage, a media format that offers a pleasant way to browse or summarize video clips collected from various occasions, including gatherings and adventures. In music-driven video montage, the music drives the composition of the video content. According to musical movement and beats, video clips are organized to form a montage that visually reflects the experiential properties of the music. Creating such a montage by hand, however, takes enormous manual work and artistic expertise. In this paper, we develop a framework for automatically generating music-driven video montages. The input is a set of video clips and a piece of background music. By analyzing the music and video content, our system extracts carefully designed temporal features from the input and casts the synthesis problem as an optimization, solving for its parameters through Markov chain Monte Carlo sampling. The output is a video montage whose visual activities are cut and synchronized with the rhythm of the music, rendering a symphony of audio-visual resonance.

High-quality streamable free-viewpoint video

We present the first end-to-end solution to create high-quality free-viewpoint video encoded as a compact data stream. Our system records performances using a dense set of RGB and IR video cameras, generates dynamic textured surfaces, and compresses these to a streamable 3D video format. Four technical advances contribute to high fidelity and robustness: multimodal multi-view stereo fusing RGB, IR, and silhouette information; adaptive meshing guided by automatic detection of perceptually salient areas; mesh tracking to create temporally coherent subsequences; and encoding of tracked textured meshes as an MPEG video stream. Quantitative experiments demonstrate geometric accuracy, texture fidelity, and encoding efficiency. We release several datasets with calibrated inputs and processed results to foster future research.

SESSION: Afternoon mapping

Bijective parameterization with free boundaries

We present a fully automatic method for generating guaranteed bijective surface parameterizations from triangulated 3D surfaces partitioned into charts. We do so by using a distortion metric that prevents local folds of triangles in the parameterization and a barrier function that prevents intersection of the chart boundaries. In addition, we show how to modify the line search of an interior point method to directly compute the singularities of the distortion metric and barrier functions to maintain a bijective map. By using an isometric metric that is efficient to compute and a spatial hash to accelerate the evaluation and gradient of the barrier function for the boundary, we achieve fast optimization times. Unlike previous methods, we do not require the boundary be constrained by the user to a non-intersecting shape to guarantee a bijection, and the boundary of the parameterization is free to change shape during the optimization to minimize distortion.

Computing locally injective mappings by advanced MIPS

Computing locally injective mappings with low distortion in an efficient way is a fundamental task in computer graphics. By revisiting the well-known MIPS (Most-Isometric ParameterizationS) method, we introduce an advanced MIPS method that inherits the local injectivity of MIPS, achieves distortions as low as or lower than those of state-of-the-art locally injective mapping techniques, and performs one to two orders of magnitude faster in computing a mesh-based mapping. The success of our method relies on two key components. The first is an enhanced MIPS energy function that strongly penalizes the maximal distortion and distributes the distortion evenly over the domain for both mesh-based and meshless mappings. The second is the use of an inexact block coordinate descent method for mesh-based mapping that efficiently minimizes the distortion while avoiding being trapped early in a local minimum. We demonstrate the capability and superiority of our method in various applications including mesh parameterization, mesh-based and meshless deformation, and mesh improvement.
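
For reference, the sketch below evaluates the classic per-triangle MIPS conformal distortion ||J||_F^2 / det(J) = sigma1/sigma2 + sigma2/sigma1 from the mapping Jacobian; the advanced energy in the paper further blends in an area term and exponentiates the result to emphasize the maximal distortion, which this sketch does not reproduce.

```python
# Illustrative per-triangle MIPS distortion evaluation.
import numpy as np

def jacobian_2d(rest_tri, def_tri):
    """Jacobian of the affine map taking rest-triangle edges to deformed edges."""
    Er = np.column_stack([rest_tri[1] - rest_tri[0], rest_tri[2] - rest_tri[0]])
    Ed = np.column_stack([def_tri[1] - def_tri[0], def_tri[2] - def_tri[0]])
    return Ed @ np.linalg.inv(Er)

def mips_energy(J):
    d = np.linalg.det(J)
    if d <= 0:                    # fold-over: infinite barrier keeps the map injective
        return np.inf
    return np.sum(J * J) / d      # = sigma1/sigma2 + sigma2/sigma1

if __name__ == "__main__":
    rest = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    warped = np.array([[0.0, 0.0], [1.5, 0.1], [-0.1, 0.8]])
    J = jacobian_2d(rest, warped)
    print("MIPS distortion:", mips_energy(J))   # equals 2.0 for a conformal map
```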

Seamless surface mappings

We introduce a method for computing seamless bijective mappings between two surface-meshes that interpolates a given set of correspondences.

A common approach for computing a map between surfaces is to cut the surfaces to disks, flatten them to the plane, and extract the mapping from the flattenings by composing one flattening with the inverse of the other. So far, a significant drawback in this class of techniques is that the choice of cuts introduces a bias in the computation of the map that often causes visible artifacts and wrong correspondences.

In this paper we develop a surface mapping technique that is indifferent to the particular cut choice. This is achieved by a novel type of surface flattenings that encodes this cut-invariance, and when optimized with a suitable energy functional results in a seamless surface-to-surface map.

We show the algorithm enables producing high-quality seamless bijective maps for pairs of surfaces with a wide range of shape variability and from a small number of prescribed correspondences. We also used this framework to produce three-way, consistent and seamless mappings for triplets of surfaces.

Bounded distortion harmonic mappings in the plane

We present a framework for the computation of harmonic and conformal mappings in the plane with mathematical guarantees that the computed mappings are C∞, locally injective, and satisfy strict bounds on the conformal and isometric distortion. Such mappings are very desirable in many computer graphics and geometry processing applications.

We establish the sufficient and necessary conditions for a harmonic planar mapping to have bounded distortion. Our key observation is that these conditions relate solely to the boundary behavior of the mapping. This leads to an efficient and accurate algorithm that supports handle-based interactive shape-and-image deformation and is demonstrated to outperform other state-of-the-art methods.

SESSION: Deform me a solid

Data-driven finite elements for geometry and material design

Crafting the behavior of a deformable object is difficult---whether it is a biomechanically accurate character model or a new multimaterial 3D printable design. Getting it right requires constant iteration, performed either manually or driven by an automated system. Unfortunately, previous algorithms for accelerating three-dimensional finite element analysis of elastic objects suffer from expensive precomputation stages that rely on a priori knowledge of the object's geometry and material composition. In this paper we introduce Data-Driven Finite Elements as a solution to this problem. Given a material palette, our method constructs a metamaterial library which is reusable for subsequent simulations, regardless of object geometry and/or material composition. At runtime, we perform fast coarsening of a simulation mesh using a simple table lookup to select the appropriate metamaterial model for the coarsened elements. When the object's material distribution or geometry changes, we do not need to update the metamaterial library---we simply need to update the metamaterial assignments to the coarsened elements. An important advantage of our approach is that it is applicable to non-linear material models. This is important for designing objects that undergo finite deformation (such as those produced by multimaterial 3D printing). Our method yields speed gains of up to two orders of magnitude while maintaining good accuracy. We demonstrate the effectiveness of the method on both virtual and 3D printed examples in order to show its utility as a tool for deformable object design.

Nonlinear material design using principal stretches

The Finite Element Method is widely used for solid deformable object simulation in film, computer games, virtual reality and medicine. Previous applications of nonlinear solid elasticity employed materials from a few standard families such as linear corotational, nonlinear St. Venant-Kirchhoff, Neo-Hookean, Ogden or Mooney-Rivlin materials. However, the spaces of all nonlinear isotropic and anisotropic materials are infinite-dimensional and much broader than these standard materials. In this paper, we demonstrate how to intuitively explore the space of isotropic and anisotropic nonlinear materials, for design of animations in computer graphics and related fields. In order to do so, we first formulate the internal elastic forces and tangent stiffness matrices in the space of the principal stretches of the material. We then demonstrate how to design new isotropic materials by editing a single stress-strain curve, using a spline interface. Similarly, anisotropic (orthotropic) materials can be designed by editing three curves, one for each material direction. We demonstrate that modifying these curves using our proposed interface has an intuitive, visual effect on the simulation. Our materials accelerate simulation design and enable visual effects that are difficult or impossible to achieve with standard nonlinear materials.
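
A small sketch of the "edit a curve over principal stretches" idea follows: take the singular values of the deformation gradient and sum a user-editable 1D energy curve over them. The spline samples are hypothetical and force/stiffness assembly is omitted; this is not the authors' formulation, just a separable (Valanis-Landel-style) stand-in.

```python
# Illustrative separable energy evaluated at the principal stretches of F.
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical user-edited curve: energy density per principal stretch,
# with its minimum at lambda = 1 (the rest state).
lam_samples = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 2.0])
energy_samples = np.array([0.9, 0.15, 0.0, 0.12, 0.6, 2.0])
f = CubicSpline(lam_samples, energy_samples)

def principal_stretch_energy(F):
    """Sum the 1D curve over the singular values (principal stretches) of F."""
    stretches = np.linalg.svd(F, compute_uv=False)
    return float(np.sum(f(stretches)))

if __name__ == "__main__":
    F_rest = np.eye(3)
    F_stretch = np.diag([1.3, 1.0, 0.9])
    print("rest energy     :", principal_stretch_energy(F_rest))
    print("stretched energy:", principal_stretch_energy(F_stretch))
```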

Subspace condensation: full space adaptivity for subspace deformations

Subspace deformable body simulations can be very fast, but can behave unrealistically when behaviors outside the prescribed subspace, such as novel external collisions, are encountered. We address this limitation by presenting a fast, flexible new method that allows full space computation to be activated in the neighborhood of novel events while the rest of the body still computes in a subspace. We achieve this using a method we call subspace condensation, a variant on the classic static condensation precomputation. However, instead of a precomputation, we use the speed of subspace methods to perform the condensation at every frame. This approach allows the full space regions to be specified arbitrarily at runtime, and forms a natural two-way coupling with the subspace regions. While condensation is usually only applicable to linear materials, the speed of our technique enables its application to non-linear materials as well. We show the effectiveness of our approach by applying it to a variety of articulated character scenarios.

SESSION: Image processing

Perceptually based downscaling of images

We propose a perceptually based method for downscaling images that provides a better apparent depiction of the input image. We formulate image downscaling as an optimization problem where the difference between the input and output images is measured using a widely adopted perceptual image quality metric. The downscaled images retain perceptually important features and details, resulting in an accurate and spatio-temporally consistent representation of the high resolution input. We derive the solution of the optimization problem in closed-form, which leads to a simple, efficient and parallelizable implementation with sums and convolutions. The algorithm has running times similar to linear filtering and is orders of magnitude faster than the state-of-the-art for image downscaling. We validate the effectiveness of the technique with extensive tests on many images, video, and by performing a user study, which indicates a clear preference for the results of the new algorithm.

An L1 image transform for edge-preserving smoothing and scene-level intrinsic decomposition

Identifying sparse salient structures from dense pixels is a longstanding problem in visual computing. Solutions to this problem can benefit both image manipulation and understanding. In this paper, we introduce an image transform based on the L1 norm for piecewise image flattening. This transform can effectively preserve and sharpen salient edges and contours while eliminating insignificant details, producing a nearly piecewise constant image with sparse structures. A variant of this image transform can perform edge-preserving smoothing more effectively than existing state-of-the-art algorithms. We further present a new method for complex scene-level intrinsic image decomposition. Our method relies on the above image transform to suppress surface shading variations, and perform probabilistic reflectance clustering on the flattened image instead of the original input image to achieve higher accuracy. Extensive testing on the Intrinsic-Images-in-the-Wild database indicates our method can perform significantly better than existing techniques both visually and numerically. The obtained intrinsic images have been successfully used in two applications, surface retexturing and 3D object compositing in photographs.

A computational approach for obstruction-free photography

We present a unified computational approach for taking photos through reflecting or occluding elements such as windows and fences. Rather than capturing a single image, we instruct the user to take a short image sequence while slightly moving the camera. Differences that often exist in the relative position of the background and the obstructing elements from the camera allow us to separate them based on their motions, and to recover the desired background scene as if the visual obstructions were not there. We show results on controlled experiments and many real and practical scenarios, including shooting through reflections, fences, and raindrop-covered windows.

SESSION: Taking control

Dynamic terrain traversal skills using reinforcement learning

The locomotion skills developed for physics-based characters most often target flat terrain. However, much of their potential lies with the creation of dynamic, momentum-based motions across more complex terrains. In this paper, we learn controllers that allow simulated characters to traverse terrains with gaps, steps, and walls using highly dynamic gaits. This is achieved using reinforcement learning, with careful attention given to the action representation; non-parametric approximation of both the value function and the policy; epsilon-greedy exploration; and the learning of a good state distance metric. The methods enable a 21-link planar dog and a 7-link planar biped to navigate challenging sequences of terrain using bounding and running gaits. We evaluate the impact of the key features of our skill learning pipeline on the resulting performance.

Online control of simulated humanoids using particle belief propagation

We present a novel, general-purpose Model-Predictive Control (MPC) algorithm that we call Control Particle Belief Propagation (C-PBP). C-PBP combines multimodal, gradient-free sampling and a Markov Random Field factorization to effectively perform simultaneous path finding and smoothing in high-dimensional spaces. We demonstrate the method in online synthesis of interactive and physically valid humanoid movements, including balancing, recovery from both small and extreme disturbances, reaching, balancing on a ball, juggling a ball, and fully steerable locomotion in an environment with obstacles. Such a large repertoire of movements has not been demonstrated before at interactive frame rates, especially considering that all our movement emerges from simple cost functions. Furthermore, we abstain from using any precomputation to train a control policy offline, reference data such as motion capture clips, or state machines that break the movements down into more manageable subtasks. Operating under these conditions enables rapid and convenient iteration when designing the cost functions.

Intuitive and efficient camera control with the toric space

A large range of computer graphics applications such as data visualization or virtual movie production require users to position and move viewpoints in 3D scenes to effectively convey visual information or tell stories. The desired viewpoints and camera paths are required to satisfy a number of visual properties (e.g. size, vantage angle, visibility, and on-screen position of targets). Yet, existing camera manipulation tools only provide limited interaction methods and automated techniques remain computationally expensive.

In this work, we introduce the Toric space, a novel and compact representation for intuitive and efficient virtual camera control. We first show how visual properties are expressed in this Toric space and propose an efficient interval-based search technique for automated viewpoint computation. We then derive a novel screen-space manipulation technique that provides intuitive and real-time control of visual properties. Finally, we propose an effective viewpoint interpolation technique which ensures the continuity of visual properties along the generated paths. The proposed approach (i) performs better than existing automated viewpoint computation techniques in terms of speed and precision, (ii) provides a screen-space manipulation tool that is more efficient than classical manipulators and easier to use for beginners, and (iii) enables the creation of complex camera motions such as long takes in a very short time and in a controllable way. As a result, the approach should quickly find its place in a number of applications that require interactive or automated camera control such as 3D modelers, navigation tools or 3D games.

SESSION: Shape analysis

Interaction context (ICON): towards a geometric functionality descriptor

We introduce a contextual descriptor which aims to provide a geometric description of the functionality of a 3D object in the context of a given scene. Differently from previous works, we do not regard functionality as an abstract label or represent it implicitly through an agent. Our descriptor, called interaction context or ICON for short, explicitly represents the geometry of object-to-object interactions. Our approach to object functionality analysis is based on the key premise that functionality should mainly be derived from interactions between objects and not objects in isolation. Specifically, ICON collects geometric and structural features to encode interactions between a central object in a 3D scene and its surrounding objects. These interactions are then grouped based on feature similarity, leading to a hierarchical structure. By focusing on interactions and their organization, ICON is insensitive to the numbers of objects that appear in a scene, the specific disposition of objects around the central object, or the objects' fine-grained geometry. With a series of experiments, we demonstrate the potential of ICON in functionality-oriented shape processing, including shape retrieval (either directly or by complementing existing shape descriptors), segmentation, and synthesis.

Elements of style: learning perceptual shape style similarity

The human perception of stylistic similarity transcends structure and function: for instance, a bed and a dresser may share a common style. An algorithmically computed style similarity measure that mimics human perception can benefit a range of computer graphics applications. Previous work in style analysis focused on shapes within the same class, and leveraged structural similarity between these shapes to facilitate analysis. In contrast, we introduce the first structure-transcending style similarity measure and validate it to be well aligned with human perception of stylistic similarity. Our measure is inspired by observations about style similarity in art history literature, which point to the presence of similarly shaped, salient, geometric elements as one of the key indicators of stylistic similarity. We translate these observations into an algorithmic measure by first quantifying the geometric properties that make humans perceive geometric elements as similarly shaped and salient in the context of style, then employing this quantification to detect pairs of matching, style-related elements on the analyzed models, and finally collating the element-level geometric similarity measurements into an object-level style measure consistent with human perception. To achieve this consistency, we employ crowdsourcing to quantify the different components of our measure; we learn the relative perceptual importance of a range of elementary shape distances and other parameters used in our measurement from 50K responses to cross-structure style similarity queries provided by over 2500 participants. We train and validate our method on this dataset, showing it to successfully predict relative style similarity with near 90% accuracy based on 10-fold cross-validation.

Style compatibility for 3D furniture models

This paper presents a method for learning to predict the stylistic compatibility between 3D furniture models from different object classes: e.g., how well does this chair go with that table? To do this, we collect relative assessments of style compatibility using crowdsourcing. We then compute geometric features for each 3D model and learn a mapping of them into a space where Euclidean distances represent style incompatibility. Motivated by the geometric subtleties of style, we introduce part-aware geometric feature vectors that characterize the shapes of different parts of an object separately. Motivated by the need to compute style compatibility between different object classes, we introduce a method to learn object class-specific mappings from geometric features to a shared feature space. During experiments with these methods, we find that they are effective at predicting style compatibility agreed upon by people. We find in user studies that the learned compatibility metric is useful for novel interactive tools that: 1) retrieve stylistically compatible models for a query, 2) suggest a piece of furniture for an existing scene, and 3) help guide an interactive 3D modeler towards scenes with compatible furniture.

Semantic shape editing using deformation handles

We propose a shape editing method where the user creates geometric deformations using a set of semantic attributes, thus avoiding the need for detailed geometric manipulations. In contrast to prior work, we focus on continuous deformations instead of discrete part substitutions. Our method provides a platform for quick design explorations and allows non-experts to produce semantically guided shape variations that are otherwise difficult to attain. We crowdsource a large set of pairwise comparisons between the semantic attributes and geometry and use this data to learn a continuous mapping from the semantic attributes to geometry. The resulting map enables simple and intuitive shape manipulations based solely on the learned attributes. We demonstrate our method on large datasets using two different user interaction modes and evaluate its usability with a set of user studies.

Single-view reconstruction via joint analysis of image and shape collections

We present an approach to automatic 3D reconstruction of objects depicted in Web images. The approach reconstructs objects from single views. The key idea is to jointly analyze a collection of images of different objects along with a smaller collection of existing 3D models. The images are analyzed and reconstructed together. Joint analysis regularizes the formulated optimization problems, stabilizes correspondence estimation, and leads to reasonable reproduction of object appearance without traditional multi-view cues.

SESSION: Fabricating fabulous forms

Architecture-scale human-assisted additive manufacturing

Recent digital fabrication tools have opened up accessibility to personalized rapid prototyping; however, such tools are limited to product-scale objects. The materials currently available for use in 3D printing are too fine for large-scale objects, and CNC gantry sizes limit the scope of printable objects. In this paper, we propose a new method for printing architecture-scale objects. Our proposal includes three developments: (i) a construction material consisting of chopsticks and glue, (ii) a handheld chopstick dispenser, and (iii) a printing guidance system that uses projection mapping. The proposed chopstick-glue material is cost-effective, environmentally sustainable, and can be printed more quickly than conventional materials. The developed handheld dispenser enables consistent feeding of the chopstick-glue material composite. The printing guidance system --- consisting of a depth camera and a projector --- evaluates a given shape in real time and indicates where humans should deposit chopsticks by projecting a simple color code onto the form under construction. Given the mechanical specifications of the stick-glue composite, an experimental pavilion was designed as a case study of the proposed method and built without scaffolding or formwork. The case study also revealed several fundamental limitations, such as the projector's inability to work in daylight, which require future investigation.

Parametric self-supporting surfaces via direct computation of airy stress functions

This paper presents a method that employs parametric surfaces as surface geometry representations at any stage of a computational process to compute self-supporting surfaces. This differentiates our approach from existing relevant methods, which represent surfaces by a triangulated mesh or a network of lines. The proposed method is based on the theory of Airy stress functions. Although some existing methods are also based on this theory, they apply its discrete version to discrete geometries. The proposed method simultaneously applies the theory directly to parametric surfaces and the discrete theory to the edges of parametric patches. The discontinuous boundary between continuous patches naturally corresponds to ribs seen in traditional vault masonry buildings. We use nonuniform rational B-spline surfaces in this study; however, the basic idea can be applied to other parametric surfaces. A variety of self-supporting surfaces obtained by the proposed computational scheme is presented.
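
For readers unfamiliar with Airy stress functions, the continuous relations that this line of work builds on can be summarized as follows; sign conventions vary between references, and this is one common form rather than necessarily the paper's exact formulation.

```latex
% In-plane (plan-projected) stresses derived from an Airy stress function \varphi(x,y):
\[
\sigma_{xx} = \frac{\partial^2 \varphi}{\partial y^2},
\qquad
\sigma_{yy} = \frac{\partial^2 \varphi}{\partial x^2},
\qquad
\sigma_{xy} = -\frac{\partial^2 \varphi}{\partial x\,\partial y}.
\]
% Vertical equilibrium of a self-supporting height field z(x,y) under a vertical
% areal load w (often called Pucher's equation; sign depends on the compression convention):
\[
\frac{\partial^2\varphi}{\partial y^2}\,\frac{\partial^2 z}{\partial x^2}
\;-\;2\,\frac{\partial^2\varphi}{\partial x\,\partial y}\,\frac{\partial^2 z}{\partial x\,\partial y}
\;+\;\frac{\partial^2\varphi}{\partial x^2}\,\frac{\partial^2 z}{\partial y^2}
\;=\;-\,w .
\]
```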

Foldabilizing furniture

We introduce the foldabilization problem for space-saving furniture design. Namely, given a 3D object representing a piece of furniture, our goal is to apply a minimum amount of modification to the object so that it can be folded to save space --- the object is thus foldabilized. We focus on one instance of the problem where folding is with respect to a prescribed folding direction and allowed object modifications include hinge insertion and part shrinking.

We develop an automatic algorithm for foldabilization by formulating and solving a nested optimization problem operating at two granularity levels of the input shape. Specifically, the input shape is first partitioned into a set of integral folding units. For each unit, we construct a graph which encodes conflict relations, e.g., collisions, between foldings implied by various patch foldabilizations within the unit. Finding a minimum-cost foldabilization with a conflict-free folding is an instance of the maximum-weight independent set problem. In the outer loop of the optimization, we process the folding units in an optimized ordering where the units are sorted based on estimated foldabilization costs. We show numerous foldabilization results computed at interactive speed and 3D-print physical prototypes of these results to demonstrate manufacturability.
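
A toy sketch of the per-unit formulation described above, using hypothetical folding options, weights, and conflict (collision) pairs; the paper derives these from the input geometry and uses its own cost model, so everything named below is illustrative only.

```python
import itertools

# Hypothetical folding options for one unit: (name, weight), where a higher
# weight stands for a cheaper, more desirable patch foldabilization.
options = [("fold_A", 5.0), ("fold_B", 3.0), ("fold_C", 4.0), ("fold_D", 2.0)]

# Hypothetical conflict edges: pairs of foldings that would collide.
conflicts = {("fold_A", "fold_B"), ("fold_C", "fold_D")}

def conflict_free(subset):
    return all((a, b) not in conflicts and (b, a) not in conflicts
               for a, b in itertools.combinations(subset, 2))

def max_weight_independent_set(options, conflicts):
    """Brute-force MWIS; fine for the handful of foldings inside one unit."""
    best_set, best_weight = (), float("-inf")
    names = [n for n, _ in options]
    weights = dict(options)
    for r in range(len(names) + 1):
        for subset in itertools.combinations(names, r):
            if conflict_free(subset):
                w = sum(weights[n] for n in subset)
                if w > best_weight:
                    best_set, best_weight = subset, w
    return best_set, best_weight

print(max_weight_independent_set(options, conflicts))
# -> (('fold_A', 'fold_C'), 9.0)
```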

Computational interlocking furniture assembly

Furniture typically consists of assemblies of elongated and planar parts that are connected together by glue, nails, hinges, screws, or other means that do not encourage disassembly and re-assembly. An alternative approach is to use an interlocking mechanism, where the component parts tightly interlock with one another. The challenge in designing such a network of interlocking joints is that local analysis is insufficient to guarantee global interlocking, and there is a huge number of joint combinations that require an enormous exploration effort to ensure global interlocking. In this paper, we present a computational solution to support the design of a network of interlocking joints that form a globally-interlocking furniture assembly. The key idea is to break the furniture assembly into an overlapping set of small groups, where the parts in each group are immobilized by a local key, and adjacent groups are further locked with dependencies. The dependency among the groups saves the effort of exploring the immobilization of every subset of parts in the assembly, thus allowing the intensive interlocking computation to be localized within each small group. We demonstrate the effectiveness of our technique on many globally-interlocking furniture assemblies of various shapes and complexity.

SESSION: Transfer & capture

LazyFluids: appearance transfer for fluid animations

In this paper we present a novel approach to appearance transfer for fluid animations based on flow-guided texture synthesis. In contrast to common practice where pre-captured sets of fluid elements are combined in order to achieve desired motion and look, we bring the possibility of fine-tuning motion properties in advance using CG techniques, and then transferring the desired look from a selected appearance exemplar. We demonstrate that such a practical work-flow cannot be simply implemented using current state-of-the-art techniques, analyze what the main obstacles are, and propose a solution to resolve them. In addition, we extend the algorithm to allow for synthesis with rich boundary effects and video exemplars. Finally, we present numerous results that demonstrate the versatility of the proposed approach.

Fluid volume modeling from sparse multi-view images by appearance transfer

We propose a method of three-dimensional (3D) modeling of volumetric fluid phenomena from sparse multi-view images (e.g., only a single-view input or a pair of front- and side-view inputs). The volume determined from such sparse inputs using previous methods appears blurry and unnatural from novel views; our method, however, preserves the appearance at novel viewing angles by transferring appearance information from the input images to those angles. For appearance information, we use histograms of image intensities and steerable coefficients. We formulate the volume modeling as an energy minimization problem with statistical hard constraints, which is solved using an expectation maximization (EM)-like iterative algorithm. Our algorithm begins with a rough estimate of the initial volume modeled from the input images, followed by an iterative process whereby we first render the images of the current volume with novel viewing angles. Then, we modify the rendered images by transferring the appearance information from the input images, and we thereafter model the improved volume based on the modified images. We iterate these operations until the volume converges. We demonstrate that our method successfully provides natural-looking volume sequences of fluids (i.e., fire, smoke, explosions, and a water splash) from sparse multi-view videos. To create production-ready fluid animations, we further propose a method of rendering and editing fluids using a commercially available fluid simulator.

Deformation capture and modeling of soft objects

We present a data-driven method for deformation capture and modeling of general soft objects. We adopt an iterative framework that consists of one component for physics-based deformation tracking and another for spacetime optimization of deformation parameters. Low cost depth sensors are used for the deformation capture, and we do not require any force-displacement measurements, thus making the data capture a cheap and convenient process. We augment a state-of-the-art probabilistic tracking method to robustly handle noise, occlusions, fast movements and large deformations. The spacetime optimization aims to match the simulated trajectories with the tracked ones. The optimized deformation model is then used to boost the accuracy of the tracking results, which can in turn improve the deformation parameter estimation itself in later iterations. Numerical experiments demonstrate that the tracking and parameter optimization components complement each other nicely.

Our spacetime optimization of the deformation model includes not only the material elasticity parameters and dynamic damping coefficients, but also the reference shape which can differ significantly from the static shape for soft objects. The resulting optimization problem is highly nonlinear in high dimensions, and challenging to solve with previous methods. We propose a novel splitting algorithm that alternates between reference shape optimization and deformation parameter estimation, and thus enables tailoring the optimization of each subproblem more efficiently and robustly.
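
Schematically, the splitting algorithm is a block-coordinate (alternating) minimization; the sketch below uses a hypothetical quadratic stand-in for the simulate-and-compare objective, whereas the actual system runs a physics simulator and spacetime trajectory matching inside each sub-problem.

```python
import numpy as np

def simulate_error(rest_shape, params):
    """Hypothetical stand-in for 'run the simulator and compare the simulated
    trajectory with the tracked one'. Here: a toy quadratic discrepancy."""
    return (np.sum((rest_shape - 1.0) ** 2) + np.sum((params - 0.5) ** 2)
            + 0.1 * np.sum(rest_shape * params))

def optimize_rest_shape(rest_shape, params, lr=0.1, iters=50):
    for _ in range(iters):                       # gradient steps on one block
        grad = 2.0 * (rest_shape - 1.0) + 0.1 * params
        rest_shape = rest_shape - lr * grad
    return rest_shape

def optimize_params(rest_shape, params, lr=0.1, iters=50):
    for _ in range(iters):                       # gradient steps on the other block
        grad = 2.0 * (params - 0.5) + 0.1 * rest_shape
        params = params - lr * grad
    return params

rest_shape = np.zeros(4)   # reference (undeformed) shape degrees of freedom
params = np.ones(4)        # elasticity / damping parameters
for outer in range(10):    # alternate the two sub-problems until convergence
    rest_shape = optimize_rest_shape(rest_shape, params)
    params = optimize_params(rest_shape, params)
    print(outer, simulate_error(rest_shape, params))
```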

Our system enables realistic motion reconstruction as well as synthesis of virtual soft objects in response to user stimulation. Validation experiments show that our method not only is accurate, but also compares favorably to existing techniques. We also showcase the ability of our system with high-quality animations generated from optimized deformation parameters for a variety of soft objects, such as live plants and fabricated models.

SESSION: Geometry zoo

Zoomorphic design

Zoomorphic shapes are man-made shapes that possess the form or appearance of an animal. They have desirable aesthetic properties, but are difficult to create using conventional modeling tools. We present a method for creating zoomorphic shapes by merging a man-made shape and an animal shape. To identify a pair of shapes that are suitable for merging, we use an efficient graph kernel based technique. We formulate the merging process as a continuous optimization problem where the two shapes are deformed jointly to minimize an energy function combining several design factors. The modeler can adjust the weighting between these factors to attain high-level control over the final shape produced. A novel technique ensures that the zoomorphic shape does not violate the design restrictions of the man-made shape. We demonstrate the versatility and effectiveness of our approach by generating a wide variety of zoomorphic shapes.

Shading-based refinement on volumetric signed distance functions

We present a novel method to obtain fine-scale detail in 3D reconstructions generated with low-budget RGB-D cameras or other commodity scanning devices. As the depth data of these sensors is noisy, truncated signed distance fields are typically used to regularize out the noise, which unfortunately leads to over-smoothed results. In our approach, we leverage RGB data to refine these reconstructions through shading cues, as color input is typically of much higher resolution than the depth data. As a result, we obtain reconstructions with high geometric detail, far beyond the depth resolution of the camera itself. Our core contribution is shading-based refinement directly on the implicit surface representation, which is generated from globally-aligned RGB-D images. We formulate the inverse shading problem on the volumetric distance field, and present a novel objective function which jointly optimizes for fine-scale surface geometry and spatially-varying surface reflectance. In order to enable the efficient reconstruction of sub-millimeter detail, we store and process our surface using a sparse voxel hashing scheme which we augment by introducing a grid hierarchy. A tailored GPU-based Gauss-Newton solver enables us to refine large shape models to previously unseen resolution within only a few seconds.

SESSION: Image similarity & search

PatchTable: efficient patch queries for large datasets and applications

This paper presents a data structure that reduces approximate nearest neighbor query times for image patches in large datasets. Previous work in texture synthesis has demonstrated real-time synthesis from small exemplar textures. However, high performance has proved elusive for modern patch-based optimization techniques which frequently use many exemplar images in the tens of megapixels or above. Our new algorithm, PatchTable, offloads as much of the computation as possible to a pre-computation stage that takes modest time, so patch queries can be as efficient as possible. There are three key insights behind our algorithm: (1) a lookup table similar to locality sensitive hashing can be precomputed, and used to seed sufficiently good initial patch correspondences during querying, (2) missing entries in the table can be filled during pre-computation with our fast Voronoi transform, and (3) the initially seeded correspondences can be improved with a precomputed k-nearest neighbors mapping. We show experimentally that this accelerates the patch query operation by up to 9× over k-coherence, up to 12× over TreeCANN, and up to 200× over PatchMatch. Our fast algorithm allows us to explore efficient and practical imaging and computational photography applications. We show results for artistic video stylization, light field super-resolution, and multi-image editing.
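
A heavily simplified sketch of the first insight (a precomputed table that seeds initial patch correspondences with a single lookup), assuming small grayscale patches and a PCA-based dimensionality reduction; the real PatchTable additionally fills empty cells with a fast Voronoi transform and refines seeds with a precomputed k-nearest-neighbors mapping.

```python
import numpy as np

PATCH = 8          # patch side length
DIMS, BINS = 3, 16 # table dimensionality and resolution per dimension

def extract_patches(image, stride=4):
    ps = []
    for y in range(0, image.shape[0] - PATCH + 1, stride):
        for x in range(0, image.shape[1] - PATCH + 1, stride):
            ps.append(image[y:y + PATCH, x:x + PATCH].ravel())
    return np.array(ps, dtype=np.float64)

rng = np.random.default_rng(0)
exemplar = rng.random((128, 128))          # stand-in for the exemplar image(s)
patches = extract_patches(exemplar)

# Project patches to a few dimensions (here PCA) and quantize each dimension.
mean = patches.mean(axis=0)
_, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
basis = vt[:DIMS]                          # top principal directions
coords = (patches - mean) @ basis.T
lo, hi = coords.min(axis=0), coords.max(axis=0)

def cell_of(c):
    idx = np.floor((c - lo) / (hi - lo + 1e-12) * BINS).astype(int)
    return tuple(np.clip(idx, 0, BINS - 1))

table = {}                                 # cell -> index of one exemplar patch
for i, c in enumerate(coords):
    table.setdefault(cell_of(c), i)

def query(patch):
    """Seed an approximate nearest-neighbour match by a single table lookup."""
    c = (patch - mean) @ basis.T
    return table.get(cell_of(c), 0)        # empty cells would be filled offline

print("seed index for a query patch:", query(patches[123]))
```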

Learning visual similarity for product design with convolutional neural networks

Popular sites like Houzz, Pinterest, and LikeThatDecor have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high-quality results for search across multiple visual domains, enabling new applications in interior design.
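
The siamese-network variant mentioned above is typically trained with a contrastive loss over pairs of embeddings; the NumPy sketch below shows that loss on made-up embeddings (the paper's actual architecture, margin, and training data are not reproduced here).

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_product, margin=1.0):
    """Pull embeddings of the same product (in-scene crop vs. iconic image)
    together and push different products at least `margin` apart."""
    d = np.linalg.norm(emb_a - emb_b, axis=1)             # Euclidean distances
    pos = same_product * d**2                              # matching pairs
    neg = (1 - same_product) * np.maximum(0.0, margin - d)**2
    return np.mean(pos + neg)

rng = np.random.default_rng(0)
emb_scene = rng.normal(size=(4, 128))   # embeddings of cropped in-scene products
emb_icon = rng.normal(size=(4, 128))    # embeddings of iconic product images
labels = np.array([1, 1, 0, 0])         # 1 = same product, 0 = different
print(contrastive_loss(emb_scene, emb_icon, labels))
```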

SESSION: Fabrication & function

LinkEdit: interactive linkage editing using symbolic kinematics

We present a method for interactive editing of planar linkages. Given a working linkage as input, the user can make targeted edits to the shape or motion of selected parts while preserving other, e.g., functionally-important aspects. In order to make this process intuitive and efficient, we provide a number of editing tools at different levels of abstraction. For instance, the user can directly change the structure of a linkage by displacing joints, edit the motion of selected points on the linkage, or impose limits on the size of its enclosure. Our method safeguards against degenerate configurations during these edits, thus ensuring the correct functioning of the mechanism at all times. Linkage editing poses strict requirements on performance that standard approaches fail to provide. In order to enable interactive and robust editing, we build on a symbolic kinematics approach that uses closed-form expressions instead of numerical methods to compute the motion of a linkage and its derivatives. We demonstrate our system on a diverse set of examples, illustrating the potential to adapt and personalize the structure and motion of existing linkages. To validate the feasibility of our edited designs, we fabricated two physical prototypes.

Fab forms: customizable objects for fabrication with validity and geometry caching

We address the problem of allowing casual users to customize parametric models while maintaining their valid state as 3D-printable functional objects. We define Fab Form as any design representation that lends itself to interactive customization by a novice user, while remaining valid and manufacturable. We propose a method to achieve these Fab Form requirements for general parametric designs tagged with a general set of automated validity tests and a small number of parameters exposed to the casual user. Our solution separates Fab Form evaluation into a precomputation stage and a runtime stage. Parts of the geometry and design validity (such as manufacturability) are evaluated and stored in the precomputation stage by adaptively sampling the design space. At runtime the remainder of the evaluation is performed. This allows interactive navigation in the valid regions of the design space using an automatically generated Web user interface (UI). We evaluate our approach by converting several parametric models into corresponding Fab Forms.

Computational design of twisty joints and puzzles

We present the first computational method that allows ordinary users to create complex twisty joints and puzzles inspired by the Rubik's Cube mechanism. Given a user-supplied 3D model and a small subset of rotation axes, our method automatically adjusts those rotation axes and adds others to construct a "non-blocking" twisty joint in the shape of the 3D model. Our method outputs the shapes of pieces which can be directly 3D printed and assembled into an interlocking puzzle. We develop a group-theoretic approach to representing a wide class of twisty puzzles by establishing a connection between non-blocking twisty joints and the finite subgroups of the rotation group SO(3). The theoretical foundation enables us to build an efficient system for automatically completing the set of rotation axes and fast collision detection between pieces. We also generalize the Rubik's Cube mechanism to a large family of twisty puzzles.

Reduced-order shape optimization using offset surfaces

Given the 2-manifold surface of a 3D object, we propose a novel method for the computation of an offset surface with varying thickness such that the solid volume between the surface and its offset satisfies a set of prescribed constraints and at the same time minimizes a given objective functional. Since the constraints as well as the objective functional can easily be adjusted to specific application requirements, our method provides a flexible and powerful tool for shape optimization. We use manifold harmonics to derive a reduced-order formulation of the optimization problem, which guarantees a smooth offset surface and speeds up the computation independently from the input mesh resolution without affecting the quality of the result. The constrained optimization problem can be solved in a numerically robust manner with commodity solvers. Furthermore, the method allows simultaneously optimizing an inner and an outer offset in order to increase the degrees of freedom. We demonstrate our method in a number of examples where we control the physical mass properties of rigid objects for the purpose of 3D printing.

SESSION: Reconstruction & analysis

RAPter: rebuilding man-made scenes with regular arrangements of planes

With the proliferation of acquisition devices, gathering massive volumes of 3D data is now easy. Processing such large masses of point clouds, however, remains a challenge. This is particularly a problem for raw scans with missing data, noise, and varying sampling density. In this work, we present a simple, scalable, yet powerful data reconstruction algorithm. We focus on reconstruction of man-made scenes as regular arrangements of planes (RAP), thereby selecting both local plane-based approximations along with their global inter-plane relations. We propose a novel selection formulation to directly balance between data fitting and the simplicity of the resulting arrangement of extracted planes. The main technical contribution is a formulation that allows less-dominant orientations to still retain their internal regularity, and not become overwhelmed and regularized by the dominant scene orientations. We evaluate our approach on a variety of complex 2D and 3D point clouds, and demonstrate the advantages over existing alternative methods.

Coupled segmentation and similarity detection for architectural models

Recent shape retrieval and interactive modeling algorithms enable the re-use of existing models in many applications. However, most of those techniques require a pre-labeled model with some semantic information. We introduce a fully automatic approach to simultaneously segment and detect similarities within an existing 3D architectural model. Our framework approaches the segmentation problem as a weighted minimum set cover over an input triangle soup, and maximizes the repetition of similar segments to find a best set of unique component types and instances. The solution for this set-cover formulation starts with a search space reduction to eliminate unlikely combinations of triangles, and continues with a combinatorial optimization within each disjoint subspace that outputs the components and their types. We show the discovered components of a variety of architectural models obtained from public databases. We demonstrate experiments testing the robustness of our algorithm, in terms of threshold sensitivity, vertex displacement, and triangulation variations of the original model. In addition, we compare our components with those of competing approaches and evaluate our results against user-based segmentations. We have processed a database of 50 buildings, with various structures and over 200K polygons per building, with a segmentation time averaging up to 4 minutes.

SESSION: Procedural modeling

Controlling procedural modeling programs with stochastically-ordered sequential Monte Carlo

We present a method for controlling the output of procedural modeling programs using Sequential Monte Carlo (SMC). Previous probabilistic methods for controlling procedural models use Markov Chain Monte Carlo (MCMC), which receives control feedback only for completely-generated models. In contrast, SMC receives feedback incrementally on incomplete models, allowing it to reallocate computational resources and converge quickly. To handle the many possible sequentializations of a structured, recursive procedural modeling program, we develop and prove the correctness of a new SMC variant, Stochastically-Ordered Sequential Monte Carlo (SOSMC). We implement SOSMC for general-purpose programs using a new programming primitive: the stochastic future. Finally, we show that SOSMC reliably generates high-quality outputs for a variety of programs and control scoring functions. For small computational budgets, SOSMC's outputs often score nearly twice as high as those of MCMC or normal SMC.
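
For orientation, a bare-bones sequential importance resampling loop is sketched below, with hypothetical extend and score functions standing in for "grow the partial procedural model one step" and "score the incomplete model"; SOSMC's stochastic ordering of a recursive program and its stochastic-future primitive are not captured by this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def extend(partial):
    """Hypothetical growth step: append one random 'module' to a partial model."""
    return partial + [rng.normal()]

def score(partial):
    """Hypothetical incremental score: prefer modules that sum to a target value."""
    return np.exp(-(np.sum(partial) - 3.0) ** 2)

def smc(n_particles=100, n_steps=10):
    particles = [[] for _ in range(n_particles)]
    for _ in range(n_steps):
        particles = [extend(p) for p in particles]            # grow every particle
        weights = np.array([score(p) for p in particles]) + 1e-12
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = [list(particles[i]) for i in idx]         # resample by weight
    return max(particles, key=score)

best = smc()
print("best partial model sums to", sum(best))
```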

WorldBrush: interactive example-based synthesis of procedural virtual worlds

We present a novel approach for the interactive synthesis and editing of virtual worlds. Our method is inspired by painting operations and uses methods for statistical example-based synthesis to automate content synthesis and deformation. Our real-time approach takes a form of local inverse procedural modeling based on intermediate statistical models: selected regions of procedurally and manually constructed example scenes are analyzed, and their parameters are stored as distributions in a palette, similar to colors on a painter's palette. These distributions can then be interactively applied with brushes and combined in various ways, like in painting systems. Selected regions can also be moved or stretched while maintaining the consistency of their content. Our method captures both distributions of elements and structured objects, and models their interactions. Results range from the interactive editing of 2D artwork maps to the design of 3D virtual worlds, where constraints set by the terrain's slope are also taken into account.

Advanced procedural modeling of architecture

We present the novel grammar language CGA++ for the procedural modeling of architecture. While existing grammar-based approaches can produce stunning results, they are limited in what modeling scenarios can be realized. In particular, many context-sensitive tasks are precluded, not least because within the rules specifying how one shape is refined, the necessary knowledge about other shapes is not available. Transcending such limitations, CGA++ significantly raises the expressiveness and offers a generic and integrated solution for many advanced procedural modeling problems. Pivotally, CGA++ grants first-class citizenship to shapes, enabling, within a grammar, directly accessing shapes and shape trees, operations on multiple shapes, rewriting shape (sub)trees, and spawning new trees (e.g., to explore multiple alternatives). The new linguistic device of events allows coordination across multiple shapes, featuring powerful dynamic grouping and synchronization. Various examples illustrate CGA++, demonstrating solutions to previously infeasible modeling challenges.

Learning shape placements by example

We present a method to learn and propagate shape placements in 2D polygonal scenes from a few examples provided by a user. The placement of a shape is modeled as an oriented bounding box. Simple geometric relationships between this bounding box and nearby scene polygons define a feature set for the placement. The feature sets of all example placements are then used to learn a probabilistic model over all possible placements and scenes. With this model, we can generate a new set of placements with similar geometric relationships in any given scene. We introduce extensions that enable propagation and generation of shapes in 3D scenes, as well as the application of a learned modeling session to large scenes without additional user interaction. These concepts allow us to generate complex scenes with thousands of objects with relatively little user interaction.

SESSION: Appearance capture

Skin microstructure deformation with displacement map convolution

We present a technique for synthesizing the effects of skin microstructure deformation by anisotropically convolving a high-resolution displacement map to match normal distribution changes in measured skin samples. We use a 10-micron resolution scanning technique to measure several in vivo skin samples as they are stretched and compressed in different directions, quantifying how stretching smooths the skin and compression makes it rougher. We tabulate the resulting surface normal distributions, and show that convolving a neutral skin microstructure displacement map with blurring and sharpening filters can mimic normal distribution changes and microstructure deformations. We implement the spatially-varying displacement map filtering on the GPU to interactively render the effects of dynamic microgeometry on animated faces obtained from high-resolution facial scans.

Two-shot SVBRDF capture for stationary materials

Material appearance acquisition usually makes a trade-off between acquisition effort and richness of reflectance representation. In this paper, we instead aim for both a light-weight acquisition procedure and a rich reflectance representation simultaneously, by restricting ourselves to one, but very important, class of appearance phenomena: texture-like materials. While such materials' reflectance is generally spatially varying, they exhibit self-similarity in the sense that for any point on the texture there exist many others with similar reflectance properties. We show that the texturedness assumption allows reflectance capture using only two images of a planar sample, taken with and without a headlight flash. Our reconstruction pipeline starts with redistributing reflectance observations across the image, followed by a regularized texture statistics transfer and a non-linear optimization to fit a spatially-varying BRDF (SVBRDF) to the resulting data. The final result describes the material as spatially-varying, diffuse and specular, anisotropic reflectance over a detailed normal map. We validate the method by side-by-side and novel-view comparisons to photographs, comparing normal map resolution to sub-micron ground truth scans, as well as simulated results. Our method is robust enough to use handheld, JPEG-compressed photographs taken with a mobile phone camera and built-in flash.

Image based relighting using neural networks

We present a neural network regression method for relighting real-world scenes from a small number of images. The relighting in this work is formulated as the product of the scene's light transport matrix and new lighting vectors, with the light transport matrix reconstructed from the input images. Based on the observation that there should exist non-linear local coherence in the light transport matrix, our method approximates matrix segments using neural networks that model light transport as a non-linear function of light source position and pixel coordinates. Central to this approach is a proposed neural network design which incorporates various elements that facilitate modeling of light transport from a small image set. In contrast to most image-based relighting techniques, this regression-based approach allows input images to be captured under arbitrary illumination conditions, including light sources moved freely by hand. We validate our method with light transport data of real scenes containing complex lighting effects, and demonstrate that fewer input images are required in comparison to related techniques.
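
The relighting formulation in the first sentence is literally a matrix-vector product; a minimal sketch follows, with a random stand-in for the reconstructed transport matrix (in the paper the matrix is predicted segment-by-segment by small neural networks from light position and pixel coordinates rather than stored densely as here).

```python
import numpy as np

n_pixels, n_lights = 64 * 64, 32
rng = np.random.default_rng(0)

# Stand-in for the reconstructed light transport matrix T (pixels x light samples).
# Each column is the image of the scene lit by one light sample.
T = rng.random((n_pixels, n_lights))

def relight(T, lighting):
    """Relit image = light transport matrix times the new lighting vector."""
    return (T @ lighting).reshape(64, 64)

new_lighting = np.zeros(n_lights)
new_lighting[[3, 17]] = [0.8, 0.4]       # e.g. two light samples switched on
image = relight(T, new_lighting)
print(image.shape, image.max())
```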

Measurement-based editing of diffuse albedo with consistent interreflections

We present a novel measurement-based method for editing the albedo of diffuse surfaces with consistent interreflections in a photograph of a scene under natural lighting. Key to our method is a novel technique for decomposing a photograph of a scene into several images that encode how much of the observed radiance has interacted a specified number of times with the target diffuse surface. Altering the albedo of the target area is then simply a weighted sum of the decomposed components. We estimate the interaction components by recursively applying the light transport operator and formulate the resulting radiance in each recursion as a linear expression in terms of the relevant interaction components. Our method only requires a camera-projector pair, and the number of required measurements per scene is linearly proportional to the decomposition degree for a single target area. Our method does not impose restrictions on the lighting or on the material properties in the unaltered part of the scene. Furthermore, we extend our method to accommodate editing of the albedo in multiple target areas with consistent interreflections and we introduce a prediction model for reducing the acquisition cost. We demonstrate our method on a variety of scenes and validate the accuracy on both synthetic and real examples.
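
One way to see why the edit reduces to a weighted sum of the decomposed components: if each interaction with the target surface scales the light linearly by the target albedo, then changing that albedo reweights the k-times-interacting component by the k-th power of the albedo ratio. This is a simplified reading of the abstract, assuming a single, uniformly edited target area.

```latex
\[
L \;=\; \sum_{k\ge 0} L_k ,
\qquad
L' \;=\; \sum_{k\ge 0} \left(\frac{\rho'}{\rho}\right)^{k} L_k ,
\]
% where L_k is the part of the observed radiance that interacted exactly k times
% with the target diffuse surface, \rho and \rho' are the original and edited
% albedos, L is the original photograph, and L' is the edited result.
```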

SESSION: Fluids, from air to goo

OmniAD: data-driven omni-directional aerodynamics

This paper introduces "OmniAD," a novel data-driven pipeline to model and acquire the aerodynamics of three-dimensional rigid objects. Traditionally, aerodynamics are examined through elaborate wind tunnel experiments or expensive fluid dynamics computations, and are only measured for a small number of discrete wind directions. OmniAD allows the evaluation of aerodynamic forces, such as drag and lift, for any incoming wind direction using a novel representation based on spherical harmonics. Our data-driven technique acquires the aerodynamic properties of an object simply by capturing its falling motion using a single camera. Once model parameters are estimated, OmniAD enables realistic real-time simulation of rigid bodies, such as the tumbling and gliding of leaves, without simulating the surrounding air. In addition, we propose an intuitive user interface based on OmniAD to interactively design three-dimensional kites that actually fly. Various non-traditional kites were designed to demonstrate the physical validity of our model.

An implicit viscosity formulation for SPH fluids

We present a novel implicit formulation for highly viscous fluids simulated with Smoothed Particle Hydrodynamics (SPH). Compared to explicit methods, our formulation is significantly more efficient and handles a larger range of viscosities. Differing from existing implicit formulations, our approach reconstructs the velocity field from a target velocity gradient. This gradient encodes a desired shear-rate damping and preserves the velocity divergence that is introduced by the SPH pressure solver to counteract density deviations. The target gradient ensures that pressure and viscosity computation do not interfere. Therefore, only one pressure projection step is required, which is in contrast to state-of-the-art implicit Eulerian formulations. While our model differs from true viscosity in that vorticity diffusion is not encoded in the target gradient, it nevertheless captures many of the qualitative behaviors of viscous liquids. Our formulation can easily be incorporated into complex scenarios with one- and two-way coupled solids and multiple fluid phases with different densities and viscosities.

Codimensional non-Newtonian fluids

We present a novel method to simulate codimensional non-Newtonian fluids on simplicial complexes. Our method extends previous work for codimensional incompressible flow to various types of non-Newtonian fluids including both shear thinning and thickening, Bingham plastics, and elastoplastics. We propose a novel time integration scheme for semi-implicitly treating elasticity, which when combined with a semi-implicit method for variable viscosity alleviates the need for small time steps. Furthermore, we propose an improved treatment of viscosity on the rims of thin fluid sheets that allows us to capture their elusive, visually appealing twisting motion. In order to simulate complex phenomena such as the mixing of colored paint, we adopt a multiple level set framework and propose a discretization on simplicial complexes that facilitates the tracking of material interfaces across codimensions. We demonstrate the efficacy of our approach by simulating a wide variety of non-Newtonian fluid phenomena exhibiting various codimensional features.

SESSION: Character fashion & style

Animating human dressing

Dressing is one of the most common activities in human society. Perfecting the skill of dressing can take an average child three to four years of daily practice. The challenge is primarily due to the combined difficulty of coordinating different body parts and manipulating soft and deformable objects (clothes). We present a technique to synthesize human dressing by controlling a human character to put on an article of simulated clothing. We identify a set of primitive actions which account for the vast majority of motions observed in human dressing. These primitive actions can be assembled into a variety of motion sequences for dressing different garments with different styles. Exploiting both feed-forward and feedback control mechanisms, we develop a dressing controller to handle each of the primitive actions. The controller plans a path to achieve the action goal while making constant adjustments locally based on the current state of the simulated cloth when necessary. We demonstrate that our framework is versatile and able to animate dressing with different clothing types including a jacket, a pair of shorts, a robe, and a vest. Our controller is also robust to different cloth mesh resolutions which can cause the cloth simulator to generate significantly different cloth motions. In addition, we show that the same controller can be extended to assistive dressing.

A perceptual control space for garment simulation

We present a perceptual control space for simulation of cloth that works with any physical simulator, treating it as a black box. The perceptual control space provides intuitive, art-directable control over the simulation behavior based on a learned mapping from common descriptors for cloth (e.g., flowiness, softness) to the parameters of the simulation. To learn the mapping, we perform a series of perceptual experiments in which the simulation parameters are varied and participants assess the values of the common terms of the cloth on a scale. A multi-dimensional sub-space regression is performed on the results to build a perceptual generative model over the simulator parameters. We evaluate the perceptual control space by demonstrating that the generative model does in fact create simulated clothing that is rated by participants as having the expected properties. We also show that this perceptual control space generalizes to garments and motions not in the original experiments.

Space-time sketching of character animation

We present a space-time abstraction for the sketch-based design of character animation. It allows animators to draft a full coordinated motion using a single stroke called the space-time curve (STC). From the STC we compute a dynamic line of action (DLOA) that drives the motion of a 3D character through projective constraints. Our dynamic models for the line's motion are entirely geometric, require no pre-existing data, and allow full artistic control. The resulting DLOA can be refined by over-sketching strokes along the space-time curve, or by composing another DLOA on top leading to control over complex motions with few strokes. Additionally, the resulting dynamic line of action can be applied to arbitrary body parts or characters. To match a 3D character to the 2D line over time, we introduce a robust matching algorithm based on closed-form solutions, yielding a tight match while allowing squash and stretch of the character's skeleton. Our experiments show that space-time sketching has the potential of bringing animation design within the reach of beginners while saving time for skilled artists.

Realtime style transfer for unlabeled heterogeneous human motion

This paper presents a novel solution for realtime generation of stylistic human motion that automatically transforms unlabeled, heterogeneous motion data into new styles. The key idea of our approach is an online learning algorithm that automatically constructs a series of local mixtures of autoregressive models (MAR) to capture the complex relationships between styles of motion. We construct local MAR models on the fly by searching for the closest examples of each input pose in the database. Once the model parameters are estimated from the training data, the model adapts the current pose with simple linear transformations. In addition, we introduce an efficient local regression model to predict the timings of synthesized poses in the output style. We demonstrate the power of our approach by transferring stylistic human motion for a wide variety of actions, including walking, running, punching, kicking, jumping and transitions between those behaviors. Our method achieves superior performance in a comparison against alternative methods. We have also performed experiments to evaluate the generalization ability of our data-driven model as well as the key components of our system.

Dyna: a model of dynamic human shape in motion

To look human, digital full-body avatars need to have soft-tissue deformations like those of real people. We learn a model of soft-tissue deformations from examples using a high-resolution 4D capture system and a method that accurately registers a template mesh to sequences of 3D scans. Using over 40,000 scans of ten subjects, we learn how soft-tissue motion causes mesh triangles to deform relative to a base 3D body model. Our Dyna model uses a low-dimensional linear subspace to approximate soft-tissue deformation and relates the subspace coefficients to the changing pose of the body. Dyna uses a second-order auto-regressive model that predicts soft-tissue deformations based on previous deformations, the velocity and acceleration of the body, and the angular velocities and accelerations of the limbs. Dyna also models how deformations vary with a person's body mass index (BMI), producing different deformations for people with different shapes. Dyna realistically represents the dynamics of soft tissue for previously unseen subjects and motions. We provide tools for animators to modify the deformations and apply them to new stylized characters.
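
Read literally, the dynamics model described above is a second-order autoregression on the soft-tissue subspace coefficients; a schematic form is given below (the exact inputs, parameterization, and BMI dependence are the paper's and are only indicated here, not reproduced).

```latex
\[
\delta_t \;\approx\; A\,\delta_{t-1} \;+\; B\,\delta_{t-2}
\;+\; f\!\left(\theta_t,\ \dot{x}_t,\ \ddot{x}_t,\ \omega_t,\ \dot{\omega}_t,\ \mathrm{BMI}\right),
\]
% \delta_t : low-dimensional soft-tissue deformation coefficients at frame t,
% \theta_t : body pose, \dot{x}_t, \ddot{x}_t : body velocity and acceleration,
% \omega_t, \dot{\omega}_t : limb angular velocities and accelerations.
```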

SESSION: Sampling & filtering

Adaptive rendering with linear predictions

We propose a new adaptive rendering algorithm that enhances the performance of Monte Carlo ray tracing by reducing the noise, i.e., variance, while preserving a variety of high-frequency edges in rendered images through a novel prediction-based reconstruction. To achieve our goal, we iteratively build multiple, but sparse linear models. Each linear model has its prediction window, where the linear model predicts the unknown ground truth image that can be generated with an infinite number of samples. Our method recursively estimates prediction errors introduced by linear predictions performed with different prediction windows, and selects an optimal prediction window minimizing the error for each linear model. Since each linear model predicts multiple pixels within its optimal prediction window, we can construct our linear models only at a sparse set of pixels in the image screen. Predicting multiple pixels with a single linear model poses technical challenges, related to deriving error analysis for regions rather than pixels, and has not been addressed in the field. We address these technical challenges, and our method with robust error analysis leads to a drastically reduced reconstruction time even with higher rendering quality, compared to state-of-the-art adaptive methods. We have demonstrated that our method outperforms previous methods numerically and visually with high-performance ray-tracing kernels such as OptiX and Embree.

A machine learning approach for filtering Monte Carlo noise

The most successful approaches for filtering Monte Carlo noise use feature-based filters (e.g., cross-bilateral and cross non-local means filters) that exploit additional scene features such as world positions and shading normals. However, their main challenge is finding the optimal weights for each feature in the filter to reduce noise but preserve scene detail. In this paper, we observe there is a complex relationship between the noisy scene data and the ideal filter parameters, and propose to learn this relationship using a nonlinear regression model. To do this, we use a multilayer perceptron neural network and combine it with a matching filter during both training and testing. To use our framework, we first train it in an offline process on a set of noisy images of scenes with a variety of distributed effects. Then at run-time, the trained network can be used to drive the filter parameters for new scenes to produce filtered images that approximate the ground truth. We demonstrate that our trained network can generate filtered images in only a few seconds that are superior to previous approaches on a wide range of distributed effects such as depth of field, motion blur, area lighting, glossy reflections, and global illumination.

Gradient-domain path tracing

We introduce gradient-domain rendering for Monte Carlo image synthesis. While previous gradient-domain Metropolis Light Transport sought to distribute more samples in areas of high gradients, we show, in contrast, that estimating image gradients is also possible using standard (non-Metropolis) Monte Carlo algorithms, and furthermore, that even without changing the sample distribution, this often leads to significant error reduction. This broadens the applicability of gradient rendering considerably. To gain insight into the conditions under which gradient-domain sampling is beneficial, we present a frequency analysis that compares Monte Carlo sampling of gradients followed by Poisson reconstruction to traditional Monte Carlo sampling. Finally, we describe Gradient-Domain Path Tracing (G-PT), a relatively simple modification of the standard path tracing algorithm that can yield far superior results.
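
A minimal NumPy sketch of the reconstruction idea, reduced to 1D for brevity: combine a noisy "primal" estimate with less noisy finite-difference gradient estimates by solving a screened Poisson least-squares problem. The signal, noise levels, and weighting below are arbitrary illustrations, not the paper's setup, which operates on images and also considers an L1 reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0, 1, n)
truth = np.sin(4 * np.pi * x) + x                     # hypothetical ground-truth signal

primal = truth + rng.normal(0, 0.30, n)               # noisy "primal" estimate
grads = np.diff(truth) + rng.normal(0, 0.05, n - 1)   # less noisy gradient estimates

# Minimize  alpha * ||I - primal||^2 + ||D I - grads||^2  (screened Poisson, 1D).
alpha = 0.05
D = np.zeros((n - 1, n))
D[np.arange(n - 1), np.arange(n - 1)] = -1.0
D[np.arange(n - 1), np.arange(1, n)] = 1.0

A = np.vstack([np.sqrt(alpha) * np.eye(n), D])
b = np.concatenate([np.sqrt(alpha) * primal, grads])
recon, *_ = np.linalg.lstsq(A, b, rcond=None)

print("primal MSE:", np.mean((primal - truth) ** 2))
print("recon  MSE:", np.mean((recon - truth) ** 2))
```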

Variance analysis for Monte Carlo integration

We propose a new spectral analysis of the variance in Monte Carlo integration, expressed in terms of the power spectra of the sampling pattern and the integrand involved. We build our framework in the Euclidean space using Fourier tools and on the sphere using spherical harmonics. We further provide a theoretical background that explains how our spherical framework can be extended to the hemispherical domain. We use our framework to estimate the variance convergence rate of different state-of-the-art sampling patterns in both the Euclidean and spherical domains, as the number of samples increases. Furthermore, we formulate design principles for constructing sampling methods that can be tailored according to available resources. We validate our theoretical framework by performing numerical integration over several integrands sampled using different sampling patterns.
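
A small numerical illustration in the spirit of this analysis: measure the empirical variance of a Monte Carlo estimator under pure random sampling versus jittered (stratified) sampling as the sample count grows. The integrand, sample counts, and trial counts below are arbitrary choices for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sin(2 * np.pi * x) ** 2 + x             # smooth test integrand on [0, 1]

def mc_estimate(samples):
    return f(samples).mean()

def variance(sampler, n, trials=2000):
    return np.var([mc_estimate(sampler(n)) for _ in range(trials)])

def random_sampler(n):
    return rng.random(n)

def jittered_sampler(n):
    return (np.arange(n) + rng.random(n)) / n          # one sample per stratum

for n in (16, 64, 256):
    print(f"N={n:4d}  random: {variance(random_sampler, n):.2e}"
          f"  jittered: {variance(jittered_sampler, n):.2e}")
```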

SESSION: Sketching & surfacing

Single-view hair modeling using a hairstyle database

Human hair presents highly convoluted structures and spans an extraordinarily wide range of hairstyles, making it essential for the digitization of compelling virtual avatars but also one of the most challenging elements to create. Cutting-edge hair modeling techniques typically rely on expensive capture devices and significant manual labor. We introduce a novel data-driven framework that can digitize complete and highly complex 3D hairstyles from a single-view photograph. We first construct a large database of manually crafted hair models from several online repositories. Given a reference photo of the target hairstyle and a few user strokes as guidance, we automatically search for multiple best matching examples from the database and combine them consistently into a single hairstyle to form the large-scale structure of the hair model. We then synthesize the final hair strands by jointly optimizing for the projected 2D similarity to the reference photo, the physical plausibility of each strand, as well as the local orientation coherency between neighboring strands. We demonstrate the effectiveness and robustness of our method on a variety of hairstyles and challenging images, and compare our system with state-of-the-art hair modeling algorithms.

SecondSkin: sketch-based construction of layered 3D models

SecondSkin is a sketch-based modeling system focused on the creation of structures comprised of layered, shape-interdependent 3D volumes. Our approach is built on three novel insights gleaned from an analysis of representative artist sketches. First, we observe that a closed loop of strokes typically defines surface patches that bound volumes in conjunction with underlying surfaces. Second, a significant majority of these strokes map to a small set of curve-types that describe the 3D geometric relationship between the stroke and underlying layer geometry. Third, we find that a few simple geometric features allow us to consistently classify 2D strokes to our proposed set of 3D curve-types. Our algorithm thus processes strokes as they are drawn, identifies their curve-type, and interprets them as 3D curves on and around underlying 3D geometry, using other connected 3D curves for context. Curve loops are automatically surfaced and turned into volumes bound to the underlying layer, creating additional curves and surfaces as necessary. Stroke classification by 15 viewers on a suite of ground truth sketches validates our curve-types and classification algorithm. We evaluate SecondSkin via a compelling gallery of layered 3D models that would be tedious to produce using current sketch modelers.

Flow aligned surfacing of curve networks

We propose a new approach for automatic surfacing of 3D curve networks, a long-standing computer graphics problem which has garnered new attention with the emergence of sketch-based modeling systems capable of producing such networks. Our approach is motivated by recent studies suggesting that artist-designed curve networks consist of descriptive curves that convey intrinsic shape properties, and are dominated by representative flow lines designed to convey the principal curvature lines on the surface. Studies indicate that viewers complete the intended surface shape by envisioning a surface whose curvature lines smoothly blend these flow-line curves. Following these observations we design a surfacing framework that automatically aligns the curvature lines of the constructed surface with the representative flow lines and smoothly interpolates these representative flow, or curvature, directions while minimizing undesired curvature variation. Starting with an initial triangle mesh of the network, we dynamically adapt the mesh to maximize the agreement between the principal curvature direction field on the surface and a smooth flow field suggested by the representative flow-line curves. Our main technical contribution is a framework for curvature-based surface modeling that facilitates the creation of surfaces with prescribed curvature characteristics. We validate our method via visual inspection, via comparison to artist-created and ground-truth surfaces, as well as comparison to prior art, and confirm that our results are well aligned with the computed flow fields and with viewer perception of the input networks.

Topology-constrained surface reconstruction from cross-sections

In this work we detail the first algorithm that provides topological control during surface reconstruction from an input set of planar cross-sections. Our work has broad application in a number of fields including surface modeling and biomedical image analysis, where surfaces of known topology must be recovered. Given curves on arbitrarily oriented cross-sections, our method produces a manifold interpolating surface that exactly matches a user-specified genus. The key insight behind our approach is to formulate the topological search as a divide-and-conquer optimization process which scores local sets of topologies and combines them to satisfy the global topology constraint. We further extend our method to allow image data to guide the topological search, achieving even better results than relying on the curves alone. By simultaneously satisfying both geometric and topological constraints, we are able to produce accurate reconstructions with fewer input cross-sections, hence reducing the manual time needed to extract the desired shape.

SESSION: Computational printing

MultiFab: a machine vision assisted platform for multi-material 3D printing

We have developed a multi-material 3D printing platform that is high-resolution, low-cost, and extensible. The key part of our platform is an integrated machine vision system. This system allows for self-calibration of printheads, 3D scanning, and a closed feedback loop to enable print corrections. The integration of machine vision with 3D printing simplifies the overall platform design and enables new applications such as 3D printing over auxiliary parts. Furthermore, our platform dramatically expands the range of parts that can be 3D printed by simultaneously supporting up to 10 different materials that can interact optically and mechanically. The platform achieves a resolution of at least 40 μm by utilizing piezoelectric inkjet printheads adapted for 3D printing. The hardware is low cost (less than $7,000) since it is built exclusively from off-the-shelf components. The architecture is extensible and modular -- adding, removing, and exchanging printing modules can be done quickly. We provide a detailed analysis of the system's performance. We also demonstrate a variety of fabricated multi-material objects.

Color imaging and pattern hiding on a metallic substrate

We present a new approach for the reproduction of color images on a metallic substrate that look bright and colorful under specular reflection observation conditions and also look good under non-specular reflection observation conditions. We fit amounts of both the white ink and the classical cyan, magenta and yellow inks according to a formula optimizing the reproduction of colors simultaneously under specular and non-specular observation conditions. In addition, we can hide patterns such as text or graphical symbols in one viewing mode, specular or non-specular, and reveal them in the other viewing mode. We rely on the trade-off between amounts of white diffuse ink and amounts of cyan, magenta and yellow inks to control lightness in specular and in non-specular observation conditions. Further effects are grayscale images that alternate from a first image to a second independent image when tilting the print from specular to non-specular reflection observation conditions. Applications comprise art and entertainment, publicity, posters, as well as document security.

Computational hydrographic printing

Hydrographic printing is a well-known technique in industry for transferring color inks on a thin film to the surface of a manufactured 3D object. It enables high-quality coloring of object surfaces and works with a wide range of materials, but suffers from the inability to accurately register color texture to complex surface geometries. As a result, ordinary users can hardly apply it to customized shapes and textures.

We present computational hydrographic printing, a new method that inherits the versatility of traditional hydrographic printing, while also enabling precise alignment of surface textures to possibly complex 3D surfaces. In particular, we propose the first computational model for simulating the hydrographic printing process. This simulation enables us to compute a color image to feed into our hydrographic system for precise texture registration. We then build a physical hydrographic system on off-the-shelf hardware, integrating virtual simulation, object calibration and controlled immersion. To overcome the difficulty of handling complex surfaces, we further extend our method to enable multiple immersions, each with a different object orientation, so the combined colors of individual immersions form a desired texture on the object surface. We validate the accuracy of our computational model through physical experiments, and demonstrate the efficacy and robustness of our system using a variety of objects with complex surface textures.

SESSION: Constraints, collisions & clarinets

Stable constrained dynamics

We present a unification of the two main approaches to simulate deformable solids, namely elasticity and constraints. Elasticity accurately handles soft to moderately stiff objects, but becomes numerically hard as stiffness increases. Constraints efficiently handle high stiffness, but when integrated in time they can suffer from instabilities in the nullspace directions, generating spurious transverse vibrations when pulling hard on thin inextensible objects or articulated rigid bodies. We show that geometric stiffness, the tensor encoding the change of force directions (as opposed to intensities) in response to a change of positions, is the missing piece between the two approaches. This previously neglected stiffness term is easy to implement and dramatically improves the stability of inextensible objects and articulated chains, without adding artificial bending forces. This allows time step increases up to several orders of magnitude using standard linear solvers.
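For intuition about the geometric stiffness term, a minimal sketch follows for a single distance constraint between two points; the multiplier lam (the constraint force magnitude) and the overall sign convention are assumptions, and this is not the paper's full assembly:

```python
import numpy as np

def geometric_stiffness_block(xi, xj, lam):
    """3x3 geometric stiffness block of a distance constraint between xi and xj.

    The constraint gradient is the unit direction u = (xi - xj) / |xi - xj|.
    Differentiating u with respect to xi gives (I - u u^T) / |xi - xj|, so the
    block scales with the constraint force magnitude lam; it captures how the
    force *direction* changes as the endpoints move, not how its intensity does.
    """
    d = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    l = np.linalg.norm(d)
    u = d / l
    return lam * (np.eye(3) - np.outer(u, u)) / l
```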

Air meshes for robust collision handling

We propose a new method for both collision detection and collision response geared towards handling complex deformable objects in close contact. Our method does not miss collision events between time steps and solves the challenging problem of untangling automatically and robustly. It is conceptually simple and straightforward to parallelize due to the regularity of the algorithm.

The main idea is to tessellate the air between objects once before the simulation and to impose one unilateral constraint per element that prevents its inversion during the simulation. If large relative rotations and translations are present in the simulation, an additional dynamic mesh optimization step is needed to prevent mesh locking. This step is fast in 2D and allows the simulation of arbitrary scenes. Because mesh optimization is expensive in 3D, however, the method is best suited for the subclass of 3D scenarios in which relative motions are limited. This subclass contains two important problems, namely the simulation of multi-layered clothing and tissue on animated characters.
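To make the unilateral constraint concrete, the sketch below evaluates it for a single 2D air-mesh triangle; the epsilon margin and the returned violation measure are illustrative, not the paper's exact formulation:

```python
def air_triangle_violation(a, b, c, eps=0.0):
    """Unilateral inversion constraint for one 2D 'air' triangle.

    a, b, c are 2D points (sequences of two floats). The constraint requires
    signed_area >= eps; a negative signed area means the element has inverted,
    i.e. the solid objects bounding this piece of air overlap.
    Returns the violation (0.0 when the constraint is satisfied).
    """
    signed_area = 0.5 * ((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
    return max(0.0, eps - signed_area)
```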

Aerophones in flatland: interactive wave simulation of wind instruments

We present the first real-time technique to synthesize full-bandwidth sounds for 2D virtual wind instruments. A novel interactive wave solver is proposed that synthesizes audio at 128,000 Hz on commodity graphics cards. Simulating the wave equation captures the resonant and radiative properties of the instrument body automatically. We show that a variety of existing non-linear excitation mechanisms such as reeds or lips can be successfully coupled to the instrument's 2D wave field. Virtual musical performances can be created by mapping user inputs to control geometric features of the instrument body, such as tone holes, and modifying parameters of the excitation model, such as blowing pressure. Field visualizations are also produced. Our technique promotes experimentation by providing instant audio-visual feedback from interactive virtual designs. To allow artifact-free audio despite dynamic geometric modification, we present a novel time-varying Perfectly Matched Layer formulation that yields smooth, natural-sounding transitions between notes. We find that visco-thermal wall losses are crucial for musical sound in 2D simulations and propose a practical approximation. Weak non-linearity at high amplitudes is incorporated to improve the sound quality of brass instruments.
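A minimal sketch of the underlying numerical idea, a 2D wave-equation time step, is given below; it uses a plain finite-difference Laplacian and fixed boundaries, whereas the paper runs on the GPU and uses a time-varying Perfectly Matched Layer at the domain edges:

```python
import numpy as np

def step_wave_2d(p_prev, p_curr, c, dx, dt):
    """One leapfrog step of the 2D scalar wave equation p_tt = c^2 * lap(p).

    p_prev, p_curr : 2D pressure fields at times t - dt and t.
    Returns the field at t + dt. Boundary cells are held fixed here
    (Dirichlet), which reflects waves; an absorbing layer avoids that.
    """
    lap = np.zeros_like(p_curr)
    lap[1:-1, 1:-1] = (p_curr[2:, 1:-1] + p_curr[:-2, 1:-1]
                       + p_curr[1:-1, 2:] + p_curr[1:-1, :-2]
                       - 4.0 * p_curr[1:-1, 1:-1]) / dx**2
    return 2.0 * p_curr - p_prev + (c * dt)**2 * lap
```

For such an explicit update the usual 2D CFL bound dt <= dx / (c * sqrt(2)) limits the time step, which is why audio-rate synthesis at 128,000 Hz demands correspondingly fine spatial resolution.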

SESSION: Printing elastics

Elastic textures for additive fabrication

We introduce elastic textures: a set of parametric, tileable, printable, cubic patterns achieving a broad range of isotropic elastic material properties: the softest pattern is over a thousand times softer than the stiffest, and the Poisson's ratios range from below zero to nearly 0.5. Using a combinatorial search over topologies followed by shape optimization, we explore a wide space of truss-like, symmetric 3D patterns to obtain a small family. This pattern family can be printed without internal support structure on a single-material 3D printer and can be used to fabricate objects with prescribed mechanical behavior. The family can be extended easily to create anisotropic patterns with target orthotropic properties. We demonstrate that our elastic textures are able to achieve a user-supplied varying material property distribution. We also present a material optimization algorithm to choose material properties at each point within an object to best fit a target deformation under a prescribed scenario. We show that, by fabricating these spatially varying materials with elastic textures, the desired behavior is achieved.
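For readers relating the stated targets (a wide range of stiffnesses and Poisson's ratios) to a simulation, the textbook conversion from Young's modulus and Poisson's ratio to the Lamé parameters of an isotropic material is shown below; this is standard elasticity, not code from the paper:

```python
def lame_parameters(E, nu):
    """Convert Young's modulus E and Poisson's ratio nu of an isotropic
    material into the Lame parameters (lambda, mu) used by most linear
    elasticity solvers. Valid for -1 < nu < 0.5.
    """
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu
```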

Microstructures to control elasticity in 3D printing

We propose a method for fabricating deformable objects with spatially varying elasticity using 3D printing. Using a single, relatively stiff printer material, our method designs an assembly of small-scale microstructures that have the effect of a softer material at the object scale, with properties depending on the microstructure used in each part of the object. We build on work in the area of metamaterials, using numerical optimization to design tiled microstructures with desired properties, but with the key difference that our method designs families of related structures that can be interpolated to smoothly vary the material properties over a wide range. To create an object with spatially varying elastic properties, we tile the object's interior with microstructures drawn from these families, generating a different microstructure for each cell using an efficient algorithm to select compatible structures for neighboring cells. We show results computed for both 2D and 3D objects, validating several 2D and 3D printed structures using standard material tests as well as demonstrating various example applications.

By-example synthesis of structurally sound patterns

Several techniques exist to automatically synthesize a 2D image resembling an input exemplar texture. Most of the approaches optimize a new image so that the color neighborhoods in the output closely match those in the input, across all scales. In this paper we revisit by-example texture synthesis in the context of additive manufacturing. Our goal is to generate not only colors, but also structure along output surfaces: given an exemplar indicating 'solid' and 'empty' pixels, we generate a similar pattern along the output surface. The core challenge is to guarantee that the pattern is not only fully connected, but also structurally sound.

To achieve this goal we propose a novel formulation for on-surface by-example texture synthesis that directly works in a voxel shell around the surface. It enables efficient local updates to the pattern, letting our structural optimizer perform changes that improve the overall rigidity of the pattern. We use this technique in an iterative scheme that jointly optimizes for appearance and structural soundness. We consider fabricability constraints and a user-provided description of a force profile that the object has to resist.

Our results fully exploit the capabilities of additive manufacturing by letting users design intricate structures along surfaces. The structures are complex, yet they resemble input exemplars, resulting in a modeling tool accessible to casual users.

Design and fabrication of flexible rod meshes

We present a computational tool for fabrication-oriented design of flexible rod meshes. Given a deformable surface and a set of deformed poses as input, our method automatically computes a printable rod mesh that, once manufactured, closely matches the input poses under the same boundary conditions. The core of our method is formed by an optimization scheme that adjusts the cross-sectional profiles of the rods and their rest centerline in order to best approximate the target deformations. This approach allows us to locally control the bending and stretching resistance of the surface with a single material, yielding high design flexibility and low fabrication cost.

SESSION: Perception & color

Palette-based photo recoloring

Image editing applications offer a wide array of tools for color manipulation. Some of these tools are easy to understand but offer a limited range of expressiveness. Other, more powerful tools are time-consuming for experts and inscrutable to novices. Researchers have described a variety of more sophisticated methods, but these are typically not interactive, and interactivity is crucial for creative exploration. This paper introduces a simple, intuitive and interactive tool that allows non-experts to recolor an image by editing a color palette. This system comprises several components: a GUI that is easy to learn and understand, an efficient algorithm for creating a color palette from an image, and a novel color transfer algorithm that recolors the image based on a user-modified palette. We evaluate our approach via a user study, showing that it is faster and easier to use than two alternatives, and allows untrained users to achieve results comparable to those of experts using professional software.
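As a rough conceptual stand-in for the palette-creation step (the published algorithm is more involved and works in a perceptual color space; the function below is a generic k-means over pixels and is only meant to convey the idea):

```python
import numpy as np

def extract_palette(image_rgb, k=5, iters=20, seed=0):
    """Toy k-means palette: cluster pixel colors and return k cluster means.

    image_rgb : (H, W, 3) float array with values in [0, 1].
    Returns a (k, 3) array of palette colors.
    """
    rng = np.random.default_rng(seed)
    pixels = image_rgb.reshape(-1, 3)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # assign every pixel to its nearest current palette color
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = pixels[labels == j]
            if len(members):                  # keep old center if a cluster empties
                centers[j] = members.mean(axis=0)
    return centers
```

A recoloring tool would then let the user edit the returned colors and transfer those edits back to every pixel, which is the role of the paper's color transfer algorithm.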

SESSION: Meshful thinking

3DFlow: continuous summarization of mesh editing workflows

Mesh editing software is improving, allowing skilled artists to create detailed meshes efficiently. For a variety of reasons, artists are interested in sharing not just their final mesh but also their whole workflow, though the common media for sharing have limitations. In this paper, we present 3DFlow, an algorithm that computes continuous summarizations of mesh editing workflows. 3DFlow takes as input a sequence of meshes and outputs a visualization of the workflow summarized at any level of detail. The output is enhanced by highlighting edited regions and, if provided, overlaying visual annotations to indicate the artist's work, e.g. summarizing brush strokes in sculpting. We tested 3DFlow with a large set of inputs using a variety of mesh editing techniques, from digital sculpting to low-poly modeling, and found that 3DFlow performed well for all of them. Furthermore, 3DFlow is independent of the modeling software used because it requires only mesh snapshots, and uses the additional information only for optional overlays. We release 3DFlow as open source for artists to showcase their work and release all our datasets so other researchers can improve upon our work.

Practical hex-mesh optimization via edge-cone rectification

The usability of hexahedral meshes depends on the degree to which the shape of their elements deviates from a perfect cube; a single concave or inverted element makes a mesh unusable. While a range of methods exist for discretizing 3D objects with an initial topologically suitable hex mesh, their output meshes frequently contain poorly shaped and even inverted elements, requiring a further quality optimization step. We introduce a novel framework for optimizing hex-mesh quality capable of generating inversion-free high-quality meshes from such poor initial inputs. We recast hex quality improvement as an optimization of the shape of overlapping cones, or unions, of tetrahedra surrounding every directed edge in the hex mesh, and show the two to be equivalent. We then formulate cone shape optimization as a sequence of convex quadratic optimization problems, where hex convexity is encoded via simple linear inequality constraints. As this solution space may be empty, we also present an alternate formulation that allows the solver to proceed even when constraints cannot be satisfied exactly. We iteratively improve mesh element quality by solving at each step a set of local, per-cone, convex constrained optimization problems, followed by a global energy minimization step which reconciles these local solutions. This latter method provides no theoretical guarantees on the solution but produces inversion-free, high-quality meshes in practice. We demonstrate the robustness of our framework by optimizing numerous poor-quality input meshes generated using a variety of initial meshing methods and producing high-quality inversion-free meshes in each case. We further validate our algorithm by comparing it against previous work, and demonstrate a significant improvement in both worst and average element quality.
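To see why convexity can be encoded with simple linear inequalities, note that the signed volume of a tetrahedron from an edge cone is linear in any one vertex once the other three are held fixed; the sketch below (illustrative only, not the paper's solver) returns that per-tetrahedron constraint:

```python
import numpy as np

def tet_positivity_constraint(p0, p1, p2, p3_free, eps=1e-8):
    """Linear inequality keeping one tetrahedron positively oriented when
    only p3_free may move and p0, p1, p2 are fixed (all are 3D numpy vectors).

    signed volume = dot(cross(p1 - p0, p2 - p0), p3 - p0) / 6 is linear in p3,
    so 'volume >= eps' becomes n . p3 >= b. Returns (n, b) together with
    whether the current p3_free already satisfies the constraint.
    """
    n = np.cross(p1 - p0, p2 - p0)
    b = 6.0 * eps + float(np.dot(n, p0))
    return n, b, bool(np.dot(n, p3_free) >= b)
```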

Hexahedral mesh re-parameterization from aligned base-complex

Recently, generating a high-quality all-hex mesh of a given volume has gained much attention. However, little, if any, effort has been put into optimizing the global structure of a hex-mesh, which is as important as local element quality in determining the performance and accuracy of subsequent computations. In this paper, we present the first complete pipeline to optimize the global structure of a hex-mesh. Specifically, we first extract the base-complex of a hex-mesh and study the misalignments among its singularities by adapting previously introduced hexahedral sheets to the base-complex. Second, we identify the base-complex sheets that contain misaligned singularities and can be validly removed. We then propose an effective algorithm to remove these sheets in order. Finally, we present a structure-aware optimization strategy to improve the geometric quality of the resulting hex-mesh after fixing the misalignments. Our experimental results demonstrate that our pipeline can significantly reduce the number of components of a variety of hex-meshes generated by state-of-the-art methods, while maintaining high geometric quality.

Dyadic T-mesh subdivision

Meshes with T-joints (T-meshes) and related high-order surfaces have many advantages in situations where flexible local refinement is needed. At the same time, designing subdivision rules and bases for T-meshes is much more difficult, and fewer options are available. For common geometric modeling tasks it is desirable to retain the simplicity and flexibility of commonly used subdivision surfaces, and extend them to handle T-meshes.

We propose a subdivision scheme extending Catmull-Clark and NURSS to a special class of quad T-meshes, dyadic T-meshes, which have no more than one T-joint per edge. Our scheme is based on a factorization with the same structure as Catmull-Clark subdivision. On regular T-meshes it is a refinement scheme for a subset of standard T-splines. While we use more variations of subdivision masks compared to Catmull-Clark and NURSS, the minimal size of the stencil is maintained, and all variations in formulas are due to simple changes in coefficients.
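For reference, the standard Catmull-Clark refinement points that the dyadic scheme reduces to on regular quad regions are computed as follows; the packaging of the inputs is an illustrative assumption:

```python
import numpy as np

def catmull_clark_points(face_verts, edge_endpoints, edge_face_pts,
                         vertex, adj_face_pts, adj_edge_midpts):
    """Standard Catmull-Clark refinement points.

    face point   : centroid of the face's vertices
    edge point   : average of the two edge endpoints and the two adjacent face points
    vertex point : (F + 2R + (n - 3)P) / n for an old vertex P of valence n,
                   where F averages the adjacent face points and R the
                   midpoints of the incident edges
    All inputs are arrays of 3D points (a single 3D point for `vertex`).
    """
    face_pt = np.mean(face_verts, axis=0)
    edge_pt = np.mean(np.vstack([edge_endpoints, edge_face_pts]), axis=0)
    n = len(adj_face_pts)
    F = np.mean(adj_face_pts, axis=0)
    R = np.mean(adj_edge_midpts, axis=0)
    vert_pt = (F + 2.0 * R + (n - 3.0) * np.asarray(vertex, dtype=float)) / n
    return face_pt, edge_pt, vert_pt
```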

SESSION: Scalable graphics

Lillicon: using transient widgets to create scale variations of icons

Good icons are legible, and legible icons are scale-dependent. Experienced icon designers use a set of common strategies to create legible scale variations of icons, but executing those strategies with current tools can be challenging. In part, this is because many apparent objects, like hairlines formed by negative space, are not explicitly represented as objects in vector drawings. We present transient widgets as a mechanism for selecting and manipulating apparent objects that is independent of the underlying drawing representation. We implement transient widgets using a constraint-based editing framework; demonstrate their utility for performing the kinds of edits most common when producing scale variations of icons; and report qualitative feedback on the system from professional icon designers.

Vector graphics animation with time-varying topology

We introduce the Vector Animation Complex (VAC), a novel data structure for vector graphics animation, designed to support the modeling of time-continuous topological events. This allows features of a connected drawing to merge, split, appear, or disappear at desired times via keyframes that introduce the desired topological change. Because the resulting space-time complex directly captures the time-varying topological structure, features are readily edited in both space and time in a way that reflects the intent of the drawing. A formal description of the data structure is provided, along with topological and geometric invariants. We illustrate our modeling paradigm with experimental results on various examples.

Accelerating vector graphics rendering using the graphics hardware pipeline

We describe our successful initiative to accelerate Adobe Illustrator with the graphics hardware pipeline of modern GPUs. Relying on OpenGL 4.4 plus recent OpenGL extensions for advanced blend modes and first-class GPU-accelerated path rendering, we accelerate the Adobe Graphics Model (AGM) layer responsible for rendering sophisticated Illustrator scenes. Illustrator documents render in either an RGB or CMYK color mode. While GPUs are designed and optimized for RGB rendering, we orchestrate OpenGL rendering of vector content in the proper CMYK color space and accommodate the 5+ color components required. We support both non-isolated and isolated transparency groups, knockout, patterns, and arbitrary path clipping. We harness GPU tessellation to shade paths smoothly with gradient meshes. We do all this and render complex Illustrator scenes 2 to 6x faster than CPU rendering at Full HD resolutions, and 5 to 16x faster at Ultra HD resolutions.

Piko: a framework for authoring programmable graphics pipelines

We present Piko, a framework for designing, optimizing, and retargeting implementations of graphics pipelines on multiple architectures. Piko programmers express a graphics pipeline by organizing the computation within each stage into spatial bins and specifying a scheduling preference for these bins. Our compiler, Pikoc, compiles this input into an optimized implementation targeted to a massively-parallel GPU or a multicore CPU.

Piko manages work granularity in a programmable and flexible manner, allowing programmers to build load-balanced parallel pipeline implementations, to exploit spatial and producer-consumer locality in a pipeline implementation, and to explore tradeoffs between these considerations. We demonstrate that Piko can implement a wide range of pipelines, including rasterization, Reyes, ray tracing, rasterization/ray tracing hybrid, and deferred rendering. Piko allows us to implement efficient graphics pipelines with relative ease and to quickly explore design alternatives by modifying the spatial binning configurations and scheduling preferences for individual stages, all while delivering real-time performance that is within a factor of six of state-of-the-art rendering systems.

SESSION: Simulating with surfaces

Fast grid-free surface tracking

We present a novel explicit surface tracking method. Its main advantage over existing approaches is that it is both completely grid-free and fast, which makes it ideal for use in large unbounded domains. A further advantage is that its running time is less sensitive to temporal variations of the input mesh than that of existing approaches. In terms of performance, the method provides a good trade-off point between speed and quality. The main idea behind our approach to handling topological changes is to delete all overlapping triangles and to fill or join the resulting holes in a robust and efficient way while guaranteeing that the output mesh is both manifold and without boundary. We demonstrate the flexibility, speed and quality of our method in various applications such as Eulerian and Lagrangian liquid simulations and the simulation of solids under large plastic deformations.

Double bubbles sans toil and trouble: discrete circulation-preserving vortex sheets for soap films and foams

Simulating the delightful dynamics of soap films, bubbles, and foams has traditionally required the use of a fully three-dimensional many-phase Navier-Stokes solver, even though their visual appearance is completely dominated by the thin liquid surface. We depart from earlier work on soap bubbles and foams by noting that their dynamics are naturally described by a Lagrangian vortex sheet model in which circulation is the primary variable. This leads us to derive a novel circulation-preserving surface-only discretization of foam dynamics driven by surface tension on a non-manifold triangle mesh. We represent the surface using a mesh-based multimaterial surface tracker which supports complex bubble topology changes, and evolve the surface according to the ambient air flow induced by a scalar circulation field stored on the mesh. Surface tension forces give rise to a simple update rule for circulation, even at non-manifold Plateau borders, based on a discrete measure of signed scalar mean curvature. We further incorporate vertex constraints to enable the interaction of soap films with wires. The result is a method that is at once simple, robust, and efficient, yet able to capture an array of soap-film behaviors, including foam rearrangement, catenoid collapse, blowing bubbles, and double bubbles being pulled apart.
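Schematically, the circulation update described above takes a very simple per-vertex form; the sketch below assumes a precomputed discrete signed mean curvature per vertex and omits the paper's specific discrete operator and its treatment of non-manifold Plateau borders:

```python
def update_circulation(circulation, signed_mean_curvature, sigma, dt):
    """Advance per-vertex circulation under surface tension (schematic form):
    each vertex's circulation changes in proportion to a discrete signed
    scalar mean curvature, scaled by the surface-tension coefficient sigma.
    """
    return [gamma + dt * sigma * h
            for gamma, h in zip(circulation, signed_mean_curvature)]
```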

Simulating rigid body fracture with surface meshes

We present a new brittle fracture simulation method based on a boundary integral formulation of elasticity and recent explicit surface mesh evolution algorithms. Unlike prior physically-based simulations in graphics, this avoids the need for volumetric sampling and calculations, which aren't reflected in the rendered output. We represent each quasi-rigid body by a closed triangle mesh of its boundary, on which we solve quasi-static linear elasticity via boundary integrals in response to boundary conditions and loads such as impact forces and gravity. A fracture condition based on maximum tensile stress is subsequently evaluated at mesh vertices, while crack initiation and propagation are formulated as an interface tracking procedure in material space. Existing explicit mesh tracking methods are modified to support evolving cracks directly in the triangle mesh representation, giving highly detailed fractures with sharp features, independent of any volumetric sampling (unlike tetrahedral mesh or level set approaches); the triangle mesh representation also allows simple integration into rigid body engines. We also give details on our well-conditioned integral equation treatment solved with a kernel-independent Fast Multipole Method for linear time summation. Various brittle fracture scenarios demonstrate the efficacy and robustness of our new method.
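The maximum-tensile-stress test mentioned above amounts to an eigen-decomposition of the local stress tensor; a minimal sketch follows, with the threshold handling and return values as illustrative choices:

```python
import numpy as np

def exceeds_tensile_strength(stress, strength):
    """Maximum-tensile-stress fracture test at one surface point.

    stress   : 3x3 symmetric Cauchy stress tensor.
    strength : scalar tensile strength threshold.
    The largest principal stress (largest eigenvalue) is compared to the
    threshold; the corresponding eigenvector gives the candidate crack-plane
    normal.
    """
    eigvals, eigvecs = np.linalg.eigh(stress)   # eigenvalues in ascending order
    sigma_max, normal = eigvals[-1], eigvecs[:, -1]
    return sigma_max > strength, float(sigma_max), normal
```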

High-resolution brittle fracture simulation with boundary elements

We present a method for simulating brittle fracture under the assumptions of quasi-static linear elastic fracture mechanics (LEFM). Using the boundary element method (BEM) and Lagrangian crack-fronts, we produce highly detailed fracture surfaces. The computational cost of the BEM is alleviated by using a low-resolution mesh and interpolating the resulting stress intensity factors when propagating the high-resolution crack-front.

Our system produces physics-based fracture surfaces with high spatial and temporal resolution, taking spatial variation of material toughness and/or strength into account. It also allows for crack initiation to be handled separately from crack propagation, which is not only more reasonable from a physics perspective, but can also be used to control the simulation.

Separating the resolution of the crack-front from the resolution of the computational mesh increases the efficiency and therefore the amount of visual detail on the resulting fracture surfaces. The BEM also allows us to re-use previously computed blocks of the system matrix.
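As a schematic of the coarse-to-fine idea, one can interpolate stress intensity factors sampled at a few coarse positions along the front onto a dense crack front and advance only the vertices that exceed the toughness threshold; the arc-length parameterization, fixed step length, and given propagation directions below are simplifying assumptions rather than the paper's propagation law:

```python
import numpy as np

def propagate_crack_front(front_pts, coarse_s, coarse_K, K_c, step, directions):
    """Schematic crack-front advance.

    front_pts  : (N, 3) high-resolution crack-front vertex positions.
    coarse_s   : increasing arc-length positions of the coarse BEM samples.
    coarse_K   : stress intensity factors at those samples.
    directions : (N, 3) unit propagation directions (assumed given here).
    Vertices whose interpolated K stays below the toughness K_c do not move.
    """
    s = np.linspace(coarse_s[0], coarse_s[-1], len(front_pts))
    K = np.interp(s, coarse_s, coarse_K)        # per-vertex interpolated K
    advance = (K >= K_c).astype(float)[:, None]
    return front_pts + step * advance * directions
```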

SESSION: Lightfields

Improving light field camera sample design with irregularity and aberration

Conventional camera designs usually shun sample irregularities and lens aberrations. We demonstrate that such irregularities and aberrations, when properly applied, can improve the quality and usability of light field cameras. Examples include spherical aberrations for the mainlens, and misaligned sampling patterns for the microlens and photosensor elements. These observations are a natural consequence of a key difference between conventional and light field cameras: optimizing for a single captured 2D image versus a range of reprojected 2D images from a captured 4D light field. We propose designs in mainlens aberrations and microlens/photosensor sample patterns, and evaluate them through simulated measurements and captured results with our hardware prototype.
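One of the "reprojected 2D images" such a sample design is judged by is a synthetically refocused view. A basic shift-and-add refocusing sketch over sub-aperture images follows; the (U, V, H, W) layout, integer pixel shifts, and uniform weighting are simplifying assumptions:

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add synthetic refocusing of a 4D light field.

    light_field : array of shape (U, V, H, W), one sub-aperture image per (u, v).
    alpha       : refocus parameter; each sub-aperture image is shifted in
                  proportion to its offset from the aperture center, then all
                  images are averaged.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```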