Computations Using Graphics Hardware
As the performance and programmability of graphics processing units
(or GPUs) increase, the GPU begins to appear more like a general-purpose
processor, or a coprocessor to the CPU, running in parallel
with it. Future generations of graphics hardware will likely make
the graphics pipeline—once a fixed-function series of steps
designed with the sole purpose of rendering triangles to the display—resemble
a general programmable stream processor even more closely. Consider all
the computational power a GPU provides. Rich in highly pipelined
computational units, the GPU provides an efficient streaming model
of computation. Furthermore, GPUs have very high memory bandwidth,
able to access multiple gigabytes of memory per second. On the
other hand, the nature of the graphics rendering pipeline means
that GPUs process data in relative isolation: typically a single
vertex enters the pipeline with its associated data and is processed
largely independently of the vertices that follow.
In addition, GPUs are largely hardwired to run a single highly specialized
pipeline as fast as possible; though parts of the pipeline can be
programmed, there is no sense of control flow as there is in a CPU.
The only way to simulate control flow, or decision-making, is to
make multiple rendering passes, where the current pass uses buffer
data (typically the stencil buffer) generated in a previous pass.
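This two-pass pattern can be sketched in plain NumPy as a CPU emulation. This is not actual graphics API code; the buffer sizes and the shape drawn in the first pass are purely illustrative:

```python
import numpy as np

# CPU emulation of two-pass "decision making" on hardware with no
# per-pixel branching. Pass 1 writes a stencil mask; pass 2 writes
# color only where the mask allows, replacing an if/else per pixel.
W, H = 8, 8
stencil = np.zeros((H, W), dtype=np.uint8)
color = np.zeros((H, W), dtype=np.uint8)

# Pass 1: "render" a shape that sets the stencil over the left half.
stencil[:, : W // 2] = 1

# Pass 2: the stencil test gates the write -- fragments land only
# where the previous pass set the stencil value.
color[stencil == 1] = 255
```

The stencil test thus acts as a per-pixel conditional evaluated by fixed-function hardware rather than by programmable control flow.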
In their course entitled “Interactive Geometric Computations
Using Graphics Hardware”, Dinesh Manocha and Ming C. Lin of the
University of North Carolina at Chapel Hill, Michael Doggett of
ATI Technologies Inc., Ned Greene and Mark Kilgard of NVIDIA
Corporation, and Shankar Krishnan of AT&T Labs presented their ideas
and contributions on the use of graphics hardware to perform interactive
geometric computations. The course aimed to demonstrate applications
in such areas as visibility, collision detection, shadow volumes,
and problems involving the use of generalized Voronoi diagrams.
Dinesh Manocha explained how graphics hardware could be used in
a conceptually simple way (that is also easy to implement) to calculate
an approximation of a generalized Voronoi diagram.
In brief, a Voronoi diagram consists of primitives and cell regions.
Primitives may be line segments, curve segments, or points (for
simplicity, only points will be considered here). Consider a set
of five point primitives, labeled A through E, on a plane. Now consider
a region around primitive A such that every point within that region
is closer to
primitive A than any of the other four primitives. This region forms
a Voronoi cell around primitive A. Now consider the region around
primitive B such that every point in that region is closer to primitive
B than any other primitive—and so on. Note that there will
be points on the plane that are as close to one primitive as to
another primitive (or more than one other primitive). These points
form the borders between the Voronoi cells. Voronoi diagrams have
a variety of applications including motion planning, medial axis
computation, and non-photorealistic rendering.
Note that because graphics hardware is involved—with its
inherent limitations on the accuracy of computations due to the
precision of the values supported in the graphics pipeline—only
approximations of the solutions to any problem can be found.
In the problem of generating a Voronoi diagram for a set of primitives,
the approximation comes from the fact that any image generated in
a GPU’s framebuffer is, by the very nature of rasterized graphics,
pixelated. The brute-force algorithm to find the Voronoi diagram
based on a set of primitives is simple but inefficient: for each
point (or pixel, in the graphics hardware approximation) in the
plane, determine the distance to each primitive and identify the
closest one. Then, color that pixel with a color that is unique
to its closest primitive, thereby forming Voronoi cells. Pixels
that are equidistant from two or more primitives may be colored black
to represent the boundary of the Voronoi cells.
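The brute-force approach can be sketched on the CPU with NumPy. The five point primitives and the grid size below are hypothetical; real hardware would evaluate the same per-pixel comparison during rasterization:

```python
import numpy as np

# Illustrative point primitives A..E on a 50x50 pixel grid.
primitives = np.array([[10, 12], [40, 8], [25, 40], [5, 45], [45, 45]],
                      dtype=float)
W = H = 50

ys, xs = np.mgrid[0:H, 0:W]
pixels = np.stack([xs, ys], axis=-1).astype(float)      # (H, W, 2)

# Distance from every pixel to every primitive: shape (H, W, 5).
d = np.linalg.norm(pixels[:, :, None, :] - primitives, axis=-1)

# Index of the closest primitive per pixel acts as its unique "color",
# partitioning the grid into Voronoi cells.
cells = d.argmin(axis=-1)
```

For a grid of P pixels and N primitives this costs O(P·N) distance evaluations, which is exactly the inefficiency the hardware-based methods below avoid.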
Dinesh Manocha and Ming Lin described an old yet efficient method
for approximating Voronoi diagrams that ports well to graphics hardware.
Notice that if the apex of a right circular cone is placed at
each point primitive in the plane, the orthographic projection
(a “bird’s-eye view”) of the cone intersections onto the plane
forms the boundaries of the Voronoi cells, and the projected cones
themselves form the interiors of the cells. Furthermore, if preserved,
the depth value at each pixel (the distance from the image plane
to the cone surface) is proportional to the distance from that
pixel to its closest primitive. The University of North
Carolina group’s new approach to the same problem involves
distance functions. Unlike the brute-force method, this approach
avoids per-pixel distance evaluation by coarsely point-sampling
the distance function in a region near the primitive, in order to
generate a polygonal mesh approximation that will render much faster
on the graphics hardware. Among the group’s most important
contributions are the specialized algorithms to efficiently construct
such distance meshes for various 2D primitives (recall that these
can include not only points but also line segments and curves).
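For the simplest case, a point primitive, the distance function z = sqrt(x² + y²) is itself a right circular cone, so its distance mesh is just a triangle fan around the apex. The sketch below illustrates that idea; the function name and the `radius`/`n_slices` parameters are illustrative, not taken from the course:

```python
import math

def cone_fan(apex, radius, n_slices):
    """Triangle-fan distance mesh for a 2D point primitive.

    Each vertex stores (x, y, depth), where depth is the vertex's
    distance to the primitive -- so rendering the fan with depth
    testing composites the distance functions in hardware.
    """
    ax, ay = apex
    verts = [(ax, ay, 0.0)]          # apex: zero distance at the primitive
    for i in range(n_slices + 1):    # rim of the cone, one ring of vertices
        t = 2.0 * math.pi * i / n_slices
        verts.append((ax + radius * math.cos(t),
                      ay + radius * math.sin(t),
                      radius))       # rim depth equals its true distance
    # Fan triangles as index triples into verts.
    tris = [(0, i, i + 1) for i in range(1, n_slices + 1)]
    return verts, tris

verts, tris = cone_fan((0.0, 0.0), 10.0, 16)
```

Coarser fans (fewer slices) trade mesh accuracy for rendering speed, which mirrors the accuracy/speed trade-off of the group's sampled distance meshes for the more general primitives.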
Other problems and their graphics hardware solutions presented
in the course included shadow volumes, digital geometry processing,
and visibility culling.