eTech & Art & More

Point Based Rendering and Shadows

16 August 2001
by Forrester Cole

Point based rendering is an alternative to conventional polygon based rendering. Instead of a set of triangles, the object is represented by a cloud of points, each with a surface normal. Point based rendering is especially useful for highly complex models, which would otherwise require a huge number of triangles. It is also a natural fit for data captured by laser range scanners and similar devices, whose native output is a point cloud. The session included four papers: two directly related to rendering with points, and two that use point based techniques to enhance polygon rendering.
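
For concreteness, the sketch below shows roughly what a single point sample (often called a surfel) might look like as a data structure. The record layout and the extra radius and color fields are illustrative assumptions, not details taken from any of the session papers.

from dataclasses import dataclass
from typing import Tuple

# A minimal point-sample ("surfel") record: position and surface normal come
# straight from the description above; radius and color are common additions
# in practice and are assumptions here.
@dataclass
class Surfel:
    position: Tuple[float, float, float]
    normal: Tuple[float, float, float]
    radius: float = 0.0                                   # splat extent (assumed)
    color: Tuple[float, float, float] = (1.0, 1.0, 1.0)   # RGB (assumed)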

The Randomized z-Buffer Algorithm
The first paper presented a method of using point sampling to greatly improve the efficiency of a conventional z-buffer, making it possible to render extremely complex polygonal scenes at interactive rates. The paper was the work of Michael Wand, Ingmar Peter, and Wolfgang Strasser of the Universitaet Tuebingen, and Matthias Fischer and Friedhelm Meyer auf der Heide of the Universitaet Paderborn. Wand gave the presentation.

Rather than rasterizing every triangle, the algorithm draws a random set of sample points, chosen in proportion to each triangle's projected screen-space area, into the z-buffer. The authors were able to reduce the asymptotic time needed to render extremely large polygon scenes from linear to logarithmic in the number of triangles. In a demonstration, Wand was able to render a scene containing 10^14 triangles in only 40ms. Since the algorithm is randomized, however, sparkling and other temporal artifacts can be a problem. The authors tried to address this by averaging images over several frames, but some demos still showed significant sparkling in the distance.
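
The core sampling step is easy to sketch. The fragment below is a simplified illustration, not the authors' implementation: it picks triangles with probability proportional to projected screen-space area using a flat prefix-sum table, whereas the actual algorithm obtains its logarithmic running time by estimating those areas over a spatial hierarchy rather than touching every triangle. The function name and arguments are invented for the example.

import bisect
import random

def sample_surface_points(triangles, projected_areas, num_samples):
    """triangles: (a, b, c) vertex triples; projected_areas: estimated
    screen-space area per triangle. Returns num_samples surface points."""
    # Cumulative distribution over projected areas.
    cdf, total = [], 0.0
    for area in projected_areas:
        total += area
        cdf.append(total)

    points = []
    for _ in range(num_samples):
        # Pick a triangle with probability proportional to its projected area.
        idx = bisect.bisect_left(cdf, random.uniform(0.0, total))
        a, b, c = triangles[idx]
        # Uniform random point inside the triangle via barycentric coordinates.
        u, v = random.random(), random.random()
        if u + v > 1.0:
            u, v = 1.0 - u, 1.0 - v
        p = tuple(a[i] + u * (b[i] - a[i]) + v * (c[i] - a[i]) for i in range(3))
        points.append(p)
    return points  # each point is then projected and z-buffered as usual

Because each frame draws a fresh random sample, consecutive frames disagree slightly, which is exactly the source of the sparkling mentioned above.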

Surface Splatting
Matthias Zwicker of Eidgenoessische Technische Hochschule (ETH) Zuerich presented the next paper, which directly tackled point based rendering. The paper was co-authored by Markus Gross of ETH Zuerich, and Hanspeter Pfister and Jeroen van Baar of Mitsubishi Electric Research Laboratory. The algorithm addresses aliasing, a central problem of point based rendering. To make a good image out of points, a reconstruction filter must be used to blend over the holes between points. A naïve filter creates severe aliasing on distant points, where many points may fall within a single pixel.

The solution presented by the authors is a hybrid filter, combining a reconstruction kernel with a screen-space low-pass filter, that removes aliasing on distant points while blending nearby points without losing detail.
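
A toy version of that filter, under the common assumption that both the reconstruction kernel and the low-pass filter are Gaussians, is sketched below; the function and parameter names are invented for the example. Convolving two Gaussians simply adds their covariance matrices, so the combined kernel can never become narrower than roughly a pixel.

import numpy as np

def resampling_kernel_weight(pixel, center, J, V_r, V_lowpass=np.eye(2)):
    """pixel, center: 2D screen positions; J: 2x2 Jacobian of the local
    object-to-screen mapping; V_r: 2x2 covariance of the reconstruction
    kernel in the surface's local parameterization."""
    # Warp the reconstruction kernel to screen space, then add the low-pass
    # filter's covariance (convolution of Gaussians = sum of covariances).
    V = J @ V_r @ J.T + V_lowpass
    d = np.asarray(pixel, float) - np.asarray(center, float)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(V)))
    return norm * np.exp(-0.5 * d @ np.linalg.inv(V) @ d)

For a distant point the warped reconstruction term shrinks toward zero and the low-pass term takes over, which suppresses the aliasing; for a nearby point the reconstruction term dominates and fine detail survives.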

Spectral Processing of Point-Sampled Geometry
The Fourier Transform is an indispensable tool for filtering and spectral analysis of regularly sampled signals, but it is tricky to apply to irregularly sampled points on a 3D surface. The third paper, authored by Mark Pauly and Markus Gross of ETH Zuerich, attempts to adapt the Fourier Transform to a point-sampled manifold. The goal of the work was to develop a method for spectral analysis that allows straightforward filtering while retaining some local control over where the filtering is applied. The Fourier Transform is a global operation, and does not provide for any breakdown of a signal into local pieces. Moreover, the Fast Fourier Transform algorithm requires a regularly sampled grid, which a point cloud is decidedly not.

The solution that the authors developed was to break the point cloud into a series of patches, each of which can be transformed and manipulated independently. Each patch is resampled onto a regular grid before transforming, so any filtering operation can then be done easily in the frequency domain. Another resampling step and a recombination of the patches yield the new, filtered point cloud. The scheme provides effective filtering together with good local control. An audience member asked whether breaking the point cloud into patches introduced aliasing; Pauly admitted that in theory it did, but that in practice the effect was not noticeable.
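
Assuming a patch has already been resampled onto a regular height-field grid over its base plane, the frequency-domain filtering step itself takes only a few lines. The sketch below (not the paper's code, with an invented function name) shows a simple low-pass filter and omits the resampling and patch-blending machinery that the paper describes.

import numpy as np

def lowpass_filter_patch(heights, cutoff):
    """heights: N x M array of displacements over the patch's base plane;
    cutoff: fraction of the Nyquist frequency to keep (0 < cutoff <= 1)."""
    spectrum = np.fft.fft2(heights)
    fy = np.fft.fftfreq(heights.shape[0])[:, None]   # cycles per sample
    fx = np.fft.fftfreq(heights.shape[1])[None, :]
    keep = np.sqrt(fx**2 + fy**2) <= cutoff * 0.5    # 0.5 = Nyquist frequency
    filtered = np.fft.ifft2(spectrum * keep).real
    return filtered  # then resample back onto the points and re-blend patches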

Adaptive Shadow Maps
The final paper was not directly related to point based rendering, but addressed the related problem of aliasing in shadow maps. The paper was presented by Randima Fernando, who authored it along with Sebastian Fernandez, Kavita Bala, and Donald P. Greenberg, all of Cornell University. Shadow maps are rendered at a fixed resolution, and they produce pronounced jagged artifacts when that resolution is too low. To reduce these artifacts, programmers are forced to spend large amounts of time tuning shadow map resolution, trading appearance against memory use. The new algorithm simulates a shadow map of extremely high resolution by adaptively increasing detail in the areas that need it most: the edges of shadows. The algorithm runs at interactive rates on a PC, and uses a fixed amount of memory no matter how close the camera zooms in on a shadow.
The method's tradeoff comes in two parts. First, the algorithm is much more complicated than conventional shadow mapping, and thus much slower; even with optimization, updating the hierarchical map takes time. Second, the method as presented cannot take advantage of the shadow mapping hardware currently available.
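
The refinement logic behind the fixed memory footprint can be caricatured as a quadtree of small shadow-map tiles: a cell is split only when eye-space pixels demand more resolution than it offers and the cell actually contains a shadow edge, while a fixed cell budget keeps total memory constant. The sketch below is only a structural illustration; the class, the render_tile callback, and the budget object are hypothetical stand-ins, not the authors' interface.

class ShadowCell:
    """One quadtree cell holding a small, fixed-size depth tile."""
    def __init__(self, depth_tile, level):
        self.depth_tile = depth_tile   # small depth buffer for this region
        self.level = level             # quadtree level (higher = finer)
        self.children = None

    def needs_refinement(self, required_level, contains_shadow_edge):
        # Refine only where the eye needs more resolution AND a shadow
        # boundary passes through the cell.
        return required_level > self.level and contains_shadow_edge

def refine(cell, required_level, contains_shadow_edge, render_tile, budget):
    """render_tile(level, quadrant) and budget.try_allocate(n) are
    hypothetical hooks for re-rendering the light's view of one quadrant
    and for enforcing the fixed memory limit."""
    if cell.children or not cell.needs_refinement(required_level,
                                                  contains_shadow_edge):
        return
    if budget.try_allocate(4):  # stay within the constant memory budget
        cell.children = [ShadowCell(render_tile(cell.level + 1, q),
                                    cell.level + 1)
                         for q in range(4)]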


