Illumination & Textures
17 August 2001 by Forrester Cole
The Illumination
& Textures session brought together two papers on techniques
for rendering specific objects with two more general papers on improvements
to texture mapping methods. The first illumination paper was on
the realistic rendering of knitwear, such as wool. The second was
a collection of research on methods for modeling a realistic night
sky. The first texture mapping paper was on a general improvement
to texturing progressive meshes, while the second was a specific
technique for applying texture maps constrained to features of a
model.
Photo-Realistic Rendering of Knitwear
Baining Guo showed a new technique for rendering wool and other
loosely woven cloth. The work was done by Ying-Qing Xu, Yanyun Chen,
Stephen Lin, Hua Zhong, Enhua Wu, Guo, and Heung-Yueng Shum, all
of Microsoft Research China.
A central problem
with rendering knitwear is that it is made up of many tiny
fibers. The fibers are loosely woven, so each one is individually
visible and contributes to the look of the knit. Rather than
model each fiber separately, the authors abstract the form
of a string of yarn into a two-dimensional lumislice. A lumislice
represents the light reflected by a cross-section of the yarn for a
given light direction; computing it per light direction
captures the fine shadowing cast by the individual fibers. To
reconstruct the yarn, the lumislice is extruded along the yarn's
axis, rotated along the way to simulate the lie of the yarn. The
high-level shadowing cast by whole yarn threads is then added
into the final image.
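As a rough illustration of the reconstruction step, the sketch below
sweeps a precomputed lumislice along a yarn path, rotating it slice by
slice. The function name, data layout, and the use of scipy's image
rotation are all my own illustrative choices; the paper's actual
machinery is not reproduced here.

    import numpy as np
    from scipy.ndimage import rotate

    def sweep_lumislice(lumislice, curve_points, twist_per_step):
        """Sweep a precomputed lumislice along a yarn path (illustrative).

        lumislice:      (H, W) array of radiance values for one yarn
                        cross-section under a fixed light direction.
        curve_points:   (N, 3) points along the yarn's axis.
        twist_per_step: rotation in radians between successive slices,
                        simulating the lie of the yarn.
        Returns (center, slice) pairs; a real renderer would composite
        these into the image, then add the yarn-level shadowing on top.
        """
        swept = []
        for i, center in enumerate(curve_points):
            angle_deg = np.degrees(i * twist_per_step)
            rotated = rotate(lumislice, angle_deg, reshape=False, order=1)
            swept.append((center, rotated))
        return swept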
The results of
the method are convincing. The tiny detail on the knitwear is apparent,
and the overall appearance is good, if not quite as photo-realistic
as advertised.
A Physically-Based Night Sky Model
This paper had only one fewer author than the previous one. Henrik
Wann Jensen of Stanford presented the work, which he wrote with
Michael M. Stark, Simon Premoze, and Peter Shirley of the University
of Utah, and Fredo Durand and Julie Dorsey of MIT. The motivation
for their work was to create an easy-to-use model that captured
the beauty and complexity of the night sky. The paper was more a
collection of techniques than a unified model, but the results were
indeed pretty.
Jensen stressed
the importance of making a model correct in its physical units.
The amount of light in different parts of the night sky varies over
several orders of magnitude. If the scene renders too dim, you cannot
simply double the amount of light and expect good results: once the
output is clamped to a display's range, bright regions clip while dim
ones stay invisible, so the relative magnitudes are not preserved.
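A toy numeric example, with made-up luminance values, makes the point
concrete:

    import numpy as np

    # Hypothetical relative luminances: the Moon, the Milky Way's glow,
    # and a dark patch of sky. Only the orders-of-magnitude spread matters.
    luminance = np.array([10.0, 0.01, 0.0001])

    def to_display(lum, gain):
        """Naive brightening: scale, then clamp to the display's [0, 1] range."""
        return np.clip(gain * lum, 0.0, 1.0)

    print(to_display(luminance, 1.0))    # Moon clips; other ratios intact
    print(to_display(luminance, 100.0))  # Moon and Milky Way now both 1.0:
                                         # the relative magnitudes are gone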
The authors'
model attempts to faithfully recreate the Moon, atmospheric scattering
due to dust, individual stars, and the glow of the Milky Way. Besides
creating a large-scale physical model, the authors also model the
effects of human night vision. One dramatic phenomenon is the shift
of the observed spectrum towards blue. This is an empirically verified
effect, and artists have long used palette shifts to represent night
and low-light scenes. Without a blue shift, a simply darkened night
scene can look like an underexposed photograph.
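A minimal sketch of such a blue shift, assuming a simple blend toward a
bluish scotopic tint (the tint and blend amount are made-up values, not
the paper's calibrated model):

    import numpy as np

    def blue_shift(rgb, tint=(0.25, 0.25, 0.75), amount=0.8):
        """Crude night-vision blue shift (illustrative; not the paper's model).

        rgb:    (..., 3) linear RGB image.
        tint:   bluish hue that low-light vision drifts toward (made up).
        amount: 0 = no shift (photopic), 1 = full shift (scotopic).
        Blends each pixel toward its own luminance times the tint, so a
        darkened scene reads as night rather than as underexposure.
        """
        lum = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance
        return (1.0 - amount) * rgb + amount * lum[..., None] * np.array(tint)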
The final results
of the work appeared believable. However, some color quantization
problems (perhaps to be expected, given the almost entirely dark
blue palette) tarnished the images.
Texture Mapping Progressive Meshes
The third paper took a dramatic turn from the previous subject to
a general method for improving texture mapping. A progressive mesh
is a mesh representation well suited to compression and level of
detail adjustment. Conventional methods of texturing a progressive
mesh can introduce excessive stretching artifacts. Pedro Sander
of Harvard presented his work on addressing the problem. The paper
was co-authored by Steven Gortler of Harvard and John Snyder and
Hugues Hoppe, both of Microsoft Research.
The authors introduced a metric that measures the stretching caused
by a given texture parameterization, along with an algorithm that
computes a parameterization minimizing the stretch. The result is
textured models whose surface detail is spread more consistently
than on unadjusted models. The demonstrations that Sander presented
were subtle, but showed improved detail in some difficult areas of
high curvature.
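Stretch metrics of this kind are typically built from the singular
values of each triangle's parameterization Jacobian. The sketch below
computes per-triangle average and worst-case stretch in what I believe
is the standard formulation; the paper's exact definitions may differ.

    import numpy as np

    def triangle_stretch(q, uv):
        """Average (L2) and worst-case stretch of one textured triangle.

        q:  (3, 3) array of 3D vertex positions.
        uv: (3, 2) array of the vertices' texture coordinates
            (assumed non-degenerate in texture space).
        Measures how the parameterization stretches texture space onto
        the surface via the singular values of the affine map's Jacobian.
        """
        (s1, t1), (s2, t2), (s3, t3) = uv
        area2 = (s2 - s1) * (t3 - t1) - (s3 - s1) * (t2 - t1)  # 2 * UV area
        # Partial derivatives of surface position w.r.t. texture coords.
        Ss = (q[0] * (t2 - t3) + q[1] * (t3 - t1) + q[2] * (t1 - t2)) / area2
        St = (q[0] * (s3 - s2) + q[1] * (s1 - s3) + q[2] * (s2 - s1)) / area2
        a, b, c = Ss @ Ss, Ss @ St, St @ St
        root = np.sqrt(max((a - c) ** 2 + 4.0 * b * b, 0.0))
        gamma_max = np.sqrt((a + c + root) / 2.0)  # worst-case stretch
        l2 = np.sqrt((a + c) / 2.0)                # average stretch
        return l2, gamma_max

An identity mapping (unit triangle in both spaces) yields a stretch of
1.0 by both measures; values above 1.0 flag triangles where texture
detail is being stretched thin.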
Constrained Texture Mapping for Polygonal Meshes
Bruno Levy of INRIA Loria finished the session with the second texture
mapping paper. His system is designed to allow a user to associate
details of a texture map with details of a model. One of Levy's
examples for the concept was a cow's head textured with the
face of a tiger. The basic approach is to warp the texture in a
separate step prior to applying the texture to the model. The user
sets the parameters for the pre-warp step by placing feature points
on the texture and on the model. One audience member complained
that the work seemed to duplicate features already available in
commercial packages such as Maya. Levy responded that while it is
true that his method and previously available commercial methods
perform similar tasks, his optimizations make the concept more intuitive
and allow a user to perform a texturing task that might take half
an hour in only a couple of minutes.
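As a sketch of how a feature-driven pre-warp might work (this is a
generic thin-plate radial-basis warp, not Levy's actual optimization),
matched points on the texture are pulled exactly onto their targets and
the rest of texture space follows smoothly:

    import numpy as np

    def rbf_warp(src_pts, dst_pts, query):
        """Warp 2D texture coordinates so src feature points land on dst.

        src_pts, dst_pts: (N, 2) matched feature points in texture space.
        query:            (M, 2) texture coordinates to warp.
        Illustrative only: Levy's system computes its warp differently.
        """
        def phi(r):
            # Thin-plate kernel r^2 log r, with phi(0) defined as 0.
            with np.errstate(divide="ignore", invalid="ignore"):
                out = r * r * np.log(r)
            return np.nan_to_num(out)

        n = len(src_pts)
        d = np.linalg.norm(src_pts[:, None] - src_pts[None, :], axis=-1)
        # Solve for weights so that warp(src_pts) == dst_pts exactly.
        # (Affine term omitted for brevity; small ridge keeps it solvable.)
        w = np.linalg.solve(phi(d) + 1e-9 * np.eye(n), dst_pts - src_pts)
        dq = np.linalg.norm(query[:, None] - src_pts[None, :], axis=-1)
        return query + phi(dq) @ w

With a handful of correspondences (eye to eye, nose to muzzle), the
whole texture deforms to honor them, which matches the workflow Levy
described even if his solver's details differ.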