Images and Video
24 July 2002
WYSIWYG. What you see is what you get, right? Well, with the help
of this year's authors, photography may someday be that accurate.
It's a beautiful winter day. The sun is shining and you've managed
to avoid the hustle and bustle of the local shopping mall. Better
yet, it's snowing outside, and you have nowhere you need to be.
You look out into the backyard from your newly purchased farmhouse,
only to see acres of bleached grass, covered in inches of snow.
Immediately you reach for your camera, frame the perfect picture
(with your favorite pine tree in the foreground, and rolling hills
in the back), and snap... only to be utterly disappointed when the
film is developed.
Once again, either the snow is beautiful and the tree looks like
a washed out blob, or the tree looks immaculately real and the snow
looks like your dirtiest, most stained sweat socks.
OK, so you don't own a farmhouse, you live in Florida and you haven't
seen snow in years, but you understand where I'm going with this.
Thanks to the innovation of some of this year's SIGGRAPH authors,
there may soon be a way to make High Dynamic Range (HDR) photographs
of scenes like this look realistic on ordinary displays.
Raanan Fattal, Dani Lischinski and Michael Werman of the Hebrew
University of Jerusalem have developed a compression method that
enables HDR images to be rendered on Low Dynamic Range (LDR) devices,
fully described in their paper, "Gradient Domain High Dynamic Range
Compression."
On Tuesday, they offered an overview of their findings, stating
that their primary goal was to avoid overexposure without losing
detail or introducing artifacts. Their method draws on Tomasi and
Manduchi's ideas on bilateral filtering and unifies them with
anisotropic diffusion, yielding a more reliable estimate.
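The core idea named by the paper's title, compressing large luminance gradients while preserving small ones, can be sketched in one dimension. This is a toy, not the authors' implementation: the function name and parameter values are mine, and the real method works on a 2D log-luminance gradient field, reintegrating by solving a Poisson equation.

```python
import numpy as np

def compress_gradients_1d(log_lum, alpha=0.1, beta=0.85):
    """Attenuate large log-luminance gradients, then reintegrate (1D toy).

    Each gradient g is scaled by (alpha/|g|) * (|g|/alpha)**beta, which
    shrinks gradients larger than alpha (for beta < 1) while leaving the
    signal's overall shape intact.
    """
    g = np.diff(log_lum)                       # forward differences
    mag = np.abs(g) + 1e-8                     # avoid division by zero
    scale = (alpha / mag) * (mag / alpha) ** beta
    # In 1D, a cumulative sum reintegrates the modified gradient field
    # exactly; in 2D a Poisson solve plays this role.
    return np.concatenate(([log_lum[0]], log_lum[0] + np.cumsum(g * scale)))
```

On a signal with one large step, the step is compressed while the flat regions pass through unchanged, which is exactly the behavior that keeps detail visible on an LDR display.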
Two other sets of authors from the same session offered similar
results from different methods. Fredo Durand and Julie Dorsey of
the Massachusetts Institute of Technology's Laboratory for Computer
Science chose to split an image into a large-scale base layer and
a detail layer, reducing the contrast of only the base layer, and
to do so quickly. Theirs is a non-linear filtering approach that
does not blur across edges. It is explained in their paper, "Fast
Bilateral Filtering for the Display of High-Dynamic-Range Images."
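The edge-preserving filter at the center of this line of work can be sketched in one dimension: each sample is replaced by a weighted average of its neighbors, with weights that fall off in both space and intensity. The function name and parameter values below are illustrative, not taken from the paper, and this naive loop lacks the speedups that are the paper's contribution.

```python
import numpy as np

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=5):
    """Edge-preserving smoothing: weights fall off with both spatial
    distance and intensity difference, so averaging never reaches
    across a strong edge."""
    out = np.empty_like(signal, dtype=float)
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        spatial = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
        range_w = np.exp(-((signal[idx] - signal[i]) ** 2) / (2 * sigma_r ** 2))
        w = spatial * range_w
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out
```

Run on a noisy step function, the filter flattens the noise on either side of the step but leaves the step itself sharp, which is why it suits the base/detail decomposition described above.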
The University of Utah's Erik Reinhard, Michael Stark and Peter
Shirley, along with Cornell University's James Ferwerda, define
dynamic range as the ratio between the brightest and darkest regions
of a scene in which detail remains visible. Their paper, "Photographic
Tone Reproduction for Digital Images," goes on to explain their
belief that most HDR images are actually medium dynamic range (MDR)
images, based on a scale they have developed.
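The global form of a photographic tone-mapping operator along these lines can be sketched briefly. The function name and defaults are mine; the sketch scales scene luminance so its log-average maps to a photographic "middle grey" and then compresses with L / (1 + L). The full paper also adds a white-point term and a local dodge-and-burn variant, which this omits.

```python
import numpy as np

def reinhard_tonemap(lum, key=0.18):
    """Global photographic tone mapping sketch: normalize by the
    log-average luminance, scale to a middle-grey key value, then
    compress with L / (1 + L), mapping [0, inf) smoothly into [0, 1)."""
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))   # log-average luminance
    scaled = key * lum / log_avg
    return scaled / (1.0 + scaled)
```

Because the compression curve is monotonic and bounded, luminances spanning many orders of magnitude all land inside the displayable range without clipping, while their ordering is preserved.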
In the same papers session, "Images and Video," two other papers
were presented. The first, "Video Matting of Complex Scenes," was
written by the University of Washington's Yung-Yu Chuang, Aseem
Agarwala and Brian Curless; the University of Washington and Microsoft
Research's David Salesin; and Microsoft Research's Richard Szeliski.
Their inventive approach combines trimap interpolation, optical
flow and background reconstruction to produce a clean finished
product from a minimal set of keyframes.
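Underlying any matting system is the compositing equation C = alpha*F + (1 - alpha)*B. As a simplified sketch (the hard problem the paper tackles is estimating the foreground and background in complex moving scenes, which this assumes away), here is how the per-pixel alpha matte falls out when F and B are known:

```python
import numpy as np

def solve_alpha(c, f, b):
    """Invert the compositing equation C = alpha*F + (1-alpha)*B for
    alpha by least squares over the colour channels, assuming known
    foreground F and background B. A real matting system must first
    recover F and B, e.g. via trimaps and background reconstruction."""
    num = np.sum((c - b) * (f - b), axis=-1)
    den = np.sum((f - b) ** 2, axis=-1) + 1e-8   # guard against F == B
    return np.clip(num / den, 0.0, 1.0)
```

Given a pixel composited with 30 percent foreground, the recovered alpha is 0.3, which is what allows a matted element to be pasted cleanly over a new background.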
The final paper sparked the imagination, as the State University
of New York at Stony Brook's Tomihisa Welsh, Michael Ashikhmin and
Klaus Mueller explained their advances in their paper, "Transferring
Color to Greyscale Images." Through a series of equations, they
have found a way to colorize greyscale images by matching their
luminance values against those of a color source image. This work
hints that someday we may see our favorite black-and-white films
in plausible color.
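The luminance-matching idea can be sketched in a toy form. The function name is mine, and this is deliberately reduced: the paper also matches neighborhood statistics and operates in a decorrelated l-alpha-beta colour space, while this sketch matches luminance alone and borrows the chroma of the closest source sample.

```python
import numpy as np

def transfer_color(gray, src_lum, src_chroma):
    """Toy colour transfer: for each grey value, find the source pixel
    with the closest luminance and borrow its chroma channels; the grey
    value itself is kept as the result's luminance."""
    # Nearest source luminance sample for every grey pixel.
    idx = np.abs(gray[..., None] - src_lum).argmin(axis=-1)
    return src_chroma[idx]
```

Even this crude version conveys the key insight: luminance alone can index into a colour source, so no per-pixel human labeling is needed to propose colours for a greyscale image.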
So, while you may not own that farmhouse, and you may not even
own a camera, beware. Talented intellectuals, such as these authors,
are taking such strides in images and video that WYSIWYG may actually
ring true sooner than you think.