Preparing for a Lunar Impact
Author - Ernie Wright
University of Maryland Baltimore County
Scientific Visualization Studio, NASA Goddard Space Flight Center
"Ernie Wright, a
visualization
specialist
at
NASA
Goddard,
uses
the
work
he
did
for
the
Lunar
Reconnaissance
Orbiter
mission
to
illustrate
how
they
routinely
employ
a
set
of
tools
to
produce
visualization
for
operations,
public
outreach,
and
education."
Kwan-Liu
Ma, VisFiles Editor
A two-ton Atlas Centaur rocket body, part of the Lunar Crater Observation and Sensing Satellite (LCROSS), struck the floor of Cabeus crater, near the south pole of the Moon, at 11:31 UT on October 9, 2009. The purpose of the crash was to create a plume of debris that could be examined for the presence of water and other chemicals in the lunar regolith. The effects of the impact were captured by sensors on board a shepherding satellite following four minutes behind the Centaur. They were also recorded by the Lunar Reconnaissance Orbiter (LRO), which passed over the crash site less than two minutes after the impact. [1]
Long before the event, LCROSS mission planners organized the LCROSS Observation Campaign to recruit ground-based observatories in the effort to observe the impact. [2]
Participating telescopes included Magdalena Ridge and Apache Point in New Mexico, the MMT in Arizona, Lick, Mt. Wilson, and Palomar in California, and the big telescopes (Keck, Subaru, Gemini North, NASA IRTF) on the Mauna Kea summit in Hawaii.
As part of the preparations for ground-based observations, the Scientific Visualization Studio (SVS) at NASA Goddard Space Flight Center [3] was asked to model the impact site as it would appear from Earth. The resulting images would be used to help aim the telescopes. Their aim had to be extremely precise, on the order of a few arcseconds for spectrographic observations, and the target, the rim of an edge-on crater hidden in the rough, shadowy terrain very near the southern limb of the Moon, was not easy to recognize.
Figure 1 shows the target area rendered from the point of view of California observatories, one of the images created for the Observation Campaign. The impact site is near the center of the image, in the shadow behind the prominent mountain in the foreground. The rendered image accurately represents terrain, shadows, and perspective, as can be seen by comparison with a photograph taken at Palomar Observatory. [4]

Figure 1: View of Cabeus crater from California at the time of the impact. The image on the left is a visualization. The one on the right is an astrophotograph taken by Antonin Bouchez through the 200-inch (5.1-meter) Hale telescope at Palomar Observatory.
The success of the visualization model led to its use for far more than just ground-based observation planning. It significantly influenced the selection of the impact site by providing accurate information about the visibility of several candidate sites from Earth. Image maps with prerendered (baked) shadows based on the model were incorporated into Analytical Graphics' Satellite Tool Kit (STK), which is real-time spacecraft visualization software used by the LCROSS team for mission operations exercises. The STK display was also a visual component of the live NASA TV coverage of the impact (Figure 2).

Figure 2: Real-time animation shown during live coverage of the LCROSS impact. The author's contribution is the image map of the lunar surface, with baked shadows.
Once the model was set up, it was relatively easy to render images of the impact site from any point of view. Figure 3 is a nadir-pointing (straight down) view with the target location centered in the image. Compare the rendered image on the left with the image transmitted by the LCROSS shepherding satellite's near-infrared camera. Cabeus is the large, ragged-edged crater in the lower center. The Centaur struck in the northern end of the crater floor, in the center of the roughly U-shaped shadow along the northern rim.
Figure 3: Cabeus crater, bottom center of both images. The image on the left is a visualization. The one on the right was taken by the LCROSS NIR camera.
Tools and Methods
The LCROSS visualization used Autodesk Maya for scene construction and Pixar's PhotoRealistic RenderMan (PRMan) for rendering. The color of the lunar surface was taken from the Clementine global mosaic. Displacement maps for the terrain were provided by LRO's Lunar Orbiter Laser Altimeter (LOLA) instrument team in the form of a polar stereographic projection out to 75 degrees south latitude at a resolution of 240 meters per pixel.
The LCROSS mission supplied target coordinates, and the Observation Campaign furnished astrophotographs of the target region taken under similar lighting conditions, which were used for sanity checking the renders.
Custom C code for transforming between Moon-fixed, Earth-fixed, and J2000 astronomical coordinate systems, and for translating these to Maya positions and rotations, relied on the JPL/NAIF SPICE library and DE421 ephemeris.
Custom IDL code was written to translate the terrain data into an image map, to create additional maps for annotating the images, and to find the elevation and slope of the target site.
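As a rough illustration of one such step, the sketch below (in Python rather than the C and IDL actually used, and with illustrative coordinates and elevation rather than the mission's exact values) converts a target latitude, longitude, and terrain elevation into a Cartesian position in the Moon-fixed frame, the kind of computation needed to place a marker at the target site:

```python
import math

MOON_RADIUS_KM = 1737.4  # mean lunar radius

def moon_fixed_position(lat_deg, lon_deg, elevation_km=0.0):
    """Convert planetocentric lat/lon plus terrain elevation to a
    Cartesian position in the Moon-fixed (body-fixed) frame."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    r = MOON_RADIUS_KM + elevation_km
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

# Approximate LCROSS target in Cabeus (illustrative values only)
x, y, z = moon_fixed_position(-84.68, -48.7, elevation_km=-3.8)
```

Transforming this Moon-fixed vector into J2000 or an Earth-fixed frame is the part that required SPICE and the DE421 ephemeris.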
Rendering relied on several custom RenderMan shaders written by SVS colleagues.
In the remainder of this article, I'll discuss some of the novel challenges of visualizing the LCROSS impact site. Although I occasionally refer to the tools I used, the issues are quite general.
Viewing Geometry
Simulating telescopic views of the Moon required a field of view on the order of 100 kilometers at the Earth-Moon distance, or about an arcminute (1/60th of a degree). Figure 4 illustrates the scale of the impact site and the Earth-Moon system. The Moon lies at a distance of about 30 Earth diameters. The disk of the Moon fills the frame in a 0.5-degree field of view. The field in the bottom right of the figure is 15 times narrower.

Figure 4: The true scale of the Earth-Moon system (top) and the impact site. Blue arcs show distances of 200, 100, and 50 kilometers from the impact.
While unremarkable by astronomical standards, such a small field is far outside the typical use cases for Hollywood animation software, and is in fact outside the range that Maya's interface allows the user to set directly. Maya sets a lower bound of 1.0 degree on its Angle of View camera attribute, and an upper bound of 3500 mm on the focal length, equivalent to 0.33 degrees with the default camera back.
As in most animation software, the field of view of the Maya perspective camera is both inversely proportional to the focal length and directly proportional to what Maya calls the aperture size. This is the physical size of the simulated image medium. By default, Maya sets this to 36 x 24 mm (1.417 x 0.945 inches), the size of the image in a 35 mm still-image camera.
Fortunately, Maya allows the user to simultaneously set the focal length and the aperture size to achieve the arcminute field of view that was needed.
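The relationship is the standard pinhole formula, fov = 2 atan(aperture / 2f). A quick sketch (in Python, not part of the original pipeline) shows why both knobs had to move: on the default 36 mm back, a one-arcminute field would require a focal length of over a hundred meters, far past Maya's 3500 mm cap, so the aperture had to shrink as well.

```python
import math

def angle_of_view_deg(focal_length_mm, aperture_mm):
    """Angle of view of an ideal pinhole camera: 2 * atan(aperture / 2f)."""
    return 2.0 * math.degrees(math.atan(aperture_mm / (2.0 * focal_length_mm)))

def focal_length_for_fov(fov_deg, aperture_mm):
    """Invert the formula: focal length giving the requested angle of view."""
    return aperture_mm / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

ARCMIN = 1.0 / 60.0  # degrees

# Focal length for a 1-arcminute field on the default 36 mm camera back:
f = focal_length_for_fov(ARCMIN, 36.0)
print(f)  # about 124,000 mm (124 m), far beyond Maya's 3500 mm limit

# Shrinking the aperture proportionally brings the focal length into range:
print(angle_of_view_deg(3500.0, 36.0 * 3500.0 / f))  # close to 1/60 degree
```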
Such an extreme field also raises concerns about numerical precision. Precision problems would manifest in the rendered images as quantized, brick-like geometry, incorrect z-depth relationships, and other visually obvious artifacts. While precision was a (relatively minor) issue for the OpenGL preview in Maya's interface, which uses single-precision arithmetic, there were no apparent artifacts in the rendered images. As often happens in production, this answer was necessarily good enough. I never determined how severely this very narrow field of view tests the limits of numerical precision in the Maya/PRMan pipeline.
Spheres
An accuracy problem of a different kind arose from the representation chosen for the spherical geometry of the Moon. In Figure 5, the LCROSS impact site is marked in two different ways. The red disk is an image map applied to the surface of the Moon sphere. The smaller gray disk is the end cap of a cylinder placed at the impact site like the flag stick on a golf green. (The flag stick is visible in Figure 4.) These two disks should be concentric, but in the image on the left, they are not.
The red disk and the displacement map think that the impact site is in one place, while the flag stick and the camera think it's in another.
The left image uses Maya's cubic NURBS sphere to represent the Moon. For the image on the right, an RiSphere, the RenderMan sphere primitive, was substituted in the RIB file submitted to PRMan. This was the approach adopted for the LCROSS visualization.
The obvious inference is that Maya's cubic NURBS sphere isn't exactly spherical, but it isn't clear whether the problem lies in the actual shape of the geometry or in the image mapping parameterization.
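I didn't isolate the cause, but the general hazard is easy to demonstrate: a non-rational cubic B-spline can only approximate a circle. In this toy sketch (Python; a closed 2D curve standing in for one ring of a sphere, with control points placed directly on the unit circle for simplicity rather than adjusted to minimize error), the evaluated curve pulls inside the circle and its radius wobbles from point to point:

```python
import math

def bspline_point(p0, p1, p2, p3, t):
    """Evaluate one segment of a uniform cubic B-spline at t in [0, 1]."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

N = 8  # control points placed on the unit circle
ctrl = [(math.cos(2 * math.pi * i / N), math.sin(2 * math.pi * i / N))
        for i in range(N)]

radii = []
for i in range(N):  # closed curve: wrap around the control polygon
    seg = [ctrl[(i + k) % N] for k in range(4)]
    for j in range(33):
        x, y = bspline_point(*seg, j / 32.0)
        radii.append(math.hypot(x, y))

# Both well below 1, and not constant: the curve is not a circle.
print(min(radii), max(radii))
```

A true NURBS circle needs rational weights; a plain cubic approximation is close enough for most shots, but at an arcminute field of view with a map registered to the surface, "close" is visible.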

Figure 5: A cylinder (centered gray dot) and an image map (red) positioned at the impact site. On a cubic NURBS sphere (left), the geometry and the image map don't coincide. The agreement on an RiSphere (right) is much better.
Sharp Terminator Shader
In the Lambert (or matte) diffuse reflectance model, the intensity of the light reflected from a surface is just the cosine of the angle between the light and the surface normal. This is a familiar and reasonable baseline model for many objects. The Moon, however, is perhaps the archetype of a class of objects that do not behave this way. A Lambertian full Moon would appear much darker toward the edges than it does in the middle, when in reality the full Moon looks like a uniformly bright disk.
Several physically motivated models of such rough, light-scattering surfaces are available from both computer graphics (Oren-Nayar [5]) and planetary photometry (Hapke-Lommel-Seeliger [6]), but to varying degrees they are computationally expensive or artistically inconvenient (Oren-Nayar, for example, tends to look dark and low-contrast).
The sharp terminator custom shader used in the LCROSS visualization simply raises the Lambert cosine term to a fractional power, which creates a high plateau of intensity across a wide range of angles, followed by a steep falloff near the terminator (the shadow line).
This idea did not originate with me. It's been a special-purpose shading option in several renderers for decades, and it's the same idea as the cosine power used in Phong specularity.
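In shader terms the change is a single exponent. A minimal sketch (in Python rather than RenderMan SL; the exponent 0.3 is an illustrative value, not the one used in the LCROSS shader) compares the two falloff curves:

```python
import math

def lambert(cos_theta):
    """Classic Lambert diffuse term."""
    return max(0.0, cos_theta)

def sharp_terminator(cos_theta, power=0.3):
    """Raise the Lambert cosine to a fractional power: intensity stays
    near 1 over a wide range of angles, then drops steeply near 90 deg."""
    return max(0.0, cos_theta) ** power

# Compare the falloff as the light angle approaches the terminator.
for deg in (0, 30, 60, 80, 88):
    c = math.cos(math.radians(deg))
    print(deg, round(lambert(c), 3), round(sharp_terminator(c), 3))
```

At 60 degrees the Lambert term has already fallen to 0.5, while the powered term is still above 0.8; most of the loss happens in the last few degrees before the shadow line.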

Figure 6: Sharp terminator shader (top) and Lambert shading (bottom).
Raytrace Bias
Raytracing is the natural approach for modeling shadows accurately. A ray is fired toward the light (in this case representing the sun) from each visible point on the surface, and if the ray intersects geometry before reaching the light, the origin of the ray is assumed to be in shadow.
While physically rigorous in principle, raytracing can be affected by the limitations of real computers. One well-known problem is ray self-intersection caused by finite numerical precision. A ray fails to leave the starting gate because it hits its own origin. The qualitative effect is isolated black dots, called surface acne, where the surface has been falsely shadowed by self-intersecting rays. [7]
The most common solution is to set an epsilon, or bias, such that ray intersection distances less than this bias value are ignored. The choice of bias value is generally ad hoc and scene dependent.
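A toy shadow test (in Python; the sphere stands in for the lunar globe, and the tiny inward nudge of the ray origin simulates the finite-precision error that leaves a shadow-ray origin fractionally inside the surface) shows both regimes:

```python
import math

def sphere_hit_t(origin, direction, radius, bias):
    """Smallest ray-sphere intersection distance greater than `bias`,
    or None. The sphere is centered at the coordinate origin."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    for t in sorted([(-b - math.sqrt(disc)) / (2.0 * a),
                     (-b + math.sqrt(disc)) / (2.0 * a)]):
        if t > bias:  # ignore hits closer than the bias
            return t
    return None

R = 1737.4  # lunar radius in km
# Shadow-ray origin: a surface point that, through finite precision,
# sits a hair *inside* the sphere.
p = tuple(v * (1.0 - 1e-12) for v in (0.0, 0.0, -R))
sun_dir = (0.0, 0.0, -1.0)  # outward from the surface toward the light

print(sphere_hit_t(p, sun_dir, R, bias=0.0))   # tiny positive hit: false shadow (acne)
print(sphere_hit_t(p, sun_dir, R, bias=1e-6))  # None: spurious hit ignored, point is lit
```

Raise the bias too far, though, and legitimate occluders closer than the bias distance are skipped as well, which is the failure mode on the left of Figure 7.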
In this case, the grazing angle of the illumination at the lunar south pole made the problem more acute. A ray fired nearly parallel to the surface can travel a long way before incorrectly intersecting the surface "from below."
Finding the right balance between false negatives and false positives was important, since the goal was to make shadows that were accurate, not merely plausible.
Figure 7 shows the effect of varying the bias value. In the image on the left, the bias is too high, causing areas that should be in shadow to appear lit. In the image on the right, the bias value is too low. The large area of the crater floor is for the most part correctly shadowed, but speckles of false shadow have emerged on the brightly lit crater rims.
Figure 7: The effect of raytrace bias on shadow calculations.
Conclusion: A Wider View
The SVS uses a combination of off-the-shelf Hollywood tools and custom scripts, shaders, and standalone software to create scientific imagery. The bulk of our work is for public outreach, communicating the science done at NASA Goddard to a general audience. We take terabytes of satellite data and computer simulations and turn them into visualizations of hurricanes, sea ice, ocean temperature, vegetation, cloud cover, solar dynamics, planetary topography, atmospheric and ocean currents, spacecraft trajectories, and a number of other Earth and space science phenomena.
Our work has been featured in SIGGRAPH's Computer Animation Festival, in competitions sponsored by the National Science Foundation and IEEE, and on the covers of science journals. The SVS Web site [3] archives and serves a database of several thousand animations organized into galleries and searchable by keyword.
While we always work closely with scientists, it's unusual that we have an opportunity to participate in actually creating science, much less to have even a small role in the planning of a multimillion-dollar experiment with no do-overs.
But in this case, the scientific question was one we were well-equipped to answer: What will it really look like? What will sunlight falling on a particular piece of lunar terrain on a particular date and time look like through telescopes at particular locations on the Earth?
Answering that question was uniquely rewarding, and we look forward to future interdisciplinary collaborations of this kind.
Acknowledgments
I thank Tim McClanahan, co-investigator on LRO's LEND instrument, for bringing this project to the SVS, for freely sharing his preliminary work, and for collaboration and advice; LCROSS principal investigator Tony Colaprete and Observation Campaign astronomer Diane Wooden, for their accessibility and extremely helpful feedback; LCROSS simulation software lead Mark Shirley and Analytical Graphics, Inc. software engineer Jonathan Lowe, for help and advice on getting image maps into STK; Erwan Mazarico and Greg Neumann of the LRO LOLA instrument team, for making available the most up-to-date terrain data; Alex Kekesi, for collaboration and advice, the Clementine basemap shader, and terrain data wrangling at crunch time; Greg Shirah, who wrote many of the SVS's indispensable scripts and shaders; SVS director Horace Mitchell, for managing the project; and LRO project scientist Rich Vondrak, for allowing me to be lent to LCROSS for a little while. Using an RiSphere was Greg and Alex's idea. I'm grateful to Helen-Nicole Kostis, Alex, Greg, Tom Bridgman, and Lori Perkins for feedback on a draft of this article.
About the Author

Ernie Wright is a programmer/animator at the NASA Goddard Space Flight Center Scientific Visualization Studio, where most of his time is currently devoted to education and public outreach for the Lunar Reconnaissance Orbiter. Prior to coming to the SVS, his early work included isosurface cloud visualization for the Defense Nuclear Agency Weapon Effects Directorate and terrain visualization for the Central Intelligence Agency Office of Imagery Analysis. More recently, he was a LightWave 3D senior programmer. He has a B.S. in computer and information science with a minor in communication from University of Maryland University College.
References:
1. Schultz, P.H., Hermalyn, B., Colaprete, A., Ennico, K., Shirley, M., Marshall, W.S. 2010. The LCROSS Cratering Experiment. Science 22 October 2010, pp. 468-472. See also the half-dozen related articles in the same journal.
2. Heldmann, J.L., Colaprete, A., Wooden, D., Asphaug, E., Schultz, P.H., Plesko, C.S., Ong, L., Korycansky, D.G., Galal, K., Briggs, G. 2008. Lunar Crater Observation and Sensing Satellite (LCROSS) Mission: Opportunities for Observations of the Impact Plumes from Ground-based and Space-based Telescopes. 39th Lunar and Planetary Science Conference. LPI Contribution No. 1391, p. 1482.
3. http://svs.gsfc.nasa.gov
4. http://www.astro.caltech.edu/palomar/lcross.html
5. Oren, M., Nayar, S.K. 1994. Generalization of Lambert's Reflectance Model. ACM 21st Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 239-246.
6. Hapke, B.W. 1963. A Theoretical Photometric Function of the Lunar Surface. Journal of Geophysical Research 68:15, pp. 4571-4586.
7. Woo, A., Pearce, A., Ouellette, M. 1996. It's Really Not a Bug, You See... IEEE Computer Graphics and Applications 16:5, pp. 21-25.