Interview: David Kirk, Computer Graphics Achievement Award recipient
email
28 July 2002
What first drew you to computer graphics?
I started out as an architect (the kind that designs buildings) and a
mechanical engineer. The most interesting part of both of these areas was
being able to build things, and the design process: figuring out what to
build. As I was going through my mechanical engineering degree at MIT, I
discovered Computer Aided Design (CAD) and the graphics part of CAD. I was
very intrigued by the process of making pictures of objects that don't exist
yet. It was such a foreign concept to most people. After a while, I
figured out that the graphics part of it was more interesting than the rest
of the engineering process, at least for me!
Do you have any favorite CG mentors?
I think that the really early folks in computer graphics had the most
exciting time, and I was really inspired by people who discovered a new
idea, or a new way of looking at things. Of course, Jim Blinn was always
doing great new things, but I also have to credit Turner Whitted, Al Barr,
and a few others for being inspirational. In particular, I think that
hearing Frank Crow talk about anti-aliasing really cemented my interest in
graphics. This was at a SIGGRAPH course; probably my first SIGGRAPH.
When was the first time you contributed to SIGGRAPH?
My first SIGGRAPH contribution was probably at SIGGRAPH 87. Jim Arvo and I
wrote a paper called "Fast Ray Tracing by Ray Classification" that
accelerated ray tracing by using a 5-dimensional data structure that
incorporated ray direction as well as position. It was great fun producing
the work, and Jim and I presented it together at SIGGRAPH.
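To give a flavor of the idea: a ray has three positional degrees of
freedom (its origin) and two directional ones, so every ray can be treated
as a point in a 5D space, and nearby points are rays similar enough to
share one short list of candidate objects. Below is a minimal sketch of
that mapping (my own hypothetical names, not the paper's code), using a
"direction cube" parameterization: the dominant axis of the direction
picks one of six cube faces, and (u, v) locate the direction on that face.

    #include <cmath>

    struct Ray5D {
        float x, y, z;   // ray origin
        int   face;      // dominant-axis cube face, 0..5 (+x,-x,+y,-y,+z,-z)
        float u, v;      // direction coordinates on that face, in [-1, 1]
    };

    // Assumes a non-zero direction vector (dx, dy, dz).
    Ray5D classify(float ox, float oy, float oz,
                   float dx, float dy, float dz) {
        float ax = std::fabs(dx), ay = std::fabs(dy), az = std::fabs(dz);
        Ray5D r = {ox, oy, oz, 0, 0.0f, 0.0f};
        if (ax >= ay && ax >= az) {          // x-dominant direction
            r.face = dx > 0 ? 0 : 1;
            r.u = dy / ax;  r.v = dz / ax;
        } else if (ay >= az) {               // y-dominant direction
            r.face = dy > 0 ? 2 : 3;
            r.u = dx / ay;  r.v = dz / ay;
        } else {                             // z-dominant direction
            r.face = dz > 0 ? 4 : 5;
            r.u = dx / az;  r.v = dy / az;
        }
        return r;
    }

Rays that fall in the same cell of a hierarchical subdivision of this 5D
space have similar origins and directions, so one precomputed
candidate-object list can serve all of them; that sharing is where the
acceleration comes from.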
What year/city was your first SIGGRAPH? Which was most intense? Why?
My first SIGGRAPH was... boy, that's hard to remember. Let's see... it was
someplace hot, in the summer - I think it was Dallas, so that would make it
SIGGRAPH 81. I think that the most intense would have to be the first one
where I presented a paper. There is nothing like the adrenaline of standing
up to talk in front of a few thousand people. You're in a big dark room
with lights in your eyes; although you can't see anyone, you know they're
out there!
What contributions to SIGGRAPH are you most proud of?
I am most proud of my consistent support of the field, year after year. I
often think that I need to take a rest; maybe I'll skip SIGGRAPH this year,
but there's always an opportunity that pops up to participate in a course or
a panel, a paper that just has to be written, or a talk that just has to be
given. I really enjoy that the SIGGRAPH community has a constancy about it.
As new people join the community and more experienced contributors move on,
the bulk of the group continues to interact and raise the level of
contribution in graphics.
What's your favorite thing at this year's or last year's SIGGRAPH?
I enjoy the "infinite walk". As you walk through the exhibit hall, or
toward the papers sessions, or the panels, or to any other event, there is a
continuous stream of people whom you know, who want to say hi, talk about
some technical (or, non-technical!) thing, and pass a little time.
Consequently, it takes forever to get anywhere. But, you don't mind at all,
because of all of the great interactions.
What near/intermediate developments in CG do you look forward to?
I look forward to more and more graphics algorithm research becoming
GPU-based. Now that it is possible to program GPUs with a high-level
language, it's much easier for researchers and software
developers to write GPU code. Since GPUs are computationally much more
dense than CPUs, costs are lower per FLOP, and also GPUs encode a lot of the
domain-specific knowledge that we've accumulated over the years.
Consequently, I expect these developments to spur innovation at all of the
interesting timescales. First, real-time: more and more realism will be
possible at 30-60 FPS. Last year's film-quality animations (Shrek,
Final Fantasy, Monsters Inc, etc.) are now, or will very soon be, renderable
in real time. The other end of the spectrum is most interesting to me, though.
Researchers and the film community are willing to wait roughly 2 hours per
frame, on average. 2 hours of GPU-based hardware computation is much, much
more than 2 hours of CPU software rendering, and I believe that this will
lead to absolutely stunning new research results. Many approaches that were
previously thought to be hopelessly hard will simply succumb to brute force.
It's a great time to be doing graphics!
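As a concrete illustration of that shift, here is a minimal sketch in
CUDA (a GPU programming language that arrived a few years after this
interview; the example and its names are mine, not David's). The
per-pixel inner loop that a software renderer runs serially becomes a
kernel launched with one thread per pixel, which is exactly where the
GPU's FLOP density becomes directly usable.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Hypothetical per-pixel work: each thread shades one pixel
    // independently, the same computation a CPU renderer would run
    // one pixel at a time in a loop.
    __global__ void shade(float* framebuffer, int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        // Stand-in for a real shading computation: a smooth gradient.
        framebuffer[y * width + x] = (float)(x + y) / (float)(width + height);
    }

    int main() {
        const int W = 1024, H = 768;
        float* fb = nullptr;
        cudaMalloc(&fb, W * H * sizeof(float));

        // One thread per pixel: hundreds of thousands of pixels are
        // shaded concurrently rather than one after another.
        dim3 block(16, 16);
        dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
        shade<<<grid, block>>>(fb, W, H);
        cudaDeviceSynchronize();

        cudaFree(fb);
        printf("shaded %d pixels on the GPU\n", W * H);
        return 0;
    }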
What's your career path and educational path been?
Oh, you know... school and work :-)
I've always been very interested in practical real-world applications, and
consequently I have worked in industry almost full-time during my education.
I attended MIT from 1978-84, and received bachelor's and master's degrees
in mechanical engineering and engineering science, respectively. During
that time, I worked at Computervision, Raster Technologies, and finally at
Apollo Computer. Over those years, I gradually moved from applications and
software rendering, to firmware for graphics terminals, to graphics hardware
architecture. After I graduated, I continued to work at Apollo, which
eventually became part of Hewlett-Packard. I decided to go back to school
in 1987, in order to strengthen my background in mathematics and
hardware/VLSI, so Caltech seemed like an ideal choice. During my time at
Caltech, I continued to work at HP. I received my Ph.D. from Caltech in
1993, and joined Crystal Dynamics, making video game software for the 3DO
console platform, and later for the Sony PlayStation and Sega Saturn.
It was really great to learn about an entirely new field of application of
graphics. After Crystal Dynamics suffered some setbacks, I moved on to
NVIDIA, which at the time had 38 employees. You know the rest.