Reports from SIGGRAPH 2002

State of the Art in Hardware Shading

Introduction

Much of the excitement around the most recent generations of graphics processing units, or GPUs, comes from their programmability at the vertex and pixel levels. Vertex and pixel programs, also called shaders, give developers enormous flexibility for creating new real-time visual effects.

What are Vertex Programs?
Objects and scenes in 3D graphics are made up of thousands of small triangles. Even objects that appear round or curved are actually composed of many triangles, small enough and tiled together in just the right way that they appear to form a smooth surface. Each of these triangles is defined by a set of three vertices (corners), each with associated data such as its position in the 3D scene, its color, its alpha channel, its texture coordinates, and its lighting properties (for example, whether it has a shiny, metallic look or a softer, more diffuse one). To render the 3D scene composed of these thousands of triangles (and even more vertices), each vertex is processed through the “graphics pipeline,” which converts triangles in the 3D scene to triangles on a 2D display (such as the screen of a monitor).
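To make this concrete, the per-vertex data described above might be declared as follows in a C-like shading language. This is an illustrative sketch only; the struct and field names are not from the course:

    struct Vertex {
        float4 position : POSITION;   // location in the 3D scene
        float4 color    : COLOR0;     // vertex color, including alpha
        float2 texCoord : TEXCOORD0;  // texture coordinates
        float3 normal   : NORMAL;     // surface direction, used for lighting
    };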

One of the first stages of the graphics pipeline, called the transformation and lighting (or “T and L”) stage, transforms the vertices of a triangle in the 3D scene from world space to eye space. Each vertex of the triangle has some location (that is, a value for x, y, and z) in the world coordinate system, the coordinate system in which all the geometry and a "virtual camera" live. The virtual camera faces the 3D scene and, depending on its field of view, sets up a plane of some size onto which the final scene will eventually be projected and rendered. The transformation part of the T and L stage simply converts the vertices of the triangle from the world coordinate system to the eye coordinate system, a coordinate system with its origin at the virtual camera, its negative z-axis pointing in the direction the camera is facing, and its y-axis pointing up (this transformation turns out to be a simple matrix multiplication). The lighting part of the T and L stage calculates the color of a vertex based on its visual properties (shininess, for example) and its position relative to the light sources.
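In shading-language terms, the two halves of the T and L stage boil down to a matrix multiply and a per-vertex lighting computation. A minimal sketch in Cg-like syntax (the names modelView, lightDirection, and materialColor, along with the simple diffuse lighting model, are assumptions for illustration):

    // Transformation: world space to eye space is one matrix multiply.
    float4 eyePosition = mul(modelView, worldPosition);

    // Lighting: here, a simple diffuse term computed from the surface
    // normal and the direction toward the light source.
    float diffuse = max(dot(normal, lightDirection), 0);
    float4 litColor = diffuse * materialColor;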

In brief, vertex programs allow the programmer to replace the transformation and lighting stage of the traditional fixed-function graphics pipeline with custom per-vertex processing. Programmers are given an assembly-language interface (and, more recently, higher-level shading languages like NVIDIA’s Cg) to the T and L unit of the graphics hardware, with complete control over how vertices are transformed and lit.
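For example, a complete (if minimal) vertex program written in Cg might look like the sketch below. It is not code from the course; it assumes the application supplies a combined model-view-projection matrix, a light direction, and a material color as uniform parameters:

    struct Output {
        float4 position : POSITION;
        float4 color    : COLOR0;
    };

    Output main(float4 position : POSITION,
                float3 normal   : NORMAL,
                uniform float4x4 modelViewProj,
                uniform float3   lightDirection,
                uniform float4   materialColor)
    {
        Output OUT;
        // Custom transformation: clip-space position in one matrix multiply.
        OUT.position = mul(modelViewProj, position);
        // Custom lighting: a single diffuse term.
        float diffuse = max(dot(normalize(normal), lightDirection), 0);
        OUT.color = diffuse * materialColor;
        return OUT;
    }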

What are Pixel Programs?
Further down the graphics pipeline, pixels are rendered to the display. After all, an image is ultimately just the right pixels turned on in the right colors. Pixel programs, also called pixel shaders or fragment shaders, allow the programmer to perform arbitrary operations on each pixel before it is written to the frame buffer (which in turn provides that data to the display device to create the image on the screen).
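A pixel program in the same Cg-like style might look like the following sketch, which samples a texture and modulates it by the interpolated vertex color (the sampler name decal is an assumption for illustration):

    float4 main(float2 texCoord : TEXCOORD0,
                float4 color    : COLOR0,
                uniform sampler2D decal) : COLOR
    {
        // Arbitrary per-pixel work: sample a texture and modulate it
        // by the color interpolated from the triangle's vertices.
        return color * tex2D(decal, texCoord);
    }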

Course Highlights
At “State of the Art in Hardware Shading,” Chas Boyd of Microsoft Corporation, Michael McCool of the University of Waterloo, Bill Mark of NVIDIA Corporation, Jason L. Mitchell of ATI Research, Marc Olano of SGI, and Randi Rost of 3Dlabs, Inc. presented their ideas and contributions to the latest in real-time graphics hardware shading.

Bill Mark previewed NVIDIA’s latest graphics hardware technology and its real-time shading capabilities. NVIDIA’s newest GPUs will feature 32-bit IEEE floating-point precision throughout the graphics pipeline, as well as true data-dependent control flow (previously, shaders executed strictly line by line, with no way to branch or loop conditionally). Vertex programs can also grow much larger, and therefore produce more intricate effects, thanks to a maximum program size of 256 instructions (double that of existing NVIDIA hardware, and effectively many times greater once branching and looping are factored in) and additional registers. The new fragment processor will boast a richer instruction set, higher resource limits, and improved texture-mapping capabilities, although it will still lack branching, indexed reads from registers, and memory writes.

Finally, Bill Mark presented the Cg language, a high-level shading language developed by NVIDIA in close collaboration with Microsoft. Vertex and pixel programmability are of no use if developers cannot exploit them quickly and easily, and the somewhat cryptic assembly-language style of programming once required for the vertex and pixel processors slowed development (especially given the tweak-and-fix nature of shader programming). To make the writing of vertex and pixel programs more efficient, Cg provides programmers with a C-like high-level language for their shaders. Shaders become easier to read and write, which means effects exploiting them can appear in games sooner.
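As an illustration of what the data-dependent control flow mentioned above buys, a vertex shader could accumulate lighting from a number of lights decided at run time, something a strictly straight-line shader cannot do. A Cg-style sketch (numLights, lightDirection, lightColor, and ambientColor are illustrative uniform parameters, not code from the talk):

    float4 totalLight = ambientColor;
    // Data-dependent loop: the trip count comes from a runtime
    // parameter rather than being fixed at compile time.
    for (int i = 0; i < numLights; i++) {
        float diffuse = max(dot(normal, lightDirection[i]), 0);
        totalLight += diffuse * lightColor[i];
    }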

Jason L. Mitchell of ATI Research previewed the just-announced Radeon 9700, ATI’s latest GPU, due in stores in about a month. Improving on previous models, the Radeon 9700 allows shaders to branch and loop, supports longer programs, and provides more constant register space. The Radeon 9700 supports DirectX 9’s 2.0 pixel shaders, which allow up to 64 ALU instructions and 32 texture instructions.

Randi Rost of 3Dlabs, Inc. gave a preview of OpenGL 2.0’s vertex and fragment processing support. The OpenGL 2.0 vertex processor will replace previously fixed-function operations such as vertex transformation, lighting, color material application, and texture coordinate generation and transformation. The fragment processor will replace per-fragment computations on values interpolated between the vertices, including texture access, fog, convolution, and pixel zoom. OpenGL 2.0 will provide the same C-like language for both vertex and fragment shaders, with comprehensive matrix and vector data types. In brief, each shader will be represented as a shader object (part of OpenGL 2.0’s object infrastructure), and multiple shader objects can work together as part of a program object.
