Modern GPUs support more flexible programming models through systems such as DirectCompute, OpenCL, and CUDA. Although much has been made of GPGPU programming, this course focuses on the application of GPU compute to graphics in particular. We will start with a brief overview of the underlying GPU architectures for compute. We will then discuss how the languages are constructed to take advantage of these architectures and how they differ from one another. Since the focus is on application to graphics, we will discuss interoperability with graphics APIs and its performance implications. We will also address how to choose between compute and other programmable graphics stages, such as pixel or fragment shaders, and how compute interacts with these other pipeline stages. Finally, we will discuss instances where compute has been used specifically for graphics. Attendees will leave the course with a basic understanding of where they can make use of compute to accelerate or extend graphics applications.
Programmers who want to accelerate or extend their real-time rendering systems using GPU compute.
Programmers with experience in modern graphics APIs and shader languages. They should also have some knowledge of basic multithreading concepts.
Karl Hillesland, AMD
Karl Hillesland received his PhD from the University of North Carolina at Chapel Hill in 2005 for his work in graphics-related GPGPU research. Since then, he has worked for Electronic Arts (Maxis) and Pixelux Entertainment. He is now part of AMD's Advanced Technologies Initiatives, where he is responsible for graphics research and GPU demos.