The intensity of reflected light at a point on a surface is the integral over the hemisphere above the surface of a light function L times a reflectance function R.
Or in integral form (sometimes called the "pixel equation" or the "rendering equation"):

I_r(x, y, z, φ, θ, λ) = ∫∫_hemisphere L(x, y, z, φ, θ, λ) R(φ, θ, λ) dφ dθ
If we restrict ourselves to specular reflections only, then we can define the ratio of the reflected intensity in one direction to the incident light in another direction as the Bidirectional Reflectance Distribution Function (BRDF):

BRDF = R(φ, θ, λ) = I_r(x, y, z, φ, θ, λ) / L(x, y, z, φ, θ, λ)
So to compute the reflected intensity along any direction, e.g., the viewing direction, we must first find the total amount of incoming light.
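The hemisphere integral above can be approximated numerically. The sketch below uses a simple midpoint rule in polar coordinates; the cos θ foreshortening factor and sin θ spherical-area term are standard additions not spelled out in the text, and the function names are illustrative only.

```python
import math

def reflected_intensity(L, R, n=64):
    """Approximate I_r = ∫∫ L(φ,θ) R(φ,θ) cosθ sinθ dφ dθ over the
    hemisphere (φ in [0, 2π), θ in [0, π/2)) with a midpoint rule."""
    total = 0.0
    dphi = 2 * math.pi / n
    dtheta = (math.pi / 2) / n
    for i in range(n):
        phi = (i + 0.5) * dphi
        for j in range(n):
            theta = (j + 0.5) * dtheta
            # sinθ is the spherical-coordinate area term;
            # cosθ is the usual foreshortening of incident light
            total += (L(phi, theta) * R(phi, theta)
                      * math.cos(theta) * math.sin(theta) * dphi * dtheta)
    return total

# For constant L = R = 1 the integral of cosθ sinθ over the hemisphere is π
approx = reflected_intensity(lambda p, t: 1.0, lambda p, t: 1.0)
```

In practice renderers evaluate this integral with Monte Carlo sampling rather than a fixed grid, but the quadrature above shows the structure of the computation.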
The full rendering equation is too computationally complex to be solved in a finite time. As is normally done in science, various approximations are made to simplify the equation until it can be solved in a finite and reasonable amount of time. As computational power increases, these approximations can be refined and made more realistic, thus producing better and more realistic images. In the sections below we discuss some of the common approximations to simplify the rendering equation.
If we assume that the scene is static, then we can eliminate the integral over all time. This is not true for animation, but even there we can assume that each frame of the animation represents a static situation and compute the individual frames. This leads to somewhat unrealistic images in animation, so some systems add back the time dependence in the form of motion blur.
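One common way to add the time dependence back is to render several static sub-frames across the shutter interval and average them. The sketch below assumes a hypothetical render_at_time(t) function that produces a pixel value for the scene frozen at time t.

```python
def render_with_motion_blur(render_at_time, frame_time, shutter, samples=8):
    """Approximate motion blur by averaging static renders taken at
    evenly spaced sample times within the shutter interval."""
    total = 0.0
    for k in range(samples):
        # midpoint of each sub-interval of the open shutter
        t = frame_time + shutter * (k + 0.5) / samples
        total += render_at_time(t)
    return total / samples
```

Each call to render_at_time is itself a static-scene computation, so this recovers the time integral without abandoning the static-scene simplification.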
The reflected light intensity is the integral over all wavelengths of the light spectrum at each wavelength multiplied by the reflectance spectrum at that wavelength. An approximation is to sample the color spectrum at just three wavelengths: "red", "green", and "blue". These colors correspond to the photoreceptors in our eyes and the phosphors in display monitors. This approximation changes this integral into a three term sum, one for each color. Some systems allow for color sampling at more wavelengths. These multiple samples must then be recombined for display on a monitor.
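With three samples, the wavelength integral collapses to a componentwise product of the light and reflectance samples. A minimal sketch, with illustrative values:

```python
def reflected_color(light_rgb, reflectance_rgb):
    """Three-term sum replacing the wavelength integral: multiply the
    light spectrum sample by the reflectance sample at each of the
    three sampled wavelengths ("red", "green", "blue")."""
    return tuple(L * R for L, R in zip(light_rgb, reflectance_rgb))

white_light = (1.0, 1.0, 1.0)
red_surface = (0.9, 0.1, 0.1)   # reflects mostly "red"
result = reflected_color(white_light, red_surface)
```

Systems that sample at more wavelengths work the same way with longer tuples, then project the result down to three values for display.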
Note that we do not specify exactly what "red", "green", and "blue" are, i.e., the wavelengths of light. This causes problems with presenting images on different types of media, or even just on different monitors. There is more information on this in the section on Color.
In any scene we have two types of light sources: point light sources and distributed light sources.
If a light-emitting source is small or distant, such as the sun, we can model it as a point source. A large, close light source, e.g., a fluorescent light, is a distributed light source. In a simple model we assume that all light sources are point sources. For point light sources, the light function L(x, y, z, φ, θ, λ) in the pixel equation becomes the light intensity I(λ) times a Dirac delta function δ(φ, θ). Because of the sifting property of the Dirac delta function, the pixel equation integral becomes a summation over the light sources: Sum over i of L_i R(φ_Li, θ_Li), where φ_Li, θ_Li are the polar coordinates of the direction from the point (x, y, z) to the light source L_i.
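The delta functions turn the integral into the simple loop below. The Lambertian-style reflectance used here is an assumption for illustration; the text leaves R unspecified.

```python
import math

def reflected_from_point_lights(lights, R):
    """Sum over point light sources replacing the hemisphere integral.
    lights: list of (intensity, phi, theta) tuples, where (phi, theta)
    is the polar direction from the surface point to that source.
    R(phi, theta): reflectance for light arriving from (phi, theta)."""
    return sum(I * R(phi, theta) for I, phi, theta in lights)

# Illustrative Lambertian-style reflectance: k * cos(theta), clamped at 0
lambert = lambda phi, theta: 0.5 * max(0.0, math.cos(theta))

lights = [(1.0, 0.0, 0.0),            # overhead light
          (2.0, math.pi, math.pi / 3)]  # brighter light at a grazing angle
result = reflected_from_point_lights(lights, lambert)
```

Each light contributes independently, which is why local illumination with point lights is so cheap compared with evaluating the full integral.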
If we assume that all of the incident light on a surface is from light sources, i.e., that there is no interreflected light in a scene, then this is called a "Local Illumination Model". A "Global Illumination Model" accounts for light interreflection. At the very least diffuse interreflections must be accounted for in some fashion, else only directly lighted surfaces will be visible and the image will look very unrealistic. Two popular methods for Global Illumination are ray tracing, which primarily handles specular interreflections, and radiosity, which primarily handles diffuse interreflections.