The basic idea is to test the z-depth of each surface to determine the closest (visible) surface. Declare an array z_buffer(x, y) with one entry for each pixel position, and initialize every entry to the maximum depth. Note: if a perspective depth transformation has been performed, then all z values satisfy 0.0 <= z(x, y) <= 1.0, so initialize all values to 1.0. Then the algorithm is as follows:
for each polygon P
    for each pixel (x, y) in P
        compute z_depth at (x, y)
        if z_depth < z_buffer(x, y) then
            set_pixel(x, y, color)
            z_buffer(x, y) = z_depth
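The loop above can be sketched in Python. This is a minimal illustration only: the frame size, the color values, and the fragment lists are assumptions, and real code would rasterize each polygon to find the pixels it covers.

```python
# Minimal z-buffer sketch. Polygon rasterization is out of scope here,
# so each "polygon" is just a list of (x, y, z_depth) fragments it
# covers -- the sample values below are assumptions for illustration.
WIDTH, HEIGHT = 4, 4
MAX_DEPTH = 1.0  # after the perspective depth transform, 0 <= z <= 1

z_buffer = [[MAX_DEPTH] * WIDTH for _ in range(HEIGHT)]
frame = [[None] * WIDTH for _ in range(HEIGHT)]  # stands in for set_pixel

def render(polygons):
    for poly in polygons:                        # for each polygon P
        for x, y, z_depth in poly["fragments"]:  # for each pixel (x, y) in P
            if z_depth < z_buffer[y][x]:         # closer than what is stored?
                frame[y][x] = poly["color"]      # set_pixel(x, y, color)
                z_buffer[y][x] = z_depth         # record the new depth

render([
    {"color": "red",  "fragments": [(1, 1, 0.5)]},
    {"color": "blue", "fragments": [(1, 1, 0.7), (2, 2, 0.3)]},
])
```

After this runs, red survives at pixel (1, 1) because the blue fragment's depth 0.7 is not closer than the stored 0.5, while blue lands at (2, 2) unopposed.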
The polyscan procedure can easily be modified to add the z-buffer test. The computation of z_depth(x, y) uses coherence calculations similar to the x-intersection calculations; it is actually a bilinear interpolation, i.e., interpolation both down (y) and across (x) the scan lines.
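That bilinear interpolation can be sketched as two linear interpolations: first down the polygon edges to the current scan line (the y step), then across the span between the two x-intersections (the x step). The vertex depths and interpolation parameters below are made-up values for illustration.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

# Step 1 (down, in y): interpolate z along the left and right edges
# to the current scan line. The endpoint depths 0.2, 0.6, 0.8, 1.0
# are made-up vertex values, and t = 0.5 puts us halfway down.
z_left  = lerp(0.2, 0.6, 0.5)   # z at the left x-intersection
z_right = lerp(0.8, 1.0, 0.5)   # z at the right x-intersection

# Step 2 (across, in x): interpolate z between the two intersections.
# In polyscan this is done incrementally, adding a constant per pixel.
z_mid = lerp(z_left, z_right, 0.5)
```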
Advantages of the z-buffer algorithm: it always works, regardless of the order in which the polygons are drawn, and it is simple to implement.
An alternative method computes the z-depth values from the polygon's plane equation.
Now remember from the 2D viewing transformation that in the procedure Point Viewing Transform we had the scale factors Sx, Sy and the offsets Ax, Ay, all of which are functions of the window, the viewport, and the PDC. In Point Viewing Transform we computed the pixel coordinates as Xp = Sx * (Xw + Ax) and Yp = Sy * (Yw + Ay).
Now as we scan across a polygon, the next pixel is X'p = Xp + 1.

So Xw = Xp/Sx - Ax, and X'w = X'p/Sx - Ax = Xp/Sx + 1/Sx - Ax = Xw + 1/Sx

Similarly, stepping down a scan line, if Y'p = Yp + 1 then Y'w = Yw + 1/Sy
Now from the plane equation (A*Xw + B*Yw + C*Z = D):

Z(Xw, Yw) = (-A*Xw - B*Yw + D) / C

So Z(Xw + 1/Sx, Yw) = (-A*(Xw + 1/Sx) - B*Yw + D) / C = Z(Xw, Yw) - (A*(1/Sx)) / C

Similarly Z(Xw, Yw + 1/Sy) = Z(Xw, Yw) - (B*(1/Sy)) / C

So for each polygon, compute the constant terms (A*(1/Sx)) / C and (B*(1/Sy)) / C once. Then find Xw, Yw, Zw at the polygon vertices and use the incremental relations above to compute the rest of the Zw values.
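A sketch of that incremental scheme follows. The plane coefficients A, B, C, D and the viewing-transform scales Sx, Sy below are arbitrary values chosen for illustration.

```python
# Plane A*x + B*y + C*z = D; the coefficients and the viewing-transform
# scales Sx, Sy are arbitrary values chosen for illustration.
A, B, C, D = 1.0, 2.0, 4.0, 8.0
Sx, Sy = 10.0, 10.0

def z_exact(xw, yw):
    """Depth from the plane equation: Z(Xw, Yw) = (-A*Xw - B*Yw + D) / C."""
    return (-A * xw - B * yw + D) / C

# Per-polygon constants: change in z per one-pixel step in x and in y.
dz_dx = -(A * (1.0 / Sx)) / C
dz_dy = -(B * (1.0 / Sy)) / C

# Stepping one pixel to the right just adds dz_dx to the current depth,
# instead of re-evaluating the plane equation from scratch.
z0 = z_exact(0.0, 0.0)
z1 = z0 + dz_dx
assert abs(z1 - z_exact(1.0 / Sx, 0.0)) < 1e-12
```

The two deltas are computed once per polygon, so filling each subsequent pixel costs a single addition.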