I have a set of vertices in 3D space, and for each I retain the following information:
Right now, I'm doing a perspective projection with the projection plane being XY and the eye placed at (0, 0, d), with d < 0. To do Z-buffering, I need to find, for each pixel on the screen, the depth of the corresponding point on a polygon (they're all planar) so I can hide the surfaces that are not visible. My questions are the following:
How do I determine which polygon a pixel belongs to, so I can use the equation of the plane containing that polygon to find its Z-coordinate?
Are my data structures correct? Do I need to store something else entirely in order for this to work?
I'm just projecting the vertices onto the projection plane and joining them with lines based on the pointer lists.
You will need an active edge list, which contains all polygon edges intersected by the current scanline. You will also need an in/out flag for each polygon on the scanline. The flags are toggled on/off as you cross an edge of that polygon.
The rules for drawing each pixel along a scanline are;
An important component in the data structure needed for this algorithm is the edge node. Each edge in the polygon is represented by an edge node. An edge node contains the following information;
When constructing an edge node for the edges, the following must be taken into account;
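A minimal edge-node sketch in Python, assuming bottom-to-top scanning; the field and function names here are my own illustrations, not taken from the list above:

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    y_upper: int        # scanline at which the edge ends (edge is removed here)
    x_int: float        # x-intersection with the current scanline
    dx_per_scan: float  # change in x per unit step in y (inverse slope)

def make_edge_node(x0, y0, x1, y1):
    """Build an edge node from two vertices. Assumes y0 != y1;
    horizontal edges are typically skipped entirely."""
    if y0 > y1:                          # orient so (x0, y0) is the lower vertex
        x0, y0, x1, y1 = x1, y1, x0, y0
    dx_per_scan = (x1 - x0) / (y1 - y0)
    return EdgeNode(y_upper=y1, x_int=x0, dx_per_scan=dx_per_scan)
```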
In the above image the very top scanline intersects two edges at the top of the triangle. The two edges cause it to toggle from "out" to "in" to "out", which is correct. However, the second scanline also intersects two edges and ends up toggled "out", which is incorrect. The solution is to lower the y-int of the lower edge in this case. Now the scanline only intersects one edge at the vertex.
This adjustment will not have any effect on the shape of the triangle, as the x-int will still be used to determine where to start drawing. The effect of lowering the vertex is to cause the edge to be removed one scanline earlier than it would be otherwise. All this assumes you are scanning from bottom to top.
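The decision of when to apply that shortening can be sketched as a small predicate (a sketch of my own, assuming bottom-to-top scanning): the fix is only needed at a "pass-through" vertex, where one edge ends and the other continues upward, not at a local minimum or maximum.

```python
def needs_shortening(y_prev, y_vertex, y_next):
    """True when the shared vertex is a pass-through vertex: the scanline
    through y_vertex would otherwise count two intersections and toggle
    the in/out flag back to 'out'. At a local min or max, counting two
    intersections is the correct behaviour, so no shortening is needed."""
    return (y_prev < y_vertex < y_next) or (y_next < y_vertex < y_prev)
```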
Most implementations maintain an active edge list (AEL), which contains all the edges intersected by the current scanline. The AEL needs to be maintained as we move from one scanline y to the next scanline (y+1);
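A sketch of that per-scanline maintenance, assuming a bucket-sorted edge table mapping each scanline to the edges that start there; the structure and names are my own, not the answer's:

```python
from dataclasses import dataclass

@dataclass
class Edge:
    y_upper: int        # scanline where the edge ends
    x_int: float        # x-intersection with the current scanline
    dx_per_scan: float  # x step per scanline

def update_ael(ael, edge_table, y):
    """Move the active edge list to scanline y."""
    ael = [e for e in ael if e.y_upper > y]   # drop edges the scan has passed
    for e in ael:
        e.x_int += e.dx_per_scan              # incremental x update
    ael.extend(edge_table.get(y, []))         # activate edges starting at y
    ael.sort(key=lambda e: e.x_int)           # keep in/out spans in x order
    return ael
```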
Ken's answer addresses scanline rendering (well), but not the occlusion problem.
So... you are asking about partially visible polys. That is where the beauty of combining scanline rendering with the z-buffer comes in.
(Using the formula of a plane to determine the depth at a given coordinate, like raycasting through every individual pixel, is unnecessarily complex and costly. Avoid that.)
Determine a list of all onscreen triangles.
Transform each vertex by multiplying it by the projection matrix. This gives you the depth at that vertex; store that value in your depth buffer (and in a separate "vertex depth" buffer -- see my answer to your other question).
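A minimal sketch of that projection step, using the eye-at-(0, 0, d) setup from the question rather than a full 4x4 matrix; the helper name is mine:

```python
def project(x, y, z, d):
    """Perspective-project (x, y, z) onto the z = 0 plane with the eye
    at (0, 0, d), d < 0. Returns the screen coordinates plus the point's
    z, which is the value to write into the depth buffer."""
    t = d / (d - z)           # parameter where the eye-to-point ray hits z = 0
    return x * t, y * t, z    # (screen_x, screen_y, depth)
```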
For each set (triangle) of 3 vertices, perform linear interpolation between the vertices to get the depth value for each intervening pixel, and store that value in your depth buffer.
Once you've done this z-buffer part, you will no longer need to concern yourself with edge sorting for potential occlusion. In essence, the output of what I have described above is what you will use as the input to the step that Ken describes. The z-buffer, existing as a maximum-resolution quantisation of the display surface (i.e. the screen), can accommodate any full or partial occlusion with the highest accuracy possible -- this being the resolution of the screen itself, leading to no possible depth discrepancies in the final output. PS. In software, continuous geometric approaches may be cheaper depending on scene complexity, with the z-buffer being more expensive (being a discrete, iterative approach) but still much easier to implement.
This is why hardware z-buffering is favoured: It is both conceptually simple and very fast when implemented in hardware.