Scan-Line Z-Buffering Dilemma

by toby   Last Updated August 03, 2018 16:13 PM

I have a set of vertices in 3D space, and for each I retain the following information:

• Its 3D coordinates (x, y, z).
• A list of pointers to some of the other vertices with which it's connected by edges.

Right now, I'm doing a perspective projection with the projection plane being XY and the eye placed at (0, 0, d), with d < 0. To do Z-buffering, I need to find the depth of the point of a polygon (they're all planar) that corresponds to a given pixel on the screen, so I can hide the surfaces that are not visible. My questions are the following:

1. How do I determine which polygon a pixel belongs to, so I can use the equation of the plane containing that polygon to find the Z-coordinate?

2. Are my data structures correct? Do I need to store something else entirely in order for this to work?

I'm just projecting the vertices onto the projection plane and joining them with lines based on the pointer lists.


You will need an active edge list (AEL), which contains all polygon edges intersected by the current scanline. You will also need an in/out flag for each polygon on the scanline. A polygon's flag is toggled on/off as you cross one of its edges.

The rules for drawing each pixel along a scanline are:

1. If no polygon flags are 'in', draw the background.
2. If only one polygon flag is 'in', draw the colour of that polygon.
3. If two or more polygon flags are 'in', use your plane equations to find out which polygon is closest at that pixel position.
4. As you move onto the next pixel, check the AEL (see below) to see whether you are crossing an edge; if so, toggle the appropriate flag.
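The per-pixel rules above can be sketched as follows. This is a minimal sketch, not the poster's actual code; it assumes each plane is stored as coefficients (a, b, c, d) of a*x + b*y + c*z + d = 0, and that a smaller z value means nearer the eye:

```python
def plane_depth(plane, x, y):
    # Solve a*x + b*y + c*z + d = 0 for z; assumes c != 0
    # (a plane with c == 0 is edge-on to the view and has no single depth).
    a, b, c, d = plane
    return -(a * x + b * y + d) / c

def pixel_colour(in_flags, planes, colours, x, y, background):
    # in_flags: set of polygon ids whose flag is currently 'in'
    if not in_flags:
        return background                      # rule 1: nothing is 'in'
    if len(in_flags) == 1:
        return colours[next(iter(in_flags))]   # rule 2: exactly one polygon
    # rule 3: among the 'in' polygons, pick the one whose plane
    # gives the smallest (nearest) depth at this pixel
    closest = min(in_flags, key=lambda pid: plane_depth(planes[pid], x, y))
    return colours[closest]
```

Note that the plane equation is only evaluated when two or more polygons overlap at a pixel, which keeps the common single-polygon case cheap.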

An important component of the data structure needed for this algorithm is the edge node. Each edge of a polygon is represented by an edge node, which contains the following information:

1. yupper, the y co-ordinate of the upper end-point of the edge.
2. ylower, the y co-ordinate of the lower end-point of the edge.
3. xint, initially the x co-ordinate of the lower end-point of the edge. As the algorithm progresses, this field stores the x co-ordinate of the edge's intersection with the current scan-line.
4. ym, the reciprocal of the slope of the edge (1/m).
5. A link to the polygon which owns the edge.
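A minimal sketch of such an edge node, with the field names above (the constructor below is an illustrative helper, not part of the original description):

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    y_upper: int       # y co-ordinate of the upper end-point
    y_lower: int       # y co-ordinate of the lower end-point
    x_int: float       # x of the lower end-point initially; scan-line intersection later
    inv_slope: float   # 1/m, added to x_int when stepping to the next scan-line
    poly_id: int       # link to (id of) the polygon which owns the edge

def make_edge_node(p0, p1, poly_id):
    # Order the endpoints so (x0, y0) is the lower one.
    (x0, y0), (x1, y1) = sorted([p0, p1], key=lambda p: p[1])
    if y0 == y1:
        return None  # horizontal edges are ignored (see below)
    return EdgeNode(y_upper=y1, y_lower=y0, x_int=float(x0),
                    inv_slope=(x1 - x0) / (y1 - y0), poly_id=poly_id)
```

Storing inv_slope rather than the slope itself is what makes the per-scanline update a single addition.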

When constructing edge nodes, the following must be taken into account:

1. Horizontal edges are ignored (no edge node is generated for them); these edges are drawn when the adjacent edges are processed.
2. If two edges share a vertex AND the vertex is the upper end-point of one edge and the lower end-point of the other, the yupper of the lower edge must be reduced by 1. This allows the in/out toggle to work correctly.

In the above image, the very top scanline intersects two edges at the top of the triangle. The two edges cause the flag to toggle from "out" to "in" to "out", which is correct. However, the second scanline also intersects two edges at the shared vertex and is left toggled "out", which is incorrect. The solution is to lower the yupper of the lower edge in this case; the scanline then intersects only one edge at that vertex.

This adjustment has no effect on the shape of the triangle, as xint is what determines where to start drawing. The effect of lowering yupper is that the edge is removed from the AEL one scanline earlier than it would be otherwise. All of this assumes you are scanning from bottom to top.

Most implementations maintain an active edge list (AEL), which contains all the edges intersected by the current scan-line. The AEL must be maintained as we move from one scan-line y to the next scan-line (y+1):

1. Add any edges whose ylower is equal to y.
2. Colour the pixels along the scanline as described above.
3. Some edges in the AEL will no longer be intersected by scan-line (y+1); remove the edges whose yupper is equal to y.
4. The intersection points of the scan-line with the remaining edges change for the new scan-line: update xint to xint + ym. Note that xint must be stored as a real number to avoid round-off errors.
5. Increment y (to the next scan-line).
6. Repeat until the last scanline is drawn.
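The maintenance loop above might look like this. A sketch only, using plain dicts with the edge-node fields described earlier; it assumes the shared-vertex yupper adjustment has already been applied, and yields the sorted x intersections for each scanline instead of actually colouring pixels:

```python
def scan(edges, y_min, y_max):
    """Yield (y, sorted x intersections) for each scanline from bottom to top."""
    ael = []  # active edge list
    for y in range(y_min, y_max + 1):
        # 1. add edges that start on this scanline
        ael += [e for e in edges if e['y_lower'] == y]
        # 2. colour pixels between successive intersections (here: just report them)
        yield y, sorted(e['x_int'] for e in ael)
        # 3. remove edges no longer intersected by scan-line y + 1
        ael = [e for e in ael if e['y_upper'] != y]
        # 4. advance each intersection incrementally: xint += 1/m
        for e in ael:
            e['x_int'] += e['inv_slope']
```

The incremental xint update is why no per-scanline line/edge intersection test is needed.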
Ken
October 14, 2012 15:25 PM

Ken's addressed scanline rendering (well) but not the occlusion problem.

So... you are asking about partially visible polys. That is where the beauty of combining scanline rendering with the z-buffer comes in.

(Using the formula of a plane to determine the depth at a given coordinate, like raycasting through every individual pixel, is unnecessarily complex and costly. Avoid that.)

1. Determine a list of all onscreen triangles.

2. Transform each vertex by multiplying it by the projection matrix. This gives you the depth at that vertex; store that value in your depth buffer (and in a separate "vertex depth" buffer -- see my answer to your other question).

3. For each set (triangle) of 3 vertices, perform linear interpolation between the vertices to get the depth value for each intervening pixel, and store that value in your depth buffer.
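Step 3 can be sketched with barycentric weights. This is a minimal illustration of the interpolation, not the poster's code, and note that linearly interpolating z in screen space is only an approximation (a perspective-correct version would interpolate 1/z):

```python
def interp_depth(tri, p):
    # tri: three projected vertices (x, y, z); p: pixel (x, y) inside the triangle.
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    px, py = p
    # Barycentric weights of p with respect to the triangle.
    den = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    w0 = ((y1 - y2) * (px - x2) + (x2 - x1) * (py - y2)) / den
    w1 = ((y2 - y0) * (px - x2) + (x0 - x2) * (py - y2)) / den
    w2 = 1.0 - w0 - w1
    # Depth is the weighted average of the vertex depths.
    return w0 * z0 + w1 * z1 + w2 * z2
```

The resulting depth is compared against the value already in the z-buffer at that pixel, and the pixel is only written if it is nearer.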

Once you've done this z-buffer pass, you no longer need to concern yourself with edge sorting for potential occlusion. In essence, the output of what I have described above is what you will use as the input to the step that Ken describes. The z-buffer, being a maximum-resolution quantisation of the display surface (i.e. the screen), can accommodate any full or partial occlusion with the highest accuracy possible -- the resolution of the screen itself -- leading to no possible depth discrepancies in the final output.

PS. In software, continuous geometric approaches may be cheaper depending on scene complexity, with the z-buffer being more expensive (being a discrete / iterative approach) but still much easier to implement.

This is why hardware z-buffering is favoured: It is both conceptually simple and very fast when implemented in hardware.

Engineer
November 03, 2012 09:28 AM