Another Thought About Micropolygons

One task which I once undertook was to provide a general-case description of the possible shader-types that can run on a GPU – on the Graphics Processing Unit of a powerful graphics card. But one problem in providing such a panoply of ideas is the fact that too many shader-types can run, for one synopsis to explain all their uses.

One use for a Geometry Shader is to treat an input triangle as if it consisted of multiple smaller triangles, each of which would then be called ‘a micropolygon’, so that the points between these micropolygons can be displaced along the normal-vector of the base geometry, from which the original triangle came. One reason for which the emergence of DirectX 10, which roughly corresponds to OpenGL 3.x, was followed so quickly by DirectX 11, which roughly corresponds to OpenGL 4.x, is the fact that the tessellation of the original triangle can be performed most efficiently when yet another type of shader – a dedicated Tessellation Shader – performs only the tessellation. But in principle, a Geometry Shader is also capable of performing the actual tessellation, because in response to one input triangle, a GS can output numerous points, which either form triangles again, or which form triangle strips. And in fact, if the overall pattern of the tessellation is rectangular, triangle strips make an output topology for the GS that makes more sense than individual triangles. But I’m not going to get into ‘Geometry Shaders coded to work as Tessellators’, in this posting.
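Just to make the mechanics concrete – and this is only a minimal, pass-through sketch in GLSL, not a tessellator – the following shows how a GS declares triangles as its input, declares a triangle strip as its output topology, and emits vertices one at a time. The names `vNormal` and `gNormal` are my own, illustrative choices:

    #version 330 core

    layout(triangles) in;                          // one input triangle per invocation
    layout(triangle_strip, max_vertices = 3) out;  // output topology: a triangle strip

    in  vec3 vNormal[];   // per-vertex normals, as received from the Vertex Shader
    out vec3 gNormal;

    void main()
    {
        // Pass the triangle through unchanged.  A GS coded to work as a
        // tessellator would instead emit many more vertices here, up to
        // the declared max_vertices.
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            gNormal     = vNormal[i];
            EmitVertex();
        }
        EndPrimitive();   // closes the current strip
    }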

Instead, I’m going to focus on a different aspect of the idea of micropolygons, one that I think is more in need of explanation.

Our GS doesn’t just need to displace the micropolygons – hence, the term ‘displacement shader’ – but in addition, must compute the normal-vector of each Point it outputs. If this normal-vector were just the same as the one for the input triangle, then the Fragment Shader which follows the Geometry Shader would not really be able to shade the resulting surface as having been displaced. And this would be because, especially when viewing a 3D model from within a 2D perspective, the observer does not really see depth. He or she sees changes in surface brightness, which his or her brain decides must have been due to subtle differences in depth. And so a valid question which arises is, ‘How can a Geometry Shader compute the normal-vectors of all its output Points, especially since shaders typically don’t have access to data that arises outside one invocation?’
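One possible answer – and I’m only sketching it under the assumption that the displacement amounts come from a height-map which the GS can sample for itself – is to take finite differences of that same height function, entirely within the one invocation, and to tilt the base normal by the resulting gradient. In the GLSL sketch below, `heightMap`, `uTexelSize` and `uAmplitude` are names I’ve invented for illustration, and `T` and `B` are assumed to be the unit tangent and bitangent of the base geometry:

    // Inside the Geometry Shader (GLSL 3.30+).
    uniform sampler2D heightMap;   // hypothetical displacement texture
    uniform float     uTexelSize;  // hypothetical: one texel, in UV units
    uniform float     uAmplitude;  // hypothetical: displacement scale

    // Approximates the normal of the displaced surface at (uv), given the
    // base normal N and the base tangent / bitangent T, B – all unit-length.
    vec3 displacedNormal(vec2 uv, vec3 N, vec3 T, vec3 B)
    {
        // Central differences of the height function, sampled within this
        // same invocation; no data from neighbouring invocations is needed.
        float hL = textureLod(heightMap, uv - vec2(uTexelSize, 0.0), 0.0).r;
        float hR = textureLod(heightMap, uv + vec2(uTexelSize, 0.0), 0.0).r;
        float hD = textureLod(heightMap, uv - vec2(0.0, uTexelSize), 0.0).r;
        float hU = textureLod(heightMap, uv + vec2(0.0, uTexelSize), 0.0).r;

        float dhdu = uAmplitude * (hR - hL) / (2.0 * uTexelSize);
        float dhdv = uAmplitude * (hU - hD) / (2.0 * uTexelSize);

        // Tilt the base normal against the height gradient.
        return normalize(N - dhdu * T - dhdv * B);
    }

The same trick would also work with a purely procedural height function, in which case the two small offsets are just parameter deltas, and no texture needs to be sampled at all.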

(Updated 07/08/2018, 7h55 … )
