Another Thought About Micropolygons

One task which I once undertook was to provide a general-case description of the shader-types that can run on a GPU – on the Graphics Processing Unit of a powerful graphics card. But one problem in providing such a panoply of ideas is the fact that too many shader-types exist, for any one synopsis to explain all their uses.

One use for a Geometry Shader is to treat an input-triangle as if it consisted of multiple smaller triangles, each of which would then be called ‘a micropolygon’, so that the points between these micropolygons can be displaced along the normal-vector of the base geometry, from which the original triangle came. One reason for which the emergence of DirectX 10, which roughly corresponds to OpenGL 3.x, was followed so quickly by DirectX 11, which roughly corresponds to OpenGL 4.x, is the fact that the tessellation of the original triangle can be performed most efficiently, when yet another type of shader performs only the tessellation. But in principle, a Geometry Shader is also capable of performing the actual tessellation, because in response to one input-triangle, a GS can output numerous points, which form either triangles again, or triangle strips. And in fact, if the overall pattern of the tessellation is rectangular, triangle strips make an output-topology for the GS that makes more sense than individual triangles. But I’m not going to get into ‘Geometry Shaders coded to work as Tessellators’ in this posting.
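Just to fix the terminology: in GLSL – the OpenGL side of this comparison – a Geometry Shader declares its input-topology and output-topology with layout qualifiers. A minimal sketch, in which the max_vertices value is only an example, since the real limit is implementation-dependent:

    // Geometry Shader topology declarations (GLSL 1.50+):
    layout(triangles) in;                            // one input-triangle per invocation
    layout(triangle_strip, max_vertices = 64) out;   // strips suit a rectangular pattern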

Instead, I’m going to focus on a different aspect of the idea of micropolygons, one which I think is more in need of explanation.

Our GS doesn’t just need to displace the micropolygons – hence, the term ‘displacement shader’ – but in addition, must compute the normal vector of each Point output. If this normal vector were just the same as the one for the input-triangle, then the Fragment Shader which follows the Geometry Shader would not really be able to shade the resulting surface as having been displaced. And this would be because, especially when viewing a 3D model from within a 2D perspective, the observer does not really see depth. He or she sees changes in surface-brightness, which his or her brain decides must have been due to subtle differences in depth. And so a valid question which arises is, ‘How can a Geometry Shader compute the normal-vectors of all its output Points, especially since shaders typically don’t have access to data that arises outside one invocation?’
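The displacement itself is the easier half of that job. A minimal sketch in GLSL, in which heightMap, basePosition, baseNormal, viewProjection and displacementScale are all hypothetical names – for the displacement map, the interpolated attributes of one micropolygon Point, the combined transform, and a tuning constant:

    // Displace one micropolygon Point along the base geometry's normal-vector:
    float h = texture(heightMap, uv).r;   // the height, sampled once at this Point's UV
    vec3 displaced = basePosition + baseNormal * (h * displacementScale);
    gl_Position = viewProjection * vec4(displaced, 1.0);
    EmitVertex();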

(Updated 07/08/2018, 7h55 … )

(As Of 07/06/2018 : )

And there is an available answer, which seems simple at first glance: ‘For each Point output, instead of sampling the depth-map once, sample it three times. Use the derivative of the depth along the texture-U-coordinate, and the derivative along the texture-V-coordinate, to compute the per-micropolygon-point normal vectors.’ This would resemble an earlier exercise, of computing a normal-map from a mere height-map.
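A sketch of that answer in GLSL, assuming that heightMap is the depth-map, that texel gives the size of one texel in UV units, and that heightScale relates sampled heights to texture units – again, hypothetical names:

    // Derive a tangent-space normal-vector from 3 samples of the height-map,
    // using forward differences along U and along V:
    vec3 normalFrom3Texels(sampler2D heightMap, vec2 uv, vec2 texel, float heightScale)
    {
        float h0 = texture(heightMap, uv).r;
        float hu = texture(heightMap, uv + vec2(texel.x, 0.0)).r;  // one texel along U
        float hv = texture(heightMap, uv + vec2(0.0, texel.y)).r;  // one texel along V

        float dhdu = (hu - h0) * heightScale;
        float dhdv = (hv - h0) * heightScale;

        // In tangent-space, +Z points away from the surface:
        return normalize(vec3(-dhdu, -dhdv, 1.0));
    }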

But then the main caveat which ensues would be, that the resulting normal-vectors would be in texture-space, i.e. in tangent-space. ( :1 ) They would most probably need to be rotated into view-space. And so it would seem to follow that wherever micropolygons are being implemented, there is a need for some form of tangent-space-mapping as well. And yet the job which was once additionally needed, of parallax-mapping, would be taken care of in this case, since the micropolygons output are fully Z-buffer-aware, and therefore occlude each other superbly.
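That rotation would usually be carried out with a TBN matrix. A sketch, assuming the tangent (T), bitangent (B) and normal (N) of the base geometry have already been transformed into view-space:

    // Rotate a tangent-space normal-vector into view-space:
    mat3 TBN = mat3(T, B, N);          // columns: tangent, bitangent, normal
    vec3 viewNormal = normalize(TBN * tangentNormal);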

(Update 07/06/2018, 18h30 : )

1: )

I should also mention that, if the methodology applied to extract normal-vectors from a height-map indeed only sampled the height-map at 3 texels, there would be another defect in the resulting geometry:

At high spatial frequencies, the points of maximum height would not coincide with the points of zero slope, and the regions of maximum slope would not coincide with the regions where the height is in its mid-range. (This follows from the fact that a forward difference is really centred half a texel away from the Point to which it gets attributed.)

And so a more accurate method of deriving a set of normal-vectors from a mere height-map would be one that samples the height-map at 4 texels, like so:

[Figure micropoly_1_c : the 4-texel sampling pattern, straddling the output Point along U and along V]

And then, I’d suggest the corresponding Math could follow like this:
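Assuming that h(u, v) denotes the height-map sampled at whole-texel offsets from the output Point, and that k is the same sort of height-scaling constant as before, centred differences over those 4 texels would give:

\[ \frac{\partial h}{\partial u} \approx \frac{h(u+1,\,v) - h(u-1,\,v)}{2} \qquad \frac{\partial h}{\partial v} \approx \frac{h(u,\,v+1) - h(u,\,v-1)}{2} \]

\[ \vec{N}_{tangent} = \operatorname{normalize}\left( -k\,\frac{\partial h}{\partial u},\ -k\,\frac{\partial h}{\partial v},\ 1 \right) \]

Because each difference now straddles the output Point symmetrically, the slope is centred exactly where it gets attributed, so that maximum height does coincide with zero slope. In the GLSL sketch from before, only the two difference-lines would change:

    float dhdu = (texture(heightMap, uv + vec2(texel.x, 0.0)).r
                - texture(heightMap, uv - vec2(texel.x, 0.0)).r) * 0.5 * heightScale;
    float dhdv = (texture(heightMap, uv + vec2(0.0, texel.y)).r
                - texture(heightMap, uv - vec2(0.0, texel.y)).r) * 0.5 * heightScale;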



 

(Update 07/07/2018, 15h00 : )

I have observed that a paid-for application, for which I hold a user-license, uses a Geometry Shader to tessellate Terrain, rather than a DirectX 11 tessellator. And I’ve asked myself why the programmers made this decision, which actually enables me to continue using their software. And given the thoughts expressed above, I’ve reached the tentative conclusion that, if the GS is used in this way, at least Triangles can be output, while if a hardware-tessellation stage – i.e. a Hull Shader plus a Domain Shader – were used, then the output-geometry more or less has to be in the form of ‘Surfels’, i.e., Point-Sprites…

Dirk

 
