A Forgotten Historical Benefit of Marching Tetrahedra?

One of the facts which Wikipedia mentions is that, for 20 years, there was a patent on the “Marching Cubes” algorithm, which basically forced some software developers – especially Linux and other open-source developers – to use “Marching Tetrahedra” as an alternative. But I think that this article has one flaw:

Its assumptions are too modern.

What this article states is that, like Marching Cubes, individual tetrahedra can be fed to the GPU as “Triangle Strips”. The problem with this is that having the GPU itself generate those triangle strips only becomes practical with DirectX 10(+) or OpenGL 3(+), which means that ‘a real Geometry Shader’ needs to be running.

Coders were already working with Iso-surfaces during the DirectX 9.0c / OpenGL 2 days, when there were no real Geometry Shaders. One of the limitations that existed in the hardware then was that, even though the Fragment Shader received data from vertices that had been grouped into triangles, Vertex Shaders would usually only get to probe one vertex at a time. So, what early coders actually did was to implement a kind of poor man’s geometry shader within the Fragment Shader. This was possible because one of the pixel formats which the FS could output also corresponded to one of the vertex formats which a VS could read as input.

Hence, a Fragment Shader running in this fashion would render its output – under the pretense that it formed an image – into the Vertex Buffer of another rendering pipeline. This was therefore appropriately named “Render-To-Vertex-Buffer”, or ‘R2VB’. And today, graphics cards exist which no longer permit R2VB, but which do permit OpenGL 4 and/or real Geometry Shaders, the latter of which can, in turn, group their Output Topologies into Triangle Strips.
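To illustrate just the data-format half of that trick, here is a CPU-side analogy in Python with numpy, not actual graphics-API code, in which a floating-point ‘render target’ gets reinterpreted as an array of vertex positions; the buffer size and all the names are made up for illustration:

```python
import numpy as np

# A stand-in for a floating-point render target: a 64x64 'image' whose RGBA
# channels are really the XYZW of generated vertex positions.  In a real R2VB
# pass, a Fragment Shader would have filled this texture on the GPU.
width, height = 64, 64
render_target = np.zeros((height, width, 4), dtype=np.float32)
render_target[..., 3] = 1.0                     # the W component of each position

# 'Binding' that texture as a vertex buffer amounts to reinterpreting the same
# bytes: one RGBA32F pixel == one XYZW vertex.
vertex_buffer = render_target.reshape(-1, 4)
print(vertex_buffer.shape)                      # (4096, 4): 4096 candidate vertices
```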

This poses the question: ‘Since any one shader invocation can only see its own data, how could this have resulted in a Marching Tetrahedra implementation?’ And I don’t fully know the answer.

Today, I can no longer imagine, in a satisfyingly complete way, how the programmers of those days solved such problems. Like many other people today, I need to assume that the GPU offers a Geometry Shader – a GS – explicitly, in order to implement what a GS does.

In a slightly different way, Marching Tetrahedra will continue to be important in the near future. Coders once needed to implement the algorithm on the CPU, not the GPU, because they had Iso-Surfaces to render but no patent rights to the Marching Cubes algorithm, and programmers are not usually asked to rewrite all of their predecessors’ code. Hence, code exists which does all of this purely on the CPU, and for which the man-hours don’t exist to convert it all into Marching Cubes code.
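For reference, the per-tetrahedron case logic at the heart of the algorithm is quite small. What follows is a minimal CPU-side sketch in Python; the function names are my own, it assumes the convention that a sample below the iso-level counts as ‘inside’, and it ignores both consistent triangle winding and the earlier step of decomposing each cube into tetrahedra:

```python
import numpy as np

def edge_point(p, v, i, j, iso):
    """Interpolate the point at which the iso-surface crosses edge (i, j)."""
    t = (iso - v[i]) / (v[j] - v[i])
    return p[i] + t * (p[j] - p[i])

def march_tetrahedron(p, v, iso):
    """Triangulate the iso-surface inside one tetrahedron.
    p: 4x3 array of corner positions; v: the 4 scalar samples; iso: threshold.
    Returns a list of triangles, each a 3x3 array of vertex positions."""
    inside = [i for i in range(4) if v[i] < iso]
    outside = [i for i in range(4) if v[i] >= iso]
    if len(inside) in (0, 4):
        return []                                  # the surface misses this tetrahedron
    if len(inside) in (1, 3):
        # One corner sits alone on its side; a single triangle cuts it off.
        lone = inside[0] if len(inside) == 1 else outside[0]
        others = [i for i in range(4) if i != lone]
        return [np.array([edge_point(p, v, lone, j, iso) for j in others])]
    # Two corners on each side: the cut is a quad, emitted here as two triangles.
    a, b = inside
    c, d = outside
    quad = [edge_point(p, v, a, c, iso), edge_point(p, v, a, d, iso),
            edge_point(p, v, b, d, iso), edge_point(p, v, b, c, iso)]
    return [np.array(quad[:3]), np.array([quad[0], quad[2], quad[3]])]
```

Because each tetrahedron only has 16 corner configurations, as opposed to the 256 of a cube, this sort of case logic was easy to write, and easy to keep on the CPU.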

(Update 5/09/2020, 17h30… )

Continue reading A Forgotten Historical Benefit of Marching Tetrahedra?

Another Thought About Micropolygons

One task which I once undertook was to provide a general-case description of what the possible shader-types are that can run on a GPU – on the Graphics-Processing Unit of a powerful Graphics Card. But one problem in providing such a panoply of ideas is the fact that too many shader-types can run, for one synopsis to explain all their uses.

One use for a Geometry Shader is to treat an input-triangle as if it consisted of multiple smaller triangles, each of which would then be called ‘a micropolygon’, so that the points between these micropolygons can be displaced along the normal-vector of the base-geometry from which the original triangle came. One reason for which the emergence of DirectX 10, which also corresponds to OpenGL 3.x, was followed so quickly by DirectX 11, which also corresponds to OpenGL 4.y, is the fact that the tessellation of the original triangle can be performed most efficiently when yet another type of shader performs only the tessellation. But in principle, a Geometry Shader is also capable of performing the actual tessellation, because in response to one input-triangle, a GS can output numerous points that either form triangles again, or that form triangle strips. In fact, if the overall pattern of the tessellation is rectangular, triangle strips make an output-topology for the GS that makes more sense than individual triangles. But I’m not going to get into ‘Geometry Shaders coded to work as Tessellators’ in this posting.
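Just to make the term ‘micropolygon’ concrete before moving on, here is a minimal CPU-side sketch in Python, not shader code, and with names of my own choosing, that splits one triangle into a regular barycentric grid of n² smaller triangles:

```python
import numpy as np

def subdivide_triangle(p0, p1, p2, n):
    """Split one triangle into n*n 'micropolygons' on a regular barycentric grid.
    Returns a list of small triangles, each a 3x3 array of corner positions."""
    def grid_point(i, j):
        # i steps toward p1, j steps toward p2, with i + j <= n.
        return p0 + (i / n) * (p1 - p0) + (j / n) * (p2 - p0)

    tris = []
    for i in range(n):
        for j in range(n - i):
            # 'Lower' micropolygon of this grid cell.
            tris.append(np.array([grid_point(i, j), grid_point(i + 1, j), grid_point(i, j + 1)]))
            # 'Upper' micropolygon, present except along the diagonal edge.
            if j < n - i - 1:
                tris.append(np.array([grid_point(i + 1, j), grid_point(i + 1, j + 1), grid_point(i, j + 1)]))
    return tris

# Example: 16 micropolygons from one triangle.
micropolygons = subdivide_triangle(np.array([0.0, 0.0, 0.0]),
                                   np.array([1.0, 0.0, 0.0]),
                                   np.array([0.0, 1.0, 0.0]), 4)
```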

Instead, I’m going to focus on a different aspect of the idea of micropolygons, one that I think is more in need of explanation.

Our GS doesn’t just need to displace the micropolygons – hence the term ‘displacement shader’ – but must, in addition, compute the normal vector of each Point output. If this normal vector were just the same as the one for the triangle input, then the Fragment Shader which follows the Geometry Shader would not really be able to shade the resulting surface as having been displaced. And this is because, especially when viewing a 3D model from within a 2D perspective, the observer does not really see depth. He or she sees changes in surface-brightness, which his or her brain decides must have been due to subtle differences in depth. And so a valid question which arises is, ‘How can a Geometry Shader compute the normal-vectors of all its output Points, especially since shaders typically don’t have access to data that arises outside one invocation?’
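I’ll only hint at the sort of answer that is possible, without claiming that this is how any specific engine, or the continuation of this posting, answers it: as long as the displacement is a function which a single invocation can sample at slightly offset parametric positions, a normal can be estimated with finite differences and a cross product. A CPU-side sketch in Python, under that assumption and with made-up names:

```python
import numpy as np

def displaced_normal(base_point, tangent_u, tangent_v, height, u, v, eps=1e-3):
    """Estimate the normal of a displaced surface point by finite differences.
    `height(u, v)` stands in for sampling a displacement map; the base surface
    is approximated locally as base_point + u*tangent_u + v*tangent_v, and the
    displacement is applied along the base normal."""
    base_n = np.cross(tangent_u, tangent_v)
    base_n /= np.linalg.norm(base_n)

    def displaced(uu, vv):
        return base_point + uu * tangent_u + vv * tangent_v + height(uu, vv) * base_n

    # Tangents of the *displaced* surface, by central differences...
    du = (displaced(u + eps, v) - displaced(u - eps, v)) / (2.0 * eps)
    dv = (displaced(u, v + eps) - displaced(u, v - eps)) / (2.0 * eps)
    # ...whose cross product is the normal the Fragment Shader actually needs.
    n = np.cross(du, dv)
    return n / np.linalg.norm(n)

# Example: a gentle bump on a flat patch spanned by the X and Y axes.
normal = displaced_normal(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                          np.array([0.0, 1.0, 0.0]),
                          lambda u, v: 0.1 * np.sin(u) * np.sin(v), 0.5, 0.5)
```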

(Updated 07/08/2018, 7h55 … )

Continue reading Another Thought About Micropolygons

Why R2VB Should Not Simply be Deprecated

The designers of certain graphics cards / GPUs have decided that Render-To-Vertex-Buffer is deprecated. In order to appreciate why I believe this to be a mistake, the reader first needs to know what R2VB is – or was.

The rendering pipelines of DirectX 9 and DirectX 11 are somewhat different, yet also very similar. DirectX 9 was extremely versatile, with a wide range of applications written to use it, while the fancier Dx 11 pipeline is more powerful but has less of an established base of algorithms.

Dx 9 is approximated in OpenGL 2, while Dx 10 and Dx 11 are approximated in OpenGL 3(+).

Continue reading Why R2VB Should Not Simply be Deprecated

Modern Photogrammetry

Modern Photogrammetry makes use of a Geometry Shader – i.e. a Shader which starts with a coarse grid in 3D, and which interpolates a fine grid of micropolygons, again in 3D.

The principle goes that a first-order, approximate 3D model provides per-vertex “normal vectors” – i.e. vectors that always stand out at right angles from the 3D model’s surface in an exact way, in 3D – and that a Geometry Shader actually renders many interpolated points to several virtual camera positions. And these virtual camera positions correspond, in 3D, to the assumed positions from which real cameras photographed the subject.

The Geometry Shader displaces each of these points, but only along its interpolated normal vector derived from the coarse grid, until the positions which those points render to take light-values from the real photos that correlate to the closest extent. I.e., the premise is that, at some exact position along the normal vector, a point generated by the Geometry Shader will have positions in all the real camera-views at which all the real, 2D cameras photographed the same light-value. Finding that point is a 1-dimensional process, because it only takes place along the normal vector, and can thus be achieved with successive approximation.
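As a rough sketch of how that successive approximation could look on a CPU, with a deliberately crude pinhole-camera stand-in for the calibrated cameras that real photogrammetry would use, and with every name being my own invention:

```python
import numpy as np

def make_pinhole_sampler(photo, cam_pos, cam_rot, focal_px):
    """A stand-in 'camera': projects a 3D point into a photo (a 2D array of
    light-values) with a pinhole model, and returns the nearest pixel's value.
    Real photogrammetry would use properly calibrated cameras instead."""
    h, w = photo.shape
    def sample(point):
        local = cam_rot @ (point - cam_pos)          # world -> camera space
        if local[2] <= 0.0:
            return np.nan                            # behind the camera
        x = focal_px * local[0] / local[2] + w / 2.0
        y = focal_px * local[1] / local[2] + h / 2.0
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:
            return photo[yi, xi]
        return np.nan                                # projects outside the photo
    return sample

def refine_along_normal(base_point, normal, samplers, t_max=1.0, steps=32, rounds=4):
    """1-dimensional successive approximation: find the displacement t along the
    normal at which the light-values seen by all cameras agree best, i.e. at
    which their variance is lowest."""
    t_lo, t_hi = 0.0, t_max
    for _ in range(rounds):
        ts = np.linspace(t_lo, t_hi, steps)
        errs = []
        for t in ts:
            vals = [s(base_point + t * normal) for s in samplers]
            errs.append(np.nanvar(np.array(vals)))   # disagreement between the views
        best = int(np.nanargmin(errs))
        t_lo = ts[max(best - 1, 0)]                  # zoom in around the best sample
        t_hi = ts[min(best + 1, steps - 1)]
    return base_point + ts[best] * normal
```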

(Edit 01/10/2017: To make this easier to visualize, suppose the original geometry was just a rectangle; then all the normal vectors would be parallel. Then, if we subdivided this rectangle finely enough, and projected each micropolygon some variable distance along that vector, there would be no reason to say that there exists some point in the volume in front of the rectangle which would not eventually be crossed. At a point corresponding to a 3D surface, all the cameras viewing the volume should in principle have observed the same light-value.

Now, if the normal-vectors are not parallel, then these paths will be denser in some parts of the volume, and less dense in others. But then the assumption becomes that their density should never actually reach zero, and finer subdivision of the original geometry can also counteract this to some extent.

But there can exist many 3D surfaces which would occupy more than one point along the projected path of one micropolygon – such as a simple sphere in front of an initial rectangle. Many paths would enter the sphere at one distance, and exit it again at another. There could exist a whole, complex scene in front of the rectangle. In those cases, starting with a coarse mesh which approximates the real geometry in 3D is more of a help than a hindrance, because then, optimally, there is again only one distance of projection for each micropolygon that will correspond to the exact geometry. )

Now, one observation which some people might make is that the initial, coarse grid might be inaccurate to begin with. But surprisingly, this type of error cancels out. This is because each micropolygon-point will have been displaced from the coarse grid enough that the coarse grid will finally no longer be recognizable from the positions of the micropolygons. And the way the micropolygons are displaced is also such that they never cross paths – since their paths as such are interpolated normal vectors – and so no Mathematical contradictions can result.

That holds, of course, only to whatever extent geometric occlusion has been explained by the initial, coarse model.

Granted, if the initial model was partially concave, then projecting all the points along their normal vectors will eventually cause their paths to cross. But then this also defines the extent beyond which the system no longer works.

But, according to what I just wrote, even the lighting needs to be consistent between one set of 2D photos, so that any match between their light-values actually has the same meaning. And really, it’s preferable to have about 6 such photos…

Yet, there are some people who would argue that superior Statistical Methods could still find the optimal correlations in 1-dimensional light-values between a higher number of actual photos…

One main limitation to providing photogrammetry in practice is the fact that the person doing it may have the strongest graphics card available, but that he eventually needs to export his data to users who do not. So, in the way it works for public consumption, the actual photogrammetry will get done on a remote server – perhaps a GPU farm – but then simplified data can actually get downloaded onto our tablets or phones, data which the mere GPU of such a tablet or phone is powerful enough to render.

But the GPU of the tablet or phone is itself not powerful enough to do the actual successive approximation of the micropolygon-points.

I suppose that Hollywood might not have that latter limitation. As far as they are concerned, their CGI specialists could all have the most powerful GPUs, all the time…

Dirk

P.S. There exists a numerical approach which simplifies computing Statistical Variance in such a way that Variance can effectively be computed over ‘an infinite number of sample-points’, at a computational cost which is ‘only proportional to the number of sample-points’. And the equation is not so complicated.

s² = Mean(X²) - ( Mean(X) )²
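In code, that identity allows the two sums to be accumulated in a single pass, so the memory use stays constant while the cost remains proportional to the number of samples. A minimal sketch:

```python
def streaming_variance(samples):
    """One-pass variance via the identity s² = Mean(X²) - (Mean(X))².
    Works on any iterable, so the samples never need to be stored.
    (For data with a very large mean, this form can lose precision;
    Welford's method is the numerically safer alternative.)"""
    n = 0
    sum_x = 0.0
    sum_x2 = 0.0
    for x in samples:
        n += 1
        sum_x += x
        sum_x2 += x * x
    mean = sum_x / n
    return sum_x2 / n - mean * mean
```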


Continue reading Modern Photogrammetry