A forgotten Historical benefit, of Marching Tetrahedra?

One of the facts which Wikipedia mentions is that, for 20 years, there was a patent on the “Marching Cubes” algorithm, which effectively forced some software developers – especially Linux and other open-source developers – to use “Marching Tetrahedra” as an alternative. But I think that this article has one flaw:

Its assumptions are too modern.

What this article states is that, like Marching Cubes, individual tetrahedra can be fed to the GPU as “Triangle Strips”. The problem with this is the fact that triangle strips can only be *generated* on the GPU if DirectX 10(+) or OpenGL 3(+) is available, which means that ‘a real Geometry Shader’ needs to be running.

Coders were already working with Iso-Surfaces during the DirectX 9.0c / OpenGL 2 days, when there were no real Geometry Shaders. And one of the limitations that existed in the hardware then was that, even though the rasterizer assembled vertices into triangles, a Vertex Shader would only get to probe one vertex at a time. So, what early coders actually did was to implement a kind of poor man’s geometry shader within the Fragment Shader. This was possible because one of the pixel formats which the FS could output also corresponded to one of the vertex formats which a VS could read as input.

Hence, a Fragment Shader running in this fashion would render its output – under the pretense that it formed an image – into the Vertex Buffer of another rendering pipeline. This technique was therefore appropriately named “Render-To-Vertex-Buffer”, or ‘R2VB’. And today, graphics cards exist which no longer permit R2VB, but which do permit OpenGL 4 and/or real Geometry Shaders, the latter of which can, in turn, group their Output Topologies into Triangle Strips.
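The essence of the R2VB trick is a reinterpretation of bytes: floating-point “pixels” written by one pass are fetched as vertices by the next. Here is a conceptual sketch of that data flow in plain Python, not actual GPU code; the positions and the RGBA32F-style layout are illustrative stand-ins:

```python
# Conceptual sketch of R2VB: a fragment pass writes float "pixels" into a
# buffer, which a later pass reads back as vertex data.  Plain Python
# stands in for both shader stages; nothing here runs on a GPU.
import struct

def fragment_pass(num_pixels):
    """Pretend fragment shader: emits one RGBA32F 'pixel' per invocation.
    RGB carries a vertex position; A is unused padding."""
    out = bytearray()
    for i in range(num_pixels):
        x, y, z = float(i), float(i) * 2.0, 0.0   # made-up positions
        out += struct.pack('<4f', x, y, z, 1.0)
    return bytes(out)

def vertex_pass(buffer):
    """Pretend vertex fetch: reinterprets the same bytes as XYZ vertices."""
    verts = []
    for off in range(0, len(buffer), 16):          # 4 floats * 4 bytes each
        verts.append(struct.unpack_from('<4f', buffer, off)[:3])
    return verts

pixels = fragment_pass(3)
print(vertex_pass(pixels))   # the 'image' turns out to be a vertex list
```

The point of the sketch is only that no format conversion happens between the two passes; the “image” and the vertex buffer are the same bytes.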

This poses the question: ‘Given that any one shader invocation can only see its own data, how could this result in a Marching Tetrahedra implementation?’ And I don’t fully know the answer.

Today, I can no longer imagine, in a satisfyingly complete way, how the programmers of the old days solved such problems. Like many other people today, I need the GPU to offer a Geometry Shader – a GS – explicitly, in order to implement one.



In a slightly different way, Marching Tetrahedra will continue to be important in the near future, because coders needed to implement the algorithm on the CPU, not the GPU: they had Iso-Surfaces to render, but no patent rights to the Marching Cubes algorithm. And, because programmers are not usually asked to rewrite all their predecessors’ code, code exists which does all of this purely on the CPU, and for which the man-hours don’t exist, to convert it all to Marching Cubes code.
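For readers who have never seen such CPU code, the per-tetrahedron step is small. The sketch below shows one plausible way to write it, assuming a density sampled at the four corners and a threshold (H); the vertex ordering and helper names are my own, not taken from any particular legacy codebase:

```python
# A minimal CPU sketch of the per-tetrahedron step of Marching Tetrahedra.
# Corner positions and their densities are given; the surface is where the
# density crosses the threshold H.
def lerp_vertex(p0, p1, d0, d1, H):
    """Point on edge p0-p1 where the density crosses H, by linear
    interpolation."""
    t = (H - d0) / (d1 - d0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

def march_one_tetrahedron(corners, densities, H):
    """Return 0, 1 or 2 triangles approximating the iso-surface inside one
    tetrahedron."""
    inside = [i for i in range(4) if densities[i] > H]
    outside = [i for i in range(4) if densities[i] <= H]
    cut = [lerp_vertex(corners[i], corners[j], densities[i], densities[j], H)
           for i in inside for j in outside]
    if len(inside) in (1, 3):        # 3 crossing edges -> one triangle
        return [tuple(cut)]
    if len(inside) == 2:             # 4 crossing edges -> a quad, split in two
        a, b, c, d = cut             # order: (i0,j0), (i0,j1), (i1,j0), (i1,j1)
        return [(a, b, c), (b, d, c)]
    return []                        # all inside or all outside: no surface
```

A CPU implementation simply loops this over every tetrahedron that a grid cube has been split into, appending the triangles to a mesh.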

(Update 5/09/2020, 17h30… )

Continue reading A forgotten Historical benefit, of Marching Tetrahedra?

How 3D-plotted, implicit functions are often inferior, to ISO-Surfaces rendered for 3D Gaming.

One of the subjects which I revisited in recent weeks has been that either Computer Algebra Systems or other numeric toolboxes may plot functions. And a fact that should be pointed out is that plotting a function, either as a 2D or a 3D plot, is always numeric, even if it’s being offered as part of what a ‘CAS’ (a “Computer Algebra System”) can do. And so, a subcategory of what is sometimes offered is a 3D plot of an implicit function, kind of like this one:

hyperboloid

This is a plot of complementary hyperboloids, which are the 3D counterparts to 2D hyperbolas.

What some people might just wonder is how the refined toolbox works, that plots this type of implicit function. And one way in which this can be done is by generating an ISO-Surface, which is a derived mesh along which a Density, computed from the X, Y and Z parameters, crosses a threshold-value, which can just be named (H) for the sake of this posting.

And, in turn, such an ISO-Surface can be computed by using the ‘Marching Cubes algorithm‘. If it gets used, this algorithm forms a geometry shader which accepts one Point as its input topology, and which outputs a number of triangles from (0) to (4).
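The sampling step behind such a plot can be sketched briefly. The density below is the hyperboloid from the plot above, written as x² + y² − z², and the grid spacing and threshold are illustrative choices; each cube of the grid yields the 8-bit case index that the Marching Cubes lookup table is keyed on:

```python
# Sketch of the sampling step: evaluate the implicit function at the 8
# corners of a grid cube, and build the 8-bit case index used by the
# Marching Cubes table.  A case of 0 or 255 means the surface does not
# pass through the cube.
def density(x, y, z):
    return x * x + y * y - z * z   # the hyperboloid density, as an example

# Corner offsets of a unit cube, one per bit of the case index.
CORNERS = [(i & 1, (i >> 1) & 1, (i >> 2) & 1) for i in range(8)]

def cube_case(x0, y0, z0, size, H):
    """8-bit Marching Cubes case index for the cube anchored at (x0, y0, z0)."""
    case = 0
    for bit, (dx, dy, dz) in enumerate(CORNERS):
        if density(x0 + dx * size, y0 + dy * size, z0 + dz * size) > H:
            case |= 1 << bit
    return case

# A cube the surface passes through yields a case other than 0 or 255:
print(cube_case(0.5, 0.0, 0.0, 1.0, 1.0))
```

A plotting toolbox loops this over the whole grid and only emits triangles for cubes whose case is neither 0 nor 255.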

The question which this posting poses is whether the mesh which is output by such an algorithm will always include vertex-normals. And the short answer is No. Applications exist in which normals are computed, and applications exist where they are not. And so, because some users are used to high-end gaming, and to seeing shaded surfaces – which can only really be shaded if normals have been made available to a fragment shader – those users might find themselves asking why Mathematical plotting algorithms might exist, which never compute real normals.
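One common way in which real normals can be obtained, when they are computed at all, is as the normalized gradient of the density function. The sketch below estimates that gradient with central differences; the hyperboloid density and the step size are illustrative choices of mine:

```python
# Computing a real vertex-normal as the normalized gradient of the density,
# estimated by central differences.  The density is an example implicit
# function; h is an arbitrary small step.
import math

def density(x, y, z):
    return x * x + y * y - z * z           # example implicit function

def normal_at(x, y, z, h=1e-4):
    """Unit surface normal at (x, y, z): the normalized gradient of the
    density, estimated with central differences of step h."""
    gx = (density(x + h, y, z) - density(x - h, y, z)) / (2 * h)
    gy = (density(x, y + h, z) - density(x, y - h, z)) / (2 * h)
    gz = (density(x, y, z + h) - density(x, y, z - h)) / (2 * h)
    length = math.sqrt(gx * gx + gy * gy + gz * gz)
    return (gx / length, gy / length, gz / length)

# At (1, 0, 0) the analytic gradient is (2, 0, 0), so the normal is (1, 0, 0):
print(normal_at(1.0, 0.0, 0.0))
```

A plotting algorithm that skips this step saves six density evaluations per vertex, which is one plausible reason some of them never compute real normals.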

(Updated 5/07/2020, 16h15… )

Continue reading How 3D-plotted, implicit functions are often inferior, to ISO-Surfaces rendered for 3D Gaming.

Why R2VB Should Not Simply be Deprecated

The designers of certain graphics cards / GPUs have decided that Render-To-Vertex-Buffer is deprecated. In order to appreciate why I believe this to be a mistake, the reader first needs to know what R2VB is – or was.

The rendering pipeline of DirectX 9 is somewhat different from that of DirectX 11, yet also very similar. DirectX 9 was extremely versatile, with a wide range of applications written that use it, while the fancier Dx 11 pipeline is more powerful, but has less of an established base of algorithms.

Dx 9 is approximated in OpenGL 2, while Dx 10 and Dx 11 are approximated in OpenGL 3(+).

Continue reading Why R2VB Should Not Simply be Deprecated

A Method for Obtaining Signatures for Photogrammetry

I have posed myself a question in the past: Given a number of photos, each subdivided into a grid of rectangles, how can a signature be derived from each rectangle, leading to some sort of identifier, such that, between photos, these identifiers will either match or not – even though there are inherent mismatches between the photos – in order to decide whether a rectangle in one photo corresponds to the same subject-feature as a different rectangle in the other photo?

Eventually, one would want this information in order to compute a 3D scene-description – a 3D Mesh, with a level of detail equal to how finely the photos were subdivided into rectangles.

Since exact pixels will not be equal, I have thought of somewhat improbable schemes in the past, for just how to compute such a signature. These schemes once went so far as first to compute a 2D Fourier Transform of each rectangle, at 1 coefficient per octave, to quantize those coefficients into 1s and 0s, to ignore the F=(0,0) bit, and then to hash the results.

But just recently I have come to the conclusion that a much simpler method should work.

At full resolution, the photos can be analyzed as though they formed a single image, in the ways already established for computing an 8-bit color palette – i.e., a 256-color palette, like the palettes once used in GIF Images, and in other images that only had 8-bit color.

The index-number within this palette can then be used as an identifier.

After the palette has been established, each rectangle of each photo can be assigned an index number, depending on which color of the palette it best matches. It would be important that this assignment not take place as though we were just averaging the colors of each rectangle. Instead, the strongest basis for this assignment would need to be how many pixels in the rectangle match one color in the palette. (*)

After that, each rectangle will be associated with this identifier, and for each one, the most important result will become, at what distance from its camera-position the greatest number of other cameras confirm its 3D position, according to matching identifiers.
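The palette-assignment step above can be sketched in a few lines. The toy palette and pixels below are illustrative stand-ins; the point is that the rectangle’s identifier is the most frequently matched palette index (the mode), not the index nearest the rectangle’s average color:

```python
# Sketch of the assignment step: each rectangle's identifier is the index
# of the palette colour that the most of its pixels fall nearest to.
from collections import Counter

def nearest_index(pixel, palette):
    """Index of the palette colour closest to the pixel, by squared
    distance in RGB."""
    return min(range(len(palette)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(pixel, palette[i])))

def rectangle_identifier(pixels, palette):
    """The rectangle's signature: the most frequently matched palette
    index among its pixels."""
    counts = Counter(nearest_index(p, palette) for p in pixels)
    return counts.most_common(1)[0][0]

palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0)]       # toy 3-colour palette
pixels = [(250, 10, 10), (240, 0, 5), (10, 10, 10), (255, 5, 0)]
print(rectangle_identifier(pixels, palette))           # 3 of 4 pixels are red
```

Using the mode rather than the average means a rectangle that is mostly red with a few dark pixels still signs as “red”, where an averaged color might drift toward a palette entry that no pixel actually matches.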

Dirk

Continue reading A Method for Obtaining Signatures for Photogrammetry