A Forgotten Historical Benefit of Marching Tetrahedra?

One of the facts that Wikipedia mentions is that, for 20 years, there was a patent on the “Marching Cubes” algorithm, which essentially forced some software developers – especially Linux and other open-source developers – to use “Marching Tetrahedra” as an alternative. But I think that this article has one flaw:

Its assumptions are too modern.

What this article states is that, as with Marching Cubes, individual tetrahedra can be processed by the GPU, with the resulting geometry emitted as “Triangle Strips”. The problem with this is that having the GPU emit triangle strips in this way requires ‘a real Geometry Shader’, which is only available under DirectX 10(+) or OpenGL 3(+).

Coders were already working with Iso-Surfaces during the DirectX 9.0c / OpenGL 2 days, when there were no real Geometry Shaders. One of the limitations that existed in the hardware then was that, even though the Fragment Shader received its inputs from vertices grouped as triangles, Vertex Shaders would only get to probe one vertex at a time. So, what early coders actually did was implement a kind of poor man’s geometry shader within the Fragment Shader. This was possible because one of the pixel formats which the FS could output also corresponded to one of the vertex formats which a VS could read as input.

Hence, a Fragment Shader running in this fashion would render its output – under the pretense that it formed an image – into the Vertex Buffer of another rendering pipeline. This was appropriately named “Render-To-Vertex-Buffer”, or ‘R2VB’. And today, graphics cards exist which no longer permit R2VB, but which permit OpenGL 4 and/or real Geometry Shaders, the latter of which can group their Output Topologies into Triangle Strips.
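For context, the nearest OpenGL 2-era equivalent of this trick worked by packing the rendered pixels into a buffer object, and then re-binding that same buffer as vertex data. What follows is only a minimal sketch of that idea, assuming a context with the pixel-buffer-object extension; the function name, formats and draw call are my own illustrative assumptions, not code from any historical project:

```cpp
// A minimal sketch, NOT historical code: the OpenGL 2-era way of copying a
// Fragment Shader's floating-point output into a buffer object, which is
// then re-bound as vertex data.  Assumes the ARB_pixel_buffer_object
// extension; the function name, sizes and formats are illustrative only.
#include <GL/glew.h>

void renderToVertexBuffer(GLuint pbo, int width, int height)
{
    // 1. The "poor man's geometry shader" pass is assumed to have run
    //    already: a Fragment Shader has written one RGBA-float texel per
    //    output vertex into the current framebuffer.

    // 2. Pack the rendered pixels into the buffer object, staying on the
    //    GPU, instead of reading them back into host memory.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, 0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    // 3. Re-bind the same buffer as a vertex buffer, so that each RGBA
    //    texel is now read as an (x, y, z, w) position by the next pipeline.
    glBindBuffer(GL_ARRAY_BUFFER, pbo);
    glVertexPointer(4, GL_FLOAT, 0, 0);
    glEnableClientState(GL_VERTEX_ARRAY);

    // Draw the packed vertices (assuming here that the texel count is a
    // multiple of three, forming independent triangles).
    glDrawArrays(GL_TRIANGLES, 0, width * height);
}
```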

This poses the question: ‘Given that any one shader invocation can only see its own data, how could this result in a Marching Tetrahedra implementation?’ And I don’t fully know the answer.

Today, I can no longer imagine, in a satisfyingly complete way, how the programmers of the old days solved such problems. Like many other people today, I need the GPU to offer a Geometry Shader – a GS – explicitly, in order to implement one.



In a slightly different way, Marching Tetrahedra will continue to be important in the near future. Coders needed to implement the algorithm on the CPU, not the GPU, because they had Iso-Surfaces to render but no patent rights to the Marching Cubes algorithm, and because programmers are not usually asked to rewrite all their predecessors’ code. Hence, code exists which does all this purely on the CPU, and for which the man-hours don’t exist to convert it all to Marching Cubes code.
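To make concrete what such CPU-based code computes, the following is a minimal sketch of the per-tetrahedron step, written in C++ under my own assumptions about the data layout. It classifies the four corners against the iso-level and emits 0, 1 or 2 triangles; consistent triangle winding, which a real implementation must also get right, is not addressed here:

```cpp
// A minimal sketch of one Marching Tetrahedra step, on the CPU.  The Vec3
// type and the output layout (a flat triangle list) are assumptions; the
// winding order of the emitted triangles is not made consistent here.
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// Linearly interpolate the point where the iso-surface crosses an edge.
static Vec3 interp(float iso, const Vec3 &a, const Vec3 &b, float va, float vb)
{
    float t = (iso - va) / (vb - va);
    return { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}

// p[4]: corner positions of one tetrahedron; v[4]: scalar field values.
// Appends the vertices of 0, 1 or 2 triangles to 'out', in groups of three.
void polygoniseTet(const Vec3 p[4], const float v[4], float iso,
                   std::vector<Vec3> &out)
{
    // One bit per corner: set means the corner lies inside the surface.
    uint8_t mask = 0;
    for (int i = 0; i < 4; ++i)
        if (v[i] < iso)
            mask |= uint8_t(1u << i);

    // Emit a triangle whose vertices lie on edges (a0,a1), (b0,b1), (c0,c1).
    auto tri = [&](int a0, int a1, int b0, int b1, int c0, int c1) {
        out.push_back(interp(iso, p[a0], p[a1], v[a0], v[a1]));
        out.push_back(interp(iso, p[b0], p[b1], v[b0], v[b1]));
        out.push_back(interp(iso, p[c0], p[c1], v[c0], v[c1]));
    };

    switch (mask) {
    case 0x0: case 0xF: break;                       // no crossing at all
    case 0x1: case 0xE: tri(0,1, 0,2, 0,3); break;   // corner 0 isolated
    case 0x2: case 0xD: tri(1,0, 1,3, 1,2); break;   // corner 1 isolated
    case 0x4: case 0xB: tri(2,0, 2,1, 2,3); break;   // corner 2 isolated
    case 0x8: case 0x7: tri(3,0, 3,2, 3,1); break;   // corner 3 isolated
    case 0x3: case 0xC:                              // corners {0,1} vs {2,3}
        tri(0,2, 0,3, 1,3);
        tri(0,2, 1,3, 1,2);
        break;
    case 0x5: case 0xA:                              // corners {0,2} vs {1,3}
        tri(0,1, 2,1, 2,3);
        tri(0,1, 2,3, 0,3);
        break;
    case 0x6: case 0x9:                              // corners {1,2} vs {0,3}
        tri(1,0, 1,3, 2,3);
        tri(1,0, 2,3, 2,0);
        break;
    }
}
```

A full implementation would additionally split each cube of the sampling grid into five or six such tetrahedra, and feed each one to a function like this.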

(Update 5/09/2020, 17h30… )

Continue reading A Forgotten Historical Benefit of Marching Tetrahedra?

There is a bug in the Wayland Compositor under Debian Stretch.

One of the facts which I have written about before is that modern desktop managers will use compositing – i.e., hardware acceleration – to render desktop effects, specifically when we are only running regular, 2D applications with a GUI. This feature exists with the old KDE 4 under Debian / Jessie, as well as with the new Plasma 5 under Debian / Stretch.

Under Debian / Jessie, this feature is extremely stable. Under Debian / Stretch, it is not yet so.

What will happen under Debian / Stretch, as far as I can make out, is that if an attempt has been made to disable compositing, instead of this succeeding, the desktop session becomes corrupted, in that black rectangles will display when we simply open multiple windows / dialogs. AFAICT, this can only be fixed by rebooting / starting a new user session.

I became aware of this when running Steam-based games on the computer I name ‘Plato’. When games run that are heavy on OpenGL / hardware rendering, it’s normal for the game platform to try to switch compositing off, because often, the hardware rendering of the game is not compatible with the desktop compositing. After I have finished my session with Steam, the rendering errors in my desktop manager become noticeable, and Steam does not gain the permissions to install any system software.

I do not blame this on Steam per se, because I can reproduce this problem just by pressing <Shift>+<Alt>+F12, which used to be the key combination under KDE 4 that toggled desktop compositing on and off at will. Within seconds, under Plasma 5, this key combination will also cause the malfunction.

(Updated 12/03/2017 : )

Now, there is a simplistic workaround for me:


Continue reading There is a bug in the Wayland Compositor under Debian Stretch.

The role Materials play in CGI

When content designers work with their favorite model editors or scene editors, in 3D, towards providing either a 3D game or another type of 3D application, they will often not map their 3D models directly to texture images. Instead, they will often connect each model to one Material, and the Material will then base its behavior on zero or more texture images. And a friend of mine has asked what this describes.

Effectively, these Materials replace what a programmed shader would do to define the surface properties of the simulated 3D model. They tend to have a greater role in CPU rendering / ray tracing than they do with raster-based, DirectX- or OpenGL-based graphics, but high-level editors may also be able to apply Materials to the hardware-rendered graphics, if they can provide some type of predefined shader that implements what the Material is supposed to implement.

A Material will often state such parameters as Gloss, Specular, Metallicity, etc.. When a camera reflection vector is computed, this reflection vector will land in some 3D direction relative to the defined light sources. Hence, a dot product can be computed between it and the direction of the light source. Gloss represents the power to which this dot product is raised, so that higher values result in specular highlights that become narrower. Often, Gloss must be compensated for the fact that a cosine lobe raised to a higher power integrates to less than (1.0) times what the lower power did, and that therefore, the average brightness of a glossy surface would otherwise seem to decrease…
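The following is a minimal sketch of the kind of computation that a Gloss parameter stands for, in the classic Phong form; the vector type, the helper function and the normalization factor are my own assumptions, not any particular engine’s definition:

```cpp
// A minimal sketch of the specular term that a Material's Gloss parameter
// controls, in the classic Phong form; the Vec3 type, the helper and the
// normalization factor are assumptions, not any particular engine's API.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// R: the camera reflection vector; L: the direction toward the light source.
// Both are assumed to be unit-length.
float specularTerm(const Vec3 &R, const Vec3 &L, float gloss)
{
    float d = std::max(dot(R, L), 0.0f);

    // Raising the dot product to the Gloss power narrows the highlight.
    float highlight = std::pow(d, gloss);

    // A cosine lobe raised to a higher power integrates to less than the
    // lower-powered one, so a compensation factor - here, the common Phong
    // normalization (gloss + 2) / 2 - keeps average brightness from sinking.
    return highlight * (gloss + 2.0f) * 0.5f;
}
```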

But, if a content designer enrolls a programmed shader, especially a Fragment Shader, then this shader replaces everything that a Material would otherwise have provided. It is often less practical, though not impossible, to implement a programmed shader in software-rendered contexts, where mainly for this reason, the use of Materials still prevails.

Also, the notion often occurs to people, however unproven, that Materials will only provide basic shading options, such as ‘DOT3 Bump-Mapping’, so that programmed shaders need to be used if more sophisticated shading options are required, such as Tangent-Mapping. Yet, as I just wrote, every blend mode a Material offers is defined by some sort of predefined shader – i.e., by a pre-programmed algorithm.

OGRE is an open-source rendering system which requires that content designers assign Materials to their models, even though hardware rendering is being used; these Materials then cause shaders to be loaded. Hence, if an OGRE content designer wants to code his own shader, he must first also define his own Material, which will then load his custom shader.
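For illustration, this is roughly the shape such a definition takes in OGRE’s material-script format; the program name, source file and texture named below are hypothetical placeholders, not taken from any real project:

```
// A hypothetical OGRE material script, only to illustrate the mechanism;
// the program name, source file and texture below are placeholders.
fragment_program MyCustomFragmentShader glsl
{
    source my_custom_shader.frag
}

material MyCustomMaterial
{
    technique
    {
        pass
        {
            // Loading the Material causes this custom shader to be loaded.
            fragment_program_ref MyCustomFragmentShader
            {
            }

            texture_unit
            {
                texture base_colour.png
            }
        }
    }
}
```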

Continue reading The role Materials play in CGI

Why R2VB Should Not Simply be Deprecated

The designers of certain graphics cards / GPUs have decided that Render-To-Vertex-Buffer is deprecated. In order to appreciate why I believe this to be a mistake, the reader first needs to know what R2VB is – or was.

The rendering pipelines of DirectX 9 and DirectX 11 are somewhat different, yet also very similar. DirectX 9 was extremely versatile, with a wide range of applications written that use it, while the fancier Dx 11 pipeline is more powerful, but has less of an established base of algorithms.

Dx 9 is approximated in OpenGL 2, while Dx 10 and Dx 11 are approximated in OpenGL 3(+).

Continue reading Why R2VB Should Not Simply be Deprecated