About how I won’t be doing any ‘ASL’ computing soon.

There exists an open-source code library named 'ASL', which stands for "Advanced Simulation Library". Its purpose is to allow application designers who are not deep experts at writing C++ code to perform fluid simulations, with the volume-based simulations running on the GPU of the computer instead of on the CPU. This is also why people say that 'ASL' is hardware-accelerated.

Last night I figured that 'ASL' should run nicely on the Debian / Stretch computer I name 'Plato', because that computer has a GeForce GTX 460 graphics card, which was considered state-of-the-art in 2011. But unfortunately for me, 'ASL' will only run its simulations correctly if the GPU delivers OpenCL version 1.2 or greater. The GeForce GTX 460 is only capable of OpenCL 1.1, and is therefore no longer state-of-the-art by far.
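For readers who want to check this on their own hardware before compiling anything, the following is a minimal sketch of my own (not part of 'ASL') that prints the OpenCL version string each GPU reports, using the standard OpenCL C API:

```cpp
// check_cl_version.cpp - minimal sketch: print the OpenCL version of each GPU.
// Build (assuming an OpenCL SDK is installed): g++ check_cl_version.cpp -lOpenCL
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            char version[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof(version), version, nullptr);
            // A GTX 460 reports something along the lines of "OpenCL 1.1 ...";
            // 'ASL' needs this string to say 1.2 or greater.
            std::printf("%s : %s\n", name, version);
        }
    }
    return 0;
}
```

The command-line tool 'clinfo' reports the same information, if one prefers not to compile anything.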

Last night I worked until exhausted, trying various solutions, in the hope that maybe the library had not been compiled correctly. (I custom-compiled it after finding out that the simulations were not running correctly.) I also looked into the possibility that maybe I had just not been executing the sample experiments correctly. But alas, the problem was my 'weak' graphics card, which is nevertheless OpenGL 4 -capable.

As an alternative to using 'ASL', Linux users can use the open-source program-set called 'Elmer'. Its programs run on the CPU.

Further, there is an associated GUI application called 'ParaView', the purpose of which is to take as input volume-based geometries and arbitrary values (i.e., fluid states) and to render those with some amount of graphics finesse. In other words, 'ParaView' can be used to post-process the simulations that were created with 'ASL' or with 'Elmer' into a presentable visual. The version of 'ParaView' that installs from the package manager under Debian / Stretch, 5.1.x, works fine. But for a while last night, I did not know whether the problems I was running into were actually due to 'ASL' or to 'ParaView' misbehaving. And so what I also did was custom-compile 'ParaView', to version 5.5.2. If one does this, the next problem one has is that ParaView 5.5.2 requires VTK v7, while under Debian / Stretch all we have is VTK v6.3. And so on my platform, version 5.5.2 of ParaView encounters problems, in addition to 'ASL' encountering problems. For a while I had difficulty identifying what the root causes of these bugs were.

Finally, the custom-compiled, development-branch version of 'Elmer' and the package-manager-installed 'ParaView' 5.1.x will serve me fine.

Dirk

 

The role that Materials and Textures play in CGI

I once had a friend who asked me what the difference was between a Texture and a Material, in CGI. Then as now, it's difficult to provide a definitive answer which is true in all cases, because each graphics application framework has a slightly different definition of what a material is.

What I had told my friend was that, in general, a material is a kind of node to which we can attach texture images, but that usually the material additionally allows the content designer to specify certain parameters with which the textures are to be rendered. My friend next wanted to know what, then, the difference was between a material and a shader. Basically, when we use material nodes, we usually don't code any shaders. But if we did code a shader, then the logical place to tell our graphics application to load it is as another parameter of the material. In fact, the subtle details of what a material does are often defined by a shader of some sort, which the content designer doesn't see.
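If it helps to make this concrete, the following is a minimal sketch, with my own invented names rather than any particular framework's API, of what a material typically amounts to as a data structure: a bundle of texture references, numeric parameters, and an optional shader reference.

```cpp
// A hypothetical, framework-agnostic material record, for illustration only.
#include <map>
#include <optional>
#include <string>

struct Material {
    // Texture images, keyed by the role they play (e.g. "diffuse", "normal").
    std::map<std::string, std::string> textures;   // role -> image file path

    // Numeric parameters that tell the renderer how to use those textures.
    std::map<std::string, float> parameters;       // e.g. {"specular", 0.4f}

    // Optionally, a custom shader; if absent, the framework's built-in
    // (often invisible) shader does the work.
    std::optional<std::string> shaderProgram;      // path or identifier
};

int main() {
    Material brick;
    brick.textures["diffuse"] = "brick_color.png";
    brick.textures["normal"]  = "brick_bumps.png";
    brick.parameters["specular"] = 0.1f;
    // No shaderProgram set: the application falls back on its default shading.
    return 0;
}
```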

But materials will sometimes have a more powerful GUI, which displays nodes visually in front of the content designer and allows him to connect them, in order to decide how his texture images are to be combined into the final appearance of a 3D model.

My friend was not happy with this answer, because I had not defined what a material is in a way that applies to ALL graphics applications. And the reason I did not is the fact that each graphics application is slightly different in this regard.

Dirk

 

Something Discouraging, When Using ‘Ayam’

One of the subjects which I have written about, here and here, is that I seem to like the open-source application named 'Ayam', as a potential way to create images which are 2D perspective views of a virtual 3D scene. This program just might do for me what the proprietary program 'Bryce' once did for me under Windows.

But obviously it's harder to use Ayam to create any meaningful images. One reason seems to be that doing so requires us to define explicit, simulated light sources, and that there are no defaults that would set up these light sources with parameters that lead to good images.

I.e., the program will compute per-pixel color values that are derived from the positions, geometries, and intensities of the light sources, but with no advance warning to the user of what intensities he needs to give his light sources, so that the pixel-brightness values end up in range.

Usually, if we begin by rendering a cube and find that it renders only as a black outline against a transparent background, then we may have everything set up correctly except for the intensity of the light source. It might just take some trial and error before we set that correctly. What I have found is that, on their arbitrary scale, an intensity of 16 or so just works well in the experimental scenes I've created.

At the same time, 'Ayam' has a specific idiosyncrasy to its GUI which I rather appreciate. Its many numeric parameter widgets have an arrow to their right and an arrow to their left, which respectively increase or decrease the value in the numeric field. The way these little buttons work is to either double or halve the value they alter.

I suppose this could confuse some first-time users, because the numeric value in the field defaults to zero. This means that clicking on the buttons does not alter the value, because doubling or halving zero results in zero. But what we can do is type a (+1) or a (-1) into the numeric field, and then, if we find that the parameter in question isn't even close to where it's supposed to be, repeatedly use the buttons, until we find either that (+16) is approximately correct, or that (+0.0625) is about correct… Users just need to remember that after choosing widget settings, we must also click 'Apply' before moving on, or else 'Ayam' will tend to forget them.
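The behavior of those arrow buttons is trivial, but it explains why a starting value of zero goes nowhere. Here is a minimal sketch of the apparent logic, my own reconstruction from observed behavior rather than Ayam's actual source code:

```cpp
// Reconstruction of the apparent arrow-button behavior, for illustration only.
#include <cstdio>

double clickRightArrow(double value) { return value * 2.0; }  // double the field
double clickLeftArrow(double value)  { return value / 2.0; }  // halve the field

int main() {
    double v = 0.0;
    std::printf("%g\n", clickRightArrow(v)); // 0: doubling zero stays at zero
    v = 1.0;                                 // so type 1 into the field first,
    for (int i = 0; i < 4; ++i) v = clickRightArrow(v);
    std::printf("%g\n", v);                  // then four clicks later: 16
    return 0;
}
```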

Dirk

 

Quickie: How 2D Graphics is just a Special Case of 3D Graphics

I have previously written in depth about what the rendering pipeline is, by which 3D graphics are rendered to a 2D perspective view, as part of computer games or as part of other applications that require 3D in real time. But one problem with my writing in depth might be that people fail to see the relevance of the words, if the word count goes beyond 500. :-)

So I'm going to try to summarize it more briefly.

Vertex positions in 3D can be rotated and translated using matrices. Matrices can be composited, meaning that if a sequence of multiplications of position vectors by known matrices accomplishes what we want, then a multiplication by a single, derived matrix can accomplish the same thing.
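In symbols (my own notation, using homogeneous coordinates, since the post does not spell this out): if a matrix R rotates a position vector v and a matrix T then translates the result, the two steps collapse into one derived matrix M:

```latex
v' \;=\; T\,(R\,v) \;=\; (T\,R)\,v \;=\; M\,v, \qquad M = T\,R
```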

Under DirectX 9 or OpenGL 2.x, 3D objects consisted of vertices that formed triangles. The positions and normal vectors of those vertices were transformed and rotated, respectively, and each vertex additionally possessed texture coordinates; all of this could be processed by "Vertex Pipelines". The output from the Vertex Pipelines was then rasterized and interpolated, and fed to "Pixel Pipelines", which performed per-screen-pixel computations on the interpolated values, and on how those values were applied to the Texture Images being sampled.
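As a rough sketch of the per-vertex work a Vertex Pipeline was doing, restated on the CPU in C++ with my own helper names (not any actual driver code):

```cpp
// Per-vertex work of a circa-DirectX-9 / OpenGL-2.x vertex stage, restated in software.
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<float, 16>;   // row-major, for this sketch

// Multiply a 4x4 matrix by a 4-component vector.
Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[row * 4 + col] * v[col];
    return r;
}

struct VertexIn  { Vec4 position; Vec4 normal; float u, v; };
struct VertexOut { Vec4 clipPosition; Vec4 worldNormal; float u, v; };

// 'modelViewProjection' moves positions toward screen (clip) space;
// 'rotationOnly' rotates normals without translating or projecting them.
VertexOut vertexStage(const VertexIn& in,
                      const Mat4& modelViewProjection,
                      const Mat4& rotationOnly) {
    VertexOut out;
    out.clipPosition = mul(modelViewProjection, in.position); // transformed
    out.worldNormal  = mul(rotationOnly, in.normal);          // rotated
    out.u = in.u;                                             // texture coordinates
    out.v = in.v;                                             // passed through
    return out;
}
```

The rasterizer then interpolates these outputs across each triangle, and the Pixel Pipelines do their per-pixel work on the interpolated values.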

All this work was done by dedicated graphics hardware, which is now known as a GPU. It was not done by software.

One difference that exists today is that the specialization of GPU cores into Vertex and Pixel Pipelines no longer exists. Due to something called the Unified Shader Model, any one GPU core can act either as a Vertex Shader or as a Pixel Shader, and powerful GPUs possess hundreds of cores.

So the practical question does arise of how any of this applies to 2D applications, such as Desktop Compositing. And the answer would be that it has always been possible to render a single rectangle as though oriented in a 3D coordinate system. This rectangle, which is also referred to as a "Quad", first gets tessellated, which means that it receives a diagonal subdivision into two triangles, which still reference the same 4 vertices as before.
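In data terms that tessellation costs almost nothing: the same four corners are simply named twice by an index list. A minimal sketch (generic vertex data, not tied to any particular API):

```cpp
// A quad as 4 shared corners plus an index list naming two triangles.
float quadCorners[4][2] = {
    {0.0f, 0.0f},  // 0: bottom-left
    {1.0f, 0.0f},  // 1: bottom-right
    {1.0f, 1.0f},  // 2: top-right
    {0.0f, 1.0f},  // 3: top-left
};

// The diagonal subdivision: triangles (0,1,2) and (0,2,3) reuse the corners.
unsigned int quadIndices[6] = { 0, 1, 2,   0, 2, 3 };
```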

When an application receives a drawing surface onto which it draws its GUI (using CPU time), the corners of this drawing surface have 2D texture coordinates that are combinations of 0 and +1. The drawing surfaces themselves can be input to the GPU as though they were Texture Images. And the 4 vertices that define the position of the drawing surface on the desktop can simply result from a matrix that is much simpler than any matrix would have needed to be, had it performed rotation in 3D etc. before a screen position could be formed from it. Either way, the Vertex Program only needs to multiply the (notional) positions of the corners of a drawing surface by a single matrix, before a screen position results. This matrix does not need to be computed from complicated trig functions in the 2D case.
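To illustrate how simple that matrix can be in the 2D case, here is a sketch with my own example values and helper names (not any compositor's actual code), which places a window's unit-square corners on the desktop using nothing more than a scale and an offset packed into one matrix:

```cpp
// Placing a 'window quad' on the desktop with a single 3x3 affine matrix.
// No trigonometry is needed: only a scale and a translation.
#include <cstdio>

// Hypothetical example: a 640x480 window whose top-left sits at pixel (200, 100).
const float kWindowW = 640.0f, kWindowH = 480.0f;
const float kWindowX = 200.0f, kWindowY = 100.0f;

int main() {
    // Row-major 3x3 matrix acting on homogeneous 2D points (x, y, 1),
    // mapping corner coordinates in [0,1] to desktop pixel coordinates.
    float m[3][3] = {
        { kWindowW, 0.0f,     kWindowX },
        { 0.0f,     kWindowH, kWindowY },
        { 0.0f,     0.0f,     1.0f     },
    };

    // The quad's notional corners: every combination of 0 and +1.
    float corners[4][3] = { {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1} };

    for (auto& c : corners) {
        float x = m[0][0]*c[0] + m[0][1]*c[1] + m[0][2]*c[2];
        float y = m[1][0]*c[0] + m[1][1]*c[1] + m[1][2]*c[2];
        std::printf("corner (%g, %g) -> desktop pixel (%g, %g)\n", c[0], c[1], x, y);
    }
    // (A real compositor would also rescale these pixel positions into the
    //  GPU's normalized device coordinates, but still without any trig.)
    return 0;
}
```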

And the GPU renders the scene to a frame-buffer, just as it rendered 3D games.
