## What’s what with the Aqsis Ray-Tracer, and Qt4 project development under Linux.

One of the subjects I once wrote about was that I had a GUI installed on a Debian 8 / Jessie computer, called ‘Ayam’, and that it was mainly a front-end for 3D / perspective graphics development, using the Aqsis Ray-Tracer (v1.8). A question which some of my readers might have by now could be, why this feature seems to have slipped out of the Linux software repositories, and therefore out of the hands of users of up-to-date Linux systems. Progress is supposed to go forwards, not backwards.

In order to answer that question, I feel that I must start from the opposite end. There exists a ‘GUI library’, i.e., a library of function calls that application programmers can use to give their applications a GUI, without having to write much of the code that actually generates the GUI itself. The GUI library in question is called “Qt”. It’s a nice library but, as with so much other software, there are by now three noticeable major versions: Qt4, Qt5 and Qt6. While in the world at large Qt4 is considered deprecated and Qt6 is the bleeding edge under development, Linux software development has largely still been based on Qt4 and Qt5. So what, you may ask?

Firstly, while it is still possible to develop applications using Qt4 on a Debian 8 or Debian 9 system, switching to it is not as easy under Linux as it is under Windows. When using the ‘Qt SDK’ under Windows, one installs whichever ‘Kit’ one wants, and then compiles one’s Qt project with that Kit. There could be a Qt4, a Qt5 and a Qt6 Kit, and they’d all work. In fact, there is more than one of each…

Under Linux, if one wants to compile code that was still written with Qt4, one actually needs to switch the configuration of the entire computer to Qt4 development, by installing the package ‘qt4-default’. This will replace the package ‘qt5-default’ and set everything up for (backwards-oriented) Qt4 development. After that, Qt4 code can still be written and compiled.

And so, my reader might ask, what does this have to do with Aqsis, and with potentially not being able to use it anymore?

Well, there is another resource known as ‘Boost’, which is a collection of libraries that do many sophisticated things with data, not necessarily involving any sort of GUI. Aqsis happens to depend on Boost. But there was one component of Aqsis, the program ‘piqslr’, whose sole purpose was to provide a quick preview of a small part of a 3D scene, so that an artist could make small adjustments to that scene without having to re-render the whole thing every time. Such features might seem minor at first glance, but in fact they’re major, just as it’s major to have a GUI with which to compose the scene, which in turn controls a ray-tracing program in the background that may have no GUI of its own.

Well, ‘piqslr’ was one program that required both Boost and Qt4. And Qt4 is no longer receiving any sort of development time; Qt4’s present version is final. Additionally, all versions of Qt need to feed their C++ source code through what they call a “Meta-Object Compiler” (the ‘MOC’). And ‘piqslr’ needed to include the Boost header files in its source code, while also being written in Qt4.

Qt4’s MOC was still able to parse the header files of Boost v1.55 (which was standard with Debian 8) without getting lost in them. But if source code includes the corresponding header files for Boost v1.62 (which became the standard with Debian 9), the Qt4 MOC just spits out a tangled snarl of error messages. I suppose that is what one gets, when one sets out to modify the basic way C++ behaves, even in such a minor way.

And so, what the modern versions of Aqsis included in the repositories offer are the programs that do the actual ray-tracing, but no longer that previewer, which had the program name ‘piqslr’. I suppose this development next invites the question of who is supposed to do something about it. And the embarrassing answer is that, in the world of Open-Source Software, it’s not the Debian – or any other Linux – developers who should be patching the problem, but rather the developers of Aqsis itself. They could, if they had the time and money, just rewrite ‘piqslr’ to use Qt5, which can handle up-to-date Boost headers just fine. But instead, for the past 5 years, the Aqsis developers have been showing the world a Web page that simply makes the vague statement that a completely new version is around the corner – one which will then also no longer be compatible with the old version.

(Updated 5/21/2021, 19h15… )

## There can be curious gaps in what some people understand.

One of the concepts which once dominated CGI was that textures assigned to 3D models needed to include a “Normal-Map”, so that even in the early days of 3D gaming, textured surfaces would seem to have ‘bumps’. These normal-maps were more significant than displacement-maps – i.e., height- or depth-maps – because shaders could actually compute lighting subtleties more easily using the normal-maps. Additionally, it was always quite common for ordinary 8x8x8 (R,G,B) texel-formats to store the normal-maps, just because images could more easily be prepared and loaded with that pixel-format. (:1)

The old-fashioned way to code that was that the 8-bit integer (128) was taken to symbolize (0.0), that (255) was taken to symbolize a maximally positive value, and that the integer (0) was decoded to (-1.0). The reason for this, AFAIK, was that the old graphics cards used the 8-bit integer as a binary fraction.
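Expressed in Python (a sketch of that convention which is my own, not any particular graphics API), one mapping consistent with those three anchor points is:

```python
def decode_byte(v):
    # Old-fashioned convention: treat the byte as a signed binary
    # fraction centred on (128).
    # 128 -> 0.0, 0 -> -1.0, 255 -> 127/128 (maximally positive)
    return (v - 128) / 128.0
```

Notice that the maximally positive value under this convention is 127/128, not quite (+1.0).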

In the spirit of recreating that, and because it’s sometimes still necessary to store an approximation of a normal-vector using only 32 bits, code like the following has been offered:


// Encoder – pack the three bytes into one float:
Out.Pos_Normal.w = dot(floor(normal * 127.5 + 127.5), float3(1 / 256.0, 1.0, 256.0));

// Decoder – unpack them again:
float3 normal = frac(Pos_Normal.w * float3(1.0, 1 / 256.0, 1 / 65536.0)) * 2.0 - 1.0;
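To see what this pair actually computes, here is a quick Python emulation (the function names, and the stand-ins for HLSL’s floor(), frac() and dot(), are mine):

```python
import math

def frac(x):
    # HLSL frac(): the fractional part, x - floor(x)
    return x - math.floor(x)

def encode_old(normal):
    # floor(normal * 127.5 + 127.5), then dot(..., float3(1/256, 1, 256))
    q = [math.floor(c * 127.5 + 127.5) for c in normal]
    return q[0] * (1 / 256.0) + q[1] * 1.0 + q[2] * 256.0

def decode_old(w):
    # frac(w * float3(1, 1/256, 1/65536)) * 2 - 1
    return [frac(w * m) * 2.0 - 1.0 for m in (1.0, 1 / 256.0, 1 / 65536.0)]

# A unit normal pointing straight along +z:
d = decode_old(encode_old([0.0, 0.0, 1.0]))
# d[0] comes back as exactly -1/128 = -0.0078125, never 0.0
```

Any input component of (0.0) gets encoded as the byte (127), which decodes back to (-1/128): the round trip has no fixed point at zero.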



There’s an obvious problem with this backwards-emulation: It can’t seem to reproduce the value (0.0) for any of the elements of the normal-vector. And then, what some people do is throw their hands in the air and say: ‘This problem just can’t be solved!’ Well, what about:


//  Assumed:
normal = normalize(normal);

// Encoder – each byte now lands in [1..255], with (128) meaning (0.0):
Out.Pos_Normal.w = dot(floor(normal * 127.0 + 128.5), float3(1 / 256.0, 1.0, 256.0));



A side effect of this will definitely be that no uncompressed value belonging to the interval [-1.0 .. +1.0] leads to a compressed series of 8 zeros.

Mind you, because of the way the resulting value was decoded, the question of whether zero can actually result is not as easy to address. One reason is the fact that, for all the elements except the first, the additional bits after the first 8 fractional bits have not been removed. But that’s just a shortcoming of the one-line decoding that was suggested. It could be changed to:


// Decoder – isolate each byte, then map [1..255] back onto [-1.0 .. +1.0]:
float3 normal = floor(Pos_Normal.w * float3(256.0, 1.0, 1 / 256.0));
normal = frac(normal * (1 / 256.0)) * (256.0 / 127.0) - (128.0 / 127.0);



Suddenly, the impossible has become possible.
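As a check, the modified encoder and the two-line decoder can be emulated in Python (again, the function names and the floor() / frac() stand-ins are mine):

```python
import math

def frac(x):
    # HLSL frac(): the fractional part, x - floor(x)
    return x - math.floor(x)

def encode_new(normal):
    # floor(normal * 127.0 + 128.5): each byte lands in [1..255],
    # with (128) standing for exactly (0.0)
    q = [math.floor(c * 127.0 + 128.5) for c in normal]
    return q[0] * (1 / 256.0) + q[1] * 1.0 + q[2] * 256.0

def decode_new(w):
    # Isolate each byte with floor(), discard the higher bytes with
    # frac(), then map [1..255] back onto [-1.0 .. +1.0]
    t = [math.floor(w * 256.0), math.floor(w * 1.0), math.floor(w * (1 / 256.0))]
    return [frac(c * (1 / 256.0)) * (256.0 / 127.0) - (128.0 / 127.0) for c in t]

d = decode_new(encode_new([0.0, 0.0, 1.0]))
# The x and y components now come back as exactly 0.0, and every
# component is recovered to within 1/254 of its input.
```

It’s also worth noting that the packed value stays below 2^16, and the largest intermediate, w * 256, stays just below 2^24, so the scheme still fits inside a 32-bit float’s mantissa.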

N.B.:  I would not use the customized decoder unless I were also sure that the input floating-point value came from my customized encoder. It can easily happen that the shader needs to work with texture images prepared by an external program, and then, because of the way their channel-values get normalized today, I might use this as the decoder:


// texel.rgb has already been normalized to the interval [0.0 .. 1.0]:
float3 normal = texel.rgb * (255.0 / 128.0) - 1.0;



However, if I did, a texel-value of (128) would still be required to result in a floating-point value of (0.0).

(Updated 5/10/2020, 19h00… )

One of the subjects which fascinates me is Computer-Generated Images, CGI – specifically, rendering a 3D scene to a 2D perspective. But that subject is still rather vast. One could narrow it by first suggesting an interest in the hardware-accelerated form of CGI, which is also referred to as “Raster-Based Graphics”, and which works differently from ‘Ray-Tracing’. And after that, a further specialization can be made, into a modern form of it known as “Deferred Shading”.

What happens with Deferred Shading is that an entire scene is Rendered To Texture, but in such a way that, in addition to surface colours, separate output images also hold normal-vectors and a distance-value (a depth-value) for each fragment of this initial rendering. Then, the resulting ‘G-Buffer’ can be put through post-processing, which results in the final 2D image. What advantages can this bring?

• It allows for a virtually unlimited number of dynamic lights,
• It allows for ‘SSAO’ – “Screen-Space Ambient Occlusion” – to be implemented,
• It allows for more-efficient reflections to be implemented, in the form of ‘SSRs’ – “Screen-Space Reflections”,
• (There could be more benefits.)

One fact which people should be aware of, given traditional strategies for computing lighting, is that by default, the fragment shader needs to perform a separate computation for each light source that strikes the surface of a model. An exception has been possible with some game engines in the past, where a virtually unlimited number of static lights can be incorporated into a level map, by being baked in as additional shadow-maps. But when it comes to computing dynamic lights – lights that can move and change intensity during a 3D game – there have traditionally been limits to how many of them may illuminate a given surface simultaneously. That limit was defined by how complex the fragment shader could, in practice, be made.
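As a toy illustration in Python (the data structures here are inventions of mine, not any engine’s API): once the scene has been reduced to a G-buffer, shading becomes one loop over the dynamic lights per screen pixel, and the cost of that loop no longer depends on how much geometry produced the pixel.

```python
import math

def shade_pixel(albedo, normal, position, lights):
    # Accumulate simple Lambert (diffuse) lighting from every dynamic
    # light; in deferred shading, albedo, normal and position would all
    # have been read back out of the G-buffer.
    out = [0.0, 0.0, 0.0]
    for light in lights:
        to_light = [l - p for l, p in zip(light["pos"], position)]
        dist = math.sqrt(sum(c * c for c in to_light)) or 1.0
        direction = [c / dist for c in to_light]
        n_dot_l = max(0.0, sum(n * d for n, d in zip(normal, direction)))
        attenuation = 1.0 / (dist * dist)
        for i in range(3):
            out[i] += albedo[i] * light["colour"][i] * n_dot_l * attenuation
    return out

# One G-buffer entry: a reddish fragment at the origin, facing +z,
# lit by two hypothetical dynamic point-lights:
lights = [
    {"pos": (0.0, 0.0, 2.0), "colour": (1.0, 1.0, 1.0)},   # in front of the surface
    {"pos": (0.0, 0.0, -2.0), "colour": (1.0, 1.0, 1.0)},  # behind it; contributes nothing
]
colour = shade_pixel((0.8, 0.2, 0.2), (0.0, 0.0, 1.0), (0.0, 0.0, 0.0), lights)
```

Extending `lights` costs one more loop iteration per pixel, which is what makes the ‘virtually unlimited’ claim plausible; a real implementation would additionally cull lights by their screen-space extent.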

(Updated 1/15/2020, 14h45 … )

## A little trick needed, to get Blender to smooth-shade an object.

I was recently working on a project in Blender, which I have little experience doing, and noticed that, after my project was completed, the exported results showed flat-shading of the mesh-approximations of spheres. My intent had been to use mesh-approximations of spheres, but to have them smooth-shaded, such as Phong-shaded.

Because I was exporting the results to WebGL, my next suspicion was that the WebGL platform was somehow handicapped into always flat-shading the surfaces of its models. But a problem with that very suspicion was that, according to something I had already posted, converting a model designed to be smooth-shaded into a flat-shaded model is not only bad practice in modelling, but also difficult to do. Hence, whatever WebGL malfunction might have been taking place would also need to be accomplishing something computationally difficult.

As it turns out, when one wants an object to be smooth-shaded in Blender, there is an extra setting one needs to select (“Shade Smooth”, in current versions), to make it so:

Once that setting has been applied to every object that is to be smooth-shaded, they will turn out to be so. Not only that, but the exported file-size actually became smaller, once I had done this for my 6 spheroids, than it had been when they were flat-shaded. And this last observation reassures me that:

• Flat-Shading does in fact work as I had expected, and
• WebGL is not handicapped out of smooth-shading.
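The smaller exported file also has a plausible mechanical explanation, which can be sketched in Python (a toy model of vertex export that I made up, not Blender’s or WebGL’s actual format): a flat-shaded export must duplicate each shared position once per face, because every copy carries that face’s own normal, while a smooth-shaded export stores each position once, with a single averaged vertex-normal.

```python
# Two non-coplanar triangles sharing an edge; each face has its own
# face-normal (the second one's is only approximate here).
faces = [
    {"verts": ((0, 0, 0), (1, 0, 0), (0, 1, 0)), "normal": (0.0, 0.0, 1.0)},
    {"verts": ((1, 0, 0), (0, 1, 0), (0, 0, 1)), "normal": (0.577, 0.577, 0.577)},
]

def flat_vertex_count(faces):
    # Flat shading: an exported vertex is a (position, face-normal) pair,
    # so the two positions on the shared edge get written twice.
    return len({(v, f["normal"]) for f in faces for v in f["verts"]})

def smooth_vertex_count(faces):
    # Smooth shading: one exported vertex per position, carrying an
    # averaged vertex-normal, so shared positions are written only once.
    return len({v for f in faces for v in f["verts"]})

print(flat_vertex_count(faces), smooth_vertex_count(faces))  # prints: 6 4
```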

It should be pointed out that, while Blender allows Materials to be given different methods of applying normal vectors, one of which is “Lambert Shading”, it will not offer the user different methods of interpolating the normal vector between vertex-normals, because this interpolation, if there is to be one, is usually defined by an external program or, in many cases, by the GPU, if hardware-accelerated graphics are to be applied.

Dirk