There can be curious gaps in what some people understand.

One of the concepts that once dominated CGI was that textures assigned to 3D models needed to include a “Normal-Map”, so that even in the early days of 3D gaming, textured surfaces would seem to have ‘bumps’. These normal-maps were more significant than displacement-maps (i.e., height- or depth-maps), because shaders could compute lighting subtleties more easily from the normal-maps. Additionally, it was quite common that ordinary 8x8x8 (R,G,B) texel-formats were used to store the normal-maps, simply because images could more easily be prepared and loaded in that pixel-format. (:1)

The old-fashioned way to code that was that the 8-bit integer (128) symbolized (0.0), that (255) symbolized a maximally positive value, and that the integer (0) decoded to (-1.0). The reason for this, AFAIK, was that the old graphics cards treated the 8-bit integer as a binary fraction.
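
Written out the way a shader might state it today, that convention amounts to the following. (This is my own restatement, not period code; the old cards would presumably have done the equivalent in fixed-point arithmetic.)

//  The old convention, for one channel:
//    code   0  ->  -1.0
//    code 128  ->   0.0
//    code 255  ->  +0.9921875   (the maximally positive value)
float code = 255.0;                          //  An 8-bit channel value, as a whole number
float component = (code - 128.0) / 128.0;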

In the spirit of recreating that, and because it’s sometimes still necessary to store an approximation of a normal-vector using only 32 bits, code like the following has been offered:

 


//  Encoder (traditional convention), packing three 8-bit codes into one float:
Out.Pos_Normal.w = dot(floor(normal * 127.5 + 127.5), float3(1 / 256.0, 1.0, 256.0));

//  One-line decoder (traditional convention):
float3 normal = frac(Pos_Normal.w * float3(1.0, 1 / 256.0, 1 / 65536.0)) * 2.0 - 1.0;

 

There’s an obvious problem with this backwards-emulation: it can’t reproduce the value (0.0) for any of the elements of the normal-vector. (An element equal to (0.0) encodes to the 8-bit code (127), i.e., floor(127.5), and that code decodes back to (127 / 128 - 1.0) = (-0.0078125), not to (0.0).) And then, what some people do is throw their arms in the air and say: ‘This problem just can’t be solved!’ Well, what about:

 


//  Assumed:
normal = normalize(normal);

//  Custom encoder:  an input of (0.0) maps to the code (128), and the code (0) is never produced.
Out.Pos_Normal.w = dot(floor(normal * 127.0 + 128.5), float3(1 / 256.0, 1.0, 256.0));

 

A definite side effect of this is that no uncompressed value belonging to the interval [-1.0 .. +1.0] will lead to a compressed series of 8 zeros; the smallest code this encoder can produce is (1), which it produces for an input of exactly (-1.0).

Mind you, because of the way the resulting value was decoded again, the question of whether zero can actually result is not as easy to address. One reason is that, for all the elements except the first, the additional bits after the first 8 fractional bits have not been removed. But that’s just a problem with the one-line decoding suggested above, which could be changed to:

 


//  Custom decoder:  isolates each 8-bit code before re-scaling, so that the code (128) maps back to exactly (0.0).
float3 normal = floor(Pos_Normal.w * float3(256.0, 1.0, 1 / 256.0));
normal = frac(normal * (1 / 256.0)) * (256.0 / 127.0) - (128.0 / 127.0);

 

Suddenly, the impossible has become possible.
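
To make the pair easier to reuse, the customized encoder and decoder can also be wrapped into two helper functions. This is only a sketch, and the function names are my own:

//  Packs a normalized vector into three 8-bit codes, stored in one float.
float EncodeNormal(float3 normal)
{
    return dot(floor(normal * 127.0 + 128.5), float3(1 / 256.0, 1.0, 256.0));
}

//  Recovers the vector; the 8-bit code (128) decodes back to exactly (0.0).
float3 DecodeNormal(float packedNormal)
{
    float3 codes = floor(packedNormal * float3(256.0, 1.0, 1 / 256.0));
    return frac(codes * (1 / 256.0)) * (256.0 / 127.0) - (128.0 / 127.0);
}

With these, an element of exactly (0.0) encodes to the code (128), and decodes to frac(128 / 256) * (256 / 127) - (128 / 127), which is exactly (0.0) again.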

N.B.: I would not use the customized decoder unless I was also sure that the input floating-point value came from my customized encoder. It can easily happen that the shader needs to work with texture images prepared by an external program, whose channel-values get normalized to the range [0.0 .. 1.0] when sampled today, in which case I might use this as the decoder:

 


//  Decoder for externally prepared textures, whose sampled channel-values equal (n / 255.0):
float3 normal = texel.rgb * (255.0 / 128.0) - 1.0;

 

However, if I did, a texel-value of (128) would still be the one required to result in a floating-point value of (0.0), because the sampled value (128 / 255), times (255 / 128), equals (1.0), from which (1.0) is then subtracted.

(Updated 5/10/2020, 19h00… )


Musing about Deferred Shading.

One of the subjects which fascinate me is Computer-Generated Images (CGI), specifically, images that render a 3D scene to a 2D perspective. But that subject is still rather vast. One could narrow it by first suggesting an interest in the hardware-accelerated form of CGI, which is also referred to as “Raster-Based Graphics”, and which works differently from ‘Ray-Tracing’. And after that, a further specialization can be made, into a modern form of it, known as “Deferred Shading”.

What happens with Deferred Shading is that an entire scene is Rendered To Texture, but in such a way that, in addition to surface colours, separate output images also hold normal-vectors and a distance-value (a depth-value) for each fragment of this initial rendering. The resulting ‘G-Buffer’ can then be put through post-processing, which results in the final 2D image. (A minimal sketch of what that first pass could write out appears after the list below.) What advantages can this bring?

  • It allows for a virtually unlimited number of dynamic lights,
  • It allows for ‘SSAO’ – “Screen Space Ambient Occlusion” – to be implemented,
  • It allows for more-efficient reflections to be implemented, in the form of ‘SSR’s – “Screen-Space Reflections”.
  • (There could be more benefits.)
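
To make the first pass a bit more concrete, here is a minimal HLSL sketch of a geometry-pass pixel shader that writes such a G-Buffer to multiple render-targets. The struct, sampler and member names are my own inventions, not those of any particular engine, and real engines typically pack the G-Buffer more compactly:

sampler2D DiffuseSampler;

struct PS_INPUT
{
    float2 TexCoord  : TEXCOORD0;
    float3 Normal    : TEXCOORD1;   //  Interpolated, view-space normal
    float  ViewDepth : TEXCOORD2;   //  View-space depth of this fragment
};

struct GBUFFER
{
    float4 Colour : COLOR0;   //  Surface colour
    float4 Normal : COLOR1;   //  Normal-vector, re-encoded into the [0.0 .. 1.0] range
    float4 Depth  : COLOR2;   //  Depth-value, to be read back by the post-processing passes
};

GBUFFER ps_geometry(PS_INPUT In)
{
    GBUFFER Out;
    Out.Colour = tex2D(DiffuseSampler, In.TexCoord);
    Out.Normal = float4(normalize(In.Normal) * 0.5 + 0.5, 1.0);
    Out.Depth  = float4(In.ViewDepth, 0.0, 0.0, 1.0);
    return Out;
}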

One fact people should be aware of, given traditional strategies for computing lighting, is that by default, the fragment shader needs to perform a separate computation for each light source that strikes the surface of a model. An exception to this has been possible with some game engines in the past, where a virtually unlimited number of static lights can be incorporated into a level map by being baked in as additional shadow-maps. But when it comes to computing dynamic lights (lights that can move and change intensity during a 3D game), there have traditionally been limits to how many of them may illuminate a given surface simultaneously, defined by how complex a fragment shader could be made, procedurally.
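
By contrast, here is a minimal sketch of the deferred, lighting pass, which is run once over the screen rather than once per model. It reads the G-Buffer from the sketch above and loops over an array of point lights, so the cost no longer depends on how many lights strike any one surface. The names, the light count and the simple attenuation formula are all my own assumptions:

sampler2D ColourSampler;   //  COLOR0 of the G-Buffer
sampler2D NormalSampler;   //  COLOR1
sampler2D DepthSampler;    //  COLOR2

static const int NUM_LIGHTS = 16;
float3 LightPos[NUM_LIGHTS];      //  View-space light positions
float3 LightColour[NUM_LIGHTS];

float4 ps_lighting(float2 uv : TEXCOORD0, float3 viewRay : TEXCOORD1) : COLOR0
{
    float3 albedo = tex2D(ColourSampler, uv).rgb;
    float3 normal = tex2D(NormalSampler, uv).rgb * 2.0 - 1.0;
    float  depth  = tex2D(DepthSampler,  uv).r;

    //  Assumes (viewRay) was scaled so that its z-component equals 1.0,
    //  so that a view-space position can be reconstructed from the stored depth:
    float3 pos = viewRay * depth;

    float3 lit = float3(0.0, 0.0, 0.0);
    for (int i = 0; i < NUM_LIGHTS; i++)
    {
        float3 toLight = LightPos[i] - pos;
        float  atten   = 1.0 / (1.0 + dot(toLight, toLight));
        lit += LightColour[i] * saturate(dot(normal, normalize(toLight))) * atten;
    }
    return float4(albedo * lit, 1.0);
}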

(Updated 1/15/2020, 14h45 … )


A little trick needed to get Blender to smooth-shade an object.

I was recently working on a project in Blender, which I have little experience doing, and noticed that, after my project was completed, the exported results showed flat-shading of my mesh-approximations of spheres. My intent had been to use mesh-approximations of spheres, but to have them smooth-shaded, such as Phong-Shaded.

Because I was exporting the results to WebGL, my next suspicion was that the WebGL platform was somehow handicapped into always flat-shading the surfaces of its models. But a problem with that suspicion was that, according to something I had already posted, converting a model designed to be smooth-shaded into a flat-shaded model is not only bad practice in modelling, but also difficult to do. Hence, whatever WebGL malfunction might have been taking place would also need to be accomplishing something computationally difficult.

As it turns out, when one wants an object to be smooth-shaded in Blender, there is an extra setting one needs to select, to make it so:

(Screenshot: Screenshot_20200104_124756c)

Once that setting has been clicked for every object that is to be smooth-shaded, they will turn out to be so. Not only that, but the exported file-size actually became smaller, once I had done this for my 6 spheroids, than it had been when they were to be flat-shaded. And this last observation reassures me that:

  • Flat-Shading does in fact work as I had expected, and
  • WebGL is not handicapped out of smooth-shading.

 

It should be pointed out that, while Blender allows Materials to be given different methods of applying Normal Vectors, one of which is “Lambert Shading”, it will not offer the user different methods of interpolating the normal vector between vertex-normals, because this interpolation, if there is to be one, is usually defined by an external program or, in many cases, by the GPU, if hardware-accelerated graphics is to be applied.
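
In the hardware-accelerated case, that interpolation is a fixed part of rasterization, and the fragment shader mostly just has to renormalize what it receives. A minimal HLSL sketch of that, with names and a light direction of my own choosing:

//  Smooth shading from interpolated vertex-normals (a sketch, not anything exported by Blender).
float4 ps_smooth(float3 interpolatedNormal : TEXCOORD0) : COLOR0
{
    //  The rasterizer interpolates the vertex-normals linearly across the triangle;
    //  renormalizing restores unit length before the lighting computation.
    float3 N = normalize(interpolatedNormal);
    float3 L = normalize(float3(0.3, 0.8, 0.5));   //  An arbitrary, fixed light direction
    float  d = saturate(dot(N, L));                //  Simple Lambert term
    return float4(d, d, d, 1.0);
}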

Dirk

 

The role that Materials and Textures play in CGI

I once had a friend who asked me what the difference was between a Texture and a Material, in CGI. As it was then, it’s still difficult to provide a definitive answer which is true in all cases, because each graphics application framework has a slightly different definition of what a material is.

What I had told my friend was that, in general, a material is a kind of node to which we can attach texture-images, but that the material usually also allows the content-designer to specify certain parameters, with which the textures are to be rendered. My friend next wanted to know what, then, the difference was between a material and a shader. Basically, when we use material nodes, we usually don’t code any shaders. But if we did code a shader, then the logical place to tell our graphics application to load it is as another parameter of the material. In fact, the subtle details of what a material does are often defined by a shader of some sort, which the content designer doesn’t see.

But materials will sometimes have a more powerful GUI, which displays Nodes to the content-designer visually, and which allows him to connect those nodes in order to decide how his texture images are to be combined into the final appearance of a 3D model.

My friend was not happy with this answer, because I had not defined what a material is in a way that applies to ALL Graphics Applications. And the reason I did not is that each graphics application is slightly different in this regard.

Dirk