There can be curious gaps, in what some people understand.

One of the concepts which once dominated CGI was that textures assigned to 3D models needed to include a “Normal-Map”, so that even in the early days of 3D gaming, textured surfaces would seem to have ‘bumps’. These normal-maps were more significant than displacement-maps – i.e., height- or depth-maps – because shaders could compute lighting subtleties more easily using the normal-maps. Additionally, it was always quite common for ordinary 8x8x8 (R,G,B) texel-formats to store the normal-maps, simply because images could more easily be prepared and loaded with that pixel-format. (:1)

The old-fashioned way to code that was that the 8-bit integer (128) was taken to symbolize (0.0), that (255) was taken to symbolize a maximally positive value, and that the integer (0) was decoded to (-1.0). The reason for this, AFAIK, was that the old graphics cards used the 8-bit integer as a binary fraction.
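To make that mapping concrete, what follows is a minimal CPU-side sketch in C++ (my own illustration, not code taken from any actual driver or shader), which reads the convention as (b / 128.0 - 1.0); it agrees with the texel-decoder suggested near the end of this posting:

#include <cstdio>

//  One plausible reading of the old convention: decode an 8-bit channel value b
//  as (b / 128.0 - 1.0), so that 128 -> 0.0, 0 -> -1.0, and 255 -> +0.9921875,
//  the maximally positive value.
static float decode_old_convention(unsigned char b) {
    return b / 128.0f - 1.0f;
}

int main() {
    std::printf("%f %f %f\n",
                decode_old_convention(0),     //  -1.000000
                decode_old_convention(128),   //   0.000000
                decode_old_convention(255));  //   0.992188
    return 0;
}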

In the spirit of recreating that, and because it’s sometimes still necessary to store an approximation of a normal-vector using only 32 bits, code such as the following has been offered:

 


//  Encoding:
Out.Pos_Normal.w = dot(floor(normal * 127.5 + 127.5), float3(1 / 256.0, 1.0, 256.0));

//  Decoding:
float3 normal = frac(Pos_Normal.w * float3(1.0, 1 / 256.0, 1 / 65536.0)) * 2.0 - 1.0;

 

There’s an obvious problem with this backwards-emulation: it can’t reproduce the value (0.0) for any of the elements of the normal-vector. And then, what some people do is throw their arms in the air and say: ‘This problem just can’t be solved!’ Well, what about:

 


//  Assumed:
normal = normalize(normal);

//  Map [-1.0 .. +1.0] onto the bytes [1 .. 255], so that 0.0 encodes to 128:
Out.Pos_Normal.w = dot(floor(normal * 127.0 + 128.5), float3(1 / 256.0, 1.0, 256.0));

 

A side effect of this will definitely be that no uncompressed value belonging to the interval [-1.0 .. +1.0] will lead to a compressed series of 8 zeros, since the smallest byte this encoder can produce is floor(-1.0 * 127.0 + 128.5) = floor(1.5) = 1.

Mind you, because of the way the resulting value was then decoded again, the question of whether zero can actually result is not as easy to answer. One reason is the fact that, for all the elements except the first, the bits that follow the first 8 fractional bits have not been removed – for example, frac(Pos_Normal.w * (1 / 256.0)) yields not only the second byte divided by 256, but also the first byte divided by 65536. But that’s just a shortcoming of the one-line decoding that was suggested. That could be changed to:

 


//  Move the desired byte into the lowest 8 bits of the integer part, discarding any lower bits:
float3 normal = floor(Pos_Normal.w * float3(256.0, 1.0, 1 / 256.0));
//  Isolate that byte as a fraction, then map the bytes [1 .. 255] back onto [-1.0 .. +1.0], with 128 decoding to 0.0:
normal = frac(normal * (1 / 256.0)) * (256.0 / 127.0) - (128.0 / 127.0);

 

Suddenly, the impossible has become possible.
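Just to make that claim easy to check, the following is a small CPU-side sketch in C++ (my own construction, not part of any shader), which emulates the floor(), frac() and dot() arithmetic of the HLSL lines above with ordinary single-precision floats, on the assumption that those reproduce the GPU’s results:

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float fracf(float v) { return v - std::floor(v); }   //  HLSL frac()

//  The original pair: cannot reproduce 0.0 exactly.
static float encode_original(Vec3 n) {
    return std::floor(n.x * 127.5f + 127.5f) * (1.0f / 256.0f)
         + std::floor(n.y * 127.5f + 127.5f)
         + std::floor(n.z * 127.5f + 127.5f) * 256.0f;
}
static Vec3 decode_original(float w) {
    return { fracf(w)            * 2.0f - 1.0f,
             fracf(w / 256.0f)   * 2.0f - 1.0f,
             fracf(w / 65536.0f) * 2.0f - 1.0f };
}

//  The customized pair: bytes range over [1 .. 255], and 128 decodes to 0.0.
static float encode_custom(Vec3 n) {
    return std::floor(n.x * 127.0f + 128.5f) * (1.0f / 256.0f)
         + std::floor(n.y * 127.0f + 128.5f)
         + std::floor(n.z * 127.0f + 128.5f) * 256.0f;
}
static Vec3 decode_custom(float w) {
    Vec3 b = { std::floor(w * 256.0f), std::floor(w), std::floor(w / 256.0f) };
    return { fracf(b.x / 256.0f) * (256.0f / 127.0f) - (128.0f / 127.0f),
             fracf(b.y / 256.0f) * (256.0f / 127.0f) - (128.0f / 127.0f),
             fracf(b.z / 256.0f) * (256.0f / 127.0f) - (128.0f / 127.0f) };
}

int main() {
    Vec3 n = { 0.0f, 0.0f, 1.0f };   //  an already-normalized normal
    Vec3 a = decode_original(encode_original(n));
    Vec3 b = decode_custom(encode_custom(n));
    std::printf("original: %f %f %f\n", a.x, a.y, a.z);   //  x and y miss 0.0 slightly
    std::printf("custom:   %f %f %f\n", b.x, b.y, b.z);   //  x and y come back as exactly 0.0
    return 0;
}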

N.B.  I would not use the customized decoder unless I was also sure that the input floating-point value came from my customized encoder. It can easily happen that the shader needs to work with texture images prepared by an external program, and then, because of the way their channel-values get normalized today, I might use this as the decoder:

 


float3 normal = texel.rgb * (255.0 / 128.0) - 1.0;

 

However, if I did, a texel-value of (128) would still be required to result in a floating-point value of (0.0), since the texture-sampler normalizes that channel-value to (128 / 255), and (128 / 255) * (255.0 / 128.0) - 1.0 = 0.0.

(Updated 5/10/2020, 19h00… )

Continue reading There can be curious gaps, in what some people understand.

Computing Pi

Nowadays, many people take for granted that they ‘know the meaning’ of certain Mathematical functions and constants, and that if they ever need a numerical equivalent, they can just tap on an actual calculator, or on a computer program that acts as one, to obtain the correct result.

I am one such person, and an example of a Mathematical constant would be (π).

But in the 1970s, it was considered a major breakthrough that Scientists were able to compute (π) to a million decimal places, using a computer.

And so the question sometimes bounces around in my head, of what the simplest method might be to compute it, even though I possess software which can do so, non-transparently to me. This is my concept of how to do so:

http://dirkmittler.homeip.net/ComputingPi.pdf

http://dirkmittler.homeip.net/Pi_to_5000.pdf

(Update 08/28/2018 : )

The second link above points to a document, the textual contents of which were created simply by using the program ‘Yacas’. I instructed this program to print (π) to 5000 decimal places. Yet, if the reader were ever to count how many decimal places have been printed, he or she would find significantly more than 5000. The reason is that, when ‘Yacas’ is so instructed, the first 5000 decimal places will be accurate, but they will be followed by an uncertain number of additional decimal places, which are assumed to be inaccurate. This behavior can be thought of as related to the fact that numerical precision is not the same thing as numerical accuracy.

In fact, the answer pointed to in the second link above is accurate to 5000 decimal places, but printed with precision exceeding that number of decimal places.

Dirk

 

Revisiting HTML, this time, With CSS.

When I first taught myself HTML, it was in the 1990s, and not only has the technology advanced, but the philosophy behind Web-design has also changed. The original philosophy was that the Web-page should only contain the information, and that each Web-browser should define in what style that information would be displayed. But of course, when Cascading Style Sheets were invented – which in today’s laconic vocabulary are just referred to as “Styles” – they represented a full reversal of that philosophy, since by nature they control the very appearance of the page, from the server.

My own knowledge of HTML has been somewhat limited. I’ve bought cuspy books about ‘CSS’ as well as about ‘JQuery’, but have never made the effort to read each book from beginning to end. I mainly focused on what some key concepts are, in HTML5 and CSS.

Well, recently I’ve become interested in HTML5 and CSS again, and have found that buying the Basic license of a WYSIWYG-editor named “BlueGriffon” proved informative. I do have access to some open-source HTML editors, but find that even if they come as WYSIWYG-editors, they mainly tend to produce static pages, very similar to what Web-masters were already creating in the 1990s. In the open-source domain, maybe a better example would be “SeaMonkey”. Beyond that, ‘KompoZer’ can no longer be made to run on up-to-date 64-bit systems, and while “BlueFish”, a pronouncedly KDE-centric solution available from the package-manager, does offer advanced capabilities, it only does so in the form of an IDE.

(Updated 03/09/2018, 17h10 : )

Continue reading Revisiting HTML, this time, With CSS.

Understanding ADPCM

One concept which exists in Computing is a primary representation of audio streams, as samples with a constant sampling-rate, which is also called ‘PCM’ – or, Pulse-Code Modulation. This is also the basis for .WAV-Files. But everybody knows that the files needed to represent even the highest humanly-audible frequencies in this way become large. And so, means have been pursued over the decades to compress this format after it has been generated, or to decompress it before reading the stream. As early as the 1970s, a compression-technique existed which is called ‘DPCM’ today: Differential Pulse-Code Modulation. Back then it was just not referred to as DPCM, but rather as ‘Delta-Modulation’, and it first formed a basis for the voice-chips in ‘talking dolls’ (toys). Later, it became the basis for the first solid-state (telephone) answering machines.

The way DPCM works is that, instead of each sample-value being stored or transmitted, only the difference between two consecutive sample-values is stored. And this subject is sometimes explained as though software engineers had two ways to go about encoding it:

  1. Simply output the difference between the current sample-value and the previous one,
  2. Create a local copy of what the decoder would do, if the previous sample-differences had been decoded, and output the difference between the current sample-value and what this local model regenerated.

What happens when DPCM is used directly is that a smaller field of bits can be used as data, let’s say ‘4’ instead of ‘8’. But then a problem quickly becomes obvious: unless the uncompressed signal was very low in higher-frequency components – frequencies above 1/3 the Nyquist-Frequency – a step in the 8-bit sample-values could take place which is too large to represent as a 4-bit number. And given this possibility, it would seem that only approach (2) will give the correct result: the decoded sample-values will slew where the original values had a step, but slew back to an originally-correct, low-frequency value.
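As an illustration of approach (2), here is a minimal C++ sketch (my own construction, using 8-bit samples and signed 4-bit differences only because those are the field-widths named above). The essential detail is that the encoder updates a local copy of the decoder’s state, so that when a difference has to be clamped, the following differences steer the decoded signal back toward the correct values:

#include <algorithm>
#include <cstdint>
#include <vector>

//  Approach (2): the encoder mirrors the decoder's state ("predicted"),
//  so that clamping errors cannot accumulate.
std::vector<int8_t> dpcm_encode(const std::vector<uint8_t> &samples) {
    std::vector<int8_t> deltas;
    int predicted = 128;                              //  both sides start at mid-scale
    for (uint8_t s : samples) {
        int diff = int(s) - predicted;
        int coded = std::clamp(diff, -8, 7);          //  what a signed 4-bit field can hold
        deltas.push_back(int8_t(coded));
        predicted = std::clamp(predicted + coded, 0, 255);   //  exactly what the decoder will do
    }
    return deltas;
}

std::vector<uint8_t> dpcm_decode(const std::vector<int8_t> &deltas) {
    std::vector<uint8_t> samples;
    int value = 128;
    for (int8_t d : deltas) {
        value = std::clamp(value + int(d), 0, 255);
        samples.push_back(uint8_t(value));
    }
    return samples;
}

If, instead, the encoder simply subtracted consecutive original sample-values (approach 1), every clamped difference would leave a permanent offset in the decoded signal.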

But then we’d still be left with the advantage of fixed field-widths, and thus, a truly Constant Bitrate (CBR).

But because, according to today’s customs, the signal is practically guaranteed to be rich in its higher-frequency components, a derivative of DPCM has been devised, which is called ‘ADPCM’ – Adaptive Differential Pulse-Code Modulation. When encoding ADPCM, each sample-difference is quantized according to a quantization-step – aka scale-factor – that adapts to how large the successive differences are at any time. But again, as long as we include the scale-factor as part of the (small) header-information of an audio-format that’s organized into blocks, we can achieve fixed field-sizes and fixed block-sizes again, and thus also achieve true CBR.
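The following is a correspondingly simplified sketch of that idea in C++ (again my own, and not the block layout of any real ADPCM file-format – real codecs such as IMA ADPCM derive their step-sizes from lookup-tables instead). Each block of samples carries one scale-factor in its header, while every difference is still stored in a fixed 4-bit field, so that both the field-size and the block-size remain constant:

#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <vector>

struct AdpcmBlock {
    uint16_t scale;                //  header: the quantization-step for this block
    std::vector<int8_t> codes;     //  one signed 4-bit difference-code per sample
};

//  'predicted' starts at 0 and carries the decoder's reconstructed value from block to block.
AdpcmBlock adpcm_encode_block(const std::vector<int16_t> &samples, int &predicted) {
    //  Estimate a scale-factor from the largest step between consecutive samples in this block.
    int peak = 1, previous = predicted;
    for (int16_t s : samples) {
        peak = std::max(peak, std::abs(int(s) - previous));
        previous = s;
    }
    AdpcmBlock block;
    block.scale = uint16_t((peak + 6) / 7);      //  so that |difference| / scale <= 7

    for (int16_t s : samples) {
        int code = std::clamp((int(s) - predicted) / int(block.scale), -8, 7);
        block.codes.push_back(int8_t(code));
        //  Track the decoder's reconstruction, exactly as in plain DPCM.
        predicted = std::clamp(predicted + code * int(block.scale), -32768, 32767);
    }
    return block;
}

std::vector<int16_t> adpcm_decode_block(const AdpcmBlock &block, int &predicted) {
    std::vector<int16_t> out;
    for (int8_t c : block.codes) {
        predicted = std::clamp(predicted + int(c) * int(block.scale), -32768, 32767);
        out.push_back(int16_t(predicted));
    }
    return out;
}

A real codec would typically adapt its step-size more finely than once per block, but the principle of constant field- and block-sizes, and therefore true CBR, stays the same.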

(Updated 03/07/2018 : )

Continue reading Understanding ADPCM