Revisiting HTML, this time, With CSS.

When I first taught myself HTML, in the 1990s, not only was the technology less advanced, but the philosophy behind Web-design was also different from today’s. The original philosophy was that a Web-page should contain only the information, and that each Web-browser should decide in what style that information was to be displayed. But of course, when Cascading Style Sheets were invented – which in today’s laconic vocabulary are just referred to as “Styles” – they represented a full reversal of that philosophy, since by their nature they control the very appearance of the page, from the server’s side.

My own knowledge of HTML has been somewhat limited. I’ve bought cuspy books about ‘CSS’ as well as about ‘jQuery’, but have never made the effort to read either book from beginning to end. Mainly, I focused on what some of the key concepts in HTML5 and CSS are.

Well, recently I’ve become interested in HTML5 and CSS again, and have found that buying the Basic license of a WYSIWYG editor named “BlueGriffon” proved informative. I do have access to some open-source HTML editors, but find that even when they come as WYSIWYG editors, they mainly tend to produce static pages, very similar to what Web-masters were already creating in the 1990s. In the open-source domain, maybe a better example would be “SeaMonkey”. Beyond that, ‘KompoZer’ can no longer be made to run on up-to-date 64-bit systems, and while “Bluefish”, a GTK-based solution available from the package-manager, does offer advanced capabilities, it only does so in the form of an IDE.

(Updated 03/09/2018, 17h10 : )


Understanding ADPCM

One concept which exists in Computing is the primary representation of audio streams as samples with a constant sampling-rate, which is also called ‘PCM’ – or, Pulse-Code Modulation. This is also the basis for .WAV-Files. But everybody knows that the files needed to represent even the highest humanly-audible frequencies in this way become large. And so, means have been pursued over the decades to compress this format after it has been generated, and to decompress it again before playing back the stream. As early as the 1970s, a compression-technique existed which today would be called ‘DPCM’: Differential Pulse-Code Modulation. Back then it was just not referred to as DPCM, but rather as ‘Delta-Modulation’, and it first formed the basis for the voice-chips in ‘talking dolls’ (toys). Later it became the basis for the first solid-state (telephone) answering machines.
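
Just to give a sense of scale, here is a quick back-of-the-envelope sketch in C, using the standard CD-quality parameters – 44100 samples per second, 16 bits per sample, stereo – which are only meant as an illustration of why raw PCM gets large:

    #include <stdio.h>

    int main(void)
    {
        const long sample_rate      = 44100;  /* samples per second, per channel */
        const int  bytes_per_sample = 2;      /* 16-bit PCM */
        const int  channels         = 2;      /* stereo */

        long   bytes_per_second = sample_rate * bytes_per_sample * channels;
        double mib_per_minute   = (bytes_per_second * 60.0) / (1024.0 * 1024.0);

        printf("%ld bytes per second\n", bytes_per_second);  /* 176400 */
        printf("%.1f MiB per minute\n", mib_per_minute);     /* about 10.1 */
        return 0;
    }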

The way DPCM works is that instead of each sample-value being stored or transmitted, only the exact difference between two consecutive sample-values is stored. And this subject is sometimes explained as though software engineers had two ways to go about encoding it:

  1. Simply subtract the previous sample-value from the current one, and output the difference,
  2. Keep a local model of what the decoder would reconstruct, if the previous sample-differences had been decoded, and output the difference between the current sample-value and what this local model regenerated.

What happens when DPCM is used directly, is that a smaller field of bits can be used as data – let’s say 4 bits instead of 8. But then a problem quickly becomes obvious: Unless the uncompressed signal was very low in higher-frequency components – frequencies above 1/3 the Nyquist-Frequency – a step could take place in the 8-bit sample-values which is too large to represent as a 4-bit number. And given this possibility, it would seem that only approach (2) gives the correct result: the decoded sample-values will slew where the original values had a step, but will slew back to an originally-correct, low-frequency value, because the encoder’s local model tracks the same error the decoder accumulates, and keeps correcting for it.
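
Below is a minimal sketch in C of approach (2), assuming 8-bit unsigned samples and 4-bit signed differences; the function and variable names are my own, not taken from any particular codec. Because the encoder clamps each difference into the 4-bit range and updates its own copy of the decoder’s state from the clamped value, a large step in the input makes the decoded output slew toward the new level over several samples, instead of the error persisting:

    #include <stdio.h>

    /* Clamp a value into an inclusive range. */
    static int clamp(int v, int lo, int hi)
    {
        return (v < lo) ? lo : (v > hi) ? hi : v;
    }

    /* Approach (2): the encoder mirrors the decoder's state in 'predicted',
     * so any clamping error is corrected over the following samples. */
    static void dpcm_encode(const unsigned char *in, signed char *out, int n)
    {
        int predicted = in[0];             /* the decoder starts from the first sample */
        for (int i = 0; i < n; ++i) {
            int diff = clamp((int)in[i] - predicted, -8, 7);  /* 4-bit signed field */
            out[i] = (signed char)diff;
            predicted = clamp(predicted + diff, 0, 255);      /* what the decoder will hold */
        }
    }

    /* The decoder simply accumulates the differences. */
    static void dpcm_decode(const signed char *in, unsigned char *out, int n, int start)
    {
        int value = start;
        for (int i = 0; i < n; ++i) {
            value = clamp(value + in[i], 0, 255);
            out[i] = (unsigned char)value;
        }
    }

    int main(void)
    {
        /* A step of 100 units is too large for one 4-bit difference,
         * so the decoded output slews toward it at 7 units per sample. */
        unsigned char src[10] = {20, 20, 20, 120, 120, 120, 120, 120, 120, 120};
        signed char   enc[10];
        unsigned char dec[10];

        dpcm_encode(src, enc, 10);
        dpcm_decode(enc, dec, 10, src[0]);

        for (int i = 0; i < 10; ++i)
            printf("%3d -> %3d\n", src[i], dec[i]);
        return 0;
    }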

But then we’d still be left with the advantage of fixed field-widths, and thus a truly Constant Bitrate (CBR).

But because, according to today’s customs, the signal is practically guaranteed to be rich in higher-frequency components, a derivative of DPCM has been devised, which is called ‘ADPCM’ – Adaptive Differential Pulse-Code Modulation. When encoding ADPCM, each sample-difference is quantized according to a quantization-step – aka scale-factor – that adapts to how large the successive differences are at any given time. But again, as long as we include the scale-factor as part of the (small) header-information of an audio-format that’s organized into blocks, we can achieve fixed field-sizes and fixed block-sizes again, and thus also achieve true CBR.
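
Here is a minimal sketch of that general idea, in C. It isn’t a reproduction of any standard variant such as IMA or Microsoft ADPCM; the 4-bit code width, the initial scale-factor, and the adaptation rule are all assumptions of mine, made up for illustration. The per-block starting sample and scale-factor are the values that would go into the small block-header mentioned above:

    #include <stdio.h>
    #include <stdlib.h>

    static int clamp(int v, int lo, int hi)
    {
        return (v < lo) ? lo : (v > hi) ? hi : v;
    }

    /* Hypothetical block-oriented ADPCM encoder: 4-bit codes, one adaptive
     * scale-factor, with the starting sample and scale-factor intended to be
     * written into a small per-block header. */
    static void adpcm_encode(const short *in, signed char *codes, int n,
                             int *start_sample, int *start_scale)
    {
        int predicted = in[0];
        int scale = 4;                                /* arbitrary initial step */

        *start_sample = predicted;
        *start_scale  = scale;

        for (int i = 0; i < n; ++i) {
            int code = clamp(((int)in[i] - predicted) / scale, -8, 7);
            codes[i] = (signed char)code;
            predicted = clamp(predicted + code * scale, -32768, 32767);

            /* Adapt: codes near the ends of the range mean the step was too
             * small; zero codes mean it was too large. */
            if (abs(code) >= 6)
                scale = clamp(scale * 2, 1, 4096);
            else if (code == 0)
                scale = clamp(scale / 2, 1, 4096);
        }
    }

    /* The decoder repeats exactly the same prediction and adaptation steps. */
    static void adpcm_decode(const signed char *codes, short *out, int n,
                             int start_sample, int start_scale)
    {
        int predicted = start_sample;
        int scale = start_scale;

        for (int i = 0; i < n; ++i) {
            predicted = clamp(predicted + codes[i] * scale, -32768, 32767);
            out[i] = (short)predicted;

            if (abs(codes[i]) >= 6)
                scale = clamp(scale * 2, 1, 4096);
            else if (codes[i] == 0)
                scale = clamp(scale / 2, 1, 4096);
        }
    }

    int main(void)
    {
        /* A quiet passage followed by a loud step: the scale-factor grows
         * until the prediction catches up with the signal. */
        short src[12] = {0, 10, 20, 30, 2000, 2100, 2200, 2300,
                         2350, 2360, 2370, 2375};
        signed char codes[12];
        short dec[12];
        int s0, sc0;

        adpcm_encode(src, codes, 12, &s0, &sc0);
        adpcm_decode(codes, dec, 12, s0, sc0);

        for (int i = 0; i < 12; ++i)
            printf("%6d -> %6d\n", src[i], dec[i]);
        return 0;
    }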

(Updated 03/07/2018 : )


A Note on Ancient, Word-Aligned Architectures

On modern computers, including PCs and ARM-based machines, the assumption is that memory is byte-addressable – every address refers to one byte. Hence, if it is a 32-bit machine, and if its data in RAM is aligned to 32-bit boundaries, then the last two bits of any such aligned address are always ‘00’. In some cases, 16-bit-word-aligned addresses are also allowed. But I think that modern PCs are no longer even able to fetch a single, oddly-addressed byte from RAM by itself; in fact, modern RAM fills the cache in longer units (cache-lines).
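
A small illustration in C (the array name is arbitrary): because the compiler aligns 32-bit data to 4-byte boundaries, masking out the low two bits of each element’s address always yields zero:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t words[4];   /* the compiler aligns this array to at least 4 bytes */

        for (int i = 0; i < 4; ++i) {
            uintptr_t addr = (uintptr_t)&words[i];
            /* For 32-bit-aligned data, the last two address bits are always 00. */
            printf("address 0x%lx, low two bits: %lu\n",
                   (unsigned long)addr, (unsigned long)(addr & 0x3));
        }
        return 0;
    }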

But back in the days of early mainframe computers, an address corresponded to a whole word, according to the architecture. Hence, in those days, the least-significant bit of an address could be equal to ‘1’ just as easily as ‘0’. Yet, when I recall texts written in those days, my memories do not always come back to me upgraded to modern conventions.

Not only that, but old mainframes frequently had bus-widths that were multiples of 3 bits, instead of multiples of 4 bits, or powers of 2. And so word-alignment of the addresses was a must. But this was also the original reason for octal notation, instead of the hexadecimal notation more common today: each octal digit stands for exactly 3 bits, so fields whose widths are multiples of 3 line up neatly on octal-digit boundaries.
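
And as a small, modern-day illustration in C – the 18-bit value below is arbitrary – each octal digit corresponds to exactly one 3-bit field, which is what made octal the natural shorthand on such machines:

    #include <stdio.h>

    int main(void)
    {
        unsigned value = 0654321;     /* an arbitrary 18-bit value, written as an octal literal */

        printf("octal: %o\n", value); /* 654321 - one digit per 3-bit field */
        printf("hex  : %x\n", value); /* 358d1  - 18 bits do not split evenly into 4-bit digits */

        /* Peel off the six 3-bit fields, most significant first. */
        for (int shift = 15; shift >= 0; shift -= 3)
            printf("%o ", (value >> shift) & 07);
        printf("\n");                 /* prints: 6 5 4 3 2 1 */
        return 0;
    }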

Dirk