Deriving a workable entropy-encoding scheme, based on the official explanation of CABAC.

One of the subjects which I recently blogged about is that, when encoding video streams, some codecs use 8×8-sample Discrete Cosine Transforms, but that, as with many DCTs, the coefficients produced would take up too much space if stored in a fixed-length format. And so a family of techniques loosely referred to as ‘Entropy Encoding’ gets applied, the key point being that the entropy encoding used for compressed video is different again from the entropy encoding used for compressed audio. The scheme used for video has the advantage that the encoding itself is lossless. Apparently, there are two variants actually used with H.264-encoded videos, which some people group together as MPEG-4:

  1. An unspecified form of variable-length encoding, and
  2. CABAC.

The latter promises better compression, at the cost of greater CPU power needed both to encode and to decode. I’m going to focus on ‘CABAC’ in this posting. There is an official explanation of how CABAC works, which I will refer to. In order to understand my posting here, the reader will need to have read the documentation I just linked to.

From first impressions (yesterday evening was the first day on which I examined CABAC), I’d say that the official explanation contains an error. And I’ll explain why, by offering a version of entropy encoding which I know can work, based on the link above, but different from it:

  • Integers that have been “Binarized” are meant to be encoded.
  • The probability with which the first “Bin” has become (1) instead of (0) can be analyzed as described, resulting in one of the Context Models (0, 1, 2).
  • The next four Bins need not have individual probabilities computed; they only result in Context Models (3, 4, 5, 6) when they are (1) instead of (0), which override the Context Model that the first Bin would have generated.
  • The single, resulting Context Model could be averaged over the previous Values.
  • Using, as a pair of indices, the Context Model just computed (from the previous values) and the present Integer Value, a look-up can take place in a 2-dimensional table, of which sequence of bits to use to encode both (see the sketch after this list).
  • Because the decoder has recovered the integer value out of a known row in the same look-up table, it can also update the Context Model being used, so that future look-ups when decoding remain unambiguous.
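
To make that recipe more concrete, here is a minimal sketch in Python of the kind of scheme I mean. It is not CABAC itself: the table contents, the number of contexts, and the rule which averages recent values into a context are all invented for the illustration. The point is only that the encoder and the decoder index the same 2-dimensional table and update the same Context Model, so that decoding stays unambiguous:

```python
# A toy illustration of the 2D-table idea described above.  The codes and
# the context-update rule are invented for the example; they are not the
# real CABAC tables.

# One row of prefix-free codes per Context Model; the columns are the
# (small) integer values 0..3 that we pretend need to be encoded.
CODE_TABLE = [
    # context 0: small values assumed very probable -> short codes for them
    {0: "0",   1: "10",  2: "110", 3: "111"},
    # context 1: probabilities flatter -> lengths more uniform
    {0: "00",  1: "01",  2: "10",  3: "11"},
    # context 2: larger values assumed more probable
    {0: "110", 1: "111", 2: "0",   3: "10"},
]

def next_context(recent):
    """Invented rule: pick a context from the average of recent values."""
    if not recent:
        return 0
    avg = sum(recent) / len(recent)
    return 0 if avg < 1.0 else (1 if avg < 2.0 else 2)

def encode(values):
    bits, recent = "", []
    for v in values:
        ctx = next_context(recent)
        bits += CODE_TABLE[ctx][v]     # 2D look-up: (context, value) -> bits
        recent = (recent + [v])[-4:]   # context depends only on past values
    return bits

def decode(bits, count):
    out, recent, pos = [], [], 0
    for _ in range(count):
        ctx = next_context(recent)     # same rule, same past as the encoder
        for v, code in CODE_TABLE[ctx].items():   # codes in a row are prefix-free
            if bits.startswith(code, pos):
                out.append(v)
                pos += len(code)
                break
        recent = (recent + [out[-1]])[-4:]
    return out

data = [0, 0, 1, 3, 2, 3, 3, 1]
assert decode(encode(data), len(data)) == data
```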

The main problem I see with the official explanation is that, because up to 6 Context Models can be computed, each of which supposedly has its own probability, the look-up table in which the binary values (entropy encodings) are to be found would effectively need to be a 6-dimensional table! Officially, all the Context Models found have equal standing. Software which uses a 2D table is much more plausible than software which uses a 6-dimensional table, although according to theoretical Math, 6-dimensional tables are also possible.

But then, a property of Variable-Length Coding which has been observed for some time is that small integers, such as (0), (1) and (2), are assigned very short bit-sequences to be recognized, while larger integers, such as (16) or (17), are assigned recognizable bit-sequences which can be impractically long, and which result in poor compression when the probability of the integer actually being (0), (1) or (2) decreases.
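
For a concrete example of how skewed those code lengths are, here is a short sketch in Python of the order-0 Exponential-Golomb code, one well-known variable-length code (H.264 does use Exp-Golomb codes for many of its syntax elements):

```python
def exp_golomb(v):
    """Order-0 Exponential-Golomb code for a non-negative integer v."""
    b = bin(v + 1)[2:]             # binary representation of v + 1
    return "0" * (len(b) - 1) + b  # leading zeros announce the length

for v in (0, 1, 2, 16, 17):
    code = exp_golomb(v)
    print(f"{v:3d} -> {code}  ({len(code)} bits)")

# Code lengths come out as 1, 3, 3, 9 and 9 bits respectively:
# the smallest integers are cheap, but (16) and (17) already cost 9 bits.
```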

So, because we know that we can have at least one Context Model, based on the actual, local probabilities, a series of entropy encodings can be selected in the table when the probabilities of the very small integers become smaller, a series whose bit-lengths have been made more uniform. This results in smaller encodings overall than what straight Variable-Length Encoding would have generated, with CABAC instead being adapted to the larger integers which have become probable.

The fact will remain that the smaller integers will, in general, require fewer bits to encode than the larger integers. But when the smallest integers become very improbable, the bit-lengths for all the integers can be evened out. This will still result in longer streams overall, as larger integers become more probable, but in shorter streams than would result if the encodings for the smallest integers remained the shortest they could be.
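
A quick numerical sketch of that argument, with code lengths and probabilities invented purely for the example:

```python
# Once small values stop dominating, a code table with more-uniform
# lengths beats the skewed one.  All numbers here are invented.

skewed_lengths  = {0: 1, 1: 2, 2: 3, 3: 3}   # shortest codes for 0 and 1
uniform_lengths = {0: 2, 1: 2, 2: 2, 3: 2}   # evened-out lengths

def expected_bits(lengths, probs):
    return sum(lengths[v] * p for v, p in probs.items())

small_values_likely   = {0: 0.60, 1: 0.25, 2: 0.10, 3: 0.05}
small_values_unlikely = {0: 0.05, 1: 0.10, 2: 0.25, 3: 0.60}

for name, probs in (("small values likely", small_values_likely),
                    ("small values unlikely", small_values_unlikely)):
    print(name,
          "skewed:",  expected_bits(skewed_lengths, probs),
          "uniform:", expected_bits(uniform_lengths, probs))

# The skewed table wins in the first case (1.55 vs 2.0 bits per value),
# while the uniform table wins in the second (2.0 vs 2.8 bits per value).
```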


Web-Optimizing Lossless Images

One subject which has caught my interest in recent times is how to publish lossless images on the Web – and, of course, how to optimize memory use.

It has been a kind of default setting on most of my desktop software that I’ll either save images as JPEGs, which are lossy but offer excellent file sizes, or as PNG-format files, which are lossless but still offer much better file sizes than utterly uncompressed .BMP files would. Yet I might one day want even smaller file sizes than what PNGs can offer, while preserving the crisp exactitude required in, say, schematics.

The ideal solution to this problem would be to publish the Web-embedded content directly in SVG files, which preserve exact curves literally at any level of magnification. But there are essentially two reasons why this may not generally be feasible:

  1. The images need to be sourced as vector-images, not raster-images. There is no reasonable way to convert rasters into vector-graphics.
  2. There may be a lack of browser support for SVG specifically. I know that up-to-date Firefox browsers have it, but when publishing Web content, some consideration needs to be given to older or less-capable browsers.

And so there seems to be an alternative which has re-emerged, but which had long been forgotten in the history of the Web, because originally the emphasis was on reducing the file size of photos. That alternative exists in GIF images. But there is a basic concern which needs to be observed when using them: the images need to be palettized before they can be turned into GIFs – which some apps do automatically for the user, when prompted to produce a GIF.

What this means is that, while quality photos have a minimum pixel-depth of either 24 or 32 Bits Per Pixel (‘BPP’), implying 8 bits per channel, and while this gives us quality images, the set of colors needs to be reduced to a much smaller set, in order for GIFs actually to become smaller in file size than what PNG files already exemplify. While an 8-bit palette is possible, offering up to 256 colors, my main interest would be in much smaller palettes: either the so-called ‘Web-optimized’ standard palette, a 4-bit palette of 16 colors, or the 1-bit palette that just offers black or white pixels. And my interest in this format is due to the fact that the published images in question are either truly schematic, or what I would call quasi-schematic, i.e. schematic with colors.

What this means for me as a writer is that I must open the images in question in GIMP and change the ‘Image -> Mode’ to the Web-optimized or the 1-bit palette, before I can ‘Export To GIF’. And when I do this, I take care to choose ‘Interlaced GIF’, to help browsers deal with the memory consumption best.
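
For what it’s worth, the same reduction can also be scripted. The following is only a sketch, using the Pillow library in Python, with placeholder file names and a 16-colour adaptive palette standing in for GIMP’s palette choices; I’m also assuming that Pillow’s GIF writer accepts the ‘interlace’ option, which it does in the versions I know of:

```python
# A scripted rough equivalent of the GIMP steps above, using Pillow.
# File names are placeholders; the palettes only approximate GIMP's options.
from PIL import Image

# Quasi-schematic: reduce to a small indexed palette (16 colours,
# i.e. 4 bits per pixel) before saving as a GIF.
img = Image.open("quasi-schematic.png").convert("RGB")
img.quantize(colors=16).save("quasi-schematic.gif", interlace=True)

# True schematic: reduce to a 1-bit, black-and-white image.
schem = Image.open("schematic.png").convert("L").convert("1")
schem.save("schematic.gif", interlace=True)
```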

In the case of a true 1-BPP schematic, the effect is almost lossless: the example shown below has already appeared elsewhere in my blog, but appears as sharp here as the former, PNG-formatted variety did:

[Image: schem-1_8]

In the case of a quasi-schematic, there is noticeable loss in quality, due to the reduction in color-accuracy, but a considerable reduction in file-size. The lossless, PNG-format example is this:

[Image: quasi-schem-1_1]

While the smaller, GIF-format File would be this:

[Image: quasi-schem-1_9]

There is some mention that, for larger, more complex schematics, GIFs take up too much memory. But when the image really has been large, regardless of what I might like, I’ve been switching to JPEGs anyway.

There could be some controversy as to whether this can be referred to as lossless in any way.

The answer to that would be that this results in either 1 or 4 bit-planes, and that the transmission of each bit-plane will be without alteration of any kind – i.e., lossless. But there will be the mentioned loss in color accuracy when converting the original pixel values to the simplified colors – i.e., lossy.


Sound Compression is also Possible, in the Time Domain.

One alternative type of sound compression which exists is ‘FLAC’. This is a method which preserves all the information in the stream (later reproducing it exactly), which uses Linear Predictive Coding (LPC), and which Rice-encodes the residual, by which the actual samples differ from the LPC-predicted values.

When I first tried to understand FLAC, my efforts were lacking, in that I did not know that LPC takes place in the time domain, and not in the frequency domain.
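
To make the time-domain point concrete, the following is a small sketch in Python of the general idea, not of FLAC’s actual implementation: each sample is predicted from the two samples before it (one of the simple fixed predictors FLAC also defines), only the residual is kept, and the residual is given a Rice code. The Rice parameter and the exact bit conventions here are arbitrary choices:

```python
# Time-domain prediction plus Rice coding, sketched.  The predictor is the
# fixed order-2 one (2 * previous sample - the one before that); the Rice
# parameter k = 2 and the bit layout are arbitrary for this illustration.

def residuals(samples):
    res = list(samples[:2])                      # warm-up samples kept as-is
    for n in range(2, len(samples)):
        predicted = 2 * samples[n - 1] - samples[n - 2]
        res.append(samples[n] - predicted)       # usually a small number
    return res

def rice_encode(value, k=2):
    u = 2 * value if value >= 0 else -2 * value - 1    # fold the sign into the LSB
    q, r = divmod(u, 1 << k)
    return "1" * q + "0" + format(r, f"0{k}b")          # unary quotient + k low bits

samples = [0, 3, 7, 12, 18, 23, 27, 30]          # a smooth waveform segment
res = residuals(samples)
print(res)                                       # [0, 3, 1, 1, 1, -1, -1, -1]
print("".join(rice_encode(r) for r in res[2:]))  # short codes for small residuals
```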

Dirk