Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).

There is a fact about modern graphics chips which some people may not be aware of – especially some Linux users – and of which I was recently reminded, because I have bought an e-Reader that runs the Android O/S, but that features the energy-saving benefits of “e-Ink”. This innovative technology has a surface somewhat resembling paper, whose brightness can vary between white and black. It mainly uses available light – although back-lit and front-lit versions of e-Ink now exist – and it consumes very little current, so that it’s frequently possible to read an entire book on one battery-charge. With an average Android tablet that merely has an LCD, the battery-life can impede enjoying an e-Book.

An LCD still has one thing in common with the old CRTs: it is refreshed at a fixed frequency by something called a “raster” – circuitry that scans a region of memory and feeds pixel-values to the display sequentially, perhaps 60 times per second, thus refreshing the display that often. e-Ink pixels, by contrast, are sent a signal once, to change brightness, and then stay at the assigned brightness level until they receive another signal to change again. What this means is that, at the hardware-level, e-Ink is less powerful than ‘frame-buffer devices’ once were.
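The difference can be put into rough numbers. The following sketch assumes an arbitrary 1280×800 panel, 32-bit pixels and a 60 Hz refresh – all assumed figures, not measurements from any specific device:

```python
# Rough arithmetic: how much pixel data a 60 Hz raster moves per second,
# versus an e-Ink panel that only receives data when pixels change.
width, height = 1280, 800       # assumed LCD resolution
bytes_per_pixel = 4             # 32-bit color
refresh_hz = 60

raster_bytes_per_second = width * height * bytes_per_pixel * refresh_hz
print(raster_bytes_per_second // 1_000_000, "MB/s")  # → 245 MB/s, continuously

# An e-Ink page-turn touches each pixel once, after which the panel holds its state:
page_turn_bytes = width * height * 1    # assume 1 byte per greyscale pixel
print(page_turn_bytes // 1_000, "KB")   # → 1024 KB per page-turn, then nothing
```

The point is not the exact numbers, but that the raster’s cost is continuous while e-Ink’s is event-driven.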

But any PC, Mac or Android graphics card or graphics chip manufactured after the 1990s has a non-trivial GPU – a ‘Graphics Processing Unit’ – that acts as a co-processor, working in parallel with the computer’s main CPU, to take much of the workload associated with rendering graphics to the screen off the CPU. Much of what a modern GPU does consists of taking as input pixels which software running on the CPU wrote, either to a region of dedicated graphics memory, or, in the case of an Android device, to a region of memory shared between the GPU and the CPU, but belonging to the device’s RAM. The GPU then typically ‘transforms’ the image of these pixels into the way they will finally appear on the screen. This ends up modifying a ‘Frame-Buffer’, the contents of which are controlled by the GPU and not the CPU, but which the raster scans, resulting in output to the actual screen.

Transforming an image can take place in a strictly 2D sense, or can take place in a sense that preserves 3D perspective, but that results in 2D screen-output. And it gets applied to desktop graphics as much as to application content. In the case of desktop graphics, the result is called ‘Compositing’, while in the case of application content, the result is either fancier output, or faster execution of the application, on the CPU. And on many Android devices, compositing results in multiple Home-Screens that can be scrolled, and the glitz of which is proven by how smoothly they scroll.
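At its simplest, compositing means blending one surface over another, per pixel. A minimal software sketch of the idea – the pixel format and the 50% opacity are just assumptions for illustration, and a real GPU does this massively in parallel:

```python
# A sketch of 'compositing': blending a top layer over a bottom layer,
# pixel by pixel, the kind of work a GPU does for desktop surfaces.
def composite(bottom, top, alpha):
    """Blend two equally-sized rows of (R, G, B) pixels; alpha is the top layer's opacity."""
    return [
        tuple(int(b * (1 - alpha) + t * alpha) for b, t in zip(p_bottom, p_top))
        for p_bottom, p_top in zip(bottom, top)
    ]

wallpaper = [(0, 0, 128), (0, 0, 128)]            # a blue background
window    = [(255, 255, 255), (255, 255, 255)]    # a white, semi-transparent surface

print(composite(wallpaper, window, 0.5))  # → [(127, 127, 191), (127, 127, 191)]
```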

Either way, a modern GPU is much more versatile than a frame-buffer device was. And its benefits can contribute in unexpected places, such as when an application outputs text to the screen, but the text is merely expected to scroll. Typically, the rasterization of fonts still takes place on the CPU, and results in pixel-values, corresponding to the text to be displayed, being written to shared memory. But the actual scrolling of the text can be performed by the GPU: more than one page of text, with a fixed position in the drawing surface the CPU drew it to, is transformed by the GPU to advancing screen-positions, without the CPU having to redraw any pixels. (:1) This effect is often made more convincing by the fact that, at the end of a sequence, a transformed image is sometimes replaced by a fixed image – a transition in the output, but between two graphics that are completely identical. These two graphics would reside in separate regions of RAM, even though the GPU can render a transition between them.
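The reason this is cheap can be sketched in a few lines. In the toy model below, strings stand in for rows of rasterized pixels, and the surface dimensions are arbitrary assumptions; the point is that each scroll step only selects a shifted window of a surface that was drawn once, without re-rendering a single glyph:

```python
# The 'CPU' rasterizes the text once, into a tall off-screen surface:
surface = [f"row of rasterized text #{n}" for n in range(100)]
screen_height = 10   # assumed number of visible rows

def visible_window(scroll_offset):
    """The 'GPU' transform: translate the surface by scroll_offset rows."""
    return surface[scroll_offset : scroll_offset + screen_height]

print(visible_window(0)[0])   # → "row of rasterized text #0"  (top of the page)
print(visible_window(3)[0])   # → "row of rasterized text #3"  (scrolled down 3 rows)
```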

(Updated 4/20/2019, 12h45 … )


Web-Optimizing Lossless Images

One subject which has caught my interest in recent times, is how to publish lossless images on the Web – and of course, optimize memory-use.

It has been a kind of default setting on most of my desktop software, that I’ll either save images as JPEGs – that are lossy but offer excellent file-sizes – or as PNG-Format files – that are lossless, but that still offer much better file-sizes than utterly uncompressed .BMP-Files would offer. Yet, I might one day want even smaller file-sizes than what PNGs can offer, while preserving the crisp exactitude required in, say, schematics.

The ideal solution to this problem would be, to publish the Web-embedded content directly in SVG-Files, which preserve exact curves literally at any level of magnification. But there are essentially two reasons why this may not generally be feasible:

  1. The images need to be sourced as vector-images, not raster-images. There is no reasonable way to convert rasters into vector-graphics.
  2. There may be a lack of browser-support for SVG specifically. I know that up-to-date Firefox browsers have it, but when publishing Web-content, some consideration needs to be given to older or less-CPU-intensive browsers.

And so there seems to be an alternative which has re-emerged, but which has long been forgotten in the history of the Web, because originally, the emphasis was on reducing the file-size of photos. That alternative seems to exist in GIF-Images. But there is a basic concern which needs to be observed, when using them: The images need to be palettized, before they can be turned into GIFs – which some apps do automatically for the user when prompted to produce a GIF.

What this means is, that while quality photos have a minimum pixel-depth of either 24 or 32 Bits-Per-Pixel (‘BPP’), implying 8 bits per channel, and while this gives us quality images, the set of colors needs to be reduced to a much-smaller set, in order for GIFs actually to become smaller in file-size, than what PNG-Files already exemplify. While 8-bit-palettized colors are possible, offering 256 colors, my main interest would be in the 4-bit or the 1-bit palettes, which either offer the so-called ‘Web-optimized’ standard set of 16 colors, or just offer white or black pixels. And my interest in this format is due to the fact that the published images in question are either truly schematic, or what I would call quasi-schematic, i.e. schematic with colors.
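The raw, pre-compression savings from reducing pixel-depth are easy to put into numbers. A back-of-the-envelope sketch, for an assumed 640×480 schematic:

```python
# Raw (uncompressed) storage at different pixel-depths, before GIF's own
# LZW compression shrinks things further. The 640 x 480 size is an assumption.
width, height = 640, 480
pixels = width * height

raw_24bpp = pixels * 24 // 8   # truecolor: 921600 bytes
raw_4bpp  = pixels * 4 // 8    # 16-color palette: 153600 bytes
raw_1bpp  = pixels * 1 // 8    # black-and-white: 38400 bytes

print(raw_24bpp // raw_4bpp)   # → 6: truecolor carries 6x the raw data
print(raw_24bpp // raw_1bpp)   # → 24
```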

What this means for me as a writer, is that I must open the images in question in GIMP, and change the ‘Image -> Mode’ to the Web-optimized, or the 1-bit palettes, before I can ‘Export To GIF’, and when I do this, I take care to choose ‘Interlaced GIF’, to help browsers deal with the memory-consumption best.
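Under the hood, palettizing means replacing each truecolor pixel with the index of the nearest entry in a small, fixed palette. A minimal sketch of that step – the 4-entry palette below is an assumption for brevity, standing in for GIMP’s 16-color one, and real converters also apply dithering:

```python
# Each truecolor (R, G, B) pixel becomes an index into a small palette.
PALETTE = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 0, 255)]  # black, white, red, blue

def nearest_index(pixel):
    """Index of the palette color closest to 'pixel', by squared distance."""
    return min(
        range(len(PALETTE)),
        key=lambda i: sum((c - p) ** 2 for c, p in zip(PALETTE[i], pixel)),
    )

image = [(250, 250, 250), (10, 0, 0), (200, 30, 20)]   # near-white, near-black, reddish
print([nearest_index(p) for p in image])               # → [1, 0, 2]
```

The ‘loss in color-accuracy’ discussed below is exactly the distance between each original pixel and the palette entry it gets snapped to.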

In the case of a true 1-bpp schematic, the effect is almost lossless: the example shown below has already occurred elsewhere in my blog, and appears as sharp here as the former, PNG-formatted variety did:

schem-1_8

In the case of a quasi-schematic, there is noticeable loss in quality, due to the reduction in color-accuracy, but a considerable reduction in file-size. The lossless, PNG-format example is this:

quasi-schem-1_1

While the smaller, GIF-format File would be this:

quasi-schem-1_9

There is some mention that, for larger, more-complex schematics, GIFs take up too much memory. But when an image really has been large, regardless of what I might like, I’ve been switching to JPEGs anyway.

There could be some controversy, as to whether this can be referred to as lossless in any way.

The answer to that would be, that this results in either 1 or 4 bit-planes, and that the transmission of each bit-plane will be without alteration of any kind – i.e., lossless. But there will be the mentioned loss in color-accuracy, when converting the original pixel-values to the simplified colors – i.e. lossy.
