Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).

There is a fact about modern graphics chips which some people may not be aware of – especially some Linux users – and which I was recently reminded of, because I have bought an e-Reader that runs the Android O/S, but that features the energy-saving benefits of “e-Ink”. e-Ink is an innovative display technology with a surface that somewhat resembles paper: each pixel’s brightness can vary between white and black, and the display mainly uses available light, although back-lit and front-lit versions of e-Ink now exist. It consumes very little current, so that it’s frequently possible to read an entire book on one battery-charge. With an average Android tablet that merely has an LCD, the battery-life can impede enjoying an e-Book.

An LCD still has one thing in common with the old CRTs: it is refreshed at a fixed frequency by something called a “raster” – a pattern that scans a region of memory and feeds pixel-values to the display sequentially, perhaps 60 times per second, thus refreshing the display that often. e-Ink pixels, by contrast, are sent a signal once, to change brightness, and then stay at the assigned brightness level until they receive another signal to change again. What this means is that, at the hardware level, e-Ink is less powerful than ‘frame-buffer devices’ once were.
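Just to make that contrast concrete, below is a purely conceptual sketch in C++, of my own invention: neither function corresponds to any real driver code, and the names are made up for illustration. The first models a display whose entire contents are scanned out at a fixed rate, the second models a pixel that is only driven when its value actually changes.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// LCD / CRT model: the raster walks the whole frame-buffer roughly 60 times
// per second, whether or not anything on the screen has changed.
void refreshLcd(const std::vector<std::uint8_t> &frameBuffer)
{
    for (std::uint8_t value : frameBuffer) {
        (void)value;  // each pixel-value is sent to the panel, every single frame
    }
}

// e-Ink model: a pixel receives one signal to change, then holds its level,
// with no further refresh and therefore almost no further power draw.
void updateEInkPixel(std::vector<std::uint8_t> &panel, std::size_t index,
                     std::uint8_t newValue)
{
    if (panel[index] != newValue) {
        panel[index] = newValue;  // one signal; the pixel then simply stays put
    }
}
```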

But any PC, Mac or Android graphics card or graphics chip manufactured after the 1990s has a non-trivial GPU – a ‘Graphics Processing Unit’ – that acts as a co-processor, working in parallel with the computer’s main CPU, to take much of the workload associated with rendering graphics to the screen off the CPU. Much of what a modern GPU does consists of taking as input pixels which software running on the CPU wrote, either to a region of dedicated graphics memory or, in the case of an Android device, to a region of memory shared between the GPU and the CPU but still part of the device’s RAM. The GPU then typically ‘transforms’ the image formed by these pixels into the way it will finally appear on the screen. This ends up modifying a ‘Frame-Buffer’, the contents of which are controlled by the GPU and not the CPU, but which the raster scans, resulting in output to the actual screen.

Transforming an image can take place in a strictly 2D sense, or in a sense that preserves 3D perspective but still results in 2D screen-output. And it gets applied to desktop graphics as much as to application content. In the case of desktop graphics, the result is called ‘Compositing’, while in the case of application content, the result is either fancier output, or faster execution of the application on the CPU. On many Android devices, compositing produces the multiple Home-Screens that can be scrolled, the glitz of which is proven by how smoothly they scroll.

Either way, a modern GPU is much more versatile than a frame-buffer device was. And its benefits can contribute in unexpected places, such as when an application outputs text to the screen, and the text is merely expected to scroll. Typically, the rasterization of fonts still takes place on the CPU, and results in pixel-values being written to shared memory, corresponding to the text to be displayed. But the actual scrolling of that text can be performed by the GPU: more than one page of text, with a fixed position in the drawing surface the CPU drew it to, is transformed by the GPU to advancing screen-positions, without the CPU having to redraw any pixels. This effect is often made more convincing by the fact that, at the end of a sequence, a transformed image is sometimes replaced by a fixed image, in a transition between two graphics that are completely identical. These two graphics would reside in separate regions of RAM, even though the GPU can render a transition between them.
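As a hedged, hypothetical illustration of that division of labour in OpenGL terms: the CPU rasterizes a page of text into a texture once, and from then on the only per-frame work is updating a single scroll-offset uniform, which a tiny vertex shader applies to a textured quad. The shader code and the name ‘scrollOffset’ are my own invention, not taken from any actual e-Reader firmware.

```cpp
// Hypothetical sketch: scrolling pre-rendered text on the GPU.
// The CPU rasterized the text into 'textTexture' once; afterwards it only
// changes one uniform per frame, and never touches the pixels again.

const char *vertexShaderSource = R"(
    #version 120
    attribute vec2 position;      // corners of the quad holding the text
    attribute vec2 texCoord;
    uniform float scrollOffset;   // how far the text has scrolled, in clip-space units
    varying vec2 vTexCoord;
    void main() {
        gl_Position = vec4(position.x, position.y + scrollOffset, 0.0, 1.0);
        vTexCoord = texCoord;
    }
)";

const char *fragmentShaderSource = R"(
    #version 120
    uniform sampler2D textTexture;  // the page of text the CPU drew once
    varying vec2 vTexCoord;
    void main() {
        gl_FragColor = texture2D(textTexture, vTexCoord);
    }
)";

// Per frame, the CPU's entire contribution to the scroll is roughly:
//   glUseProgram(program);
//   glUniform1f(glGetUniformLocation(program, "scrollOffset"), currentOffset);
//   ... draw the two triangles of the quad ...
```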

(Updated 4/20/2019, 12h45 … )


I no longer have Compiz-Fusion running on the LXDE-based computer, named ‘Klexel’.

According to this earlier posting, I had run into stability issues with my newly-reinstalled Linux computer, which I name ‘Klexel’. Well, the only sensible way, finally, to solve those problems was to deactivate ‘Compiz Fusion’, a special window-manager / compositor that creates a desktop-cube animation, as well as certain other effects chosen by the user out of a long menu of effects, but which needs to run on the graphics hardware, using OpenGL.

Even though Compiz Fusion is fancy and seems like a nice idea, I’ve run into the following problems with it, in my own experience:

  • Compiz is incompatible with Plasma 5, which is still my preferred desktop manager.
  • If we have a weak graphics chip, such as the one provided on the computer named ‘Klexel’ using ‘i915 Support’, trying to run Compiz on it forces the so-called GPU to jump through too many hoops, to display what it’s being asked to display.

Before a certain point in time, even a hardware-accelerated graphics chip only consisted of X vertex pipelines and Y fragment pipelines, and had other strict limitations on what it could do. It was only after that point in time that the “Unified Shader Model” was introduced, whereby any GPU core could act as a vertex shader core, as a fragment shader core, etc. And after that point in time, the GPU also became capable of rendering its output to texture images, several stages deep… Well, programmers today tend to program for the eventuality that the host machine has ‘a real GPU’, with the Unified Shader Model and essentially unlimited cores, as well as unlimited texture space.
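As a rough sketch of what ‘rendering output to a texture image’ looks like in standard OpenGL, assuming a current GL context and an extension loader such as GLEW are already in place; the function name is purely illustrative:

```cpp
#include <GL/glew.h>   // assumed loader; any loader providing the FBO entry points works

// Creates a framebuffer object that redirects rendering into a texture, which a
// later rendering pass can then sample - 'several stages deep', as described above.
GLuint makeRenderTarget(int width, int height)
{
    // The texture that will receive the rendered output:
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // The framebuffer object that makes 'tex' the rendering destination:
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    // Anything drawn while this FBO is bound lands in 'tex' rather than on the screen.
    return fbo;
}
```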

The “HP Compaq DC7100 SFF” that has become my computer ‘Klexel’ is an ancient computer whose graphics chip stems from ‘the old days’. That seems to have been an Intel 910, which has as its hardware capability direct rendering with OpenGL 1.4, the open-standard equivalent of DirectX 7 or 8. Even though some Compiz effects only require OpenGL 1.4, by default I need to run the computer named ‘Klexel’ without compositing:

(Screenshot from 2018-09-04: the desktop settings on ‘Klexel’, with compositing disabled.)

Also, before, when this was the computer ‘Walnut’, it actually still had KDE 3 on it! And KDE 3 also ran, essentially, without compositing.

It should finally be stable again, now.

By comparison, the computer which acts as Web-server and hosts this blog, which I name ‘Phoenix’, has as its graphics chip an Nvidia “GeForce 6150SE”. That chip is more powerful than the Intel ‘i915’ series was, and is capable of OpenGL 2.1, roughly equivalent to DirectX 9, but it still predated the Unified-Shader-Model chips. Microsoft has even dropped support for this graphics chip, because according to Microsoft, it’s no longer powerful enough. And so up-to-date Windows versions won’t run on either of these two computers.

(Update 09/04/2018, 18h20 : )


I’m impressed with the Mesa drivers.

Before we install Linux on our computers, we usually try to make sure that we have either an NVIDIA or an AMD / Radeon GPU – the graphics chip-set – so that we can use either the proprietary NVIDIA drivers designed by that company to run under Linux, the proprietary ‘fglrx’ drivers provided by AMD, or the ‘Mesa’ drivers, which are open-source, and which are designed by Linux specialists. Because each set of proprietary drivers only covers one of the available families of chip-sets, after we have installed Linux our choice boils down to one between the proprietary drivers for our chip-set and the Mesa drivers.

I think that the main advantage of the proprietary drivers remains that they will offer our computers the highest version of OpenGL possible from the hardware – which can go up to 4.5! But obviously, there are also advantages to using Mesa, one of which is the fact that installing those drivers doesn’t install a ‘blob’ – an opaque piece of binary code which nobody can analyze. Another is the fact that the Mesa drivers provide ‘VDPAU’, which the ‘fglrx’ drivers fail to implement. This last detail has to do with the hardware-accelerated playback of 2D video streams that have been compressed with one out of a very short list of codecs.

But I would add to the possible reasons for choosing Mesa the fact that its stated OpenGL version number does not set a real limit on what the graphics chip-set can do. Officially, Mesa offers OpenGL 3.0, and on the surface this could make it look as though its implementation of OpenGL is somewhat lacking, as a trade-off against its other benefits.
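For what it’s worth, the version and renderer that the installed driver actually reports can be read back with two standard OpenGL string queries; a current GL context is assumed, and the comment about Mesa’s string is only a typical example.

```cpp
// Standard OpenGL string queries; they require a current GL context.
const char *version  = reinterpret_cast<const char *>(glGetString(GL_VERSION));
const char *renderer = reinterpret_cast<const char *>(glGetString(GL_RENDERER));
// On a Mesa installation, the version string typically begins with the supported
// GL version followed by the word "Mesa", while the renderer string names the chip-set.
```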

One way in which ‘OpenGL’ seems to differ from its real-life competitor, ‘DirectX’, is in the system by which certain DirectX drivers and hardware offer a numeric feature level, where, if that feature level has been achieved, the game-designer can count on a specific set of features being implemented. What seems to happen with OpenGL instead is that version 3.0 must first be satisfied. If it is, the 3D application next checks individually whether the available OpenGL system offers specific OpenGL extensions, by name. If the application is very well-written, it will test for the existence of every extension it needs, before giving the command to load that extension. But in certain cases, a failure to test this can lead to the graphics card crashing, because the graphics card itself may not have the extension requested.
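Below is a minimal sketch of that kind of defensive test, using the standard extension queries that exist since OpenGL 3.0; the helper name hasExtension and the loader header are my own assumptions.

```cpp
#include <cstring>
#include <GL/glew.h>   // assumed loader; any loader providing glGetStringi works

// Returns true if the current GL context advertises the named extension.
bool hasExtension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);    // available since OpenGL 3.0
    for (GLint i = 0; i < count; ++i) {
        const GLubyte *ext = glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i));
        if (ext && std::strcmp(reinterpret_cast<const char *>(ext), name) == 0)
            return true;
    }
    return false;
}

// A well-behaved application only takes a given rendering path if the test passes:
//   if (hasExtension("GL_ARB_framebuffer_object")) { /* use FBOs */ } else { /* fall back */ }
```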

As an example of what I mean, my KDE / Plasma compositor settings allow me to choose ‘OpenGL 3.1’ as an available back-end, and when I select it, it works, in spite of my Mesa drivers ‘only’ achieving 3.0. I think that if the drivers had been stated to be 3.1, this could actually mean they lose backward-compatibility with 3.0, whereas in fact they preserve that backward-compatibility as much as possible.

(Screenshots from 2017-11-27: the Plasma compositor settings, with ‘OpenGL 3.1’ selected as the rendering back-end.)


Quickie: How 2D Graphics is just a Special Case of 3D Graphics

I have previously written in depth about what the rendering pipeline is, by which 3D graphics are rendered to a 2D, perspective view, as part of computer games, or as part of other applications that require 3D in real time. But one problem with my writing in depth might be that people fail to see the relevance of the words, if the word-count goes beyond 500 words. :-)

So I’m going to try to summarize it more briefly.

Vertex positions in 3D can be rotated and translated using matrices. Matrices can also be composed, meaning that if a sequence of multiplications of position-vectors by known matrices accomplishes what we want, then a multiplication by a single, derived matrix can accomplish the same thing.
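Here is a small, self-contained illustration of that composition, with my own plain-C++ 4×4 matrices rather than any particular graphics library: two transforms are multiplied once into a single matrix, and from then on every vertex only needs one multiplication.

```cpp
#include <array>

using Mat4 = std::array<std::array<float, 4>, 4>;
using Vec4 = std::array<float, 4>;

// r = a * b
Mat4 multiply(const Mat4 &a, const Mat4 &b)
{
    Mat4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r[row][col] += a[row][k] * b[k][col];
    return r;
}

// r = m * v, treating v as a column vector
Vec4 transform(const Mat4 &m, const Vec4 &v)
{
    Vec4 r{};
    for (int row = 0; row < 4; ++row)
        for (int k = 0; k < 4; ++k)
            r[row] += m[row][k] * v[k];
    return r;
}

// If rotating and then translating accomplishes what we want, then
//   Mat4 combined = multiply(translate, rotate);
// applies both in one step: transform(combined, vertex) gives the same result
// as transform(translate, transform(rotate, vertex)).
```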

Under DirectX 9 or OpenGL 2.x, 3D objects consisted of vertices that formed triangles, the positions and normal-vectors of which were transformed and rotated, respectively, and the vertices additionally possessed texture coordinates. All of this could be processed by “Vertex Pipelines”. The output from the Vertex Pipelines was then rasterized and interpolated, and fed to “Pixel Pipelines”, which performed per-screen-pixel computations on the interpolated values, and on how those values were applied to Texture Images, which were sampled.

All this work was done by dedicated graphics hardware, which is now known as a GPU. It was not done by software.

One difference that exists today is that the specialization of GPU cores into Vertex- and Pixel-Pipelines no longer exists. Due to something called the Unified Shader Model, any one GPU core can act either as a Vertex- or as a Pixel-Shader, and powerful GPUs possess hundreds of cores.

So the practical question does arise, of how any of this applies to 2D applications, such as Desktop Compositing. And the answer would be that it has always been possible to render a single rectangle as though it was oriented in a 3D coordinate system. This rectangle, which is also referred to as a “Quad”, first gets Tessellated, which means that it receives a diagonal subdivision into two triangles, which are still references to the same 4 vertices as before.

When an application receives a drawing surface onto which it draws its GUI – using CPU-time – the corners of this drawing surface have 2D texture coordinates that are combinations of 0 and +1. The drawing surfaces themselves can be input to the GPU as though they were Texture Images. And the 4 vertices that define the position of the drawing surface on the desktop can simply result from a matrix which is much simpler than any matrix that performed rotation in 3D, etc., would have needed to be, before a screen-position could be formed from it. Either way, the Vertex Program only needs to multiply the (notional) positions of the corners of a drawing surface by a single matrix, before a screen-position results. In the 2D case, this matrix does not need to be computed from complicated trig functions.
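As a hypothetical sketch of what such a composited window amounts to as data, with all values chosen purely for illustration: 4 vertices referenced by 2 triangles, texture coordinates that are combinations of 0 and 1, and one simple positioning matrix with no trigonometry in it.

```cpp
struct Vertex {
    float x, y;   // corner of the drawing surface, in its own (unit) coordinates
    float u, v;   // texture coordinate into the surface the CPU drew
};

// One window-sized quad:
Vertex quad[4] = {
    { 0.0f, 0.0f,  0.0f, 0.0f },   // bottom-left
    { 1.0f, 0.0f,  1.0f, 0.0f },   // bottom-right
    { 1.0f, 1.0f,  1.0f, 1.0f },   // top-right
    { 0.0f, 1.0f,  0.0f, 1.0f },   // top-left
};

// "Tessellation" of the quad: the same 4 vertices, referenced as 2 triangles.
unsigned short indices[6] = { 0, 1, 2,   0, 2, 3 };

// For 2D compositing, the single matrix only needs to scale the unit quad to the
// window's size and translate it to its place on the desktop.
// (Illustrative values: a 400x300 window whose corner sits at (100, 50).)
float positionMatrix[4][4] = {
    { 400.0f,   0.0f, 0.0f, 100.0f },
    {   0.0f, 300.0f, 0.0f,  50.0f },
    {   0.0f,   0.0f, 1.0f,   0.0f },
    {   0.0f,   0.0f, 0.0f,   1.0f },
};
```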

And the GPU renders the scene to a frame-buffer, just as it rendered 3D games.
