Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).

There is a fact about modern graphics chips which some people may not be aware of – especially some Linux users – and which I was recently reminded of, because I have bought an e-Reader that runs the Android O/S, but that features the energy-saving benefits of “e-Ink”. e-Ink is an innovative display technology with a surface somewhat resembling paper, the brightness of which can vary between white and black, but which mainly uses available light, although back-lit and front-lit versions of e-Ink now exist. It consumes very little current, so that it’s frequently possible to read an entire book on one battery-charge. With an average Android tablet that merely has an LCD, the battery-life can impede enjoying an e-Book.

An LCD still has one thing in common with the old CRTs: it is refreshed at a fixed frequency by something called a “raster” – a pattern that scans a region of memory and feeds pixel-values to the display sequentially, perhaps 60 times per second, thus refreshing the display that often. e-Ink pixels, by contrast, are sent a signal once, to change brightness, and then stay at the assigned brightness level until they receive another signal to change again. What this means is that, at the hardware-level, e-Ink is less powerful than ‘frame-buffer devices’ once were.
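The difference can be sketched as a toy model in plain C. Nothing here is real driver code – the buffers and function names are invented for illustration – but it shows the two update models: a raster that recopies the whole memory-region on every refresh, versus e-Ink pixels that hold their state after a single signal:

```c
#include <stdint.h>
#include <string.h>

#define W 8
#define H 8

/* A toy frame-buffer: software writes pixel-values here at will. */
static uint8_t framebuffer[W * H];

/* The LCD panel, refreshed by the raster ~60 times per second: the
   entire memory-region is scanned out again on every refresh,
   whether or not anything changed. */
static uint8_t lcd_panel[W * H];

void raster_refresh(void) {
    memcpy(lcd_panel, framebuffer, sizeof framebuffer);
}

/* e-Ink: no continuous raster.  A pixel is sent one signal to change
   brightness, and then simply holds that level, drawing almost no
   current, until it is signalled again. */
static uint8_t eink_panel[W * H];

void eink_update_pixel(int x, int y, uint8_t level) {
    eink_panel[y * W + x] = level;  /* one-shot update; nothing refreshes it */
}
```

The power cost of the LCD model comes from running `raster_refresh()` continuously, 60 times per second; the e-Ink model only spends energy inside `eink_update_pixel()`, when a pixel actually changes.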

But any PC, Mac or Android graphics card or graphics chip manufactured since the late 1990s has a non-trivial GPU – a ‘Graphics Processing Unit’ – that acts as a co-processor, working in parallel with the computer’s main CPU, to take much of the workload associated with rendering graphics to the screen off the CPU. Much of what a modern GPU does consists of taking as input pixels which software running on the CPU wrote, either to a region of dedicated graphics memory or, in the case of an Android device, to a region of memory shared between the GPU and the CPU but part of the device’s RAM. The GPU then typically ‘transforms’ the image of these pixels, to the way they will finally appear on the screen. This ends up modifying a ‘Frame-Buffer’, the contents of which are controlled by the GPU and not the CPU, but which the raster scans, resulting in output to the actual screen.

Transforming an image can take place in a strictly 2D sense, or in a sense that preserves 3D perspective but results in 2D screen-output. And it gets applied to desktop graphics as much as to application content. In the case of desktop graphics, the result is called ‘Compositing’, while in the case of application content, the result is either fancier output, or faster execution of the application on the CPU. And on many Android devices, compositing results in multiple Home-Screens that can be scrolled, the glitz of which is proven by how smoothly they scroll.

Either way, a modern GPU is much more versatile than a frame-buffer device was. And its benefits can contribute in unexpected places, such as when an application outputs text to the screen, but the text is merely expected to scroll. Typically, the rasterization of fonts still takes place on the CPU, and results in pixel-values being written to shared memory, that correspond to the text to be displayed. But the actual scrolling of the text can be performed by the GPU: more than one page of text, with a fixed position in the drawing surface the CPU drew it to, is transformed by the GPU to advancing screen-positions, without the CPU having to redraw any pixels. (:1) This effect is often made more convincing by the fact that, at the end of a sequence, a transformed image is sometimes replaced by a fixed image – a transition of the output between two graphics that are completely identical. These two graphics would reside in separate regions of RAM, even though the GPU can render a transition between them.
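The scrolling trick can also be sketched in C. This is a simplification I am making for illustration only – one 32-bit value stands in for a whole row of pixels, and the function names are invented – but it shows the division of labour: the CPU rasterizes the text into a fixed surface once, and ‘scrolling’ is then only a changing offset applied when sampling that surface, with no pixels redrawn:

```c
#include <stdint.h>

#define SURFACE_ROWS 1024   /* more than one page of text, drawn once */

/* The drawing surface the CPU rasterized fonts into (shared memory).
   One uint32_t stands in for an entire row of pixels, simplified. */
static uint32_t surface[SURFACE_ROWS];

/* CPU side: rasterize the text into the surface, once.  Here each
   row just records its own index, standing in for rendered glyphs. */
void cpu_draw_text(void) {
    for (int row = 0; row < SURFACE_ROWS; ++row)
        surface[row] = (uint32_t)row;
}

/* GPU side: to scroll, transform the fixed surface to an advancing
   screen-position.  No entry of 'surface' is ever rewritten; only
   the offset through which it is sampled changes, per frame. */
uint32_t gpu_sample_row(int screen_row, int scroll_offset) {
    return surface[(screen_row + scroll_offset) % SURFACE_ROWS];
}
```

Advancing `scroll_offset` by a few rows per frame produces smooth scrolling, while `cpu_draw_text()` is never called again.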

(Updated 4/20/2019, 12h45 … )

Continue reading Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).

How SDL Accelerates Video Output under Linux.

What we might know about the Linux X-server is that it offers the pure X-protocol to render such features efficiently to the display, as Text with Fonts, simple GUI-elements, and small bitmaps such as icons… But then, when moving pictures need to be sent to the display, we need extensions, which serious Linux-users take for granted. One such extension is the Shared-Memory extension.

Its premise is that the X-server shares a region of RAM with the client application, into which the client-application can draw pixels, which the X-server then transfers to Graphics Memory.

For moving pictures, this offers one way in which they can also be ‘accelerated’, because that memory-region stays mapped, even when the client-application redraws it many times.

But this extension does not make significant use of the GPU, only of the CPU.
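The premise can be demonstrated with ordinary POSIX calls. The sketch below uses an anonymous shared mapping and fork(), with the child standing in for the client-application and the parent for the X-server; the real MIT-SHM extension uses System V shared-memory segments attached via calls such as XShmAttach() and XShmPutImage(), which are not shown here:

```c
#define _DEFAULT_SOURCE
#include <stdint.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

enum { WIDTH = 640, HEIGHT = 480 };

/* Two processes access one pixel-buffer without copying it through a
   socket: the essence of the Shared-Memory extension. */
uint32_t shared_pixel_roundtrip(void) {
    size_t size = (size_t)WIDTH * HEIGHT * sizeof(uint32_t);
    uint32_t *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (pixels == MAP_FAILED)
        return 0;

    pid_t pid = fork();
    if (pid == 0) {
        /* "Client application": draws a pixel into the shared region. */
        pixels[0] = 0x00FF00FFu;
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    /* "X server": reads the very same memory, with no transfer needed;
       the mapping stays in place across any number of redraws. */
    uint32_t value = pixels[0];
    munmap(pixels, size);
    return value;
}
```

Note that all the copying here is done by the CPUs of the two processes – which is exactly the limitation described above.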

And so there exists something called SDL, which stands for Simple DirectMedia Layer. And one valid question we may ask ourselves about this library, is how it achieves a speed improvement, if it’s only installed on Linux systems as a set of user-space libraries, not drivers.

(Updated 10/06/2017 : )

Continue reading How SDL Accelerates Video Output under Linux.

Understanding that The GPU Is Real

A type of graphics hardware which once existed was an arrangement by which a region of memory was formatted to correspond directly to screen-pixels, and by which a primitive set of chips would rasterize that memory-region, sending the analog equivalent of pixel-values to an output-device such as a monitor, even while the CPU was writing changes to the same memory-region. This type of graphics arrangement is currently referred to as “A Framebuffer Device”. Since the late 1990s, these types of graphics have been replaced by graphics that possess a ‘GPU’ – a Graphics Processing Unit. The acronym GPU is formed similarly to the acronym ‘CPU’, which stands for Central Processing Unit.

A GPU is essentially a kind of co-processor, which does a lot of the graphics-work that the CPU once needed to do, back in the days of framebuffer-devices. The GPU has been optimized, where present, to give real-time 2D, perspective renderings of 3D scenes, which are fed to it in a language that is either some version of DirectX, or some version of OpenGL. But modern GPUs are also capable of performing certain 2D tasks, such as accelerating the playback of compressed video-streams at very high resolutions, and performing Desktop Compositing.
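The core of those perspective renderings can be reduced to one arithmetic step, the perspective divide, sketched here in C. A real GPU applies 4×4 matrix transforms to millions of vertices per frame, in hardware; the function and its focal-length parameter below are only an illustration of the principle:

```c
/* Project a 3D point onto a 2D screen plane: the coordinates are
   divided by the point's distance from the viewer, so that farther
   points land closer to the centre of the screen. */
typedef struct { double x, y; } Point2D;

Point2D project(double x, double y, double z, double focal_length) {
    Point2D p;
    p.x = focal_length * x / z;
    p.y = focal_length * y / z;
    return p;
}
```

A point at (2, 4) that is 2 units deep projects to (1, 2) with a focal length of 1; double its depth to 4 and it projects to (0.5, 1) – which is the entire visual effect of perspective.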


What they do is called raster-based rendering, as opposed to ray-tracing; ray-tracing cannot usually be accomplished in real-time.

And modern smart-phones and tablets also typically have GPUs, which give them some of their smooth home-screen effects and animations – all of which would be prohibitive to program under software-based graphics.

The fact that some phone or computer has been designed and built by Apple, does not mean that it has no GPU. Apple presently uses OpenGL as its main language to communicate 3D to its GPUs.

DirectX is totally owned by Microsoft.

The GPU of a general-purpose computing device often possesses additional protocols for accepting data from the CPU, other than DirectX or OpenGL. The accelerated decompression and display of 2D video-streams would be an example of that, which is possible under Linux, if a graphics-driver supports ‘vdpau’ …

Dirk