Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).

There is a fact about modern graphics chips of which some people may not be aware – especially some Linux users – but of which I was recently reminded, because I have bought an e-Reader that runs the Android O/S, but that features the energy-saving benefits of “e-Ink”. e-Ink is an innovative display technology with a surface somewhat resembling paper, whose pixels can vary in brightness between white and black, and which mainly uses available light, although back-lit and front-lit versions of e-Ink now exist. Because it consumes very little current, it’s frequently possible to read an entire book on one battery-charge. With an average Android tablet that merely has an LCD, the battery-life can impede enjoying an e-Book.

An LCD still has one thing in common with the old CRTs: it is refreshed at a fixed frequency by something called a “raster” – a pattern that scans a region of memory and feeds pixel-values to the display sequentially, typically 60 times per second, thus refreshing the display that often. e-Ink pixels, by contrast, are sent a signal once, to change brightness, and then stay at the assigned brightness level until they receive another signal to change again. What this means is that, at the hardware level, e-Ink is less powerful than ‘frame-buffer devices’ once were.
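
To caricature that difference in C: the sketch below simulates both kinds of panel as plain arrays. Every name in it is a hypothetical stand-in of my own, not any real driver API; the point is only the shape of the two loops.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define WIDTH  8
#define HEIGHT 8

static uint8_t framebuffer[HEIGHT][WIDTH]; /* what the CPU/GPU wrote */
static uint8_t lcd_panel[HEIGHT][WIDTH];   /* stand-in for a physical LCD */
static uint8_t eink_panel[HEIGHT][WIDTH];  /* stand-in for an e-Ink surface */

/* LCD / CRT model: the raster re-reads the whole frame-buffer and
 * re-drives every pixel on every refresh (~60 Hz), whether or not
 * anything changed; this constant scanning is what draws current. */
static void lcd_refresh_once(void)
{
    for (size_t y = 0; y < HEIGHT; y++)
        for (size_t x = 0; x < WIDTH; x++)
            lcd_panel[y][x] = framebuffer[y][x];
}

/* e-Ink model: a pixel is driven once, when its value changes,
 * and then holds that brightness with no further current. */
static void eink_update_pixel(size_t x, size_t y, uint8_t v)
{
    eink_panel[y][x] = v; /* one-shot; the pixel then stays put */
}

int main(void)
{
    memset(framebuffer, 0xFF, sizeof framebuffer); /* a 'white' page */

    for (int frame = 0; frame < 60; frame++)
        lcd_refresh_once();        /* LCD: 60 refreshes for one static page */

    eink_update_pixel(3, 4, 0x00); /* e-Ink: one signal, one changed pixel */
    return 0;
}
```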

But any PC, Mac or Android graphics card or graphics chip manufactured after the 1990s has a non-trivial GPU – a ‘Graphics Processing Unit’ – that acts as a co-processor, working in parallel with the computer’s main CPU, to take much of the workload associated with rendering graphics to the screen off the CPU. Much of what a modern GPU does consists of taking as input pixels which software running on the CPU wrote, either to a region of dedicated graphics memory or, in the case of an Android device, to a region of memory that is part of the device’s RAM but shared between the GPU and the CPU. The GPU then typically ‘transforms’ the image of these pixels, into the way they will finally appear on the screen. This ends up modifying a ‘Frame-Buffer’, the contents of which are controlled by the GPU and not the CPU, but which the raster scans, resulting in output to the actual screen.

Transforming an image can take place in a strictly 2D sense, or in a sense that preserves 3D perspective but results in 2D screen-output. And it gets applied to desktop graphics as much as to application content. In the case of desktop graphics, the result is called ‘Compositing’, while in the case of application content, the result is either fancier output, or faster execution of the application on the CPU. On many Android devices, compositing is what produces the multiple Home-Screens that can be scrolled – and their glitz shows in how smoothly they scroll.

Either way, a modern GPU is much more versatile than a frame-buffer device was. And its benefits can contribute in unexpected places, such as when an application outputs text to the screen, and the text is merely expected to scroll. Typically, the rasterization of fonts still takes place on the CPU, and results in pixel-values being written to shared memory, corresponding to the text to be displayed. But the actual scrolling of the text can be performed by the GPU: more than one page of text, which has a fixed position in the drawing surface the CPU drew it to, is transformed by the GPU to advancing screen-positions, without the CPU having to redraw any pixels. (:1) This effect is often made more convincing by the fact that, at the end of such a sequence, a transformed image is sometimes replaced by a fixed image – a transition of the output between two graphics that are completely identical. These two graphics would reside in separate regions of RAM, even though the GPU can render a transition between them.
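
As a sketch of that arrangement in C: the ‘GPU’ below is simulated with a plain memory-copy, and all names are hypothetical inventions of mine. On real hardware, gpu_present() would be a 2D blit or a textured quad drawn by the GPU; the point is that after the one-time rasterization, scrolling only changes an offset.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define SURFACE_H 2048  /* CPU-drawn text surface: more than one page tall */
#define SCREEN_H   768
#define WIDTH     1024

static uint8_t text_surface[SURFACE_H][WIDTH]; /* CPU rasterizes fonts here, once */
static uint8_t framebuffer[SCREEN_H][WIDTH];   /* what the raster will scan out */

/* Stand-in for the GPU's transform: present the CPU-drawn surface at a
 * given vertical offset, without the CPU redrawing a single glyph. */
static void gpu_present(size_t scroll_y)
{
    memcpy(framebuffer, &text_surface[scroll_y], sizeof framebuffer);
}

int main(void)
{
    /* 1. CPU-side rasterization, done once (here just a dummy pattern). */
    for (size_t y = 0; y < SURFACE_H; y++)
        for (size_t x = 0; x < WIDTH; x++)
            text_surface[y][x] = (uint8_t)(y ^ x);

    /* 2. Scrolling: only the transform's offset changes per frame. */
    for (size_t scroll_y = 0; scroll_y + SCREEN_H <= SURFACE_H; scroll_y += 4)
        gpu_present(scroll_y);

    return 0;
}
```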

(Updated 4/20/2019, 12h45 … )

Continue reading Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).

How to Add a Web-browser to GNURoot + XSDL.

In This earlier posting – one out of several – I had explained that I’ve installed the Android apps “GNURoot Debian” and “XSDL” on my old Samsung Galaxy Tab S (first generation). The purpose is to install Linux software on that tablet, without requiring that I root it. This uses the Android variant of ‘chroot’, which is actually called ‘proot’, and is quick and painless.

However, there are certain things which a ch-rooted Linux system cannot do. One of them is to start services that run in the background. Another is to access hardware, since doing the latter would require access to the host’s ‘/dev’ folder, not the local, ch-rooted ‘/dev’ folder. Finally, because XSDL is acting as my X-server when GNURoot’s guest-software tries to connect to one, there will be no hardware-acceleration, because this X-server is really just an Android app, and does not correspond to a real display device.

This last detail can be quite challenging because, in today’s world, many Linux applications require direct rendering, and will not function properly if left to use the X-server protocol alone, à la legacy-Unix. One such application is any serious Web-browser.

This does not result from any malfunction of either Android app; it just follows from the logic of what the apps are being asked to do.
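
One can in fact confirm, from inside the guest-system, what it is being offered. Below is a small diagnostic of my own in C, using Xlib and GLX – not anything from either app’s documentation. Compiled under GNURoot (something like gcc check_dri.c -o check_dri -lX11 -lGL, with the X11 and Mesa development packages installed) and run against XSDL, it should report either that the GLX extension is missing altogether, or that only indirect rendering is available:

```c
/* check_dri.c -- report whether the X-server offers direct rendering.
 * Build: gcc check_dri.c -o check_dri -lX11 -lGL */
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    /* Uses $DISPLAY, which under GNURoot + XSDL points at the XSDL app. */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "Cannot open display.\n");
        return 1;
    }

    int err_base, ev_base;
    if (!glXQueryExtension(dpy, &err_base, &ev_base)) {
        printf("No GLX extension: no OpenGL at all on this display.\n");
        XCloseDisplay(dpy);
        return 0;
    }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) {
        printf("No suitable GLX visual.\n");
        XCloseDisplay(dpy);
        return 0;
    }

    /* Ask for a direct context; glXIsDirect() reveals what we really got. */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    printf("Direct rendering: %s\n",
           (ctx && glXIsDirect(dpy, ctx)) ? "yes" : "no (indirect only)");

    if (ctx)
        glXDestroyContext(dpy, ctx);
    XFree(vi);
    XCloseDisplay(dpy);
    return 0;
}
```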

But we’d like to have a Web-browser installed, and will find that “Firefox”, “Arora”, etc., all fail over this issue. This initially leaves us in an untenable situation. Even if we were not planning to use our Linux guest-system for Web-browsing – because there is a ‘real’ Web-browser installed on the (Android) host-system – the situation can still arise in which a Web-document needs to be viewed anyway – say, because we want to click on an HTML-file that constitutes the online documentation for some Linux application.

What can we do?

Continue reading How to Add a Web-browser to GNURoot + XSDL.

“Hardware Acceleration” is a bit of a Misnomer.

It gets mentioned quite frequently that certain applications offer the user services with “Hardware Acceleration”. This terminology can in fact be misleading – in a way that has no practical consequences – because computations that are hardware-accelerated are still executed according to software, which has been compiled or assembled into micro-instructions. Only, those micro-instructions are not executed on the main CPU of the machine.

Instead, those micro-instructions are executed either on the GPU, or on some other coprocessor, which provides the accelerating hardware.

Often, the compiling of code meant to run on a GPU – even though it is, in theory, the same as compiling regular software – has its own special considerations. For example, this code often consists of only a few micro-instructions, over which great care must be taken to make sure that they run correctly on as many GPUs as possible. I.e., when we are coding a shader, KISS is often a main paradigm. And the possibility crops up often in practice that, even though the code is technically correct, it does not run correctly on a given GPU.
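
To give a sense of the scale being described: below is a trivial GLSL fragment shader of my own, of roughly the ‘KISS’ size this paragraph mentions, carried as a C string the way a host program normally submits shader source to the driver at run-time (via glShaderSource() and glCompileShader(), which are omitted here). The tiny main() only prints the source, so the file compiles stand-alone.

```c
#include <stdio.h>

/* A deliberately conservative, few-instruction fragment shader:
 * one texture fetch, written against lowest-common-denominator
 * GLSL ES 1.00, to maximize the number of GPUs it runs on. */
static const char *fragment_shader_src =
    "#version 100\n"
    "precision mediump float;\n"
    "uniform sampler2D u_texture;\n"
    "varying vec2 v_texcoord;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(u_texture, v_texcoord);\n"
    "}\n";

int main(void)
{
    /* In a real program, this string would go to glShaderSource(). */
    fputs(fragment_shader_src, stdout);
    return 0;
}
```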

I do not really know how it is with SIMD coprocessors.

But this knowledge would be useful to have, in order to understand this posting of mine.

Of course, there exists a major contradiction to what I just wrote, in the form of OpenCL and CUDA.
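
To make that contradiction concrete: in OpenCL, the code sent to the GPU is an arbitrary, C-like program, not a few carefully-chosen shader instructions. Below is a minimal host program of my own in C – with all error-checking omitted for brevity, so it should be treated as a sketch rather than production code – which builds and runs a kernel that adds two small arrays. (Build with something like gcc vecadd.c -o vecadd -lOpenCL.)

```c
/* vecadd.c -- minimal OpenCL example; all error-checking omitted. */
#include <stdio.h>
#include <CL/cl.h>

/* The kernel: an arbitrary program, compiled for the GPU at run-time. */
static const char *src =
    "__kernel void add(__global const float *a,\n"
    "                  __global const float *b,\n"
    "                  __global float *c) {\n"
    "    size_t i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, c[4];
    size_t n = 4;

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Compile the kernel source for whatever device we found. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "add", NULL);

    /* Move the inputs to device memory, run, and read back the result. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);
    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("%.0f %.0f %.0f %.0f\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```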

Continue reading “Hardware Acceleration” is a bit of a Misnomer.

Google Pixel C does not have NEON.

I have been thoroughly enjoying my Google Pixel C, which I ordered only recently, because the tablet I had been using before was only a first-generation Samsung Galaxy Tab S.

Sometimes we obtain many new features, but at the expense of losing some other feature. Because the ARM CPU is a RISC chip, the manufacturers of Android devices have sometimes made up for this by including a coprocessor called NEON. NEON is an SIMD – Single-Instruction, Multiple-Data – coprocessor, aka a Vector-Processor, which is often useful in allowing high-definition video streams to be decoded in real-time, without placing that burden on the main CPU.
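
For a sense of what NEON code looks like: with GCC’s or Clang’s <arm_neon.h> intrinsics, one vector instruction operates on four 32-bit floats at once. A minimal sketch of my own (it compiles for ARM targets with NEON enabled, e.g. with -mfpu=neon on 32-bit ARM):

```c
#include <arm_neon.h>
#include <stdio.h>

int main(void)
{
    /* Add two arrays of 8 floats, four lanes at a time. */
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];

    for (int i = 0; i < 8; i += 4) {
        float32x4_t va = vld1q_f32(&a[i]);  /* load 4 lanes */
        float32x4_t vb = vld1q_f32(&b[i]);
        float32x4_t vc = vaddq_f32(va, vb); /* one instruction, 4 additions */
        vst1q_f32(&c[i], vc);               /* store 4 lanes */
    }

    for (int i = 0; i < 8; i++)
        printf("%.0f ", c[i]);
    printf("\n");
    return 0;
}
```

Real-time video-decoding leans on exactly this pattern, applied to the arithmetic of a Codec, which is why losing the unit matters unless something else takes over the job.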

(Edit 04/08/2017 : I have given my own definition of what “Hardware Acceleration” means, Here. )

What has happened with the Pixel C is that Google has decided to put a Tegra X1 into it, an SoC that also has a big coprocessor – its mighty GPU. With this tablet, real-time video-decoding is meant to be performed by the GPU, which advertises several system-installed Codecs. Therefore, watching videos in high definition should not require a NEON coprocessor, and the Tegra X1 does not have one. (And, when I scroll further down the list of Codecs, that list includes two of the corresponding Encoders from Nvidia, not only the Decoders.)


In fact, the Pixel C only has a 4-core main CPU!

Continue reading Google Pixel C does not have NEON.