Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).

There is a fact about modern graphics chips which some people may not be aware of – especially some Linux users – but of which I was recently reminded, because I bought an e-Reader that runs the Android O/S, but that features the energy-saving benefits of “e-Ink”. This innovative technology has a surface somewhat resembling paper, whose brightness can vary between white and black, and which mainly uses available light, although back-lit and front-lit versions of e-Ink now exist. It consumes very little current, so that it’s frequently possible to read an entire book on one battery charge. With an average Android tablet that merely has an LCD, the battery life can impede enjoying an e-Book.

An LCD still has one thing in common with the old CRTs: it is refreshed at a fixed frequency by something called a “raster” – a pattern that scans a region of memory and feeds pixel-values to the display sequentially, perhaps 60 times per second, thus refreshing the display that often. e-Ink pixels, by contrast, are sent a signal once, to change brightness, and then stay at the assigned brightness level until they receive another signal, to change again. What this means is that, at the hardware level, e-Ink is less powerful than ‘frame-buffer devices’ once were.
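As a toy model of this difference – written in Python, with function names of my own invention, not any real driver API – a raster display must transmit every pixel on every refresh, while an e-Ink panel only needs a drive signal for the pixels whose value actually changes:

```python
# Toy model contrasting a raster-scanned display with an e-Ink display.
# A raster display re-sends every pixel each refresh; an e-Ink panel
# only needs a signal for pixels whose value actually changes.

def raster_refresh(framebuffer):
    """Every pixel is sent to the panel, every frame (e.g. 60x/s)."""
    return len(framebuffer)  # number of pixel-values transmitted

def eink_refresh(previous, current):
    """Only changed pixels receive a drive signal; the rest hold state."""
    return sum(1 for a, b in zip(previous, current) if a != b)

prev = [0, 0, 0, 0]
curr = [0, 1, 0, 0]     # only one pixel changed
assert raster_refresh(curr) == 4
assert eink_refresh(prev, curr) == 1
```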

But any PC, Mac or Android graphics card or graphics chip manufactured after the 1990s has a non-trivial GPU – a ‘Graphics Processing Unit’ – that acts as a co-processor, working in parallel with the computer’s main CPU, to take much of the workload associated with rendering graphics to the screen off the CPU. Much of what a modern GPU does consists of taking as input pixels which software running on the CPU wrote either to a region of dedicated graphics memory or, in the case of an Android device, to a region of memory shared between the GPU and the CPU, but part of the device’s RAM. The GPU then typically ‘transforms’ the image of these pixels into the way they will finally appear on the screen. This ends up modifying a ‘Frame-Buffer’, the contents of which are controlled by the GPU and not the CPU, but which the raster scans, resulting in output to the actual screen.

Transforming an image can take place in a strictly 2D sense, or in a sense that preserves 3D perspective but results in 2D screen output. And it gets applied to desktop graphics as much as to application content. In the case of desktop graphics, the result is called ‘Compositing’, while in the case of application content, the result is either fancier output, or faster execution of the application on the CPU. On many Android devices, compositing results in multiple Home-Screens that can be scrolled, whose glitz is shown by how smoothly they scroll.

Either way, a modern GPU is much more versatile than a frame-buffer device was. And its benefits can contribute in unexpected places, such as when an application outputs text to the screen, but the text is merely expected to scroll. Typically, the rasterization of fonts still takes place on the CPU, resulting in pixel-values being written to shared memory, that correspond to the text to be displayed. But the actual scrolling of the text can be performed by the GPU: more than one page of text, with a fixed position in the drawing surface the CPU drew it to, is transformed by the GPU to advancing screen positions, without the CPU having to redraw any pixels. (:1) This effect is often made more convincing by the fact that, at the end of a sequence, a transformed image is sometimes replaced by a fixed image, in a transition between two graphics that are completely identical. These two graphics would reside in separate regions of RAM, even though the GPU can render a transition between them.
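To sketch what I mean – a toy Python model, with hypothetical function names standing in for the CPU and GPU roles, not any real graphics API – the CPU rasterizes the text once, and scrolling is merely a change of the offset at which the GPU presents that same buffer:

```python
# Sketch: the CPU rasterizes several lines of text to a buffer once;
# scrolling is then just the 'GPU' reading the same buffer at a
# different vertical offset -- no pixels are redrawn by the CPU.

def cpu_render(lines):
    # Pretend each string is one row of rasterized pixels.
    return list(lines)   # drawn once, then left alone

def gpu_present(buffer, offset, screen_rows):
    # A pure coordinate transformation: select which rows are visible.
    return buffer[offset : offset + screen_rows]

page = cpu_render(["row%d" % i for i in range(8)])
assert gpu_present(page, 0, 3) == ["row0", "row1", "row2"]
assert gpu_present(page, 2, 3) == ["row2", "row3", "row4"]  # scrolled, no redraw
```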

(Updated 4/20/2019, 12h45 … )

(As of 4/16/2019 : )

What modern GPUs often imply is that an application can write changes to a bit-map quickly, in the form of a region of memory addresses, and can rely on the GPU subsequently sending the resulting changes to the screen, passively as far as the CPU is concerned. And this transfer of the changed bit-maps to the screen will continue to take place, even if some transformation of their coordinates is also taking place.

All this makes the task of the chip-set in an Android-based e-Reader non-trivial: its input, produced by an already-programmed app, assumes that a modern GPU is present, but its output needs to change the values of individual pixels of an e-Ink display. The work of such a chip-set is actually easiest when the app was written to use a fixed memory location as an old-style frame-buffer, so that every pixel changed by the app can be ‘sensed’ as having been modified, and a sequential series of changes can be derived by the chip-set and sent to the e-Ink screen. And this situation describes all ‘software-rendered’, 2D animations.
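A rough Python sketch of this easiest case – the names here are mine, not those of any actual chip-set firmware – would simply diff the frame-buffer against what the panel currently shows, and emit an update only for the modified pixels:

```python
# Sketch of what an e-Ink controller can do easily: compare the new
# frame-buffer contents against what the panel currently displays,
# and emit an update only for the pixels that differ.

def diff_updates(panel, framebuffer):
    """Return (index, new_value) pairs for every modified pixel."""
    return [(i, new) for i, (old, new) in enumerate(zip(panel, framebuffer))
            if old != new]

panel = [255, 255, 255, 255]   # what the e-Ink surface currently shows
fb    = [255,   0, 255, 128]   # what the app just software-rendered
assert diff_updates(panel, fb) == [(1, 0), (3, 128)]
```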

The presence of such a software-rendered 2D animation can, in the end, refresh all the pixels on the actual screen. But if the commands sent by the CPU define an array of pixels once, and then state, ‘Change the transformation of these pixels,’ then in practice the e-Ink driver chip-set can’t comply, and the only way in which it could comply would be by acting in a fashion as complex as the fashion in which an actual GPU can be programmed. And the degree of complexity of possible GPU code, which makes up ‘Shaders’, has increased dramatically over the decades, to rival the complexity of simpler CPU-based programs. (:2) Those shaders would theoretically need to run on a GPU core, or on a CPU core of some kind, in order for an e-Ink -based device to be able to emulate a GPU fully.

But I think that practical chip-sets for e-Ink displays are not as complex as actual, modern GPUs. And so the performance of the e-Ink displays is always lower than that of energy-intensive displays.

If the user wishes to know why an older Android-powered e-Reader, such as my 13.3″ Onyx BOOX Max2, fails to redraw its e-Ink as effectively when I’m using the Kindle Android app as a slightly newer device by the same company, such as the ‘BOOX Note’ or the ‘BOOX Note Pro’, the answer I’d offer is not that one e-Reader has a version of the Kindle app which has been optimized better, but rather that the hardware – the actual chips – inside the BOOX Note does a better job of emulating the GPU which most Android apps expect the device to possess.


(Update 4/18/2019, 13h00 : )


In cases where several pages of a document are being rendered by the CPU at once (to a region of RAM), but scrolled by the GPU (to the frame-buffer), it’s usually not the case that the entire document is being rendered at once by the CPU. Instead, for example, the region of RAM may be large enough to hold (4) pages of text, for the sake of argument, but, if the user’s scrolling is nearing the last of the 4 rendered pages, a new region of RAM is allocated, and another 4 pages of text can be CPU-rendered to it, in such a way that the last page of the earlier region of RAM overlaps identically with the first page of the later region of RAM. Then, a transition can be animated, as I described earlier, concerning this overlapped page of text.
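This allocation scheme can be sketched in Python as follows – a toy model of my own, with the region size of (4) chosen only for the sake of argument:

```python
# Sketch of the double-region scheme described above: each region
# holds 4 rendered pages, and the last page of one region is rendered
# again as the first page of the next, so that a transition can blend
# two identical images.

PAGES_PER_REGION = 4

def render_region(first_page):
    # Pages first_page .. first_page+3, as if rasterized by the CPU.
    return [f"page{first_page + i}" for i in range(PAGES_PER_REGION)]

region_a = render_region(0)                       # pages 0..3
region_b = render_region(PAGES_PER_REGION - 1)    # pages 3..6, overlapping

# The overlapped page is identical in both regions:
assert region_a[-1] == region_b[0] == "page3"
```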

On my own ‘Onyx BOOX Max2’ e-Reader, when I have a Kindle book page visible in full-screen mode, I can give the command to advance by one page, which may or may not be effective the first time I give it. But eventually, after giving this command more than once, the page visible on the screen does not change, while the progress of the software has advanced, so that for every (3) times that I’ve given the command to advance, the page displayed advances by (3) pages. After that, the page being displayed just corresponds to the ‘first view’ of the newly-allocated region of RAM, without any transformations being applied afterwards.

According to footage I’ve seen of a ‘BOOX Note’, when the command is given to advance by (1) page (full-screen), for the duration of the animation, nothing seems to happen on the screen, but after the page-sliding animation has finished, the screen is updated to the next page, not to the page that comes, (3) pages later.

What this would seem to suggest is that the chip-set in the ‘BOOX Note’ has the improved behaviour of waiting until the values of the elements of the Model-View Matrix have stopped receiving changes from the CPU, and, once they have, updating the screen. (:3)


(Update 4/18/2019, 12h30 : )

I suppose that some readers might ask how this continues to be feasible, for reading a document which is hundreds of pages long. Surely it would not be the case that new graphics memory needs to be allocated once for every 4 pages, before we reach the end of a book.

But indeed a way exists to handle this scenario. The hardware-accelerated pipeline may only take two texture images as input, and may alternate between displaying each of them, transformed. But the way in which the CPU writes new pages to both regions of memory can be mapped such that they form an infinite loop of sorts, with the end of one region of memory leading to the beginning of the other, both times.

And, the beginning of each region of memory can lead back to the end of the other, when the user scrolls through pages backwards. (:4)
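As a minimal Python sketch of this looping behaviour – assuming, hypothetically, that the reading position is tracked as a window index – only two regions ever need to exist, no matter how long the book is:

```python
# Sketch: two fixed regions of memory suffice for a document of any
# length, because the CPU re-renders into whichever region the user is
# scrolling away from, forming a loop between the two regions.

def region_for(window_index):
    """Alternate between region 0 and region 1 as reading advances."""
    return window_index % 2

# Scrolling forward through five 4-page windows reuses the same two regions:
assert [region_for(w) for w in range(5)] == [0, 1, 0, 1, 0]
# Scrolling backwards traverses the same alternation in reverse:
assert [region_for(w) for w in range(4, -1, -1)] == [0, 1, 0, 1, 0]
```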


(Update 4/17/2019, 14h30 : )


Some people might point out the fact that ‘the minimum requirement’ of the compositing capabilities of Android is only OpenGL(ES) 1.x, even though the OpenGL specifications go all the way up to 4.5 for state-of-the-art PC graphics cards. Standard 1.x means that, in place of complex Fragment Shaders defined by the application, the GPU only applies a ‘Fixed-Function Pipeline’, a set of parameters that state how the coloration of screen-pixels is to be modified. But in reality, even the use of OpenGL 1.x doesn’t change the basic premises of hardware-accelerated graphics:

  • The GPU still controls the frame-buffer, and
  • Vertex coordinates are still transformed into screen coordinates variably. Only, with 1.x this is just defined by a Model-View Matrix, changeable by the CPU, while with OpenGL 2.x, it’s already possible to define a Vertex Shader, to modify this transformation of coordinates in a more versatile way.
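Even the Fixed-Function case can be sketched – here in Python, as a toy matrix multiply in 2D homogeneous coordinates, not real OpenGL code – to show how the CPU changing a few matrix elements moves content on-screen without redrawing any pixels:

```python
# Sketch of what even the Fixed-Function Pipeline does: vertex
# coordinates are multiplied by a Model-View Matrix that the CPU can
# change each frame, so an object moves without its pixels being redrawn.

def transform(matrix, vertex):
    """Multiply a 3x3 matrix by a vertex in 2D homogeneous coordinates."""
    return tuple(sum(matrix[r][c] * vertex[c] for c in range(3))
                 for r in range(3))

# A translation by (0, -5): the kind of per-frame change the CPU makes
# to the matrix, in order to scroll a textured quad upwards.
mv = [[1, 0,  0],
      [0, 1, -5],
      [0, 0,  1]]
assert transform(mv, (10, 20, 1)) == (10, 15, 1)
```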

In fact, some of the earliest, real-time 3D games were designed, just using the FFP.

But a conclusion which I would concede is that, in the design of the chip-set that controls an e-Ink display, the chances are much better of achieving the same level of functionality which OpenGL 1.x offered. This just hasn’t been achieved yet.


(Update 4/18/2019, 17h25 : )


Actually, the ‘BOOX Max2’ has a setting which can improve the behaviour of e-Book readers, which was not intentionally hidden, but which was somehow mislabelled in the translation of the User’s Manual. This is its “A2” setting.

In the part of the UI that corresponds to an Android Notification Bar, there is an icon which I at first thought had, as its only function, to force a redraw of the display once, when tapped. But in reality, this icon toggles the A2 Mode on and off. And according to the Manual, when on, this mode merely limits ‘the number of grey-levels’ that the display can show to 2. It’s designed to help the reading of text, but not advised for graphics that require the full scale of grey-levels that e-Ink can display.

But as a side effect, this mode can also put an end to the problem that I’ve spent this entire posting describing. My explanation for this is the possibility that, when the display has been put into A2 Mode, this fact also gets communicated back to the app, which can then fall back to less-fancy rendering, based on that decision. The app can and does fall back to software rendering, which in the case of an e-Reader is just fine. (:5)


(Update 4/19/2019, 15h10 : )


I suppose that I should perform another plausibility test on my idea of ‘GPU-scrolled text’. In the history of gaming, Textures were supposed to be square, with a linear size that was a power of two. But this idea was mainly important if they were to be Mip-Mapped. Further, historically, all that an OpenGL programmer could count on was a maximum texture size of 4096×4096. But if I check the hardware info on my Samsung Galaxy S9 phone, under OpenGL 1.1 capabilities, its maximum texture size is 16384×16384, and it allows for a maximum of 2 Texture Units.

What this means is that the textures could be large enough, but that an issue might arise over the maximum number of textures that can be taken as input, for a single rendering pass.

According to available information, (n) Texture Objects can be bound to a single Texture Unit sequentially, if they belong to Models that are to be rendered sequentially.

This suits me fine, because the way in which I’d picture creating a transition is to render the Texture-bearing quad first, that is to be blended out, and then to render the quad next, which is to be blended in. Only, the second quad would be given an alpha-value that becomes progressively more opaque, to be alpha-blended with the existing frame-buffer.
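The blending arithmetic behind such a transition can be sketched in Python – this is just the standard ‘over’ formula for one grey pixel, not code from any specific API:

```python
# Sketch of the alpha-blended transition described above: the second
# quad is drawn over the first with an opacity that ramps from 0 to 1
# over the course of the animation.

def blend(dst, src, alpha):
    """Standard 'over' blend of one source grey pixel onto a destination."""
    return round(src * alpha + dst * (1.0 - alpha))

# At alpha 0 the old page still shows; at alpha 1 the new page has
# fully replaced it; in between, the two are mixed.
assert blend(200, 40, 0.0) == 200
assert blend(200, 40, 1.0) == 40
assert blend(200, 40, 0.5) == 120
```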

In any case, alpha blending was already available for the Fixed-Function Pipeline.

So it seems that the idea passes casual inspection.


(Update 4/20/2019, 12h45 : )


When I set my ‘Onyx BOOX Max2’ to output ‘only 2 levels of grey’, this factually sets its output colour-format to a palettized format. That is because there will be only 4 possible colours in this case: ‘White’, ‘Grey 1’, ‘Grey 2’, and ‘Black’. Although 8-bit colour-palettes are possible, in this case, only 2 bits are needed to specify a possible colour.
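As a Python sketch of such a 2-bit palettized format – the palette and packing order here are my own illustration, not necessarily what the device does internally – four pixels fit into each byte, and the palette maps the 2-bit codes to colours:

```python
# Sketch of a 2-bit palettized format: four grey-levels, so four
# pixels pack into each byte, and a palette maps codes to colours.

PALETTE = {0: "White", 1: "Grey 1", 2: "Grey 2", 3: "Black"}

def pack(pixels):
    """Pack 2-bit colour codes, four per byte, high bits first."""
    out = []
    for i in range(0, len(pixels), 4):
        byte = 0
        for code in pixels[i:i + 4]:
            byte = (byte << 2) | code
        out.append(byte)
    return bytes(out)

# Codes 0,1,2,3 pack as 0b00_01_10_11 == 0x1B:
assert pack([0, 1, 2, 3]) == b"\x1b"
assert PALETTE[3] == "Black"
```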

When I use the (Free) “Hardware Info” app on my Android-based Samsung Galaxy S9 smart-phone, it tells me that in OpenGL 1.1 mode, one of the texture formats it still supports today is “Paletted”. Only, in this case “Paletted” refers to a texture colour-format, without answering what the resulting screen output-colour format would be.

Well before the year 2000, I was experimenting with creating a 3D Game, using the commercial software known as ‘3D Game Studio’. And that many years ago, it was still possible to put ‘3DGS’ into an 8-bit, paletted mode. The way output colours resulted was like so:

Output Colour-Code = Input Colour-Code

It followed that all the texture-palettes needed to be equal, and that such effects as alpha-blending were not supported in this mode. A transition would need to be a sudden one.

But it was already my experience, when using that early version of 3DGS, that putting my game into 8-bit mode would switch off Hardware Rendering and enable Software Rendering, even though DirectX 7 needed to be installed, and was still being used by my game.

I don’t really know how or why the DirectX 7 framework at the time included software-rendering code; I only know that this would be consistent with what I’ve written above about OpenGL 1.1.

The game that I had created was based on version ‘A4’ of Conitec’s game engine, while I see that presently, they are selling version ‘A8’, which no longer supports any Software Rendering, but which does support programmed shaders.


