Trying to turn an ARM-64-based, Android-hosted, prooted Linux Guest System into a software development platform.

In a preceding posting, I described how I had used an Android app that does not require or benefit from having ‘root’, to install a Linux Guest System on a tablet that has an ARM-64 CPU – referred to more precisely as the ‘aarch64-linux-gnu’ architecture. The Android app sets up a basic Linux system, but the user can extend it with apt-get – provided he chose a Debian 10 / Buster-based system, as I did. From then on, the user’s ability to run software depends mostly on how well the Debian package maintainers cross-compiled their packages to ‘AARCH64’. Yet, on some occasions, even in this situation, a user might want to write and then run his own code.

To make things worse, the main alternative to a pure text interface is a VNC session based on ‘TightVNC’, by the choice of this app’s developers. On a Chromebook, I chose differently, setting up a ‘TigerVNC’ desktop instead, but on this tablet, the choice was up to the Android developers alone. What this means is that the Linux applications are forced to render purely in software mode.

Many factors work against writing one’s own code, including the fact that the resulting executables will have been compiled for the ‘ARM’ CPU, and linked against Linux libraries! :-D

But one of the immediate handicaps could be that the user might want to program in Python, but can’t get any good IDEs to run. Every free IDE I tried would segfault, and I don’t even believe that these segfaults are due to problems with my Python libraries. The IDEs were themselves written in Python, using Qt5, Gtk3 or wxWidgets modules. Libraries of these types – Qt5 notoriously so – rely on GPU acceleration, which is nowhere to be found here. And one reason I think this is most often the culprit is the fact that one of the IDEs – “Eric” – actually manages to report, with a gasp, that it could not create an OpenGL rendering surface – and then segfaults. (:3)
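
This hypothesis is cheap to test. The following is a minimal sketch – assuming only that ‘python3-pyqt5’ has been installed via apt – which asks Qt5 for exactly the kind of OpenGL rendering surface that I suspect these IDEs fail to obtain. On a normally accelerated desktop, it should display an empty window; in the software-rendered VNC session described above, I’d expect it to fail at the commented step:

import sys
from PyQt5.QtWidgets import QApplication, QOpenGLWidget

app = QApplication(sys.argv)

# Creating and showing this widget forces Qt5 to request an OpenGL
# rendering surface - the same step that 'Eric' reports as failing:
widget = QOpenGLWidget()
widget.show()

sys.exit(app.exec_())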

 

(Edit 9/15/2020, 13h50: )

I want to avoid any misinterpretations of what I just wrote. This does not happen out of nowhere, just because an application developer decided to build his applications using ‘python3-pyqt5’ etc. When I give the command:

 


# apt install eric

 

Doing so pulls in many dependencies, including an offending package. (:1) Therefore, the application developer who wrote ‘Eric’ not only chose to use one of the Python GUI libraries, but chose to use OpenGL as well.
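
For anyone curious to see which dependency is the offender, without installing anything, a short Python 3 script can enumerate the candidate package’s dependencies. This is only a sketch: it assumes that the Debian package ‘python3-apt’ is installed, and the string test for ‘gl’ is my own rough heuristic, not anything official:

import apt

cache = apt.Cache()
candidate = cache['eric'].candidate    # the version that apt would install

for dep in candidate.dependencies:
    for base in dep.or_dependencies:   # alternatives within one dependency
        note = '   <-- possibly OpenGL-related' if 'gl' in base.name.lower() else ''
        print(base.name + note)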

Of course, after I next give the command to remove ‘eric’, I also follow up with the command:

 


# apt autoremove

 

Just so that the offending dependencies are no longer installed.

 

(End of Edit, 9/15/2020, 13h50.)

 

Writing convoluted code is more agreeable if, at the very least, we have an IDE in front of us that can highlight certain syntax errors, scan includes for code completion, etc. (:2)

Well, there is a text editor cut out for that exact situation, named “CudaText”. I must warn the reader, though, that there is a learning curve with this text editor. But, just to prove that the AARCH64-ported Python 3.7 engine is not itself buggy: this text editor’s plug-in framework is written in Python 3, and as soon as the user has learned his first lesson in how to configure CudaText, the plug-in system comes fully to life, running the Guest System’s Python engine without any segfaults. I think CudaText is based on Gtk2.
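
To give an impression of how ordinary that Python is, here is a sketch of a trivial CudaText plug-in. The details are assumptions on my part, based on CudaText’s documented conventions – a plug-in is a Python 3 module exposing a Command class, and the wiring of the command into the menus, through an ‘install.inf’ file, is omitted here:

# 'ed' is the active editor object; 'msg_status' writes to the status bar.
from cudatext import ed, msg_status

class Command:

    def count_lines(self):
        text = ed.get_text_all()    # the entire document, as one string
        msg_status('Document has %d lines' % (text.count('\n') + 1))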

(Screenshot: CudaText, as seen through VNC Viewer, 9/14/2020.)

This might just turn out to be the correct IDE for that tablet.

 

(Updated 9/19/2020, 20h10… )

Continue reading Trying to turn an ARM-64-based, Android-hosted, prooted Linux Guest System into a software development platform.

Musing about Deferred Shading.

One of the subjects which fascinates me is Computer-Generated Images – CGI – specifically, images that render a 3D scene to a 2D perspective. But that subject is still rather vast. One could narrow it by first suggesting an interest in the hardware-accelerated form of CGI, which is also referred to as “Raster-Based Graphics”, and which works differently from ‘Ray-Tracing’. And after that, a further specialization can be made, into a modern form of it, known as “Deferred Shading”.

What happens with Deferred Shading is that an entire scene is first Rendered To Texture, in such a way that, in addition to surface colours, separate output images also hold normal vectors and a distance value (a depth value) for each fragment of this initial rendering. The resulting ‘G-Buffer’ can then be put through post-processing, which produces the final 2D image. What advantages can this bring?

  • It allows for a virtually unlimited number of dynamic lights,
  • It allows for ‘SSAO’ – “Screen-Space Ambient Occlusion” – to be implemented,
  • It allows for more efficient reflections to be implemented, in the form of ‘SSRs’ – “Screen-Space Reflections”,
  • (There could be more benefits.)

One fact which people should be aware of, given traditional strategies for computing lighting, is that by default, the fragment shader needs to perform a separate computation for each light source that strikes the surface of a model. An exception to this has been possible with some game engines in the past, where a virtually unlimited number of static lights can be incorporated into a level map, by being baked in as additional shadow-maps. But when it comes to computing dynamic lights – lights that can move and change intensity during a 3D game – there have traditionally been limits to how many of those may illuminate a given surface simultaneously, those limits being defined by how complex a fragment shader could be made, procedurally.
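
What makes the deferred approach different is that its lighting pass runs over the G-Buffer once per light, at a cost proportional to the number of screen pixels, rather than to the complexity of the scene. What follows is a minimal sketch of that idea – not any engine’s actual code – using NumPy arrays to stand in for the G-Buffer’s textures, with simple diffuse lighting and quadratic fall-off:

import numpy as np

H, W = 480, 640

# The G-Buffer: per-fragment surface colour, normal vector and position.
# (In a real renderer, position is reconstructed from the depth value.)
albedo   = np.full((H, W, 3), 0.5)
normals  = np.zeros((H, W, 3)); normals[..., 2] = 1.0  # facing the camera
position = np.zeros((H, W, 3))

lights = [    # an arbitrary number of dynamic point lights
    {'pos': np.array([0.0, 2.0, 5.0]), 'colour': np.array([1.0, 0.9, 0.8])},
    {'pos': np.array([3.0, 1.0, 2.0]), 'colour': np.array([0.2, 0.3, 1.0])},
]

out = np.zeros((H, W, 3))
for light in lights:                   # one full-screen pass per light
    to_light = light['pos'] - position
    dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
    l_dir = to_light / np.maximum(dist, 1e-6)
    n_dot_l = np.clip((normals * l_dir).sum(-1, keepdims=True), 0.0, 1.0)
    atten = 1.0 / (1.0 + dist * dist)  # simple quadratic fall-off
    out += albedo * light['colour'] * n_dot_l * atten

Adding one more dynamic light adds one more pass over the screen, regardless of how many models that light happens to strike.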

(Updated 1/15/2020, 14h45 … )

Continue reading Musing about Deferred Shading.

Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).

There is a fact about modern graphics chips which some people may not be aware of – especially some Linux users – but which I was recently reminded of, because I have bought an e-Reader that has the Android O/S, but that features the energy-saving benefits of “e-Ink”. That is an innovative technology whose surface somewhat resembles paper, and whose brightness can vary between white and black. It mainly uses available light – although back-lit and front-lit versions of e-Ink now exist – and it consumes very little current, so that it’s frequently possible to read an entire book on one battery charge. With an average Android tablet that merely has an LCD, the battery life can impede enjoying an e-Book.

An LCD still has this in common with the old CRTs: it is refreshed at a fixed frequency by something called a “raster” – a pattern that scans a region of memory and feeds pixel values to the display sequentially, perhaps 60 times per second, thus refreshing the display that often. e-Ink pixels, by contrast, are sent a signal once, to change brightness, and then stay at the assigned brightness level until they receive another signal, to change again. What this means is that, at the hardware level, e-Ink is less powerful than ‘frame-buffer devices’ once were.

But any PC, Mac or Android graphics card or graphics chip manufactured later than the 1990s has a non-trivial GPU – a ‘Graphics Processing Unit’ – that acts as a co-processor, working in parallel with the computer’s main CPU, to take much of the workload associated with rendering graphics to the screen off the CPU. Much of what a modern GPU does consists of taking as input pixels, which software running on the CPU wrote either to a region of dedicated graphics memory or, in the case of an Android device, to a region of memory shared between the GPU and the CPU, but part of the device’s RAM. The GPU then typically ‘transforms’ the image of these pixels, to the way they will finally appear on the screen. This ends up modifying a ‘Frame-Buffer’, the contents of which are controlled by the GPU and not the CPU, but which the raster scans, resulting in output to the actual screen.

Transforming an image can take place in a strictly 2D sense, or in a sense that preserves 3D perspective but results in 2D screen output. And it gets applied to desktop graphics as much as to application content. In the case of desktop graphics, the result is called ‘Compositing’, while in the case of application content, the result is either fancier output, or faster execution of the application on the CPU. And on many Android devices, compositing results in multiple Home-Screens that can be scrolled, the glitz of which is proven by how smoothly they scroll.
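
As a toy picture of what compositing amounts to, consider the following sketch, in which NumPy arrays stand in for independently drawn surfaces, and the GPU’s job reduces to alpha-blending a window over the desktop (all the numbers are arbitrary):

import numpy as np

desktop = np.zeros((480, 640, 3))      # the background surface, drawn once
window  = np.full((200, 300, 3), 0.8)  # an application's drawing surface
alpha   = 0.9                          # the window's opacity

frame = desktop.copy()                 # the composited output image
y, x = 100, 150                        # the window's position on screen
under = frame[y:y+200, x:x+300]
frame[y:y+200, x:x+300] = alpha * window + (1.0 - alpha) * under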

Either way, a modern GPU is much more versatile than a frame-buffer device was. And its benefits can contribute in unexpected places, such as when an application outputs text to the screen, but the text is merely expected to scroll. Typically, the rasterization of fonts still takes place on the CPU, resulting in pixel values, corresponding to the text to be displayed, being written to shared memory. But the actual scrolling of the text can be performed by the GPU, where more than one page of text, with a fixed position in the drawing surface the CPU drew it to, is transformed by the GPU to advancing screen positions, without the CPU having to redraw any pixels. (:1) This effect is often made more convincing by the fact that, at the end of a sequence, a transformed image is sometimes replaced by a fixed image, in a transition of the output between two graphics that are completely identical. These two graphics would reside in separate regions of RAM, even though the GPU can render a transition between them.
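
The following sketch illustrates that division of labour, again with NumPy arrays as stand-ins: the ‘CPU’ rasterizes several pages of text once, into one tall surface, after which each scrolled frame is nothing more than a windowed copy of that unchanged surface, at a new offset:

import numpy as np

# The 'CPU' rasterizes the text ONCE, into one tall drawing surface.
# (Random monochrome pixels stand in for rendered glyphs here.)
text_surface = np.random.randint(0, 2, (2000, 640), dtype=np.uint8) * 255

SCREEN_H = 480

def present(scroll_y):
    # What the 'GPU' does per frame: map the unchanged surface to a new
    # screen position, i.e., copy out a window at the current offset.
    return text_surface[scroll_y : scroll_y + SCREEN_H]

# Scrolling is then just a changing offset; no glyph is ever redrawn:
frames = [present(y) for y in range(0, 200, 4)]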

(Updated 4/20/2019, 12h45 … )

Continue reading Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).

How to Add a Web-browser to GNURoot + XSDL.

In this earlier posting – one out of several – I explained that I had installed the Android apps “GNURoot Debian” and “XSDL” on my old Samsung Galaxy Tab S (first generation). The purpose is to install Linux software on that tablet, without requiring that I root it. This uses the Android variant of ‘chroot’, which is actually called ‘proot’, and is quick and painless.

However, there are certain things which a ch-rooted Linux system cannot do. One of them is to start services that run in the background. Another is to access hardware, as doing so would require access to the host’s ‘/dev’ folder, not the ch-rooted system’s local ‘/dev’ folder. Finally, because XSDL is acting as my X-server whenever GNURoot’s guest software tries to connect to one, there will be no hardware acceleration, because this X-server is really just an Android app, and does not really correspond to a display device.

This last detail can be quite challenging because, in today’s world, many Linux applications require direct rendering, and will not function properly if left to use mere X-server protocol, à la legacy Unix. One such application is any serious Web browser.
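
From inside the guest system, the absence of direct rendering can actually be confirmed. Here is a small sketch that queries the X-server through ‘glxinfo’; it assumes that the Debian package ‘mesa-utils’ has been installed, and that XSDL is listening as display ‘:0’, which may differ on other setups:

import os
import subprocess

env = {**os.environ, 'DISPLAY': ':0'}   # point at the XSDL X-server

# 'glxinfo' is provided by the 'mesa-utils' package:
output = subprocess.run(['glxinfo'], capture_output=True,
                        text=True, env=env).stdout

for line in output.splitlines():
    if 'direct rendering' in line:
        print(line)    # expect something like: 'direct rendering: No'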

This does not result from any malfunction of either Android app; it simply follows from the logic of what the apps are being asked to do.

But we’d like to have a Web browser installed, and will find that “Firefox”, “Arora”, etc., all fail over this issue. This initially leaves us in an untenable situation because, even if we were not to use our Linux guest system for Web browsing – given that a ‘real’ Web browser is installed on the (Android) host system – the happenstance can take place by which a Web document needs to be viewed anyway – let’s say, because we want to click on an HTML file that constitutes the online documentation for some Linux application.

What can we do?

Continue reading How to Add a Web-browser to GNURoot + XSDL.