Trying to turn an ARM-64-based, Android-hosted, PRoot-based Linux Guest System into a software development platform.

In a preceding posting I described how I used an Android app, which neither requires nor benefits from having ‘root’, to install a Linux Guest System on a tablet with an ARM-64 CPU, an architecture referred to more precisely as ‘aarch64-linux-gnu’. The Android app sets up a basic Linux system, but the user can extend it with apt-get – if, as I did, he chose a Debian 10 / Buster-based system. And then, for the most part, the user’s ability to run software depends on how well the Debian package maintainers cross-compiled their packages to ‘AARCH64’. Yet, on some occasions, even in this situation, a user might want to write and then run his own code.

To make things worse, the main alternative to a pure text interface is a VNC session based on ‘TightVNC’ – a choice made by the developers of this app. On a Chromebook, I chose differently, by setting up a ‘TigerVNC’ desktop instead, but on this tablet, the choice was up to the Android developers alone. What this means is that the Linux applications are forced to render purely in software mode.

Many factors work against writing one’s own code, including the fact that the resulting executables will have been compiled for the ‘ARM’ CPU and linked against Linux libraries! :-D

But one of the immediate handicaps is that the user might want to program in Python, but can’t get any good IDEs to run. Every free IDE I could try would segfault, and I don’t believe that these segfaults are due to problems with my Python libraries. The IDEs were themselves written in Python, using Qt5, Gtk3 or wxWidgets modules. Such libraries – the Qt5 Library being notorious for this – rely on GPU acceleration, which is nowhere to be found here. One reason I think this is most often the culprit is the fact that one of the IDEs – “Eric” – actually manages to report, with a gasp, that it could not create an OpenGL rendering surface – and then segfaults. (:3)
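
For any reader wanting to verify this diagnosis for himself, here is a minimal sketch, assuming the ‘python3-pyqt5’ package is installed, which simply asks Qt whether an OpenGL context can be created at all. Inside the software-rendered VNC session, I’d expect the second branch to be taken:

import sys
from PyQt5.QtGui import QGuiApplication, QOpenGLContext

app = QGuiApplication(sys.argv)
ctx = QOpenGLContext()
if ctx.create():
    # A context exists, so GL-dependent IDEs at least have a chance.
    fmt = ctx.format()
    print('OpenGL context created, version',
          fmt.majorVersion(), '.', fmt.minorVersion())
else:
    # No context; applications that insist on GL will fail around here.
    print('Could not create an OpenGL rendering context.')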


(Edit 9/15/2020, 13h50: )

I want to avoid any misinterpretation of what I just wrote. This does not happen out of nowhere, merely because an application developer decided to build his applications using ‘python3-pyqt5’ etc… When I give the command:

# apt install eric


Doing so pulls in many dependencies, including an offending package. (:1) Therefore, the application developer who wrote ‘Eric’ not only chose to use one of the Python GUI libraries, but chose to use OpenGL as well.
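
As a sketch of how one might spot such a dependency before installing anything, assuming the ‘python3-apt’ package is present, the following lists the dependency groups of the ‘eric’ package straight from the apt cache, where any GL-related entries should stand out by name:

import apt

cache = apt.Cache()
pkg = cache['eric']
# Each entry of .dependencies is a group of alternatives ("a | b").
for dep in pkg.candidate.dependencies:
    print(' | '.join(base.name for base in dep.or_dependencies))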

Of course, after I next give the command to remove ‘eric’, I also follow up with the command:

# apt autoremove


Just so that the offending dependencies are no longer installed.


(End of Edit, 9/15/2020, 13h50.)


Writing convoluted code is more agreeable if, at the very least, we have an IDE in front of us that can highlight certain syntax errors, scan includes for code completion, etc. (:2)

Well, there is a text editor cut out for that exact situation, named “CudaText”. I must warn the reader, though, that this text editor comes with a learning curve. But, just to prove that the AARCH64-ported Python 3.7 engine is not itself buggy: the text editor’s plug-in framework is written in Python 3, and as soon as the user has learned his first lesson in how to configure CudaText, the plug-in system comes to full life, without any segfaults, running the Guest System’s Python engine. I think CudaText is based on Gtk2.
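
To give an idea of how little code the plug-in framework demands, here is a hypothetical, minimal plug-in sketch. The folder name ‘cuda_hello’ and the method name are my own illustration, not something shipped with CudaText:

# Saved as py/cuda_hello/__init__.py inside CudaText's data directory
# (a hypothetical plug-in; an accompanying install.inf would wire the
# 'hello' method into the Plugins menu).
from cudatext import msg_status

class Command:
    def hello(self):
        # If this appears in the status bar, the Python engine behind
        # the plug-in system is running without segfaulting.
        msg_status('Hello from the AARCH64 Python engine!')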

[Screenshot: VNC Viewer session showing CudaText, 2020-09-14]

This might just turn out to be the correct IDE for that tablet.


(Updated 9/19/2020, 20h10… )

Continue reading Trying to turn an ARM-64-based, Android-hosted, PRoot-based Linux Guest System into a software development platform.

A bit of my personal history, experimenting in 3D game design.

I was wide-eyed and curious. Well before the year 2000, I only owned Windows-based computers, purchased most of my software for money, and also purchased a license of 3D Game Studio, some version of which is still being sold today. The version that I purchased used the ‘A4’ game engine; all 3DGS versions have a game engine designated by the letter ‘A’ followed by a number.

That version of 3DGS was based on DirectX 7 – DirectX being owned and maintained by Microsoft – and DirectX 7 still had, among its capabilities, the ability to fall back into software mode, even though it was perhaps one of the earliest APIs to offer hardware rendering, provided, that is, that the host machine had a graphics card capable of hardware rendering.

I created a simplistic game using that engine, which had no real title, but which I simply referred to as my ‘Defeat The Guard Game’. And in so doing I learned a lot.

The API referred to as OpenGL offers what DirectX versions offer. But because Microsoft has the primary say in how graphics hardware is to be designed, OpenGL versions are frequently just catching up to what the latest DirectX versions have to offer. There is a loose correspondence between their version numbers.

Shortly after the year 2000, I upgraded to a 3D Game Studio version with their ‘A6’ game engine. This game engine was based on DirectX 9.0c, which was also standard with Windows XP; it no longer offered any possibility of software rendering, but it gave the customers of this software product their first opportunity to program shaders. And because I played with the ‘A6’ game engine for a long time, in addition to owning a PC that ran Windows XP for a long time, the capabilities of DirectX 9.0c became etched in my mind. As fate would have it, however, I never actually created anything significant with this game-engine version – only snippets of code designed to test various capabilities.

Continue reading A bit of my personal history, experimenting in 3D game design.

A fact about how software-rendering is managed in practice, today.

People of my generation – and I’m over 50 years old as I write this – first learned about CGI – computer-generated images – in the form of ‘ray-tracing’. What my contemporaries are slow to find out is that, meanwhile, an important additional form of CGI has come into existence, which is referred to as ‘raster-based rendering’.

Ray-tracing has, as its advantage over raster-based rendering, better optical accuracy, which leads to photo-realism. Ray-tracing therefore still gets used a lot, especially in Hollywood-originated CGI for movies, etc. But ray-tracing has a big drawback: it is slow to compute. Typically, it cannot be done in real time and needs to be performed on a farm of computers, and typically, an hour of CPU time may be needed to render a sequence which might play for 10 seconds.
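
To put those figures into perspective, a back-of-envelope calculation, assuming a cinema frame rate of 24 frames per second (my assumption, not a figure from above):

frames = 10 * 24          # a 10-second sequence at 24 fps = 240 frames
cpu_seconds = 60 * 60     # one hour of CPU time
print(cpu_seconds / frames, 'CPU-seconds per frame')   # 15.0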

But, in order for consumers to be able to play 3D games, the CGI needs to happen in real time, which is why the other type of rendering was invented in the 1990s; this form of rendering is carried out by the graphics hardware, in real time.

What this dichotomy has led to is model- and scene-editors such as “Blender”, which allow complex editing of 3D content, often with the purpose that the content eventually be rendered by arbitrary, external methods, including software-based ray-tracing. But such editing applications still possess an editing-preview window / rectangle, in which their power users can see the technical details of what they’re editing, in real time. And those editing-preview windows are hardware-rendered, using raster-based methods, even when the final result will not be rendered that way.

Continue reading A fact about how software-rendering is managed in practice, today.

Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).

There is a fact about modern graphics chips which some people may not be aware of – especially some Linux users – but which I was recently reminded of, because I bought an e-Reader that has the Android O/S, but that features the energy-saving benefits of “e-Ink”. This innovative technology has a surface somewhat resembling paper, the brightness of which can vary between white and black, and it mainly uses available light, although back-lit and front-lit versions of e-Ink now exist. It also consumes very little current, so that it’s frequently possible to read an entire book on one battery charge. With an average Android tablet that merely has an LCD, the battery life can impede enjoying an e-Book.

An LCD still has in common with the old CRTs that it is refreshed at a fixed frequency by something called a “raster” – a mechanism that scans a region of memory and feeds pixel values to the display sequentially, maybe 60 times per second, thus refreshing the display that often. e-Ink pixels are sent a signal once, to change brightness, and then stay at the assigned brightness level until they receive another signal to change again. What this means is that, at the hardware level, e-Ink is less powerful than ‘frame-buffer devices’ once were.
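
A quick sketch of why such a raster is nevertheless non-trivial: at an assumed panel resolution of 1920x1200 with 32-bit pixels (my own example figures), the amount of memory it must scan each second works out to:

width, height = 1920, 1200       # assumed panel resolution
bytes_per_pixel, hz = 4, 60      # 32-bit colour, 60 refreshes per second
mb_per_second = width * height * bytes_per_pixel * hz / 1e6
print(round(mb_per_second), 'MB of pixel data scanned per second')   # ~553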

But any PC, Mac or Android graphics card or graphics chip manufactured after the 1990s has a non-trivial GPU – a ‘Graphics Processing Unit’ – that acts as a co-processor, working in parallel with the computer’s main CPU, to take much of the workload associated with rendering graphics to the screen off the CPU. Much of what a modern GPU does consists of taking as input pixels which software running on the CPU wrote, either to a region of dedicated graphics memory or, in the case of an Android device, to a region of the device’s RAM shared between the GPU and the CPU. The GPU then typically ‘transforms’ the image of these pixels to the way they will finally appear on the screen. This ends up modifying a ‘Frame-Buffer’, the contents of which are controlled by the GPU and not the CPU, but which the raster scans, resulting in output to the actual screen.

Transforming an image can take place in a strictly 2D sense, or in a sense that preserves 3D perspective but results in 2D screen output. And it gets applied to desktop graphics as much as to application content. In the case of desktop graphics, the result is called ‘Compositing’, while in the case of application content, the result is either fancier output, or faster execution of the application on the CPU. And on many Android devices, compositing results in multiple Home Screens that can be scrolled, the glitz of which is proven by how smoothly they scroll.

Either way, a modern GPU is much more versatile than a frame-buffer device was. And its benefits can contribute in unexpected places, such as when an application outputs text to the screen, but the text is merely expected to scroll. Typically, the rasterization of fonts still takes place on the CPU, resulting in pixel values, corresponding to the text to be displayed, being written to shared memory. But the actual scrolling of the text can be performed by the GPU: more than one page of text, with a fixed position in the drawing surface the CPU drew it to, is transformed by the GPU to advancing screen positions, without the CPU having to redraw any pixels. (:1) This effect is often made more convincing by the fact that, at the end of a sequence, a transformed image is sometimes replaced by a fixed image, in a transition between two graphics that are completely identical. These two graphics would reside in separate regions of RAM, even though the GPU can render a transition between them.
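
As a hedged PyQt5 sketch of this ‘draw once, move cheaply’ idea: below, the text item is rasterized once into a cache, and scrolling merely repositions the cached pixels instead of re-rendering the glyphs – Qt’s analogue of a fixed drawing surface being moved to advancing screen positions:

import sys
from PyQt5.QtCore import QTimer
from PyQt5.QtWidgets import (QApplication, QGraphicsItem,
                             QGraphicsScene, QGraphicsTextItem,
                             QGraphicsView)

app = QApplication(sys.argv)
scene = QGraphicsScene(0, 0, 400, 300)
text = QGraphicsTextItem('A page of text that only needs to scroll...')
# Rasterize the item once; moving it afterwards reuses the cached pixels.
text.setCacheMode(QGraphicsItem.DeviceCoordinateCache)
scene.addItem(text)
view = QGraphicsView(scene)
view.show()

def scroll():
    text.setY(text.y() - 1)     # reposition only; no glyph re-rasterization

timer = QTimer()
timer.timeout.connect(scroll)
timer.start(16)                 # roughly 60 updates per second
sys.exit(app.exec_())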

(Updated 4/20/2019, 12h45 … )

Continue reading Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).