# Trying to turn an ARM-64 -based, Android-hosted, prooted Linux Guest System, into a software development platform.

In a preceding posting I described how I had used an Android app, one that does not require or benefit from having ‘root’, to install a Linux Guest System on a tablet that has an ARM-64 CPU, an architecture referred to more precisely as ‘aarch64-linux-gnu’. The Android app sets up a basic Linux system, but the user can use apt-get to extend it – if he chose a Debian 10 / Buster -based system, as I did. And then, for the most part, the user’s ability to run software depends on how well the Debian package maintainers cross-compiled their packages to ‘AARCH64’. Yet, on some occasions, even in this situation, a user might want to write and then run his own code.

To make things worse, the main alternative to a pure text interface, is a VNC Session, based on ‘TightVNC’, by the choice of the developers of this app. On a Chromebook, I chose differently, by setting up a ‘TigerVNC’ desktop instead, but on this tablet, the choice was up to the Android developers alone. What this means is, that the Linux applications are forced to render purely in software mode.

Many factors work against writing one’s own code, including the fact that the resulting executables will have been compiled for the ‘ARM’ CPU, and linked against Linux libraries!

But one of the immediate handicaps could be that the user might want to program in Python, but can’t get any good IDEs to run. Every free IDE I could try would segfault, and I don’t even believe that these segfaults are due to problems with my Python libraries. The IDEs were themselves written in Python, using Qt5, Gtk3 or wxWidgets modules. Libraries of this type, the Qt5 Library especially, are notorious for relying on GPU acceleration, which is nowhere to be found. And one reason I think this is most often the culprit, is the fact that one of the IDEs – “Eric” – actually manages to report with a gasp, that it could not create an OpenGL rendering surface – and then segfaults. (:3)

(Edit 9/15/2020, 13h50: )

I want to avoid any misinterpretations of what I just wrote. This does not happen out of nowhere, because an application developer decided to build his applications using ‘python3-pyqt5’ etc… When I give the command:


# apt install eric



Doing so pulls in many dependencies, including an offending package. (:1) Therefore, the application developer who wrote ‘Eric’ not only chose to use one of the Python GUI libraries, but chose to use OpenGL as well.

Of course, after I next give the command to remove ‘eric’, I also follow up with the command:


# apt autoremove



Just so that the offending dependencies are no longer installed.

(End of Edit, 9/15/2020, 13h50.)

Writing convoluted code is more agreeable if, at the very least, we have an IDE in front of us that can highlight certain syntax errors, scan includes for code completion, etc. (:2)

Well, there is a Text Editor cut out for that exact situation, named “CudaText”. I must warn the reader though, that there is a learning curve with this text editor. But, just to prove that the AARCH64-ported Python 3.7 engine is not itself buggy: the text editor’s plug-in framework is written in Python 3, and as soon as the user has learned his first lesson in how to configure CudaText, the plug-in system comes fully to life, without any segfaults, running the Guest System’s Python engine. I think CudaText is based on Gtk2.

This might just turn out to be the correct IDE for that tablet.

(Updated 9/19/2020, 20h10… )

(As of 9/14/2020, 14h00: )

There exists some possibility that, when the UserLAnd developers specified environment variables, they, like so many Linux users, may not have known in which file to do so, in order for the variables actually to take hold within a TightVNC session. This is a common problem under Linux. In this situation, the following two variables should be set in the file ‘~/.vnc/xstartup’, before the last line in that file, which in my case actually launches LXDE:


export LIBGL_ALWAYS_SOFTWARE=1
export USE_ACCELERATE=0



It’s important not to interfere with the 3rd line of this file (in my case), the line that actually launches ‘/usr/bin/startlxde’.

Now that I have done so, I can actually ‘echo’ the two variables in question, from within my session, and find them set correctly. This way, there may be fewer application crashes in my future, due to applications requesting OpenGL rendering.
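That kind of check can also be scripted, rather than echoed by hand. The following is a minimal sketch (the two variable names are the ones set above; everything else is generic):

```python
import os

def report_env(names):
    """Return each variable's value, or '(not set)' if absent."""
    return {n: os.environ.get(n, "(not set)") for n in names}

# From inside the VNC session, these should print '1' and '0' respectively:
for name, value in report_env(["LIBGL_ALWAYS_SOFTWARE", "USE_ACCELERATE"]).items():
    print(name, "=", value)
```

If either line prints ‘(not set)’, the exports were placed in a file that the VNC session never sources.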

(Update 9/14/2020, 16h05: )

I felt that I should add an opinion, of why the use of ‘TigerVNC’ should really be preferred, over the use of ‘TightVNC’, in today’s computing era.

TightVNC has a smaller footprint than TigerVNC, but for the same reason, only implements a small subset of X-server extensions, a subset that includes ‘MIT-SHM’ (the Shared Memory Extension), but that does not include ‘RANDR’, or, for that matter, hardware acceleration.

TigerVNC specifically includes an OpenGL / GLX module. The following is the output that I get when I run the command ‘xdpyinfo’ in a terminal emulator, inside a TigerVNC session, on my Chromebook, that TigerVNC version being up-to-date as of Debian 9 / Stretch:

And the way it works is such, that an application that wants to hardware-render a scene is able to do so, to whatever extent the GPU of the (possibly remote) Host Computer allows it to. But what TigerVNC does differently from an actual X-server is to instruct the GPU to render the results to ‘an output image’, a familiar procedure also known as ‘Render To Texture’, or ‘RTT’. The application still thinks it’s hardware-rendering to the screen. But instead, the GPU’s output has been redirected within the VNC Server’s session to such an output buffer, so that the TigerVNC Server can then transmit the resulting graphics to whichever VNC Viewer is being used (using the CPU to do so).

Why is this useful? Because, even under Linux, more and more graphics output is based on GPU acceleration, which in this way, TigerVNC is providing, in a genuine way.

(Update 9/14/2020, 17h25: )

I should also try to explain why in some situations, it’s useless to set global environment variables either in the file ‘/etc/profile’, or in the directory ‘/etc/profile.d’ (the latter being, where some packages will try to set them).

Those scripts are only run if the Linux system launches all its system services. In certain mini-systems that run within ‘TightVNC’, the only process launch that takes place is that of the VNC Server itself, and its one call to ‘/usr/bin/startlxde’ (in my case; your desktop manager may vary). And this behaviour would not even change if ‘TigerVNC’ were being used instead. However, this behaviour does change when running a true VM, such as ‘Crostini’ under ChromeOS, in which case at least some of the ‘systemd’ services start.

Specifically in the case of VNC Sessions, environment variables need to be set in configuration files that are specific to one type of VNC. They could be set in the file ‘~/.bashrc’, if the user is going to use ‘bash’, but if they are, they will only affect programs which the user launches from within a terminal window. And, there is also a corresponding, system-wide configuration file within which ‘bash’ variables can be set.

The way users tend to use Linux 90% of the time is not such, that they open a terminal window to launch one specific program. Having to do so actually makes the existence of a desktop manager pointless. And so, what the user or the sysadmin needs to do, every time they set up a VNC Server, is find out in which configuration file exactly that VNC Server launches the desktop manager, and prepend that launch with the setting of variables. In my case, I’ve identified the file that needed to be edited.

Some VNC Servers use the ‘~/.Xsession’ file, which can be a kind of catch-all place to configure them. But then, if the configurations are put there, the danger also exists that some types of VNC Servers fail to load configurations from there.

This subject is also related to the question of why Linux, running on my tablet in that way, will never have a recycling bin. Under ‘LXDE’, as under ‘GNOME’, the ‘gvfs’ daemon needs to be running in order for the recycling bin to appear. This daemon needs to be started by ‘systemd’, and not by the desktop manager itself.

On my Chromebook, I have the recycling bin, no problem, as long as I do launch a VNC Session, because, every time the Chromebook launches its Linux subsystem, it also launches ‘systemd’. The Chromebook is using a real VM, within which processes can become root, without becoming root on the Host System, etc.

(Update 9/15/2020, 15h00: )

1:)

I just performed some careful digging into the question of which package tries to perform hardware-accelerated graphics, when I install the IDE named ‘eric’. My conclusion is that the Qt5 ‘scintilla’ packages, in both their Python and non-Python varieties, do this.

This is the Qt5 package that performs syntax highlighting, accepts the application user’s input of code with syntax awareness, etc.

It would seem that, regardless of which IDE I tried to install (except for ‘CudaText’), doing so eventually invoked H/W-accelerated graphics. And, I imagine that simply disabling H/W acceleration as I have done would not give satisfactory results with this software, because the highlighting and completion of code with tooltips etc. is really only rendered correctly on the screen with H/W acceleration. Therefore, what I did was in fact the best thing I could have done.

(Update 9/15/2020, 16h45: )

Long story short, you may write Qt5 GUI programs on such a tablet, and not use the ‘QScintilla’ widget class, as long as you are not proposing to write an IDE for your end-user to use, in writing his own code.

(Update 9/16/2020, 7h00: )

2:)

One of the IDEs which works just fine on the prooted ARM-64 platform is “Geany”, and, surprisingly, Geany recognizes Python scripts. Not only that, but a feature which Geany has, and CudaText does not, is a “Run” button that runs the script. However, under Debian 10 / Buster, Geany’s configuration should be updated to use the ‘python3’ command, instead of the default ‘python’ command, since Python 2 is being deprecated.

Unfortunately, in this environment, when a Python script tries to import (from) the module ‘random’, the result is a segfault, and I can guess why. Ordinarily, the way ‘random’ would generate random numbers on any Linux system, is to read data from the file ‘/dev/urandom’. Because of the way this Linux Guest System is sandboxed however, this (Host) device file is not made accessible to the Guest System.

Actually, the Guest System has its own, local version of ‘/dev/urandom’ and ‘/dev/random’, so that this should not be an issue…

(…)

For that reason, my tests to try generating large prime numbers also fail.

What this highlights is the fact that there are more reasons why many Python scripts will simply segfault, beyond the possibility that they could be trying to invoke hardware-accelerated graphics. And also highlighted is the apparent fact that such a tablet is not a good software development platform, in spite of the best efforts of its user.

(Update 9/16/2020, 8h40: )

3:)

I have discovered the most important reason, why many Python executions just segfault, when run within UserLAnd. It would seem that the environment variable:

LD_LIBRARY_PATH

Defaults to:

/data/user/0/tech.ula/files/support

Which is where UserLAnd overrides certain libraries. This variable can instead be set to:

/usr/lib/aarch64-linux-gnu

Which is where ARM64-Linux provides its genuine libraries. When this variable has been ‘normalized’, then Python can also be instructed to import the module ‘random’, so that my scripts run as normal:

Important!

This did not only affect how my personal Python scripts could run, but also, whether Python IDEs written in Python would (not) run!
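A minimal sanity check, run after the variable has been ‘normalized’, might look like this (a sketch; nothing here is specific to UserLAnd except the expectation that ‘random’ no longer crashes the interpreter):

```python
import os

# With LD_LIBRARY_PATH pointing at the genuine ARM64 libraries,
# importing 'random' should no longer crash the interpreter:
print("LD_LIBRARY_PATH =", os.environ.get("LD_LIBRARY_PATH", "(not set)"))

import random

sample = random.randint(1, 100)
assert 1 <= sample <= 100
print("'random' imported fine; sample value:", sample)
```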

(Update 9/17/2020, 15h25: )

If I want ‘geany’ to be able to compile C and C++ programs using this environment, then there is an additional step I need to take. My own C++ program will compile fine into an Object File, but will then fail to link to an executable, giving me the following error message on a naive attempt to do so:

undefined reference to __libc_csu_init

This error message can be alleviated, by setting Geany’s C++ Build Command to:


g++ -Wall -o "%e" /usr/lib/aarch64-linux-gnu/libc_nonshared.a "%f"



Explanation:

The static library ‘libc_nonshared.a’ is generally installed when we install the development packages, which are also pulled in by installing the C and C++ compilers. It’s a very basic library, needed to link any code. However, the variable ‘LD_LIBRARY_PATH’ only tells a program where to look for shared libraries when finally run; it has no effect on where a compiler and/or linker will look for static libraries. Therefore, stating this static library with a full path-name at link-time can resolve the basic error message. And the following is what my sample C++ program outputs when run with command-line parameters:

So, what I can also see now is, that when I use floating-point numbers of type ‘long double’, I actually obtain (30+) decimal places of precision, meaning that I actually obtain true, quadruple-precision, 128-bit floating-point numbers! This is different from what I obtain with Intel-family CPUs.

(Edit 9/17/2020, 20h25: )

The screen-shot above shows my custom-designed program, approximating the solutions to:

(1)x³ + (0)x² + (0)x + (−1) = 0

While I can know that the long numerals belonging to the two complex roots should correspond to (±) the cosine of 30°, I have no offhand proof that all 30 base-10 digits are accurate. This is also due to the fact that I wrote the program to factor out a complex solution, and then to attempt to factor out its conjugate, without re-estimating the value of the conjugate.
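In plain double precision at least (which only verifies ~15 decimal places, not 30), the relationship can be checked from Python’s standard library: the roots of x³ − 1 = 0 are the cube roots of unity, and the two complex ones have imaginary parts of ± cos(30°):

```python
import cmath
import math

# The three roots of x^3 - 1 = 0, generated as the cube roots of unity:
roots = [cmath.exp(2j * math.pi * k / 3) for k in range(3)]

# The two complex roots should have imaginary parts of +/- cos(30 deg):
cos30 = math.cos(math.radians(30))
for r in roots:
    if abs(r.imag) > 1e-9:
        assert math.isclose(abs(r.imag), cos30, rel_tol=1e-12)
        print(r)
```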

(Update 9/17/2020, 16h30: )

Much as I expected, simply setting the environment variable ‘LIBRARY_PATH’ to match the other one does not solve the problem, because the linker does not only need to be told in which directory to find ‘libc_nonshared.a’. It actually needs to be given the command to do so, which would normally be given by the following shared library:

/lib/aarch64-linux-gnu/libc.so.6

However, that library was never asked, what the compiler or linker were supposed to do. If the problem was in fact, that either the compiler or linker could not find the static library, then a different error-message would have resulted, signalling this possibility. But the error message that we got, simply told us that certain symbols were left undefined, apparently after all libraries sought-after were linked to.

Hence, the complex work-around.

I can offer a hint, as to what’s really going wrong there. I can see that a different version of ‘libc.so.6’ resides in the directory:

/data/user/0/tech.ula/files/support

Which is also the directory from which UserLAnd – the Android app which is hosting Linux – derives numerous basic libraries. It would only make sense if the version of ‘libc.so.6’ that resides in this latter directory was also being preloaded by all the Guest System applications. Because UserLAnd insists so strongly on doing things as this posting describes them, an additional assumption of mine would be, that if I tried to dissociate the Guest System from these library versions, in favour of the genuine Linux libraries, I’d probably break the Guest System in some way.

(Update 9/17/2020, 20h40: )

As the screen-shot below attempts to demonstrate, I’d say that Python is much easier to use on this platform, than C++ would be:

In the screen-shot above, I have loaded the Python module named ‘gmpy2’, which can be installed under Debian 10 / Buster with the command:


# apt install python3-gmpy2



And doing so also installs the C dependencies, namely the GNU multiprecision libraries (libgmp… for integer functions, and libmpfr… for real-number functions), which ship as two separate packages, but which are then recombined behind one Python3 package. The screen-shot is displaying the value (1.2), first to 1000 bits of precision, and then as a regular, double-precision floating-point number. Apparently, Python3’s native floating-point numbers are only double-precision, even on this ARM-64 platform.
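The underlying point can be illustrated from the standard library alone (I use ‘decimal’ here, since ‘gmpy2’ may not be installed everywhere): the binary value actually stored for the literal (1.2) is slightly off, and raising the precision only reveals more digits of the discrepancy; it never removes it.

```python
from decimal import Decimal

# Decimal(float) converts the stored binary double exactly, digit for digit:
exact = Decimal(1.2)
print(exact)

# The stored value is not exactly 1.2, because 1.2 has no finite
# binary expansion; the error is below 1 part in 10^15, though.
assert exact != Decimal("1.2")
assert abs(exact - Decimal("1.2")) < Decimal("1e-15")
```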

(Update 9/17/2020, 21h00: )

Yet, what I can do includes creating a Python3 terminal session on my tower computer named ‘Phosphene’, where a multiprecision, 1000-bit floating-point number should be equally as accurate as the number that I can create on my tablet. And then, I can compute the square root of (3), divided by (2), as shown below:

And, what the reader should be able to recognize while reading my blog on a Laptop or PC – not on a tablet – is, that the highlighted, base-10 digits of the answer that ‘python3-gmpy2’ gave me, match the base-10 digits that my own C++ program output…
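For readers without ‘gmpy2’ at hand, the same digits can be reproduced with the standard ‘decimal’ module, whose working precision is likewise settable (a sketch, at 50 significant digits rather than 1000 bits):

```python
from decimal import Decimal, getcontext

# 50 significant digits comfortably covers the ~30 that 'long double'
# printed, so the overlap can be compared digit by digit:
getcontext().prec = 50
half_sqrt3 = Decimal(3).sqrt() / 2
print(half_sqrt3)   # 0.8660254037844386467...
```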

(Update 9/17/2020, 22h25: )

Even mainstream Scientific Computation Software cannot be trusted blindly. The ‘python3-gmpy2’ package under Debian 9 / Stretch was already supposed to offer support for the (linked) ‘libmpc3’ C library, the purpose of which is to extend multiprecision Math to Complex Numbers. However, when instructed to compute:

e^((π/6)i)

The use of this tool can produce an inaccurate, as well as an accurate result. The following screen-shot shows what these results are, on my Debian 9 / Stretch tower computer named ‘Phosphene’:

The first result was computed in ~the Algebraically more-formal way~, where the arc-cosine of (0) is ~supposed to be~ (π/2). Notice how its imaginary component does not get close enough to ‘0.5j’. The second result was computed by raising (i) to the power of (1/3). Both results should really be identical, but they are not.

What the first, inaccurate result means is that, either in their implementation of the ‘exp()’ function, or in their implementation of the ‘acos()’ function, the developers fudged the results, by resorting to double-precision Math, while 1000-bit Math was called for.

I’d say they more-probably fudged their implementation of ‘acos()’ (the trig function), because if they had cheated in their ‘exp()’ function, the result would also have been that the ‘pow()’ function should not give 1000-bit-accurate results.
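For reference, the double-precision value of this expression is easy to obtain from the standard library, and already shows where a 1000-bit result should be heading, to ~15 places:

```python
import cmath
import math

# e^(i * pi/6), with the pi/6 obtained as acos(0) / 3:
z = cmath.exp(1j * math.acos(0) / 3)
print(z)

# Real part should be cos(30 deg), imaginary part 0.5 exactly:
assert math.isclose(z.real, math.cos(math.pi / 6), rel_tol=1e-12)
assert math.isclose(z.imag, 0.5, rel_tol=1e-12)
```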

(Update 9/17/2020, 22h40: )

I’ve observed that Debian 10 / Buster misbehaves in exactly the same way, on my tablet.

(Update 9/17/2020, 23h15: )

I owe the developers an apology. When I gave the Python statement:


gmpy2.exp(mpc('1.0j') * gmpy2.acos(0) / 3)



I was making the mistake. Feeding-in the numeral (0) set up the calculation to be done in double-precision format. What I needed to do was to cast that zero to a 1000+1000-bit complex number first, like so:


gmpy2.exp(mpc('1.0j') * gmpy2.acos(mpc('0')) / 3)



Doing that feeds the ‘acos()’ function a multiprecision object, in turn leading to a 1000-bit accurate result. (:4) But, don’t take my word for it. Even though it’s harder to take screen-shots from the tablet, I have just prepared the following demonstration, of how the problem can be solved 100%:

This solution works equally well under Debian 9 / Stretch, as it does under Debian 10 / Buster.

(Update 9/18/2020, 3h00: )

Assuming that a startup script has been written, that sets the environment variable ‘LD_LIBRARY_PATH’ correctly, another Python IDE that seems to work well is called ‘thonny’:

Under Debian 10 / Buster, it can be installed easily from the package-manager:


# apt install thonny



(Update 9/19/2020, 20h10: )

4:)

There’s an underlying implication to the code above, that might intimidate some readers:

• The trig and inverse-trig functions of complex numbers are defined!

However, I make an assumption about those: that the real numbers merely form a subset of the complex numbers, and that the trig and inverse-trig functions of complex numbers whose imaginary components are zero should still equal the functions of the corresponding real numbers. Using the expression cited above simply spares me the need to import ‘mpfr’ from ‘gmpy2’ as well.
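That assumption is easy to spot-check at double precision, using Python’s standard ‘cmath’ and ‘math’ modules:

```python
import cmath
import math

# acos of a complex zero, versus acos of a real zero:
z = cmath.acos(0 + 0j)
r = math.acos(0)

# Both should equal pi/2, and the complex result's imaginary
# part should vanish:
assert math.isclose(z.real, r, rel_tol=1e-15)
assert z.imag == 0.0
print(z.real, r)
```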

Dirk