Trying to turn an ARM-64-based, Android-hosted, prooted Linux Guest System into a software development platform.

In a preceding posting, I described how I had used an Android app that neither requires nor benefits from having ‘root’, to install a Linux Guest System on a tablet with an ARM-64 CPU, referred to more precisely as the ‘aarch64-linux-gnu’ architecture. The Android app sets up a basic Linux system, but the user can extend it using apt-get, if he chose a Debian 10 / Buster -based system as I did. From then on, the user’s ability to run software depends mostly on how well the Debian package maintainers cross-compiled their packages to ‘AARCH64’. Yet, on some occasions, even in this situation, a user might want to write and then run his own code.

To make things worse, the main alternative to a pure text interface is a VNC session based on ‘TightVNC’, by the choice of this app’s developers. On a Chromebook I chose differently, setting up a ‘TigerVNC’ desktop instead, but on this tablet the choice was up to the Android developers alone. What this means is that the Linux applications are forced to render purely in software mode.

Many factors work against writing one’s own code, including the fact that the resulting executables will have been compiled for the ‘ARM’ CPU, and linked against Linux libraries! :-D

But one of the immediate handicaps is that the user might want to program in Python, yet can’t get any good IDEs to run. Every free IDE I could try would segfault, and I don’t believe these segfaults are due to problems with my Python libraries. The IDEs were themselves written in Python, using Qt5, Gtk3 or wxWidgets modules. Libraries of this type are notorious, as the Qt5 library is, for relying on GPU acceleration, which is nowhere to be found here. One reason I think this is most often the culprit is that one of the IDEs, “Eric”, actually manages to report with a gasp that it could not create an OpenGL rendering surface, and then segfaults. (:3)

 

(Edit 9/15/2020, 13h50: )

I want to avoid any misinterpretation of what I just wrote. This does not happen out of nowhere, merely because an application developer decided to build his application using ‘python3-pyqt5’ etc. When I give the command:

 


# apt install eric

 

Doing so pulls in many dependencies, including an offending package. (:1) Therefore, the application developer who wrote ‘Eric’ not only chose to use one of the Python GUI libraries, but also chose to use OpenGL.

Of course, after I next give the command to remove ‘eric’, I also follow up with the command:

 


# apt autoremove

 

Just so that the offending dependencies are no longer installed.

 

(End of Edit, 9/15/2020, 13h50.)

 

Writing convoluted code is more agreeable if, at the very least, we have an IDE in front of us that can highlight certain syntax errors, scan includes for code completion, etc. (:2)

Well, there is a Text Editor cut out for that exact situation, named “CudaText”. I must warn the reader, though, that there is a learning curve with this text editor. But, just to prove that the AARCH64-ported Python 3.7 engine is not itself buggy: the text editor’s plug-in framework is written in Python 3, and as soon as the user has learned his first lesson in how to configure CudaText, the plug-in system comes fully to life, without any segfaults, running the Guest System’s Python engine. I think CudaText is based on Gtk2.

(Screenshot: CudaText running in a VNC Viewer session, 9/14/2020.)

This might just turn out to be the correct IDE for that tablet.

 

(Updated 9/19/2020, 20h10… )

Continue reading Trying to turn an ARM-64-based, Android-hosted, prooted Linux Guest System into a software development platform.

A Hypothetical Way to Generate Bigger Random Numbers, using the GMP Library.

In recent days I wrote some Python scripts which generate 1024-bit prime numbers. But the next stage in my own thinking is to try to accomplish the same thing in C++, using the GMP Multi-Precision Library, because GMP seems to be a well-supported and overall favorite C++ Multi-Precision Library. But when I explored this subject further, I noticed something which surprised me:

GMP is still using the ‘Linear Congruential algorithm’ as its main source of strong, pseudo-random numbers. The reason this fact surprises me is that the Linear Congruential algorithm was invented decades ago, as a cheap way to achieve pseudo-randomness that would be good enough for games to surprise players, but which was never meant to provide crypto-quality random numbers. Back then, the registers on which this algorithm was used may have been 16-bit or 32-bit registers, while today they are 256-bit registers, for which reason a careful and random-looking choice of the two constants is important. In fact, GMP defines the following functions, to initialize a ‘gmp_randstate_t’ object to become a Linear Congruential RNG:


 

int gmp_randinit_lc_2exp_size (gmp_randstate_t state, mp_bitcnt_t size)

void gmp_randinit_lc_2exp (gmp_randstate_t state, const mpz_t a, unsigned long c, mp_bitcnt_t m2exp)

For people who did not know, the general form of the algorithm is:

m2exp = 2 * size

X := (aX + c) mod 2^m2exp

The first of the two initializations above uses the ‘size’ parameter to look up, in a static, known table, what the ‘ideal’ values for the constants (a) and (c) are, to achieve maximum randomness. The second initialization allows the programmer to specify those constants himself, and poses no restrictions on what ‘m2exp’ will be.
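
Just to make the distinction concrete, here is a minimal sketch of the first, table-based initialization. This is my own example, not taken from the GMP manual, and the seed is a placeholder (compile with something like ‘g++ example.cpp -lgmp’):

// Minimal sketch: initialize GMP's linear congruential RNG from its
// built-in table of (a, c) constants, then draw one 1024-bit number.

#include <gmp.h>
#include <cstdio>

int main() {
    gmp_randstate_t state;

    // Returns 0 if 'size' exceeds what the built-in table provides.
    if (gmp_randinit_lc_2exp_size(state, 128) == 0) {
        std::fprintf(stderr, "Requested size not in GMP's table.\n");
        return 1;
    }

    gmp_randseed_ui(state, 12345UL);  // placeholder seed, for illustration only

    mpz_t r;
    mpz_init(r);
    mpz_urandomb(r, state, 1024);     // one pseudo-random 1024-bit value
    gmp_printf("%Zx\n", r);

    mpz_clear(r);
    gmp_randclear(state);
    return 0;
}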

One of the first approaches a cryptographic programmer might want to pursue, in order to eventually generate a prime number, is to read some random bits from the device-file ‘/dev/random’ (on a Linux computer), use the first initialization above to obtain an RNG, and then seed this RNG once from the system-provided random number. With that RNG, the programmer can then suggest both prime candidates and witnesses, to determine whether the candidates are prime, until one prime number is ‘proven’.
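
In code, that approach might look roughly as follows. This is again my own sketch; the 32-byte read and the 25 Miller-Rabin rounds are arbitrary, illustrative choices, not recommendations:

#include <gmp.h>
#include <cstdio>

// Seed the RNG once, from 256 bits read out of /dev/random.
static bool seed_from_dev_random(gmp_randstate_t state) {
    unsigned char buf[32];
    std::FILE *f = std::fopen("/dev/random", "rb");
    if (!f) return false;
    size_t got = std::fread(buf, 1, sizeof buf, f);
    std::fclose(f);
    if (got != sizeof buf) return false;

    mpz_t seed;
    mpz_init(seed);
    mpz_import(seed, sizeof buf, 1, 1, 0, 0, buf);  // bytes -> big integer
    gmp_randseed(state, seed);
    mpz_clear(seed);
    return true;
}

int main() {
    gmp_randstate_t state;
    if (gmp_randinit_lc_2exp_size(state, 128) == 0) return 1;
    if (!seed_from_dev_random(state)) return 1;

    // Propose 1024-bit candidates until one passes Miller-Rabin testing.
    mpz_t cand;
    mpz_init(cand);
    do {
        mpz_urandomb(cand, state, 1024);
        mpz_setbit(cand, 1023);   // force a full 1024-bit length
        mpz_setbit(cand, 0);      // force the candidate to be odd
    } while (mpz_probab_prime_p(cand, 25) == 0);

    gmp_printf("probable prime: %Zx\n", cand);
    mpz_clear(cand);
    gmp_randclear(state);
    return 0;
}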

But I see potential pitfalls for any programmer who may want to go that route:

  • Given that (a) and (c) are to be chosen from a known table, this presents a vulnerability, because a hypothetical attacker against this crypto-system could use these constants to gain knowledge about the internal state of the ‘gmp_randstate_t’ object, and thereby become aware of a limited set of prime numbers that can result, narrowing his attack against eventual public keys, by only trying to prime-factorize or otherwise decrypt, using that narrowed set of primes.
  • Even if the constants (a) and (c) are secure in nature and not themselves hacked, the table presently only extends to a ‘size’ of 128 bits, which actually means that the modulus, 2^m2exp, is 2^256. And so, ‘the maximum amount of randomness’, i.e., the Entropy, which even a 2048-bit public-key modulus can achieve, will be 256 bits. And this would also mean that the strength of the key-pair is only equivalent to that of a 128-bit, symmetrical AES key, regardless of how complex it is.
  • Some programmers might actually want to work with a modulus of 2^512, as sketched below.
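
The second initialization makes that last wish possible. Below is a minimal sketch of how one might set up a 2^512 modulus with caller-chosen constants. The constants here are placeholders which I picked only to satisfy the classical full-period conditions for a power-of-two modulus, namely that (c) be odd and that (a) ≡ 1 (mod 4); they have not been vetted for statistical quality, let alone cryptographic strength:

#include <gmp.h>

// Hedged sketch: a linear congruential state with modulus 2^512,
// using caller-chosen constants instead of GMP's built-in table.
void init_lc_512(gmp_randstate_t state) {
    mpz_t a;
    mpz_init(a);

    // Placeholder multiplier: a = 2^511 + 5, which satisfies a ≡ 1 (mod 4).
    mpz_ui_pow_ui(a, 2, 511);
    mpz_add_ui(a, a, 5);

    // c = 12345 is odd; m2exp = 512 gives the 2^512 modulus.
    gmp_randinit_lc_2exp(state, a, 12345UL, 512);

    mpz_clear(a);
}

It’s worth adding that, if I read the GMP manual correctly, only the high half of each X is actually output, because the low bits of a power-of-two linear congruential generator are known to be weak. That is exactly why m2exp = 2 * size in the table-based initialization.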

At the same time, there are reasons why the obvious solution, just to read all the random bits from the device-file ‘/dev/urandom’, poses its own problems. One of the reasons is that potentially, 300 (+) prime-number candidates may need to be generated, each of which will be 1024 bits long, and tested 200 (+) times, and the quality of the randomness ‘/dev/urandom’ provides under those conditions may be sub-optimal, because that source, too, is pseudo-random, and will only be minimally based on the physically-measured randomness which ‘/dev/random’ represents. And yet, ‘/dev/random’ will typically block if more than ~2048 bits are to be read from it.

I can think of an approach to solving this problem, which may overcome most of the hurdles…

(Updated 5/12/2019, 22h30 … )

Continue reading A Hypothetical Way to Generate Bigger Random Numbers, using the GMP Library.

Hybrid Encryption

If the reader is the sort of person who sometimes sends emails to multiple recipients, and who uses the public key of each recipient, thereby actively using encryption, he may be wondering why it’s possible for him to specify more than one encryption key for the same mass-mailing.

The reason this happens is a practice called Hybrid Encryption. If the basis for encryption were only RSA, say with a 2048-bit modulus, then one problem which should become apparent immediately is that not all possible 2048-bit blocks of cleartext can be encrypted. Even if we assume that the highest bit of the modulus is a 1, many less-significant bits will be zeroes, which means that eventually a 2048-bit block will arise that exceeds the modulus. At that point, the value, which is mathematically meaningful only within the modulus, gets wrapped around. For example, as soon as we try to encode the number (m) in the modulus of (m), what we obtain is (zero), in the modulus of (m).
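
A toy, textbook-RSA demonstration of that wrap-around, using GMP’s C++ interface (my own sketch, with a deliberately tiny modulus and no padding; compile with ‘g++ toy.cpp -lgmpxx -lgmp’):

#include <gmpxx.h>
#include <cstdio>

int main() {
    // Toy key: n = 61 * 53 = 3233, e = 17.
    mpz_class n = 3233, e = 17;
    mpz_class m1 = 42;        // an in-range plaintext block
    mpz_class m2 = m1 + n;    // an "out of range" block, exceeding the modulus

    mpz_class c1, c2;
    mpz_powm(c1.get_mpz_t(), m1.get_mpz_t(), e.get_mpz_t(), n.get_mpz_t());
    mpz_powm(c2.get_mpz_t(), m2.get_mpz_t(), e.get_mpz_t(), n.get_mpz_t());

    // Prints 1: both plaintexts produce the same ciphertext, so the
    // receiver could never tell m1 and m2 apart.
    std::printf("%d\n", c1 == c2);
    return 0;
}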

But we know that strong, symmetrical encryption techniques exist, which may have only 256-bit blocks of data, with 256-bit keys, and which are at least as strong as a 2048-bit RSA key-pair.

What gets applied in emails is that the sender generates a 256-bit symmetrical encryption key (which does not need to be the highest-quality random number, BTW), and that only this encryption key is encrypted, using multiple recipients’ public keys, once per recipient, while the body of the email is only encrypted once, using the symmetrical key. Each recipient can then use his private key to decrypt the symmetrical key, and can then decrypt the message, using the symmetrical key.
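
A structural sketch of that flow is below. To be clear, this is not real crypto: it uses textbook RSA without padding, and a toy XOR keystream where AES-256 would really go, and all the names are my own:

#include <gmpxx.h>
#include <string>
#include <vector>

struct Recipient { mpz_class n, e; };   // one RSA public key per recipient

// The asymmetric step: encrypt the one per-message key once per recipient.
std::vector<mpz_class> wrap_session_key(const mpz_class &session_key,
                                        const std::vector<Recipient> &rcpts) {
    std::vector<mpz_class> wrapped;
    for (const Recipient &r : rcpts) {
        mpz_class c;
        mpz_powm(c.get_mpz_t(), session_key.get_mpz_t(),
                 r.e.get_mpz_t(), r.n.get_mpz_t());
        wrapped.push_back(c);
    }
    return wrapped;
}

// Placeholder for the symmetric cipher: a repeating-key XOR that only
// marks WHERE AES-256 would act on the message body.  Not secure.
std::string toy_symmetric(const std::string &body, const std::string &key) {
    std::string out = body;
    for (size_t i = 0; i < out.size(); ++i)
        out[i] ^= static_cast<unsigned char>(key[i % key.size()]);
    return out;
}

The point of the structure is that the expensive, asymmetric step runs once per recipient, but only ever on a 256-bit value, while the arbitrarily long message body passes through the symmetric cipher exactly once.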

This way, it is also easy to recognize whether the decryption was successful or not, because if the private key used was incorrect, a 2048- or 2047-bit binary number would result, while the correct decryption is supposed to reveal a 256-bit key, preceded by another 1792 zeroes. I think that if crypto-software recognizes the correct number of leading zeroes, the software will assume that what it has obtained is a correct symmetrical key, which could also be called a temporary, or per-message key.
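
In code, that naive acceptance test might look like this. It’s only a sketch of the structure just described; real software uses standardized padding schemes, which perform a more careful version of the same idea:

#include <gmpxx.h>

// Accept the decrypted 2048-bit block as a session key only if its
// 1792 leading bits are zero, i.e., the value fits within 256 bits.
bool looks_like_session_key(const mpz_class &decrypted) {
    return mpz_sizeinbase(decrypted.get_mpz_t(), 2) <= 256;
}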

Now, the reader might think that this subject is relevant to nothing else, but quite the contrary. A similar scheme exists in many other contexts, such as SSL and Bluetooth Encryption, in which complex algorithms such as RSA are used to generate a temporary, per-session key, or to generate a per-pairing key, which can then be applied in a consistent way by a Bluetooth chip, if that’s the kind of Bluetooth chip that speeds up communication by performing the encryption of the actual stream by itself.

What all this means is that even when hardware encryption is being used, the actual I/O chip only applies the per-session or per-pairing key to the data-stream, so that the chip can have logic circuits which only ‘know’, or implement, one strong, symmetrical encryption algorithm. The way in which this temporary key is generated could be made complicated to the n-th degree, even using RSA if we like. But then this Math, to generate one temporary or per-pairing key, will still take place on the CPU, and not on the I/O chip.

(Corrected 10/05/2018, 13h45 … )

Continue reading Hybrid Encryption