A Hypothetical Way, to Generate Bigger Random Numbers, using the GMP Library.

In recent days I wrote some Python scripts which generate 1024-bit prime numbers. But the next stage in my own thinking is to try to accomplish the same thing in C++, using the GMP Multi-Precision Library, because GMP seems to be a well-supported, and an overall favorite, C++ Multi-Precision Library. But when I explored this subject further, I noticed something which surprised me:

GMP is still using the ‘Linear Congruential algorithm’ as its main source of strong, pseudo-random numbers. The reason this fact surprises me is that the Linear Congruential algorithm dates back decades (it was already in wide use by the 1970s), as a cheap way to achieve pseudo-randomness that would be good enough for games to surprise players, but it was never meant to provide crypto-quality random numbers. Actually, back in the 1970s, the registers on which this algorithm was run may have been 16-bit or 32-bit registers, while today they are 256-bit registers, for which reason a careful and random-looking choice of the two constants is important. In fact, GMP defines the following functions, to initialize a ‘gmp_randstate_t’ object to become a Linear Congruential RNG:

int gmp_randinit_lc_2exp_size (gmp_randstate_t state, mp_bitcnt_t size)

void gmp_randinit_lc_2exp (gmp_randstate_t state, const mpz_t a, unsigned long c, mp_bitcnt_t m2exp)

For readers who are not familiar with it, the general form of the algorithm is:

m2exp == 2 * size

X := (a * X + c) mod 2^m2exp
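
To make that recurrence concrete, here is a minimal sketch of one step of it, in C++ using GMP’s integer functions. The helper ‘lc_step()’ is my own, hypothetical name, not part of GMP’s API; GMP performs the equivalent update internally, whenever random numbers are requested from such a state:

#include <gmp.h>

// A sketch only: 'lc_step' is a hypothetical helper, not part of GMP's API.
void lc_step(mpz_t X, const mpz_t a, unsigned long c, mp_bitcnt_t m2exp)
{
    mpz_mul(X, X, a);               // X := a * X
    mpz_add_ui(X, X, c);            // X := X + c
    mpz_fdiv_r_2exp(X, X, m2exp);   // X := X mod 2^m2exp (keep the low m2exp bits)
}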

The first of the two initializations above uses the ‘size’ parameter, in order to look up in a static, known table, what the ‘ideal’ values for the constants (a) and (c) are, to achieve maximum randomness. The second initialization allows the programmer to specify those constants himself, and poses no restrictions on what ‘m2exp’ will be.

One of the first approaches a cryptographic programmer might want to pursue, in order eventually to generate a prime number, is to read some random bits from the device-file ‘/dev/random’ (on a Linux computer), to use the first initialization above to obtain an RNG, and then to seed this RNG once with the system-provided random number. From that RNG the programmer can then suggest both prime candidates and the witnesses used to determine whether the candidates are prime, until one prime number has been ‘proven’.
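
What follows is a minimal sketch of that approach in C++, under a few assumptions of my own: the seed is 256 bits read from ‘/dev/random’, the ‘size’ parameter is 128 (the largest entry in GMP’s table), and the primality testing is delegated to GMP’s own ‘mpz_probab_prime_p()’, which chooses its witnesses internally rather than drawing them from our RNG. The file-name is arbitrary, and the sketch can be built with ‘g++ sketch.cpp -lgmp’:

#include <gmp.h>
#include <fstream>
#include <iostream>

int main()
{
    // Read a 256-bit seed from the kernel's blocking entropy pool.
    unsigned char buf[32];
    std::ifstream dev("/dev/random", std::ios::binary);
    dev.read(reinterpret_cast<char *>(buf), sizeof buf);
    if (!dev) {
        std::cerr << "Could not read from /dev/random\n";
        return 1;
    }

    mpz_t seed;
    mpz_init(seed);
    mpz_import(seed, sizeof buf, 1, 1, 0, 0, buf);

    // First initialization: (a) and (c) are taken from GMP's internal table.
    gmp_randstate_t state;
    if (gmp_randinit_lc_2exp_size(state, 128) == 0) {
        std::cerr << "Requested 'size' exceeds GMP's table\n";
        return 1;
    }
    gmp_randseed(state, seed);

    // Suggest 1024-bit candidates until one passes the probabilistic test.
    mpz_t candidate;
    mpz_init(candidate);
    do {
        mpz_urandomb(candidate, state, 1024);
        mpz_setbit(candidate, 1023);      // force the full bit-length
        mpz_setbit(candidate, 0);         // force the candidate to be odd
    } while (mpz_probab_prime_p(candidate, 25) == 0);

    gmp_printf("%Zd\n", candidate);

    mpz_clears(seed, candidate, NULL);
    gmp_randclear(state);
    return 0;
}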

But I see potential pitfalls, for any programmer who may want to go that route:

  • Given that (a) and (c) are to be chosen from a known table, this presents a vulnerability: a hypothetical attacker against this crypto-system may use these constants to gain knowledge about the internal state of the ‘gmp_randstate_t’ object, and may therefore become aware of the limited set of prime numbers that can result, thereby narrowing his attack against the eventual public keys, by only trying to prime-factorize, or otherwise to decrypt, using that narrowed set of primes.
  • Even if the constants (a) and (c) are secure in nature and not themselves hacked, the table presently only extends to a ‘size’ of 128 bits, which actually means that the modulus ‘m2exp’ is 2^256. And so ‘the maximum amount of randomness’ – i.e., the Entropy – which even a 2048-bit public-key modulus can achieve will be 256 bits. This would also mean that the strength of the key-pair is only equivalent to that of a 128-bit, symmetrical AES key, regardless of how complex the key-pair is.
  • Some programmers might actually want to work with a modulus of 2^512 (see the sketch after this list).
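
On the third point, what follows is a hypothetical sketch of how the second initialization could be used with a modulus of 2^512. The multiplier and the additive constant below are placeholders of my own, which have not been vetted for the statistical quality of the resulting generator, and the state is not explicitly seeded here, which a real program would of course do:

#include <gmp.h>

int main()
{
    // Placeholder multiplier: 5^150 is odd and roughly 349 bits long, but it
    // has not been vetted for the statistical quality of the resulting RNG.
    mpz_t a;
    mpz_init(a);
    mpz_ui_pow_ui(a, 5, 150);

    // Second initialization: the programmer supplies (a), (c) and m2exp.
    gmp_randstate_t state;
    gmp_randinit_lc_2exp(state, a, 12345UL, 512);   // X := (a*X + c) mod 2^512

    // Draw 256 random bits from the resulting generator.
    mpz_t r;
    mpz_init(r);
    mpz_urandomb(r, state, 256);
    gmp_printf("%Zx\n", r);

    mpz_clears(a, r, NULL);
    gmp_randclear(state);
    return 0;
}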

At the same time, there are reasons why the obvious solution, of just reading all of the random bits from the device-file ‘/dev/urandom’, poses its own problems. One of the reasons is the fact that potentially, 300(+) prime-number candidates may need to be generated, each of which will be 1024 bits long and tested 200(+) times, and that the quality of the randomness which ‘/dev/urandom’ provides under those conditions may also be sub-optimal, because that source, too, is pseudo-random, and its output will only be minimally based on the physically-measured randomness which ‘/dev/random’ represents. And yet, ‘/dev/random’ will typically block, if more than roughly 2048 bits are to be read from it at once.

I can think of an approach to solving this problem, which may overcome most of the hurdles…

(Updated 10/13/2018, 13h10 … )

Continue reading A Hypothetical Way, to Generate Bigger Random Numbers, using the GMP Library.

Debiasing Bit-Streams

One subject which has received a lot of attention in popular computing is how to generate random numbers that are not merely pseudo-random numbers of the old-fashioned sort, but that are truly random, and of sufficiently good quality to be used in cryptography.

In order to achieve this goal, there exist certain packages such as

  • ‘bit-babbler’ (Requires specialized USB-keys that can be bought.)
  • ‘rng-tools’ (Assumes a special CPU-feature, which only the most recent CPUs will have.)
  • ‘haveged’ (Is only present in Debian / Jessie repos, and later. Not present in Debian / Lenny repos.)
  • ‘randomsound’ (An age-old solution, present in all the older repos.)

An interesting observation which I will make about ‘rng-tools’ is that as of kernel-version 3.17, this daemon doesn’t strictly need to be installed, because the kernel has intrinsic support for the CPU’s hardware random-number generator, if one is detected. The higher kernel-versions will incorporate its output into ‘/dev/random’ automatically, but not exclusively. Whether the reader has this H/W feature can be determined by installing the package ‘cpuid’ and then running:

cpuid | grep DRAND

The only real advantage which might remain to using ‘rng-tools’ would be the configurability of this package, for user-defined sources of random data. In other words, a clever user could write a custom script which looks for random data anywhere to his liking, which hashes that data, and which then offers the hash as a source in his configuration of ‘rng-tools’.

If the user has the ALSA sound-system installed, then the following script might work:

#!/bin/bash

# Record 1 second of audio from the default ALSA device, as signed 16-bit
# samples, writing the result to standard output.  Replace 'user' with the
# main user's username (see the note below the script).
NOISE_CMD="sudo -u user arecord --format=S16_LE --duration=1 -q"

# Re-create the named pipe which 'rng-tools' can be configured to read.
if [ -e /opt/entropy ]
then
  rm -f /opt/entropy || exit 1
fi

mkfifo /opt/entropy || exit 1

# Hold the pipe open read-write on descriptor 3, so that the redirections
# below never block for lack of a peer.
exec 3<> /opt/entropy

# Prime the pipe once, with 256 bytes from /dev/urandom.
head -c 256 /dev/urandom >/opt/entropy

# Wait 5 minutes before entering the maintenance loop.
sleep 300

while true
do

  # Try for up to 1 second to read a single byte from the pipe.
  read -n1 -t 1 FIFO_HAS </opt/entropy

  if [ $? -eq 0 ]
  then
    # The pipe still holds data; check again shortly.
    sleep 2

  else
    # The pipe has been depleted.  First verify that the sound device can be
    # accessed without an error-condition ...
    $NOISE_CMD >>/dev/null || exit 1

    # ... then replenish the pipe with 8 blocks of 64 bytes, each block being
    # the SHA-512 hash of 1 second of recorded noise, converted back to binary.
    n=1
    while [ $n -le 8 ]
    do
      $NOISE_CMD | \
        shasum -b -a 512 - | cut -c -128 | xxd -r -p >/opt/entropy
      n=$(( n+1 ))
    done
  fi
done

Because this script would run as root, it’s important, especially on PulseAudio-based systems, to replace ‘user’ above with the main user’s username, so that ‘arecord’ is invoked as that user.

(Edit 01/04/2018 : One tricky fact which needs to be considered, if the above script is to be of any practical use, is that it needs to be started before ‘rng-tools’ is, so that when the latter daemon starts, the object ‘/opt/entropy’ will already exist.

If my script were just to delete an existing object by that name and create a new one, after ‘rng-tools’ was already running, then that daemon would simply be left with an invalid handle, for the deleted object. The newly-created pipe would not replace it, within the usage by ‘rng-tools’.

By now I have debugged the script and tested it, and it works 100%.

I leave it running all day, even though I have no use for the generated ‘/opt/entropy’.

Further, I’ve added a test, before running the command 8 times, to verify that accessing the sound device does not result in an error-condition. The default-behavior of ‘arecord’ is blocking, which is ideal for me, because it means that if the device is merely busy, my script’s invocation of ‘arecord’ will simply wait, until the device is available again. )

(Edit 01/06/2018 : If an external application starts to read from the named pipe at an unpredicted point in time and depletes it, the maximum amount of time the above script will wait, before replenishing the pipe, is 5 seconds, assuming that the CPU has the necessary amount of idle-time.

The script may spend 2 seconds ‘sleep’ing, then 1 second trying to read a byte from the pipe, then 1 second testing the command that’s supposed to generate the ‘Noise’, and then 1 more second actually generating the first block of random bits, out of 8 consecutive blocks.

I should also add that the OSS sound system still exists, although its use has largely been made obsolete on mainstream PCs by either ‘PulseAudio’ or ‘ALSA’, which coexist just fine. But there is a simple reason to avoid trying to access the device-files of the sound-device directly:

On-board sound devices usually default to 8-bit mode. And in that mode, the probability is much too high that sequences of samples will have an unchanging value, because there is a limit to how noisy the analog amplifiers that feed the A/D converter can be, and in 8-bit mode that noise may not even be strong enough to toggle the least-significant bit. And so it is critical that the sound-device be switched into some 16-bit mode, which is much easier to do using the ‘arecord’ command than it would be with direct access to the device-files.

Given nothing but sequences of 8-bit sound-samples, it should come as no surprise if eventually, the resulting hash-codes actually repeat. :-)  )

I’m going to focus my attention on ‘randomsound’ during the rest of this posting, even though on most of my computers, I have better solutions installed. What the package ‘randomsound’ does is put the sound-input device, which presumably has a typical, cheap A/D converter, into Unsigned, 16-bit, Mono, 8kHz mode, and extract only the Least-Significant Bit from each sound sample. In order for that to work well, the bits extracted in this way need to be “debiased”.

What this means is that on old sound cards, the A/D converter is so bad that even the LSB does not have a probability of exactly 50% of being either a 1 or a 0. Thus, the bits obtained from the hardware in this way need to be measured first, to establish what the probability is of a given bit being a (1), and must then be processed further, in a way that compensates for the non-50% probability, before bits are derived which are supposedly random.
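
As a minimal sketch of that measuring step, assuming the raw LSBs have already been unpacked into a buffer holding one 0-or-1 value per byte (the function name and the representation are my own assumptions, in C++):

#include <vector>
#include <cstdint>
#include <cstddef>

// A sketch only: how the bits were extracted from the sound samples is not
// shown; they are assumed to arrive as one 0-or-1 value per byte.
double estimate_probability_of_one(const std::vector<std::uint8_t> &bits)
{
    std::size_t ones = 0;
    for (std::uint8_t b : bits)
        ones += (b & 1u);
    return bits.empty() ? 0.5 : static_cast<double>(ones) / bits.size();
}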

I’ve given some private thought, on how the debiasing of this bit-stream might work, and can think of two ways:

  1. A really simple way, that only uses the XOR function, between the current input-bit, and the most-recent output-bit,
  2. A more-complex way, based on the idea that if 8 biased bits are combined arbitrarily into a byte, on the average, the value of this byte will not equal 128.

The first approach would have the disadvantage that, if the bias were strong, a sequence of bits would result which would still not be adequately random.
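
For what it’s worth, here is a minimal sketch of the first approach, under the same assumed bit-representation as above; the starting value of the most-recent output-bit is arbitrarily taken to be zero:

#include <vector>
#include <cstdint>

// A sketch only: each output bit is the XOR of the current input bit with the
// most recent output bit.
std::vector<std::uint8_t> debias_xor(const std::vector<std::uint8_t> &biased_bits)
{
    std::vector<std::uint8_t> out;
    out.reserve(biased_bits.size());
    std::uint8_t previous_output = 0;     // assumed starting state
    for (std::uint8_t bit : biased_bits) {
        previous_output ^= (bit & 1u);    // XOR with the most recent output bit
        out.push_back(previous_output);
    }
    return out;
}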

Continue reading Debiasing Bit-Streams

Hybrid Encryption

If the reader is the sort of person who sometimes sends emails to multiple recipients, and who uses the public key of each recipient, thereby actively using encryption, then he may be wondering why it’s possible for him to specify more than one encryption key, for the same mass-mailing.

The reason this happens is a practice called Hybrid Encryption. If the basis for encryption were only RSA, let’s say with a 2048-bit modulus, then one problem which should become apparent immediately is that not all possible 2048-bit blocks of cleartext can be encrypted: even if we assume that the highest bit of the modulus is a 1, many of its less-significant bits will be zeroes, which means that eventually a 2048-bit block will arise that exceeds the modulus. And at that point, the value, which is only mathematically meaningful within the modulus, will get wrapped around. As soon as we try to encode the number (m), working in the modulus of (m) itself, what we obtain is zero, in the modulus of (m).
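
As a tiny demonstration of that wrap-around, here is a sketch in C++ with GMP, using the textbook toy values n = 61 * 53 = 3233 and e = 17, which are nothing like real key material:

#include <gmp.h>

int main()
{
    mpz_t n, e, m, c;
    mpz_inits(n, e, m, c, NULL);
    mpz_set_ui(n, 61UL * 53UL);   // toy RSA modulus, n = 3233
    mpz_set_ui(e, 17UL);          // toy public exponent
    mpz_set(m, n);                // a 'cleartext' numerically equal to the modulus
    mpz_powm(c, m, e, n);         // c = m^e mod n
    gmp_printf("%Zd\n", c);       // prints 0: the block has wrapped around
    mpz_clears(n, e, m, c, NULL);
    return 0;
}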

But we know that strong, symmetrical encryption techniques exist, which may only work on small blocks of data (128 or 256 bits at a time), which have 256-bit keys, and which are at least as strong as a 2048-bit RSA key-pair.

What gets applied in emails is that the sender generates a 256-bit symmetrical encryption key – which does not need to be the highest-quality random number, BTW – and that only this encryption key is encrypted using the recipients’ public keys, once per recipient, while the body of the email is only encrypted once, using the symmetrical key. Each recipient can then use his private key to decrypt the symmetrical key, and can then decrypt the message, using the symmetrical key.
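
What follows is a hypothetical sketch of that key-wrapping step, in C++ with GMP. It uses textbook RSA with no padding, and it generates small, throw-away, roughly 1024-bit demonstration keys for two imaginary recipients, just so that the sketch is self-contained; real mail software would use each recipient’s actual public key, together with a padding scheme such as OAEP, and the encryption of the message body itself is not shown:

#include <gmp.h>
#include <ctime>

// Generate a throw-away, roughly 1024-bit RSA public key (n, e), for the
// demonstration only.
static void demo_keypair(mpz_t n, mpz_t e, gmp_randstate_t st)
{
    mpz_t p, q;
    mpz_inits(p, q, NULL);
    mpz_urandomb(p, st, 512);
    mpz_setbit(p, 511);               // force a full 512-bit candidate
    mpz_nextprime(p, p);
    mpz_urandomb(q, st, 512);
    mpz_setbit(q, 511);
    mpz_nextprime(q, q);
    mpz_mul(n, p, q);                 // n = p * q
    mpz_set_ui(e, 65537UL);           // the common public exponent
    mpz_clears(p, q, NULL);
}

int main()
{
    gmp_randstate_t st;
    gmp_randinit_default(st);
    gmp_randseed_ui(st, (unsigned long) std::time(nullptr));

    // One 256-bit session key, meant to encrypt the whole message body.
    mpz_t session_key;
    mpz_init(session_key);
    mpz_urandomb(session_key, st, 256);

    // Two imaginary recipients, each with their own public key (n, e).
    mpz_t n1, e1, n2, e2, wrapped1, wrapped2;
    mpz_inits(n1, e1, n2, e2, wrapped1, wrapped2, NULL);
    demo_keypair(n1, e1, st);
    demo_keypair(n2, e2, st);

    // The session key is encrypted once per recipient: c = k^e mod n.
    mpz_powm(wrapped1, session_key, e1, n1);
    mpz_powm(wrapped2, session_key, e2, n2);

    // The message body itself would be encrypted only once, with a symmetric
    // cipher keyed by session_key (not shown here).
    gmp_printf("Wrapped for recipient 1: %Zx\n", wrapped1);
    gmp_printf("Wrapped for recipient 2: %Zx\n", wrapped2);

    mpz_clears(session_key, n1, e1, n2, e2, wrapped1, wrapped2, NULL);
    gmp_randclear(st);
    return 0;
}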

This way, it is also easy to recognize whether the decryption was successful or not, because if the private key used was incorrect, a 2048-bit or 2047-bit binary number would result, whereas the correct decryption is supposed to reveal a 256-bit key, prepended by another 1792 zeroes. I think that when most crypto-software recognizes the correct number of leading zeroes, the software will assume that what it has obtained is a correct symmetrical key, which could also be called a temporary, or per-message, key.

Now, the reader might think that this subject is relevant to nothing else, but quite to the contrary. A similar scheme exists in many other contexts, such as SSL and Bluetooth Encryption, in which complex algorithms such as RSA are used to generate a temporary, per-session key, or to generate a per-pairing key, which can then be applied in a consistent way by a Bluetooth Chip, if that’s the kind of Bluetooth Chip that speeds up communication by performing the encryption of the actual stream by itself.

What all this means is that even if hardware-encryption is being used, the actual I/O chip is only applying the per-session or per-pairing key to the data-stream, so that the chip can have logic circuits which only ‘know’, or implement, one strong, symmetrical encryption algorithm. The way in which this temporary key is generated could be made complicated to the n-th degree, even using RSA if we like. But then this Math, to generate one temporary or per-pairing key, will still take place on the CPU, and not on the I/O chip.

(Corrected 10/05/2018, 13h45 … )

Continue reading Hybrid Encryption