## Understanding the long long data-type with C++

One of the conventions I was taught when first learning C++ was that, when declaring an integer variable, size designations such as these were to be used:


short int a = 0;  // A 16-bit integer
int b = 0;  // A 32-bit integer
long int c = 0;  // Another 32-bit integer



But since those ancient days, many developments have come to computers, including 64-bit CPUs. And so the way the plain-looking declarations now work is:


short int a = 0;  // A 16-bit integer
int b = 0;  // A 32-bit integer
long int c = 0;  // Either a 32-bit or a 64-bit integer, depending on the platform's data model
long long d = 0;  // Almost always a 64-bit integer



It is not so surprising that, even on 32-bit CPUs, modern C++ runtimes will support 64-bit values at least partially. Support for fields longer than the CPU's registers can be achieved using subroutines that treat longer numbers much the way we were taught in Elementary School to perform multiplication, long division, etc. by hand, except that where people were once taught to perform these operations on Base-10 digits, the subroutine can break the longer fields into 32-bit words.

In order to avoid any such confusion, the code is sometimes written in C or C++ with the fixed-width types from the ‘&lt;cstdint&gt;’ (or, in C, ‘&lt;stdint.h&gt;’) header, like so:


#include <cstdint>  // Defines the fixed-width integer types

uint16_t a = 0;
uint32_t b = 0;
uint64_t c = 0;



Putting these variables into the source code poses less of a problem than the need, which sometimes exists, to state literals that can be assigned to them. Hence, a developer on a 64-bit machine might be tempted to write something like this:


uint64_t c = 1L << 32;



Which literally means, ‘Compute a constant expression, in which a literal of type (long) is left-shifted 32 bits.’ The problem is that, if this is compiled on a 32-bit platform, the literal ‘1L’ just stands for a 32-bit long integer, and shifting it left by the full width of its type is undefined behavior. Usually, some type of compiler warning will ensue, about the constant expression being left-shifted beyond the size of the word in question, and in practice the value ‘0’ may result.

If we are to write source code that can be compiled equally well on 32-bit and 64-bit machines, we really need to write:


uint64_t c = 1ULL << 32;



So that the literal ‘1ULL’ starts out with a compatible word size. Hence, the following source code becomes plausible, in that at least it will always compile:

#include <cstdlib>
#include <iostream>

using std::cout;
using std::endl;

int main(int argc, char* argv[]) {

    long int c = 1234567890UL;

    if (sizeof(long int) > 7) {
        c = (long int) 1234567890123456ULL;
    }

    cout << c << endl;

    return 0;
}



The problem that needed to be mitigated was not so much whether the CPU has 32-bit or 64-bit registers at run-time, but rather, that the source code needs to compile either way.

Dirk

## What Is A Plasma?

The blood plasma that exists in Medicine should not be confused with the Plasmas that are defined in Physics, and to which all matter can be converted. In short, a Plasma is what becomes of a gas when its temperature is too hot for it to remain a gas.

The long form of the answer is a bit more complex. In Elementary School, students are taught that there exist three familiar phases of a given substance: Solid, Liquid and Gas. But according to slightly more advanced knowledge in Physics, there is no real guarantee that there will always be exactly these three phases. A gas first results when the thermal agitation between molecules becomes stronger – i.e., their temperature hotter – than the force that holds individual molecules together. At that point, the molecules separate, and a gas results, the physical behavior of which is approximately what one would obtain if a swarm of particles interacted through collisions, but through few other interactions.

Similarly, Liquids form when the molecules are freed from occupying fixed positions, but still don’t expand.

Well, as the degree of thermal agitation (of a Gas) is increased further, first the molecules become separated into atoms, and then the electrons get separated from their nuclei, as a result of ordinary collisions with other atoms. This results in the negative particles – the electrons – following different trajectories than the positive particles – the nuclei. And the result of that is that the collective behavior of the fluid changes from that of a gas.

When a charged particle crosses the lines of force of a magnetic field, a force is generated which is perpendicular to both the velocity vector and the magnetic-field vector. As a result, the particles can travel without restriction along the lines of magnetic force, but their motion at right angles to them is deflected, and becomes helical. Not only that, but the direction in which the paths of the particles become curved is opposite for negative and positive particles.

For this reason, Plasmas can be confined by magnetic fields, except along the lines of the magnetic field. Increasing the strength of an applied field will also cause a Plasma to become compressed, as these helices become narrower.

A good natural example of this type of Plasma is what becomes of the substance of the Sun. Its temperatures are easily hot enough to cause the transition from Gas to Plasma, especially since the temperature inside the Sun is much higher than the temperatures observed at its surface. At 5000K, gases are still possible. But at hundreds of thousands of Kelvin, or at a million Kelvin, the bulk of the Sun’s substance becomes a Plasma.

Now, if the reader is a skeptic who has trouble believing that ‘other phases’ can exist than Solid, Liquid and Gas, there is an example that takes place at lower temperatures, and that involves Oxygen, namely, O2. We’re aware of gaseous O2, as well as the liquid O2 that gets used in rocketry. But as O2 is cooled further, to 54.36K at 1 atmosphere, it solidifies. Thus, it has already demonstrated the 3 phases which we’re taught about in Elementary School. But if we cool already-solid O2 below an even lower temperature, 43.8K at 1 atmosphere, its phase changes again, into yet another phase, which is also a solid one. It’s currently understood that solid O2 has 6 phases. (:1)

At the same time, many fluids are known to exhibit Supercritical Behavior. Most commonly, a fluid which is normally differentiated between Liquid and Gaseous loses this differentiation, because its critical pressure has been exceeded, at temperatures at which fluids are commonly boiled. This has nothing to do with Plasmas, but without any distinction between Liquid and Gaseous, a substance which is ordinarily thought to have three phases – such as water – ends up demonstrating only two: Fluid and Non-Fluid.

So there is no ultimate reason for which matter needs to be in one out of three phases.

(Updated 10/14/2018, 10h25 … )

## How To Compute a Base-Change in Logarithms

One of the problems people may face in Computer Science is a CPU, or a library, only capable of computing a logarithm in one base, say 2, combined with the need to compute a logarithm in another base, say (e). The way to convert is as follows:

c * log2(t) == logb(t)

If (t == b), it follows that:

c * log2(b) == logb(b) == 1

Hence,

c = 1 / log2(b)

Dirk

## A Hypothetical Way to Generate Bigger Random Numbers, Using the GMP Library

In recent days I wrote some Python scripts which generate 1024-bit prime numbers. But the next stage in my own thinking is to try to accomplish the same thing in C++, using the GMP Multi-Precision Library, because GMP seems to be a well-supported and overall favorite C++ Multi-Precision Library. But when I explored this subject further, I noticed something which surprised me:

GMP is still using the ‘Linear Congruential algorithm’ as its main source of pseudo-random numbers. The reason this fact surprises me is that the Linear Congruential algorithm was invented as early as the 1970s, as a cheap way to achieve pseudo-randomness that would be good enough for games to surprise players, but which was never meant to provide crypto-quality random numbers. Actually, back in the 1970s, the registers on which this algorithm was used may have been 16-bit or 32-bit registers, while today they can be 256 bits wide, for which reason a careful and random-looking choice for the two constants is important. In fact, GMP defines the following functions, to initialize a ‘gmp_randstate_t’ object to become a Linear Congruential RNG:

int gmp_randinit_lc_2exp_size (gmp_randstate_t state, mp_bitcnt_t size)

void gmp_randinit_lc_2exp (gmp_randstate_t state, const mpz_t a, unsigned long c, mp_bitcnt_t m2exp)

For people who did not know, the general form of the algorithm is:

m2exp == 2 * size

X := (a * X + c) mod 2^m2exp

The first of the two initializations above uses the ‘size’ parameter, in order to look up in a static, known table, what the ‘ideal’ values for the constants (a) and (c) are, to achieve maximum randomness. The second initialization allows the programmer to specify those constants himself, and poses no restrictions on what ‘m2exp’ will be.

One of the first approaches a cryptographic programmer might want to pursue, in order eventually to generate a prime number, is to read some random bits from the device-file ‘/dev/random’ (on a Linux computer), use the first initialization above to obtain an RNG, and then seed this RNG once from the system-provided random number. With it, the programmer can then suggest both prime candidates, and witnesses to determine whether the candidates are prime, until one prime number is ‘proven’.

But I see potential pitfalls for any programmer who may want to go that route:

• Given that (a) and (c) are to be chosen from a known table, this presents a vulnerability, because a hypothetical attacker against this crypto-system may use these constants to gain knowledge about the internal state of the ‘gmp_randstate_t’ object, and therefore become aware of a limited number of prime numbers that can result, thereby narrowing his attack against eventual public keys, by only trying to prime-factorize, or otherwise decrypt, using the narrowed set of primes.
• Even if the constants (a) and (c) are secure in nature and not themselves hacked, the table presently only extends to a ‘size’ of 128 bits, which will actually mean that the modulus ‘m2exp’ is 2^256. And so ‘the maximum amount of randomness’ – i.e., the Entropy – which even a 2048-bit public-key modulus can achieve will be 256 bits. And this would also mean that the strength of the key-pair is only equivalent to that of a 128-bit, symmetrical AES key, regardless of how complex it is.
• Some programmers might actually want to work with a modulus of 2^512.

At the same time, there are reasons why the obvious solution – just to read all the random bits from the device-file ‘/dev/urandom’ – poses its own problems. One of the reasons is that potentially, 300 (+) prime-number candidates may need to be generated, each of which will be 1024 bits long and tested 200 (+) times, and the quality of the randomness ‘/dev/urandom’ provides under those conditions may be sub-optimal, because that source, too, is pseudo-random, and will only be minimally based on the physically-measured randomness which ‘/dev/random’ represents. And yet, ‘/dev/random’ will typically block if more than ~2048 bits are to be read from it.

I can think of an approach to solving this problem, which may overcome most of the hurdles…

(Updated 10/13/2018, 13h10 … )