Hybrid Encryption

If the reader is the sort of person who sometimes sends encrypted emails to multiple recipients, using the public key of each recipient, he may be wondering why it’s possible for him to specify more than one encryption key, for the same mass-mailing.

The reason this happens is a practice called Hybrid Encryption. If the basis for encryption was only RSA, let’s say with a 2048-bit modulus, then one problem which should become apparent immediately is that not all possible 2048-bit blocks of cleartext can be encrypted. Even if we assume that the highest bit of the modulus is a 1, many less-significant bits will be zeroes, which means that eventually a 2048-bit block will arise that exceeds the modulus. At that point, a value that’s mathematically meaningful only within the modulus gets wrapped around. As soon as we try to encode the number (m) modulo (m), what we obtain is zero, modulo (m).
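This wrap-around is easy to demonstrate with the small, textbook RSA numbers (p = 61, q = 53, e = 17) standing in for a real 2048-bit key; a minimal sketch in Python:

```python
# Toy illustration of the wrap-around, using textbook-sized RSA numbers
# (p = 61, q = 53, e = 17) in place of a 2048-bit modulus.
n, e = 61 * 53, 17                       # n = 3233

m = 123
print(pow(m, e, n) == pow(m + n, e, n))  # True: m and (m + n) encrypt identically
print(pow(n, e, n))                      # 0: the modulus itself encodes to zero
```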

But we know that strong, symmetrical encryption techniques exist, which work on small blocks of data – 128 bits at a time, in the case of AES – which have 256-bit keys, and which are at least as strong as a 2048-bit RSA key-pair.

What gets applied in emails is, that the sender generates a 256-bit symmetrical encryption key – which does not need to be the highest-quality random-number, BTW – and that only this encryption key is encrypted, using multiple recipients’ public keys, once per recipient, but that the body of the email is only encrypted once, using the symmetrical key. Each recipient can then use his private key, to decrypt the symmetrical key, and can then decrypt the message, using the symmetrical key.
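Below is a minimal sketch of that scheme in Python, assuming the third-party ‘cryptography’ package is available; actual OpenPGP software does the equivalent internally, with its own packet formats and its own choice of symmetrical cipher:

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each recipient has his own 2048-bit RSA key-pair.
recipients = [rsa.generate_private_key(public_exponent=65537, key_size=2048)
              for _ in range(3)]

session_key = AESGCM.generate_key(bit_length=256)    # the per-message key
nonce = os.urandom(12)
body = AESGCM(session_key).encrypt(nonce, b"The message body, encrypted once.", None)

# The session key is encrypted once per recipient, under his public key.
wrapped_keys = [
    priv.public_key().encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    for priv in recipients
]

# Any single recipient recovers the session key, then decrypts the body.
recovered = recipients[0].decrypt(
    wrapped_keys[0],
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
plaintext = AESGCM(recovered).decrypt(nonce, body, None)
```

Only the 32-byte session key is wrapped once per recipient; the possibly long body is encrypted a single time.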

This way, it is also easy to recognize whether the decryption was successful or not, because if the private key used was incorrect, a 2048- or 2047-bit binary number would result, while a correct decryption is supposed to reveal a 256-bit key, preceded by another 1792 zeroes. I think that if the crypto-software recognizes the correct number of leading zeroes, it will assume that what it has obtained is a correct symmetrical key, which could also be called a temporary, or per-message, key.
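Expressed as a hypothetical helper for raw (textbook) RSA – real software uses a structured padding scheme such as PKCS#1 or OAEP, but the leading-zeroes idea is the same:

```python
def extract_session_key(decrypted_int, modulus_bytes=256, key_bytes=32):
    # Convert the ~2048-bit decryption result to a fixed-length block.
    block = decrypted_int.to_bytes(modulus_bytes, "big")
    # A correct decryption should show 1792 leading zero bits (224 zero bytes).
    if any(block[: modulus_bytes - key_bytes]):
        raise ValueError("decryption failed: leading zeroes are missing")
    return block[-key_bytes:]            # the 256-bit per-message key
```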

Now, the reader might think that this subject is relevant to nothing else, but quite the contrary. A similar scheme exists in many other contexts, such as SSL and Bluetooth encryption, by which complex algorithms such as RSA are used to generate a temporary, per-session key, or a per-pairing key, which can then be applied in a consistent way by a Bluetooth chip – if that’s the kind of Bluetooth chip that speeds up communication by performing the encryption of the actual stream by itself.

What all this means is that even if hardware-encryption is being used, the actual I/O chip only applies the per-session or per-pairing key to the data-stream, so that the chip can have logic circuits which only ‘know’, or implement, one strong, symmetrical encryption algorithm. The way in which this temporary key is generated could be made complicated to the n-th degree, even using RSA if we like. But then this Math, to generate one temporary or per-pairing key, will still take place on the CPU, and not on the I/O chip.

(Corrected 10/05/2018, 13h45 … )

(As of 06/10/2018 : )

Now, I suppose that with the Bluetooth Specification, an issue can arise by which older, obsolete protocols (and therefore, paired devices) use DES encryption as their symmetrical method, while we’d want our Bluetooth implementation to be compatible with the older as well as the newer specifications.

This problem can easily be solved, firstly by recognizing that any Bluetooth chip is still capable of transmitting and receiving data without applying any encryption to it, exactly as it is sent and received by the CPU.

Then we’d compare the two encryption methods to see which one consumes more computational capacity, and we’d discover that, by far, AES does. And so what we’d do next is design the actual I/O chip to hardware-encrypt AES, while leaving it up to the CPU to do any DES encryption.


 

Some people ask, ‘Is AES-128 secure enough, or should we use AES-256?’ My answer would be that they are both essentially secure, and that for real-time purposes, AES-128 might be better.

I’ve found that switching my server to AES-256 slows down some of my clients, but maybe only because these clients still need to do the encryption on the CPU. And if a Web-browser is slowed down too much, there is a risk of socket-timeouts, specifically in the case of Web-sockets and SSL / TLS.

Generally, SSL and TLS are still implemented on the CPU, while some WiFi Chips now offer hardware-encryption, for the connection to a secure WiFi Access-Point.

OTOH, in the case of encrypted emails, the symmetrical encryption is as much of a long-term consideration as the permanent RSA keys are. Further, if the encryption of an email we’re sending is slow, there is no harm in that. So for such deliberate, durable uses, I’d choose AES-256.

 

(Update 09/13/2018, 22h45 : )

I suppose that there’s another observation to add about this subject. The straight numerical approach to exponentiating a message – repeatedly squaring it, and optionally multiplying the result by it – in a 2048-bit modulus, will only work if the message is coprime with the modulus.

But in the case of RSA encryption, if the modulus is in fact 2048 bits long, then I expect it’s also the product of two 1024-bit prime numbers, more or less. And if the smallest prime factor is in fact 1024 bits long, then any 256-bit or 128-bit message will be coprime with it. In that case, any power of that message will also be coprime with it.

I suppose that some laxity might even exist, by which the smallest prime-factor of a 2048-bit modulus is allowed to be a few bits shorter than 1024 bits. But it would then still be considerably longer than 256 bits.
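This can be checked at toy scale, with two small primes standing in for the ~1024-bit factors: every non-zero message shorter than the smallest factor is necessarily coprime with the modulus.

```python
from math import gcd

p, q = 251, 241           # stand-ins for two ~1024-bit prime factors
n = p * q
for m in range(1, q):     # every non-zero "short" message below both factors
    assert gcd(m, n) == 1 # always coprime with the modulus
print("all short messages are coprime with", n)
```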

(Erratum 10/05/2018, 13h45 : )

I have just learned what common methods are actually used to generate 1024-bit prime numbers. One such method is called Miller-Rabin / Antoine Prudhomme. And the gist of it is that candidates can essentially be generated directly, as just pseudo-random numbers. These candidates can be altered and put into whatever range is needed, after which a probabilistic test of primality is carried out on them. I.e., it’s feasible to set the MSB to (1), to make sure of a 1024-bit prime, as well as to set the LSB to (1), just to make sure that an odd number is being tested, before the test of primality is undertaken. If that test fails, a new random number is generated. If it succeeds, the probability of the number being prime is high, but not certain.

I had underestimated how high the probability actually is, of any given 1024-bit number being prime. Apparently, it’s just (1 / ln(n)), where (n) is the maximum value sought – which, for (n = 2^1024), works out to roughly 1 in 710.
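As a rough sketch of the procedure just described, written against Python’s built-in integers and its ‘secrets’ module rather than against GMP – the function names are my own:

```python
import secrets

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):  # quick trial division
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2      # random witness in [2, n - 2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                      # definitely composite
    return True                               # probably prime

def random_prime(bits=1024):
    """Generate pseudo-random candidates until one passes the test."""
    while True:
        candidate = secrets.randbits(bits)
        candidate |= (1 << (bits - 1)) | 1    # force the MSB and LSB to (1)
        if is_probable_prime(candidate):
            return candidate
```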

One implication this has for our practical software, which I was not aware of before, is that the capacity must exist ‘simply to generate 300 (+) pseudo-random sequences of 1024 bits’. There exists more than one approach to accomplish this. If the multi-precision C++ library ‘GMP’ is being used, then likely, an initial random-number seed is read from ‘/dev/random’, and all the random values after that could be pseudo-random numbers based on that one initial seed.

This could have negative consequences, if a cryptographic attacker could obtain some sort of knowledge about the inner workings of the random-number-generator states. For example, if the ‘Linear Congruential’ algorithm was being used to initialize such a ‘state’ object (X := (aX + c) mod 2^length), then known or altered tables of what values of (a) and (c) to use could completely compromise the process.
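For reference, a linear congruential generator is only a few lines of code; the sketch below uses the classic textbook constants (a = 1103515245, c = 12345, modulo 2^31), not GMP’s much longer tables:

```python
class LCG:
    """The recurrence X = (a*X + c) mod 2^m, with small textbook constants."""
    def __init__(self, seed, a=1103515245, c=12345, m_exp=31):
        self.a, self.c, self.mask = a, c, (1 << m_exp) - 1
        self.state = seed & self.mask

    def next(self):
        self.state = (self.a * self.state + self.c) & self.mask
        return self.state

gen = LCG(seed=42)
print([gen.next() for _ in range(5)])  # deterministic: anyone knowing (a, c, seed) can reproduce it
```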

Some form of the Linear Congruential algorithm has been known since the 1970s, when it was used with much shorter word-sizes, to create pseudo-random numbers for some early games. But the fact has caught me off-guard, that if the constants (a) and (c) are well-chosen, it can actually lead to high-quality results.

The maximum ‘length’ for which GMP has tables of (a) and (c), to use with the Linear Congruential algorithm, is 256 bits. This is invoked by passing in a ‘size’ parameter of only (128). If this was used to generate 1024-bit primes, then the maximum ‘amount of randomness’, i.e., entropy, of the resulting key-pairs would be 256 bits, assuming the process was not otherwise compromised. That would then also be the maximum key-strength by which to compare a 2048-bit-modulus RSA key-pair to an AES key. A 256-bit AES key counts as ‘maximally secure’ by today’s standards.

Also, GMP allows the seeding of random-number states to be hierarchical, if the programmer so chooses. I.e., the prime-number candidates could all be first-level pseudo-random numbers from one state, but the witnesses used to test the primality of each one could effectively be re-seeded from the candidate value, thus forming a second level of pseudo-random number generation. But this idea is just speculative on my part.

Unfortunately, however, the ‘Mersenne Twister’ algorithm is “Completely Unsuitable for Cryptography” (see link below), which actually means that the strongest algorithm which the GMP library offers for this application is still the Linear Congruential algorithm.

When programming students use the random-number generation built into Python, that generator is non-blocking, and may in fact be seeded from ‘/dev/urandom’. Even though this device-file is also a pseudo-random generator, based on ‘/dev/random’, I see no guarantees about the quality of the random numbers that would be generated, and no guarantees that Python uses it.
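For what it’s worth, current Python 3 does separate the two cases: the default ‘random’ module is a Mersenne Twister, while the ‘secrets’ module and ‘os.urandom()’ draw from the operating system’s cryptographic generator (the pool behind ‘/dev/urandom’ on Linux). A short sketch:

```python
import os
import random
import secrets

# Mersenne Twister: fast and non-blocking, but predictable once enough of
# its output has been observed -- not suitable for key generation.
weak = random.getrandbits(256)

# The secrets module and os.urandom() use the OS's cryptographic generator.
strong = secrets.randbits(256)
raw = os.urandom(32)          # 32 bytes == 256 bits

print(hex(weak), hex(strong), raw.hex(), sep="\n")
```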


 

(As of 09/13/2018, 22h45 : )

Now, encryption schemes exist that are not RSA, but that use 2048-bit-message exponentiation. And those schemes actually have to apply their own constraints to make sure that the exponentiation will still work. And in cases where the exponentiation might not work, such as with El-Gamal encryption, an alternative to exponentiation must also be provided.

(Update 09/22/2018, 16h50 : )

There exists a procedure in cryptography which is called “Padding”. What it involves is filling, with random bits, positions that would otherwise just be fields of zeroes. After decryption the padding bits are ignored, but in certain cases they make the task of breaking the encryption harder, because at certain stages of that task, the cryptanalyst will just perceive randomness, with no clue as to the eventual, meaningful data.

Well, if an attempt is made to pad the fields of zeroes that belong to the public-key message, but not to the symmetrical key, then firstly, a small number of most-significant bytes will nevertheless need to remain zeroes – let’s say 4 bytes out of 256 – just to make sure that the message is actually smaller than the modulus in any case. But in addition, the goal should be worked towards explicitly, of assuring that the resulting message is coprime with the modulus. If this check fails, then the attempt to pad the message needs to be repeated. In most cases the check will not fail, simply because a randomized 2016-bit message has a low probability of sharing a 1024-bit prime factor with the modulus. But the probability is not zero.

In the case of RSA encryption to a public key, the prime factorization is never known to the sender, so actually trying to divide by one of the prime factors is not feasible. But other ways – such as computing the GCD of the padded message with the modulus – are feasible.

I should point out that if the nearly-random bit-field is ever not coprime, in the case of RSA, we’ve also just broken the RSA key, by sheer accident. Whether to proceed with the intention of just sending an encrypted message, or to switch goals and follow through with breaking the encryption, is up to the whim of the programmer. But this observation just illustrates how low the probability ever was, of the message actually not being coprime.
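A minimal sketch of that pad-and-retry loop, with hypothetical names and parameters – real systems use a standardized scheme such as OAEP rather than anything hand-rolled:

```python
import secrets
from math import gcd

def pad_session_key(key_int, modulus, key_bits=256, zero_bytes=4):
    """Pad the high bits of a session key with random bits, retrying until
    the padded message is coprime with the (2048-bit) modulus."""
    mod_bits = modulus.bit_length()                  # e.g. 2048
    pad_bits = mod_bits - key_bits - 8 * zero_bytes  # e.g. 1760 random bits
    while True:
        padding = secrets.randbits(pad_bits)
        message = (padding << key_bits) | key_int    # key stays in the low 256 bits
        g = gcd(message, modulus)
        if g == 1:
            return message
        # Astronomically unlikely: g is now a prime factor of the modulus,
        # i.e. we have just factored the key by accident.
        print("accidentally factored the modulus; factor =", g)
```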

(Update 09/24/2018, 11h30 : )

Things Not To Do:

It would be convenient if the thesis was provably true, that adding two numbers which are both coprime with a third, results in a sum that is also coprime with that same, third number. But the closest theorem that I can find merely states that the sum of two numbers is coprime with either of them, iff the two numbers were coprime with each other. For me to state that this satisfies my needs would be equivalent to Klangverstand – accepting a claim just because it sounds right.

If this was true, then a way to proceed that requires far fewer computations than the Euclidean Algorithm I linked to above, would be just to make sure that the bits which are being added, to pad the message, are themselves coprime with the modulus. And this in turn could be accomplished by giving our software a ‘golden prime number’ which is constant, which is never used to encrypt anything, but which is itself 1040 bits long. Because the number that we would be adding is maximally 2048 bits long, this number would only have other prime factors 1008 bits long or shorter.

To left-shift such a number by 256 bits simply means to multiply it by 2^256. Therefore, a pseudo-random number 720 bits long could be generated, simply multiplied by the golden prime, and then left-shifted by 256 bits, to achieve 2016-bit padding. Even though the resulting bits would then contain a large prime factor, they would be ‘random enough’ to act as padding.

However, I cannot find any references which suggest that the underlying assumption is true – the one which I just wrote started as Klangverstand. In fact, I can trivially prove that it is not true:

3 is coprime with 10.

2 is coprime with 10.

2 + 3 = 5

5 is not coprime with 10.


 

It might work if we were to find a number (n ∈ {128, 256}) such that (2^n + 1) is prime, or, if this term fails to be prime, such that it’s at least coprime with the modulus! If we knew this about (n), then multiplying the symmetrical key by this term several times would end up repeating the symmetrical key somewhere in the padded bits – though not in the 128 or 256 most-significant bits, which should also be padded – while preserving the symmetrical key in the 128 or 256 least-significant bits. And then, the result would be just as coprime with the modulus as the symmetrical key was to begin with, and as this term was.

If we wanted to apply this term several times, then this would eventually require summing the most-significant bits being padded, not just appending them.

But alas, if the attacker knows that this is being done, they can also just compute (2^n + 1)^6, raise the result to the power of 65537 in the known modulus, and compute the multiplicative inverse of that, in the known modulus – not needing to do so in the modulus that would be the totient product. Then, the attacker could multiply the cypher-text by the result, and effectively undo what had been done in this effort to pad, before encryption. The attacker would still have an encrypted message, but one which, if decrypted, would no longer be padded!
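That multiplicative malleability is easy to demonstrate with toy numbers. The sketch below uses the textbook key (p = 61, q = 53, e = 17) and a small stand-in padding term, and assumes Python 3.8+ for the modular inverse via pow(x, -1, n):

```python
p, q = 61, 53
N, e, d = p * q, 17, 2753            # textbook RSA key-pair

m = 42                               # stands in for the symmetrical key
f = 2**8 + 1                         # stands in for the (2^n + 1) padding term

padded = (m * pow(f, 6, N)) % N      # "pad" by multiplying with f^6
c = pow(padded, e, N)                # encrypt the padded message

# The attacker knows N, e and f, so they can strip the padding factor:
strip = pow(pow(f, 6 * e, N), -1, N) # inverse of (f^6)^e, computed mod N only
c_unpadded = (c * strip) % N

assert c_unpadded == pow(m, e, N)    # the ciphertext of the *un*padded key
```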


 

Something which can be done:

The goal of the padding could be changed to a compromise. Instead of wanting to fill 7/8 or 15/16 of the message with random bits, the goal could be to fill only 1/2 the message, minus 16 bits, with random bits. I.e., if the modulus is 2048 bits long and each prime factor ostensibly 1024 bits long, then the goal might be to pad the message up to a length of 1008 bits. While this is not as strong as padding most of the message, it will preserve coprimality with the modulus, and require far fewer computations than the Euclidean Algorithm would.
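One way to read that compromise, sketched with hypothetical names: keep the entire padded message shorter than either ~1024-bit prime factor, so that coprimality never needs to be tested at all.

```python
import secrets

def pad_short(key_int, key_bits=256, total_bits=1008):
    """Place random padding above the session key, keeping the whole message
    below 1008 bits.  Any non-zero value smaller than the smallest prime
    factor of the modulus is automatically coprime with the modulus."""
    pad_bits = total_bits - key_bits           # 752 random bits above the key
    padding = secrets.randbits(pad_bits)
    message = (padding << key_bits) | key_int
    assert message.bit_length() <= total_bits  # stays well below 1024 bits
    return message
```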

Dirk

 
