Hash Tables

There is already a lot of published work detailing the numerous established ways to implement hash tables. The Wikipedia article on the broad class of hash tables is a good starting point for becoming familiar with them.

But just to make this interesting, I undertook the mental exercise of designing a hash table which, when grown, does not require that the contents of the original hash table be rearranged in any way. Instead, the expansion is achieved simply by appending zeroes to the original hash table, and then changing one of the parameters with which the new, bigger hash table is accessed.

My exercise led me to the premise that each size of my hash table be a power of two, and it seems reasonable to start with 2^16 entries, i.e. 65536 entries, giving the hash table an initial Power Level (L1) of 16. There would be a pointer, which I name P, to a hash-table position, and two basic Operations to arrive at P:

  1. The key can be hashed, by multiplying it by the highest prime number smaller than 2^L1 – this would be the prime number 65521 – within the modulus of 2^L1, yielding Position P0.
  2. The number A, which is intentionally some power of two smaller than 2^L1, can be added to the current Position, within the modulus of 2^L, successively yielding P1, P2, P3, etc. (both Operations are sketched just below).
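
As a minimal sketch of these two Operations in Python – the constants follow my choices above, and the function names are only placeholders:

    L1 = 16                     # initial Power Level
    MODULUS = 1 << L1           # 2^L1 = 65536 positions
    PRIME = 65521               # the highest prime below 2^L1
    A = 1 << 14                 # the probe step, a power of two

    def hash_p0(key):
        # Operation (1): multiply by the prime, within the modulus
        return (key * PRIME) % MODULUS

    def next_p(p):
        # Operation (2): add A, within the modulus
        return (p + A) % MODULUS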

Because the original multiplier is a prime number lower than 2^L1, while the latter – the initial modulus of the hash table – is a power of two, the multiplier is coprime to the modulus, and consecutive key-values will only lead to a repetition in P0 after 2^L1 key-values. But, because actual keys are essentially random, a repetition of P0 is possible by chance, and the resolution of the resulting hash-table collision is the main subject in the design of any hash table.

Default-Size Operation

By default, each position of the hash table contains a memory address, which is either NULL, meaning that the position is empty, or which points to an openly-addressable data structure, from which the exact key can be retrieved again. When this retrieved key does not match the exact key used to perform the lookup, Operation (2) above needs to be performed on P, and the lookup attempt repeated.

But, because A is a power of 2 which also divides the modulus 2^L, we know that the values of P will repeat themselves exactly, and how small A is will determine how many lookup attempts can be undertaken before P repeats; this will also determine how many entries can be found at any one bucket. Effectively, if A = 2^14 and L = 16, then 2^L / A = 2^2, so that a series of 4 positions would form the maximum bucket size. If none of those entries is NULL, the bucket is full, and if an attempt is made to insert a new entry into a full bucket, the hash table must be expanded.
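
Continuing the sketch above, a lookup might look as follows:

    BUCKET_SIZE = MODULUS // A      # 2^L / A = 2^2 = 4 positions per bucket

    def lookup(table, key):
        # 'table' is a list of MODULUS entries, each either None (NULL)
        # or a (key, value) pair standing in for the data-structure
        # from which the exact key can be retrieved.
        p = hash_p0(key)
        for _ in range(BUCKET_SIZE):
            entry = table[p]
            if entry is not None and entry[0] == key:
                return entry[1]
            p = next_p(p)           # a NULL position does not end the scan
        return None                 # the key is not in the table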

Finding the value for a key will predictably require that all the positions following from one value of P0 be read, even if some of them are NULL, until an iteration of P reveals the original key.

It is a trivial fact that eventually, some of the positions in this series will have been written to by other buckets, because keys, again, are essentially random, thus leading to their own values of P which coincide with the current (P0 + nA). But, because the exact key will be retrieved from non-NULL addresses before those are taken to have arisen from the key being searched for, those positions will merely represent a performance loss, not an accuracy loss.

Because it is only legal for 1 key to lead to a maximum of 1 value, as is the case with all hash tables, before an existing key can be set to a new value, the old entry must be found and deleted. In order for this type of hash table to enforce that policy, its own operation to insert a key would need to be preceded by an operation to retrieve it, if only to return the appropriate error-code upon a success in retrieving it. During this precursory scan, the existence and position of the first NULL address can also be stored, so that the actual insertion of the new key can take place in one step.
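
In sketch form, the precursory scan and the insertion combine as follows (the error-codes are placeholders of mine):

    def insert(table, key, value):
        p = hash_p0(key)
        first_null = None
        for _ in range(BUCKET_SIZE):        # the precursory scan
            entry = table[p]
            if entry is None:
                if first_null is None:
                    first_null = p          # remember the first NULL address
            elif entry[0] == key:
                return "ERR_KEY_EXISTS"     # the old entry must be deleted first
            p = next_p(p)
        if first_null is None:
            return "ERR_BUCKET_FULL"        # the table must be expanded
        table[first_null] = (key, value)    # the insertion proper, in one step
        return "OK"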

Any defined operation to delete a key would follow the same logic – it must be found first, and if not found, cause the appropriate error-code to be returned. Any hash table with more than one entry for the same key is corrupted, but can conceivably be cleaned if the program that uses it sends multiple commands to delete the same key, or one command to purge it…
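
The corresponding sketch for deletion:

    def delete(table, key):
        p = hash_p0(key)
        for _ in range(BUCKET_SIZE):
            entry = table[p]
            if entry is not None and entry[0] == key:
                table[p] = None             # the position becomes NULL again
                return "OK"
            p = next_p(p)
        return "ERR_KEY_NOT_FOUND"          # placeholder error-code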

Finding the Multiplicative Inverse within a Modulus

The general concept in RSA cryptography is that there exists a product of two prime numbers, call them (p) and (q), such that

(T ^ E) ^ D mod (p)(q) = T

where,

T is an original plaintext document,

E is an encryption key,

D is the corresponding decryption key,

E and D are successively-applied exponents of T,

(p)(q) is the original modulus.

A famous Mathematician named Euler found that, in order for exponent operations on T to be consistent in the modulus (p)(q), the exponents’ product itself must be consistent in the modulus (p-1)(q-1). This latter value is also known as Euler’s totient of (p)(q).
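
A toy example with small primes makes this concrete. Assuming p = 5 and q = 11, so that (p)(q) = 55 and (p-1)(q-1) = 40, the exponents E = 3 and D = 27 are consistent, because their product 81 leaves a remainder of 1 in the modulus 40:

    p, q = 5, 11                          # toy primes, far too small for real use
    n, phi = p * q, (p - 1) * (q - 1)     # 55 and 40
    E, D = 3, 27                          # (E * D) % phi == 1, since 81 = 2*40 + 1
    T = 7                                 # a toy plaintext
    assert pow(pow(T, E, n), D, n) == T   # (T ^ E) ^ D mod (p)(q) = T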

There is a little trick to this form of encryption, which I do not see explained often, but which is important. Since E is published as part of the public key, a relatively small prime number is used for it in practice, that being (2^16 + 1), or 65537. D could be a 2048-bit or a 4096-bit exponent. The public key consists of the exponent E and the modulus (p)(q) packaged together.

Thus, when the key-pair is created, (p) and (q) must be known separately, so that D can be computed. Then (p-1)(q-1), which hopefully never touched the hard drive, can be discarded after D has been saved, along with (p)(q) again, this time forming the private key.

Therefore, D must be the multiplicative inverse of E in the modulus of (p-1)(q-1). But how does one compute that?
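
The standard tool for this is the extended Euclidean algorithm. As a minimal sketch in Python, with function names of my own choosing:

    def extended_gcd(a, b):
        # Returns (g, x, y) such that a*x + b*y == g == gcd(a, b).
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def mod_inverse(e, m):
        g, x, _ = extended_gcd(e, m)
        if g != 1:
            raise ValueError("not invertible in this modulus")
        return x % m

    # e.g. D = mod_inverse(65537, (p - 1) * (q - 1))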

The Original RSA Trapdoor Function

I once ran into laypeople who were able to understand what a modulus is – so that the output of a series of computations never equals or exceeds that modulus – and who were able to understand what the exponent function is, but who were incredulous when I told them that it is possible to compute the result of raising a 2048-bit number to a 2048-bit exponent, on the basis that the result only needs to fit inside a 2048-bit modulus.

I believe that the most common way in which this is done is based on the premise that two 2048-bit numbers can be multiplied, and the result brought back down to a 2048-bit modulus. This notion can be extended to mean that a 2048-bit number can also be squared, and the result written in the 2048-bit modulus…

Well, to achieve the exponent function, one needs a base-register, an accumulator-register, and the exponent. The accumulator is initialized to the value (1).

If the exponent has 2048 bits, then the operation can be repeated 2048 times:

  1. Square the value in the accumulator.
  2. Left-Shift the Most-Significant Bit of the Exponent out, into a bit-register that can be examined.
  3. If that bit-register is equal to (1) and not (0), multiply the base-register into the accumulator an extra time.

Because the Most-Significant Bit of the Exponent was shifted out first, its being (1) would mean that the value in the Accumulator was multiplied by the Base earliest, so that this factor of the Accumulator is squared by the highest number of subsequent iterations, thus effectively raising the Base to the powers 1, 2, 4, 8, 16, 32, … 2^2047, in combinations.
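
In Python, whose arbitrary-precision integers hide the register widths, the three steps above might be sketched like so:

    def mod_pow(base, exponent, modulus, bits=2048):
        acc = 1                             # the Accumulator, initialized to (1)
        for i in reversed(range(bits)):
            acc = (acc * acc) % modulus     # 1. square the Accumulator
            if (exponent >> i) & 1:         # 2. examine the next bit, MSB first
                acc = (acc * base) % modulus    # 3. multiply an extra time
        return acc

    # Agrees with Python's built-in pow(base, exponent, modulus)
    # for any non-negative exponent below 2^bits.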

Dirk