The Advantages of using a Slab Allocator

When people take their first C programming courses, they are taught about the standard allocation function named ‘malloc()‘, and when first learning C++, they are taught about its standard operator, named ‘new‘.

These allocators work on the assumption that the program is running in user space, and they may not always be efficient at allocating smaller chunks of memory. They assume that a standard method of managing the heap is in place, in which the heap of any one process is a part of that process’s memory-image, and is partially managed by the kernel.

Not only that, but when we tell either of these standard allocators to allocate a chunk of memory, the allocator takes note of the size requested, allocates a slightly larger chunk, and stores a binary representation of that size in a header at the beginning of the chunk. The pointer it returns is offset just past that header, so that it points directly to the memory which the programmer can use, even though the allocated chunk is larger, and is preceded by a binary representation of its own size. That way, when the command is given to deallocate, all the deallocation-function needs to receive, in principle, is a pointer to the allocated memory; from there it can step back to the header that was inserted, to derive how much memory to release.
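Below is a toy sketch in C of that size-header scheme. It is not how ‘malloc()‘ is actually implemented in any particular C library, only an illustration of the bookkeeping described above, with ‘toy_malloc()‘ and ‘toy_free()‘ being invented names:

    /* Toy illustration of a size-header allocator (not a real malloc). */
    #include <stdio.h>
    #include <stdlib.h>

    struct chunk_header {
        size_t size;                 /* the size originally requested */
    };

    static void *toy_malloc(size_t size)
    {
        /* Allocate room for the header plus the caller's data. */
        struct chunk_header *h = malloc(sizeof(*h) + size);
        if (!h)
            return NULL;
        h->size = size;
        return h + 1;                /* hand back the memory just past the header */
    }

    static void toy_free(void *ptr)
    {
        if (!ptr)
            return;
        /* Step back over the header to find the start of the real chunk. */
        struct chunk_header *h = (struct chunk_header *)ptr - 1;
        printf("releasing a chunk of %zu bytes\n", h->size);
        free(h);
    }

    int main(void)
    {
        int *p = toy_malloc(sizeof(int));
        if (!p)
            return 1;
        *p = 42;
        toy_free(p);                 /* only the pointer is needed to free it */
        return 0;
    }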

I suppose that one conclusion to draw from this is that, even though it looks like a good exercise to teach programming students, the exercise of allocating a single 32-bit or 64-bit object (i.e., a 4-byte or an 8-byte object, such as an integer), just to obtain an 8-byte pointer to that integer, is actually not a good one. In addition to the 4 or 8 bytes requested, a header is always being allocated as well, which may add 4 bytes if the maximum allocatable size of one chunk is a 32-bit number, or 8 bytes if it is a 64-bit number.
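On glibc-based systems, the extension ‘malloc_usable_size()‘ hints at this overhead: even a 4-byte request is backed by a noticeably larger chunk, and the allocator’s bookkeeping sits in front of that chunk again. A small sketch, assuming glibc:

    /* Show how much memory actually backs a 4-byte request (glibc only). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <malloc.h>     /* malloc_usable_size() is a glibc extension */

    int main(void)
    {
        int *p = malloc(sizeof(int));            /* request 4 bytes */
        if (!p)
            return 1;
        printf("requested %zu bytes, chunk provides %zu usable bytes\n",
               sizeof(int), malloc_usable_size(p));
        free(p);
        return 0;
    }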

Additionally, these allocators assume the support of the kernel for a user-space process, the latter of which has a heap. On 64-bit systems with virtual memory, when the heap is extended, the user-space application eventually touches a virtual address in the new region that is not yet backed by a physical page, which intentionally results in a page-fault, and stops the process. The kernel then needs to examine why the page-fault occurred, and since this was a legitimate reason, needs to set up the physical page-frame behind that virtual address, before the (restarted) user-space process continues, with the newly usable address having been returned to it via the usual methods for returning values.
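From user space, that kernel support can be seen most directly through ‘mmap()‘, one of the system calls which modern allocators use behind the scenes when they need more address space. The following sketch, which assumes a Linux or other POSIX system, maps an anonymous region; the kernel records the mapping right away, but a physical page is typically only wired up when the region is first touched, via the page-fault path just described:

    /* Ask the kernel for anonymous memory; the first write triggers the
     * page fault that actually backs the page with a physical frame. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 1 << 20;        /* ask for 1 MiB of anonymous memory */
        unsigned char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        region[0] = 1;               /* first touch: the (minor) page fault
                                        is taken and resolved here         */
        printf("first byte written at %p\n", (void *)region);
        munmap(region, len);
        return 0;
    }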

And so, means also needed to exist by which a kernel can manage memory more efficiently, even under the assumption that the kernel does not have the sort of heap that a user-space process does. One main mechanism for doing so is to use a slab allocator. It will allocate large numbers of small chunks, without requiring as much overhead to do so as the standard user-space allocators do. In kernel-space, these slabs are the main replacement for a heap.
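As a rough illustration of the idea (and emphatically not the kernel’s actual implementation), here is a miniature, user-space slab cache in C: one ‘slab’ holds a fixed number of equally-sized objects, free objects are chained through an embedded free-list, and no per-object size header is needed, because every object in the cache has the same size:

    /* A miniature, user-space sketch of the slab idea. */
    #include <stdio.h>
    #include <stdlib.h>

    #define OBJ_SIZE      32         /* every object in this cache is 32 bytes */
    #define OBJS_PER_SLAB 128        /* objects carved out of one slab         */

    struct slab_cache {
        unsigned char *slab;         /* one contiguous block of equal objects  */
        void *free_list;             /* head of the chain of free objects      */
    };

    static int cache_init(struct slab_cache *c)
    {
        c->slab = malloc((size_t)OBJ_SIZE * OBJS_PER_SLAB);
        if (!c->slab)
            return -1;
        c->free_list = NULL;
        /* Thread a free-list through the objects themselves. */
        for (int i = 0; i < OBJS_PER_SLAB; i++) {
            void *obj = c->slab + (size_t)i * OBJ_SIZE;
            *(void **)obj = c->free_list;
            c->free_list = obj;
        }
        return 0;
    }

    static void *cache_alloc(struct slab_cache *c)
    {
        void *obj = c->free_list;
        if (obj)
            c->free_list = *(void **)obj;   /* pop the head of the free list */
        return obj;
    }

    static void cache_free(struct slab_cache *c, void *obj)
    {
        *(void **)obj = c->free_list;       /* push back onto the free list  */
        c->free_list = obj;
    }

    int main(void)
    {
        struct slab_cache cache;
        if (cache_init(&cache) != 0)
            return 1;
        void *a = cache_alloc(&cache);
        void *b = cache_alloc(&cache);
        printf("allocated %p and %p, %d bytes apart\n",
               a, b, (int)((unsigned char *)a - (unsigned char *)b));
        cache_free(&cache, a);
        cache_free(&cache, b);
        free(cache.slab);
        return 0;
    }

The real kernel slab allocators add per-CPU caching, object constructors and hardware-cache alignment on top of this basic scheme, but the central saving is the same: many small objects, with no per-object size header.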

(Updated 06/20/2017 … )

Continue reading The Advantages of using a Slab Allocator

And Now, Memcached Contributes to This Site Again!

According to this earlier posting, I had just uninstalled a WordPress plugin from my server which used the ‘memcached‘ daemon as a back-end, to cache blog content, namely the content most frequently requested by readers. My reason for uninstalling that one was the warning from my WordFence security suite, that the plugin had been abandoned by its author.

Well, it’s not as if everything was a monopoly. Since then, I have found another caching plugin that again uses the ‘memcached‘ daemon. It is now up and running.

(Screenshot Updated 06/19/2017 : )

(Screenshot: memcached_7)

One valid question which readers might ask would be, ‘Why does memcached waste a certain amount of memory, and then allocate more, even if all the allocated memory is not being used?’

(Posting Updated 06/21/2017 … )

Continue reading And Now, Memcached Contributes to This Site Again!

Memcached no longer contributes to how this site works… For the moment.

One of the facts which I had mentioned some time ago was that, on my Web-server, I have a daemon running which acts as a caching mechanism for any client-programs that have the API to connect to it, and that daemon is called ‘memcached‘.

And, in order for this daemon to speed up the retrieval, specifically, of the blog-entries that reside in this blog, and that by default need to be retrieved from a MySQL database, I had also installed a WordPress.org plugin named “MemcacheD Is Your Friend”. This WordPress plugin added a PHP script to the PHP scripts that generally generate my HTML, and in certain cases it accelerated doing so, by avoiding the MySQL database look-up.

In general, ‘memcached‘ is a process which I can install at will, because my server is my own computer, and which stores Key-Value pairs. Certain keys correspond to WordPress look-ups by name, so that the most recent values resulting from those keys were being cached on my server (not on your browser), which in turn could make the retrieval of the most-commonly-asked-for postings faster, for readers and their browsers.
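The concept can be illustrated with a few lines of C, using the ‘libmemcached‘ client library. This is not the plugin’s actual code, and the key name below is hypothetical; it simply shows a value being stored under a key, and later fetched back by that key, without touching the database (the example links with ‘-lmemcached‘):

    /* Minimal key-value round-trip against a local memcached daemon. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>
    #include <libmemcached/memcached.h>

    int main(void)
    {
        memcached_st *memc = memcached_create(NULL);
        memcached_server_add(memc, "localhost", 11211);   /* default port */

        const char *key   = "wp_post_42";                 /* hypothetical cache key */
        const char *value = "<p>Rendered posting...</p>";

        /* Cache the value for 300 seconds. */
        memcached_return_t rc = memcached_set(memc, key, strlen(key),
                                              value, strlen(value), 300, 0);
        if (rc != MEMCACHED_SUCCESS)
            fprintf(stderr, "set failed: %s\n", memcached_strerror(memc, rc));

        /* Later: look the value up by key, instead of querying MySQL. */
        size_t value_len = 0;
        uint32_t flags = 0;
        char *cached = memcached_get(memc, key, strlen(key),
                                     &value_len, &flags, &rc);
        if (cached) {
            printf("cache hit: %.*s\n", (int)value_len, cached);
            free(cached);
        }

        memcached_free(memc);
        return 0;
    }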

Well, just this morning, my WordFence security suite reported the sad news to me, that this WordPress plugin has been “Abandoned” by its developer, who for some time had been doing no maintenance or updates to it, and that its use is now advised against.

If the plugin has in fact been abandoned in this way, it becomes a mistake for me to keep using it for two reasons:

  1. Updates to the core files of WordPress could create compatibility issues, which only the upkeep of the plugin by its developer could remedy.
  2. Eventually, security flaws can exist in its use, which hackers find, but which the original developer fails to patch.

And so I have now disabled this plugin on my WordPress blog. My doing so could affect how quickly readers can retrieve certain postings, but it should leave the retrieval time uniform for all postings, since WordPress can function fine without any caching, thank you.

(Screenshot: memcached_1)

Dirk

 

Finding the Multiplicative Inverse within a Modulus

The general concept in RSA Cryptography is that there exists a product of two prime numbers, call them (p) and (q), such that

(T ^ E) ^ D mod (p)(q) = T

where,

T is an original plaintext document,

E is an encryption key,

D is the corresponding decryption key,

E and D are successively-applied exponents of T,

(p)(q) is the original modulus.

A famous Mathematician named Euler found that, in order for exponent operations on T to be consistent in the modulus (p)(q), the exponents’ product itself must be consistent in the modulus (p-1)(q-1). This latter value is also known as Euler’s Totient Product of (p)(q).
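As a toy example, with numbers far too small to be secure: let (p) = 5 and (q) = 11, so that (p)(q) = 55 and (p-1)(q-1) = 40. Choosing E = 3 and D = 27 gives E·D = 81, which is congruent to 1 in the modulus 40, exactly the consistency the key-pair requires. And indeed, taking T = 2:

(2 ^ 3) ^ 27 mod 55 = 2 ^ 81 mod 55 = 2

which recovers T, just as the first equation above promises.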

There is a little trick to this form of encryption, which I do not see explained often, but which is important. Since E is part of the public key, a relatively small prime number is used for it in practice, that being (2^16 + 1), or 65537. D, on the other hand, could be a 2048-bit or a 4096-bit exponent. The public key consists of the exponent E and the modulus (p)(q), packaged together.

Thus, when the key-pair is created, (p) and (q) must be known separately, so that D can be computed. After that, (p-1)(q-1), which hopefully never touched the hard-drive, can be discarded, once D has been saved along with (p)(q) again, this time forming the private key.

Therefore, D must be the multiplicative inverse of E, in the modulus of (p-1)(q-1). But how does one compute that?
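One standard answer, whether or not it is the exact method the full posting goes on to describe, is the Extended Euclidean Algorithm: it finds integers (x) and (y) such that E·x + (p-1)(q-1)·y = 1, and (x), reduced into the modulus (p-1)(q-1), is then the required D. Below is a minimal sketch in C, using the toy numbers from above rather than real 2048-bit values, which would require a big-number library:

    /* Compute D as the multiplicative inverse of E modulo phi = (p-1)(q-1),
     * using the Extended Euclidean Algorithm (toy-sized integers only). */
    #include <stdio.h>

    /* Returns g = gcd(a, b) and sets *x, *y so that a*x + b*y == g. */
    static long long ext_gcd(long long a, long long b, long long *x, long long *y)
    {
        if (b == 0) {
            *x = 1;
            *y = 0;
            return a;
        }
        long long x1, y1;
        long long g = ext_gcd(b, a % b, &x1, &y1);
        *x = y1;
        *y = x1 - (a / b) * y1;
        return g;
    }

    /* Multiplicative inverse of e modulo phi, or -1 if it does not exist. */
    static long long mod_inverse(long long e, long long phi)
    {
        long long x, y;
        if (ext_gcd(e, phi, &x, &y) != 1)
            return -1;                   /* e and phi must be co-prime */
        return ((x % phi) + phi) % phi;
    }

    int main(void)
    {
        long long phi = 40;              /* (p-1)(q-1) for the toy p = 5, q = 11 */
        long long e   = 3;
        long long d   = mod_inverse(e, phi);
        printf("D = %lld, check: (E*D) mod phi = %lld\n", d, (e * d) % phi);
        return 0;
    }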

Continue reading Finding the Multiplicative Inverse within a Modulus