New Policy Regarding memcached

One of the facts which I’ve written about at length is that, while my home computer ‘Phoenix’ has served well as my Web-server, I am aiming to improve its performance as much as possible through caching, and through the daemon named ‘memcached’ specifically. I have a new plugin installed on my blogging engine which connects to memcached, and this plugin tries to cache a larger set of the categories of object-types that exist within the WordPress blogging engine.

One of the less fortunate side-effects of this is that every time I update my other plugins, there are malfunctions, because WordPress does not recognize that the plugins in question have been updated to their new versions.

There is a logical solution to this problem, which should work: to restart memcached after each plugin-update. Doing so should be harmless, and it flushes the cache.

Again, a less fortunate side-effect of this policy will be that the site will seem a bit slow after each plugin-update, and, indeed, that I will no longer announce memcached restarts.

A fortunate side-effect of how much the new plugin caches is that it seems to improve the performance of the site more than the old caching plugin did. It almost seems possible that WordPress is serving out the blog while accessing the actual hard drive only infrequently.

Dirk


The Advantages of using a Slab Allocator

When people take their first C programming courses, they are taught about the standard allocator named ‘malloc()’, while when learning C++, we are first taught about its standard allocation operator, named ‘new’.

These allocators work on the assumption that a program is running in user space, and they may not always be efficient at allocating smaller chunks of memory. They assume that a standard method of managing the heap is in place, where the heap of any one process is part of that process’s memory-image, and is partially managed by the kernel.

Not only that, but when we tell either of these standard operators to allocate a chunk of memory, the allocator notes the size of that chunk, prepends a binary representation of that size to the chunk, and returns a pointer that has been advanced past that header. Thus, the pointer returned by either of these allocators points directly to the memory which the programmer can use, even though the allocated chunk is larger, and is preceded by a binary representation of its own size. That way, when the command is given to deallocate, all the deallocation-function needs to receive, in principle, is a pointer to the allocated chunk; from there, it can find the header that was inserted, to derive how much memory to free.
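Just to illustrate that header scheme, here is a toy sketch in C. It is not the bookkeeping that any particular C library actually uses (real allocators are far more elaborate), and it leans on the real ‘malloc()’ merely to obtain raw bytes:

    #include <stdlib.h>
    #include <stdio.h>

    /* A toy allocator that prepends a size header to each chunk, purely to
     * illustrate the scheme described above. */

    static void *toy_malloc(size_t size)
    {
        /* Allocate room for the header plus the caller's bytes. */
        size_t *chunk = malloc(sizeof(size_t) + size);
        if (chunk == NULL)
            return NULL;

        *chunk = size;      /* store the requested size in the header */
        return chunk + 1;   /* return a pointer just past the header  */
    }

    static void toy_free(void *ptr)
    {
        if (ptr == NULL)
            return;

        /* Step back over the header to recover the original chunk. */
        size_t *chunk = (size_t *)ptr - 1;
        printf("freeing a chunk of %zu usable bytes\n", *chunk);
        free(chunk);
    }

    int main(void)
    {
        int *p = toy_malloc(sizeof(int));   /* 4 usable bytes, plus an 8-byte header on x86-64 */
        if (p == NULL)
            return 1;
        *p = 42;
        toy_free(p);                        /* prints: freeing a chunk of 4 usable bytes */
        return 0;
    }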

I suppose that one conclusion to draw from this is that, even though it looks like a good exercise to teach programming students, always allocating a single 32-bit or 64-bit object (i.e., a 4-byte or an 8-byte object, such as an integer), just to obtain an 8-byte pointer to it, is actually not a good practice. In addition to the 4 or 8 bytes requested, a header is also being allocated each time, which may add 4 bytes if the maximum allocated size of one chunk is a 32-bit number, or 8 bytes if it is a 64-bit number.

Additionally, these allocators assume the support of the kernel for a user-space process, the latter of which has a heap. On 64-bit systems that are ‘vmalloc’-based, this requires that the user-space application try to access a virtual address which has not yet been backed by physical memory, which intentionally results in a page-fault, and which suspends the process. The kernel then needs to examine why the page-fault occurred, and, since the reason was a legitimate one, needs to set up the virtual page-frame, after which the (restarted) user-space process receives its address via the usual methods for returning values.
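This demand-paging behaviour can actually be observed from user space. The following is a small, Linux-specific sketch, which assumes that the ‘/proc/self/status’ file is available; it reads the process’s resident-set size before and after touching a freshly allocated block:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Print this process's resident-set size (VmRSS), as reported by the
     * Linux-specific file /proc/self/status. */
    static void print_rss(const char *label)
    {
        char line[256];
        FILE *f = fopen("/proc/self/status", "r");
        if (f == NULL)
            return;

        while (fgets(line, sizeof(line), f) != NULL) {
            if (strncmp(line, "VmRSS:", 6) == 0) {
                printf("%s: %s", label, line);
                break;
            }
        }
        fclose(f);
    }

    int main(void)
    {
        const size_t size = 64 * 1024 * 1024;   /* 64 MiB */

        print_rss("before malloc ");

        char *block = malloc(size);
        if (block == NULL)
            return 1;

        print_rss("after malloc  ");   /* barely changes: no pages faulted in yet */

        /* Touch one byte in every 4 KiB page, forcing the kernel to service a
         * page-fault for each one, and to map in a physical page-frame. */
        for (size_t i = 0; i < size; i += 4096)
            block[i] = 1;

        print_rss("after touching");   /* now larger by roughly 64 MiB */

        free(block);
        return 0;
    }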

And so, means also needed to exist by which a kernel can manage memory more efficiently, even under the assumption that the kernel does not have the sort of heap that a user-space process does. One main mechanism for doing so is to use a slab allocator. It will allocate large numbers of small chunks, without requiring as much overhead to do so as the standard user-space allocators did. In kernel-space, these slabs are the main replacement for a heap.
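To make the idea concrete, here is a minimal, user-space sketch of a fixed-size slab in C. It is not the Linux kernel’s actual SLAB or SLUB implementation, only an illustration of the principle: one large region is carved up in advance into equal-sized objects, and a free-list makes allocation and deallocation a matter of a few pointer operations, with no per-object size header.

    #include <stdio.h>
    #include <stdlib.h>

    #define OBJ_SIZE   64          /* every object in this slab is 64 bytes */
    #define OBJ_COUNT  1024        /* objects carved out of one slab        */

    /* Free objects are linked through their own first bytes, so the slab
     * needs no per-object header at all. */
    struct slab {
        unsigned char *memory;     /* one large, pre-allocated region       */
        void          *free_list;  /* head of the list of free objects      */
    };

    static int slab_init(struct slab *s)
    {
        s->memory = malloc((size_t)OBJ_SIZE * OBJ_COUNT);
        if (s->memory == NULL)
            return -1;

        /* Thread every object onto the free-list. */
        s->free_list = NULL;
        for (size_t i = 0; i < OBJ_COUNT; i++) {
            void *obj = s->memory + i * OBJ_SIZE;
            *(void **)obj = s->free_list;
            s->free_list = obj;
        }
        return 0;
    }

    static void *slab_alloc(struct slab *s)
    {
        void *obj = s->free_list;
        if (obj != NULL)
            s->free_list = *(void **)obj;   /* pop the head of the free-list */
        return obj;
    }

    static void slab_free(struct slab *s, void *obj)
    {
        *(void **)obj = s->free_list;       /* push the object back onto it  */
        s->free_list = obj;
    }

    int main(void)
    {
        struct slab s;
        if (slab_init(&s) != 0)
            return 1;

        void *a = slab_alloc(&s);
        void *b = slab_alloc(&s);
        printf("allocated %p and %p, %d bytes apart\n",
               a, b, (int)((unsigned char *)a - (unsigned char *)b));

        slab_free(&s, a);
        slab_free(&s, b);
        free(s.memory);
        return 0;
    }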

(Updated 06/20/2017 … )

Continue reading The Advantages of using a Slab Allocator

And Now, Memcached Contributes to This Site Again!

According to this earlier posting, I had just uninstalled a WordPress plugin from my server which used the ‘memcached’ daemon as a back-end to cache blog content, namely the content most frequently requested by readers. My reason for uninstalling that one was the warning from my WordFence security suite, that the plugin had been abandoned by its author.

Well, it’s not as if everything were a monopoly. Since then, I have found another caching plugin that again uses the ‘memcached’ daemon. It is now up and running.

(Screenshot Updated 06/19/2017 : )

(Screenshot: memcached_7)

One valid question which readers might ask would be: ‘Why does memcached waste a certain amount of memory, and then allocate more, even though not all of the allocated memory is being used?’

(Posting Updated 06/21/2017 … )

Continue reading And Now, Memcached Contributes to This Site Again!

Memcached no longer contributes to how this site works… For the moment.

One of the facts which I had mentioned some time ago was that, on my Web-server, I have a daemon running which acts as a caching mechanism for any client-programs that have the API to connect to it, and that daemon is called ‘memcached’.

And, in order for this daemon to speed up the retrieval specifically of blog-entries, which reside in this blog and which by default need to be retrieved from a MySQL database, I had also installed a WordPress.org plugin named “MemcacheD Is Your Friend”. This WordPress plugin added a PHP script to the PHP scripts that generally generate my HTML, and it accelerated doing so in certain cases by avoiding the MySQL database look-up.

In general, ‘memcached’ is a process which I can install at will, because my server is my own computer, and which stores Key-Value pairs. Certain keys correspond to WordPress look-ups by name, so that the most recent values resulting from those keys were being cached on my server (not in your browser), which in turn could make the retrieval of the most-commonly-asked-for postings faster, for readers and their browsers.
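For readers who are curious, memcached speaks a simple text protocol, and a set / get round-trip can be sketched in a few lines of C. This sketch assumes the daemon is listening on localhost:11211, which is its default port; the key name ‘example_post_42’ is made up purely for illustration, and a real WordPress plugin would go through a PHP client library rather than raw sockets:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Open a TCP connection to a memcached daemon assumed to be
     * listening on localhost:11211. */
    static int connect_memcached(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(11211);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    int main(void)
    {
        char reply[4096];
        int fd = connect_memcached();
        if (fd < 0) {
            perror("connect");
            return 1;
        }

        /* Store a value under a made-up key, with a 300-second expiry:
         * "set <key> <flags> <exptime> <bytes>" followed by the data. */
        const char *value = "Hello from the cache";
        char command[256];
        int n = snprintf(command, sizeof(command),
                         "set example_post_42 0 300 %zu\r\n%s\r\n",
                         strlen(value), value);
        write(fd, command, (size_t)n);
        n = read(fd, reply, sizeof(reply) - 1);
        reply[n > 0 ? n : 0] = '\0';
        printf("set -> %s", reply);            /* expect: STORED */

        /* Retrieve it again by the same key. */
        const char *get = "get example_post_42\r\n";
        write(fd, get, strlen(get));
        n = read(fd, reply, sizeof(reply) - 1);
        reply[n > 0 ? n : 0] = '\0';
        printf("get -> %s", reply);            /* VALUE header, data, END */

        close(fd);
        return 0;
    }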

Well, just this morning, my WordFence security suite reported the sad news to me that this WordPress plugin has been “Abandoned” by its developer, who for some time had been doing no maintenance or updates to it, and that its use is now advised against.

If the plugin has in fact been abandoned in this way, it becomes a mistake for me to keep using it for two reasons:

  1. Updates to the core files of WordPress could create compatibility issues, which only the upkeep of the plugin by its developer could remedy.
  2. Eventually, security flaws can come to exist in it, which hackers find, but which the original developer fails to patch.

And so I have now disabled this plugin from my WordPress blog. My doing so could affect how quickly readers can retrieve certain postings, but it should leave the retrieval time uniform for all postings, since WordPress can function fine without any caching, thank you.

(Screenshot: memcached_1)

Dirk