And Now, Memcached Contributes to This Site Again!

According to this earlier posting, I had just uninstalled a WordPress plugin from my server which uses the ‘memcached‘ daemon as a back-end, to cache blog content – namely, the content most frequently requested by readers. My reason for uninstalling that one was a warning from my WordFence security suite, that the plugin had been abandoned by its author.

Well, it’s not as if that plugin held a monopoly. Since then, I have found another caching plugin that again uses the ‘memcached‘ daemon. It is now up and running.

(Screenshot Updated 06/19/2017 : )


One valid question readers might ask would be: ‘Why does memcached waste a certain amount of memory, and then allocate more, even when the memory already allocated is not fully used?’

(Posting Updated 06/21/2017 … )

memcached uses a Slab Allocator, which allocates slabs corresponding to fixed object-sizes. These object-sizes lead to chunk-sizes. Each slab class represents one chunk-size, and memory is allocated to it one 1MB page at a time, so that each slab class holds a fixed number of chunks per page. And memcached allocates the first page to a slab class as soon as even a single object of the corresponding chunk-size has been requested.
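The arithmetic behind ‘chunks per page’ can be sketched quickly. The chunk-sizes below are hypothetical examples, chosen to echo the ones visible in the stats; the 1MB figure is memcached’s default slab page-size:

```shell
# Default slab page-size: 1MB.
page_size=$((1024 * 1024))

# For each chunk-size, a page holds floor(page_size / chunk_size) chunks;
# the remainder at the end of the page is simply wasted.
for chunk in 120 480 600; do
    echo "chunk size ${chunk}B -> $((page_size / chunk)) chunks per page"
done
```

So a slab class with 480B chunks gets 2184 chunks out of each 1MB page it receives, with a small unusable remainder at the page’s end.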

According to the screen-shot below, there is a slab-class 1, with chunks even smaller than 120B, but it has received zero pages, so that no corresponding slab exists yet.

Suppose, for the sake of argument, that the client program – WordPress – often stores 480B objects, but seldom stores 600B or 64.7KB objects. Once the slab of allocated 480B chunks is full, but the limit of allocated memory (in the present case, 256MB) has not been reached, memcached will allocate more pages to the slab class offering 480B chunks, while the 600B- and 64.7KB-chunk slabs remain mostly unused, yet continue to consume 1MB of memory each.

(Screenshot Updated 06/17/2017 : )



(Edit 06/20/2017 :

I’ve found an interesting article on the Web, explaining how memcached interacts with the slab-allocator.

According to that article, memcached actually needs to store its key and flag data in 48 bytes belonging to each chunk, so that in my own case, where the slab-class 1 chunk-size starts out as 96 bytes, only 48 bytes are actually available to store the value. This seems like a more reasonable starting size than what I first read out of the data.
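That overhead is easy to sketch. Assuming the flat 48-byte per-item overhead the article describes (the real header size varies somewhat between memcached versions), the value-space left in each chunk works out as follows:

```shell
# Assumed per-item overhead (key, flags, header), per the article: 48 bytes.
overhead=48

# Bytes left over for the actual cached value, per chunk-size.
for chunk in 96 120 480; do
    echo "chunk ${chunk}B -> $((chunk - overhead)) bytes available for the value"
done
```

Which is how a 96-byte starting chunk ends up holding only a 48-byte value.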

Also, according to what I read elsewhere, the type of hash-table used is such that memcached can resize it quickly. This implies that the hash-table does not use linked lists. )

I can offer another observation, about why memcached does not want to use Huge Pages on my Linux machine, even though I have Huge-Page support, and even though memcached has a command-line option that would enable this feature.

By default, the page-size of the Slab Allocator is 1MB. If it were to use the Huge Pages offered by the O/S, it would use advanced memory-control functions to ask for a custom Huge-Page size of 1MB. But on my system – as on most Debian systems – the Huge-Page size is 2MB. And so, the request for custom-sized Huge Pages is rejected by my kernel.
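On Linux, the kernel’s configured Huge-Page size can be read straight out of ‘/proc/meminfo‘, which is how one can verify the 2MB figure on a given machine:

```shell
# Report the kernel's Huge-Page size; on most Debian systems this
# prints "Hugepagesize:       2048 kB", i.e. 2MB.
grep Hugepagesize /proc/meminfo
```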

In consequence, memcached will simply have to use default memory, which is organized into 4KB pages on the O/S.

If a suggestion were made to allocate 2MB O/S Huge Pages, but to subdivide them into 1MB Slab-Allocator pages, then it would no longer be obvious that a considerable performance (speed) increase would follow. And so this suggestion is not pursued with much enthusiasm by the developers.

There does exist the command-line flag ‘-I‘, which changes the page-size used by the Slab Allocator.

But if we were to give ‘-I 2m -L‘, to try to match the Huge-Page size of the O/S according to human common sense, the only result would be that ‘memcached‘ nevertheless calls this special memory-manipulation function, to request a Huge-Page size of 2MB – which the O/S has already set. And then, if the kernel does not support the request, making it again causes the program to fail with an error message.

So at that point it became clear to me that, on my machine, ‘memcached‘ will simply not be able to use Huge Pages.
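To make the two situations concrete, here is a sketch of the invocations involved; the option names (‘-m‘, ‘-I‘, ‘-L‘, ‘-k‘) are real memcached flags, but the exact command lines are illustrative rather than copied from my init scripts:

```shell
# Attempting to match the O/S Huge-Page size, per the reasoning above --
# this fails on my kernel, because memcached re-requests a 2MB Huge-Page
# size that is already set:
#
#   memcached -d -m 256 -I 2m -L

# What actually works on my machine instead: ordinary 4KB O/S pages,
# pinned into physical memory so they are never swapped out:
#
#   memcached -d -m 256 -k
```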

(Edit 06/18/2017 : )

By arranging that ‘memcached‘ should pin its pages of memory into physical memory, using the ‘-k‘ option, I have accomplished whatever I might otherwise have accomplished by telling it to use Huge Pages via the ‘-L‘ option – namely, that the pages never get swapped out, either way.

And, because of the way virtual memory works, if 256 4KB pages are allocated to a user-space process contiguously, forming a 1MB allocation, then any location within that range can be addressed directly, using its virtual address, at no performance penalty, provided that none of the 4KB pages has been swapped out.

(Edit 06/21/2017 : )

Because the slab information shown above does not change much, even after the WordPress instance has been running for several days, I’ve settled on some new parameters for ‘memcached‘ that should improve its functioning on my system.

I’ve decided that from now on, the daemon is to have a maximum allocated (pinned) slab usage of 128MB again (not 256MB), and that its customized page-size is to be 512KB (not the default of 1MB).
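For illustration, those parameters would translate into flags along these lines; the flags themselves are genuine, but the surrounding command line (daemonizing, the user account) is a hypothetical sketch, since the real settings live in my service configuration:

```shell
# Revised start-up options, sketched:
#   -m 128  : cap the cache at 128MB
#   -I 512k : use 512KB slab pages (this flag also caps the item size)
#   -k      : lock the pages into physical memory, so they never swap out
#
#   memcached -d -u memcache -m 128 -I 512k -k
```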

But, these settings will not take effect until I have a reason to flush the cache again / to restart the daemon.


