WordPress.org needs to be adapted to Debian.

I imagine that when many people blog, they go directly to a (paid) hosting service for blogs; WordPress.com would be an example of that. But the assumption that seems to come next is that people already have a Web-hosting service, with the privileges to FTP files up to it, and the privileges to run (server-side) CGI scripts written in PHP. One type of software which they can then upload to such a service is WordPress.org.

Hence, many of the assumptions made in the packaging of WordPress.org are geared to this usage scenario, including perhaps the expectation that, to solve technical issues as they come up, the blogger may need to make configuration changes of his or her own.

What I’m doing is a bit more out of the ordinary: I host my site on my own server, including WordPress.org itself. The way this Web-application is set up by default, it has a root or base directory, plus a content directory that underlies the root. Because many people simply upload this package to their Web-host and then tinker with it until it works, the WordPress.org devs see no urgent need to support setups in which the root directory is owned by a different username than the content directory.

But in my setup, that’s how it is. And so, entirely because I, too, was tinkering with my setup, I triggered an attempted core update, which would have rewritten the contents of my root directory. But because this site’s root has different permissions set on my computer, the WordPress version I use was unable to give itself a complete update. This is a good thing, because mine was initially installed via my (Debian / Linux) package manager, and I need to keep it compatible with any future updates the package manager might give it. OTOH, sub-directories of my content directory are supposed to contain more-personalized information.

My enforcement of this was such that the WordPress.org Web-application was unable to put the site into “Maintenance Mode” in order to do its core update. But the fact remains that even if the site wanted only to update plug-ins, or the theme that I happen to be using, this Web-application would still need to put the site into Maintenance Mode first, which by default it can’t. So by default, even updates to plug-ins would fail.
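As a sketch, the permission situation WordPress runs into on my server can be modelled with ordinary Unix mode bits. The paths and user ids in the comments below are hypothetical placeholders, not my actual configuration:

```python
import os
import stat

def writable_by(path, uid, gids):
    """Judge, from the classic owner/group/other mode bits alone, whether
    a user with the given uid (and list of group ids) could write to 'path'."""
    st = os.stat(path)
    if st.st_uid == uid:
        return bool(st.st_mode & stat.S_IWUSR)
    if st.st_gid in gids:
        return bool(st.st_mode & stat.S_IWGRP)
    return bool(st.st_mode & stat.S_IWOTH)

# Hypothetical layout -- the idea is that the web-server's user may
# write to the content directory but NOT to the site root:
# writable_by('/usr/share/wordpress', www_uid, www_gids)           -> False
# writable_by('/var/lib/wordpress/wp-content', www_uid, www_gids)  -> True
```

With the root directory failing this test for the web-server’s user, a core update, or the Maintenance Mode lock-file it wants to write into the root, simply can’t happen.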

But my past practice, of just deleting a plug-in from the content directory and copying the new version into that content directory – which might be called ‘a manual update’ – will no longer serve me well. From now on, I’ll want to run my updates in-place. And this required some further modification – i.e., some more fiddling on my part – to prepare.
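That past practice amounts to only a few operations; a sketch follows, in which the directory names are placeholders rather than my real layout:

```python
import shutil
from pathlib import Path

def manual_plugin_update(plugins_dir, new_version_dir, slug):
    """The 'manual update' routine: delete the installed plug-in, then
    copy in the freshly unpacked new version. No Maintenance Mode is
    involved, and no write access to the site root is needed -- only
    write access to the content directory."""
    installed = Path(plugins_dir) / slug
    if installed.exists():
        shutil.rmtree(installed)                      # delete old copy
    shutil.copytree(Path(new_version_dir) / slug, installed)

# Hypothetical use -- on a real site, plugins_dir would be
# something like wp-content/plugins:
# manual_plugin_update('wp-content/plugins', '/tmp/unpacked', 'some-plugin')
```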

As it stands, an attempted in-place update, even just for a plug-in, will still fail harmlessly, unless I first do something specific with my root password. And I’m not going to be more specific than that. I did post some questions on the WordPress.org forums about the issue that I ran into before, but then just posted my own answer to it. Since nobody replied to my questions within a few hours of my posting them – before I marked my own issue as Resolved – I also feel that I shouldn’t waste their time with more of my details. And without these details, they may not be confident that I really do understand this subject. It’s just that nobody over there even bothered to follow up on that thought, let’s say with any questions of their own.




My Site mainly Requires Web-Sockets.

The origins of HTTP were essentially ‘sessionless’. This meant that, with a server always listening on port 80, a client could request one URL at a time, in response to which the server would return the page in question directly to the client’s port number. This included the FORM data handled by CGI scripts. But as the early Internet evolved, Web-sites started to become ‘session-aware’. I explained this to my friends in the past as follows:

The client connects to the assigned port number 80 on the server and requests a session, which causes the server to start listening on another socket, thus forming a ‘session socket’. The one listening on port 80 was the ‘server socket’. The server’s session socket was dedicated to one client and to one session.

My friends did not acknowledge this description of how TCP works, I think mainly because I did not use the right terminology. What I had referred to as a ‘session socket’ is officially termed a “Web-Socket” in the case of HTTP. It turns out that with an Apache server, many sub-processes can each bear one of these Web-Sockets. They don’t exist exclusively in order to output Web-pages at a faster rate, in response to the individual requests which clients make to the process still listening on port 80.
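For what it’s worth, the relationship I was reaching for can be sketched with ordinary BSD sockets. The toy echo server below (plain TCP, not HTTP) shows the one listening socket, and the separate socket that accept() hands back for each connection. Strictly speaking, the per-connection socket shares the listener’s local port, and is told apart by the remote address and port:

```python
import socket
import threading

def serve_once(server_sock):
    conn, peer = server_sock.accept()   # a new, per-connection socket
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)              # echo the bytes straight back

# The listening ("server") socket; any free port stands in for port 80.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

client = socket.create_connection(('127.0.0.1', port))
client.sendall(b'hello')
reply = client.recv(1024)
client.close()
t.join()
server.close()
```

After this runs, `reply` holds the echoed `b'hello'`; the listening socket itself never carried any of the conversation.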

One fact to know about my site is that, for such purposes as viewing this blog, the use of Web-Sockets is required. In the case of certain other sections of my site, such as http://dirkmittler.homeip.net/GallIndex.htm, the use of Web-Sockets is not required, because those Web-pages exist mainly in a sessionless way – they can be fetched one at a time without error.

Certain proxy-servers will not allow a Web-Socket to be forwarded. Logically, these are also the proxies which don’t allow SSL connections to be forwarded, because the encrypted SSL data is also sent via Web-Sockets or their equivalent. If you are connecting to the Internet via such a proxy, I’m afraid you won’t be able to navigate my blog correctly. I apologize for this, but there is little I can do about it. I think that you should still be able to fetch a single posting of mine that comes up through a search engine.


(Edit: ) This may also apply if you’re trying to connect to my IPv6 address, because my IPv6 is being provided by a Teredo proxy, which might have assigned reduced privileges to my client.




An Aspect to Hugetlbfs, which Many Sites Omit

I was recently troubleshooting the question of Huge Pages under Linux, which are pages of virtual memory with a size of 2MB each, instead of 4KB. And I was interested in the question of statically-allocated ones, even though modern, 64-bit Linux systems also offer Transparent Huge Pages. There was a gap in what was posted online, concerning whether an actual ‘hugetlbfs’ file system can, or needs to, be mounted, according to whether specific programs use one.

Under systems such as my own, as soon as the system boots with huge pages enabled – i.e., with a non-zero number of them reserved – the kernel automatically creates a mount-point at ‘/dev/hugepages’. This mount-point is created without any administrator ‘fstab’ entry, and belongs to user and group ‘root’. According to some needs, such a mount-point would better be created as belonging to a specific group, and as having the option ‘mode=1770’ set. And so, before checking the default behavior of my kernel, I also created a suitable mount-point at ‘/mnt/hugepages’. The question remained in my mind of whether any way exists to give the automatic mount-point at ‘/dev/hugepages’ my custom options. And the answer seems to be No.
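For reference, a custom mount-point like the one at ‘/mnt/hugepages’ can be expressed as a single fstab line. The group id 1001 here is only an example – substitute whichever group should be allowed to create files there:

```
hugetlbfs  /mnt/hugepages  hugetlbfs  mode=1770,gid=1001  0  0
```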

Here’s what the online documentation forgets to mention:

If you have a program which requires such a mount-point, there will be a line in its config file that states where it’s located. Any files created in that mount-point will then consume statically-allocated huge pages of RAM.

Because MySQL is not an example of a program that requires this, its config file also needs no line to tell it which mount-point to use. MySQL uses memory-allocation functions directly, to ask the kernel for huge pages.

I suppose that it does no real harm if more than one hugetlbfs is mounted at any time, as long as the unneeded one is not wasting any pages – say, as long as absolutely no files are being created in the unused mount-point. And then, if we need to, we can in fact give our custom hugetlbfs mount-point whatever properties we think it should have, via the mount options or the fstab.

Because I didn’t need the one I had created, I simply got rid of it again. Besides which, the fact that one is automatically created at ‘/dev/hugepages’ these days suggests that future programs which need it will already be configured to look for it there. And then it would also make sense if those programs were able to deal with the fact that it belongs to user and group ‘root’.


