Switched In a Replacement Keyboard

This is a situation which brings back memories of my Late Father, who repatriated to Germany in the late 1990s and early 2000s. He had a habit of giving me gifts that included computers and computer accessories.

During a visit which I made to Germany in 2010, he gave me a laptop that he didn’t need anymore, and which is in fact the Acer Aspire 5020 I mentioned before.

Link to Previous Posting

But his gifts that year included a good laptop bag, and also a keyboard. What I found peculiar about Dad was that it had seemed very important to him to find a keyboard in Germany which emulated the standard 104-key, US keyboard layout. Evidently, he had been away from Canada and the USA for so long that he could no longer remember what the US keyboard layout was. And so, specifically for me, he had purchased a new K.B. which had the U.K. layout. This is entirely logical, because of the geographic proximity Germany has to Great Britain.

But Dad seemed so sad when he learned that the K.B. he had bought for me wasn’t really a US-layout board. I tried to explain to him that a Linux computer can easily be switched from one keyboard layout to another, and that the only challenge we faced was to identify which layout this keyboard had. Because we had actually failed to find this out, I did not make immediate use of it.
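For what it’s worth, switching layouts really is that easy. On a Debian-style system, the change can be made temporarily with the ‘setxkbmap’ command for the current X session, or persistently in the file ‘/etc/default/keyboard’. The layout codes here (‘gb’ for the U.K., ‘us’ for the USA) are the standard XKB names; the model ‘pc105’ is an assumption which fits most full-size European boards:

```
# Temporarily, for the current X session:
#   setxkbmap gb      (or:  setxkbmap us )
#
# Persistently, in /etc/default/keyboard :
XKBMODEL="pc105"
XKBLAYOUT="gb"
```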

And, we did not have the time to solve the puzzle either, because we had his laptop to set up, as well as numerous other things to do in Germany, while my Father was still alive.

But not knowing what the keyboard layout is can do far more damage than simply having a layout from a nearby country. Dad, however, felt it was the other way around.

The computer I name ‘Pheonix’ possessed a Hewlett-Packard keyboard which had always served me well. But now the time came to retire that old K.B., which gave me my occasion this evening to switch in the one which my Late Father had given to me in 2010.

This K.B. is of the brand-name “Cherry”, and just by Googling that, I found out that Cherry is a German keyboard maker with a long-standing reputation for high-quality key switches. I tend to appreciate keyboards that have good tactile properties, and high-quality switches. Even though Cherries are not genuine “Clickety Keyboards”, they come close. I expect that this one will last me a long time. It’s quite robust.

Dirk

 

How I typically Solve my Kleopatra Start-Up Delay Problem

Both under Linux and under Windows, I use “Kleopatra”, which is a GUI for the ‘GnuPG’ system – the “GNU Privacy Guard”. In case the reader does not know, GnuPG or ‘GPG’ is one software alternative for providing ‘Public Key Cryptography’, which can be used in practice to sign and/or encrypt emails, as well as to validate digital signatures made by other people’s computers.

Using GPG does not strictly require that we use Kleopatra, because power-users can also drive GPG directly from the command-line. Kleopatra is a distinctly KDE-based front-end, even though Windows ports of it exist.
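As a sketch of what that command-line usage looks like, the following signs and verifies a message, using a throwaway key-ring in a temporary directory so as not to touch the real one. The name and email address are purely illustrative, and this assumes a GnuPG version of 2.1 or later:

```shell
# Work inside a throwaway key-ring, not the user's real one:
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

# Generate an unprotected test key, non-interactively:
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key "Test User <test@example.invalid>" default default never 2>/dev/null

# Clear-sign a message, producing message.txt.asc:
echo "Hello, world." > message.txt
gpg --batch --yes --pinentry-mode loopback --passphrase '' \
    --clearsign message.txt

# Verify the signature; this reports a good signature if all went well:
gpg --verify message.txt.asc
```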

One problem which I eventually ran into, and which has been reported elsewhere on the Internet, is that at first after installation, Kleopatra seems to run fine, but that after some point in time we encounter a strange delay when starting up this program, which can last for several minutes or even longer, during which the program does not respond properly to user commands. The GPG installation itself does not seem to be compromised.

In my case, this seems to take place entirely because Kleopatra has been instructed to check the revocation status of some certificates, but no ‘OCSP Server’ has been specified in its settings. According to some other reports on the Web, this is a problem specific to “CACert” certificates, and in my case as well, the problem set in after I’d added a CACert certificate to my key-ring. Yet, AFAIK, this problem could just as easily occur after adding other certificates.

The way I eventually solve this problem – on every computer I own – is to open Kleopatra somehow, and then to go into Settings -> Configure Kleopatra -> S/MIME Validation, and then to look at the field which says “OCSP responder URL”. By default, this field will be blank.

Since in my case the problem starts after I’ve added my CACert certificate, I actually add the OCSP Server which is provided by CACert there, which is currently “http://ocsp.cacert.org/”. After that, I find that when I open Kleopatra, a narrow and subtle progress-bar in the lower right of the application window, sweeps to completion within one second, and the program opens fine.
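If I’m not mistaken, Kleopatra’s S/MIME settings are ultimately stored in the configuration of ‘dirmngr’, the GnuPG component which performs network lookups. So the same effect should be obtainable by hand, with something like the following in ‘~/.gnupg/dirmngr.conf’ (the exact mapping of Kleopatra’s fields to these options is my assumption):

```
# ~/.gnupg/dirmngr.conf
# Permit OCSP queries, and name a default responder to ask:
allow-ocsp
ocsp-responder http://ocsp.cacert.org/
```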

I need to explain why this solution works for me, so that anybody who may be having the same problem, but not with a CACert certificate, can also solve this problem.

Certificates which are not self-signed are signed by a ‘Certificate Authority’, such as CACert. When Kleopatra starts, one of the functions it automatically performs is to check its certificates against a ‘Revocation List’, in case the Certificate Authority has decided to revoke any of them.

The actual certificate which I received from CACert has the detail encoded into its plain-text data that its revocation status must always be checked. But what I’ve found happens with Kleopatra specifically is that, if no OCSP Server has been specified, then instead of somehow recognizing that it cannot check the revocation status, this program goes into some type of infinite loop, never actually connecting to any server, but also never seeming to exit this state.

I choose to enter this OCSP Server because, in my case, I know that it is the CACert certificate which has this need set with a high priority. It should be possible to put some other OCSP Server into the same field, because ultimately they should all be synchronized. But then again, the OCSP Server provided by the same Certificate Authority also provides the fastest response time for validating its own certificates.

As I see it, there was a conflict of priorities somewhere in programming this application. There was the bureaucratic priority, which states that the status of this certificate must always be checked. But then there was also the programming priority, which states that an attempt to connect to a server, without any specification of which server, will eventually lead to some sort of malfunction. And between these two, the bureaucratic priority won out.

There are some people on the Web who choose to solve this problem by simply deactivating the feature of online revocation checking. This can be done within the same settings tab, by unchecking the first check-box there, the one located directly before the setting to “Check certificate validity every Hour” (on my setup, with the drop-down set to “hour”). I prefer to let my software do everything it’s supposed to do, including checking the revocation status of my certificates. And the way to do the latter is to specify an OCSP Server. The fact that this problem can apparently be solved both ways affirms the quality of the programming.

Dirk

 

How to Set Up Unattended Upgrades under Linux

I have a Debian / Jessie Linux system, with KDE 4 as my desktop manager. I have set up unattended upgrades on this machine, and would like to share with my community, the best way to do so.

Even though KDE possesses a feature called “Apper”, I do not recommend that we use this in order to perform unattended upgrades. The reason has to do with how Apper fails to deal with authentication properly.

In preparation, I would recommend that people install the package

apt-get install apt-listbugs

What this package will do is to query the Debian bug lists for any package, prior to upgrading or installing it. If the known bug-level exceeds a certain threshold, ‘apt-listbugs’ will ask the user for confirmation interactively, before installing the package. This inserts a hook into the package installation routine, which will also be invoked by unattended upgrades.

apt-listbugs is to be configured in the file

/etc/apt/apt.conf.d/10apt-listbugs

The critical line here is the one that reads

AptListbugs::Severities "critical,grave,serious";

This sets the severity levels at which installation should be stopped. For a list of available severity levels, read ‘man apt-listbugs’. I found that the default was adequate. If the severity level is set too low, this mechanism will be triggered too often; but if it is set too leniently, the automated upgrades will go ahead and install packages with severe bugs.
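For context, the whole file is short. On my system it looks roughly like the following – I’m reproducing it from memory, so treat the exact hook lines as approximate – and the hook lines are what cause apt-listbugs to be consulted before dpkg installs anything:

```
// /etc/apt/apt.conf.d/10apt-listbugs  (approximate contents)
// Run apt-listbugs on the packages about to be installed; abort on refusal:
DPkg::Pre-Install-Pkgs {"/usr/sbin/apt-listbugs apt || exit 10";};
DPkg::Tools::Options::/usr/sbin/apt-listbugs "";
DPkg::Tools::Options::/usr/sbin/apt-listbugs::Version "2";
// Which bug severities should stop an installation:
AptListbugs::Severities "critical,grave,serious";
```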

I have not spent time pondering how to straighten out an apt-get cache, if the apt-listbugs mechanism was ever triggered by a batch operation, but I suspect that doing so would be easier than dealing with installed, severe bugs. I.e., this mechanism has only ever been triggered on my machine by the manual installation of packages. I keep it as a precaution that will also work with ‘unattended-upgrades’.

Finally, we can install

apt-get install unattended-upgrades

This package is first to be enabled. The template for doing so ships as

/usr/share/unattended-upgrades/20auto-upgrades

but, if I recall correctly, it needs to be copied to /etc/apt/apt.conf.d/20auto-upgrades in order to take effect.

And then configured in this file

/etc/apt/apt.conf.d/50unattended-upgrades

It is important that this package not be left at its default configuration! Please read each entry in this file, and consider what settings are appropriate. Mainly, the bundled blacklist is not adequate. Further, while to have upgrades installed may be fine, to allow your system to reboot afterward might not be fine.

And one reason for this could be, that we have a desktop manager running, which will not end the session cleanly, if the system command ‘/sbin/reboot’ is simply given. And ‘unattended-upgrades’ will use this command, in complete disregard for what desktop manager is being used.

Therefore, all packages should be identified which, when upgraded, will also require a reboot, such as:

Kernel Images

Kernel Header Files

Graphics Drivers

And so on. Then, having forbidden those, the automatic reboot feature should, in my opinion, be disabled as well.
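A sketch of what the relevant entries in ‘50unattended-upgrades’ then look like – the package-name patterns here are examples suited to my machine, not a universal list, and the patterns are matched as regular expressions against package names:

```
// Excerpt from /etc/apt/apt.conf.d/50unattended-upgrades (a sketch):
// Never upgrade these automatically; they would require a reboot:
Unattended-Upgrade::Package-Blacklist {
    "linux-image-";
    "linux-headers-";
    "nvidia-";
};
// And do not let the machine reboot itself behind the desktop manager's back:
Unattended-Upgrade::Automatic-Reboot "false";
```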

Under KDE 4, Apper will next show me which packages have not been upgraded yet, so that I may upgrade them through Apper if I like, at which point I can manage the procedure myself. And I expect that Apper will also show me any dialogs which ‘apt-listbugs’ might display.

This system has been working splendidly for me, since June 2, 2015. I have no complaints.

Dirk

(Edit 03/14/2016 : ) Apparently, there was a crucial configuration step which I forgot to mention, because I had forgotten setting it up. It is necessary to create a configuration file within ‘/etc/apt/apt.conf.d’. This config file may be named something like ‘02periodic’. In my case, this file actually enables the service, with the following content:


// Enable automatic upgrades of security packages

APT::Periodic::Enable "1";
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

 

WordPress.org needs to be adapted to Debian.

I guess that when many people blog, they may go directly to a (paid) hosting service for blogs; WordPress.com would be an example of that. But the next assumption seems to be that people already have a Web-hosting service, with the privileges to FTP files up to it, and privileges to run (server-side) CGI scripts written in PHP. One type of software which they can then upload to this service is WordPress.org.

Hence, a lot of the assumptions made in the packaging of WordPress.org are relevant to this usage scenario, including perhaps the expectation that the blogger will solve any technical issues that come up, by tolerating possible configuration changes on his own end.

What I’m doing is a bit more out of the ordinary: hosting my site on my own server, including hosting WordPress.org. The way this Web-application is set up by default, it has a root or base directory, plus a content directory that underlies the root. Because many people simply upload this package to their Web-host and then tinker with it until it works, the WordPress.org devs do not see it as an urgent need to support a setup in which the root directory belongs to a different username than the content directory.

But in my setup that’s how it is. And so, entirely because I, too, was tinkering with my setup, I did trigger an attempted core update, which would have rewritten the contents of my root directory. But, because this site’s root has different permissions set on my computer, the WordPress version I use was also unable to give itself a complete update. This is a good thing, because mine was initially installed via my (Debian / Linux) package manager, and I need to keep it compatible with any future updates the package-manager might give it. OTOH, sub-directories of my content directory are supposed to contain more-personalized information.
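The permission split can be sketched as follows, demonstrated in a throwaway directory rather than my real install paths (which I’m not publishing). In the real setup the root would additionally be owned by a different user, such as root itself, while the content directory would belong to the web-server user; here, plain permission bits stand in for that:

```shell
# Demonstrate the split in a temporary directory:
site=$(mktemp -d)
mkdir -p "$site/root" "$site/wp-content/plugins"

# The root (core) directory: no write permission, so the Web-application
# cannot rewrite the package-managed files during an attempted core update:
chmod 555 "$site/root"

# The content directory: writable, so plug-in and theme updates can proceed:
chmod 755 "$site/wp-content" "$site/wp-content/plugins"

# Show the resulting modes (555 for the root, 755 for the content):
stat -c '%a %n' "$site/root" "$site/wp-content"

# Clean up the demonstration:
chmod -R u+w "$site" && rm -rf "$site"
```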

My enforcement of this was such, that the WordPress.org Web-application was unable to put the site into “Maintenance Mode”, in order to do its core-update. But the fact remains, that even if the site wanted only to update plug-ins or the theme that I happen to be using, this Web-application would still need to put the site into maintenance mode first, which by default it can’t. So by default, even updates to plug-ins would fail.

But my past practice of just deleting a plug-in from the content directory, and copying the new version into that content directory – which might be called ‘a manual update’ – will no longer continue to serve me well. From now on, I’ll want to run my updates in-place. And this required some further modification – i.e., some more fiddling on my part – to prepare.

As it stands, an attempted in-place update, even just for a plug-in, will still fail harmlessly, unless I first do something specific with my root password. And I’m not going to be more specific than that. I did post some questions on the WordPress.org forums about the issue I ran into before, but then just posted my own answer to it. Since nobody replied to my questions within the few hours before I marked my own issue as Resolved, I feel I shouldn’t be wasting their time with more of my details. And without those details, they may not be confident that I really do understand this subject. It’s just that nobody over there even bothered to follow up on that thought, say, with any questions of their own.

Dirk