How I Typically Solve My Kleopatra Start-Up Delay Problem

Both under Linux and under Windows, I use “Kleopatra”, which is a GUI for the ‘GnuPG’ system – the “GNU Privacy Guard”. In case the reader does not know, GnuPG, or ‘GPG’, is one software alternative for providing ‘Public Key Cryptography’, which can be used in practice to sign and/or encrypt emails, as well as to validate digital signatures made by other people’s computers.

Using GPG does not strictly require Kleopatra, because power-users have the capability to use GPG from the command-line. Also, Kleopatra is a distinctly KDE-based front-end, even though Windows ports of it exist.

One problem which I eventually ran into, and which has been reported elsewhere on the Internet, is that at first after installation, Kleopatra seems to run fine, but after some point in time we encounter a strange delay when we start up this program, which can last for several minutes or even longer, and during which the program does not respond properly to user commands. The GPG installation itself does not seem to be compromised.

In my case, this seems to take place entirely because Kleopatra has been instructed to check the revocation status of some certificates, but no ‘OCSP Server’ has been specified in its settings. According to some other reports on the Web, this is a problem specific to “CACert” certificates, and in my case also, the problem seems to set in after I’ve added a CACert certificate to my key-ring. Yet, AFAIK, this problem could just as easily occur after we’ve added other certificates.

The way I eventually solve this problem – on every computer I own – is to open Kleopatra somehow, then to go into Settings -> Configure Kleopatra -> S/MIME Validation, and then to look at the field which says “OCSP responder URL”. By default, this field will be blank.

Since in my case the problem starts after I’ve added my CACert certificate, I actually enter the OCSP Server which is provided by CACert there, which is currently “http://ocsp.cacert.org/”. After that, I find that when I open Kleopatra, a narrow and subtle progress-bar in the lower right of the application window sweeps to completion within one second, and the program opens fine.
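For readers who prefer to make the same change outside the GUI, my understanding is that Kleopatra stores this setting in the configuration of GnuPG’s ‘dirmngr’ component. A minimal sketch of the equivalent entries, assuming a stock per-user GnuPG layout (the exact file location may differ between GnuPG versions):

# ~/.gnupg/dirmngr.conf (assumed location; Kleopatra normally maintains
# these entries itself, via gpgconf)
allow-ocsp
ocsp-responder http://ocsp.cacert.org/

On newer GnuPG versions, running ‘gpgconf --kill dirmngr’ afterward makes the change take effect; older versions may require restarting the dirmngr daemon by other means.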

I should explain why this solution works for me, so that anybody who may be having the same problem, but not with a CACert certificate, can also solve it.

Certificates which are not self-signed are signed by a ‘Certificate Authority’, such as CACert. When Kleopatra starts, one of the functions which it automatically performs is to check its certificates against a ‘Revocation List’, in case the Certificate Authority has decided to revoke one of them.

The actual certificate which I received from CACert has the detail encoded into its plain-text data, that its revocation status must always be checked. But what I’ve found happens with Kleopatra specifically is that, if no OCSP Server has been specified, then instead of somehow recognizing the fact that it cannot check the revocation status, this program goes into some type of infinite loop, never actually connecting to any server, but also never seeming to exit this state.

I chose to enter this OCSP Server because, in my case, I know that it is the CACert certificate which has this need set with a high priority. It should be possible to put some other OCSP Server into the same field, because ultimately they should all be synchronized. But finally, the OCSP Server provided by the same Certificate Authority also provides the fastest response time, for validating its own certificates.
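As a quick sanity check that a given responder is actually reachable, the ‘dirmngr-client’ utility, which ships alongside GnuPG’s dirmngr, can query OCSP directly from the command-line. A sketch, in which the fingerprint and file name are placeholders of my own:

# Export the certificate in binary (DER) form; the fingerprint is a placeholder:
gpgsm --export 0xDEADBEEF > mycert.der
# Ask dirmngr to check its revocation status via OCSP:
dirmngr-client --ocsp mycert.der

If this command hangs or errors out, that would point to the same unreachable-responder situation which stalls Kleopatra.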

As I see it, there was a problem in priorities somewhere, in programming this application. There was the bureaucratic priority, which states that the status of this certificate must always be checked. But then there was also the programming priority, which states that an attempt to connect to a server, without any specification of which server, will eventually lead to some sort of malfunction. And between these two, the bureaucratic priority won out.

There are some people on the Web who choose to solve this problem by simply deactivating the feature of online revocation checking. This can be done within the same settings tab, by unchecking the first check-box in that tab – the one located directly before the setting to “Check certificate validity every Hour” (on my setup, with a drop-down set to “hour”). I prefer to let my software do everything it’s supposed to do, including checking the revocation status of my certificates. And the way to do the latter is to specify an OCSP Server. The fact that this problem can apparently be solved both ways affirms the quality of the programming.

Dirk


How to Set Up Unattended Upgrades under Linux

I have a Debian / Jessie Linux system, with KDE 4 as my desktop manager. I have set up unattended upgrades on this machine, and would like to share with my community the best way to do so.

Even though KDE possesses a feature called “Apper”, I do not recommend using it to perform unattended upgrades. The reason has to do with how Apper fails to deal with authentication properly.

In preparation, I would recommend that people install the package

apt-get install apt-listbugs

What this package does is query the Debian bug lists for any package, prior to upgrading or installing it. If the known bug-level exceeds a certain threshold, ‘apt-listbugs’ will ask the user for confirmation interactively, before installing the package. It does this by inserting a hook into the package-installation routine, which will also be invoked by unattended upgrades.
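For the curious, the hook in question is a ‘DPkg::Pre-Install-Pkgs’ entry in the configuration file discussed next. From memory it looks roughly like the following, so treat the exact wording as approximate:

// Excerpt from /etc/apt/apt.conf.d/10apt-listbugs (approximate):
DPkg::Pre-Install-Pkgs { "/usr/sbin/apt-listbugs apt || exit 10"; };

The effect is that apt runs ‘apt-listbugs’ over the list of packages about to be installed, and aborts the installation if it declines.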

apt-listbugs is to be configured in the file

/etc/apt/apt.conf.d/10apt-listbugs

The critical line here is the one that reads

AptListbugs::Severities "critical,grave,serious";

This sets the severity levels at which installation should be stopped. For a list of available severity levels, read ‘man apt-listbugs’. I found that the default was adequate. If the severity level is set too low, this mechanism will be triggered too often; but if it allows bugs which are too severe to be installed, the automated upgrades will install them.

I have not spent time pondering how to straighten out an apt-get cache, if the apt-listbugs mechanism is ever triggered by a batch operation, but I suspect that doing so would be easier than dealing with installed, severe bugs. I.e., this mechanism has only ever been triggered on my machine by the manual installation of packages. I keep it as a precaution, which will also work with ‘unattended-upgrades’.

Finally, we can install

apt-get install unattended-upgrades

This package is first to be enabled, by copying the following template file into ‘/etc/apt/apt.conf.d/’ (or equivalently, by running ‘sudo dpkg-reconfigure -plow unattended-upgrades’):

/usr/share/unattended-upgrades/20auto-upgrades
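For reference, on my system this template consists of just two lines, which switch on the daily apt housekeeping run and the unattended-upgrades step within it, respectively:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";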

And then configured in this file

/etc/apt/apt.conf.d/50unattended-upgrades

It is important that this package not be left at its default configuration! Please read each entry in this file, and consider which settings are appropriate. Mainly, the bundled blacklist is not adequate. Further, while having upgrades installed may be fine, allowing your system to reboot afterward might not be.

And one reason for this could be that we have a desktop manager running, which will not end the session cleanly if the system command ‘/sbin/reboot’ is simply given. And ‘unattended-upgrades’ will use this command, in complete disregard of which desktop manager is being used.

Therefore, all packages should be identified which, when upgraded, will also require a reboot, such as

Kernel Images

Kernel Header Files

Graphics Drivers

etc., etc., etc.. And then, having forbidden those, the automatic reboot feature should, in my opinion, be disabled as well. A sketch of both settings follows.
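A minimal sketch of what I mean, for ‘/etc/apt/apt.conf.d/50unattended-upgrades’. The blacklist patterns below are examples of my own and will differ per machine – the graphics-driver pattern assumes an NVIDIA driver, purely for illustration:

// Never upgrade these automatically; entries are matched as patterns:
Unattended-Upgrade::Package-Blacklist {
    "linux-image-";
    "linux-headers-";
    "nvidia-";
};
// Never reboot the machine automatically, even if an upgrade requests it:
Unattended-Upgrade::Automatic-Reboot "false";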

Under KDE 4, Apper will next show me which packages have not been upgraded yet, so that I may upgrade them through Apper if I like, at which point I can also manage the procedure myself. And I expect that Apper will also show me any dialogs which ‘apt-listbugs’ might display.

This system has been working splendidly for me, since June 2, 2015. I have no complaints.

Dirk

(Edit 03/14/2016 : ) Apparently, there was a crucial configuration step which I forgot to mention, because I had forgotten setting it up. It is necessary to create a configuration file within ‘/etc/apt/apt.conf.d’. This config file may be named like so: ‘02periodic’. In my case, this file actually enables the service, with the following content:


// Enable automatic upgrades of security packages:
APT::Periodic::Enable "1";
// Refresh the package lists ("1" = every 1 day):
APT::Periodic::Update-Package-Lists "1";
// Run unattended-upgrades as part of the daily apt run:
APT::Periodic::Unattended-Upgrade "1";


WordPress.org needs to be adapted to Debian.

I guess that when many people blog, they may go directly to a (paid) hosting service for blogs; WordPress.com would be an example of that. But the assumption that seems to come next is that people already have a Web-hosting service, with the privileges to FTP files up to it, and privileges to run (server-side) CGI scripts written in PHP. One type of software which they can then upload to this service is WordPress.org.

Hence, a lot of the assumptions made in the packaging of WordPress.org are relevant to this usage scenario, including perhaps the need that can come up to solve technical issues, by tolerating possible configuration changes on the part of the blogger.

What I’m doing is a bit more out of the ordinary: hosting my site on my own server, including hosting WordPress.org. The way this Web-application is set up by default, it has a root or base directory, plus a content directory that lies beneath the root. Because many people simply upload this package to their Web-host and then tinker with it until it works, the WordPress.org devs see no urgent need to support a setup in which the root directory is owned by a username different from the username owning the content directory.

But in my setup, that’s how it is. And so, entirely because I, too, was tinkering with my setup, I did trigger an attempted core update, which would have rewritten the contents of my root directory. But, because this site’s root has different permissions set on my computer, the WordPress version I use was also unable to give itself a complete update. This is a good thing, because mine was initially installed via my (Debian / Linux) package manager, and I need to keep it compatible with any future updates the package manager might give it. OTOH, sub-directories of my content directory are supposed to contain more-personalized information.

My enforcement of this was such that the WordPress.org Web-application was unable to put the site into “Maintenance Mode”, in order to do its core-update. (Specifically, entering maintenance mode means writing a ‘.maintenance’ file into the root directory, which fails when the Web-server’s user cannot write there.) But the fact remains that, even if the site wanted only to update plug-ins, or the theme that I happen to be using, this Web-application would still need to put the site into maintenance mode first, which by default it can’t. So by default, even updates to plug-ins would fail.

But my past practice, of just deleting a plug-in from the content directory and copying the new version into that content directory – which might be called ‘a manual update’, and which I sketch below – will no longer continue to serve me well. From now on, I’ll want to run my updates in-place. And this required some further modification – i.e., some more fiddling on my part – to prepare.
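For completeness, the ‘manual update’ I’m describing amounts to something like the following. The content-directory path is an assumption, based on how Debian typically splits the package, and ‘example-plugin’ is a hypothetical plug-in name:

# Fetch and unpack the new release ('example-plugin' is hypothetical):
wget https://downloads.wordpress.org/plugin/example-plugin.zip
unzip example-plugin.zip
# Swap it in, under the content directory (path assumed for a Debian setup):
rm -r /var/lib/wordpress/wp-content/plugins/example-plugin
mv example-plugin /var/lib/wordpress/wp-content/plugins/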

As it stands, an attempted in-place update, even just for a plug-in, will still fail harmlessly, unless I first do something specific with my root password. And I’m not going to be more specific than that. I did post some questions on the WordPress.org forums about the issue that I ran into before, but then just posted my own answer to it. Since nobody replied to my questions within a few hours of my posting them – before I marked my own issue as Resolved – I feel that I shouldn’t waste their time with more of my details. And yet, without these details, they may not be confident that I really do understand this subject. It’s just that nobody over there even bothered to follow up on that thought, let’s say with any questions of their own.

Dirk


There has been a Dist-Upgrade on my Server.

This server is hosted on a Debian / Jessie (Linux) computer, which I own and run myself. Under Debian Linux systems, the most thorough kind of update which can be carried out is called a ‘dist-upgrade’, or a ‘d-u’ for short. Just this evening, I saw that there were suddenly 93 software packages which all needed an upgrade, and I saw that I could not just leave this type of upgrade to the usual, automated services. Therefore, I decided to administer the 93 package-upgrades via a dist-upgrade command, as sketched below. This can be stressful, or exciting, or both, because it can give a Linux user an improvement, or it can in some cases actually cripple a system. I’m glad to say that this Linux box, which I name ‘Phoenix’, did not get crippled. It’s still fully bootable.
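For readers unfamiliar with the procedure, a dist-upgrade is typically run along these lines (a minimal sketch):

# Refresh the package lists, then apply all pending upgrades, allowing
# packages to be newly installed or removed where dependencies require it:
sudo apt-get update
sudo apt-get dist-upgrade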

But due to this procedure, the Web-server was also down, from 20h15 until about 20h40. I see that my blog is still here though, after the d-u.

I think that most software updates can be fun and games. But this particular upgrade also chose to include my graphics driver, which I am particularly fussy about. The past version of the graphics driver on this box was extremely stable, and I had been trying to avoid any sort of upgrade to it; but now, doing so was the only way to keep my box compatible with future upgrades.

It has sometimes happened to me that the screen might just freeze – even though it’s a Linux computer – due to stability problems with other graphics drivers, especially with the ‘Mesa’ driver, which renders an OpenGL equivalent in software. But what has been most stable for me in recent months was the ‘GLX’ driver, which does full hardware OpenGL rendering as it’s supposed to, and which under modern Linux systems is even capable of a ‘TDR’ equivalent – a Timeout Detection and Recovery – which will restart a crashed GPU without harming the active session.

If in the near future I find that my screen does freeze, or that there are TDR issues, a sinking feeling will go through my heart, because that would signal that a completely stable graphics driver has been replaced, unnecessarily, with an unstable one. And in the act of doing so, my package-management scripts even recompiled the DKMS kernel module for the graphics driver in question, because that is the correct way to install it.

Oh yes, I see that the Apache Web-server software which my machine hosts has been given an upgrade as well. But as I see it, this was the set of packages least likely for the maintainers to have botched. So it’s my full assumption that Web-server activity will continue without error.

Dirk