xscreensaver Bug With Latest Proprietary nVidia Graphics Drivers

As described in this posting, I have just applied a major software-update to the computer I name ‘Plato’, in which I replaced its open-source graphics drivers with the proprietary nVidia drivers suitable for its graphics card and its Linux build.

That would be drivers version ‘375.82-1~deb9u1’, from the package manager, for a ‘Debian 9.4’ system.

I have just noticed a major bug, which other people should know about before they also decide to go with the proprietary drivers. These drivers tend to cause a malfunction with the OpenGL-based ‘xscreensaver’ screen-savers, version ‘5.36-1’.

The bug seems to be that, if I use the graphical configuration tool to preview several screen-savers, then when I switch from one screen-saver to another, the previous GL screen-saver being previewed fails to terminate. This in turn causes the configuration window to freeze, so that the next-chosen screen-saver cannot be previewed; a small blank rectangle takes its place in the configuration window. When this happens, I actually need to ‘kill -9’ the screen-saver process – the process belonging to the individual screen-saver, not ‘/usr/bin/xscreensaver’ – which by then is taking up 100% of one CPU core as nice-time, before I can continue previewing screen-savers.
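For the record, cleaning this up looks roughly like the following (a sketch; ‘glmatrix’ is only an example name here, so substitute whichever GL hack was actually being previewed):

pgrep -a -u "$USER" glmatrix     # confirm that the hack is still running, and note its PID
pkill -9 -u "$USER" glmatrix     # force-kill the hack itself, not /usr/bin/xscreensaver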

The problem with this, as I see it, is that it could also happen after the screen-saver has locked the screen, and after I have entered my password to unlock it. The mere fact that I was always able to unlock one GL-based screen-saver in the past was good in itself, but may only have been luck! The way in which my bug seems to differ from other users’ bug-reports is that, when my OpenGL-based screen-saver was rendering to the root window – i.e., to the whole screen – it did exit properly when unlocked by me.

So as it currently stands, I have set the screen-saver on the computer ‘Plato’ to just a blank screen… :-(

At the same time, OpenGL applications seem to run just fine, like this example, just tested:

(Screenshot: screenshot_20180430_142338)

However, since the description of the screen-saver packages in the package manager states “GL(Mesa)” screen-savers, it may be better just to ‘remove’ the ‘xscreensaver-gl’ and ‘xscreensaver-gl-extra’ packages.
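That removal would amount to something like the following, given as root (assuming that no other installed package depends on them):

apt-get remove xscreensaver-gl xscreensaver-gl-extra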

I found out that this bug also affects ‘rss-glx 0.9.1-6.1’.

(Updated 04/30/2018, 19h25 … )


I’ve finally installed the proprietary nVidia graphics drivers.

In this earlier posting, I had written about the fact that it was a risky project to switch from the open-source ‘Nouveau’ graphics drivers, which are provided by a set of packages under Debian / Linux that contain the word ‘Mesa’, to the proprietary ‘nVidia’ drivers. So risky, in fact, that for a long time I hesitated to do it.

Well, just this evening I made the switch. Under Debian / Stretch, aka Debian 9, this switch is relatively straightforward to accomplish. What we do is switch to a text-session, using <Ctrl>+<Alt>+F1, and then kill the X-server. From there, we essentially just need to give the command (as root):

apt-get install nvidia-driver nvidia-settings nvidia-xconfig

Giving this command essentially allows the Debian package manager to perform all the post-install steps, such as blacklisting the Nouveau drivers. One should expect this command to do a lot of work as side-effects, since it pulls in quite a few dependencies.
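For completeness, ‘killing the X-server’ on a systemd-based Debian system usually amounts to stopping the display manager first. A minimal sketch, in which ‘lightdm’ is only an assumption; substitute whichever display manager is actually in use:

# From the <Ctrl>+<Alt>+F1 text-session, as root:
systemctl stop lightdm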

(Edit 04/30/2018 :

In addition, the user must have up-to-date kernel / Linux headers installed, because installing the graphics driver also requires building DKMS kernel modules. But it’s always my assumption that I’d have the kernel headers installed myself. )
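One way of making sure that the matching headers are present, on a Debian system, might be (as root):

apt-get install linux-headers-$(uname -r)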

When I first gave the command to install the drivers, apt-get suggested additional packages to me, which I wrote down on a sheet of paper. I then answered ‘No’ to the question of whether or not to proceed (without those), so that I could add all of the suggested packages onto a new command-line.

(Update 05/05/2018 :

The additional, suggested packages which I mentioned above offer the ‘GLVND’ version of GLX. With nVidia, there are actually two ways to deliver GLX: an nVidia-centered way, and a generic way. ‘GLVND’ provides the generic way. It’s also potentially more useful if, later on, we might want to install the 32-bit versions as well.

However, if we fail to add any other packages to the command-line, then the graphics driver will load, but we won’t have any OpenGL capabilities at all. Some version of GLX must also be installed, and my package manager just happened to suggest the ‘GLVND’ packages.

Without OpenGL at all, the reader will be very disappointed, especially since even his desktop-compositing will not be running – at first.

The all-nVidia packages, which are not the ‘GLVND’ packages, accept certain primitive inputs from user-space applications which ‘GLVND’ does not implement, because those instructions are not generically a part of OpenGL. Yet certain applications do exist which require the non-‘GLVND’ version of GLX to be installed, and I leave it up to the reader to find out which packages provide that – if the reader needs them – and to write their names on a sheet of paper, prior to switching drivers.

It should be noted that, once we’ve decided on either the ‘GLVND’ version of GLX or the other one, trying to change our minds and switch to the other version is yet another nightmare, which I have not even contemplated so far. I’m content with the ‘GLVND’ GLX version. )
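If the reader wants to see which GLX variants a given release offers, and which variant actually ended up installed, something like the following should answer both questions (a sketch; the exact package names vary from one Debian release to the next):

apt-cache search nvidia | grep -i glx        # which GLX-related nVidia packages exist
dpkg -l | grep -i -E 'glvnd|nvidia.*glx'     # which variant is currently installed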

(Edited 04/30/2018 :

There is one aspect to installing up-to-date nVidia drivers which I should mention. The GeForce GTX 460 graphics card does not support 3rd-party frame-buffers. These 3rd-party frame-buffer drivers would normally allow <Ctrl>+<Alt>+F1 to show us not only a text-session, but one with decent resolution. Well, with older, legacy graphics chips, what I’d normally do is use the ‘uvesafb’ frame-buffer driver, just to obtain that. With modern nVidia hardware and drivers, this frame-buffer driver is incompatible. It even causes crashes, because with it, essentially, two drivers are trying to control the same hardware.

Just this evening, I tried to get ‘uvesafb’ working one more time, to no avail, even though it does work on the computer I name ‘Phoenix’. )
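Disabling it again, on a typical Debian setup, would amount to roughly the following (a sketch, which assumes that uvesafb was being pulled in via the initramfs; adjust it according to however the module was enabled in the first place):

# As root: prevent the module from loading, then rebuild the initramfs:
echo "blacklist uvesafb" > /etc/modprobe.d/blacklist-uvesafb.conf
update-initramfs -u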

So the way it looks now for me, the text-sessions are available, but only in very low resolution. They only exist for emergencies now.

But this is the net result I obtained, after I had disabled the ‘uvesafb’ kernel module again:

 


dirk@Plato:~$ infobash -v
Host/Kernel/OS  "Plato" running Linux 4.9.0-6-amd64 x86_64 [ Kanotix steelfire-nightly Steelfire64 171013a LXDE ]
CPU Info        8x Intel Core i7 950 @ clocked at Min:1600.000Mhz Max:2667.000Mhz
Videocard       NVIDIA GF104 [GeForce GTX 460]  X.Org 1.19.2  [ 1920x1080 ]
Processes 262 | Uptime 1:16 | Memory 3003.9/12009.6MB | HDD Size 2000GB (6%used) | GLX Renderer GeForce GTX 460/PCIe/SSE2 | GLX Version 4.5.0 NVIDIA 375.82 | Client Shell | Infobash v2.67.2
dirk@Plato:~$

dirk@Plato:~$ clinfo | grep units
  Max compute units                               7
dirk@Plato:~$ clinfo | grep multiple
  Preferred work group size multiple              32
dirk@Plato:~$ clinfo | grep Warp
  Warp size (NV)                                  32
dirk@Plato:~$


 

So what this means in practice is that I now have OpenGL 4.5 on the computer named ‘Plato’, as well as a fully-functional install of ‘OpenCL’ and ‘CUDA’, contrary to what I had according to this earlier posting.

Therefore, GPU-computing will not just exist in theory for me now, but also in practice.
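A quick way to double-check both claims from a shell, assuming that the ‘mesa-utils’ package (which provides glxinfo) and the ‘nvidia-smi’ utility are installed:

glxinfo | grep "OpenGL version"    # should report the 4.5.0 NVIDIA 375.82 version string
nvidia-smi                         # confirms that the proprietary driver, and hence CUDA, is active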

And this shows that the graphics card on that machine ‘only’ possesses 224 cores after all (7 compute units × a warp size of 32), not the 7×48 which I had expected earlier, according to a Windows-based tool – no longer installed.

(Updated 04/29/2018 … )


Finding Out How Many GPU Cores we have, Under Linux

One question which I see written about often on the Web, is how to find out certain stats about our GPU, under Linux. Under Windows, we had GUI-based programs such as ‘GPU-Z’, etc., but under Linux, the information can be just a bit harder to find.

I think that one tool which helps is to have ‘OpenCL’ installed, as well as the command-line utility ‘clinfo’, which can be provided by any one of several packages, and which is also the name of the resulting command.
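On recent Debian releases there is a stock package with that same name, so that obtaining the utility can be as simple as the following (as root); on other setups, it arrives bundled with a vendor’s OpenCL packages instead:

apt-get install clinfo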

If we’re serious about programming our GPU, then having a GUI won’t help us much. We’d need to get dirty with code in that case, and then text-based solutions are suitable. But if we’re just spectators in this sport, then two stats we may nevertheless want to know are:

  1. How many GPU-Core-Groups do we have – since GPU-Cores are organized as Groups, and
  2. How many actual Shader-Cores do we have in each Group?

Interestingly, the grouping of shader-cores also represents how many vector-processors such GPU-computing tools as OpenCL see. And so, on the computer which I name ‘Klystron’, which is running Debian / Jessie, when typing in these commands as user, I get the following results:

 


dirk@Klystron:~$ clinfo | grep units
  Max compute units:                             4
  Max compute units:                             6
dirk@Klystron:~$ clinfo | grep multiple
  Kernel Preferred work group size multiple:     1
  Kernel Preferred work group size multiple:     64
dirk@Klystron:~$

 

This needs some explaining. On ‘Klystron’, I have the proprietary AMD packages for OpenCL installed, since that computer has both an AMD CPU and a Radeon GPU. This means that this version of OpenCL is able to carry out computing on both, and so I have the stats for both devices.

In this case, the second entries reveal that I have 6×64 = 384 cores on the GPU.
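For readers who want the multiplication done for them, a rough one-liner such as the following should work against either version of the clinfo output shown in these postings, since it simply pairs each device’s compute-unit count with its work-group size multiple:

clinfo | awk '
  /Max compute units/        { units = $NF + 0 }
  /work group size multiple/ { print "Estimated shader-cores for this device:", units * ($NF + 0) }
'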
