Getting Steam to run with proprietary nVidia.

According to this earlier posting, I had switched my Debian 9 / ‘Stretch’ -based computer named ‘Plato’ from the open-source ‘Nouveau’ drivers, which are delivered via ‘Mesa’ packages, to the proprietary nVidia drivers, because the latter offer more power in several ways.

But then one question we’d want an answer to is how to get “Steam” to run. From the Linux package manager alone, the available games are slim pickings, while through a Steam membership we can buy – that is, pay money for – Linux versions of at least some powerful games.

But when I tried to launch Steam naively – and it used to launch – I only got a message-box which said that Steam could not find the 32-bit version of ‘’ – and then Steam died. This temporary result ‘makes sense’, because I had only installed the default, 64-bit libraries that go with the proprietary packages. Steam is a 32-bit application by default, and I have a multi-arch setup as a prerequisite.

And so my next project became to create a 32-bit interface to the rendering system, alongside the existing 64-bit one.

The steps that I took assume that I had previously chosen to install the ‘GLVND’ version of the GLX binaries, and unless the reader has done the same, the following recipe will be incorrect. However, the ‘GLVND’ packages which I initially installed are not listed in the posting linked to above; they belonged to the suggested packages, which I had written down on paper and then added to the command-line that transformed my graphics system.

When I installed the additional 32-bit libraries, I did get a disturbing error message, but my box still runs.
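As a sketch of the recipe, on a Debian 9 multi-arch system it amounts to installing the i386 build of the GLVND client library. The package name below is the one I believe Debian Stretch’s non-free section uses; readers should verify it against their own repositories:

```shell
# The recipe itself (run manually; adapt the package name to your repos):
#   sudo dpkg --add-architecture i386
#   sudo apt-get update
#   sudo apt-get install libgl1-nvidia-glvnd-glx:i386
#
# Afterwards, verify that the 32-bit client library really landed.
# dpkg's ':i386' architecture qualifier lets us query that build directly.
check_32bit_gl() {
    dpkg-query -W -f='${Status}\n' "$1" 2>/dev/null \
        | grep -q 'install ok installed' && echo "present" || echo "missing"
}
check_32bit_gl libgl1-nvidia-glvnd-glx:i386
```

The check prints ‘present’ or ‘missing’, which is also a quick way to confirm whether the 64-bit-only state described above still holds.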

Continue reading Getting Steam to run with proprietary nVidia.

Finding Out How Many GPU Cores We Have Under Linux, Revisited!

In this earlier posting, I tried to describe, in a roundabout way, what the shader cores of a GPU – the Graphics Processing Unit – actually do.

And in this earlier posting, I tried to encourage even Linux-users to find out approximately how many GPU cores they have, given a correct install of the open standard OpenCL – for actual GPU computing – using the command-line tool ‘clinfo’. But that use of ‘clinfo’ left much to be desired, including the fact that sometimes, OpenCL will only report a maximum number of cores per core group that is a power of 2, even if the actual number of cores is not a power of two.
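For what it’s worth, ‘clinfo’ can still yield an estimate: its ‘Max compute units’ line counts compute units (on nVidia, Streaming Multiprocessors), not individual cores, so one multiplies by the per-unit core count of the architecture at hand. The figure 128 below is the Maxwell-generation value and is my assumption; readers with other chips must substitute their own:

```shell
# 'clinfo | grep -i "max compute units"' reports compute units, not cores.
# Multiply by the cores-per-unit figure for your architecture:
cores_estimate() { echo $(( $1 * $2 )); }

# Example: 5 compute units, assuming 128 cores each (Maxwell):
cores_estimate 5 128    # -> 640
```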

Well, if we have the full set of nVidia drivers installed – including nVidia CUDA, which is a competitor to OpenCL – as well as the nVidia Settings GUI, it turns out that there is a much more accurate answer:
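On the command-line, the same driver exposes that figure through ‘nvidia-settings’, since ‘CUDACores’ is one of the attributes the tool can query. A minimal sketch, assuming the first GPU and guarding for machines where the tool is absent:

```shell
# 'CUDACores' is an nvidia-settings attribute; '[gpu:0]' selects the first GPU.
if command -v nvidia-settings >/dev/null 2>&1; then
    cuda_cores=$(nvidia-settings -q '[gpu:0]/CUDACores' -t)
else
    cuda_cores="unknown (nvidia-settings not installed)"
fi
echo "CUDA cores: $cuda_cores"
```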


But this method has the drawback that it’s only available to us when we have both nVidia hardware and the proprietary drivers installed. This could lead some people to the false impression that maybe only nVidia graphics cards have real GPUs?



Setting Up VESAFB Under GRUB2

In this earlier posting, I had written that I switched to the proprietary nVidia graphics-drivers on the computer I name ‘Plato’, but that for the purposes of managing several console-sessions using

  • <Ctrl>+<Alt>+F1 ,
  • <Ctrl>+<Alt>+F7

my customary solution, to set up ‘uvesafb’, no longer works. What happens is that everything runs fine until the command is given to switch back to the X-server session, at which point the system crashes. Thus, as I had first left it, console-sessions were available, but at some horribly-low default resolution (without ‘uvesafb’). This had to be remedied, and the way I chose to solve it was actually to use the older ‘vesafb’, which is not a 3rd-party frame-buffer ‘device’, but rather a set of kernel-settings which can be specified in the file ‘/etc/default/grub’.

Because my computers use ‘GRUB2’, the most elegant way to solve this problem would have been to put the following two lines into that file – that is, to uncomment and adapt them – like so:
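The two lines in question are GRUB2’s standard graphics-mode settings; the resolution shown here is an illustration, and ‘keep’ tells the kernel to retain whatever mode GRUB set:

```shell
# In /etc/default/grub -- uncomment and adapt, then run 'sudo update-grub':
GRUB_GFXMODE=1920x1080
GRUB_GFXPAYLOAD_LINUX=keep
```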




But on ‘Plato’, this solution was not available, because 1920×1080 was not an available frame-buffer resolution. On this machine, I’d have needed to set the highest possible VESA resolution first, and then to state whether to “keep” it, or to use some other available resolution, in order actually to start Linux.

This might have resulted in a ‘lightdm’ log-in screen set to an unsuitable resolution, all the way until the user logs in and the Plasma 5 desktop manager re-establishes his or her personal desktop-resolution – just because 1920×1080 was not available from GRUB.

Instead, the following first command reveals which frame-buffer resolutions are available on any one machine, and then it’s still possible today to give the option ‘vga=#’, using the exact code which was provided by the first command:
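As a sketch of what I mean, the first command is ‘hwinfo --framebuffer’ (the ‘hwinfo’ package may need to be installed), which prints one line per available VESA mode; the ‘vga=’ option then wants that same mode number, only in decimal:

```shell
# hwinfo --framebuffer prints lines such as:
#   Mode 0x0317: 1024x768 (+2048), 16 bits
# The kernel's 'vga=' option takes the mode number in decimal:
to_vga() { printf '%d\n' "$1"; }

to_vga 0x0317    # -> 791, i.e. the GRUB line would carry vga=791
```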

Continue reading Setting Up VESAFB Under GRUB2

Another Caveat, To GPU-Computing

I had written in previous postings that, on the computer which I name ‘Plato’, I had replaced the open-source ‘Nouveau’ graphics-drivers with proprietary ‘nVidia’ drivers, which offer more capabilities. In this previous posting, I described a bug that had developed between these recent graphics-drivers and ‘xscreensaver’.

Well, there is more that can go wrong between the CPU and the GPU of a computer, if the computer is operating a considerable GPU.

When applications set up ‘rendering pipelines’ – aka contexts – they are loading data-structures as well as register-values onto the graphics card and into its graphics memory. Well, if an application – which, according to older standards, would only have resided in system memory – either crashes, or gets forcibly closed using a ‘kill -9’ instruction, then the kernel and the graphics driver will fail to clean up whatever data-structures the application had set up on the graphics card.

The ideal behavior would be that, if an application crashes, the kernel cleans up not only whatever resources it was using in system memory and within the O/S, but also those belonging to graphics memory. And for all I know, the programmers of the open-source drivers under Linux may have made this a top priority. But apparently, nVidia did not.

And so a scenario which can take place is that the user needs to kill a hung application that was making heavy use of the graphics card, and that afterward, the state of the graphics card is corrupted, so that, for example, ‘OpenCL’ kernels will no longer run on it correctly.
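One way to watch for this kind of leak – and this is my own sketch, not a fix – is to compare the graphics-memory usage which the proprietary ‘nvidia-smi’ tool reports, before and after killing the application; memory that stays allocated with no client running suggests state the driver failed to reclaim:

```shell
# nvidia-smi ships with the proprietary driver; guard for machines without it.
if command -v nvidia-smi >/dev/null 2>&1; then
    gpu_mem=$(nvidia-smi --query-gpu=memory.used --format=csv,noheader)
else
    gpu_mem="unknown (nvidia-smi not installed)"
fi
echo "GPU memory in use: $gpu_mem"
```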

Continue reading Another Caveat, To GPU-Computing