Finding Out How Many GPU Cores We Have Under Linux, Revisited!

In this earlier posting, I tried to describe, in a roundabout way, what the shader cores of a GPU – the Graphics Processing Unit – actually do.

And in this earlier posting, I tried to encourage even Linux users to find out approximately how many GPU cores they have, given a correct install of the open standard OpenCL – for actual GPU computing – using the command-line tool ‘clinfo’. But that use of ‘clinfo’ left much to be desired, including the fact that OpenCL will sometimes only report a maximum number of cores per core group that is a power of two, even when the actual number of cores is not a power of two.
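For the record, the relevant line of ‘clinfo’ output can be isolated with something like the following, assuming ‘clinfo’ is installed and that the field label has not changed between versions:

clinfo | grep -i 'max compute units'

Even then, what OpenCL calls ‘compute units’ are groups of shader cores rather than individual cores, which is part of why that figure still needs interpretation.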

Well, if we have the full set of nVidia drivers installed, including nVidia CUDA – which is a competitor to OpenCL – as well as the nVidia Settings GUI, it turns out that there is a much more accurate answer:

(Screenshot: screenshot_20180502_204640_c)
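As far as I know – and this is an assumption on my part, not something shown in the screenshot – the same proprietary stack also exposes that figure to the command line, through an ‘nvidia-settings’ attribute named ‘CUDACores’:

nvidia-settings -q CUDACores -t

The ‘-t’ flag just asks for terse output, i.e. the bare number.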

But this method has the drawback that it’s only available to us when we have both nVidia hardware and the proprietary drivers installed. This could lead some people to the false impression that maybe, only nVidia graphics cards have real GPUs.

Dirk

 

Setting Up VESAFB Under GRUB2

In this earlier posting, I had written that I switched to the proprietary nVidia graphics drivers on the computer I name ‘Plato’, but that, for the purposes of managing several console sessions using

  • <Ctrl>+<Alt>+F1 ,
  • <Ctrl>+<Alt>+F7

my customary solution – to set up ‘uvesafb’ – no longer works. What happens is that everything runs fine, until the command is given to switch back to the X-server session, at which point the system crashes. Thus, as I had first left it, console sessions were available, but only at some horribly low default resolution (without ‘uvesafb’). This had to be remedied, and the way I chose to solve it was to use the older ‘vesafb’, which is not a 3rd-party frame-buffer ‘device’, but rather an in-kernel facility whose settings can be specified in the file ‘/etc/default/grub’.

Because my computers use ‘GRUB2’, the most elegant way to solve this problem would be to put the following two lines into that file, or to uncomment and adapt them, like so:

 


GRUB_GFXMODE=1920x1080
GRUB_GFXPAYLOAD_LINUX="keep"
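
On a Debian-based system, an edit to ‘/etc/default/grub’ only takes effect once the GRUB configuration has been regenerated:

sudo update-grub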


 

But on ‘Plato’, this solution was not available, because 1920×1080 was not an available frame-buffer resolution there. On that machine, I would first have needed to set the highest VESA resolution that actually is available, and then have been in the position of having to state whether to “keep” that resolution, or to use some other available one, when actually starting Linux.

This might have resulted in a ‘lightdm’ log-in screen set to an unsuitable resolution, which would have persisted until the user logged in and the Plasma 5 desktop manager re-established his or her personal desktop resolution – just because 1920×1080 was not available from GRUB.

Instead, the first command below reveals which frame-buffer resolutions are available on any one machine, and then it’s still possible today to give the kernel the option “vga=#”, using the exact code which that first command provided:
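The exact command is behind the ‘Continue reading’ link below, but one combination which fits this description – assuming the ‘hwinfo’ package is installed, and treating the mode code 0x317 purely as a placeholder – would look roughly like so:

sudo hwinfo --framebuffer
# Note the hexadecimal mode number of the wanted resolution,
# e.g. a line such as "Mode 0x0317: 1024x768 ...".

# Then, in /etc/default/grub (0x317 being only an example):
GRUB_CMDLINE_LINUX_DEFAULT="quiet vga=0x317"

sudo update-grub

On pure-UEFI machines without a VESA BIOS, ‘hwinfo --framebuffer’ may report nothing, in which case the “vga=#” option is of no help anyway.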

Continue reading Setting Up VESAFB Under GRUB2

Another Caveat, To GPU-Computing

I had written in previous postings that, on the computer which I name ‘Plato’, I had replaced the open-source ‘Nouveau’ graphics drivers with proprietary ‘nVidia’ drivers, which offer more capabilities. In this previous posting, I described a bug that had developed between these recent graphics drivers and ‘xscreensaver’.

Well, there is more that can go wrong between the CPU and the GPU of a computer, if the computer is operating a GPU of considerable power.

When applications set up ‘rendering pipelines’ – a.k.a. contexts – they load data structures as well as register values onto the graphics card and into its graphics memory. Well, if the application – which, according to older standards, would only have resided in system memory – either crashes, or gets forcibly closed using a ‘kill -9’ instruction, then the kernel and the graphics driver will fail to clean up whatever data structures it had set up on the graphics card.

The ideal behavior would be that, if an application crashes, the kernel not only cleans up whatever resources it was using in system memory and within the O/S, but also whatever belonged to it in graphics memory. And for all I know, the programmers of the open-source drivers under Linux may have made this a top priority. But apparently, nVidia did not.

And so a scenario which can take place is that the user needs to kill a hung application that was making heavy use of the graphics card, and that afterward the state of the graphics card is corrupted, so that, for example, ‘OpenCL’ kernels will no longer run on it correctly.
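One way to see whether anything has been left behind on the card after such a forced kill – assuming the ‘nvidia-smi’ utility that ships with the proprietary driver packages is installed – is to compare what it reports before and after:

nvidia-smi

Its output includes the current graphics-memory usage as well as a list of processes still holding contexts on the GPU; memory that stays allocated even though the process list has emptied would be a bad sign.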

Continue reading Another Caveat, To GPU-Computing

xscreensaver Bug With Latest Proprietary nVidia Graphics Drivers

As described in this posting, I have just applied a major software update to the computer I name ‘Plato’, in which I replaced its open-source graphics drivers with the proprietary nVidia drivers suitable for its graphics card, and for its Linux build.

That would be driver version ‘375.82-1~deb9u1’, from the package manager, for a ‘Debian 9.4’ system.

I have just noticed a major bug, which other people should know about before they also decide to go with the proprietary drivers. These drivers tend to cause a malfunction with OpenGL-based ‘xscreensaver’ screen-savers, version ‘5.36-1’.

The bug seems to be that, if I use the graphical configuration tool to preview several screen-savers, then when I switch from one screen-saver to another, the previous GL screen-saver being previewed fails to terminate. This in turn causes the configuration window to freeze, so that the next-chosen screen-saver cannot be previewed; a small blank rectangle takes its place in the configuration window. When this happens, I actually need to ‘kill -9’ the screen-saver process – the one belonging to the screen-saver in question, not ‘/usr/bin/xscreensaver’ – which by then is taking up 100% of one CPU core as nice-time, before I can continue previewing screen-savers.
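In practice, that clean-up looks something like the following, where the PID 12345 is purely a placeholder for whatever process number actually shows up at the top of the list:

ps aux --sort=-%cpu | head -n 5
# The hung GL hack is the one pinned at ~100% of a core; kill it by its PID:
kill -9 12345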

The problem with this, as I see it, is that it could also happen after the screen-saver has locked the screen, and I have entered my password to unlock it. The mere fact that I was always able to unlock a GL-based screen-saver in the past was good in itself, but may only have been luck! The strange way in which my bug seems to differ from other users’ bug reports is that, when my OpenGL-based screen-saver was rendering to the root window – i.e., to the whole screen – it did exit properly when unlocked by me.

So, as it currently stands, I have set my screen-saver on the computer ‘Plato’ to just a blank screen… :-(

At the same time, OpenGL applications seem to run just fine, like this example, just tested:

(Screenshot: screenshot_20180430_142338)

However, since the package manager’s description of the screen-saver packages speaks of “GL(Mesa)” screen-savers, it may be better just to ‘remove’ the ‘xscreensaver-gl’ and ‘xscreensaver-gl-extra’ packages.
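On a Debian system, that removal would amount to something like:

sudo apt-get remove xscreensaver-gl xscreensaver-gl-extra

After that, only the non-GL screen-savers remain available.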

I found out that this bug also affects ‘rss-glx 0.9.1-6.1’.

(Updated 04/30/2018, 19h25 … )

Continue reading xscreensaver Bug With Latest Proprietary nVidia Graphics Drivers