Finding Out How Many GPU Cores We Have, Under Linux

One question I often see asked on the Web is how to find out certain stats about our GPU under Linux. Under Windows, there are GUI-based programs such as ‘GPU-Z’, but under Linux the information can be a bit harder to find.

I think that one tool which helps is to have ‘OpenCL’ installed, along with the command-line utility ‘clinfo’, which is one of several related packages to install, and which then also exists as an actual command of the same name.

If we’re serious about programming our GPU, then a GUI won’t help us much; we’d need to get our hands dirty with code, and text-based tools suit that. But even if we’re just spectators in this sport, two stats we may still want to know are:

  1. How many GPU-Core-Groups do we have – since GPU-Cores are organized as Groups, and
  2. How many actual Shader-Cores do we have in each Group?

Interestingly, the grouping of shader cores also determines how many vector processors GPU-computing tools such as OpenCL see. And so, on the computer which I name ‘Klystron’, which runs Debian / Jessie, typing these commands as a regular user gives the following results:

 


dirk@Klystron:~$ clinfo | grep units
  Max compute units:                             4
  Max compute units:                             6
dirk@Klystron:~$ clinfo | grep multiple
  Kernel Preferred work group size multiple:     1
  Kernel Preferred work group size multiple:     64
dirk@Klystron:~$

 

This needs some explaining. On ‘Klystron’, I have the proprietary AMD packages for OpenCL installed, since that computer has both an AMD CPU and a Radeon GPU. This means that the OpenCL installation can carry out computing on both devices, and so I get the stats for both.

In this case, the second entry of each pair belongs to the GPU, and together they reveal that I have 6 × 64 = 384 shader cores on it: 6 core groups (the compute units), with 64 cores in each group (the preferred work-group size multiple).
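For anyone who would rather query these numbers from code than grep the clinfo output, here is a minimal C sketch of my own (the file name, the fixed-size arrays and the thin error-handling are my choices, not anything the clinfo package provides), which asks each OpenCL device for CL_DEVICE_MAX_COMPUTE_UNITS. The ‘preferred work group size multiple’ line would additionally require a compiled kernel and clGetKernelWorkGroupInfo(), so it isn’t shown here.

/* list_compute_units.c (hypothetical file name)
 * Build with something like:  gcc list_compute_units.c -lOpenCL -o list_compute_units
 */
#define CL_TARGET_OPENCL_VERSION 120   /* silence newer headers' version warnings */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        fprintf(stderr, "No OpenCL platforms found.\n");
        return 1;
    }

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;

        /* CL_DEVICE_TYPE_ALL returns both CPU and GPU devices, which is why
         * clinfo printed two "Max compute units" lines on this machine. */
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                           8, devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256];
            cl_uint units = 0;

            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(units), &units, NULL);

            printf("%s: %u compute units\n", name, units);
        }
    }
    return 0;
}

On ‘Klystron’, this should print one line for the CPU device and one for the Radeon GPU, matching the two ‘Max compute units’ lines above.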


I’m impressed with the Mesa drivers.

Before we install Linux on our computers, we usually try to make sure that we have either an NVIDIA or an AMD / Radeon GPU – the graphics chip-set – so that we can use either the proprietary NVIDIA drivers that company designs to run under Linux, the proprietary ‘fglrx’ drivers provided by AMD, or the open-source ‘Mesa‘ drivers, which are designed by Linux specialists. Because each proprietary driver only covers one of the available families of chip-sets, after we have installed Linux our choice boils down to either the proprietary drivers or the Mesa drivers.

I think that the main advantage of the proprietary drivers remains that they offer our computers the highest version of OpenGL the hardware can support – which can go up to 4.5! But obviously, there are also advantages to using Mesa, one of which is that installing it doesn’t install a ‘blob’ – an opaque piece of binary code which nobody can analyze. Another is that the Mesa drivers provide ‘VDPAU‘, which the ‘fglrx’ drivers fail to implement. This last detail has to do with hardware-accelerated playback of 2D video streams that have been compressed with one of a very short list of codecs.

But I would add to the possible reasons for choosing Mesa the fact that its stated OpenGL version number does not set a real limit on what the graphics chip-set can do. Officially, Mesa offers OpenGL 3.0, and on the surface this could make its implementation of OpenGL look somewhat lacking, as a trade-off against its other benefits.

One way in which ‘OpenGL’ seems to differ from its real-life competitor, ‘DirectX’, is in the system by which certain DirectX drivers and hardware offer a numeric feature level; if that feature level has been achieved, the game designer can count on a specific set of features being implemented. What seems to happen with OpenGL instead is that version 3.0 must first be satisfied. If it is, the 3D application next checks individually, by name, whether the available OpenGL system offers specific OpenGL extensions. A very well-written application will test for the existence of every extension it needs before issuing the commands that use it. But in certain cases, a failure to test this can lead to a crash, because the graphics card itself may not offer the extension requested.
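To illustrate the kind of test I mean, here is a hedged C sketch of how an application can check for a named extension under OpenGL 3.0+. It assumes a context is already current and that a loader such as GLEW has been initialized; the extension name in the usage comment is only a placeholder example.

#include <string.h>
#include <GL/glew.h>   /* any loader that exposes glGetStringi will do */

/* Returns 1 if the currently-bound OpenGL context advertises the extension. */
static int has_gl_extension(const char *wanted)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);

    for (GLint i = 0; i < count; ++i) {
        const char *name = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (name && strcmp(name, wanted) == 0)
            return 1;
    }
    return 0;
}

/* Usage sketch: only take the code path that needs the extension if it is present.
 *
 *   if (has_gl_extension("GL_ARB_geometry_shader4")) {
 *       ... use the geometry-shader path ...
 *   } else {
 *       ... fall back to a simpler path ...
 *   }
 */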

As an example of what I mean, my KDE / Plasma compositor settings allow me to choose ‘OpenGL 3.1’ as an available back-end, and when I select it, it works, in spite of my Mesa drivers ‘only’ achieving 3.0. I think that if the drivers had been stated to be 3.1, this could actually have meant losing backward-compatibility with 3.0, while in fact they preserve that backward-compatibility as much as possible.

(Screenshots: KDE / Plasma compositor settings, 2017-11-27.)


The Answer to my Previous Question was No.

In this posting, I had asked myself whether my problems in getting the OGRE 1.10 OpenGL 3+ rendering engine to run under Mesa drivers were due to desktop compositing.

And the answer seems to be No.

There are certain operations related to geometry shaders which the Mesa drivers still do not support, even though they claim to be OpenGL 3.3-capable.

One of them is ‘Render To Vertex Buffer’.

And another is ‘glBeginTransformFeedback()‘ / ‘glEndTransformFeedback()‘.

The Mesa drivers report all of this as invalid OpenGL operations, even though I know these are valid capabilities of certain other drivers.
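To make clear which call sequence I mean, the following is a minimal C sketch of capturing vertex-shader outputs with transform feedback. It assumes the program object was linked after a call to glTransformFeedbackVaryings() and that the buffer object already exists and is large enough; it is the glBeginTransformFeedback() / glEndTransformFeedback() pair that the Mesa drivers rejected.

#include <GL/glew.h>   /* assumes a loader and a current OpenGL 3.0+ context */

GLuint tfBuffer;   /* assumed: created with glGenBuffers() / glBufferData() */
GLuint program;    /* assumed: linked after glTransformFeedbackVaryings() */

/* Run one draw call with its vertex outputs captured into tfBuffer. */
void capture_vertices(GLsizei vertexCount)
{
    glUseProgram(program);
    glEnable(GL_RASTERIZER_DISCARD);                      /* skip the fragment stage */
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);

    glBeginTransformFeedback(GL_POINTS);                  /* primitive mode must match
                                                             the draw call below */
    glDrawArrays(GL_POINTS, 0, vertexCount);
    glEndTransformFeedback();

    glDisable(GL_RASTERIZER_DISCARD);
}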

Dirk