I’m impressed with the Mesa drivers.

Before we install Linux on our computers, we usually try to make sure that we have either an NVIDIA or an AMD / Radeon GPU – the graphics chip-set – so that we can use either the proprietary NVIDIA drivers written by that company to run under Linux, the proprietary ‘fglrx’ drivers provided by AMD, or the ‘Mesa’ drivers, which are open-source, and which are designed by Linux specialists. Because each proprietary driver only covers one of the available families of chip-sets, this means that after we have installed Linux, our choice boils down to a choice between either the proprietary drivers for our chip-set, or the Mesa drivers.

I think that the main advantage of the proprietary drivers remains that they will offer our computers the highest version of OpenGL possible from the hardware – which could go up to 4.5! But obviously, there are also advantages to using Mesa, one of which is the fact that installing it does not install a ‘blob’ – an opaque piece of binary code which nobody can analyze. Another is the fact that the Mesa drivers will provide ‘VDPAU’, which the ‘fglrx’ drivers fail to implement. This last detail has to do with the hardware-accelerated playback of 2D video-streams that have been compressed with one of a very short list of codecs.

But I would add to the possible reasons for choosing Mesa the fact that its stated OpenGL version-number does not set a real limit on what the graphics chip-set can do. Officially, Mesa offers OpenGL 3.0, and on the surface, this could make it look as though its implementation of OpenGL is somewhat lacking, as a trade-off against its other benefits.

One way in which OpenGL seems to differ from its real-life competitor, DirectX, is in the system by which certain DirectX drivers and hardware advertise a numeric feature-level, where if that feature-level has been achieved, the game-designer can count on a specific set of features being implemented. What seems to happen with OpenGL instead is that version 3.0 must first be satisfied. And if it is, the 3D application next checks individually, by name, whether the OpenGL system available offers the specific extensions it needs. If the application is very well-written, it will test for the existence of every extension it needs, before giving the command to use that extension. But in certain cases, a failure to test this can lead to the graphics driver crashing, because the graphics card itself may not have the extension requested.
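
As a hedged illustration – this is my sketch, not code from any driver – the following C fragment tests for a named extension the way a careful 3D application would, before relying on it. It assumes a current OpenGL 3.0+ context created elsewhere, and the GLEW loader; the extension name at the bottom is just an example:

```c
#include <stdio.h>
#include <string.h>
#include <GL/glew.h>   /* assumed loader; resolves glGetStringi, etc. */

/* Return 1 if the running OpenGL implementation advertises 'name'.
 * Requires a current GL 3.0+ context and a prior glewInit(). */
static int has_extension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char *ext =
            (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}

/* Example use: only take a code path if the extension really exists. */
void maybe_use_float_textures(void)
{
    if (has_extension("GL_ARB_texture_float"))
        printf("Float textures available.\n");
    else
        printf("Falling back to 8-bit textures.\n");
}
```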

As an example of what I mean, my KDE / Plasma compositor settings allow me to choose ‘OpenGL 3.1’ as an available back-end, and when I select it, it works, in spite of my Mesa drivers ‘only’ achieving 3.0. I think that if the drivers had been stated to be 3.1, this could actually have meant they lose backward-compatibility with 3.0, while in fact they preserve that backward-compatibility as much as possible.
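
To see what the driver actually claims, one can query the version strings; a minimal sketch, again assuming a current context (on Mesa, the GL_VERSION string typically embeds the Mesa release as well):

```c
#include <stdio.h>
#include <GL/gl.h>

/* Print what the driver reports; on Mesa the GL_VERSION string looks
 * something like "3.0 Mesa 13.0.6" (illustrative value). */
void print_gl_version(void)
{
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
}
```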

(Screenshots: the KDE / Plasma compositor settings.)

Continue reading I’m impressed with the Mesa drivers.

There is a bug in the Wayland Compositor, under Debian Stretch.

One of the facts which I have written about before is that modern desktop managers will use compositing – i.e., will use hardware-acceleration – to render desktop effects, specifically when we are only running regular 2D applications with a GUI. This feature exists with the old KDE 4 under Debian / Jessie, as well as with the new Plasma 5 under Debian / Stretch.

Under Debian / Jessie, this feature is extremely stable. Under Debian / Stretch, it is not yet so.

What happens under Debian / Stretch, as far as I can make out, is that if an attempt is made to disable compositing, instead of this succeeding, the desktop-session becomes corrupted: black rectangles will display when we simply open multiple windows / dialogs. AFAICT, this can only be fixed by rebooting / starting a new user-session.

I became aware of this when running Steam-based games on the computer I name ‘Plato’. When games run that are heavy on OpenGL / hardware-rendering, it is normal for the game-platform to try to switch compositing off, because the hardware-rendering of the game is often not compatible with the desktop-compositing. After I have finished my session with Steam, the rendering errors in my desktop manager become noticeable – and Steam does not have the permissions to install any system software.

I do not blame this on Steam per se, because I can reproduce the problem by just pressing <Shift>+<Alt>+F12, which used to be the key-combination under KDE 4 that toggled desktop compositing on and off at will. Within seconds under Plasma 5, this key-combination will also cause the malfunction.

(Updated 12/03/2017:)

Now, there is a simplistic workaround for me:

Continue reading There is a bug in the Wayland Compositor, under Debian Stretch.

Alpha-Blending

The concept by which a single object or entity can be translucent seems rather intuitive. But another concept, which is less intuitive, is that the degree to which it is so can be stated once per pixel, through an alpha-channel.

Just as every pixel can possess one channel for each of the three additive primary colors – Red, Green and Blue – it can possess a fourth channel named Alpha, which states, on a scale of [ 0.0 … 1.0 ], how opaque it is.

This does not just apply to texture images, whose pixels are named texels, but also to Fragment Shader output, as well as to the pixels actually associated with the drawing surface. The latter provide what is known as destination alpha, since the drawing surface is also the destination of the rendering, or its target.

Hence, there exist images whose pixels have a 4-channel format, as opposed to others with a mere 3-channel format.
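
As a hedged sketch of the difference – my own illustration, not anything from a specific engine – this is how a 4-channel versus a 3-channel image would be uploaded as an OpenGL texture. The function name and parameters are placeholders:

```c
#include <GL/gl.h>

/* Upload an image as a texture.  'pixels' must hold width*height*4
 * bytes for RGBA, or width*height*3 bytes for plain RGB. */
GLuint upload_texture(int width, int height,
                      const unsigned char *pixels, int has_alpha)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    if (has_alpha)
        /* 4-channel format: Red, Green, Blue and Alpha per texel. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    else
        /* 3-channel format: color only, implicitly fully opaque. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```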

Now, there is no clear way for a display to display alpha. In certain cases, alpha in an image being viewed is hinted at by software as a checkerboard pattern. But what we see is nevertheless color-information, and not transparency. And so a logical question can be: what is the function of this alpha-channel which is being rendered to?

There are many ways in which content from numerous sources can be blended, but most of the high-quality ones require that much communication take place between rendering-stages. A strategy is desired in which the output from rendering-passes is combined without requiring much communication between the passes. And alpha-blending is a de-facto strategy for that.

By default, closer entities – according to the positions of their origins in view space – are rendered first. What this does is put closer values into the Z-buffer as soon as possible, so that the Z-buffer can prevent the rendering of the more distant entities as efficiently as possible. 3D rendering starts when the CPU gives the command to ‘draw’ one entity, which has an arbitrary position in 3D. This may be contrary to what 2D graphics might teach us to predict.

Alas, alpha-entities – i.e., entities that possess alpha textures – do not write to the Z-buffer, because if they did, they would prevent more-distant entities from being rendered. And then there would be no point in the closer ones being translucent.

The default way in which alpha-blending works is that the alpha-channel of the display records the extent to which entities have been left visible by previous entities, which have been rendered closer to the virtual camera.
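
A minimal OpenGL sketch of this ordering, assuming two hypothetical scene functions of my own naming: opaque geometry writes the Z-buffer first; alpha geometry then tests it without writing it, blended with the usual ‘over’ operator, C_out = A_src · C_src + (1 − A_src) · C_dst:

```c
#include <GL/gl.h>

void draw_opaque_entities(void);              /* hypothetical */
void draw_alpha_entities_back_to_front(void); /* hypothetical */

void render_frame(void)
{
    /* Pass 1: opaque entities, closest first, writing the Z-buffer. */
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    draw_opaque_entities();

    /* Pass 2: alpha entities, sorted back-to-front.  They test the
     * Z-buffer but do not write it, so more-distant entities already
     * drawn remain visible through them. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);
    draw_alpha_entities_back_to_front();

    /* Restore defaults for the next frame. */
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
```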

Continue reading Alpha-Blending

Why R2VB Should Not Simply be Deprecated

The designers of certain graphics cards / GPUs have decided that Render-To-Vertex-Buffer is deprecated. In order to appreciate why I believe this to be a mistake, the reader first needs to know what R2VB is – or was.

The rendering pipeline of DirectX 9 differs somewhat from that of DirectX 11, yet the two are also very similar. DirectX 9 was extremely versatile, with a wide range of applications written to use it, while the fancier Dx 11 pipeline is more powerful, but has less of an established base of algorithms.

Dx 9 is approximated in OpenGL 2, while Dx 10 and Dx 11 are approximated in OpenGL 3(+).
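
For readers who want a concrete picture, here is a hedged OpenGL sketch of the general technique – not the DirectX API itself: vertex data is first rendered into a floating-point frame-buffer attachment, the result is copied into a buffer object through the pixel-pack binding, and that same buffer is then re-bound as a vertex buffer. The sizing and attribute layout are illustrative, and error checking is omitted:

```c
#include <GL/glew.h>   /* assumed loader for FBO / VBO entry points */

/* After rendering width x height RGBA32F texels of vertex data into a
 * framebuffer object (bound elsewhere), pull them into a buffer object
 * and draw them as points. */
void render_to_vertex_buffer(int width, int height)
{
    GLuint buf;
    GLsizeiptr bytes = (GLsizeiptr)width * height * 4 * sizeof(GLfloat);

    glGenBuffers(1, &buf);

    /* Read the rendered pixels straight into the buffer object. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, buf);
    glBufferData(GL_PIXEL_PACK_BUFFER, bytes, NULL, GL_STREAM_COPY);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, (void *)0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    /* Reinterpret the same buffer as per-vertex positions. */
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (void *)0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_POINTS, 0, width * height);
}
```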

Continue reading Why R2VB Should Not Simply be Deprecated