Another Thought About Micropolygons

One task which I once undertook was to provide a general-case description of the possible shader-types that can run on a GPU – on the Graphics Processing Unit of a powerful graphics card. But one problem in providing such a panoply of ideas is the fact that too many shader-types can run, for one synopsis to explain all their uses.

One use for a Geometry Shader is to treat an input-triangle as if it consisted of multiple smaller triangles, each of which would then be called ‘a micropolygon’, so that the points between these micropolygons can be displaced along the normal-vector of the base-geometry, from which the original triangle came. One reason for which the emergence of DirectX 10, which also corresponds to OpenGL 3.x, was followed so quickly by DirectX 11, which also corresponds to OpenGL 4.y, is the fact that the tessellation of the original triangle can be performed most efficiently when yet another type of shader – a dedicated Tessellation Shader – performs only the tessellation. But in principle, a Geometry Shader is also capable of performing the actual tessellation, because in response to one input-triangle, a GS can output numerous points, which either form triangles again, or which form triangle strips. And in fact, if the overall pattern of the tessellation is rectangular, triangle strips make an output-topology for the GS that makes more sense than individual triangles. But I’m not going to get into ‘Geometry Shaders coded to work as Tessellators’ in this posting.

Instead, I’m going to focus on a different aspect of the idea of micropolygons, one that I think is more in need of explanation.

Our GS doesn’t just need to displace the micropolygons – hence, the term ‘displacement shader’ – but must, in addition, compute the normal-vector of each output Point. If this normal-vector were just the same as the one for the input-triangle, then the Fragment Shader which follows the Geometry Shader would not really be able to shade the resulting surface as having been displaced. And this is because, especially when viewing a 3D model from within a 2D perspective, the observer does not really see depth. He or she sees changes in surface-brightness, which his or her brain decides must have been due to subtle differences in depth. And so a valid question which arises is, ‘How can a Geometry Shader compute the normal-vectors of all its output Points, especially since shaders typically don’t have access to data that arises outside one invocation?’
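Just to make the displacement half of that idea concrete, what follows is a rough sketch, written as a GLSL source string the way it might be embedded in a C program, of a Geometry Shader which merely displaces the three corners of its input-triangle along the base-geometry’s normal-vector, according to a height-texture. All of the uniform and varying names (heightMap, displaceScale, vWorldPos, vNormal, vUV, uProjView) are hypothetical, and the sketch deliberately neither subdivides the triangle into micropolygons, nor answers the question just posed, of how the output normal-vectors ought to be recomputed:

```
/* A hedged sketch only:  a GLSL Geometry Shader, embedded as a C string,
 * which displaces each corner of the incoming triangle along the
 * normal-vector of the base geometry.  All names are hypothetical.
 * It does NOT tessellate into micropolygons, and it passes the base
 * normal through unchanged - which is exactly the open problem. */
static const char *displacement_gs_source =
    "#version 150 core\n"
    "layout(triangles) in;\n"
    "layout(triangle_strip, max_vertices = 3) out;\n"
    "\n"
    "in vec3 vWorldPos[];   /* world-space position, from the Vertex Shader */\n"
    "in vec3 vNormal[];     /* normal-vector of the base geometry */\n"
    "in vec2 vUV[];         /* texture coordinates */\n"
    "\n"
    "uniform sampler2D heightMap;      /* displacement amounts */\n"
    "uniform float     displaceScale;\n"
    "uniform mat4      uProjView;      /* projection * view matrix */\n"
    "\n"
    "out vec3 gNormal;      /* what the Fragment Shader will shade with */\n"
    "\n"
    "void main() {\n"
    "    for (int i = 0; i < 3; ++i) {\n"
    "        float h = texture(heightMap, vUV[i]).r * displaceScale;\n"
    "        vec3  p = vWorldPos[i] + vNormal[i] * h;\n"
    "        gNormal = vNormal[i];   /* unchanged - the open problem */\n"
    "        gl_Position = uProjView * vec4(p, 1.0);\n"
    "        EmitVertex();\n"
    "    }\n"
    "    EndPrimitive();\n"
    "}\n";
```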

(Updated 07/08/2018, 7h55 … )


A clarification about (Linux) Mesa / Nouveau Drivers

Two of the subjects which I like to blog about are direct-rendering and Linux graphics drivers.

Well, in This Earlier Posting, I had essentially written that on the Debian 9 / Stretch computer I name ‘Plato’, I have the ‘Mesa’ Drivers installed, and that therefore, that computer cannot benefit from OpenCL – massively-parallel GPU computing.

What may confuse some readers about this is the fact that elsewhere on the Internet, there is talk about ‘Nouveau’ Drivers, but less so about Mesa Drivers.

‘Mesa’, which I referred to, is a set of Debian meta-packages that is entirely open-source. It installs several drivers, and selects among them based on which graphics hardware we may have. But, because ‘Plato’ does in fact have an nVidia graphics card, the Mesa package automatically selects the Nouveau drivers, which are among the drivers it contains. Hence, when I wrote about using the Mesa Drivers, I was in fact writing about the Nouveau Drivers.

One of the reasons I have to keep using these Nouveau Drivers is the fact that presently, ‘Plato’ is extremely stable. There would be some performance-improvements if I were to switch to the proprietary drivers, but making the transition can be a nightmare. It involves black-lists, etc.

Another reason for me to keep using the Nouveau Drivers is the fact that, unlike how it was years ago, today those drivers support real OpenGL 3 hardware-rendering. Therefore, I’m already getting partial benefit from the hardware-rendering which the graphics card has, while using the open-source driver.

The only two things which I do not get are OpenCL and CUDA computing capabilities, as Nouveau does not support those. Therefore, anything which I write about that subject will have to remain theoretical for now.

I suppose that on my laptop ‘Klystron’, because I have the AMD chip-set more correctly installed, I could be using OpenCL…

Also, ‘Plato’ is not fully a ‘Kanotix’ system. When I installed ‘Plato’, I borrowed a core system from Kanotix, before Kanotix was ready for Debian / Stretch. This means that certain features which Kanotix would normally have, which make it easier to switch between graphics drivers, are not installed on ‘Plato’. And that really makes the idea of trying to switch daunting…

Dirk

 

I’m impressed with the Mesa drivers.

Before we install Linux on our computers, we usually try to make sure that we have either an NVIDIA or an AMD / Radeon GPU – the graphics chip-set – so that we can use either the proprietary NVIDIA drivers designed by that company to run under Linux, or the proprietary ‘fglrx’ drivers provided by AMD, or the ‘Mesa’ drivers, which are open-source, and which are designed by Linux specialists. Because each set of proprietary drivers only covers one of the available families of chip-sets, after we have installed Linux, our choice boils down to one between proprietary and Mesa drivers.

I think that the main advantage of the proprietary drivers remains that they will offer our computers the highest version of OpenGL possible from the hardware – which could go up to 4.5! But obviously, there are also advantages to using Mesa, one of which is the fact that installing it doesn’t install a ‘blob’ – an opaque piece of binary code which nobody can analyze. Another is the fact that the Mesa drivers will provide ‘VDPAU’, which the ‘fglrx’ drivers fail to implement. This last detail has to do with the hardware-accelerated playback of 2D video-streams that have been compressed with one of a very short list of codecs.

But I would add, to the possible reasons for choosing Mesa, the fact that its stated OpenGL version-number does not set a real limit on what the graphics chip-set can do. Officially, Mesa offers OpenGL 3.0, and this could make it look on the surface as though its implementation of OpenGL is somewhat lacking, as a trade-off against its other benefits.

One way in which ‘OpenGL’ seems to differ from its real-life competitor, ‘DirectX’, is in the system by which certain DirectX drivers and hardware offer a numeric feature-level, where, if that feature-level has been achieved, the game-designer can count on a specific set of features being implemented. What seems to happen with OpenGL instead is that 3.0 must first be satisfied. And if it is, the 3D application next checks individually whether the OpenGL system available offers specific OpenGL extensions by name. If the application is very well-written, it will test for the existence of every extension it needs, before giving the command to use that extension. But in certain cases, a failure to test this can lead to a crash, because the graphics card itself may not have the extension requested.
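Just to illustrate what such a test might look like in practice, below is a minimal sketch in C, assuming an OpenGL 3.0 (or higher) context has already been created and that a loader library such as GLEW has been initialized, so that glGetStringi() is available. The function names are my own inventions, and GL_ARB_draw_instanced is only meant as an example of an extension an application might depend on:

```
/* A minimal sketch, not taken from any specific application:  test for an
 * OpenGL extension by name before relying on it.  Assumes a >= 3.0 context
 * is current and that GLEW (or a similar loader) has been initialized. */
#include <string.h>
#include <GL/glew.h>

static int has_gl_extension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char *ext = (const char *) glGetStringi(GL_EXTENSIONS, (GLuint) i);
        if (ext && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}

/* Hypothetical usage:  only take a code-path that needs instanced
 * rendering, if the driver actually advertises the extension. */
static void draw_scene(void)
{
    if (has_gl_extension("GL_ARB_draw_instanced")) {
        /* ... use glDrawArraysInstancedARB() here ... */
    } else {
        /* ... fall back to drawing each instance separately ... */
    }
}
```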

As an example of what I mean, my KDE / Plasma compositor settings allow me to choose ‘OpenGL 3.1’ as an available back-end, and when I select it, it works, in spite of my Mesa drivers ‘only’ achieving 3.0. I think that if the drivers had been stated to be 3.1, then this could actually mean they lose backward-compatibility with 3.0, while in fact they preserve that backward-compatibility as much as possible.

(Screenshots: screenshot_20171127_185831 and screenshot_20171127_185939, showing the compositor back-end setting described above.)
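And for readers who want to see what their own driver reports at run-time, the following is a small sketch in C, assuming an OpenGL context has already been made current (for example by a toolkit such as GLFW or SDL); the function name is again my own invention:

```
/* A small sketch only:  ask the driver which OpenGL version and renderer
 * it actually reports, rather than relying on what the packaging states.
 * Assumes some OpenGL context is already current. */
#include <stdio.h>
#include <GL/gl.h>

static void print_reported_gl_info(void)
{
    /* These query-strings have been part of OpenGL since version 1.0: */
    printf("GL_VERSION : %s\n", (const char *) glGetString(GL_VERSION));
    printf("GL_RENDERER: %s\n", (const char *) glGetString(GL_RENDERER));
}
```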


There is a bug in the Wayland Compositor, under Debian Stretch.

One of the facts which I have written about before is that modern desktop managers will use compositing – i.e., will use hardware-acceleration – to render desktop effects, specifically when we are only running regular, 2D applications with a GUI. This feature exists with the old KDE 4 under Debian / Jessie, as well as with the new Plasma 5 under Debian / Stretch.

Under Debian / Jessie, this feature is extremely stable. Under Debian / Stretch, it is not yet so.

What will happen under Debian / Stretch, as far as I can make out, is that if an attempt has been made to disable compositing, instead of this succeeding, the desktop-session becomes corrupted, in that black rectangles will display when we simply open multiple windows / dialogs. AFAICT, this can only be fixed by rebooting / starting a new user-session.

I became aware of this when running Steam-based games on the computer I name ‘Plato’. When games run that are heavy on OpenGL / hardware-rendering, it’s normal for the game-platform to try to switch compositing off, because often, the hardware-rendering of the game is not compatible with the desktop-compositing. After I have finished my session with Steam, the rendering errors in my desktop manager become noticeable, and Steam does not gain the permissions to install any system software.

I do not blame this on Steam per se, because I can reproduce the problem by just pressing <Shift>+<Alt>+F12, which used to be the key-combination under KDE 4 that toggled desktop compositing on and off at will. Within seconds, under Plasma 5, this key-combination will also cause the malfunction.

(Updated 12/03/2017 : )

Now, there is a simplistic workaround for me…

 
