Alpha-Blending

The concept that a single object or entity can be translucent seems rather intuitive. A less intuitive concept is that the degree to which it is so can be stated once per pixel, through an alpha-channel.

Just as every pixel can possess one channel for each of the three additive primary colors, Red, Green and Blue, it can possess a 4th channel named Alpha, which states, on a scale from 0.0 (fully transparent) to 1.0 (fully opaque), how opaque the pixel is.

This does not just apply to texture images, whose pixels are named texels, but also to Fragment Shader output, as well as to the pixels actually associated with the drawing surface. The latter provide what is known as destination alpha, since the drawing surface is also the destination, or target, of the rendering.

Hence, there exist images whose pixels have a 4-channel format, as opposed to others with a mere 3-channel format.
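As a minimal illustration, this is how such a 4-channel pixel is commonly laid out in memory, in the widespread 8-bit-per-channel RGBA8 format (the struct name here is mine, for illustration):

```cpp
#include <cstdint>

// One pixel in the common RGBA8 format: 8 bits per channel.
// Alpha runs from 0 (fully transparent) to 255 (fully opaque);
// shaders usually see it normalized to the range 0.0 .. 1.0 .
struct RGBA8 {
    std::uint8_t r;
    std::uint8_t g;
    std::uint8_t b;
    std::uint8_t a;  // the 4th channel: opacity
};
```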

Now, a display has no clear way to show alpha. In certain cases, software hints at the alpha in an image being viewed, as a checkerboard pattern, but what we see is nevertheless color-information, not transparency. And so a logical question is: what is the function of this alpha-channel, which is being rendered to?

There are many ways in which content from numerous sources can be blended, but most of the high-quality ones require that much communication take place between rendering-stages. What is desired is a strategy in which the output from rendering-passes is combined without requiring much communication between the passes. Alpha-blending is the de-facto strategy for that.
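The best-known such combination rule, often called the 'over' operator, needs nothing but the incoming fragment's color and alpha, plus whatever color is already at the destination (the subscript names here are mine):

C_out = α_src · C_src + (1 − α_src) · C_dst

Because each fragment only reads and overwrites the one destination pixel it lands on, no pass needs to know anything about the passes that came before it.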

By default, closer entities, according to the positions of their origins in view space, are rendered first. Doing so puts closer values into the Z-buffer as soon as possible, so that the Z-buffer can prevent the rendering of more-distant entities as efficiently as possible. 3D rendering starts when the CPU gives the command to ‘draw’ one entity, which has an arbitrary position in 3D, and this ordering may be contrary to what 2D graphics might teach us to predict.
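A minimal sketch of what such a front-to-back ordering could look like on the CPU side, before the draw commands are issued (the Entity type and its viewSpaceDepth field are hypothetical, for illustration):

```cpp
#include <algorithm>
#include <vector>

struct Entity {
    float viewSpaceDepth;  // distance of the entity's origin from the camera
    // ... mesh, textures, etc.
};

void renderOpaque(std::vector<Entity>& entities) {
    // Sort closest-first, so that near depth values enter the Z-buffer
    // early and can occlude as many later fragments as possible.
    std::sort(entities.begin(), entities.end(),
              [](const Entity& a, const Entity& b) {
                  return a.viewSpaceDepth < b.viewSpaceDepth;
              });
    for (const Entity& e : entities) {
        (void)e;  // issue the actual draw command for e here
    }
}
```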

Alas, alpha-entities, i.e., entities that possess alpha textures, do not write to the Z-buffer, because if they did, they would prevent more-distant entities from being rendered, and there would then be no point in the closer ones being translucent.
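In OpenGL terms, this corresponds to leaving the depth test enabled, so that opaque geometry already in the Z-buffer still occludes the alpha-entities, while disabling depth writes for them. A minimal sketch:

```cpp
#include <GL/gl.h>

// Draw translucent ("alpha") entities: test against the Z-buffer,
// but do not write to it.
void drawAlphaEntities() {
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);   // alpha entities leave the Z-buffer untouched

    // ... issue draw calls for the alpha entities here ...

    glDepthMask(GL_TRUE);    // restore depth writes afterwards
}
```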

The default way in which alpha-blending works is that the alpha-channel of the drawing surface records the extent to which entities have been left visible by previous entities, rendered closer to the virtual camera.
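A minimal OpenGL sketch of how such a front-to-back scheme can be set up, assuming the destination alpha is initialized to 1.0 (everything still visible) when the frame is cleared; the exact blend factors are my reading of the scheme, not necessarily what any given engine uses:

```cpp
#include <GL/gl.h>
// glBlendFuncSeparate requires OpenGL 2.0+; on some platforms it must be
// obtained through a loader such as glad or GLEW.

void setUpFrontToBackBlending() {
    // Destination alpha starts at 1.0: nothing has obscured this pixel yet.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glEnable(GL_BLEND);
    // Color: weight each incoming fragment by how visible the pixel still
    //        is (the destination alpha), and add it to what is there.
    // Alpha: multiply the remaining visibility by (1 - source alpha),
    //        so that closer entities progressively use it up.
    // Exact results assume the fragment shader outputs a color that has
    // been premultiplied by its own alpha.
    glBlendFuncSeparate(GL_DST_ALPHA, GL_ONE,
                        GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
}
```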


Multisampling

One of the problems with bit-mapped graphics is “aliasing”. This is the phenomenon by which pixels along the edge of a pure shape seem either to belong to that shape or not, resulting in an edge with jagged, rectangular errors. Even at fairly high resolutions, this can lead to a low-quality experience. And so, since the beginning of digital graphics, schemes have been devised to make this effect less pronounced, even when we do choose raster graphics.

3D has not been left out. One strategy which has existed for some time is to super-sample each screen pixel, say by subdividing it by a fixed factor, such as into 4×4 sub-pixels. This is also known as “Full-Screen Anti-Aliasing”, or ‘FSAA’. The output of the sub-pixels can be mixed in various ways, to result in a blended color-value for the resulting screen pixel.
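A minimal sketch of the simplest such mixing rule, a box filter that just averages the sub-pixels rendered for each screen pixel (single channel, and the buffer layout is my own assumption, for illustration):

```cpp
#include <cstddef>
#include <vector>

// Reduce a super-sampled image (factor x factor sub-pixels per screen
// pixel, one float per pixel for brevity) to screen resolution by
// averaging: the simplest FSAA "mixing" rule, a box filter.
std::vector<float> downsample(const std::vector<float>& super,
                              std::size_t width, std::size_t height,
                              std::size_t factor) {
    std::vector<float> out(width * height, 0.0f);
    const std::size_t superWidth = width * factor;
    for (std::size_t y = 0; y < height; ++y) {
        for (std::size_t x = 0; x < width; ++x) {
            float sum = 0.0f;
            for (std::size_t sy = 0; sy < factor; ++sy)
                for (std::size_t sx = 0; sx < factor; ++sx)
                    sum += super[(y * factor + sy) * superWidth
                                 + (x * factor + sx)];
            out[y * width + x] = sum / float(factor * factor);
        }
    }
    return out;
}
```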

But one problem with FSAA, from the start, has been that it slows rendering down considerably. And so an alternative was devised, called Multi-Sampling.

The main idea behind Multi-Sampling is that only the screen pixels which span a triangle-edge suffer objectionably from aliasing. Therefore, most of the screen pixels are not super-sampled. Admittedly, the limited logic of the GPU has a hard time distinguishing which triangle-edges are also model / entity -edges, where aliasing does the most damage. But because the GPU has specialized logic circuits, referred to somewhat incorrectly as one render-output generator, that rasterize a given triangle, those circuits can feasibly be expanded to also detect which screen pixels do straddle the edge between two triangles. And then, for the sake of argument, only those pixels may be subdivided into 4×4 sub-samples, each of which is Fragment-Shaded once.
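In practice, an application does not implement this edge detection itself; it only asks for a multi-sampled drawing surface and switches the feature on. A minimal sketch using GLFW (the sample count of 4 is just an example):

```cpp
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;

    // Request a drawing surface with 4 coverage samples per pixel.
    glfwWindowHint(GLFW_SAMPLES, 4);
    GLFWwindow* window = glfwCreateWindow(800, 600, "MSAA", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    glEnable(GL_MULTISAMPLE);  // let the rasterizer resolve triangle edges

    // ... render loop ...

    glfwTerminate();
    return 0;
}
```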

But the logic gets just a bit more complicated. There is no simple way in which the render-output generator can know which other triangle a current triangle borders on. This is because, in general, once each triangle has been processed, it is forgotten. Once each Geometry Shader input-topology has been processed, it, too, is forgotten, and the GS proceeds to process the next input-topology with complete amnesia…
