Alpha Blending and Multisampling Revisited.

I recently read a Wikipedia article on the subject of alpha-blending, and found it to be of good quality. Yet there is a major inconsistency between how that article explains the subject and how I once explained it myself:

According to the article, alpha-entities (i.e., models with an alpha-channel, which therefore have per-pixel translucency) are rendered starting with the background first and ending with the most-superimposed models last. Hence, formally, alpha-blended models should be rendered ‘back-to-front’. Yet in computer graphics, much rendering is done ‘front-to-back’, i.e., starting with the closest model and ending with the farthest.
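For reference, here is back-to-front (‘over’) blending in the simplified form commonly used in real-time rendering, written in my own notation rather than quoted from the article. C_src and α_src are the color and alpha of the incoming, closer fragment, and C_dst and α_dst are what the frame-buffer already holds:

```latex
% Back-to-front ("over") compositing, straight (non-premultiplied) colours:
\begin{align}
C_{dst}'      &= \alpha_{src} \, C_{src} + (1 - \alpha_{src}) \, C_{dst} \\
\alpha_{dst}' &= \alpha_{src} + (1 - \alpha_{src}) \, \alpha_{dst}
\end{align}
```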

More specifically, the near-to-far rendering order applies to non-alpha entities, which do not need to be alpha-blended, and it exists only as an optimization. Non-alpha entities could also be rendered back-to-front, except that doing so usually forces the graphics hardware to do much more work, shading many fragments that end up hidden anyway. By rendering the opaque, closer models first, the graphics engine allows their triangles to occlude much of what lies behind them; the occluded fragments then never consume Fragment-Shader invocations, which leads to higher frame-rates.
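As a minimal sketch of the state such an opaque, near-to-far pass relies on (assuming an OpenGL-style API; the function name is my own, and the early rejection of hidden fragments is a behavior of typical hardware, not something this code enforces):

```cpp
#include <GL/gl.h>

// Depth state for the opaque (non-alpha) pass. Once the nearest geometry has
// populated the Z-buffer, fragments of farther geometry fail the depth test
// and, on typical hardware, are rejected before the Fragment Shader runs
// ("early Z"), which is where the frame-rate gain comes from.
void setOpaquePassState()
{
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);    // keep only fragments closer than what is stored
    glDepthMask(GL_TRUE);    // opaque geometry does write the Z-buffer
    glDisable(GL_BLEND);     // no alpha-blending needed for opaque geometry
}
```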

One fact that should be observed about the equations in the Wikipedia article is that they are asymmetric, and that they therefore only work when rendering is done back-to-front. What the reader should be aware of is that a complementary set of equations can be written, which produces optically correct results when the rendering order is nearest-to-farthest. In fact, writing those complementary equations is a mental exercise every reader should try; one possible form is sketched below.
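As one possible answer to that exercise (my own sketch, not necessarily the form the article would choose), front-to-back compositing can let the destination alpha accumulate opacity, so that (1 - α_dst) measures how much of the pixel is still uncovered; note that the accumulated color is then effectively premultiplied by alpha:

```latex
% Front-to-back ("under") compositing:
\begin{align}
C_{dst}'      &= C_{dst} + (1 - \alpha_{dst}) \, \alpha_{src} \, C_{src} \\
\alpha_{dst}' &= \alpha_{dst} + (1 - \alpha_{dst}) \, \alpha_{src}
\end{align}
```

Each new, farther fragment only fills whatever fraction of the pixel has not already been covered by closer ones, which is why the optical result matches rendering the same surfaces back-to-front with the ‘over’ equations.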

When routine rendering gets done, it is the content-developer (i.e., the game-dev), not the game-engine-designer, who has the capacity to change the rendering order. So what will probably get done in game-design is that models are grouped, so that the non-alpha entities are rendered first, and only after that the alpha-entities are rendered, as a separate rendering-group.
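In sketch form, that grouping might look something like the following. The Entity type and the drawFrame() driver are hypothetical, standing in for whatever a real engine provides; the point is only the ordering:

```cpp
#include <algorithm>
#include <vector>

struct Entity {
    float viewSpaceDepth;   // distance of the entity's origin from the virtual camera
    bool  hasAlpha;         // true if the entity's textures carry an alpha channel
    void  draw() const { /* issue the actual draw call here (omitted) */ }
};

void drawFrame(const std::vector<Entity>& entities)
{
    std::vector<Entity> opaque, translucent;
    for (const Entity& e : entities)
        (e.hasAlpha ? translucent : opaque).push_back(e);

    // Group 1: non-alpha entities, near-to-far, to maximize early occlusion.
    std::sort(opaque.begin(), opaque.end(),
              [](const Entity& a, const Entity& b) { return a.viewSpaceDepth < b.viewSpaceDepth; });
    for (const Entity& e : opaque) e.draw();

    // Group 2: alpha entities, far-to-near, so that the conventional
    // back-to-front blending equations remain optically correct.
    std::sort(translucent.begin(), translucent.end(),
              [](const Entity& a, const Entity& b) { return a.viewSpaceDepth > b.viewSpaceDepth; });
    for (const Entity& e : translucent) e.draw();
}
```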

(Edit 04/14/2018, 21h35 :

But there is one specific situation which would require that the mentioned complementary set of equations be used; actually, according to my latest thoughts, I would only suggest a modified set of equations. That situation arises when “Multi-Sampling” is implemented in a way that treats the fraction of sub-samples belonging to one real sample, which actually get rendered to, as if that fraction were just a multiplier applied to the “Source-Alpha” that the entity’s textures already possess. In that case, the alpha-blending must actually be adapted to the rendering order that is more common in game-design.

The reason I say so is the simple observation that, according to conventional alpha-blending, if a pixel is rendered to with 0.5 opacity twice, it not only ends up 0.75 opaque, but its resulting color also favors the second color rendered to it twice as strongly as it favors the first (as the short calculation after this note shows). For alpha-blending this is correct, because alpha-blending simply mirrors the optics that successive ‘layers’ would cause.

But with multi-sampling, a real pixel could be half-covered twice, and there would be no reason why the second color rendered to it should contribute more strongly than the first did… )
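To make the observation in the note above concrete, here is the arithmetic, using the conventional back-to-front equations from earlier and starting from a background color C_0 with zero destination alpha (my own worked example):

```latex
% Two successive renders with \alpha_{src} = 0.5, colours C_1 then C_2,
% onto a background C_0 with \alpha_{dst} = 0:
\begin{align}
\text{after } C_1 : \quad C_{dst} &= 0.5\,C_1 + 0.5\,C_0,              & \alpha_{dst} &= 0.5 \\
\text{after } C_2 : \quad C_{dst} &= 0.5\,C_2 + 0.25\,C_1 + 0.25\,C_0, & \alpha_{dst} &= 0.75
\end{align}
```

C_2 ends up with a weight of 0.5 while C_1 only has 0.25, so the later color counts twice as strongly, and the pixel is 0.75 opaque.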

You see, this subject has given me reason to rethink the somewhat overly-complex ideas I once had on how best to achieve multi-sampling. And the cleanest way would be to treat the fraction of sub-samples rendered to as just a component of the source-alpha.
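Here is that idea written out as plain CPU-side code, with hypothetical names; it is only an illustration of the arithmetic, not a description of how any particular GPU implements multi-sampling:

```cpp
struct Color { float r, g, b; };

// The fraction of a real pixel's sub-samples that a fragment covers is folded
// into the source alpha, and the blend itself stays an ordinary alpha-blend.
Color blendWithCoverage(Color dst, Color src,
                        float textureAlpha,     // alpha the entity's texture already possesses
                        int coveredSubSamples,  // sub-samples of this pixel that were rendered to
                        int totalSubSamples)    // e.g. 4 or 8 sub-samples per real pixel
{
    float coverage = float(coveredSubSamples) / float(totalSubSamples);
    float effectiveAlpha = coverage * textureAlpha;   // coverage acts as a multiplier on source alpha

    return { effectiveAlpha * src.r + (1.0f - effectiveAlpha) * dst.r,
             effectiveAlpha * src.g + (1.0f - effectiveAlpha) * dst.g,
             effectiveAlpha * src.b + (1.0f - effectiveAlpha) * dst.b };
}
```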

(Updated 04/14/2018, 21h35 … )


Alpha-Blending

The concept that a single object or entity can be translucent seems rather intuitive. A less intuitive concept is that the degree to which it is translucent can be stated once per pixel, through an alpha-channel.

Just as every pixel can possess one channel for each of the three additive primary colors, Red, Green and Blue, it can possess a 4th channel named Alpha, which states on a scale of [0.0 … 1.0] how opaque the pixel is.

This does not apply only to texture images, whose pixels are named texels, but also to Fragment Shader output, as well as to the pixels actually associated with the drawing surface. The latter provide what is known as destination alpha, since the drawing surface is also the destination, or target, of the rendering.

Hence, there exist images whose pixels have a 4-channel format, as opposed to others with a mere 3-channel format.
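In code, the difference is simply one more channel per pixel; this is only a minimal illustration, not any particular image library’s actual layout:

```cpp
#include <cstdint>

// 3-channel pixel: color only.
struct PixelRGB  { std::uint8_t r, g, b; };

// 4-channel pixel: color plus alpha, where 0 means fully transparent
// and 255 corresponds to 1.0, i.e., fully opaque.
struct PixelRGBA { std::uint8_t r, g, b, a; };
```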

Now, there is no clear way for a display to display alpha. In certain cases, alpha in an image being viewed is hinted at by software as a checkerboard pattern, but what we see is nevertheless color information, not transparency. And so a logical question is: what is the function of this alpha-channel that is being rendered to?

There are many ways in which content from numerous sources can be blended, but most of the high-quality ones require that much communication take place between rendering-stages. A strategy is desired in which the output from rendering-passes is combined without requiring much communication between the passes, and alpha-blending is a de-facto strategy for that.

By default, closer entities, according to the positions of their origins in view space, are rendered first. 3D rendering starts when the CPU gives the command to ‘draw’ one entity, which has an arbitrary position in 3D, and this may be contrary to what 2D graphics might teach us to expect. What the near-to-far order does is put closer values into the Z-buffer as soon as possible, so that the Z-buffer can prevent the rendering of the more distant entities as efficiently as possible.

Alas, alpha-entities (i.e., entities that possess alpha textures) do not write the Z-buffer, because if they did, they would prevent more-distant entities from being rendered at all, and then there would be no point in the closer ones being translucent.
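A plausible way to express that with an OpenGL-style API (again a sketch with a function name of my own choosing, not a quotation of any engine) is to leave the depth test enabled, so that opaque geometry drawn earlier still occludes the alpha entities, while turning depth writes off:

```cpp
#include <GL/gl.h>

// State for the alpha-entity pass: test against the Z-buffer, but do not write it.
void setAlphaPassState()
{
    glEnable(GL_DEPTH_TEST);   // opaque geometry drawn earlier still occludes us
    glDepthMask(GL_FALSE);     // but alpha entities leave the Z-buffer untouched
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // conventional "over" blending
}
```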

The default way in which alpha-blending works is that the alpha-channel of the drawing surface records the extent to which entities have been left visible by previous entities, which were rendered closer to the virtual camera.
