Alpha Blending and Multisampling Revisited.

I recently read a Wikipedia article on the subject of alpha blending, and found it to be of good quality. Yet there is a major inconsistency between how that article explains the subject and how I once explained it myself:

According to the article, alpha entities – i.e., models with an alpha channel, and therefore with per-pixel translucency – are rendered starting with the background first and ending with the most superimposed models last. Formally, then, alpha-blended models should be rendered ‘back-to-front’. Yet in computer graphics, much rendering is done ‘front-to-back’, i.e., starting with the closest model and ending with the farthest.
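In its simplest form – assuming the frame buffer already holds an opaque background color C_old, and a translucent layer with color C_s and opacity α_s is drawn over it – the back-to-front blend the article describes amounts to:

    C_new = α_s · C_s + (1 − α_s) · C_old

Each successive, nearer layer repeats this step against whatever the frame buffer holds at that point.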

More specifically, the near-to-far rendering order applies to non-alpha entities, which don’t need to be alpha-blended, and it exists only as an optimization. Non-alpha entities in CGI can also be rendered back-to-front, except that doing so usually forces the graphics hardware to shade a much larger number of triangles. By rendering opaque, closer models first, the graphics engine allows their triangles to occlude much of what lies behind them; the occluded fragments never consume fragment-shader invocations, which in turn leads to higher frame rates.
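As a rough illustration of why that saves work – a toy model of early depth rejection in Python, with hypothetical names, not any engine’s real pipeline – a nearer, opaque fragment that lands first causes a farther fragment at the same pixel to be discarded before it is ever shaded:

```python
import math

depth_at_pixel = math.inf      # nothing drawn yet; smaller depth = nearer to the camera
shader_invocations = 0

def submit_opaque_fragment(depth, shade):
    """Deposit an opaque fragment, shading it only if it passes the depth test."""
    global depth_at_pixel, shader_invocations
    if depth >= depth_at_pixel:
        return None                     # occluded: rejected without shading
    depth_at_pixel = depth
    shader_invocations += 1             # only surviving fragments cost a shader run
    return shade()

# Front-to-back: the near crate (depth 2.0) arrives before the far wall (depth 9.0),
# so the wall's fragment never triggers the (expensive) shading step.
submit_opaque_fragment(2.0, lambda: (0.8, 0.2, 0.1))
submit_opaque_fragment(9.0, lambda: (0.3, 0.3, 0.3))
print(shader_invocations)               # prints 1
```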

One fact that should be observed about the equations in the Wikipedia article is that they are asymmetric, and that they therefore only work when rendering back-to-front. What the reader should be aware of is that a complementary set of equations can be written, which will produce optically correct results when the rendering order is nearest-to-farthest. In fact, writing out those complementary equations is a mental exercise worth doing.
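One possible form of those complementary equations is the standard front-to-back accumulation, in which the frame buffer keeps a running, opacity-weighted color C_acc and a running opacity α_acc, both starting at zero, and each layer – drawn nearest-to-farthest, with color C_s and opacity α_s – updates them as:

    C_acc ← C_acc + (1 − α_acc) · α_s · C_s
    α_acc ← α_acc + (1 − α_acc) · α_s

The factor (1 − α_acc) is the transparency still remaining in front of the new, farther layer, which is why the results come out optically the same as blending back-to-front.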

When routine rendering gets done, the content developer – i.e., the game dev, not the game-engine designer – has the ability to change the rendering order. So what will probably be done in game design is that models are grouped, so that non-alpha entities are rendered first, and only after this has been done are the alpha entities rendered, as a separate rendering group.
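As a hedged sketch of that grouping in Python – the entity fields and the queue-building function here are hypothetical, not taken from any particular engine – the non-alpha group is sorted nearest-to-farthest, and the alpha group is sorted farthest-to-nearest and appended afterwards:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    distance: float      # distance from the camera
    has_alpha: bool      # True if the model's textures carry an alpha channel

def build_render_queue(entities):
    """Opaque entities first, nearest-to-farthest; alpha entities after, farthest-to-nearest."""
    opaque = sorted((e for e in entities if not e.has_alpha), key=lambda e: e.distance)
    translucent = sorted((e for e in entities if e.has_alpha), key=lambda e: -e.distance)
    return opaque + translucent

scene = [
    Entity("terrain", 50.0, False),
    Entity("window", 10.0, True),
    Entity("smoke",  30.0, True),
    Entity("crate",   8.0, False),
]
for e in build_render_queue(scene):
    print(e.name)
# crate, terrain (opaque, near-to-far), then smoke, window (alpha, far-to-near)
```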

(Edit 04/14/2018, 21h35 :

But there is one specific situation which would require that the complementary set of equations mentioned above be used – or actually, according to my latest thoughts, only a modified set of equations. That situation arises when “multisampling” is implemented in a way that treats the fraction of sub-samples belonging to one real sample, that actually get rendered to, as just a multiplier applied to the “source alpha” which the entity’s textures already possess. In that case, the alpha blending must actually be adapted to the rendering order that is more common in game design.

The reason I say so is the simple observation that, under conventional alpha blending, if a pixel is rendered to with 0.5 opacity twice, it not only ends up 0.75 opaque, but its resulting color also favors the second color rendered to it twice as strongly as it favors the first. For alpha blending this is correct, because alpha blending simply mirrors the optics that successive ‘layers’ would produce.

But with multisampling, a real pixel could be rendered to 0.5 times, twice, and there would be no reason why the second color rendered to it should contribute more strongly than the first did… )
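A short numeric check of both halves of that observation, using a small helper of my own for conventional ‘over’ blending (with the accumulated color stored premultiplied by its opacity):

```python
def over(dst, src_color, src_alpha):
    """Blend one source layer over the destination; dst = (premultiplied color, alpha)."""
    dst_color, dst_alpha = dst
    out_color = src_alpha * src_color + (1.0 - src_alpha) * dst_color
    out_alpha = src_alpha + (1.0 - src_alpha) * dst_alpha
    return out_color, out_alpha

empty = (0.0, 0.0)                      # nothing rendered to this pixel yet

# Tag the two 0.5-opacity layers with "pure" colors 1 and 0, so the final
# color reads off each layer's weight directly.
first_weight, alpha = over(over(empty, 1.0, 0.5), 0.0, 0.5)   # first layer tagged 1
second_weight, _    = over(over(empty, 0.0, 0.5), 1.0, 0.5)   # second layer tagged 1

print(alpha)           # 0.75 : the pixel ends up 0.75 opaque
print(first_weight)    # 0.25 : the first color's weight
print(second_weight)   # 0.5  : the second color's weight, twice as strong
```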

You see, this subject has given me reason to rethink the somewhat overly complex ideas I once had on how best to achieve multisampling. The cleanest way would be to treat the fraction of sub-samples rendered to as just one more component of the source alpha.
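A minimal sketch of that idea, purely as my own illustration (assuming 4× multisampling; the function name is hypothetical), would fold the coverage into the source alpha before any blending happens:

```python
def effective_source_alpha(texture_alpha, covered_subsamples, subsamples_per_pixel=4):
    """Treat the fraction of covered sub-samples as one more factor of the source alpha."""
    coverage = covered_subsamples / subsamples_per_pixel
    return texture_alpha * coverage

# A fragment whose texture is 0.8 opaque, but which covers only 2 of the pixel's 4
# sub-samples, would be blended as though its source alpha were 0.4.
print(effective_source_alpha(0.8, 2))   # prints 0.4
```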

(Updated 04/14/2018, 21h35 … )
