Alpha Blending and Multisampling Revisited.

I recently read a Wikipedia article on the subject of alpha-blending, and found it to be of good quality. Yet there is a major inconsistency between how that article explains the subject, and how I once explained it myself:

According to the article, alpha-entities – i.e., models with an alpha-channel, which therefore have per-pixel translucency – are rendered starting with the background first, and ending with the most-superimposed models last. Hence, formally, alpha-blended models should be rendered ‘back-to-front’. Yet in computer graphics, much rendering is actually done ‘front-to-back’, i.e., starting with the closest model and ending with the farthest.

More specifically, the near-to-far rendering order applies to non-alpha entities, which therefore also don’t need to be alpha-blended, and it exists purely as an optimization. Non-alpha entities in CGI can also be rendered back-to-front, except that doing so usually requires the graphics hardware to shade a much larger number of fragments. By rendering opaque, closer models first, the graphics engine allows their triangles to occlude much of what lies behind them; the occluded fragments then consume no Fragment-Shader invocations, which leads to higher frame-rates.

One fact which should be observed about the equations in the Wikipedia article is that they are asymmetric, and that they will therefore only work when rendering is done back-to-front. What the reader should be aware of is that a complementary set of equations can be written, which will produce optically correct results when the rendering order is nearest-to-farthest. In fact, each reader should do the mental exercise of writing those complementary equations.
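For readers who would rather check their work, here is a minimal sketch, in C, of one common form of the front-to-back (‘under’) equations; the struct and the function name are mine, and the colour channels are assumed to be premultiplied by alpha:

/* Front-to-back ("under") compositing: the destination already holds
 * the closer fragments, and each new fragment lies behind everything
 * accumulated so far. */
typedef struct { float r, g, b, a; } Pixel;

static void blend_under(Pixel *dest, const Pixel *src)
{
    float t = 1.0f - dest->a;   /* how much of the new, farther fragment still shows through */
    dest->r += t * src->r;
    dest->g += t * src->g;
    dest->b += t * src->b;
    dest->a += t * src->a;
}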

When routine rendering gets done, it is the content-developer – a.k.a. the game-dev – and not the game-engine designer, who has the capacity to change the rendering order. So what will probably get done in game design is that models are grouped, so that non-alpha entities are rendered first, and only after this has been done are the alpha-entities rendered, as a separate rendering-group.

(Edit 04/14/2018, 21h35 :

But there is one specific situation which would require that a modified set of equations be used (my first thought had been, that the complementary set just mentioned should be used). And that would be when “Multi-Sampling” is implemented in such a way, that the fraction of sub-samples belonging to one real sample which actually get rendered to, is treated as just a multiplier to be applied to the “Source-Alpha” which the entity’s textures already possess. In that case, the alpha-blending must actually be adapted to the rendering order that is more common in game design.

The reason I’d say so is the simple observation that, according to conventional alpha-blending, if a pixel is rendered to with 0.5 opacity twice, not only does it end up 0.75 opaque, but its resulting color favors the second color rendered to it twice as strongly as it favors the first. For alpha-blending this is correct, because alpha-blending simply mirrors the optics that successive ‘layers’ would cause.
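To make the arithmetic concrete: if colour C1 is first rendered with alpha 0.5 onto an empty pixel, and colour C2 is then rendered over it, also with alpha 0.5, conventional ‘over’ blending gives:

A_out = 0.5 + 0.5 * ( 1 - 0.5 ) = 0.75 ,
RGB_out = 0.5 * C2 + ( 1 - 0.5 ) * ( 0.5 * C1 ) = 0.5 * C2 + 0.25 * C1 .

So C2 ends up weighted twice as strongly as C1.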

But with multi-sampling, a real pixel could be rendered to with 0.5 coverage, twice, and there would be no reason why the second color rendered to it should contribute more strongly than the first did… )

You see, this subject has given me reason to rethink the somewhat overly-complex ideas I once had, on how best to achieve multi-sampling. And the cleanest way would be to treat the fraction of sub-pixels rendered to as just a component of the source alpha.
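As a minimal sketch of that idea (exactly how the hardware reports coverage is left open here; the sub-pixel counts below are assumptions):

/* Fold multi-sample coverage into the Source Alpha.  'covered' is the
 * number of a real pixel's sub-pixels which the fragment actually hit;
 * 'total' is the number of sub-pixels per real pixel. */
static float effective_src_alpha(float texture_alpha, int covered, int total)
{
    return texture_alpha * ((float)covered / (float)total);
}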

(Updated 04/14/2018, 21h35 … )

(As of 04/13/2018 : )

It’s already an established fact in game design, that the game-engine is fed a flag by the game content, which tells it whether one specific model is an alpha-model or a non-alpha model. Well, rather than asking game-devs to adapt their games to graphics cards capable of multi-sampling, the game-engine could use this flag to distinguish between models that are alpha because the game-dev decided they should be, and models that are merely being treated similarly to alpha-models, due to how the multi-sampling may have been implemented. Then, depending on whether this flag is set or not, the game-engine can just apply one set of alpha-blending computations or the other.
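In code, such a dispatch might look like the following sketch; blend_over() and blend_coverage() are hypothetical names, standing for conventional alpha-blending and for the modified equations given further down:

typedef struct { float r, g, b, a; } Pixel;

/* Hypothetical forward declarations; blend_coverage() is sketched
 * later in this posting. */
void blend_over(Pixel *dest, const Pixel *src);      /* conventional blending */
void blend_coverage(Pixel *dest, const Pixel *src);  /* modified blending */

/* 'is_alpha_model' is the flag which the content already feeds the engine. */
void blend_fragment(Pixel *dest, const Pixel *src, int is_alpha_model)
{
    if (is_alpha_model)
        blend_over(dest, src);
    else
        blend_coverage(dest, src);
}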

(Edit 04/14/2018 : )

One practice which content-designers – i.e., game-devs – have applied in the past, is to change whether a model writes the Z-buffer or not. This setting is independent of whether the model reads the Z-buffer, so that models which no longer write it will still be occluded by other models which do.

But one reason for which some models do not write the Z-buffer can be the fact that they are alpha-models, which means that scene details behind them should still be partially visible.

Hence, if a game-dev discovers that his alpha-particles, which he placed in a later rendering-group, are being multi-sampled fine, but that his non-alpha particles are not, a simple trick he could try would be to place the non-alpha particles into a later rendering-group as well, and to set them not to write the Z-buffer, even though they aren’t alpha-entities. The game-dev may find that those particles will then render fine, on engines that use multi-sampling.
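In OpenGL terms – assuming the engine exposes these render states at all – the trick amounts to something like the following; draw_particles() stands in for the engine’s own draw call and is hypothetical:

#include <GL/gl.h>

void draw_particles(void);   /* hypothetical engine call */

/* Draw a rendering-group of particles which read, but do not write,
 * the Z-buffer. */
void render_particles_no_zwrite(void)
{
    glEnable(GL_DEPTH_TEST);   /* still occluded by whatever wrote Z earlier */
    glDepthMask(GL_FALSE);     /* but these particles do not write Z */
    draw_particles();
    glDepthMask(GL_TRUE);      /* restore Z-writes for later groups */
}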

OTOH, another practice which some game-devs have is to create two rendering-groups, both for non-alpha models: for example, one to render the (“convex”) scene geometry, and the other to render models which would be considered actors within the scene.

I would propose that the best way to adapt alpha-blending, as a means of implementing (non-alpha) multi-sampling, would be to use the following equations:

A_out = A_src + A_dest ,

A_out == 0 ->
  RGB_out = 0 .

A_out > 1 ->
  A_out = 1 ,
  RGB_out =
    RGB_src * A_src + RGB_dest * ( 1 - A_src ) .

0 < A_out <= 1 ->
  RGB_out =
    ( RGB_src * A_src + RGB_dest * A_dest ) / A_out .


A_out == 1 ->
  A_dest == ( 1 - A_src ) ->
    RGB_src * A_src + RGB_dest * A_dest ==
    RGB_src * A_src + RGB_dest * ( 1 - A_src ) .


Mind you, this will produce somewhat unreliable results if a real pixel is rendered to more than twice.
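Transcribed into C, those equations would look roughly as follows (a sketch only; the struct and function names are mine):

typedef struct { float r, g, b, a; } Pixel;

/* The modified blending proposed above: the Source Alpha is assumed to
 * already include the fraction of sub-pixels rendered to, and the
 * alphas are summed rather than layered. */
static void blend_coverage(Pixel *dest, const Pixel *src)
{
    float a_out = src->a + dest->a;

    if (a_out <= 0.0f) {                      /* A_out == 0 */
        dest->r = dest->g = dest->b = 0.0f;
        dest->a = 0.0f;
    } else if (a_out > 1.0f) {                /* A_out > 1 */
        dest->r = src->r * src->a + dest->r * (1.0f - src->a);
        dest->g = src->g * src->a + dest->g * (1.0f - src->a);
        dest->b = src->b * src->a + dest->b * (1.0f - src->a);
        dest->a = 1.0f;
    } else {                                  /* 0 < A_out <= 1 */
        dest->r = (src->r * src->a + dest->r * dest->a) / a_out;
        dest->g = (src->g * src->a + dest->g * dest->a) / a_out;
        dest->b = (src->b * src->a + dest->b * dest->a) / a_out;
        dest->a = a_out;
    }
}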


(Edit 6/17/2019, 7h50 : )

I can visualize a reason for which some sort of compromise must be applied, when a complex scene combines alpha-blending with multi-sampling. According to what I just wrote, by the time the alpha entities in their later rendering-group are rendered, the Destination Alpha may, for the sake of argument, already be (1.0), just because the surrounding, opaque scene has already been rendered, and their visibility may be decided largely by whether parts of their geometry have been occluded by the non-alpha entities of the earlier rendering-group.

The reason this situation poses a special problem is the fact that often, alpha entities are defined by a mesh of triangles, just as most other models would be. Those triangles need to fit together in such a way as not to form a mesh obvious to the player. Before multi-sampling was invented, this did not pose a problem, because each edge of a triangle either occupied a full pixel or did not. But if, in addition to the entity possessing a Source Alpha value that might be consistent, each edge of its triangles rendered to full pixels partially – i.e., to only some of their sub-pixels – then alpha-blending would take place on the full pixels at the borders between triangles: the entity alpha-blending onto itself. This would undoubtedly cause the full pixels at such ‘internal edges’ to receive coloration to a degree inconsistent with that of pixels in the middle of a triangle, thereby making the mesh obvious and incorrect.

Further, detecting at the hardware level the silhouette of a model, as opposed to the edges between the triangles of its mesh, is difficult, because such silhouettes – where the surface becomes tangent as seen from the camera axis – would be the responsibility of shaders to compute, and not of the graphics hardware or driver.

One hypothetical compromise that would solve this problem could be that the edges of triangles not cause any attenuation of an alpha entity’s Source Alpha at all. Hence, the geometry of such a mesh might still generate one Z-value per sub-pixel, even at the edges of triangles (and even though alpha-entities are also set not to write the Z-buffer), and might render one set of (R, G, B, A) values to each full pixel, but the causal Source Alpha might only be attenuated by the fraction of its sub-pixels that are occluded by preexisting Z-buffer values…

This result could be improved upon if the stencil buffer were involved in rendering with multi-sampling. The cutout of the entity could first be written to the stencil buffer (at the full resolution of the sub-pixels), so that multi-sampling / anti-aliasing could be applied at the edge of the entity as seen from the virtual camera position, but still not at the edges of individual triangles. This writing to the stencil buffer would also take occlusion into account, at the full resolution of the sub-pixels. In a separate operation, the Source Alpha of the entity would be modulated by the fraction of the stencil-buffer sub-pixels, belonging to any one full pixel, which it owns, and then a set of (R, G, B, A) colour values could be rendered at the (lower) resolution of the full pixels, but such that a triangle either owns a full pixel or does not.
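Such a two-pass scheme might be sketched as follows; every helper named here is hypothetical, standing for a step of the compromise rather than for any real API:

#define SUBSAMPLES 8   /* sub-pixels per full pixel; an assumption */

typedef struct { float r, g, b, a; } Pixel;

/* Hypothetical helpers, naming the steps described above: */
void  write_cutout_to_stencil(int entity_id);        /* pass 1: per sub-pixel, Z-tested */
int   stencil_subpixels_owned(int x, int y);         /* coverage, per full pixel */
Pixel shade_full_pixel(int x, int y);                /* one (R,G,B,A) set per full pixel */
void  blend_coverage(Pixel *dest, const Pixel *src); /* from the earlier sketch */

void render_alpha_entity(int entity_id, Pixel *fb, int w, int h)
{
    /* Pass 1: write the entity's cutout to the stencil buffer at
     * sub-pixel resolution, honouring occlusion per sub-pixel, so that
     * anti-aliasing happens at the entity's silhouette, but not at its
     * internal triangle edges. */
    write_cutout_to_stencil(entity_id);

    /* Pass 2: shade at the resolution of full pixels, a triangle either
     * owning a full pixel or not, and modulate the Source Alpha by each
     * full pixel's stencil coverage. */
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int owned = stencil_subpixels_owned(x, y);
            if (owned == 0)
                continue;
            Pixel s = shade_full_pixel(x, y);
            s.a *= (float)owned / (float)SUBSAMPLES;
            blend_coverage(&fb[y * w + x], &s);
        }
    }
}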

However, if the graphics hardware had such a reserved use for its stencil buffer, then an issue might arise as to how a game designer’s content should behave, for example when the game-dev is trying to program the stencil buffer himself. Does hardware capable of multi-sampling have a separate stencil-buffer bit for itself, which regular content design ‘does not see’?

Dirk
