Alpha Blending and Multisampling Revisited.

I recently read a Wikipedia article about the subject of Alpha-Blending, and found it to be of good quality. Yet there is a major inconsistency between how this article explains the subject, and how I once explained it myself:

According to the article, alpha-entities – i.e., models with an alpha-channel, which therefore have per-pixel translucency – are rendered starting with the background first, and ending with the most-superimposed models last. Hence, formally, alpha-blended models should be rendered ‘back-to-front’. Yet, in computer graphics, much rendering is done ‘front-to-back’, i.e., starting with the closest model, and ending with the farthest.
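For reference – and so that the asymmetry discussed below is visible – here is one common way of writing that back-to-front step, with straight (non-premultiplied) alpha, restated from memory rather than quoted from the article. (C_s, α_s) is the fragment being applied, and (C_d, α_d) is what the framebuffer already holds:

\[ \alpha_{out} = \alpha_s + (1 - \alpha_s)\,\alpha_d \]
\[ C_{out} = \frac{\alpha_s C_s + (1 - \alpha_s)\,\alpha_d\, C_d}{\alpha_{out}} \]

When the destination is already opaque – α_d = 1, as it is once an opaque background has been rendered – this collapses to the familiar C_out = α_s·C_s + (1 − α_s)·C_d.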

More specifically, the near-to-far rendering order applies to non-alpha entities – which therefore also don’t need to be alpha-blended – and only exists as an optimization. Non-alpha entities in CGI could also be rendered back-to-front, except that doing so usually requires the graphics hardware to shade a much larger number of fragments. By rendering opaque, closer models first, the graphics engine allows their triangles to occlude much of what lies behind them, so that the occluded fragments never consume Fragment-Shader invocations, which in turn leads to higher frame-rates.

One fact which should be observed about the equations in the Wikipedia article is that they are asymmetric, and that they will therefore only work when rendering is done back-to-front. What the reader should be aware of is that a complementary set of equations can be written, which will produce optically-correct results when the rendering order is nearest-to-farthest. In fact, each reader should do the mental exercise of writing the complementary equations – or see one version of them below.
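For readers who would rather skip the exercise, here is one way the complementary, front-to-back step can be written. It comes out most simply if the accumulated color is kept in premultiplied form, starting from a destination that is fully transparent (C_d = 0, α_d = 0), so that each new, more-distant fragment only fills whatever transparency the nearer fragments have left over:

\[ C_d \;\leftarrow\; C_d + (1 - \alpha_d)\,\alpha_s\, C_s \]
\[ \alpha_d \;\leftarrow\; \alpha_d + (1 - \alpha_d)\,\alpha_s \]

The order of the two updates matters: α_d must still hold its old value when C_d is recomputed. Compositing the accumulated (C_d, α_d) over an opaque background then gives the same result that back-to-front blending would have given.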

When routine rendering gets done, the content-developer – i.e., the game-dev, not the game-engine designer – has the capacity to change the rendering order. So what will probably get done in game design is that models are grouped, so that non-alpha entities are rendered first, and only after this has been done are the alpha-entities rendered, as a separate rendering-group – as sketched below.
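As a rough illustration of that grouping – a minimal sketch only, with a hypothetical Entity type and a placeholder draw() call, not any particular engine’s API:

```cpp
#include <algorithm>
#include <vector>

struct Entity {
    float distanceFromCamera;  // assumed to have been computed for this frame
    bool  hasAlpha;            // true if its textures carry per-pixel translucency
};

// Placeholder: a real engine would bind materials and issue the actual draw call here.
void draw(const Entity &) {}

void renderFrame(std::vector<Entity> &entities)
{
    // Split the scene into the two rendering groups.
    std::vector<Entity *> opaque, translucent;
    for (Entity &e : entities)
        (e.hasAlpha ? translucent : opaque).push_back(&e);

    // Group 1: opaque entities, sorted front-to-back, so that the depth test
    // can reject occluded fragments before they are ever shaded.
    std::sort(opaque.begin(), opaque.end(),
              [](const Entity *a, const Entity *b)
              { return a->distanceFromCamera < b->distanceFromCamera; });
    for (Entity *e : opaque) draw(*e);

    // Group 2: alpha entities, sorted back-to-front, so that conventional
    // alpha-blending composites them in the order its equations expect.
    std::sort(translucent.begin(), translucent.end(),
              [](const Entity *a, const Entity *b)
              { return a->distanceFromCamera > b->distanceFromCamera; });
    for (Entity *e : translucent) draw(*e);
}
```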

(Edit 04/14/2018, 21h35 :

But there is one specific situation which would require that the mentioned, complementary set of equations be used – or, according to my latest thoughts, at least a modified set of equations. And that would be when “Multi-Sampling” is implemented in a way that treats the fraction of sub-samples belonging to one real sample, which actually get rendered to, as just a multiplier applied to the “Source-Alpha” which the entity’s textures already possess. In that case, the alpha-blending must actually be adapted to the rendering-order which is more common in game-design.

The reason I’d say so is the simple observation that, according to conventional alpha-blending, if a pixel is rendered to with 0.5 opacity twice, it not only ends up 0.75 opaque, but its resulting color also favors the second color rendered to it twice as strongly as it favors the first. For alpha-blending this is correct, because alpha-blending just mirrors the optics which successive ‘layers’ would cause.

But with multi-sampling, a real pixel could be rendered to with 0.5 coverage, twice, and there would be no reason why the second color rendered to it should contribute more strongly than the first did… )
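To put numbers on that: call the two colors C_1 and C_2, each written at 0.5 opacity onto a pixel that starts out black and fully transparent. Conventional blending of the second write over the first gives

\[ \alpha = 0.5 + (1 - 0.5)\times 0.5 = 0.75, \qquad C \;\propto\; 0.5\,C_2 + 0.25\,C_1 \]

– the second color weighted twice as strongly as the first, which is what stacking two half-transparent layers should look like. But if those two writes are really two triangles of the same surface, each covering half of the pixel’s sub-samples, the result one would want instead is the symmetric

\[ \alpha = 1.0, \qquad C = 0.5\,C_1 + 0.5\,C_2 \; . \]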

You see, this subject has given me reason to rethink the somewhat overly-complex ideas I once had on how best to achieve multi-sampling. And the cleanest way would be to treat the fraction of sub-pixels rendered to as just a component of the source-alpha – as in the sketch below.
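As a minimal sketch of that idea – hypothetical names, and deliberately leaving aside where the coverage fraction comes from – the covered fraction of sub-pixels would simply be folded into the alpha that the blending stage sees:

```cpp
// Straight-alpha color, as the Fragment Shader computed it.
struct Fragment {
    float r, g, b;
    float alpha;   // the "Source-Alpha" already present in the entity's textures
};

// 'coverage' is the fraction of the screen-pixel's sub-samples that this
// fragment actually rendered to, in the range [0, 1].  The idea from the
// edit above: treat it as just another multiplier on the source alpha,
// before any blending takes place.
inline Fragment applyCoverage(Fragment f, float coverage)
{
    f.alpha *= coverage;
    return f;
}
```

What the edit above then argues is that, once coverage has been folded in like this, the blending equations themselves also need to be adapted to the front-to-back order that game engines more commonly use.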

(Updated 04/14/2018, 21h35 … )


More Thoughts On Multisampling

I wrote an earlier posting in which I tried to simplify the concept of Multi-Sampling. This posting will not make sense to the reader unless he or she has already read that earlier posting, or otherwise already knows how Multi-Sampling differs from Full-Screen Anti-Aliasing.

After giving it some further thought, I’ve come to realize a major flaw in my earlier description. Even though the rendering of each triangle is unaware of the rendering of other triangles, a distinction nevertheless needs to exist between how the ability of one triangle-edge to fill only part of a screen-pixel should affect the lighting of triangles belonging to the same model / entity, and how it should affect the lighting of triangles belonging to some other model / entity.

If two triangles belong to the same model, and the first fills 47% of a screen-pixel, then this should not make the second triangle less bright, and the two of them may yet succeed at filling that screen-pixel completely. Yet, if the second triangle belonged to another, later-rendered model, assumed to be placed behind the first model, then its contribution to that pixel should in fact be reduced to 53%.

I think that the only way this can be solved is to involve another buffer, which could be called a ‘Multi-Sample Mask’. Triangles are super-sampled, and start to fill this mask with single bits per super-sample, kind of like a stencil. Then, the triangles belonging to the same model / entity would be singly-sampled, but would only write their shaded color to the screen-pixel to whatever degree the corresponding patch in the multi-sample mask fills that screen-pixel.

(By default, that fraction of the output-color would simply be added to the screen-pixel, as long as the screen-pixels started out as zeroes – i.e., black – before rendering of the model / entity began. )

And then, before another entity can be rendered, the mask would need to be cleared – i.e. set back to zeroes.

As it stands, the Z-buffer would need to have the resolution of the Multi-Sample Mask – as if FSAA were being applied.
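Just to make the data-flow concrete, here is one possible, CPU-side reading of the scheme – hypothetical names throughout (Frame, shadeCoveredPixel, nextEntity), with the actual rasterization and the sub-sample depth test left out:

```cpp
#include <bitset>
#include <cstddef>
#include <vector>

// 4x4 sub-samples per screen pixel, matching the example further down.
constexpr int SUBSAMPLES = 16;

struct Color { float r = 0.f, g = 0.f, b = 0.f; };

struct Frame {
    std::vector<Color> screen;                     // one entry per screen pixel, starts out black
    std::vector<std::bitset<SUBSAMPLES>> mask;     // the 'Multi-Sample Mask'
    // (The Z-buffer would also have to exist at this sub-sample resolution,
    //  as noted above; depth testing is omitted from this sketch.)

    // Called once per (triangle, screen-pixel) pair, after the triangle has
    // been super-sampled: 'covered' marks the sub-samples this triangle
    // overlaps (and at which it would pass the sub-sample depth test).
    void shadeCoveredPixel(std::size_t pixel,
                           std::bitset<SUBSAMPLES> covered,
                           Color shadedOnce)       // Fragment-Shaded only once
    {
        // Only sub-samples not already claimed by this entity's earlier
        // triangles contribute, so that two triangles sharing an edge add up
        // to full brightness instead of dimming one another.
        const std::bitset<SUBSAMPLES> newlySet = covered & ~mask[pixel];
        mask[pixel] |= newlySet;

        const float fraction = float(newlySet.count()) / float(SUBSAMPLES);
        screen[pixel].r += fraction * shadedOnce.r;
        screen[pixel].g += fraction * shadedOnce.g;
        screen[pixel].b += fraction * shadedOnce.b;
    }

    // Before another entity is rendered, the mask is set back to zeroes.
    void nextEntity()
    {
        for (auto &m : mask) m.reset();
    }
};
```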

I think that the question of whether only the edges of each entity will be anti-aliased, or the edges of each triangle, will be answered by how often this mask is reset.

(Updated 12/06/2017 : )

(As it stood 12/05/2017 : )

AFAICT, this represents a special problem with alpha-textures and alpha-entities.


Multisampling

One of the problems with bit-mapped graphics is “aliasing”. This is the phenomenon by which each pixel along the edge of an ideal shape will seem either to belong to that shape or not, resulting in an edge which has stair-stepped, rectangular errors. Even at fairly high resolutions, this can lead to a low-quality visual experience. And so schemes have been devised since the beginning of digital graphics to make this effect less pronounced, even if we do choose raster-graphics.

3D has not been left out. One of the strategies which has existed for some time is to super-sample each screen pixel – let us say by subdividing it by a fixed factor, such as into 4×4 sub-pixels. This is also known as “Full-Screen Anti-Aliasing”, or ‘FSAA’. The outputs of the sub-pixels can be mixed in various ways, to result in a blended color-value for the resulting screen pixel.
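As a small, concrete example of that mixing step – a plain box filter, i.e. simple averaging, which is only one of the ways the sub-pixels could be blended; the buffer layout and function name here are purely illustrative:

```cpp
#include <vector>

struct Color { float r = 0.f, g = 0.f, b = 0.f; };

// Resolve an image rendered at 4x the width and 4x the height (16 sub-pixels
// per screen pixel) down to screen resolution, by simple averaging.
// 'super' is laid out row-major at the super-sampled size.
std::vector<Color> resolveFSAA(const std::vector<Color> &super,
                               int screenWidth, int screenHeight)
{
    const int factor = 4;
    const int superWidth = screenWidth * factor;
    std::vector<Color> screen(screenWidth * screenHeight);

    for (int y = 0; y < screenHeight; ++y)
        for (int x = 0; x < screenWidth; ++x) {
            Color sum;
            for (int sy = 0; sy < factor; ++sy)
                for (int sx = 0; sx < factor; ++sx) {
                    const Color &c =
                        super[(y * factor + sy) * superWidth + (x * factor + sx)];
                    sum.r += c.r; sum.g += c.g; sum.b += c.b;
                }
            const float n = float(factor * factor);
            screen[y * screenWidth + x] = { sum.r / n, sum.g / n, sum.b / n };
        }
    return screen;
}
```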

But one problem with FSAA has been, from the start, that it slows down rendering a whole lot. And so an alternative was devised, called Multi-Sampling.

The main idea behind Multi-Sampling is that only the screen-pixels which span a triangle-edge suffer from aliasing to an objectionable degree. Therefore, most of the screen-pixels are not super-sampled. Also, the limited logic of the GPU has a hard time distinguishing which triangle-edges are also model / entity edges, where aliasing does the most damage. But, because the GPU has specialized logic circuits – referred to somewhat incorrectly as one render-output generator – that rasterize a given triangle, those circuits can feasibly be expanded into also detecting which screen-pixels straddle the edge between two triangles. And then, for the sake of argument, only those screen-pixels are subdivided into 4×4 sub-samples, while each such pixel is still Fragment-Shaded only once per triangle, the one result being written to whichever of its sub-samples that triangle covers.
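A conceptual sketch of what that amounts to, for a single edge-straddling screen-pixel: insideTriangle() and shadePixelOnce() are stand-ins for the fixed-function coverage test and the Fragment Shader, and a regular 4×4 grid is assumed, whereas real hardware uses its own fixed sample patterns:

```cpp
#include <bitset>

constexpr int GRID = 4;

struct Color { float r, g, b; };

// Placeholder: a real rasterizer tests sub-sample positions against the
// triangle's edge equations.
bool insideTriangle(float /*x*/, float /*y*/) { return true; }

// Placeholder: stands in for running the Fragment Shader once for the pixel.
Color shadePixelOnce(float /*x*/, float /*y*/) { return {1.f, 1.f, 1.f}; }

void shadeEdgePixel(int px, int py, Color subSamples[GRID * GRID])
{
    std::bitset<GRID * GRID> coverage;
    for (int sy = 0; sy < GRID; ++sy)
        for (int sx = 0; sx < GRID; ++sx) {
            // Sub-sample position inside the pixel, on a regular grid here.
            float x = px + (sx + 0.5f) / GRID;
            float y = py + (sy + 0.5f) / GRID;
            if (insideTriangle(x, y))
                coverage.set(sy * GRID + sx);
        }

    if (coverage.none())
        return;

    // The expensive part runs only once for the whole pixel ...
    Color c = shadePixelOnce(px + 0.5f, py + 0.5f);

    // ... and its result is replicated to the covered sub-samples only.
    for (int i = 0; i < GRID * GRID; ++i)
        if (coverage.test(i))
            subSamples[i] = c;
}
```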

But the logic gets just a bit more complicated. There is no simple way in which the render-output generator can know which other triangle a current triangle borders on. This is because, in general, once each triangle has been processed, it is forgotten. Once each Geometry Shader input-topology has been processed, it too is forgotten, and the GS proceeds to process the next input-topology with complete amnesia…
