More Thoughts On Multisampling

I wrote an earlier posting in which I tried to simplify the concept of Multi-Sampling. This posting will not make sense unless the reader has already read that earlier posting, or otherwise already knows how Multi-Sampling differs from Full-Screen Anti-Aliasing.

On further thought, I’ve come to realize a major flaw in my earlier description. Even though the rendering of each triangle is unaware of the rendering of other triangles, a distinction nevertheless needs to exist between how a triangle-edge that fills only part of a screen-pixel should affect the lighting of triangles belonging to the same model / entity, and how it should affect the lighting of triangles belonging to some other model / entity.

If two triangles belong to the same model, and the first fills 47% of a screen-pixel, then this should not make the second triangle less bright, and the two of them may yet succeed at filling that screen-pixel completely. Yet, if the second triangle belonged to another, later-rendered model, assumed to be placed behind the first, then its brightness should in fact be reduced to 53%.

I think that the only way this can be solved is to involve another buffer, which could be called a ‘Multi-Sample Mask’. Triangles are super-sampled, and start to fill this mask with a single bit per super-sample, kind of like a stencil. Then, the triangles belonging to the same model / entity would be singly-sampled, but would only write their shaded color to the screen-pixel to whatever degree the corresponding patch in the multi-sample mask fills that screen-pixel.

(By default, that fraction of the output-color would simply be added to the screen-pixel, as long as the screen-pixels started out as zeroes, i.e. black, before rendering of the model / entity began.)

And then, before another entity can be rendered, the mask would need to be cleared – i.e. set back to zeroes.
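
To make the idea concrete, here is a minimal, CPU-side sketch of how such a mask might behave at one screen-pixel. Everything in it is hypothetical (the names, the choice of 4 super-samples per pixel); it is only meant to show why 47% + 53% can still add up to one fully-filled pixel:

    #include <algorithm>
    #include <bit>      // std::popcount (C++20)
    #include <cstdint>
    #include <vector>

    constexpr int kSamples = 4;                  // super-samples per screen-pixel
    constexpr int kWidth = 1920, kHeight = 1080;

    struct Color { float r = 0.f, g = 0.f, b = 0.f; };

    // The proposed 'Multi-Sample Mask': one bit per super-sample, kind
    // of like a stencil. The screen-pixels start out as zeroes / black.
    std::vector<uint8_t> coverageMask(kWidth * kHeight, 0);
    std::vector<Color>   frame(kWidth * kHeight);

    // One triangle of the current entity touching one screen-pixel:
    // 'triBits' holds the super-samples this triangle covers. Only bits
    // not already set by earlier triangles of the same entity contribute,
    // so two triangles sharing an edge fill the pixel exactly once
    // (e.g. 47% + 53% = 100%) instead of dimming each other.
    void shadeTriangleAtPixel(int x, int y, uint8_t triBits, Color shaded) {
        int i = y * kWidth + x;
        uint8_t newBits = triBits & ~coverageMask[i];
        coverageMask[i] |= newBits;
        float coverage = float(std::popcount(newBits)) / kSamples;
        frame[i].r += shaded.r * coverage;
        frame[i].g += shaded.g * coverage;
        frame[i].b += shaded.b * coverage;
    }

    // Before another entity can be rendered, the mask is set back to zeroes.
    void clearMask() {
        std::fill(coverageMask.begin(), coverageMask.end(), 0);
    }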

As it stands, the Z-buffer would need to have the resolution of the Multi-Sample Mask, as if FSAA were being applied.

I think that the question of whether only the edges of each entity will be anti-aliased, or the edges of each triangle, will be answered by how often this mask is reset.

(Updated 12/06/2017 : )

(As it stood 12/05/2017 : )

AFAICT, this represents a special problem with alpha-textures and alpha-entities.

Alpha-Blending

The concept by which a single object or entity can be translucent seems rather intuitive. But a less intuitive concept is that the degree to which it is translucent can be stated once per pixel, through an alpha-channel.

Just as every pixel can possess one channel for each of the three additive primary colors, Red, Green and Blue, it can also possess a 4th channel named Alpha, which states on a scale of [ 0.0 … 1.0 ] how opaque it is.

This does not just apply to texture images, whose pixels are named texels, but also to Fragment Shader output, as well as to the pixels actually associated with the drawing surface. The latter provide what is known as destination alpha, since the drawing surface is also the destination, or target, of the rendering.

Hence, there exist images whose pixels have a 4-channel format, as opposed to others with a mere 3-channel format.
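
As a small illustration (the struct names are mine), the two kinds of pixel might be declared as:

    #include <cstdint>

    // 3-channel format: color only.
    struct PixelRGB  { uint8_t r, g, b; };

    // 4-channel format: color plus alpha, stored here as one byte, where
    // 0 means fully transparent and 255 fully opaque, i.e. the
    // [ 0.0 ... 1.0 ] scale in normalized form.
    struct PixelRGBA { uint8_t r, g, b, a; };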

Now, there is no clear way for a display to display alpha. In certain cases, alpha in an image being viewed is hinted at by software as a checkerboard pattern. But what we see is nevertheless color-information, not transparency. And so a logical question arises: what is the function of this alpha-channel that is being rendered to?

There are many ways in which content from numerous sources can be blended, but most of the high-quality ones require that much communication take place between rendering-stages. A strategy is desired in which output from rendering-passes is combined without requiring much communication between the passes. And alpha-blending is a de facto strategy for that.
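
The usual form of that strategy is the ‘over’ operator: each incoming (source) fragment is mixed with whatever pixel is already in the drawing surface (the destination), using only the source’s alpha. In OpenGL, for example, this standard mode is enabled as follows, assuming entities are drawn back-to-front:

    // out.rgb = src.rgb * src.a + dst.rgb * (1 - src.a)
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);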

By default, closer entities, according to the position of their origins in view space, are rendered first. What this does is put closer values into the Z-buffer as soon as possible, so that the Z-buffer can prevent the rendering of the more distant entities as efficiently as possible. 3D rendering starts when the CPU gives the command to ‘draw’ one entity, which has an arbitrary position in 3D. This may be contrary to what 2D graphics might lead us to expect.
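
A sketch of that default ordering, using a hypothetical Entity type sorted by the view-space depth of its origin before any draw commands are issued:

    #include <algorithm>
    #include <vector>

    struct Entity { float viewDepth; /* meshes, materials, ... */ };  // hypothetical

    void drawEntity(const Entity&);  // assumed to issue the actual 'draw' command

    void drawScene(std::vector<Entity>& entities) {
        // Closest origins first, so that the Z-buffer can reject hidden
        // fragments of the more-distant entities as early as possible.
        std::sort(entities.begin(), entities.end(),
                  [](const Entity& a, const Entity& b) {
                      return a.viewDepth < b.viewDepth;  // smaller = closer to the camera
                  });
        for (const Entity& e : entities)
            drawEntity(e);
    }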

Alas, alpha-entities, i.e. entities that possess alpha-textures, do not write to the Z-buffer, because if they did, they would prevent more-distant entities from being rendered. And then there would be no point in the closer ones being translucent.
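
In OpenGL terms, this amounts to keeping the depth test enabled, so that opaque geometry already in the Z-buffer still occludes the alpha-entities, while switching depth writes off for the alpha pass:

    // Alpha-entities are still tested against the Z-buffer, but leave no
    // footprint in it, so more-distant entities can still be drawn.
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);   // Z-buffer becomes read-only for this pass
    // ... render the alpha-entities, sorted appropriately ...
    glDepthMask(GL_TRUE);    // restore depth writes afterwards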

The default way in which alpha-blending works is that the alpha-channel of the drawing surface records the extent to which entities have been left visible by previous entities, rendered closer to the virtual camera.
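
One common way to realize that scheme is front-to-back, ‘under’-style compositing. This is only a sketch, and it assumes two conventions not spelled out above: that the Fragment Shader outputs premultiplied colors (rgb already multiplied by alpha), and that destination alpha accumulates opacity, so that (1 - dst.a) is the extent still left visible:

    // Front-to-back ('under') compositing driven by destination alpha:
    //   out.rgb = src.rgb * (1 - dst.a) + dst.rgb
    //   out.a   = src.a   * (1 - dst.a) + dst.a
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);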
