More Thoughts On Multisampling

I wrote an earlier posting, in which I tried to simplify the concept of Multi-Sampling. This posting will not make sense to readers unless they have already read the earlier one, or unless they already know how Multi-Sampling differs from Full-Screen Anti-Aliasing.

After some more thought, I’ve come to realize a major flaw in my earlier description. Even though the rendering of each triangle is unaware of the rendering of other triangles, a distinction nevertheless needs to exist between how one triangle-edge that fills only part of a screen-pixel should affect the lighting of triangles belonging to the same model / entity, and how it should affect the lighting of triangles belonging to some other model / entity.

If two triangles belong to the same model, and the first fills 47% of a screen-pixel, then this should not make the second triangle less bright, and the two of them may yet succeed at filling that screen-pixel completely. Yet, if the second triangle belonged to another, later-rendered model, assumed to be placed behind the first one, then its brightness should in fact be reduced to the remaining 53%.

I think that the only way this can be solved is to involve another buffer, which could be called a ‘Multi-Sample Mask’. Triangles are super-sampled, and start to fill this mask with a single bit per super-sample, somewhat like a stencil. Then, the triangles belonging to the same model / entity would be singly-sampled, but would only write their shaded color to the screen-pixel to whatever degree the corresponding patch of the multi-sample mask fills that screen-pixel.

(By default, the corresponding fraction of the output-color would be added to the screen-pixel, as long as the screen-pixels started out as zeroes, i.e. black, before rendering of the model / entity began.)

And then, before another entity can be rendered, the mask would need to be cleared – i.e. set back to zeroes.
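
As a minimal sketch of what I mean, assuming 4x super-sampling, one pixel in isolation, and purely hypothetical names (this is not any real graphics API):

```cpp
#include <array>

constexpr int kSamples = 4;

struct Pixel {
    float r = 0.f, g = 0.f, b = 0.f;        // Starts out as zeroes, i.e. black.
    std::array<bool, kSamples> mask{};      // One coverage bit per super-sample.
};

// Super-sampled rasterization: each triangle sets the mask bits it covers.
// (The per-super-sample depth test is omitted here for brevity.)
void rasterizeToMask(Pixel& p, const std::array<bool, kSamples>& covered) {
    for (int i = 0; i < kSamples; ++i)
        p.mask[i] = p.mask[i] || covered[i];
}

// Single-sampled shading: the shaded color is only written to the
// screen-pixel to whatever degree the mask fills it.
void shadePixel(Pixel& p, float r, float g, float b) {
    int set = 0;
    for (bool bit : p.mask) if (bit) ++set;
    const float coverage = float(set) / kSamples;   // E.g. 2 of 4 bits -> 0.5
    p.r += r * coverage;    // Added, since the pixel started out black.
    p.g += g * coverage;
    p.b += b * coverage;
}

// Before the next entity is rendered: set the mask back to zeroes.
void clearMask(Pixel& p) { p.mask.fill(false); }
```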

As it stands, the Z-buffer would need to have the resolution of the Multi-Sample Mask, as if FSAA were being applied.

I think that the question of whether only the edges of each entity, or the edges of each triangle, will be anti-aliased, will be answered by how often this mask is reset.

(Updated 12/06/2017 : )

(As it stood 12/05/2017 : )

AFAICT, this represents a special problem with alpha-textures and alpha-entities.

  1. In principle, one way to handle alpha could be an approximate one, in which only the entire entity is anti-aliased, i.e., at its edges. If the default operation was to subtract the current fragment’s alpha from that of the screen-pixel, then only the covered fraction of the current alpha would be subtracted instead. And a similar rule could handle what should happen, if the alpha-blend rule had specified addition.
  2. The most exact way to anti-alias alpha-entities would be to assure that the screen-alpha channel, since it also defines occlusion in some way, has the fully super-sampled resolution. That way, when the fragment shader is run once per screen-pixel, the alpha-value it reads, i.e. its sensitivity to alpha, would be averaged from those super-sample alpha-values for which the multi-sampling mask bit is set. And when alpha-value changes are written back to the pixel, they would effectively be written at full amplitude, but only to those super-sampled alpha-values for which the multi-sampling bit is set. (See the sketch just below this list.)
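
A sketch of this second approach, again assuming 4x super-sampling and hypothetical names; the fragment shader is still run once per screen-pixel:

```cpp
#include <array>

constexpr int kSamples = 4;

struct AlphaPixel {
    std::array<float, kSamples> alpha{};    // Super-sampled screen-alpha.
    std::array<bool,  kSamples> mask{};     // Multi-sampling mask bits.
};

// The alpha-value the shader reads is averaged only over those
// super-samples whose mask bit is set.
float readAlpha(const AlphaPixel& p) {
    float sum = 0.f;
    int n = 0;
    for (int i = 0; i < kSamples; ++i)
        if (p.mask[i]) { sum += p.alpha[i]; ++n; }
    return (n > 0) ? sum / float(n) : 0.f;
}

// Alpha changes are written back at full amplitude, but only to those
// super-samples whose mask bit is set.
void writeAlpha(AlphaPixel& p, float newAlpha) {
    for (int i = 0; i < kSamples; ++i)
        if (p.mask[i]) p.alpha[i] = newAlpha;
}
```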

In theory, using the second approach would achieve that alpha-entities can also be anti-aliased once per triangle. But the way such low-level logic works is usually set at the hardware or device-driver level, which means that it’s immutable to game-devs and to game-engine developers.

Another question this second idea could raise would be, ‘How can the Frame-Buffer Object be screen-compatible, if its alpha-channel has 2x or 4x the screen-resolution?’ And the convenient answer would emerge, that no need ever existed to output screen-alpha to the monitor. Screen-alpha was always meant purely for internal use, in alpha-blending. It’s only the resulting color-values that need to be sent to the monitor.
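
The frame-buffer layout this would imply might look roughly as follows: color at screen resolution, since that is the only plane the monitor ever receives, while alpha, the Z-buffer and the mask carry 4x the samples, purely for internal use. This is a hypothetical layout, not any real graphics API:

```cpp
#include <cstddef>
#include <vector>

struct FrameBuffer {
    static constexpr int kSamples = 4;
    int width, height;

    std::vector<float> rgb;     // width * height * 3, sent to the monitor.
    std::vector<float> alpha;   // width * height * kSamples, internal only.
    std::vector<float> depth;   // width * height * kSamples, the Z-buffer.
    std::vector<bool>  mask;    // width * height * kSamples, coverage bits.

    FrameBuffer(int w, int h)
        : width(w), height(h),
          rgb(std::size_t(w) * h * 3, 0.f),
          alpha(std::size_t(w) * h * kSamples, 0.f),
          depth(std::size_t(w) * h * kSamples, 1.f),
          mask(std::size_t(w) * h * kSamples, false) {}
};
```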



The reason for which I’m referring to Entities and not Models is the plausibility that we could be rendering something other than just models, such as particle-clouds, volume-effects, etc. And then, if we were to say, ‘Only anti-alias once per Entity,’ this could in effect represent a cop-out, because we would ultimately not be anti-aliasing.

I think that in the case of particle-clouds, these are generally rendered after the solid models, and rendering the particles does not really affect the visibility of the models; the models’ Z-buffer values only affect the visibility of the particles. I.e., if the particles are set to Emit, then their color-values are simply added to the darker colors of their background, and the overall color-values should always be clipped.

But if an anti-aliased solution to such a case is needed, then each particle will also set some of the bits of the Multi-Sample Mask, after which the degree to which each particle brightens a screen-pixel would be modulated, as before, by the fraction of the screen-pixel whose mask-bits are set.

And in that case it should not be necessary to reset the mask after each particle is rendered, only after the entire particle-cloud is rendered.
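
A sketch of such anti-aliased, emissive particles, assuming the same kind of per-pixel state as in the first listing; each particle sets mask bits, its contribution is modulated by the coverage fraction, and the mask is only cleared once per cloud (hypothetical names throughout):

```cpp
#include <algorithm>
#include <array>

constexpr int kSamples = 4;

struct Pixel {
    float r = 0.f, g = 0.f, b = 0.f;
    std::array<bool, kSamples> mask{};
};

void addParticle(Pixel& p, const std::array<bool, kSamples>& covered,
                 float r, float g, float b) {
    for (int i = 0; i < kSamples; ++i)      // Set some of the mask bits.
        p.mask[i] = p.mask[i] || covered[i];

    int set = 0;
    for (bool bit : p.mask) if (bit) ++set;
    const float coverage = float(set) / kSamples;

    // Emissive particles simply add to the darker background colors,
    // and the overall color-values are clipped.
    p.r = std::min(1.f, p.r + r * coverage);
    p.g = std::min(1.f, p.g + g * coverage);
    p.b = std::min(1.f, p.b + b * coverage);
}

// Only after the entire particle-cloud has been rendered:
void clearMask(Pixel& p) { p.mask.fill(false); }
```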

OTOH, if we needed the particles to look solid, then we’d change the rendering-order to render those first, and we’d allow each particle to write the Z-buffer. And this should continue to work with anti-aliasing either way.

(Edit 12/05/2017 : )

There is an important aspect to how non-alpha models would need to be rendered, to make existing assumptions consistent with this suggested framework.

The pre-existing principle was that models whose centers are closer to the camera-position should be rendered first, but only as an optimization. In other words, it could always happen that a model needs to be rendered whose depth is less than the depths recorded in the Z-buffer, such that the current color-values still replace the screen color-values, after which, of course, the Z-buffer would be updated per pixel with the new, closer depths of the current model.

Only, if the closest models have been rendered first, this greatly reduces how often more-distant models’ fragments pass the depth-test, thus reducing the amount of work that needs to be done running fragment shaders.

Well, in order for this to work with the multi-sampling framework described in this posting, the stencil for each non-alpha model needs to be rendered twice.

At first, the entire model is depth-tested, resulting in multi-sample mask bits. Then, the pre-existing screen-pixel RGB-values need to be multiplied by whatever fraction of their super-samples’ mask-bits are equal to zero, to reflect that some parts of the background may become occluded, thus leaving less of the pre-existing pixel-colors.

But I would say, only do this if the minimum depth in the Z-buffer (as belonging to any one screen-pixel) exceeds the minimum depth currently being drawn at, if ( screen-alpha == 1.0 ) ( :1 ), and if the current model is set to write the Z-buffer.

Then, the mask needs to be cleared, after which the triangle-by-triangle depth-testing, leading to mask-bits and new Z-buffer values, needs to take place. Subsequently, the (RGBA) fragments of each triangle would be computed (shaded) for the first time, multiplied by whatever factor the fraction of mask super-samples being ones determines, and added to the screen-pixel RGBA-values.
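
A condensed sketch of these two passes, at the level of one screen-pixel; it assumes the whole model has already been depth-tested into the mask before prePass() runs, and all names are hypothetical:

```cpp
#include <array>

constexpr int kSamples = 4;

struct Pixel {
    float r = 0.f, g = 0.f, b = 0.f;
    float alpha = 1.f;                      // Screen-alpha.
    std::array<float, kSamples> depth{};    // Super-sampled Z-buffer.
    std::array<bool,  kSamples> mask{};     // Multi-Sample Mask.
};

float coverage(const Pixel& p) {
    int set = 0;
    for (bool bit : p.mask) if (bit) ++set;
    return float(set) / kSamples;
}

// Pass 1: after the entire model was depth-tested into the mask, darken the
// pre-existing colors by the fraction of super-samples left uncovered,
// but only under the conditions given above.
void prePass(Pixel& p, float modelMinDepth, bool modelWritesZ) {
    float zMin = p.depth[0];
    for (float z : p.depth) zMin = (z < zMin) ? z : zMin;

    if (zMin > modelMinDepth && p.alpha == 1.0f && modelWritesZ) {
        const float keep = 1.f - coverage(p);   // Fraction with zero mask-bits.
        p.r *= keep;  p.g *= keep;  p.b *= keep;
    }
    p.mask.fill(false);                         // Clear before the second pass.
}

// Pass 2, per triangle: depth-testing has refilled the mask and updated the
// Z-buffer; the fragment is shaded once, weighted by the coverage fraction,
// and added to the screen-pixel.
void shadePass(Pixel& p, float r, float g, float b) {
    const float w = coverage(p);
    p.r += r * w;  p.g += g * w;  p.b += b * w;
}
```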

(Update 12/06/2017 : )

1: )

In the earlier posting linked to above, I made the wanton assumption that, ‘the way Alpha-Blending works,’ the rendering-context initializes the frame-buffer with an alpha-channel equal to (1.0), and that opacity is subtracted from that as alpha-entities are rendered front-to-back, so that what remains in the screen-alpha is the diminishing degree of visibility.

My actual opinion is that, the way alpha-blending works, the way in which ‘output-alpha’ is computed is arbitrary and up to the shader. Without anti-aliasing, whatever the shader outputs is simply written as-is to the screen-pixel, with no assumed post-processing. This would seem to suggest that if the Fixed-Function Pipeline is not being used, then alpha-blending also requires that the Fragment Shader read the entire RGBA-vector in, before computing its fragment to be output.

This requires that the screen-pixel not have been altered (yet). Above, an ( alpha == 1.0 ) potentially resulted in an altered pixel.

So in principle, it could be the reverse: the buffer could start with an alpha of zeroes, and the opacity of each subsequent alpha-entity could be added to it… This would suggest that the pixels’ RGB-values should only be darkened in a pre-pass if ( screen-alpha == 0.0 ).

In this posting, my main concern was to assure that the values of the screen-pixel would remain averages of the outputs of the Fragment Shader, mathematically as accurate as possible, after the post-processing has been applied. This post-processing has as its inputs the mask at super-sampled resolution, the Z-buffer at super-sampled resolution, and the Fragment Shader outputs at single resolution.

The fact that this average is computed as a summation does not prevent the Fragment Shader from computing a subtraction, in deriving its output:

FS-Prefetch: ( RGBA_s )

vis = α_s * fancy_vec_α

{ vis = (1 – α_s) * fancy_vec_α }

RGB_o = ( fancy_vec_RGB * vis ) [ + RGB_s ]

α_o = [ α_s ] { -/+ } vis

FS-Output: [ R_o, G_o, B_o, α_o ]

(Followed by the post-processing, as described above.)
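
As I read this notation, a subscript (s) denotes a value prefetched from the screen-pixel, (o) a value the shader outputs, square brackets mark optional terms, and the braces give the reverse, additive convention. Under those assumptions, a minimal C++ transcription might look like this, where fancy_vec stands for whatever RGBA value the shader computed, and the names are hypothetical:

```cpp
struct RGBA { float r, g, b, a; };

// 'subtractive' selects the first convention (screen-alpha starts at 1.0 and
// opacity is subtracted from it); otherwise the additive convention is used.
RGBA fragmentOut(const RGBA& screen /* prefetched RGBA_s */,
                 const RGBA& fancy_vec, bool subtractive) {
    // vis = alpha_s * fancy_vec_alpha, or (1 - alpha_s) * fancy_vec_alpha.
    const float vis =
        (subtractive ? screen.a : (1.f - screen.a)) * fancy_vec.a;

    RGBA out;
    out.r = fancy_vec.r * vis + screen.r;   // RGB_o = fancy_vec_RGB * vis + RGB_s
    out.g = fancy_vec.g * vis + screen.g;
    out.b = fancy_vec.b * vis + screen.b;
    out.a = subtractive ? (screen.a - vis)  // alpha_o = alpha_s -/+ vis
                        : (screen.a + vis);
    return out;
}
```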

What I did take into account, however, is that in the design of (non-anti-aliased) exercises, if no alpha-blending is indicated, the output-alpha is, as a formality, set to (1.0) in practice. Now, this could be done because the coders are worried that their Z-buffer doesn’t work, to make sure that more-distant models are occluded anyway. Or this could be done to assure that fragments which have been rendered ‘in front of’ pre-existing Z-buffer depths are still rendered at full brightness, in case the rendering-order does get switched, and in case somebody else’s code tries to apply alpha-sensitivity.

Dirk

