The role Materials play in CGI

When content-designers work with their favorite 3D model editors or scene editors, towards producing either a 3D game or some other type of 3D application, they will often not map their 3D models directly to texture images. Instead, they will often connect each model to one Material, and the Material will then base its behavior on zero or more texture images. A friend of mine asked what this describes.

Effectively, these Materials replace what a programmed shader would do, to define the surface properties of the simulated 3D model. They tend to play a greater role in CPU rendering / ray tracing than they do in raster-based, DirectX- or OpenGL-based graphics, but high-level editors may also be able to apply Materials to hardware-rendered graphics, if they can supply some type of predefined shader that implements what the Material is supposed to implement.
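
To make that concrete, here is a minimal sketch, with invented parameter and shader names, of what a Material record might hold and of how an engine could map it onto one of its predefined shaders:

```python
# A minimal sketch (hypothetical names) of what a Material record might hold,
# and of how an engine could map it onto one of its predefined shaders.
from dataclasses import dataclass, field

@dataclass
class Material:
    gloss: float = 32.0        # specular exponent
    specular: float = 1.0      # strength of the specular highlight
    metallicity: float = 0.0   # 0 = dielectric, 1 = metal
    textures: list = field(default_factory=list)   # zero or more texture images

def choose_predefined_shader(mat: Material) -> str:
    # A hardware renderer might select a canned shader that implements
    # whatever the Material asks for; the shader names here are invented.
    if mat.metallicity > 0.5:
        return "builtin/metal_specular"
    return "builtin/plastic_specular"
```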

A Material will often state such parameters as Gloss, Specular, Metallicity, etc.. When a camera-reflection-vector is computed, this reflection vector will point in some 3D direction relative to the defined light sources. Hence, a dot-product can be computed between it and the direction of the light source. Gloss is the power to which this dot-product is raised, and higher values produce narrower specular highlights. Often, Gloss must also be compensated for the fact that the integral of a power-function over the hemisphere shrinks as the exponent increases, so that without compensation, the average brightness of a glossy surface would seem to decrease…
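
As a rough illustration of the math, here is a hedged sketch in Python; the (gloss + 2) / 2 factor is only one common normalization used for this kind of compensation, not necessarily what any specific editor applies:

```python
def specular_term(reflect_dir, light_dir, gloss):
    """Phong-style specular: dot the camera-reflection vector with the light
    direction and raise it to the gloss exponent. Both vectors are assumed
    to be unit-length 3-tuples."""
    d = max(sum(r * l for r, l in zip(reflect_dir, light_dir)), 0.0)
    highlight = d ** gloss
    # One common compensation (an assumption here) multiplies by (gloss + 2) / 2,
    # so that raising the exponent narrows the highlight without also dimming
    # the surface's average brightness.
    return highlight * (gloss + 2.0) / 2.0
```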

But, if a content-designer enrolls a programmed shader, especially a Fragment Shader, then this shader replaces everything that a Material would otherwise have provided. It is often less practical, though not impossible, to implement a programmed shader in software-rendered contexts, and mainly for this reason, the use of Materials still prevails there.

Also, the notion often occurs to people, however unproven, that Materials will only provide basic shading options, such as ‘DOT3 Bump-Mapping’, so that programmed shaders need to be used if more sophisticated shading options are required, such as Tangent-Mapping. Yet, as I just wrote, every blend-mode a Material offers is being defined by some sort of predefined shader, i.e. by a pre-programmed algorithm.

OGRE is an open-source rendering system, which requires that content-designers assign Materials to their models, even though hardware-rendering is being used, and then these Materials cause shaders to be loaded. Hence, if an OGRE content-designer wants to code his own shader, he must first also define his own Material, which will then load his custom shader.


Alpha-Blending

The concept by which a single object or entity can be translucent seems rather intuitive. A less intuitive concept is that the degree to which it is translucent can be stated once per pixel, through an alpha-channel.

Just as every pixel can possess one channel for each of the three additive primary colors: Red, Green and Blue, it can possess a 4th channel named Alpha, which states, on a scale from [ 0.0 … 1.0 ] , how opaque the pixel is.

This does not just apply to the texture images, whose pixels are named texels, but also to Fragment Shader output, as well as to the pixels actually associated with the drawing surface, which provide what is known as destination alpha, since the drawing surface is also the destination of the rendering, or its target.

Hence, there exist images whose pixels have a 4-channel format, as opposed to others, with a mere 3-channel format.

Now, there is no clear way for a display to display alpha. In certain cases, alpha in an image being viewed is hinted at by software, as a checkerboard pattern. But what we see is nevertheless color-information and not transparency. And so a logical question is, what the function of this alpha-channel is, when it is being rendered to.

There are many ways in which the content from numerous sources can be blended, but most of the high-quality ones require that much communication take place between rendering-stages. A strategy is desired in which output from rendering-passes is combined without requiring much communication between the passes. And alpha-blending is a de-facto strategy for that.

By default, closer entities, according to the position of their origins in view space, are rendered first. What this does is put closer values into the Z-buffer as soon as possible, so that the Z-buffer can prevent the rendering of the more distant entities as efficiently as possible. 3D rendering starts when the CPU gives the command to ‘draw’ one entity, which has an arbitrary position in 3D. This may be contrary to what 2D graphics might teach us to predict.
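
The depth test itself can be sketched roughly as follows; the array layout and the convention that a smaller Z value means closer are assumptions made for the sketch:

```python
# A toy sketch of the Z-buffer test: a fragment only lands in the frame
# if it is closer than whatever is already stored at that pixel.
def depth_test_write(zbuffer, framebuffer, x, y, z, color):
    if z < zbuffer[y][x]:          # closer than anything drawn so far
        zbuffer[y][x] = z          # opaque geometry also writes the Z-buffer
        framebuffer[y][x] = color
        return True
    return False                   # rejected: a closer fragment is already there
```

Rendering the closer entities first means that most fragments of the more distant entities fail this test as early as possible.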

Alas, alpha-entities – aka entities that possess alpha textures – do not write the Z-buffer, because if they did, they would prevent more-distant entities from being rendered. And then, there would be no point in the closer ones being translucent.

The default way in which alpha-blending works is that the alpha-channel of the drawing surface records the extent to which entities have been left visible by previous entities, which have been rendered closer to the virtual camera.
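
One way to sketch such front-to-back compositing, under the assumption that destination alpha accumulates how much of the pixel the closer entities have already covered, would be:

```python
# Front-to-back compositing sketch: (1 - dst alpha) is the visibility left over
# for the next, more distant fragment. (Treating alpha as accumulated coverage
# is an assumption; some systems store the remaining visibility directly.)
def composite_front_to_back(dst, src):
    """dst and src are (r, g, b, a) tuples in [0.0, 1.0]; dst is the running
    result of the closer entities already rendered, src the next one back."""
    dr, dg, db, da = dst
    sr, sg, sb, sa = src
    weight = (1.0 - da) * sa                 # how strongly src still shows through
    out_rgb = (dr + weight * sr, dg + weight * sg, db + weight * sb)
    out_a = da + weight                      # coverage accumulated so far
    return (*out_rgb, out_a)
```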


Multisampling

One of the problems with bit-mapped graphics is “aliasing”. This is the phenomenon by which pixels along the edge of a pure shape will seem either to belong to that shape or not, resulting in an edge with jagged, stair-stepped errors. Even at fairly high resolutions, this can lead to a low-quality experience. And so, since the beginning of digital graphics, schemes have been devised to make this effect less pronounced, even when we do choose raster-graphics.

3D has not been left out. One of the strategies which has existed for some time is to Super-Sample each screen pixel, let us say by subdividing it by a fixed factor, into 4×4 sub-pixels. This is also known as “Full-Screen Anti-Aliasing”, or ‘FSAA’. The output of the sub-pixels can be mixed in various ways, to result in a blended color-value for the resulting screen pixel.
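
A toy sketch of that idea, with a hypothetical shade(x, y) function standing in for the whole per-sample rendering step, might look like this:

```python
# 4x4 super-sampling of one screen pixel: shade each sub-pixel at its own
# position and average the results into one blended color.
def supersample_pixel(shade, px, py, factor=4):
    total = [0.0, 0.0, 0.0]
    for j in range(factor):
        for i in range(factor):
            sx = px + (i + 0.5) / factor     # sub-pixel sample position
            sy = py + (j + 0.5) / factor
            r, g, b = shade(sx, sy)          # shade(x, y) is a stand-in, not a real API
            total[0] += r
            total[1] += g
            total[2] += b
    n = factor * factor
    return (total[0] / n, total[1] / n, total[2] / n)
```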

But one problem with FSAA, right from the start, has been that it slows down rendering a whole lot. And so an alternative was devised, called Multi-Sampling.

The main idea behind Multi-Sampling is that only the screen-pixels which span a triangle-edge are objectionable in the degree to which they suffer from aliasing. Therefore, most of the screen-pixels are not super-sampled. Also, the limited logic of the GPU has a hard time distinguishing which triangle-edges are also model / entity -edges, where aliasing does the most damage. But, because the GPU has specialized logic circuits, referred to somewhat incorrectly as one render-output generator, which rasterize a given triangle, those circuits can be expanded somewhat feasibly into also being able to detect which screen-pixels do straddle the edge between two triangles. And then, for the sake of argument, only those may be subdivided into sub-samples, 4×4 of them let us say, each of which receives its own coverage and depth test, while the Fragment Shader typically only needs to run once per covered pixel.
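
Roughly, and with inside_triangle() and shade_pixel() standing in for work the GPU actually performs in fixed-function hardware and in the shader respectively, the per-pixel behavior could be sketched as:

```python
# Multi-sampling sketch: coverage is tested per sub-sample, but the Fragment
# Shader result is computed once per pixel and written only to the covered
# samples; the final pixel color averages all samples.
def multisample_pixel(inside_triangle, shade_pixel, px, py, samples, background):
    pixel_color = shade_pixel(px + 0.5, py + 0.5)   # shaded once, at the pixel centre
    resolved = [0.0, 0.0, 0.0]
    for (ox, oy) in samples:                        # e.g. 4 or 16 sample offsets in [0, 1)
        if inside_triangle(px + ox, py + oy):
            sample = pixel_color                    # covered sample gets the shaded color
        else:
            sample = background                     # uncovered sample keeps what was there
        resolved = [acc + c for acc, c in zip(resolved, sample)]
    n = len(samples)
    return tuple(c / n for c in resolved)           # the "resolve" step
```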

But the logic gets just a bit more complicated. There is no simple way in which the render-output generator can know which other triangle a current triangle borders on. This is because, in general, once each triangle has been processed, it is forgotten. Once each Geometry Shader input-topology has been processed, it too is forgotten, and the GS proceeds to process the next input-topology with complete amnesia…
