More about Framebuffer Objects

In the past, when writing about hardware-accelerated graphics – i.e., graphics rendered by the GPU – such as in this article, I chose phrasing according to which the Fragment Shader eventually computes the color-values of pixels ‘to be sent to the screen’. At the time, I felt that this over-simplification would make my topics a bit easier to understand.

A detail which I had deliberately left out was that, in any given context, the rendering target may not be the screen. Memory-allocation, even the allocation of graphics-memory, is still carried out by the CPU, not the GPU. And ‘a shader’ is just another way to say ‘a GPU program’. In the case of a “Fragment Shader”, what this GPU program does can better be visualized as shading, whereas in the case of a “Vertex Shader”, it just consists of computations that affect coordinates, and may therefore just as easily be referred to as ‘a Vertex Program’. Separately, there exists a graphics-card extension that allows the shading language to be the ARB-language, which may also be referred to as defining a Vertex Program. ( :4 )

The CPU sets up the context within which the shader is supposed to run, and one element of this context is a buffer, to which the given Fragment Shader is to render its pixels. The CPU sets this up, just as it sets up the 2D texture images from which the shader fetches texels.

The rendering target of a given shader-instance may be ‘what the user finally sees on his display’, or it may not. Under OpenGL, the rendering target could just as well be a Framebuffer Object (an ‘FBO’), which the CPU has also set up as an available texture-image, from which another shader-instance samples texels. The result of that would be Render To Texture (‘RTT’).
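
To make this concrete, below is a minimal sketch in C of how the CPU might set up such an FBO, assuming a function-loader such as GLEW has already been initialized. The function name create_rtt_target and its parameters are my own, for illustration, and real code would also attach a depth buffer and handle cleanup:

```c
#include <GL/glew.h>

/* Create a texture and attach it to a new FBO, so that a Fragment
 * Shader can render into the texture instead of the screen. */
GLuint create_rtt_target(GLsizei width, GLsizei height, GLuint *out_tex)
{
    GLuint fbo, tex;

    /* The CPU allocates the texture that will act as the color buffer. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* The FBO redirects the shader's output away from the screen. */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;  /* Incomplete attachment; caller should clean up. */

    glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* Back to the default target. */
    *out_tex = tex;
    return fbo;
}
```

Another shader-instance can then bind *out_tex as an ordinary 2D texture and sample texels from it, which is exactly the RTT situation described above.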


DOT3 Versus Tangent-Space Bump-Mapping

One concept which has often been used in the design of Fragment Shaders and/or Materials is “DOT3 Bump-Mapping”. The way this scheme works is rather straightforward. A Bump-Map, which is provided as one (source) texture image out of several, does not define coloration but rather relief, as a kind of Height-Map. It must first be converted into a Normal-Map, which is a specially-formatted type of image, in which the Red, Green and Blue channels of each texel represent floating-point values in the range (-1.0 … +1.0) , even though each color channel is still only an 8-bit pixel-value belonging to the image. There are several ways to perform this conversion, one of which has been accepted as standard, and the Red, Green and Blue channels then represent the X, Y and Z components of a Normal-Vector.
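
As a small illustration, that packing of a (-1.0 … +1.0) component into an 8-bit channel can be sketched in C as follows; the helper names encode_component and decode_component are hypothetical:

```c
/* Pack one floating-point Normal-Vector component, in (-1.0 ... +1.0),
 * into an 8-bit color channel, and unpack it again. */
unsigned char encode_component(float c)
{
    return (unsigned char)((c * 0.5f + 0.5f) * 255.0f + 0.5f);
}

float decode_component(unsigned char byte)
{
    return ((float)byte / 255.0f) * 2.0f - 1.0f;
}
```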

The problem which arises in the design of simple shaders is that this technique offers two Normal-Vectors, because an original Normal-Vector was already provided, interpolated from the Vertex-Normals. There are basically two ways to blend these Normal-Vectors into one: an easy way and a difficult way.

Using DOT3, the assumption is made that the Normal-Map is valid when its surface faces the camera directly, but that the actual computation of its Normal-Vectors was never extremely accurate. What DOT3 does is add the two vectors, with one main caveat: we want the combined Normal-Vector to remain accurate at the edges of a model, as seen from the camera-position, even though something has been added to the Vertex-Normal.

The way DOT3 solves this problem is by setting the (Z) component of the Normal-Map to zero before performing the addition, and by normalizing the resulting sum after the addition, so that we are still left with a unit vector.
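
In C, that blend might be sketched like this; the vec3 type and the function name dot3_blend are my own inventions for illustration:

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

/* DOT3-style blend: discard the Z component of the mapped normal, add
 * the remainder to the interpolated Vertex-Normal, re-normalize. */
vec3 dot3_blend(vec3 vertex_normal, vec3 map_normal)
{
    vec3 sum = { vertex_normal.x + map_normal.x,
                 vertex_normal.y + map_normal.y,
                 vertex_normal.z };          /* map_normal.z treated as 0 */
    float len = sqrtf(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    vec3 out = { sum.x / len, sum.y / len, sum.z / len };
    return out;
}
```

Because the Z contribution of the map is dropped, the sum degenerates gracefully toward the plain Vertex-Normal at the silhouette edges of the model, which is the caveat mentioned above.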

On that assumption, the (X) and (Y) components of the Normal-Map can just as easily be computed as a differentiation of the Bump-Map in two directions. If we want our Normal-Map to be more accurate than that, then we should also apply a more-accurate method than DOT3, of blending it with the Vertex-Normal.
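
A sketch of such a differentiation, using central differences, might look as follows in C; height() and strength are assumed helpers/parameters, and real tools differ in their sign conventions and scaling:

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

extern float height(int u, int v);   /* sample the Bump-Map at a texel */

/* Derive one Normal-Map texel by differentiating the Height-Map in
 * two directions, then normalizing. */
vec3 bump_to_normal(int u, int v, float strength)
{
    /* Central differences approximate the two partial derivatives. */
    float dx = (height(u + 1, v) - height(u - 1, v)) * 0.5f * strength;
    float dy = (height(u, v + 1) - height(u, v - 1)) * 0.5f * strength;

    vec3 n = { -dx, -dy, 1.0f };
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len;  n.y /= len;  n.z /= len;
    return n;
}
```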

And so there exists Tangent-Space Mapping. According to Tangent-Mapping, the Vertex-Normal is also associated with at least one tangent-vector, as defined in model space, and with a bitangent-vector, which must either be computed by the Vertex Shader or provided as part of the model definition, as part of the Vertex Array.

What the Fragment Shader must do next – after assuming that the Vertex-Normal, Tangent and Bitangent vectors correspond to the Z, X and Y components of the Normal-Map respectively, and after normalizing them, since anything interpolated from unit vectors cannot be assumed to have remained a unit vector – is to treat them as though they formed the columns of another matrix, IF mapped Normal-Vectors multiplied by this matrix are simply to be rotated in 3D, into View Space.

I suppose I should add that these 3 vectors were part of the model definition, and needed to find their way into View Space before this matrix was built. If the rendering engine supplies one, this is where the Normal Matrix would come in – once per Vertex Shader invocation.
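
Putting the last two paragraphs together, a C sketch of the rotation might read as follows, assuming the three vectors have already been brought into View Space; all names here are hypothetical:

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 normalize3(vec3 v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    vec3 out = { v.x / len, v.y / len, v.z / len };
    return out;
}

/* Re-normalize the interpolated Tangent, Bitangent and Vertex-Normal,
 * then use them as the columns of a matrix that rotates a mapped
 * Normal-Vector from Tangent-Space into View Space. */
vec3 tangent_to_view(vec3 mapped, vec3 tangent, vec3 bitangent, vec3 normal)
{
    vec3 t = normalize3(tangent);
    vec3 b = normalize3(bitangent);
    vec3 n = normalize3(normal);

    /* Column-wise multiply: mapped.x scales T, mapped.y scales B,
     * mapped.z scales N. */
    vec3 out = { t.x * mapped.x + b.x * mapped.y + n.x * mapped.z,
                 t.y * mapped.x + b.y * mapped.y + n.y * mapped.z,
                 t.z * mapped.x + b.z * mapped.y + n.z * mapped.z };
    return out;
}
```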

Ideally, the Fragment Shader would perform a complete Orthonormalization of the resulting matrix, but doing so requires a lot of GPU work in the FS, and would therefore assume a very powerful graphics card. Yet an Orthonormalization will also ensure that the Transposed Matrix corresponds to the Inverse Matrix. And the sense must be preserved, of whether we are converting from View Space to Tangent-Space, or from Tangent-Space into View Space.
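
One standard way to perform such an Orthonormalization is the Gram-Schmidt process, sketched here in C on the three column vectors; whether a given shader can afford to do this is exactly the cost question raised above:

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3  sub3(vec3 a, vec3 b)
{ vec3 o = { a.x - b.x, a.y - b.y, a.z - b.z }; return o; }
static vec3  scale3(vec3 v, float s)
{ vec3 o = { v.x * s, v.y * s, v.z * s }; return o; }
static vec3  normalize3(vec3 v)
{ return scale3(v, 1.0f / sqrtf(dot3(v, v))); }

/* Gram-Schmidt: keep N, strip from T its projection onto N, then strip
 * from B its projections onto both N and T, normalizing each result. */
void orthonormalize_tbn(vec3 *t, vec3 *b, vec3 *n)
{
    *n = normalize3(*n);
    *t = normalize3(sub3(*t, scale3(*n, dot3(*t, *n))));
    *b = normalize3(sub3(sub3(*b, scale3(*n, dot3(*b, *n))),
                         scale3(*t, dot3(*b, *t))));
}
```

Once the matrix is orthonormal, its transpose can safely be used in place of its inverse, which is what makes the direction of the conversion – View Space to Tangent-Space, or the reverse – a matter of whether the matrix or its transpose is applied.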


The role Materials play in CGI

When content-designers work with their favorite model editors or scene editors in 3D, towards providing either a 3D game or another type of 3D application, they will often not map their 3D models directly to texture images. Instead, they will often connect each model to one Material, and the Material will then base its behavior on zero or more texture images. A friend of mine has asked what this describes.

Effectively, these Materials replace what a programmed shader would do, to define the surface properties of the simulated 3D model. They tend to have a greater role in CPU-based rendering / ray tracing than they do with raster-based / DirectX- or OpenGL-based graphics, but high-level editors may also be able to apply Materials to hardware-rendered graphics, IF they can provide some type of predefined shader that implements what the Material is supposed to implement.

A Material will often state such parameters as Gloss, Specular, Metallicity, etc.. When a camera-reflection-vector is computed, this reflection vector will land in some 3D direction relative to the defined light sources. Hence, a dot-product can be computed between it and the direction of each light source. Gloss represents the power to which this dot-product is raised, and higher powers result in narrower specular highlights. Often, Gloss must be compensated for the fact that the integral of a power-function decreases as the power increases, and that therefore, the average brightness of a glossy surface would otherwise seem to decrease…
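
A sketch of that specular term in C follows; the clamping, and one often-quoted compensation factor of (Gloss + 2) / 2, are shown, though the exact factor varies between lighting models and conventions:

```c
#include <math.h>

/* Specular term: 'r_dot_l' is the dot-product between the
 * camera-reflection-vector and the light direction, clamped so that
 * back-facing lobes contribute nothing. The leading factor compensates
 * for the shrinking integral of the power-function. */
float specular_term(float r_dot_l, float gloss)
{
    if (r_dot_l < 0.0f)
        r_dot_l = 0.0f;
    return ((gloss + 2.0f) / 2.0f) * powf(r_dot_l, gloss);
}
```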

But, if a content-designer enrolls a programmed shader, especially a Fragment Shader, then this shader replaces everything that a Material would otherwise have provided. It is often less practical, though not impossible, to implement a programmed shader in software-rendered contexts, where mainly for this reason, the use of Materials still prevails.

Also, the notion often occurs to people, however unproven, that Materials will only provide basic shading options, such as ‘DOT3 Bump-Mapping’, so that programmed shaders need to be used if more-sophisticated shading options are required, such as Tangent-Mapping. Yet, as I just wrote, every blend-mode a Material offers is being defined by some sort of predefined shader – i.e., by a pre-programmed algorithm.

OGRE is an open-source rendering system which requires that content-designers assign Materials to their models, even though hardware-rendering is being used, and these Materials then cause shaders to be loaded. Hence, if an OGRE content-designer wants to code his own shader, he must first also define his own Material, which will then load his custom shader.


Alpha-Blending

The concept by which a single object or entity can be translucent seems rather intuitive. But another concept, which is less intuitive, is that the degree to which it is so can be stated once per pixel, through an alpha-channel.

Just as every pixel can possess one channel for each of the three additive primary colors – Red, Green and Blue – it can possess a 4th channel named Alpha, which states on a scale from [ 0.0 … 1.0 ] how opaque it is.

This does not just apply to the texture images, whose pixels are named texels, but also to Fragment Shader output, as well as to the pixels actually associated with the drawing surface, which provide what is known as destination alpha, since the drawing surface is also the destination of the rendering, or its target.

Hence, there exist images whose pixels have a 4-channel format, as opposed to others with a mere 3-channel format.
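
Under OpenGL, for example, this distinction shows up in the internal format requested when an image is allocated; a brief sketch, with a hypothetical function name:

```c
#include <GL/gl.h>

/* Request a 4-channel or a 3-channel image for the currently-bound
 * 2D texture, depending on whether per-texel opacity is needed. */
void allocate_image(GLsizei w, GLsizei h, int want_alpha)
{
    if (want_alpha)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* R, G, B + Alpha */
    else
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, w, h, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, NULL);    /* color only */
}
```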

Now, there is no clear way for a display to display alpha. In certain cases, alpha in an image being viewed is hinted at by software, as a checkerboard pattern. But what we see is nevertheless color-information and not transparency. And so a logical question is: what is the function of this alpha-channel, which is being rendered to?

There are many ways in which content from numerous sources can be blended, but most of the high-quality ones require that much communication take place between rendering-stages. A strategy is desired in which output from rendering-passes is combined without requiring much communication between the passes. And alpha-blending is the de-facto strategy for that.

By default, closer entities, according to the position of their origins in view space, are rendered first. What this does is put closer values into the Z-buffer as soon as possible, so that the Z-buffer can prevent the rendering of more-distant entities as efficiently as possible. 3D rendering starts when the CPU gives the command to ‘draw’ one entity, which has an arbitrary position in 3D. This may be contrary to what 2D graphics might teach us to expect.

Alas, alpha-entities – i.e., entities that possess alpha textures – do not write to the Z-buffer, because if they did, they would prevent more-distant entities from being rendered. And then there would be no point in the closer ones being translucent.

The default way in which alpha-blending works is that the alpha-channel of the display records the extent to which entities have been left visible by previous entities, which have been rendered closer to the virtual camera.
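
Under OpenGL, the default behavior described in this posting corresponds roughly to the following state changes; the two function names are my own, for illustration:

```c
#include <GL/gl.h>

/* Begin drawing alpha-entities: the incoming fragment is weighted by
 * its alpha, what lies behind it by (1 - alpha), and the Z-buffer is
 * still read, but no longer written. */
void begin_alpha_pass(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);   /* alpha-entities do not write the Z-buffer */
}

/* Restore the opaque-rendering defaults. */
void end_alpha_pass(void)
{
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
```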
