The role that Materials and Textures play in CGI

A friend once asked me what the difference is between a Texture and a Material, in CGI. And as it was then, it’s difficult to provide a definitive answer that is true in all cases, because each graphics application framework has a slightly different definition of what a material is.

What I told my friend was that, in general, a material is a kind of node to which we can attach texture-images, but that usually, the material additionally allows the content-designer to specify certain parameters with which the textures are to be rendered. My friend next wanted to know what the difference was, then, between a material and a shader. And basically, when we use material nodes, we usually don’t code any shaders. But if we did code a shader, then the logical place to tell our graphics application to load it is as another parameter of the material. In fact, the subtle details of what a material does are often defined by a shader of some sort, which the content designer doesn’t see.

But materials will sometimes have a more powerful GUI, which displays nodes visually in front of the content-designer and lets him connect them, in order to decide how his texture images are to be combined into the final appearance of a 3D model.

My friend was not happy with this answer, because I had not defined what a material is in a way that applies to ALL graphics applications. And the reason I did not is that each graphics application is slightly different in this regard.

Dirk

 

Terrain Objects

In this YouTube video, I told potential viewers that there can be more than one way in which a 2D image can be transformed into some form of 3D geometry, in the form of a mesh suitable for CGI. This took the form of Terrain Objects.

Some of my readers may already know a lot about Terrain Objects, but then again, some may not.

There was a detail to Terrain Objects which my screen-cast failed to explain. A serious game engine, or other 3D rendering engine, will offer its content-developer a variety of different objects, out of which he or she can build a game, etc.. And most game engines will actually implement Terrain Objects as Entities distinct from generic Models. Not only that, but Convex Models exist in addition to the types of Models that would be used to represent Actors… And the exact way in which this is organized usually depends on the game engine.

What most game engines will do is actually allow their content-developer just to specify a height-map, meaning a 2D image whose pixel-values are the heights, and to convert this into a 3D mesh behind the scenes for him. Not only that, but powerful game engines will actually support ‘Chunked Terrain’ in some way, which means that the Terrain of a given Game Level is subdivided into Chunks, not all of which need to be loaded onto the graphics card at once.

The reason why this is done is the fact that the actual 3D mesh consumes far more graphics memory than a 2D image would, especially in the case of Terrains. For example, a 1024×1024, 8-bit height-map occupies about 1MB, while the corresponding mesh of roughly a million vertices, each storing at least a position, a normal and texture coordinates, easily runs to tens of megabytes. Not having to load the geographical definition of an entire world at once has its benefits.
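
To make the height-map-to-mesh conversion concrete, here is a minimal sketch in Python with NumPy. It is my own illustration, not the code of any particular game engine, and the function and parameter names (heightmap_to_mesh, cell_size, height_scale) are my own choices:

```python
import numpy as np

def heightmap_to_mesh(heights, cell_size=1.0, height_scale=1.0):
    """Convert a 2D array of height values into vertices and triangle indices.

    heights      : 2D NumPy array, one height value per pixel of the height-map
    cell_size    : horizontal spacing between neighbouring vertices
    height_scale : factor mapping pixel values to world-space height
    """
    rows, cols = heights.shape

    # One vertex per pixel: (x, y, z), with y taken as "up" here.
    xs, zs = np.meshgrid(np.arange(cols), np.arange(rows))
    vertices = np.stack([xs * cell_size,
                         heights * height_scale,
                         zs * cell_size], axis=-1).reshape(-1, 3)

    # Two triangles per grid cell, wound consistently.
    indices = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            indices.append([i, i + cols, i + 1])
            indices.append([i + 1, i + cols, i + cols + 1])
    return vertices, np.array(indices, dtype=np.uint32)

# Example: even a tiny 4x4 height-map already yields 16 vertices and 18 triangles.
demo = np.random.rand(4, 4).astype(np.float32)
verts, tris = heightmap_to_mesh(demo, cell_size=1.0, height_scale=10.0)
print(verts.shape, tris.shape)   # (16, 3) (18, 3)
```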

But I also felt that explaining chunked Terrain was beyond the scope of my video.

(Update 05/08/2018, 15h35 … )


Something Discouraging, When Using ‘Ayam’

One of the subjects which I have written about, here and here, is that I seem to like the open-source application named ‘Ayam’, as a potential way to create images which are 2D perspective views of a virtual 3D scene. This program just might do for me what the proprietary program ‘Bryce’ once did for me under Windows.

But obviously, it’s harder to use Ayam to create any meaningful images. And one reason seems to be that doing so requires us to define explicit, simulated light-sources, and the fact that there are no defaults that would set these light-sources up with parameters that lead to good images.

I.e., the program will compute per-pixel color-values that are derived from the positions, geometries, and intensities of the light-sources, but with no advance warning to the user of what intensities he needs to give his light-sources, so that the pixel-brightness values end up in range.

Usually, if we begin by rendering a cube, and we find that it renders, but only as a black outline against a transparent background, then we may have everything set up correctly except for the intensity of the light-source. It might just take some trial-and-error before we set that correctly. And what I have found is that on their arbitrary scale, an intensity of 16 or so just works well, in experimental scenes I’ve created.

At the same time, ‘Ayam’ has a specific idiosyncrasy to its GUI, which I rather appreciate. Its many numeric parameter-widgets have an arrow to the right of them, and an arrow to their left, which respectively increase or decrease the value in the numeric field. The way these little buttons work is either to double or to halve the value they alter.

I suppose this could confuse some first-time users, because the numeric value in the field defaults to zero. This means that clicking on the buttons does not alter the value, because doubling or halving zero results in zero. But what we can do is type a (+1) or a (-1) into the numeric field, and then, if we find that the parameter in question isn’t even close to where it’s supposed to be, we can repeatedly use the buttons, until we either find that (+16) is approximately correct, or until we find that (+0.0625) is about correct… Users just need to remember that after choosing widget-settings, we must also click ‘Apply’ to commit those settings before moving on, or else ‘Ayam’ will tend to forget them.
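
Just to illustrate the arithmetic behind those buttons, here is a trivial Python sketch. It is my own illustration of the doubling-and-halving behaviour, not Ayam’s actual code:

```python
# Starting from zero, clicking either arrow gets us nowhere:
value = 0.0
print(value * 2, value / 2)                    # 0.0 0.0

# Starting from (+1), four clicks in either direction already span
# the range mentioned above:
value = 1.0
ups = [value * 2 ** n for n in range(5)]       # 1, 2, 4, 8, 16
downs = [value / 2 ** n for n in range(5)]     # 1, 0.5, 0.25, 0.125, 0.0625
print(ups)
print(downs)
```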

Dirk

 

DOT3 Versus Tangent-Space Bump-Mapping

One concept which has been used often in the design of Fragment Shaders and/or Materials is “DOT3 Bump-Mapping”. The way in which this scheme works is rather straightforward. A Bump-Map, which is provided as one (source) texture image out of several, does not define coloration, but rather relief, as a kind of Height-Map. And it must first be converted into a Normal-Map, which is a specially-formatted type of image, in which the Red, Green and Blue component channels for each texel are able to represent floating-point values from (-1.0 … +1.0), even though each color channel is still only an assumed 8-bit pixel-value belonging to the image. There are several ways to do this, out of which one has been accepted as standard, in which the Red, Green and Blue channels represent the X, Y and Z components of a Normal-Vector.
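
As a sketch of that conversion, here is some Python with NumPy. It assumes a grayscale height-map with values in (0.0 … 1.0) and a user-chosen ‘strength’ factor (the function and parameter names are my own), and it packs each component from (-1.0 … +1.0) into an 8-bit channel in the common R=X, G=Y, B=Z layout:

```python
import numpy as np

def bumpmap_to_normalmap(height, strength=2.0):
    """Convert a 2D height-map (floats in 0..1) into an 8-bit RGB normal-map.

    The X and Y slopes are taken as finite differences of the height values,
    a Z component is supplied, each per-texel vector is normalized, and the
    components (-1.0 ... +1.0) are packed into 8-bit channels as 0 ... 255.
    """
    # Differentiate the height-map in two directions.
    dx = np.gradient(height, axis=1) * strength
    dy = np.gradient(height, axis=0) * strength

    # Build un-normalized normal vectors (-dx, -dy, 1) per texel.
    nx, ny, nz = -dx, -dy, np.ones_like(height)
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / length, ny / length, nz / length

    # Map (-1.0 ... +1.0) onto (0 ... 255): R=X, G=Y, B=Z.
    rgb = np.stack([nx, ny, nz], axis=-1)
    return np.round((rgb * 0.5 + 0.5) * 255.0).astype(np.uint8)

# A flat height-map yields the familiar uniform (128, 128, 255) blue.
flat = np.zeros((4, 4), dtype=np.float32)
print(bumpmap_to_normalmap(flat)[0, 0])   # [128 128 255]
```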

The problem that arises in the design of simple shaders is that this technique offers two Normal-Vectors, because an original Normal-Vector was already provided, interpolated from the Vertex-Normals. There are basically two ways to blend these Normal-Vectors into one: an easy way and a difficult way.

Using DOT3, the assumption is made that the Normal-Map is valid when its surface is facing the camera directly, but that the actual computation of its Normal-Vectors was never extremely accurate. What DOT3 does is to add the vectors, with one main caveat: We want the combined Normal-Vector to be accurate at the edges of a model, as seen from the camera-position, even though something has been added to the Vertex-Normal.

The way DOT3 solves this problem is by setting the (Z) component of the Normal-Map to zero before performing the addition, and by normalizing the resulting sum after the addition, so that we are left with a unit vector anyway.
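
As a sketch of that easy blend, again in Python rather than in actual shader code, and with function and variable names of my own choosing:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def dot3_blend(vertex_normal, mapped_normal):
    """Blend an interpolated Vertex-Normal with a decoded Normal-Map vector
    the 'easy' way: zero the map's Z component, add, then re-normalize so
    that a unit vector comes out the other end."""
    perturbation = np.array([mapped_normal[0], mapped_normal[1], 0.0])
    return normalize(normalize(vertex_normal) + perturbation)

# Example: a Vertex-Normal at the silhouette of the model (perpendicular to
# the camera's Z axis), perturbed by a vector decoded from the Normal-Map.
v_normal = np.array([1.0, 0.0, 0.0])
m_normal = np.array([0.1, -0.2, 0.97])
print(dot3_blend(v_normal, m_normal))
```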

On that assumption, the (X) and (Y) components of the Normal-Map can just as easily be computed as a differentiation of the Bump-Map in two directions. If we want our Normal-Map to be more accurate than that, then we should also apply a more accurate method of blending it with the Vertex-Normal than DOT3.

And so there exists Tangent-Space Mapping. According to Tangent-Mapping, the Vertex-Normal is also associated with at least one tangent-vector, as defined in model space, and a bitangent-vector must either be computed by the Vertex Shader or provided as part of the model definition, as part of the Vertex Array.

What the Fragment Shader must next do is assume that the Vertex-Normal, Tangent and Bitangent vectors correspond to the Z, X and Y components of the Normal-Map respectively, normalize them, since anything interpolated from unit vectors cannot be assumed to have remained a unit vector, and then treat them as though they formed the columns of another matrix, IF Mapped Normal-Vectors multiplied by this texture matrix are simply to be rotated in 3D, into View Space.

(Above Corrected 07/05/2018.)

I suppose I should add that these 3 vectors were part of the model definition, and needed to find their way into View Space before building this matrix. If the rendering engine supplies one, this is where the Normal Matrix would come in, once per Vertex Shader invocation.
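
As a sketch of that multiplication, assuming the Normal Matrix has already brought the three vectors into View Space, and using Python with NumPy as a stand-in for what a Fragment Shader would do per pixel (the function and variable names are my own):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def tangent_space_to_view(mapped_normal, tangent, bitangent, normal):
    """Rotate a decoded Normal-Map vector into View Space.

    The interpolated Tangent, Bitangent and Vertex-Normal (already in View
    Space) are re-normalized and used as the columns of a matrix, matching
    the map's X, Y and Z components respectively."""
    tbn = np.column_stack([normalize(tangent),
                           normalize(bitangent),
                           normalize(normal)])
    return normalize(tbn @ mapped_normal)

# Example: a surface facing the camera, with an axis-aligned basis.
t = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
m = np.array([0.1, -0.2, 0.97])      # decoded from the Normal-Map
print(tangent_space_to_view(m, t, b, n))
```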

Ideally, the Fragment Shader would perform a complete Orthonormalization of the resulting matrix, but doing so also requires a lot of GPU work in the FS, and would therefore assume a very powerful graphics card. But an Orthonormalization will also ensure that a Transposed Matrix does correspond to an Inverse Matrix. And the sense must be preserved, of whether we are converting from View Space to Tangent-Space, or from Tangent-Space into View Space.
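
And here is a sketch of such an Orthonormalization in the same Python stand-in, using the Gram-Schmidt process, which is one possible method rather than what any given Fragment Shader necessarily does. After it, the transpose of the matrix genuinely is its inverse:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def orthonormalize_tbn(tangent, bitangent, normal):
    """Gram-Schmidt: keep the Vertex-Normal, make the Tangent perpendicular
    to it, then make the Bitangent perpendicular to both. The resulting
    matrix is orthonormal, so its transpose equals its inverse, and it can
    be applied in either sense (View Space <-> Tangent-Space)."""
    n = normalize(normal)
    t = normalize(tangent - np.dot(tangent, n) * n)
    b = normalize(bitangent - np.dot(bitangent, n) * n - np.dot(bitangent, t) * t)
    return np.column_stack([t, b, n])

# Slightly skewed inputs, of the sort interpolation tends to produce:
tbn = orthonormalize_tbn(np.array([1.0, 0.05, 0.0]),
                         np.array([0.02, 1.0, 0.1]),
                         np.array([0.0, 0.03, 1.0]))
print(np.allclose(tbn.T @ tbn, np.eye(3)))   # True
```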
