The role that Materials and Textures play in CGI

A friend once asked me what the difference is, between a Texture and a Material, in CGI. And as it was then, it remains difficult to provide a definitive answer that's true in all cases, because each graphics application framework has a slightly different definition of what a material is.

What I told my friend was that, in general, a material is a kind of node to which we can attach texture-images, but that the material usually also allows the content-designer to specify certain parameters, with which the textures are to be rendered. My friend next wanted to know what the difference was, then, between a material and a shader. Basically, when we use material nodes, we usually don't code any shaders. But if we did code a shader, then the logical place to tell our graphics application to load it is as another parameter of the material. In fact, the subtle details of what a material does are often defined by a shader of some sort, which the content-designer doesn't see.
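Purely as a hypothetical sketch, and not as any specific engine's API, a material node of that sort might be declared along the following lines, in C++ (all of the names here, such as Material and custom_shader_path, are my own inventions):

    #include <map>
    #include <memory>
    #include <optional>
    #include <string>

    // Stand-in for a texture image that has already been loaded.
    struct Texture { std::string source_file; };

    // Illustrative material node: it bundles texture-images together with the
    // parameters that say how those textures are to be rendered, and it can
    // optionally name a custom shader, which overrides the engine's default one.
    struct Material {
        std::map<std::string, std::shared_ptr<Texture>> textures;  // "diffuse", "normal", ...
        float specular_intensity = 0.5f;
        float roughness          = 0.8f;
        bool  two_sided          = false;
        std::optional<std::string> custom_shader_path;  // usually left empty
    };

    int main()
    {
        Material stone;
        stone.textures["diffuse"] = std::make_shared<Texture>(Texture{"stone_albedo.png"});
        stone.roughness = 0.95f;
        // Only when a hand-written shader is wanted does this field get set:
        stone.custom_shader_path = "shaders/tri_planar.glsl";
        return 0;
    }

In most cases the content-designer only ever touches the textures and the numeric parameters; the shader working behind the material stays hidden.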

But materials will sometimes have a more powerful GUI, which displays visible Nodes that the content-designer can connect, in order to decide how his texture-images are to be combined into the final appearance of a 3D model, and this node-based view makes that work easier.
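For instance, one of the simplest nodes such a graph could contain is a 'Multiply' node, which darkens a base-colour texture wherever a second, grey-scale texture is dark. A minimal sketch of that per-texel operation, with invented types, might be:

    #include <cstdint>
    #include <cstdio>

    struct Texel { std::uint8_t r, g, b; };

    // Hypothetical 'Multiply' node: scales a base-colour texel by a grey-scale
    // mask (0..255), so the base colour is darkened wherever the mask is dark.
    Texel multiply_node(Texel base, std::uint8_t mask)
    {
        auto scale = [mask](std::uint8_t c) {
            return static_cast<std::uint8_t>((c * mask) / 255);
        };
        return { scale(base.r), scale(base.g), scale(base.b) };
    }

    int main()
    {
        Texel result = multiply_node({200, 180, 160}, 128);  // half-strength mask
        std::printf("%u %u %u\n", result.r, result.g, result.b);
        return 0;
    }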

My friend was not happy with this answer, because I had not defined what a material is, in a way that applies to ALL Graphics Applications. And the reason I did not, was the fact that each graphics application is slightly different in this regard.

Dirk

 

Terrain Objects

In this YouTube Video:

I told potential viewers that there can be more than one way in which a 2D Image can be transformed into some form of 3D Geometry, in the form of a mesh suitable for CGI. The example I gave took the form of Terrain Objects.

Some of my readers may already know a lot about Terrain Objects, but then again, some may not.

There was a detail to Terrain Objects which my screen-cast failed to explain. A serious game engine, or other 3D rendering engine, will offer its content-developer a variety of different objects out of which he or she can build a game, etc. And most game engines will actually implement Terrain Objects as Entities distinct from generic Models. Not only that, but Convex Models exist in addition to the types of Models that would be used to represent Actors… And the exact way in which this is organized usually depends on the game engine.

What most game engines will do is actually allow their content-developer just to specify a height-map, meaning the 2D image whose pixel-values are the heights, and to convert this into a 3D mesh behind the scenes for him. Not only that, but powerful game engines will actually support ‘Chunked Terrain’ in some way, which means that the Terrain of a given Game Level is subdivided into Chunks, not all of which need to be loaded onto the graphics card at once.
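As a minimal sketch of what that behind-the-scenes conversion amounts to (the names heightmap_to_mesh, TerrainMesh and so on are mine, not any engine's API): every height-map pixel becomes one vertex, and every grid cell between four neighbouring pixels becomes two triangles.

    #include <cstddef>
    #include <vector>

    struct Vertex { float x, y, z; };

    struct TerrainMesh {
        std::vector<Vertex>       vertices;
        std::vector<unsigned int> indices;  // three indices per triangle
    };

    // Sketch of the hidden conversion: 'samples' holds width x depth height
    // values in the range 0.0 .. 1.0, read out of the height-map image.
    TerrainMesh heightmap_to_mesh(const std::vector<float>& samples,
                                  std::size_t width, std::size_t depth,
                                  float cell_size, float height_scale)
    {
        TerrainMesh mesh;
        mesh.vertices.reserve(width * depth);

        // One vertex per height sample.
        for (std::size_t z = 0; z < depth; ++z)
            for (std::size_t x = 0; x < width; ++x)
                mesh.vertices.push_back({ static_cast<float>(x) * cell_size,
                                          samples[z * width + x] * height_scale,
                                          static_cast<float>(z) * cell_size });

        // Two triangles per grid cell.
        for (std::size_t z = 0; z + 1 < depth; ++z)
            for (std::size_t x = 0; x + 1 < width; ++x) {
                unsigned int i0 = static_cast<unsigned int>(z * width + x);
                unsigned int i1 = i0 + 1;
                unsigned int i2 = static_cast<unsigned int>((z + 1) * width + x);
                unsigned int i3 = i2 + 1;
                mesh.indices.insert(mesh.indices.end(), { i0, i2, i1 });
                mesh.indices.insert(mesh.indices.end(), { i1, i2, i3 });
            }
        return mesh;
    }

    int main()
    {
        // Tiny 3x3 height-map, just to exercise the function.
        std::vector<float> samples = { 0.0f, 0.2f, 0.0f,
                                       0.2f, 1.0f, 0.2f,
                                       0.0f, 0.2f, 0.0f };
        TerrainMesh mesh = heightmap_to_mesh(samples, 3, 3, 1.0f, 10.0f);
        return (mesh.vertices.size() == 9 && mesh.indices.size() == 24) ? 0 : 1;
    }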

The reason for which Terrain is chunked in this way, is the fact that the actual 3D mesh consumes far more graphics memory than a 2D Image would, especially in the case of Terrains. Not having to load the geographical definition of an entire world at once has its benefits.
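As a rough, hypothetical example of the difference: a 1025×1025 height-map stored as 16-bit grey-scale pixels occupies only about 2MB. The corresponding mesh has roughly one million vertices, and if each vertex carries a position, a normal and a texture coordinate, at 32 bytes per vertex, that alone is already about 33MB, before roughly 25MB of triangle indices are added. The exact figures depend on the vertex format a given engine uses, but the ratio is typically more than an order of magnitude.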

But I also felt that it was beyond the scope of my video, to explain that.

(Update 05/08/2018, 15h35 … )


About the Black Borders Around some of my Screen-Shots

One practice I have is to take simple screen-shots of my Linux desktop, using the KDE-compatible utility named ‘KSnapshot’. It can usually be activated by just tapping on the ‘Print-Screen’ keyboard-key, and if not, KDE can be customized with a hot-key combination to launch it just as easily.

If I use this utility to take a snapshot of one single application-window, then it may or may not happen that the screen-shot of that window has a wide, black border. And the appearance of this border may confuse my readers.

The reason this border appears has to do with the fact that I have Desktop Compositing activated, which on my Linux systems is based on a version of the Wayland Compositor that has been built specifically to work together with the X-server.

One of the compositing effects I have enabled is to draw a bluish halo around the active application-window. Because this effect is implemented as much as possible at the expense of GPU power and not CPU power, it has its own way of working, specific to OpenGL 2 or OpenGL 3. Essentially, the application draws its GUI-window into a specifically-assigned memory region called a ‘drawing surface’, and not directly to the screen-area to be seen. Instead, the drawing surface of any one application window is taken by the compositor to be a Texture Image, just as 3D Models would have Texture Images. And the way Wayland organizes its scene essentially just simplifies the computation of coordinates. Because OpenGL versions are optimized for 3D, they have specialized ways to turn 3D coordinates into 2D screen-coordinates, which the Wayland Compositor bypasses for the most part, by feeding the GPU some simplified matrices, where the GPU would in fact be able to accept much more complex ones.
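Purely to illustrate what a ‘simplified matrix’ can mean, and not as a claim about what my compositor actually feeds the GPU: a compositor that only needs to paste rectangular surfaces onto the screen can get away with a single orthographic projection of the following sort, where a 3D engine would also be supplying perspective and view matrices.

    #include <array>
    #include <cstdio>

    // A 4x4 matrix in column-major order, as OpenGL conventionally expects.
    using Mat4 = std::array<float, 16>;

    // Orthographic projection mapping pixel coordinates (0..width, 0..height)
    // straight to clip space (-1..+1), with y growing downward.  No perspective
    // divide ever alters the result, which is what makes the matrix "simplified".
    Mat4 ortho_2d(float width, float height)
    {
        Mat4 m{};                 // all zeros
        m[0]  =  2.0f / width;    // x scale
        m[5]  = -2.0f / height;   // y scale, flipped
        m[10] = -1.0f;            // z passed through, effectively unused
        m[12] = -1.0f;            // x offset
        m[13] =  1.0f;            // y offset
        m[15] =  1.0f;
        return m;
    }

    int main()
    {
        Mat4 m = ortho_2d(1920.0f, 1080.0f);
        std::printf("x scale %.6f, y scale %.6f\n", m[0], m[5]);
        return 0;
    }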

In the end, in order for any one application-window to receive a blue halo, to indicate that it is the one active application in the foreground, its drawing surface must be made larger to begin with than what the window-size alone would normally require. And then, the blue halo exists statically within this drawing-surface, but outside the normal set of coordinates of the drawn window.
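As a hypothetical illustration of the geometry involved (the margin of 24 pixels below is an invented figure, not what KDE actually allocates):

    #include <cstdio>

    struct Rect { int x, y, width, height; };

    // The drawing surface is allocated larger than the window itself, so that a
    // static halo can live in the margin, outside the window's own coordinates.
    constexpr int kHaloMargin = 24;  // hypothetical margin, in pixels

    Rect surface_for_window(int window_w, int window_h)
    {
        return { 0, 0, window_w + 2 * kHaloMargin, window_h + 2 * kHaloMargin };
    }

    // The window proper is then drawn into this sub-rectangle of the surface.
    Rect window_subrect(int window_w, int window_h)
    {
        return { kHaloMargin, kHaloMargin, window_w, window_h };
    }

    int main()
    {
        Rect surface = surface_for_window(800, 600);
        Rect inner   = window_subrect(800, 600);
        std::printf("surface %dx%d, window drawn at (%d,%d), %dx%d\n",
                    surface.width, surface.height,
                    inner.x, inner.y, inner.width, inner.height);
        return 0;
    }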

The halo appears over the desktop layout, and over other application windows, through the simple use of alpha-blending on the GPU, using a special blending-mode (a rough sketch in code follows this list):

  • The inverse of the per-texel alpha determines by how much the background should remain visible.
  • If the present window is not the active window, the background simply replaces the foreground.
  • If the present window is the active window, the two color-values add, causing the halo to seem to glow.
  • The CPU can decide to switch the alpha-blending mode of an entity, without requiring the entity be reloaded.
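Here is a minimal, CPU-side sketch of one plausible reading of those rules, using straight (non-premultiplied) colour values; on the GPU, the two cases would correspond to the blend states glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) respectively, and switching between them is only a state change:

    #include <algorithm>
    #include <cstdio>

    struct Rgba { float r, g, b, a; };  // 0.0 .. 1.0, colour not premultiplied

    // One plausible reading of the rules above:
    //  - inactive window: ordinary "over" blending, so that where the alpha is
    //    low, the background shows through in place of the foreground;
    //  - active window:   the foreground colour is added at full strength on top
    //    of the attenuated background, which makes the halo appear to glow.
    Rgba composite(Rgba fg, Rgba bg, bool window_is_active)
    {
        float fg_weight = window_is_active ? 1.0f : fg.a;  // add vs. blend
        float bg_weight = 1.0f - fg.a;                      // inverse of the alpha
        auto mix = [&](float f, float b) {
            return std::min(1.0f, f * fg_weight + b * bg_weight);
        };
        return { mix(fg.r, bg.r), mix(fg.g, bg.g), mix(fg.b, bg.b), 1.0f };
    }

    int main()
    {
        Rgba halo = { 0.3f, 0.5f, 1.0f, 0.25f };  // faint bluish halo texel
        Rgba desk = { 0.6f, 0.6f, 0.6f, 1.0f };   // grey desktop behind it
        Rgba glow = composite(halo, desk, true);
        std::printf("active-window halo texel: %.2f %.2f %.2f\n",
                    glow.r, glow.g, glow.b);
        return 0;
    }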

KSnapshot sometimes recognizes that, if instructed to take a screen-shot of one window, it should copy only a sub-rectangle of the drawing surface. But in certain cases the KSnapshot utility does not recognize the need to do this, and just captures the entire drawing surface, minus whatever alpha-channel the drawing surface might have, since screen-shots are supposed to be without alpha-channels. So the reader will not be able to make out the effect, because by the time a screen-shot has been saved to my hard-drive, it is without any alpha-channel.

And there are two ways I know of by default, to reduce an image that has an alpha-channel to one that does not:

  1. The non-alpha output-image can show the input image as though in front of a checkerboard-pattern, taking its alpha into account,
  2. The non-alpha output-image can show the input image as though just in front of a default colour, such as ‘black’, but again taking its alpha into account.

Which of these takes place would be decided by a library, in my case resulting in a screen-shot that has a wide black border around it. This border represents the maximum extent by which static, 2D effects can be drawn in, on the assumption that those effects were defined on the CPU and not on the GPU.
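To make those two reductions concrete, here is a rough sketch in code of what such a library effectively computes, with invented types, and not the actual implementation of any particular library:

    #include <cstdint>
    #include <vector>

    struct Rgba8 { std::uint8_t r, g, b, a; };
    struct Rgb8  { std::uint8_t r, g, b; };

    // Composite one texel over an opaque background colour, taking its alpha
    // into account: out = src * a + bg * (1 - a), per channel.
    Rgb8 over(Rgba8 src, Rgb8 bg)
    {
        auto blend = [&](std::uint8_t s, std::uint8_t b) {
            return static_cast<std::uint8_t>((s * src.a + b * (255 - src.a)) / 255);
        };
        return { blend(src.r, bg.r), blend(src.g, bg.g), blend(src.b, bg.b) };
    }

    // Option (2): flatten against a single default colour.  With black as that
    // colour, any partially-transparent margin around the window comes out as
    // the wide dark border seen in the screen-shots.
    std::vector<Rgb8> flatten_over_colour(const std::vector<Rgba8>& img, Rgb8 bg)
    {
        std::vector<Rgb8> out;
        out.reserve(img.size());
        for (Rgba8 px : img) out.push_back(over(px, bg));
        return out;
    }

    // Option (1): flatten against a checkerboard pattern instead.
    std::vector<Rgb8> flatten_over_checkerboard(const std::vector<Rgba8>& img,
                                                int width, int cell = 8)
    {
        std::vector<Rgb8> out;
        out.reserve(img.size());
        for (int i = 0; i < static_cast<int>(img.size()); ++i) {
            int x = i % width, y = i / width;
            bool light = ((x / cell) + (y / cell)) % 2 == 0;
            Rgb8 bg = light ? Rgb8{200, 200, 200} : Rgb8{128, 128, 128};
            out.push_back(over(img[i], bg));
        }
        return out;
    }

    int main()
    {
        std::vector<Rgba8> img(16 * 16, Rgba8{0, 0, 255, 64});  // translucent blue
        auto on_black = flatten_over_colour(img, Rgb8{0, 0, 0});
        auto on_board = flatten_over_checkerboard(img, 16);
        return (on_black.size() == on_board.size()) ? 0 : 1;
    }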

So, just as the actual application could be instructed to draw its window into a sub-rectangle of the whole desktop, it can be instructed to draw its window into a sub-rectangle of its assigned drawing-surface. And with this effect enabled, this is indeed how it’s done.

Dirk