Musing about Deferred Shading.

One of the subjects that fascinates me is Computer-Generated Images – CGI – specifically, the kind that renders a 3D scene to a 2D perspective. But that subject is still rather vast. One could narrow it by first declaring an interest in the hardware-accelerated form of CGI, which is also referred to as “Raster-Based Graphics”, and which works differently from ‘Ray-Tracing’. And after that, a further specialization can be made, into a modern form of it, known as “Deferred Shading”.

What happens with Deferred Shading is that an entire scene is first Rendered To Texture, but in such a way that, in addition to surface colours, separate output images also hold normal vectors and a depth value for each fragment of this initial rendering. The resulting ‘G-Buffer’ can then be put through post-processing, which produces the final 2D image. What advantages can this bring?

  • It allows for a virtually unlimited number of dynamic lights,
  • It allows for ‘SSAO’ – “Screen Space Ambient Occlusion” – to be implemented,
  • It allows for more-efficient reflections to be implemented, in the form of ‘SSRs’ – “Screen-Space Reflections”.
  • (There could be more benefits.)

One fact which people should be aware of, given traditional strategies for computing lighting, is that by default, the fragment shader needs to perform a separate computation for each light source that strikes the surface of a model. An exception to this has been possible with some game engines in the past, where a virtually unlimited number of static lights can be incorporated into a level map, by being baked in as additional light-maps. But when it comes to computing dynamic lights – lights that can move and change intensity during a 3D game – there have traditionally been limits to how many of them may illuminate a given surface simultaneously, that limit being set by how complex the fragment shader could be made, procedurally.
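To make that contrast concrete, below is a minimal, CPU-side sketch in Python of what a deferred lighting pass computes. The array names and the simple Lambertian point-light model are my own assumptions, not any particular engine’s API; a real implementation would run as a fragment shader on the GPU, reading the G-Buffer’s render targets:

    import numpy as np

    # Stand-ins for the G-Buffer's render targets, after the geometry pass:
    H, W = 480, 640
    albedo    = np.full((H, W, 3), 0.5, dtype=np.float32)  # surface colours
    normals   = np.zeros((H, W, 3), dtype=np.float32)      # per-fragment normals
    normals[..., 2] = 1.0                                  # here, all facing the camera
    positions = np.zeros((H, W, 3), dtype=np.float32)      # reconstructed from depth

    # Any number of dynamic point lights; more could be appended
    # without changing the geometry pass at all.
    lights = [
        {"pos": np.array([10.0, 10.0, 10.0]), "colour": np.array([1.0, 0.9, 0.8])},
        {"pos": np.array([-5.0,  2.0,  8.0]), "colour": np.array([0.2, 0.3, 1.0])},
    ]

    out = np.zeros((H, W, 3), dtype=np.float32)
    for light in lights:
        to_light = light["pos"] - positions                 # vector towards the light
        dist     = np.linalg.norm(to_light, axis=-1, keepdims=True)
        l_dir    = to_light / np.maximum(dist, 1e-6)
        n_dot_l  = np.clip((normals * l_dir).sum(axis=-1, keepdims=True), 0.0, 1.0)
        out     += albedo * light["colour"] * n_dot_l / dist ** 2   # diffuse term

    out = np.clip(out, 0.0, 1.0)   # the final 2D image

The point to notice is that the loop over lights runs against the finished G-Buffer, so its cost is proportional to the number of visible pixels times the number of lights, and no longer to the number of triangles each light happens to strike.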

(Updated 1/15/2020, 14h45 … )


A little trick needed, to get Blender to smooth-shade an object.

I was recently working on a project in Blender, which I have little experience doing, and noticed that, once my project was completed, the exported results showed flat-shading of my mesh-approximations of spheres. My intent had been to use mesh-approximations of spheres, but to have them smooth-shaded – Phong-Shaded, for example.

Because I was exporting the results to WebGL, my next suspicion was that the WebGL platform was somehow handicapped into always flat-shading the surfaces of its models. But a problem with this very suspicion was that, according to something I had already posted, to convert a model which is designed to be smooth-shaded into a model which is flat-shaded is not only bad practice in modelling, but also difficult to do. Hence, whatever WebGL malfunction might have been taking place would also need to be accomplishing something computationally difficult.

As it turns out, when one wants an object to be smooth-shaded in Blender, there is an extra setting one needs to select, to make it so:

[Screenshot: Screenshot_20200104_124756c]

Once that setting has been clicked on for every object that is to be smooth-shaded, they will turn out to be so (a scripted way to apply it in bulk is sketched after the list below). Not only that, but the exported file-size actually became smaller, once I had done this for my 6 spheroids, than it was when they were flat-shaded – plausibly because a flat-shaded export must duplicate each vertex for every face that uses it, so that each copy can carry its own face normal, while a smooth-shaded export can share vertices. And this last observation reassures me that:

  • Flat-Shading does in fact work as I had expected, and
  • WebGL is not handicapped out of smooth-shading.
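As a side note, the same setting can be applied in bulk from Blender’s built-in Python console, for anyone who would rather script it than click each object in turn. This is a minimal sketch, assuming the objects in question are currently selected:

    import bpy

    # Mark every face of every selected mesh object as smooth-shaded,
    # which is what the 'Shade Smooth' operation does per object.
    for obj in bpy.context.selected_objects:
        if obj.type == 'MESH':
            for poly in obj.data.polygons:
                poly.use_smooth = True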


It should be pointed out that, while Blender allows Materials to be given different reflectance models, one of which is “Lambert Shading”, it will not offer the user different methods of interpolating the normal vector between vertex-normals, because this interpolation, if there is to be one, is usually defined by an external program, or, in many cases, by the GPU, if hardware-accelerated graphics is to be applied.
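For readers curious about what that interpolation typically looks like, here is a small sketch of how a rasterizer might blend the three vertex-normals of a triangle with barycentric weights, renormalizing per fragment – which is the essence of smooth shading. The function name and the example weights are my own illustration, not any specific GPU’s pipeline:

    import numpy as np

    def fragment_normal(n0, n1, n2, w0, w1, w2):
        """Interpolate three vertex-normals with barycentric weights
        (w0 + w1 + w2 == 1), then renormalize, roughly as a GPU would
        do for each fragment when smooth-shading."""
        n = w0 * np.asarray(n0) + w1 * np.asarray(n1) + w2 * np.asarray(n2)
        return n / np.linalg.norm(n)

    # Halfway along one edge of the triangle:
    print(fragment_normal([0, 0, 1], [0, 1, 0], [1, 0, 0], 0.5, 0.5, 0.0))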

Dirk


The role that Materials and Textures play in CGI

I once had a friend who asked me what the difference was, between a Texture and a Material, in CGI. And as it was then, it’s difficult to provide a definitive answer which is true in all cases, because each graphics application framework has a slightly different definition of what a material is.

What I told my friend was that in general, a material is a kind of node, to which we can attach texture-images, but that usually, the material additionally allows the content-designer to specify certain parameters, with which the textures are to be rendered. My friend next wanted to know what, then, the difference was between a material and a shader. And basically, when we use material nodes, we usually don’t code any shaders. But if we did code a shader, then the logical place to tell our graphics application to load it is as another parameter of the material. In fact, the subtle details of what a material does are often defined by a shader of some sort, which the content designer doesn’t see.
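To illustrate the relationship, here is a minimal sketch of how a material might be represented as a data structure. The field names are my own invention, chosen only to show a material holding texture slots, rendering parameters, and an optional custom shader – no real framework’s API is being quoted:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Material:
        """An illustrative material node: texture-images are attached to
        it, and it also carries the parameters with which they render."""
        diffuse_texture: Optional[str] = None   # path to a colour image
        normal_texture:  Optional[str] = None   # path to a normal-map image
        specular_power:  float = 32.0           # a rendering parameter
        two_sided:       bool = False           # another rendering parameter
        custom_shader:   Optional[str] = None   # the logical place to name one

    brushed_metal = Material(
        diffuse_texture="metal_diffuse.png",
        normal_texture="metal_normals.png",
        specular_power=96.0,
    )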

But materials will sometimes come with a more-powerful GUI, which displays Nodes to the content-designer visually and allows him to connect them, in order to decide how his texture images are to be combined into the final appearance of a 3D model.

My friend was not happy with this answer, because I had not defined what a material is in a way that applies to ALL Graphics Applications. And the reason I did not, is the fact that each graphics application is slightly different in this regard.

Dirk


Terrain Objects

In this YouTube video:

[Embedded video]

I told potential viewers that there can be more than one way in which a 2D image can be transformed into some form of 3D geometry – a mesh suitable for CGI. This took the form of Terrain Objects.

Some of my readers may already know a lot about Terrain Objects, but then again, some may not.

There was a detail about Terrain Objects which my screen-cast failed to explain. A serious game engine, or other 3D rendering engine, will offer its content-developer a variety of different object types, out of which he or she can build a game, etc. And most game engines will actually implement Terrain Objects as Entities distinct from generic Models. Not only that, but Convex Models exist in addition to the types of Models that would be used to represent Actors… And the exact way in which this is organized usually depends on the game engine.

What most game engines will do is allow their content-developer simply to specify a height-map – a 2D image whose pixel-values are the heights – and to convert this into a 3D mesh behind the scenes for him. Not only that, but powerful game engines will actually support ‘Chunked Terrain’ in some way, which means that the Terrain of a given Game Level is subdivided into Chunks, not all of which need to be loaded onto the graphics card at once.
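As a rough sketch of what ‘behind the scenes’ amounts to, the following Python function converts a height-map array into vertices and triangle indices. The grid spacing, the row-major vertex layout and the function name are my own assumptions for illustration; real engines add normals, texture coordinates, level-of-detail and chunking on top of this:

    import numpy as np

    def heightmap_to_mesh(heights, spacing=1.0):
        """Turn a 2D array of heights into a triangle mesh:
        one vertex per pixel, two triangles per grid cell."""
        rows, cols = heights.shape

        # One vertex per pixel: x and z from the grid position, y from the pixel value.
        xs, zs = np.meshgrid(np.arange(cols) * spacing, np.arange(rows) * spacing)
        vertices = np.stack([xs, heights, zs], axis=-1).reshape(-1, 3)

        # Two triangles per grid cell, indexing the row-major vertex array.
        indices = []
        for r in range(rows - 1):
            for c in range(cols - 1):
                i = r * cols + c
                indices.append([i, i + cols, i + 1])             # upper-left triangle
                indices.append([i + 1, i + cols, i + cols + 1])  # lower-right triangle
        return vertices.astype(np.float32), np.array(indices, dtype=np.uint32)

    # A tiny 3x3 example height-map:
    verts, tris = heightmap_to_mesh(np.array([[0., 1., 0.],
                                              [1., 2., 1.],
                                              [0., 1., 0.]]))
    print(verts.shape, tris.shape)   # (9, 3) (8, 3)

Even this tiny example hints at the cost: each pixel became three 4-byte floats of position, and each grid cell another six 4-byte indices.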

The reason why this is done is the fact that the actual 3D mesh consumes far more graphics memory than a 2D image would, especially in the case of Terrains. For example, a 1024×1024 height-map stored at 8 bits per pixel occupies about 1MB, while the resulting mesh – roughly a million vertices at 12 bytes each for positions alone, plus some two million triangles’ worth of indices – can easily run to tens of megabytes. Not having to load the geographical definition of an entire world at once has its benefits.

But I also felt that it was beyond the scope of my video, to explain that.

(Update 05/08/2018, 15h35 … )
