An oversight which I made in an earlier posting: Matrices with Negative Determinants.

One of the subjects which I have written about a number of times, especially in This Posting, is the use of ‘rotation matrices’, and what their determinant is. This subject requires some understanding of Linear Algebra to be understood in turn. But it also requires just a bit more insight into what the equations stand for.

A matrix can exist whose columns are mutually perpendicular – i.e., orthogonal – and are each unit vectors; such a matrix is called orthonormal. What I wrote was that, in such a case, the determinant of the matrix would equal (+1), and that its transpose can be used in place of computing its inverse.
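
A minimal sketch, using Python with NumPy and an arbitrary 30-degree rotation about the Z axis, demonstrates both properties:

    import numpy as np

    # A rotation of 30 degrees about the Z axis: its columns are
    # mutually perpendicular unit vectors, i.e., the matrix is orthonormal.
    theta = np.radians(30.0)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c,  -s,  0.0],
                  [s,   c,  0.0],
                  [0.0, 0.0, 1.0]])

    print(np.linalg.det(R))                    # -> 1.0, within rounding
    print(np.allclose(R.T, np.linalg.inv(R)))  # -> True: the transpose is the inverse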

Such a matrix can be used in computer games, CGI, etc., to rotate objects – even objects that are distinctly not rectangular in appearance.

A situation which I had overlooked was that the determinant of such a matrix could also be (-1). And if it is, then applying this matrix to a 3D system of coordinates has as its effect:

  • To convert between a right-handed coordinate system and a left-handed coordinate system accurately, or
  • To derive a model that is the mirror-image of the original model.

What tends to happen in Scientific Computing, as well as in certain other areas, is that right-handed coordinate systems are used often, and left-handed coordinate systems less frequently – yet left-handed coordinate systems are still used. And when they are, this conversion will eventually need to take place, and it no longer counts as a rotation. I.e., it has been observed that, if a right-handed helix is rotated whichever way, it stays a right-handed helix. Yet, if an orthonormal matrix with a determinant of (-1) is applied to its model coordinates, then it will become a left-handed helix…
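
The same fact can be demonstrated with another minimal sketch in Python with NumPy; the helix sampling, and the test for handedness via a triple product of consecutive chords, are arbitrary choices for this illustration:

    import numpy as np

    # Sample a right-handed helix: it advances along +Z while turning
    # counter-clockwise, as seen looking down from +Z.
    t = np.linspace(0.0, 4.0 * np.pi, 200)
    helix = np.stack([np.cos(t), np.sin(t), t], axis=1)

    def chirality(points):
        # Sign of the triple product of three consecutive chords:
        # positive for a right-handed curve, negative for a left-handed one.
        chords = np.diff(points[:4], axis=0)
        return np.sign(np.linalg.det(chords))

    # An orthonormal matrix that mirrors the Z axis; its determinant is (-1).
    M = np.diag([1.0, 1.0, -1.0])

    print(np.linalg.det(M))        # -> -1.0
    print(chirality(helix))        # ->  1.0: right-handed
    print(chirality(helix @ M.T))  # -> -1.0: left-handed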

Dirk

 

Musing about Deferred Shading.

One of the subjects which fascinate me is Computer-Generated Images – CGI – specifically, images that render a 3D scene to a 2D perspective. But that subject is still rather vast. One could narrow it by first suggesting an interest in the hardware-accelerated form of CGI, which is also referred to as “Raster-Based Graphics”, and which works differently from ‘Ray-Tracing’. And after that, a further specialization can be made, into a modern form of it, known as “Deferred Shading”.

What happens with Deferred Shading is that an entire scene is Rendered To Texture, but in such a way that, in addition to surface colours, separate output images also hold normal-vectors and a distance-value (a depth-value) for each fragment of this initial rendering. And then, the resulting ‘G-Buffer’ can be put through post-processing, which results in the final 2D image (a minimal sketch of this follows the list below). What advantages can this bring?

  • It allows for a virtually unlimited number of dynamic lights,
  • It allows for ‘SSAO’ – “Screen Space Ambient Occlusion” – to be implemented,
  • It allows for more-efficient reflections to be implemented, in the form of ‘SSRs’ – “Screen-Space Reflections”.
  • (There could be more benefits.)
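
To make the first of these advantages concrete, here is a minimal sketch in Python with NumPy. It is only an analogy: a real G-Buffer lives in GPU memory and the lighting pass is shader code, and the array names, the light count, and the simple Lambertian (diffuse-only) lighting model below are all assumptions chosen for brevity:

    import numpy as np

    H, W = 4, 6  # a toy 4x6 'screen'

    # A toy G-Buffer: per-fragment surface colour, world-space normal and
    # world-space position.  (A real G-Buffer stores a depth-value, from
    # which the position is reconstructed; it is stored directly here.)
    albedo = np.ones((H, W, 3)) * np.array([0.8, 0.6, 0.4])
    normal = np.zeros((H, W, 3))
    normal[..., 2] = 1.0                  # every fragment faces +Z
    position = np.zeros((H, W, 3))        # a flat plane at Z = 0

    # One hundred dynamic point lights, as (position, colour) pairs.
    rng = np.random.default_rng(0)
    lights = [(rng.uniform((-5.0, -5.0, 1.0), (5.0, 5.0, 5.0)),
               rng.uniform(0.1, 1.0, 3)) for _ in range(100)]

    # The deferred lighting pass: simple Lambertian (diffuse) lighting,
    # computed per light over the whole screen, regardless of how complex
    # the geometry was that filled the G-Buffer.
    out = np.zeros((H, W, 3))
    for light_pos, light_col in lights:
        to_light = light_pos - position
        dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
        n_dot_l = np.clip(np.sum(normal * (to_light / dist), axis=-1,
                                 keepdims=True), 0.0, None)
        out += albedo * light_col * n_dot_l / dist**2

    print(out.shape)  # -> (4, 6, 3): the final 2D image

Because this pass iterates over screen pixels rather than over scene geometry, the cost of each additional light stays constant no matter how many models were drawn into the G-Buffer – which is what makes the number of dynamic lights virtually unlimited.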

One fact which people should be aware of, given traditional strategies for computing lighting, is that, by default, the fragment shader needs to perform a separate computation for each light source that strikes the surface of a model. An exception to this has been possible with some game engines in the past, where a virtually unlimited number of static lights can be incorporated into a level map, by being baked in as additional shadow-maps. But when it comes to computing dynamic lights – lights that can move and change intensity during a 3D game – there have traditionally been limits to how many of those may illuminate a given surface simultaneously. This limit was defined by how complex a fragment shader could be made, procedurally.
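
For contrast, the following sketch imitates that traditional, forward-style fragment computation, in the same toy Python terms as above. ‘MAX_LIGHTS’ and ‘shade_fragment’ are hypothetical names, and an actual fragment shader would be GPU code, such as GLSL; the point is that the loop over lights runs once per fragment, with an upper bound fixed when the shader is compiled:

    import numpy as np

    MAX_LIGHTS = 8  # fixed by how complex the compiled shader may be

    def shade_fragment(albedo, normal, position, lights):
        # A forward-style fragment computation: the light loop runs inside
        # the (hypothetical) fragment shader itself, so at most MAX_LIGHTS
        # lights are evaluated, on every fragment of every surface drawn.
        colour = np.zeros(3)
        for light_pos, light_col in lights[:MAX_LIGHTS]:
            to_light = light_pos - position
            dist = np.linalg.norm(to_light)
            n_dot_l = max(float(np.dot(normal, to_light / dist)), 0.0)
            colour += albedo * light_col * n_dot_l / dist**2
        return colour

    # Example: one fragment, facing +Z, lit by a single white light.
    print(shade_fragment(np.array([0.8, 0.6, 0.4]),
                         np.array([0.0, 0.0, 1.0]),
                         np.zeros(3),
                         [(np.array([0.0, 0.0, 3.0]),
                           np.array([1.0, 1.0, 1.0]))]))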

(Updated 1/15/2020, 14h45 … )
