Musing about Deferred Shading.

One of the subjects which fascinates me is Computer-Generated Images – CGI – specifically, images that render a 3D scene to a 2D perspective. But that subject is still rather vast. One could narrow it by first declaring an interest in the hardware-accelerated form of CGI, which is also referred to as “Raster-Based Graphics”, and which works differently from ‘Ray-Tracing’. And after that, a further specialization can be made, into a modern form of it, known as “Deferred Shading”.

What happens with Deferred Shading is that an entire scene is first Rendered To Texture, but in such a way that, in addition to surface colours, separate output images also hold normal-vectors and a distance-value (a depth-value), for each fragment of this initial rendering. The resulting ‘G-Buffer’ can then be put through post-processing, which produces the final 2D image. (A sketch of how such a buffer might be allocated follows the list below.) What advantages can this bring?

  • It allows for a virtually unlimited number of dynamic lights,
  • It allows for ‘SSAO’ – “Screen Space Ambient Occlusion” – to be implemented,
  • It allows for more-efficient reflections to be implemented, in the form of ‘SSRs’ – “Screen-Space Reflections”,
  • (There could be more benefits.)
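
Below is a minimal sketch, in TypeScript against the WebGL 2 API, of how such a G-Buffer might be allocated. The function name, the choice of texture formats, and the three-channel layout are assumptions of mine, for illustration, not a fixed standard:

```typescript
// A hedged sketch: one framebuffer with three colour attachments, holding
// surface colours, normal-vectors and distance-values, as described above.
function createGBuffer(gl: WebGL2RenderingContext, width: number, height: number) {
  // Floating-point render targets need this extension, even under WebGL 2.
  if (!gl.getExtension('EXT_color_buffer_float')) {
    throw new Error('Float render targets are unsupported here.');
  }
  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);

  // Helper: allocate one texture per G-Buffer channel and attach it.
  const makeTarget = (internalFormat: number, attachment: number) => {
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texStorage2D(gl.TEXTURE_2D, 1, internalFormat, width, height);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, attachment, gl.TEXTURE_2D, tex, 0);
    return tex;
  };

  const albedo  = makeTarget(gl.RGBA8,   gl.COLOR_ATTACHMENT0); // surface colours
  const normals = makeTarget(gl.RGBA16F, gl.COLOR_ATTACHMENT1); // normal-vectors
  const depth   = makeTarget(gl.R32F,    gl.COLOR_ATTACHMENT2); // distance-values

  // Tell the first-pass fragment shader that it has three colour outputs.
  gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1, gl.COLOR_ATTACHMENT2]);

  // An ordinary depth attachment, so that the first pass can depth-test.
  const rb = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT24, width, height);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, rb);

  return { fbo, albedo, normals, depth };
}
```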

One fact which people should be aware of, given traditional strategies for computing lighting, is that, by default, the fragment shader needs to perform a separate computation for each light source that strikes the surface of a model. An exception to this has been possible with some game engines in the past, where a virtually unlimited number of static lights could be incorporated into a level map, by being baked in as additional shadow-maps. But when it comes to computing dynamic lights – lights that can move and change intensity during a 3D game – there have traditionally been limits to how many of those may illuminate a given surface simultaneously. That limit was defined by how complex a fragment shader could be made, procedurally.
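
To make plain why Deferred Shading lifts that limit: the second, ‘deferred’ pass reads the G-Buffer once per screen fragment, and loops over the lights there, so that the cost scales with the number of lights times the number of screen pixels, rather than with shader complexity per light, per surface. What follows is a hedged sketch of such a lighting-pass fragment shader – GLSL ES 3.00 source held in a TypeScript constant – in which the uniform names and the light-count cap are assumptions of mine:

```typescript
// A sketch of the deferred lighting pass, run over a full-screen quad.
const lightingPassFS = `#version 300 es
precision highp float;

uniform sampler2D uAlbedo;    // surface colours, from the G-Buffer
uniform sampler2D uNormals;   // normal-vectors, from the G-Buffer
uniform sampler2D uPosition;  // positions reconstructed from depth-values

const int MAX_LIGHTS = 64;    // bounded only by available uniform space
uniform int  uLightCount;
uniform vec3 uLightPos[MAX_LIGHTS];
uniform vec3 uLightColour[MAX_LIGHTS];

in  vec2 vUV;
out vec4 fragColour;

void main() {
  vec3 albedo = texture(uAlbedo,   vUV).rgb;
  vec3 n      = normalize(texture(uNormals, vUV).xyz);
  vec3 p      = texture(uPosition, vUV).xyz;

  vec3 lit = vec3(0.0);
  for (int i = 0; i < MAX_LIGHTS; ++i) {  // one loop, not one pass per light
    if (i >= uLightCount) break;
    vec3  toLight = uLightPos[i] - p;
    float atten   = 1.0 / (1.0 + dot(toLight, toLight));  // simple fall-off
    lit += albedo * uLightColour[i]
         * max(dot(n, normalize(toLight)), 0.0) * atten;
  }
  fragColour = vec4(lit, 1.0);
}`;
```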

(Updated 1/15/2020, 14h45 … )

Continue reading Musing about Deferred Shading.

A butterfly is being oppressed by 6 evil spheroids!

As this previous posting of mine chronicles, I have acquired an Open-Source Tool, which enables me to create 3D / CGI content, and to distribute that in the form of a WebGL Scene.

The following URL will therefore test the reader’s browser further, in its ability to render WebGL properly:

http://dirkmittler.homeip.net/WebGL/Marbles6.html

And this is a complete rundown of my source files:

http://dirkmittler.homeip.net/WebGL

(Updated 01/07/2020, 17h00 … )

(As of 01/04/2020, 22h35 : )

On one of my alternate computers, I also have Firefox ESR running under Linux, and that browser was reluctant to initialize WebGL. There is a workaround, but I’d only try it if I were sure that the graphics hardware / GPU on a given computer is strong, and properly installed – meaning, stable…
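
For what it’s worth, a page can test whether a browser will initialize WebGL at all, before committing to a scene. The following is a minimal TypeScript sketch of such a test – not the actual workaround referred to above:

```typescript
// Probe for a WebGL context, and log what the driver reports, which hints
// at whether the underlying GPU is 'strong' and properly installed.
function webGLAvailable(): boolean {
  const canvas = document.createElement('canvas');
  // Try WebGL 2 first, then fall back to WebGL 1.
  const gl = canvas.getContext('webgl2') ?? canvas.getContext('webgl');
  if (gl === null) return false;
  console.log(gl.getParameter(gl.VERSION), gl.getParameter(gl.RENDERER));
  return true;
}
```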

Continue reading A butterfly is being oppressed by 6 evil spheroids!

Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).

There is a fact about modern graphics chips which some people may not be aware of – especially some Linux users – but which I was recently reminded of, because I have bought an e-Reader that runs the Android O/S, but that features the energy-saving benefits of “e-Ink”. e-Ink is an innovative technology with a surface somewhat resembling paper, the brightness of whose pixels can vary between white and black. It mainly uses available light, although back-lit and front-lit versions of e-Ink now exist, and it consumes very little current, so that it’s frequently possible to read an entire book on one battery-charge. With an average Android tablet that merely has an LCD, the battery-life can impede the enjoyment of an e-Book.

An LCD still has one thing in common with the old CRTs: it is refreshed at a fixed frequency by something called a “raster” – a pattern that scans a region of memory and feeds pixel-values to the display sequentially, perhaps 60 times per second, thus refreshing the display that often. e-Ink pixels, by contrast, are sent a signal once, to change brightness, and then stay at the assigned brightness-level until they receive another signal, to change again. What this means is that, at the hardware-level, e-Ink is less powerful than ‘frame-buffer devices’ once were.

But any PC, Mac or Android graphics card or graphics chip manufactured later than the 1990s has a non-trivial GPU – a ‘Graphics Processing Unit’ – that acts as a co-processor, working in parallel with the computer’s main CPU, to take much of the workload associated with rendering graphics to the screen off the CPU. Much of what a modern GPU does consists of taking as input, pixels which software running on the CPU wrote either to a region of dedicated graphics memory, or, in the case of an Android device, to a region of memory shared between the GPU and the CPU, but forming part of the device’s RAM. The GPU then typically ‘transforms’ the image of these pixels, to the way they will finally appear on the screen. This ends up modifying a ‘Frame-Buffer’, the contents of which are controlled by the GPU and not the CPU, but which the raster scans, resulting in output to the actual screen.

Transforming an image can take place in a strictly 2D sense, or can take place in a sense that preserves 3D perspective, but that results in 2D screen-output. And it gets applied to desktop graphics as much as to application content. In the case of desktop graphics, the result is called ‘Compositing’, while in the case of application content, the result is either fancier output, or faster execution of the application, on the CPU. And on many Android devices, compositing results in multiple Home-Screens that can be scrolled, and the glitz of which is proven by how smoothly they scroll.

Either way, a modern GPU is much more versatile than a frame-buffer device was. And its benefits can appear in unexpected places, such as when an application outputs text to the screen, and the text is merely expected to scroll. Typically, the rasterization of fonts still takes place on the CPU, and results in pixel-values, corresponding to the text to be displayed, being written to shared memory. But the actual scrolling of the text can be performed by the GPU: more than one page of text, at a fixed position in the drawing surface the CPU drew it to, is transformed by the GPU to advancing screen-positions, without the CPU having to redraw any pixels. (:1) This effect is often made more convincing by the fact that, at the end of a sequence, a transformed image is sometimes replaced by a fixed image, in a transition of the output between two graphics that are completely identical. These two graphics would reside in separate regions of RAM, even though the GPU can render a transition between them.
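
What follows is a minimal TypeScript / WebGL 2 sketch of that division of labour – only a sketch, in which the helper names, the page size, and the uniform ‘uScroll’ are assumptions of mine. The CPU rasterizes the text once; thereafter, each frame changes only a transform:

```typescript
// One-time work, on the CPU: rasterize the text into a tall drawing surface,
// and copy it to the GPU as a texture.
function uploadTextPage(gl: WebGL2RenderingContext, text: string): WebGLTexture {
  const page = document.createElement('canvas');
  page.width = 512;
  page.height = 2048;                       // more than one page of text
  const ctx = page.getContext('2d')!;
  ctx.font = '16px serif';
  text.split('\n').forEach((line, i) => ctx.fillText(line, 8, 20 * (i + 1)));

  const tex = gl.createTexture()!;          // the single CPU -> GPU copy
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, page);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  return tex;
}

// Per-frame work, on the GPU: redraw a textured quad at an advancing offset.
// 'prog' is assumed to be a trivial textured-quad shader program.
function drawScrolled(gl: WebGL2RenderingContext, prog: WebGLProgram, y: number) {
  gl.useProgram(prog);
  // The only value that changes per frame: a transform, not any pixels.
  gl.uniform1f(gl.getUniformLocation(prog, 'uScroll'), y);
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}
```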

(Updated 4/20/2019, 12h45 … )

Continue reading Understanding why some e-Readers fall short of performing as Android tablets (Setting, Hidden Benefits).

I no longer have Compiz-Fusion running on the LXDE-based computer, named ‘Klexel’.

According to this earlier posting, I had run into stability issues with my newly-reinstalled Linux computer, which I name ‘Klexel’. Well, the only sensible way, finally, to solve those problems was to deactivate ‘Compiz Fusion’, which is a special window-manager / compositor that creates a desktop-cube animation, as well as certain other effects chosen by the user out of a long menu, but which needs to run on the graphics hardware, using OpenGL.

Even though Compiz Fusion is fancy and seems like a nice idea, I’ve run into the following problems with it, in my own experience:

  • Compiz is incompatible with Plasma 5, which is still my preferred desktop environment,
  • If a computer has a weak graphics chip, such as the one in ‘Klexel’, which relies on ‘i915 Support’, trying to run Compiz forces the so-called GPU to jump through too many hoops, to display what it’s being asked to display.

Before a certain point in time, even a hardware-accelerated graphics chip only consisted of X vertex pipelines and Y fragment pipelines, and had other strict limitations on what it could do. It was after that point in time that the “Unified Shader Model” was introduced, whereby any GPU core can act as a vertex-shader core, as a fragment-shader core, etc.. And after that point in time, the GPU also became capable of rendering its output to texture images, several stages deep… Well, programmers today tend to program for the eventuality that the host machine has ‘a real GPU’, with the Unified Shader Model and unlimited cores, as well as unlimited texture space.
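
(As an aside, and tying back to the WebGL experiments above: a program can at least probe, at run-time, how ‘real’ the GPU actually is, instead of assuming unlimited resources. The following is a minimal TypeScript sketch under WebGL 1, in which the chosen thresholds are arbitrary assumptions of mine:)

```typescript
// Query a few of the limits that old, pre-Unified-Shader-Model chips
// report as very small numbers.
function gpuSeemsCapable(gl: WebGLRenderingContext): boolean {
  const maxTexture  = gl.getParameter(gl.MAX_TEXTURE_SIZE) as number;
  const maxVaryings = gl.getParameter(gl.MAX_VARYING_VECTORS) as number;
  const maxFragUni  = gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS) as number;
  console.log({ maxTexture, maxVaryings, maxFragUni });
  return maxTexture >= 4096 && maxFragUni >= 64;
}
```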

The “HP Compaq DC7100 SFF”, which has become my computer ‘Klexel’, is an ancient computer whose graphics chip stems from ‘the old days’. That chip seems to have been an Intel 910, whose hardware capability is direct rendering with OpenGL 1.4 – the open-standard equivalent of DirectX 7 or 8. Even though some Compiz effects only require OpenGL 1.4, by default, I need to run the computer named ‘Klexel’ without compositing:

[Screenshot from 2018-09-04, 11h56: the desktop, running without compositing.]

Also, before, when this was the computer ‘Walnut’, it actually still had KDE 3 on it! And KDE 3 also ran, essentially, without compositing.

It should finally be stable again, now.

By comparison, the computer which acts as Web-server and hosts this blog, which I name ‘Phoenix’, has as its graphics chip an Nvidia “GeForce 6150SE”, which is more powerful than the Intel ‘i915’ series was, and is capable of OpenGL 2.1 – equivalent to DirectX 9 – but which still predates the Unified-Shader-Model chips. Microsoft has even dropped support for this graphics chip because, according to Microsoft, it’s no longer powerful enough. And so, up-to-date Windows versions won’t run on either of these two computers.

(Update 09/04/2018, 18h20 : )

Continue reading I no longer have Compiz-Fusion running on the LXDE-based computer, named ‘Klexel’.