(Screenshot: Screenshot_20200103_221443)

A butterfly is being oppressed by 6 evil spheroids!

As this previous posting of mine chronicles, I have acquired an Open-Source Tool which enables me to create 3D / CGI content, and to distribute it in the form of a WebGL Scene.

The following URL will therefore test, a bit more thoroughly, the ability of the reader’s browser to render WebGL properly:

http://dirkmittler.homeip.net/WebGL/Marbles6.html

And this is a complete rundown of my source files:

http://dirkmittler.homeip.net/WebGL


 

(Updated 01/07/2020, 17h00 … )

(As of 01/04/2020, 22h35 : )

On one of my alternate computers, I also have Firefox ESR running under Linux, and that browser was reluctant to initialize WebGL. There is a workaround, but I’d only try it if I were sure that the graphics hardware / GPU on a given computer is strong, and properly installed, meaning, stable…

In the URL bar, type ‘about:config’. Accept the scare-message, that proceeding will void your warranty. There is no warranty! :-D  Then, search for ‘webgl’, and set ‘webgl.force-enabled‘ to ‘true‘.

At that point, the reader will have enabled WebGL. (:1)


 

(Update 01/05/2020, 8h25 : )

I’m surprised at how powerful WebGL has become. This platform may be making use of WebGL 2. Apparently, WebGL 2 allows the Web-page to allocate a Render-To-Texture (‘RTT’) target up to a maximum of 4 times.

Because shadows require a depth-map per light source, up to 4 light-sources may cast shadows. But, Screen-Space Ambient Occlusion (‘SSAO’) is also available. Because this is a technique that relies on Deferred Shading, and therefore consumes 1 ‘RTT Slot’, if it’s enabled, it reduces the number of light-sources which may cast shadows to 3.

I enabled 1 in-scene (real-time) reflection, which, again takes up 1 ‘RTT Slot’, but I did not make use of ‘SSAO’. Therefore, together with my one shadow-casting light-source, this is taking up 2 ‘RTT Slots’.

The documentation indicates that the latest ‘Blend4Web’ version provides a full implementation of Refraction. (In the cited Web-page, “IOR” stands for ‘Index Of Refraction’.) Therefore, it would also take up 1 ‘RTT Slot’ (per refracting surface). I did not make any use of refraction here. (:2)

What all this means is that this little 3D Scene will take up quite a bit of GPU resources, and my development server tells me that it takes up about 52 MB of Graphics Memory. So, it’s not for systems with weak graphics chips! :-)

A note of caution, though:

After we have made the settings under the Mirror Section of the Materials Panel, we are instructed also to enable reflections in the “Real-Time Reflections” section of the Object Panel. Here, if we select “Reflective”, what next becomes visible is a drop-down that defaults to “Plane”. This drop-down is followed by a selection field named “Reflection Plane”, which should normally remain empty.

What these fields do is to tell the rendering engine how many ‘RTT Instances’ to use, which, under WebGL, are in short supply. If anything other than ‘Plane’ were chosen here, namely “Cube”, then this would instruct the rendering engine to allocate 6 ‘RTT Instances’, to complete a cube-map around the object which is to display reflections. But, because WebGL apparently only allows 4 ‘RTT Slots’, selecting ‘Cube’ here will only result in a Blender Scene that cannot be loaded completely into the WebGL viewer.


 

Another Observation:

When using ‘Blender’, materials can either be defined ‘the easy way’, as “Stack Materials”, or ‘the more complex, node-defined way’, the latter of which is referred to as “Cycles Render(ing)”. Apparently, ‘Blend4Web’ gives full support to node-based materials, and even adds node-types of its own. This is the only way in which either of the 2 types of refraction can be implemented.

In my own brief experiences with ‘Cycles Render’, a material needs to be set up as a node-based material from the beginning, in order for this feature ever to become available under ‘Blender’. My own attempts, first to define a simple ‘Stack Material’, and then to replace it with a node-based ‘Cycles Render’ material, have always resulted in failure in the past.


 

Further:

What my Blend4Web Scene above seems to display is 6 objects with transparency, all casting shadows on 1 receiving object, that being a plane, thereby eventually consuming 1 ‘RTT Slot’. In reality, it cost me several hours last night to discover that objects with transparency cannot cast shadows.

The workaround for a case like this was to select all 6 visible spheroids – the objects that are supposed to cast shadows – simultaneously, then to left-click on Duplicate, and then, without having moved the mouse, to right-click, thereby placing the duplicated objects into the scene without displacing them. The 6 newly created objects will be the ones that actually cast the shadows. In order for them to do so, but not to become visible, they must first be deselected and then re-selected one-by-one, the materials need to be deleted from their materials list, and then, in the Object Panel, under the Shadows Section, “Cast Shadows” as well as “Cast Only” are to be selected. (“Cast Only” becomes visible as soon as “Cast Shadows” has been selected.)

This resulted in 6 invisible proxy-objects, the only purpose of which was to cast shadows. If their materials list still contained the materials of the duplicated, original objects, then the per-object settings to ‘Cast’ and ‘Cast Only’ apparently did not work fully, because these settings were already different according to the (still-linked) materials.


 

(Update 01/05/2020, 10h10 : )

I suppose that a possible follow-up question which my experiments with Blend4Web specifically pose would be, ‘Is it actually possible to perform shadow-mapping, including blurring the shadows, using only 1 RTT Instance per shadow-casting light source?’ And the short answer is, ‘Theoretically, Yes.’

Long Answer:

In order to perform shadow-mapping, a Render-To-Texture needs to be performed, which is actually a depth-map as seen from the light source, as if that were yet another camera-position or view. Fragments further from the light-source than the value indicated in this depth-map receive a shadow, as seen from the real camera-position. In addition, depth-values written to this depth-map need to be ‘biased’, i.e., made minutely more distant than the values used to determine whether the fragments will become shadowed or not, just to prevent objects which can both cast and receive shadows from hiding their own light-source-facing surfaces…

An optional practice which some CGI developers – coders – follow is to create a separate shadow-map per camera position, which simply indicates how many times the fragments rendered to it have become shadowed. In that case, issues can also arise over how to communicate shadow-values to the view that is finally rendered. This can be solved using a Stencil Buffer, thereby avoiding the need to program an ‘Über-Shader’.

The way that second step can be bypassed is that, within the fragment-shader that renders to the final camera-position, the comparison can take place according to a fragment’s light-source coordinates as well as its camera coordinates, so that shadow-application can be computed within one fragment-shader invocation. That may not be the most GPU-efficient way to do it, but it skips the need for an extra RTT Instance.
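To make that single-pass comparison more concrete, here is a minimal fragment-shader sketch in GLSL ES, as WebGL would accept it. This is only my own illustration, not Blend4Web’s actual shader code; the uniform and varying names are invented, the depth-map is assumed to have been written into the red channel of an ordinary colour texture, and the vertex shader is assumed to have supplied the fragment’s position in the light’s clip space:

```glsl
// A minimal sketch (not Blend4Web's code): the biased shadow test,
// performed inside the one fragment-shader that renders to the camera.
precision mediump float;

uniform sampler2D u_depthMap;   // depth-map, rendered from the light's point of view
uniform vec3 u_baseColour;      // the fragment's lit colour, computed elsewhere

varying vec4 v_lightSpacePos;   // fragment position in the light's clip space

void main() {
    // Perspective-divide, then remap from [-1, +1] to [0, 1], so that x,y
    // become texture coordinates into the depth-map and z becomes a depth.
    vec3 proj = (v_lightSpacePos.xyz / v_lightSpacePos.w) * 0.5 + 0.5;

    // The depth which the light-source 'saw' at this location.
    float storedDepth = texture2D(u_depthMap, proj.xy).r;

    // The bias effectively treats the stored depth as minutely more distant,
    // so that a surface does not end up shadowing itself.
    float bias = 0.005;
    float lightFactor = (proj.z - bias > storedDepth) ? 0.5 : 1.0;

    gl_FragColor = vec4(u_baseColour * lightFactor, 1.0);
}
```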

What can also be done then is that the depth-map, not any shadow-map, can be sampled over a number of points, so that a return value, stating whether each point is exposed to the given light-source or not, can be transformed into a count of how many points were thus exposed (…), but still within the same fragment-shader invocation. This makes for a more-complicated fragment-shader, but one which professionals may have the ability to get working.

It’s assumed that when the user increases the blur radius, he is not increasing the number of points to be sub-sampled, but only increasing the distance between them.
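A hedged sketch of what that multi-sample count could look like follows, again in GLSL ES and again only as my own illustration (the uniform names are invented): the depth-map is sampled at a fixed 3×3 grid of offsets, the number of exposed samples is counted, and the blur radius only stretches the spacing between the offsets, never their number:

```glsl
// A minimal sketch of the multi-sample idea, not Blend4Web's shader.
precision mediump float;

uniform sampler2D u_depthMap;     // depth as seen from the light source
uniform float u_blurRadius;       // spacing between sample points, in texels
uniform vec2  u_depthMapTexel;    // 1.0 / depth-map resolution

varying vec4 v_lightSpacePos;     // fragment position in the light's clip space

float softShadow(vec3 proj) {
    float bias = 0.005;
    float exposedCount = 0.0;

    // A fixed 3x3 grid of sample points: increasing u_blurRadius spreads
    // the points further apart, it does not add more of them.
    for (int x = -1; x <= 1; x++) {
        for (int y = -1; y <= 1; y++) {
            vec2 offset = vec2(float(x), float(y)) * u_blurRadius * u_depthMapTexel;
            float storedDepth = texture2D(u_depthMap, proj.xy + offset).r;
            if (proj.z - bias <= storedDepth) {
                exposedCount += 1.0;
            }
        }
    }
    // Fraction of sample points at which the fragment is exposed to the light.
    return exposedCount / 9.0;
}

void main() {
    vec3 proj = (v_lightSpacePos.xyz / v_lightSpacePos.w) * 0.5 + 0.5;
    float lightFactor = softShadow(proj);
    gl_FragColor = vec4(vec3(lightFactor), 1.0);
}
```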


 

Similar logic applies, when a texture-image is applied to a light-source, to achieve “Caustic Lighting”. The current fragment’s coordinates can be transformed, both into light-source coordinates and into camera-position coordinates, so that the texture-image associated with the light-source can be sampled at known, (2D) U,V coordinates, and the value fetched, can be made to modulate the fragment as it will be rendered to the camera-position.
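A sketch of that modulation might look as follows, under the same assumptions as before (GLSL ES, invented names, the light’s clip-space position supplied by the vertex shader); it is only meant to show where the light’s texture-image gets sampled:

```glsl
// A minimal sketch of projecting a texture-image from a light source.
precision mediump float;

uniform sampler2D u_lightTexture;   // the 'caustic' image attached to the light
uniform vec3 u_baseColour;          // the fragment's colour, lit conventionally

varying vec4 v_lightSpacePos;       // fragment position in the light's clip space

void main() {
    // Transform from the light's clip space to [0, 1] texture coordinates.
    vec2 uv = (v_lightSpacePos.xy / v_lightSpacePos.w) * 0.5 + 0.5;

    // Fetch the light's texture and let it modulate the base colour.
    vec3 caustic = texture2D(u_lightTexture, uv).rgb;
    gl_FragColor = vec4(u_baseColour * caustic, 1.0);
}
```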

Again, a sort of Über-Shader could be programmed, that incorporates all these steps, without requiring multiple RTT Instances.


 

(Update 01/05/2020, 16h00 : )

1:)

Under Linux, in general, Firefox ESR ships with GPU support disabled, for fear of stability issues. The recipe I gave above will specifically enable WebGL, and no other GPU use. But this can still cause a performance problem, because whatever drawing surface OpenGL gave Firefox, where the OpenGL version could be 3.1, may only exist as a memory location, and because software running on the CPU may still need to ‘blit’ the image-stream to the actual display manager.

Hence, one can go even further, and enable GPU usage overall, to accelerate how Firefox outputs all its contents to the (Linux) display manager. This is actually a bigger risk to undertake, especially since many Linux computers are known to have either weak or quirky graphics drivers. But if the user wishes to do this, he may be able to speed up his entire browsing experience, and reduce CPU usage, when viewing WebGL content.

The recipe is to look for the configuration key ‘layers.acceleration.force-enabled‘, and to set it to ‘true‘. This might sound rather reckless coming from me, but, if the user does have proper drivers for their graphics hardware, then at some point, he might actually want to benefit from that… Having said that, hung and crashed desktops may also result.


 

(Update 01/07/2020, 17h00 : )

2:)

Actually, it’s entirely possible that the RTT instance which Blend4Web needs, to implement refraction, can be shared with the ‘SSAO’ effect, so that having both effects enabled may only consume ‘1 slot’, and having more refractive instances will consume no more. But, never having played with the ‘BSDF’ node myself, I cannot really be sure of what it does.

In fact, if one goes by what’s possible for OpenGL 3, then such a G-Buffer can be used to implement multiple ‘Screen-Space Reflections’ (SSRs), in addition to Refraction. However, I see two reasons not to dig deeper into this subject for the moment:

  1. A big problem with the type of G-Buffer that gets used for SSAO is that it cannot contain objects with transparency. Such objects need to be put ‘in front of it’. Therefore, objects with transparency should also not be visible within SSR reflections…
  2. What’s possible with OpenGL, may not be available to WebGL. Specifically, the concept of ‘Ray Marching’ may simply not be available in this context.

And so, the exploration of SSR etc., remains for a future article to describe.


 

Yet, there is a subject which I can describe in this context: The means which Blend4Web supports to achieve simplified Refraction.

(Screenshot: Screenshot_20200107_163531c)

The setting looks easy enough to use, and my screen-shot assumes that in the ‘Render’ Panel, under the ‘Reflections and Refractions’ section, the ‘Refract’ setting is already set either to ‘Auto’ or ‘On’.

How does it work? It assumes that the scene has already been Rendered To a Texture, similar to the colour-texture used in Screen-Space Ambient Occlusion, except that the Depth values won’t be used, only Colour values. What the effect needs to do is to transform the coordinates of the current fragment to this buffer’s U,V coordinates, and essentially, to sample this background image from those coordinates.

Before the effect can do that, it must however determine whether the depth recorded in the G-Buffer, corresponding to the current fragment, is closer to the camera than the current fragment is. If so, the effect must kill the current fragment.

Otherwise, the only refinement that needs to be applied is that the Normal Vector of the fragment needs to be rotated into View Space, the resulting Z-component needs to be set to zero, the X and Y components need to be multiplied by (-1.0) each, and a fudge-factor needs to be applied, so that the product of this 2-vector with the fudge-factor can be added to the U,V coordinates from which the scene is to be sampled.

I suppose that a question this leaves unanswered is, what units the fudge-factor is in. It could be divided by the current fragment’s distance in front of the camera, (-Z), so that the value entered will seem to be in World coordinate units. Or, it could just be added to the U,V coordinates as-is, so that only very small fluctuations will be appropriate.
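For what it’s worth, here is how I would sketch the effect just described, in GLSL ES. It is a reconstruction under my own assumptions, not Blend4Web’s implementation, and all of the uniform and varying names are invented; the background scene is assumed to be available as a colour texture, with its linear view-space depths in a second texture. Both interpretations of the fudge-factor appear in it, with the distance-scaled one left active:

```glsl
// A minimal sketch of the simplified, screen-space refraction described above.
precision mediump float;

uniform sampler2D u_sceneColour;   // background scene, rendered to texture
uniform sampler2D u_sceneDepth;    // linear view-space depth of that render
uniform vec2  u_resolution;        // viewport size in pixels
uniform float u_fudgeFactor;       // strength of the refraction offset

varying vec3 v_viewNormal;         // normal, already rotated into View Space
varying float v_viewDepth;         // fragment's distance in front of the camera (-Z)

void main() {
    // The current fragment's position, as U,V coordinates into the background render.
    vec2 uv = gl_FragCoord.xy / u_resolution;

    // If the background stored at this location is closer to the camera than
    // the current fragment, the refractive surface is occluded here: kill it.
    float backgroundDepth = texture2D(u_sceneDepth, uv).r;
    if (backgroundDepth < v_viewDepth) {
        discard;
    }

    // Flatten the View-Space normal to 2D and flip its X and Y components.
    vec2 offsetDir = vec2(-v_viewNormal.x, -v_viewNormal.y);

    // Option 1 (as-is): uv += offsetDir * u_fudgeFactor;
    // Option 2 (scaled by distance, so the factor reads like World units):
    uv += offsetDir * (u_fudgeFactor / v_viewDepth);

    gl_FragColor = vec4(texture2D(u_sceneColour, uv).rgb, 1.0);
}
```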


 

A more-accurate effect can be visualized, in which the relative distance as read from the G-Buffer is taken into account (multiplied, after the current fragment’s distance was subtracted), consistently with the assumption that a true Index Of Refraction results in a back-traced angle, and that the displacement that results from an angle, before it strikes a background image, needs to follow from the Z-distance ‘behind’ the refractive object.

Dirk

 
