About how I won’t be doing any ‘ASL’ computing soon.

There exists an Open-Source code library named ‘ASL’, which stands for “Advanced Simulation Library”. Its purpose is to allow application designers, who need not be deep experts at writing C++ code, to perform fluid simulations, with the volume-based simulations running on the GPU of the computer instead of on the CPU. This also leads people to say that ‘ASL’ is hardware-accelerated.

Last night I figured that ‘ASL’ should run nicely on the Debian / Stretch computer I name ‘Plato’, because that computer has a GeForce GTX 460 graphics card, which was considered state-of-the-art in 2011. But unfortunately for me, ‘ASL’ will only run its simulations correctly if the GPU delivers ‘OpenCL’, version 1.2 or greater. The GeForce GTX 460 is only capable of OpenCL 1.1, and is therefore no longer state-of-the-art by far.
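As a side note, one way to check which OpenCL version a GPU reports, is to query each device from a short script. The following is only a sketch, and assumes that the ‘pyopencl’ Python bindings are installed:

```python
# Query the OpenCL version string of every available device.
# Requires the third-party 'pyopencl' package.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        # device.version yields e.g. "OpenCL 1.1 CUDA" --
        # anything below 1.2 is too old for ASL.
        print(device.name, '->', device.version)
```

On ‘Plato’, such a query should presumably print a version string that begins with ‘OpenCL 1.1’, for the GeForce GTX 460.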

Last night, I worked until exhausted, trying various solutions, in hopes that maybe the library had not been compiled correctly – I custom-compiled it, after finding out that the simulations were not running correctly. I also looked into the possibility that maybe I had just not been executing the sample experiments correctly. But alas, the problem was my ‘weak’ graphics card, which is nevertheless OpenGL 4-capable.

As an alternative to using ‘ASL’, Linux users can use the Open-Source program-set called ‘Elmer’. Its solvers run on the CPU.

Further, there is an associated GUI-application called ‘ParaView’, the purpose of which is to take as input volume-based geometries and arbitrary values – i.e., fluid states – and to render those with some amount of graphics finesse. I.e., ‘ParaView’ can be used to post-process the simulations that were created with ‘ASL’ or with ‘Elmer’, into a presentable visual. The version of ‘ParaView’ that installs from the package-manager under Debian / Stretch, 5.1.x, works fine. But for a while last night, I did not know whether the problems I was running into were actually due to ‘ASL’ or to ‘ParaView’ misbehaving. And so what I also did was to custom-compile ‘ParaView’, to version 5.5.2. If one does this, the next problem one has is that ParaView v5.5.2 requires VTK v7, while under Debian / Stretch, all we have is VTK v6.3. And so on my platform, version 5.5.2 of ParaView encounters problems, in addition to ‘ASL’ encountering problems. For a while, then, I had difficulty identifying what the root causes of these bugs were.
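Incidentally, one quick way to confirm which VTK version a given environment actually sees – assuming the ‘vtk’ Python bindings are installed, which is an assumption on my part – is a snippet like this:

```python
# Print the version of the installed VTK bindings --
# under Debian / Stretch this should report 6.3.x,
# while ParaView 5.5.2 wants VTK 7.
import vtk
print(vtk.VTK_VERSION)
```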

In the end, the custom-compiled, development-branch version of ‘Elmer’, together with the package-manager-installed ‘ParaView’ v5.1.x, will serve me fine.

Dirk


About the LuxCore Renderer, and OpenCL Rendering.

One of the subjects which I’ve written about before is that, on one of my computers, the 3D graphics application ‘Blender’ is set up, optionally, to use ‘LuxCore’ to render a scene, and that the main feature of LuxCore which I’m interested in, is its ability to render the scene using OpenCL – and therefore using the GPU cores – rather than just using CPU cores.

A curious question which some of my readers might ask would be: ‘How does the user know that LuxCore is in fact using the GPU, when he or she has simply selected OpenCL as the rendering hardware?’

In my case, the answer is the fact that my GPU temperature increases dramatically, as it does whenever I perform any type of GPU computing. I’ve had the GPU get 70°C hot, while, when idling, the GPU temperature will not exceed 40°C. It can be observed, though, that when OpenCL rendering is selected, all 8 CPU cores are also in use, and that may just be a way in which LuxCore works. Further, it may be a way in which OpenCL works, to make the CPU cores available in addition to the GPU cores.
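In fact, OpenCL treats the CPU as just another compute device, whenever a CPU runtime is installed. The following sketch – again assuming the ‘pyopencl’ bindings, since LuxCore itself uses the native API – would list every device that OpenCL exposes, CPU and GPU alike:

```python
# Enumerate all OpenCL devices. A platform such as a vendor's
# CPU runtime can expose the CPU alongside the GPU, which would
# explain why all 8 CPU cores also show activity.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices(device_type=cl.device_type.ALL):
        kind = cl.device_type.to_string(device.type)
        print(f'{platform.name}: {device.name} ({kind})')
```

If a CPU device shows up in that list, then it would be unsurprising for an OpenCL application to keep all 8 cores busy as well.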

Dirk


Now, I have LuxCoreRender working on one of my computers.

In This Earlier Posting, I wrote that a rendering engine exists which is actually a (‘software-based’) ray-tracer, making it better in picture quality than what raster-based rendering can achieve, the latter of which is more commonly associated with hardware acceleration. But this rendering engine has been written in a variant of C, which makes it suitable, both in syntax and in semantics, to run under OpenCL – which, in turn, is a framework for running massively-parallel computations on our GPU, our Graphics Processing Unit – assuming we have a sufficiently strong graphics card.

That ray-tracer is known as “LuxCoreRender”.
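To give readers a feel for what that ‘variant of C’ looks like, the following is a trivial kernel – a vector addition – embedded in a host program via the ‘pyopencl’ bindings. It is only a sketch of the general technique, and has nothing to do with LuxCoreRender’s actual kernels:

```python
# A trivial OpenCL example: the kernel source is the C dialect
# that runs on the GPU; the surrounding Python is just host code.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

kernel_src = """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out)
{
    int i = get_global_id(0);   /* one work-item per element */
    out[i] = a[i] + b[i];
}
"""

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
out = np.empty_like(a)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

prog = cl.Program(ctx, kernel_src).build()
prog.vec_add(queue, a.shape, None, a_buf, b_buf, out_buf)
cl.enqueue_copy(queue, out, out_buf)

assert np.allclose(out, a + b)
```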

Well, previously I had spent a long time trying to get this to run on my computer, which I name ‘Plato’, using the wrong methodologies, thus consuming a whole day of my time and tiring me out.

What I finally did was to register with the user forums of LuxCoreRender, and to ask them for advice. And the advice they gave me solved the problem within about 30 minutes.

Their advice was:

  1. Download the up-to-date Blender 2.79b from the Web-site – not the package-installed 2.78a – and just unpack the binary into a local, user-space folder on my Linux computer,
  2. Download the binary BlendLuxCore add-on, and tell Blender 2.79b to install it directly from its .ZIP-file (a scripted version of this step is sketched below).
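For step 2, the add-on can also be installed from Blender’s scripting console. The operator names below are what I believe the 2.79 series of Blender uses, so the reader should treat them as an assumption, and the path to the .ZIP-file is of course just an example:

```python
# Run inside Blender 2.79b's Python console -- installs the
# BlendLuxCore add-on from its downloaded .ZIP-file and enables it.
# (Operator names assumed for the 2.79 API; the path is hypothetical.)
import bpy

bpy.ops.wm.addon_install(filepath='/home/user/Downloads/BlendLuxCore.zip')
bpy.ops.wm.addon_enable(module='BlendLuxCore')
```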

It now works like a charm, and I am able to define 3D scenes which I want LuxCoreRender to render! :-)  And running the code on the GPU – using OpenCL – also works fully.

(Image: luxcore_test1)

I suppose that one advantage which ray-tracing affords, over raster-based graphics, is that the Material which we give models is not limited to U,V-mapped texture images and their hardware-based shaders. It can instead be a complex Material, which Blender allows us to edit using its Node-Editor, and whose input textures can be mathematically-based noise patterns, as shown above.
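As an illustration of that node-based workflow – using Blender’s stock Cycles nodes as a stand-in, since I won’t guess at BlendLuxCore’s own node identifiers – a procedural noise texture can be wired into a Material entirely from script:

```python
# Build a simple node-based material whose colour input is a
# procedural (mathematical) noise texture, rather than a U,V-mapped
# image. Uses Blender's stock Cycles nodes as a stand-in here.
import bpy

bpy.context.scene.render.engine = 'CYCLES'  # Cycles node trees assumed

mat = bpy.data.materials.new('NoiseMaterial')
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

noise = nodes.new('ShaderNodeTexNoise')   # mathematically-based pattern
noise.inputs['Scale'].default_value = 5.0

diffuse = nodes['Diffuse BSDF']           # present in the default tree
links.new(noise.outputs['Color'], diffuse.inputs['Color'])
```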

Dirk


Problems getting LuxCoreRender to work on one of my computers.

In This Earlier Posting, I had written that I had used a benchmarking tool named “LuxMark”, to test whether my GPU even works on the computer which I name ‘Plato’. In the process, I made the discovery that there exists a type of rendering engine named ‘LuxCoreRender’, which is a ray-tracer, but which will do its ray-tracing with code that runs under OpenCL.

I had assumed that the benchmarking tool was free, but that users would need to be paying customers before they could use ‘LuxCoreRender’ to render their own scenes. To my great delight, I’ve discovered that LuxRender and LuxCoreRender are also Free, Open-Source Software. :-D  But to my disappointment, I’ve learned that there is no feasible way in which I could use it on ‘Plato’.

The reason is fairly straightforward. The devs have renamed their renderer from LuxRender to LuxCoreRender, at which point they also carried out a rebuild of their code, so that version 1.6 was discontinued and ‘a cleaner build’ was started, as version 2.x. In order to make practical use of a rendering engine, I need a GUI with which to create scenes for that engine to render. Well, LuxCoreRender has, as a minimum requirement, Blender v2.79b, while the Blender version I have installed on ‘Plato’ is only v2.78a. The requirement which the devs state is strict, because Blender versions before 2.79 contained a bug which would cause crashes. Not only that, but in this case the crashing user-space application would have considerable processes running on the GPU, which can cause severe memory leaks, as I wrote Here.

Now, there does exist a stand-alone version of LuxCoreRender, v2.x, which in fact runs on ‘Plato’, but which remains rather useless to me, because it can only load and then render scene descriptions which have been stored to a file in Lux’s own format, and not in any other standard format.
