About how I won’t be doing any ‘ASL’ computing soon.

There exists an Open-Source code library named ‘ASL’, which stands for “Advanced Simulation Library”. Its purpose is to allow application designers, who need not be deep experts at writing C++ code, to perform fluid simulations, with the volume-based simulations running on the GPU of the computer instead of on the CPU. This also leads people to say that ‘ASL’ is hardware-accelerated.

Last night I figured that ‘ASL’ should run nicely on the Debian / Stretch computer I name ‘Plato’, because that computer has a GeForce GTX 460 graphics card, which was considered state-of-the-art in 2011. But unfortunately for me, ‘ASL’ will only run its simulations correctly if the GPU delivers ‘OpenCL’, version 1.2 or greater. The GTX 460 is only capable of OpenCL 1.1, and is therefore no longer state-of-the-art by far.
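(For anyone who wants to check what their own card reports before going down the same road, here is a minimal sketch, using the standard OpenCL C API, that prints the version string of each GPU device. The file name and build command are only my assumptions for a typical Linux setup; the ‘clinfo’ utility reports the same information without writing any code.)

```cpp
// query_opencl_version.cpp -- minimal sketch: print the OpenCL version
// string reported by every GPU device on the system.
// Build (typical Linux): g++ query_opencl_version.cpp -lOpenCL -o query_opencl_version
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);
    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    for (cl_platform_id platform : platforms) {
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, nullptr, &num_devices) != CL_SUCCESS)
            continue;   // no GPU devices on this platform
        std::vector<cl_device_id> devices(num_devices);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, num_devices, devices.data(), nullptr);

        for (cl_device_id device : devices) {
            char name[256] = {0};
            char version[256] = {0};
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            // CL_DEVICE_VERSION yields a string such as "OpenCL 1.1 CUDA".
            clGetDeviceInfo(device, CL_DEVICE_VERSION, sizeof(version), version, nullptr);
            std::printf("%s : %s\n", name, version);
        }
    }
    return 0;
}
```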

Last night, I worked until exhausted, trying various solutions, in the hope that maybe the library had simply not been compiled correctly; I custom-compiled it after finding out that the simulations were not running correctly. I also looked into the possibility that maybe I had just not been executing the sample experiments correctly. But alas, the problem was my ‘weak’ graphics card, which is nevertheless OpenGL 4-capable.

As an alternative to using ‘ASL’, Linux users can use the Open-Source program set called ‘Elmer’, whose simulations run on the CPU.

Further, there is an associated GUI application called ‘ParaView’, the purpose of which is to take as input volume-based geometries and arbitrary values (i.e., fluid states), and to render those with some amount of graphics finesse. In other words, ‘ParaView’ can be used to post-process the simulations that were created with ‘ASL’ or with ‘Elmer’ into a presentable visual. The version of ‘ParaView’ that installs from the package manager under Debian / Stretch, v5.1.x, works fine.

But for a while last night, I did not know whether the problems I was running into were actually due to ‘ASL’ or to ‘ParaView’ misbehaving. So I also custom-compiled ‘ParaView’, to version 5.5.2. And if one does this, the next problem one has is that ParaView v5.5.2 requires VTK v7, while under Debian / Stretch, all we have is VTK v6.3. So on my platform, version 5.5.2 of ParaView encounters problems, in addition to ‘ASL’ encountering problems, and for a while I had difficulty identifying the root causes of these bugs.

Finally, the custom-compiled, development-branch version of ‘Elmer’, together with the package-manager-installed ‘ParaView’ v5.1.x, will serve me fine.

Dirk

 

I’ve just benchmarked my GPU’s ability to run OpenCL v1.2.

Recently I’ve come into some doubt about whether the GPU-computing ability of my graphics hardware specifically might be defective somehow. But there exist benchmarks that people can run to test that ability.

One such benchmark is called “LuxMark”, and I just ran it on the computer I name ‘Plato’.

The way LuxMark works is that it uses software to ray-trace a scene, thereby explicitly not using the standard, ‘raster-based’ rendering which graphics hardware is most famous for. But as a twist, this engine compiles the C code which performs this task using OpenCL, instead of using a general C compiler targeting the CPU. Therefore, this software runs as C, but on the GPU.
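To make that run-time compilation model concrete, here is a minimal sketch of how an OpenCL host program hands C-like kernel source text to the graphics driver, which compiles it for the GPU on the spot. The kernel below just scales a buffer; all the names are my own illustrations, not anything taken from LuxMark.

```cpp
// runtime_compile.cpp -- sketch of OpenCL's run-time compilation model:
// the kernel is plain C-like source text, compiled by the GPU driver at
// run-time via clBuildProgram, not by the host's C compiler.
// Build (typical Linux): g++ runtime_compile.cpp -lOpenCL -o runtime_compile
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kKernelSource =
    "__kernel void scale(__global float* data, float factor) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] = data[i] * factor;\n"
    "}\n";

int main() {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_int err = CL_SUCCESS;
    cl_context context = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(context, device, 0, &err);

    // The driver compiles the kernel source here, for this specific GPU.
    cl_program program = clCreateProgramWithSource(context, 1, &kKernelSource, nullptr, &err);
    if (clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr) != CL_SUCCESS) {
        std::fprintf(stderr, "kernel failed to compile\n");
        return 1;
    }
    cl_kernel kernel = clCreateKernel(program, "scale", &err);

    // A small buffer of ones, to be doubled on the GPU.
    std::vector<float> host(1024, 1.0f);
    cl_mem buffer = clCreateBuffer(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                   host.size() * sizeof(float), host.data(), &err);
    float factor = 2.0f;
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buffer);
    clSetKernelArg(kernel, 1, sizeof(float), &factor);

    size_t global_size = host.size();
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global_size, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, buffer, CL_TRUE, 0, host.size() * sizeof(float),
                        host.data(), 0, nullptr, nullptr);
    std::printf("first element after scaling: %f\n", host[0]);

    clReleaseMemObject(buffer);
    clReleaseKernel(kernel);
    clReleaseProgram(program);
    clReleaseCommandQueue(queue);
    clReleaseContext(context);
    return 0;
}
```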

This is similar to what a demo program that nVidia used to ship with their graphics cards once did: it showed a highly realistic sports car, because ray-tracing produces greater realism than raster-based graphics would.

Here is the result:

(Screenshot: LuxMark benchmark result, 2018-05-04)

I suppose that people who are intrigued by CGI, as I am, might eventually be interested in acquiring the LuxCoreRender engine, which would allow its users to render scenes of their own choosing. LuxMark just uses LuxCoreRender in order to benchmark the GPU with one specific, preset scene.

But what this tells me is that there is essentially still nothing wrong, at the hardware level, with my GPU or with its ability to compute using OpenCL v1.2. And some version of OpenCL was also what the BOINC project was using, whose GPU work-units I was completing for several recent days.

One thing I’d want to know next is whether a score of “2280” is good or bad. The site suggests that visitors exist whose GPUs are much stronger. But then, I’d need to have an account with LuxCoreRender to find out… :-D  Actually, the answer to that question is logical: my graphics card is ‘only’ a series-400. Because users exist with series-900 or series-1000 graphics cards, theirs will obviously produce much faster benchmarks.

Dirk