About how I won’t be doing any ‘ASL’ computing soon.

There exists an Open-Source code library named ‘ASL’, which stands for “Advanced Simulation Library”. Its purpose is to allow application designers, who do not need to be deep experts at writing C++ code, to perform fluid simulations, with the volume-based simulations running on the GPU of the computer instead of on the CPU. This can also lead people to say that ‘ASL’ is hardware-accelerated.

Last night I figured that ‘ASL’ should run nicely on the Debian / Stretch computer I name ‘Plato’, because that computer has a GeForce GTX 460 graphics card, which was considered state-of-the-art in 2011. But unfortunately for me, ‘ASL’ will only run its simulations correctly if the GPU delivers ‘OpenCL’, version 1.2 or greater. The GeForce GTX 460 is only capable of OpenCL 1.1, and is therefore no longer state-of-the-art by far.
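
For readers who would like to check what their own card reports, the following is a minimal sketch of my own – not code taken from ‘ASL’ – which prints the OpenCL version string of each device, using only the standard OpenCL C API. On the GeForce GTX 460, that string begins with ‘OpenCL 1.1’:

```cpp
// Hedged sketch: print the OpenCL version string each device reports.
// Compile with something like: g++ clversion.cpp -lOpenCL
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS)
        return 1;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            char version[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            // The string takes the form "OpenCL <major>.<minor> <vendor info>".
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION,
                            sizeof(version), version, NULL);
            printf("%s : %s\n", name, version);
        }
    }
    return 0;
}
```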

Last night, I worked until exhausted, trying various solutions, in hopes that maybe the library had not been compiled correctly – I custom-compiled it, after finding out that the simulations were not running correctly. I also looked into the possibility that maybe I had just not been executing the sample experiments correctly. But alas, the problem was my ‘weak’ graphics card, which is nevertheless OpenGL 4-capable.

As an alternative to using ‘ASL’, Linux users can use the Open-Source program-set called ‘Elmer’. It runs on the CPU.

Further, there is an associated GUI application called ‘ParaView’, the purpose of which is to take as input volume-based geometries and arbitrary values – i.e., fluid states – and to render those with some amount of graphics finesse. I.e., ‘ParaView’ can be used to post-process the simulations that were created with ‘ASL’ or with ‘Elmer’, into a presentable visual. The version of ‘ParaView’ that installs from the package manager under Debian / Stretch, v5.1.x, works fine.

But for a while last night, I did not know whether the problems that I was running into were actually due to ‘ASL’ or to ‘ParaView’ misbehaving. And so what I also did was to custom-compile ‘ParaView’, to version 5.5.2. If one does this, then the next problem one has is that ParaView v5.5.2 requires VTK v7, while under Debian / Stretch, all we have is VTK v6.3. And so on my platform, version 5.5.2 of ParaView encountered problems, in addition to ‘ASL’ encountering problems. And so for a while, I had difficulty identifying what the root causes of these bugs were.
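
As a quick way to confirm which VTK the installed development headers actually belong to – again, a small sketch of my own, not an official procedure – one can compile the following against ‘vtkVersion.h’:

```cpp
// Hedged sketch: print the version of the VTK development package in use.
// vtkVersion.h and the VTK_VERSION macro are standard parts of VTK.
// Compile against the VTK headers, e.g. with -I/usr/include/vtk-6.3 .
#include <vtkVersion.h>
#include <cstdio>

int main() {
    // Under Debian / Stretch this should print "VTK 6.3.0" or similar,
    // which is below the VTK v7 that ParaView v5.5.2 wants.
    printf("VTK %s\n", VTK_VERSION);
    return 0;
}
```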

Finally, the custom-compiled, development-branch version of ‘Elmer’, together with the package-manager-installed ‘ParaView’ v5.1.x, will serve me fine.

Dirk


About the LuxCore Renderer, and OpenCL Rendering.

One of the subjects which I’ve written about before is that, on one of my computers, the 3D graphics application ‘Blender’ is set up, optionally, to use ‘LuxCore’ to render a scene, and that the main feature of LuxCore which I’m interested in is its ability to render the scene using OpenCL, and therefore using the GPU cores, rather than just the CPU cores.

A curious question which some of my readers might ask would be: ‘How does the user know that LuxCore is in fact using the GPU, when he or she has simply selected OpenCL as the rendering hardware?’

In my case, the answer is the fact that my GPU temperature increases dramatically, as it does whenever I perform any type of GPU computing. I’ve had the GPU get 70°C hot, while, when idling, its temperature will not exceed 40°C. It can be observed, though, that when OpenCL rendering is selected, all 8 CPU cores are also in use, and that may just be the way in which LuxCore works. Further, it may be a way in which OpenCL works, to make the CPU cores available in addition to the GPU cores.
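
In fact, an OpenCL installation can expose the CPU itself as a compute device, through a vendor ICD, alongside the GPU. A minimal sketch of my own – not LuxCore’s actual code – which lists each device together with its type, would look like this:

```cpp
// Hedged sketch: list OpenCL devices and whether each is a CPU or a GPU.
// If a CPU device shows up here, that alone could explain why all 8
// CPU cores become busy when 'OpenCL' is selected as the renderer.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS)
        return 1;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            cl_device_type type = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE,
                            sizeof(type), &type, NULL);
            printf("%-40s %s\n", name,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                   (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other");
        }
    }
    return 0;
}
```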

Dirk


There are some bugs, in how the proprietary nVidia graphics drivers implement OpenCL.

In this earlier posting, I described how I had replaced the open-source ‘Nouveau’ drivers for my ‘nVidia’ graphics card with nVidia’s proprietary drivers. And one of my goals was to enable ‘OpenCL’ as well as ‘CUDA’ capabilities, which are both vehicles towards ‘GPU computing’.

In order to test my new setup, I had subscribed to some ‘BOINC Projects’, some of which in turn used OpenCL to power GPU Work Units.

The way in which I was setting up my computer ‘Plato’, on which all of this was to happen, was that I’d be able to use that computer, among other things, to run OpenGL applications and play 3D games during the day, but that at nighttime hours, when I was in bed, the computer would fetch BOINC Work Units and run them – partially, on my GPU.
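
The time-of-day restriction itself can be set in the BOINC Manager’s computing preferences, or – a sketch on my part, assuming the standard preference elements of the BOINC client, which my own setup may not match exactly – by placing a ‘global_prefs_override.xml’ file into the BOINC data directory:

```xml
<!-- Hedged sketch of a BOINC global_prefs_override.xml:
     only compute between 23:00 and 07:00, regardless of user activity. -->
<global_preferences>
   <run_if_user_active>0</run_if_user_active>
   <start_hour>23</start_hour>
   <end_hour>7</end_hour>
</global_preferences>
```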

(Updated 05/04/2018, 13h50 … )

Continue reading There are some bugs, in how the proprietary nVidia graphics drivers implement OpenCL.

Another Caveat, To GPU-Computing

I had written in previous postings that, on the computer which I name ‘Plato’, I had replaced the ‘Nouveau’ graphics drivers, which are open-source, with proprietary ‘nVidia’ drivers, which offer more capabilities. In this previous posting, I described a bug that had developed between these recent graphics drivers and ‘xscreensaver’.

Well, there is more that can go wrong between the CPU and the GPU of a computer, if the computer is running a sizable GPU.

When applications set up ‘rendering pipelines’ – a.k.a. contexts – they are loading data structures as well as register values onto the graphics card and into its graphics memory. Well, if the application, which according to older standards would only have resided in system memory, either crashes or gets forcibly closed using a ‘kill -9’ instruction, then the kernel and the graphics driver may fail to clean up whatever data structures it had set up on the graphics card.

The ideal behavior would be that, if an application crashes, the kernel not only cleans up whatever resources it was using in system memory and within the O/S, but also whatever belonged to graphics memory. And for all I know, the programmers of the open-source drivers under Linux may have made this a top priority. But apparently, nVidia did not.
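
To illustrate the application’s side of this: a well-behaved program releases its GPU context on every normal exit path – the sketch below, of my own making, does that via an RAII wrapper around an OpenCL context – but no user-space destructor ever runs after a ‘kill -9’, i.e., SIGKILL. At that point, only the kernel and the driver can reclaim what was set up on the card:

```cpp
// Hedged sketch: an RAII wrapper that releases an OpenCL context whenever
// the enclosing scope exits normally. Nothing in user space runs after
// 'kill -9' (SIGKILL), so if the driver does not reclaim the context
// itself, the GPU-side state simply leaks.
#include <CL/cl.h>
#include <cstdio>

class ClContext {
public:
    explicit ClContext(cl_device_id device) {
        cl_int err = CL_SUCCESS;
        context_ = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
        if (err != CL_SUCCESS) context_ = NULL;
    }
    ~ClContext() {
        // Runs on normal termination only -- never after SIGKILL.
        if (context_) clReleaseContext(context_);
    }
    cl_context get() const { return context_; }
private:
    cl_context context_;
};

int main() {
    cl_platform_id platform;
    cl_device_id device;
    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) return 1;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL)
            != CL_SUCCESS) return 1;

    ClContext ctx(device);  // released automatically when main() returns
    printf("Context %s\n", ctx.get() ? "created" : "failed");
    return 0;
}
```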

And so a scenario which can take place is that the user needs to kill a hung application that was making heavy use of the graphics card, and that afterwards, the state of the graphics card is corrupted, so that, for example, ‘OpenCL’ kernels will no longer run on it correctly.

Continue reading Another Caveat, To GPU-Computing