ArrayFire 3.5.1 Compiled, using GCC 4.9.2 and CUDA – Success!

A project which has preoccupied me for several days has been to custom-compile the API named “ArrayFire”, specifically version 3.5.1 of that API, with CUDA support enabled. This poses a special problem because, when we use the Debian / Stretch repositories to install the CUDA Run-Time, we are limited to installing version 8.0.44. That run-time is too old to be compatible with the standard GCC / CPP / C++ compiler-set available under the same distribution of Linux. Therefore, it can be hard for users and Debian Maintainers alike to build projects that use CUDA and that are compatible with Stretch.

Why use ArrayFire? Because writing kernels for parallel computing on the GPU is hard. ArrayFire is supposed to make the idea more accessible to people like me, who have no specific training in writing highly parallel code.
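To give a concrete picture of what that accessibility looks like, here is a minimal sketch in C++ (the matrix sizes are arbitrary, and the snippet is not taken from ArrayFire’s own examples), in which a full matrix multiplication runs as parallel device code without a single hand-written kernel:

```cpp
#include <arrayfire.h>
#include <cstdio>

int main() {
    // Report which back-end and device ArrayFire has selected.
    af::info();

    // Two 2048x2048 matrices of uniform random numbers, created on the device.
    af::array A = af::randu(2048, 2048);
    af::array B = af::randu(2048, 2048);

    // The matrix product is computed by parallel device code, with no kernel
    // written by the application programmer.
    af::array C = af::matmul(A, B);

    // Reduce to a single number and copy it back to the host.
    float total = af::sum<float>(C);
    std::printf("Sum of all elements: %f\n", total);
    return 0;
}
```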

When we install ArrayFire from the repositories, OpenCL support is provided, but not CUDA support.
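And once a build with CUDA enabled has been installed, one way to confirm which back-ends a given installation actually provides is a small test program like the sketch below. This is only a suggestion, and it assumes that the program is linked against ArrayFire’s unified back-end (the plain ‘libaf’ library), which is what allows back-ends to be selected at run-time:

```cpp
#include <arrayfire.h>
#include <cstdio>

int main() {
    // Bit-mask of the back-ends this installation of ArrayFire was built with.
    int backends = af::getAvailableBackends();

    std::printf("CPU back-end:    %s\n", (backends & AF_BACKEND_CPU)    ? "yes" : "no");
    std::printf("CUDA back-end:   %s\n", (backends & AF_BACKEND_CUDA)   ? "yes" : "no");
    std::printf("OpenCL back-end: %s\n", (backends & AF_BACKEND_OPENCL) ? "yes" : "no");

    if (backends & AF_BACKEND_CUDA) {
        // Switch to the CUDA back-end, then print the device it ends up using.
        af::setBackend(AF_BACKEND_CUDA);
        af::info();
    }
    return 0;
}
```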

I’ve hatched a plan: to install an alternative compiler-set on the computer I name ‘Phosphene’, namely GCC / CPP / C++ version 4.9.2, and to switch my compilers to that version when needed, so that in the future I can perhaps compile projects that do use CUDA, and that are sophisticated enough to require a full compiler-suite to build.

What I’ve found is that, contrary to how it went the previous time, this time I’ve scored a success. :-)  Not only did the project compile without errors, but a specific Demo that ships with the source-code version, one which uses the ability of CUDA to pass graphics to the OpenGL rendering of the GPU, also ran without errors…

(Screenshot from 2019-05-01, showing the CUDA / OpenGL Demo running.)

So now I can be sure that the tool-chain which I’ve installed is up to the task of compiling highly complex CUDA projects.

(Update 5/03/2019, 7h30 : )

Continue reading ArrayFire 3.5.1 Compiled, using GCC 4.9.2 and CUDA – Success!

About how I won’t be doing any ‘ASL’ computing soon.

There exists an Open-Source code library named ‘ASL’, which stands for “Advanced Simulation Library“. Its purpose is to allow application-designers, who don’t need to be deep experts at writing C++ code, to perform fluid simulations, with the volume-based simulations running on the GPU of the computer instead of on the CPU. This is also why people can say that ‘ASL’ is hardware-accelerated.

Last night I figured that ‘ASL’ should run nicely on the Debian / Stretch computer I name ‘Plato’, because that computer has a GeForce GTX 460 graphics card, which was considered state-of-the-art in 2011. But unfortunately for me, ‘ASL’ will only run its simulations correctly if the GPU delivers ‘OpenCL’ version 1.2 or greater. The GeForce GTX 460 is only capable of OpenCL 1.1, and is therefore no longer state-of-the-art by far.
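Out of curiosity about how such a limit shows up in practice: the supported OpenCL version is just a string which any program can query through the standard OpenCL C API. What follows is only a minimal sketch (it is not code from ‘ASL’), but on a card like the GTX 460, the version string it prints should begin with ‘OpenCL 1.1’, which is exactly what ‘ASL’ objects to:

```cpp
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256]    = {0};
            char version[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            // The version string has the form "OpenCL <major>.<minor> <vendor info>".
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof(version), version, NULL);
            std::printf("%s : %s\n", name, version);
        }
    }
    return 0;
}
```

(Such a test program would simply be linked against ‘-lOpenCL’.)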

Last night, I worked until exhausted, trying various solutions, in the hope that maybe the library had not been compiled correctly – I custom-compiled it after finding out that the simulations were not running correctly. I also looked into the possibility that maybe I had just not been executing the sample experiments correctly. But alas, the problem was my ‘weak’ graphics card, which is nevertheless capable of OpenGL 4.

As an alternative to using ‘ASL’, Linux users can use the Open-Source program-set called ‘Elmer‘. Its programs run on the CPU.

Further, there is an associated GUI-application called ‘ParaView‘, the purpose of which is to take as input volume-based geometries and arbitrary values – i.e., fluid states – and to render those with some amount of graphics finesse. I.e., ‘ParaView’ can be used to post-process the simulations that were created with ‘ASL’ or with ‘Elmer’ into a presentable visual. The version of ‘ParaView’ that installs from the package-manager under Debian / Stretch, ‘5.1.x’, works fine. But for a while last night, I did not know whether the problems that I was running into were actually due to ‘ASL’ or to ‘ParaView’ misbehaving. And so what I also did was to custom-compile ‘ParaView’, to version 5.5.2. If one does this, then the next problem one has is that ParaView v5.5.2 requires VTK v7, while under Debian / Stretch, all we have is VTK v6.3. And so on my platform, version 5.5.2 of ParaView encounters problems, in addition to ‘ASL’ encountering problems. For a while, then, I had difficulty identifying what the root causes of these bugs were.

Finally, the custom-compiled, development-branch version of ‘Elmer’, together with the package-manager-installed ‘ParaView’ v5.1.x, will serve me fine.

Dirk

 

About the LuxCore Renderer, and OpenCL Rendering.

One of the subjects which I’ve written about before is that, on one of my computers, the 3D Graphics Application ‘Blender’ is set up, optionally, to use ‘LuxCore’ to render a scene, and that the main feature of LuxCore which interests me is its ability to render the scene using OpenCL, and therefore using the GPU cores, rather than just the CPU cores.

A curious question which some of my readers might ask would be: ‘How does the user know that LuxCore is in fact using the GPU, when he or she has simply selected OpenCL as the rendering hardware?’

In my case, the answer is the fact that my GPU temperature increases dramatically, just as it does when I perform any other type of GPU computing. I’ve had the GPU get as hot as 70°C, while, when idling, the GPU temperature will not exceed 40°C. It can be observed, though, that when OpenCL rendering is selected, all 8 CPU cores are also in use, and that may just be a way in which LuxCore works. Further, it may be a way in which OpenCL works, to make the CPU cores available in addition to the GPU cores.
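Just to illustrate that last point: OpenCL itself allows a program to enumerate CPU devices alongside GPU devices, and whether a CPU device actually shows up depends on which OpenCL run-times are installed. The sketch below is illustrative only, and is not code from LuxCore or from Blender:

```cpp
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[16];
        cl_uint num_devices = 0;
        // Ask each platform for every kind of device, not just GPUs.
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16,
                           devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            cl_device_type type = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
            std::printf("%-40s %s\n", name,
                        (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                        (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other");
        }
    }
    return 0;
}
```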

Dirk

 

There are some bugs in how the proprietary nVidia graphics drivers implement OpenCL.

In this earlier posting, I described how I had replaced the open-source ‘Nouveau’ drivers for my ‘nVidia’ graphics card with nVidia’s proprietary drivers. One of my goals was to enable ‘OpenCL’ as well as ‘CUDA’ capabilities, which are both vehicles towards ‘GPU-computing’.

In order to test my new setup, I had subscribed to some ‘BOINC Projects‘, some of which in turn used OpenCL to power GPU Work Units.

The way in which I was setting up my computer ‘Plato’, on which all of this was to happen, was that I’d be able to use that computer, among other things, to run OpenGL applications and play 3D games during the day, but that at night-time, when I was in bed, the computer would fetch BOINC Work Units and run them – partially, on my GPU.

(Updated 05/04/2018, 13h50 … )

Continue reading There are some bugs in how the proprietary nVidia graphics drivers implement OpenCL.