ArrayFire 3.5.1 Compiled, using GCC 4.8.4 and CUDA – Failure!

In this earlier posting I wrote that I had installed GCC / CPP / C++ 4.8.4, alongside version 6.3, on my computer named ‘Phosphene’, even though Debian / Stretch computers are not designed to offer earlier versions of those compilers. I had done so specifically so that I’d be able to compile CUDA 8 projects, for which GCC version 6 is too recent.

One possible CUDA project would have been ArrayFire 3.7.0, or version 3.6.3 of that same library. However, what I found was that these latest versions of ArrayFire require a newer compiler than v4.8.

Well, what I have now been able to do is compile ArrayFire 3.5.1 using the GCC 4.8.4 tool-chain, and to do so even with CUDA support. This is more than the Debian Maintainers offer, if only because the package-repository versions of ArrayFire do not include a CUDA back-end, only an OpenCL back-end.

This proves that my GCC 4.8.4 compatibility tool-chain is correctly installed.

The question which I must now ponder is whether I should next install v3.5.1 of ArrayFire to my root file-system, replacing v3.7.0, just because the lower version gives me CUDA support. I might not, because v3.7.0 already gives me OpenCL support. Either way, I built v3.5.1 with the configuration option ‘Use System Forge’, which may mean that I won’t need to recompile Forge, which had been compiled as version 1.0.4, belonging to ArrayFire 3.7.0.

Installing v3.5.1 to my root file-system would be a necessary step in verifying whether the Demos actually run, since all ArrayFire Demos need to be linked to the same library versions that their ArrayFire build generated.
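Short of a full install, one quick, non-destructive sanity check is to ask the dynamic linker which ArrayFire shared objects a freshly built demo would resolve. The demo’s filename below is hypothetical; substitute whatever the build actually produced:

```shell
# Show which libaf* / libforge shared objects the binary would load at run-time,
# and from which paths they would be resolved.
ldd ./helloworld_cuda | grep -Ei 'libaf|libforge'
```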

(Update:)

What I have found is twofold:

  1. The configuration option ‘Use System Forge’ will not work when the installed version of Forge is 1.0.4. This option needs to be unchecked, and the integrated version of Forge built and installed, and
  2. Even though the exercise compiled, the result will not run.

When trying to run a CUDA executable, I get the error message:

terminate called after throwing an instance of 'std::regex_error'
  what():  regex_error
Aborted


What this result means is that I have not been able to confirm that compiling and linking projects to CUDA finally works on the platform I’ve created, even though the actual compilation produces no error messages.

Dirk


Another way to quickly assess how many computing cores our GPU has.

This posting is on a familiar topic.

On certain Windows computers, there was a popular GUI-based tool named “CPU-Z”, which would give the user quick information about the capabilities of his CPU. That application inspired many others on different platforms, among them “CUDA-Z”, which is available for Linux.

If the user has CUDA installed, then he can download this tool from SourceForge. Under Linux it is distributed as an executable, which can just be put into any directory and run from there. This type of statically-linked executable has come to be known as ‘an AppImage’ in the Linux world, although in this case the filename ends in ‘.run’. Its permission-bits need to be changed to ‘a+x’ after downloading. Below is what mine shows me…
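For completeness, getting it running amounts to the following; the exact filename is an assumption here, so substitute whichever release SourceForge actually serves:

```shell
# Mark the downloaded file as executable for all users, then launch it.
chmod a+x CUDA-Z-0.10.251-64bit.run
./CUDA-Z-0.10.251-64bit.run
```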

(Screenshot: Screenshot_20190429_112857 – CUDA-Z output)


I find almost all the information accurate, the only exception being the “Runtime Dll Version”. (Linux computers don’t generally have DLL files.) But I expect that this version-number stems from some internal limitation of the app, as I already know that my run-time version is 8.0.44.

Dirk


Update to Computer Phosphene Last Night

Yesterday evening, a major software update arrived on the computer which I name ‘Phosphene’, bringing its Debian version from 9.8 to 9.9. One of the main features of the update was an update to the NVIDIA graphics drivers, as installed from the standard Debian repositories, to version 390.116.

This raises the maximum OpenGL version supported by the drivers to 4.6.0, and for the first time, I’m noticing that it is now my hardware that limits me, to OpenGL 4.5.
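Both version ceilings can be read off directly with ‘glxinfo’, from Debian’s ‘mesa-utils’ package; a sketch, since the exact label strings can vary between driver releases:

```shell
# Core-profile OpenGL version the driver/hardware pair actually provides:
glxinfo | grep "OpenGL core profile version string"
# Compatibility-profile version string, which also names the driver release:
glxinfo | grep "OpenGL version string"
```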

The new driver version does not come with an update to the CUDA version, which merits some comment. When users install CUDA on Debian / Stretch from the repositories, they obtain run-time version 8.0.44, even though the newly-updated drivers support CUDA all the way up to version 9. This is a shame, because CUDA 8.0 cannot be linked to when compiling code with the GCC / CPP / C++ 6 tool-chain that is also standard for Debian Stretch. When we just want code to run on the GPGPU, we can load it onto the GPU using the CUDA run-time v8.0.44, and it runs fine. But if we want to compile major software against the headers, we are locked out: the current compiler version is too high for this older CUDA run-time version. (:1) (:4)

But on the other side of this irony, I just performed an extension of my own, by installing ‘ArrayFire‘ v3.6.3, coincidentally directly after this update. My first attempt to do so involved the binary installer that ships with its own CUDA run-time libraries, those being of version 10. Guess what: driver version 390 is still not high enough to accommodate run-time version 10. This resulted in a confusing error message at first, stating that the driver was not high enough, apparently to accommodate the run-time installed system-wide. That would have been bad news for me, as it would have meant a deeply misconfigured setup – and a newly-botched update. It was only after learning that the binary installer for ArrayFire ships with its own CUDA run-time, that I was relieved to know the system-installed run-time was fine…

(Screenshot: Screenshot_20190429_104916)

(Updated 4/29/2019, 20h20 … )


A bit of my personal history, experimenting in 3D game design.

I was wide-eyed and curious. Well before the year 2000, I only owned Windows-based computers, purchased most of my software for money, and also purchased a license for 3D Game Studio, some version of which is still being sold today. The version that I purchased well before 2000 used the ‘A4’ game engine; all the 3DGS versions have a game engine specified by the letter ‘A’ and a number.

That version of 3DGS was based on DirectX 7, because Microsoft owns and uses DirectX. DirectX 7 still had, as one of its capabilities, the ability to switch back into software rendering, even though it was perhaps one of the earliest APIs to offer hardware rendering – provided, that is, that the host machine had a graphics card capable of it.

I created a simplistic game using that engine, which had no real title, but which I simply referred to as my ‘Defeat The Guard Game’. And in so doing I learned a lot.

The API known as OpenGL offers what the DirectX versions offer. But because Microsoft has the primary say in how the graphics hardware is to be designed, OpenGL versions are frequently just catching up to what the latest DirectX versions have to offer. There is a loose correspondence in version numbers.

Shortly after the year 2000, I upgraded to the 3D Game Studio version with their ‘A6’ game engine. This was a game engine based on DirectX 9.0c, which was also standard with Windows XP. It no longer offered any possibility of software rendering, but it gave the customers of this software product their first opportunity to program shaders. And because I played with the ‘A6’ game engine for a long time, in addition to owning a PC that ran Windows XP for a long time, the capabilities of DirectX 9.0c became etched in my mind. However, as fate would have it, I never actually created anything significant with this game engine version – only snippets of code designed to test various capabilities.
