(Solved) How the Latest Wine-Staging Release for Debian Introduces a Packaging Problem: Unresolved Dependencies

One piece of software which many Linux users benefit from is called "Wine", a recursive acronym that stands for 'Wine Is Not an Emulator'. This software allows some Windows programs to run under Linux, and has recently been made more powerful, so that graphics-intensive games can run as well. The version of Wine which I've been subscribing to is called 'wine-staging', and contains the latest, bleeding-edge features that are slated to appear in mainline Wine at some later point down the road.

But as I've been receiving updates to wine-staging, the latest update would have taken all my computers from version 4.0-rc2 to version 4.0-rc4, meaning 'Release Candidate 4'. At that point, some of my computers could not install the update, because of unresolved package dependencies.

Apparently what’s been happening with Wine-Staging is twofold:

  1. The original maintainers are no longer running the project, and Wine is in the midst of a code freeze, so present updates may offer fewer or no new features, compared with what Linux users are used to, and
  2. In the interest of (potentially) allowing the Windows games with the highest graphics requirements to run, the devs have nevertheless made the latest Wine packages depend on the packages that provide OpenCL natively under Linux.

'OpenCL' is a user-side API, which also requires system-side drivers for the specific graphics hardware, and which allows massively parallel computations to be carried out on the GPU instead of on the CPU. Apparently, some games use either 'OpenCL' or 'CUDA' these days, in order to incorporate complex Physics simulations into the game.

(I will assume that the version of CUDA which Wine emulates requires OpenCL to be installed on the Linux side.)

The problem under Debian which I've detected is that the newly-introduced dependency of 'wine-staging-amd64' (as an example) is mainly this package:

'ocl-icd-libopencl1'

This means that Wine will now insist on using the generic OpenCL drivers, which were developed for and by the Linux community, while not allowing the use of the proprietary, closed-source OpenCL drivers that were developed by the manufacturers of each graphics card.

The problem with this is that, as users, we may only have one or the other OpenCL driver-set installed; we cannot install both the generic and the proprietary drivers. When we try to update Wine-Staging now, doing so will try to install the package above, which will also try to 'kick out' any proprietary packages we may have installed that provide OpenCL.
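Either OpenCL loader-library, once installed, typically dispatches calls to whichever vendor implementations are registered on the system. To see what is registered on a given machine, one can inspect that registry directly. Here is a minimal Python sketch, assuming the conventional /etc/OpenCL/vendors directory that the 'ocl-icd' loader reads (a convention, not a guarantee on every distribution):

```python
# Sketch: list which OpenCL implementations are registered, by reading
# the ICD registry directory.  Each .icd file conventionally contains a
# single line naming the vendor's OpenCL library.
from pathlib import Path

def list_opencl_icds(vendors_dir="/etc/OpenCL/vendors"):
    """Return {icd-file-name: vendor-library-path} for each registered ICD."""
    registry = {}
    # Path.glob() on a missing directory simply yields nothing.
    for icd in sorted(Path(vendors_dir).glob("*.icd")):
        registry[icd.name] = icd.read_text().strip()
    return registry

if __name__ == "__main__":
    for name, lib in list_opencl_icds().items():
        print(f"{name} -> {lib}")
```

On a machine with the proprietary NVIDIA stack installed, this would typically show an entry such as 'nvidia.icd' pointing at the vendor's library.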


About how I won’t be doing any ‘ASL’ computing soon.

There exists an Open-Source code library named 'ASL', which stands for "Advanced Simulation Library". Its purpose is to allow application designers, who need not be deep experts at writing C++ code, to perform fluid simulations, with the volume-based simulations running on the GPU of the computer instead of on the CPU. This is why 'ASL' can be called hardware-accelerated.

Last night I figured that 'ASL' should run nicely on the Debian / Stretch computer I name 'Plato', because that computer has a GeForce GTX 460 graphics card, which was considered state-of-the-art in 2011. But unfortunately for me, 'ASL' will only run its simulations correctly if the GPU delivers 'OpenCL' version 1.2 or greater. The GeForce GTX 460 is only capable of OpenCL 1.1, and is therefore no longer state-of-the-art by far.
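The check that tripped me up amounts to comparing version strings. The following Python sketch only parses the string a device reports; the "OpenCL <major>.<minor> <vendor info>" format comes from the OpenCL specification, and actually querying the device would require a binding such as pyopencl, or the 'clinfo' utility:

```python
# Sketch: decide whether a GPU's reported OpenCL version meets a
# requirement such as ASL's minimum of OpenCL 1.2.
import re

def opencl_version(version_string):
    """Extract (major, minor) from a CL_DEVICE_VERSION-style string."""
    m = re.match(r"OpenCL (\d+)\.(\d+)", version_string)
    if not m:
        raise ValueError(f"Unrecognized version string: {version_string!r}")
    return (int(m.group(1)), int(m.group(2)))

def meets_requirement(version_string, minimum=(1, 2)):
    # Tuples compare element-wise, so (1, 1) >= (1, 2) is False.
    return opencl_version(version_string) >= minimum

# A GTX 460 on NVIDIA's driver reports OpenCL 1.1, failing ASL's requirement:
print(meets_requirement("OpenCL 1.1 CUDA"))   # False
print(meets_requirement("OpenCL 1.2 pocl"))   # True
```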

Last night, I worked until exhausted, trying various solutions, in hopes that maybe the library had not been compiled correctly – I custom-compiled it, after finding out that the simulations were not running correctly. I also looked into the possibility that maybe I had just not been executing the sample experiments correctly. But alas, the problem was my 'weak' graphics card, which is nevertheless OpenGL 4 -capable.

As an alternative to using 'ASL', Linux users can use the Open-Source program-set called 'Elmer'. Its solvers run on the CPU.

Further, there is an associated GUI application called 'ParaView', the purpose of which is to take as input volume-based geometries and arbitrary values – i.e., fluid states – and to render those with some amount of graphics finesse. I.e., 'ParaView' can be used to post-process the simulations that were created with 'ASL' or with 'Elmer' into a presentable visual. The version of 'ParaView' that installs from the package manager under Debian / Stretch, 5.1.x, works fine. But for a while last night, I did not know whether the problems I was running into were actually due to 'ASL' or to 'ParaView' misbehaving. And so what I also did was custom-compile 'ParaView', to version 5.5.2. If one does this, the next problem one has is that ParaView v5.5.2 requires VTK v7, while under Debian / Stretch, all we have is VTK v6.3. And so on my platform, version 5.5.2 of ParaView encountered problems, in addition to 'ASL' encountering problems, and for a while I had difficulty identifying the root causes of these bugs.

Finally, the development-branch, custom-compiled version of 'Elmer' and the package-manager-installed 'ParaView' v5.1.x will serve me fine.

Dirk

 

About the LuxCore Renderer, and OpenCL Rendering.

One of the subjects which I've written about before is that, on one of my computers, the 3D graphics application 'Blender' is set up, optionally, to use 'LuxCore' to render a scene, and that the main feature of LuxCore which I'm interested in is its ability to render the scene using OpenCL, and therefore using the GPU cores, rather than just the CPU cores.

A curious question which some of my readers might ask would be: 'How does the user know that LuxCore is in fact using the GPU, when he or she has simply selected OpenCL as the rendering hardware?'

In my case, the answer is the fact that my GPU temperature increases dramatically. Whenever I perform any type of GPU computing, the GPU temperature does the same. I've had the GPU get 70⁰C hot, while, when idling, the GPU temperature will not exceed 40⁰C. It can be observed, though, that when OpenCL rendering is selected, all 8 CPU cores are also in use, and that may just be a way in which LuxCore works. Further, it may be a way in which OpenCL works, to make the CPU cores available in addition to the GPU cores.
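For readers who want to watch the same effect numerically rather than by feel, here is a Python sketch that reads temperatures from the Linux kernel's hwmon interface. The /sys/class/hwmon layout and millidegree units are a kernel convention, but which hwmon node belongs to the graphics card varies per system, so this scans them all; on NVIDIA's proprietary driver, 'nvidia-smi' is the usual tool instead:

```python
# Sketch: report all hwmon temperature sensors, the way a monitoring
# applet might.  Values in temp*_input files are millidegrees Celsius.
from pathlib import Path

def read_hwmon_temps(hwmon_root="/sys/class/hwmon"):
    """Return {sensor-name: temperature-in-Celsius} for all hwmon sensors."""
    temps = {}
    for hwmon in Path(hwmon_root).glob("hwmon*"):
        name_file = hwmon / "name"
        name = name_file.read_text().strip() if name_file.exists() else hwmon.name
        for temp_file in hwmon.glob("temp*_input"):
            millideg = int(temp_file.read_text().strip())
            temps[f"{name}/{temp_file.name}"] = millideg / 1000.0
    return temps

if __name__ == "__main__":
    for sensor, celsius in read_hwmon_temps().items():
        print(f"{sensor}: {celsius:.1f} C")
```

Running this before and during an OpenCL render would make the idle-versus-load temperature difference visible as numbers.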

Dirk

 

Now, I have LuxCoreRender working on one of my computers.

In This Earlier Posting, I wrote that a rendering engine exists which is actually a ('software-based') ray-tracer, making it better in picture quality than what raster-based rendering can achieve, the latter of which is more commonly associated with hardware acceleration. But this rendering engine has been written in a variant of C, which makes it suitable, both in syntax and in semantics, to run under OpenCL – which, in turn, is an API for running massively parallel computations on our GPU, our Graphics Processing Unit – assuming we have a sufficiently strong graphics card.

That ray-tracer is known as “LuxCoreRender”.

Well, previously I had spent a long time trying to get this to run on my computer, which I name 'Plato', using the wrong methodologies, thus consuming a whole day of my time and tiring me out.

What I finally did was register with the user forums of LuxCoreRender and ask them for advice. And the advice they gave me solved that problem within about 30 minutes.

Their advice was:

  1. Download the up-to-date Blender 2.79b from the Web-site – not the package-installed 2.78a – and just unpack the binary into a local, user-space folder on my Linux computer,
  2. Tell Blender 2.79b to install the binary BlendLuxCore add-on directly from its .ZIP-File.

It now works like a charm, and I am able to define 3D scenes which I want LuxCoreRender to render! :-)  And running the code on the GPU – using OpenCL – also works fully.

[Image: luxcore_test1]

I suppose that one advantage which ray-tracing affords over raster-based graphics is that the Material which we give models is not limited to U,V-mapped texture images and their hardware-based shaders, but can rather be a complex Material, which Blender allows us to edit using its Node Editor, and whose input textures can be mathematically-based noise patterns, as shown above.

Dirk