Now, I have LuxCoreRender working on one of my computers.

In This Earlier Posting, I wrote that a rendering-engine exists which is actually a (‘software-based’) ray-tracer, making it capable of better picture-quality than raster-based rendering can achieve, the latter being what is more commonly associated with hardware acceleration. But this rendering engine has been written in a variant of C, which makes it suitable, in both syntax and semantics, to run under OpenCL, which in turn is a framework for running massively-parallel computations on our GPU, our Graphics Processing Unit, assuming we have a sufficiently strong graphics card.

That ray-tracer is known as “LuxCoreRender”.

Well, previously I had spent a long time trying to get this to run on my computer, which I name ‘Plato’, using the wrong methodologies, which consumed a whole day of my time and tired me out.

What I finally did was to register with the user forums of LuxCoreRender, and to ask them for advice. And the advice they gave me solved that problem within about 30 minutes.

Their advice was:

  1. Download the up-to-date Blender 2.79b from the Web-site, not the package-installed 2.78a, and then just unpack the binary into a local, user-space folder on my Linux computer,
  2. Tell Blender 2.79b to install the BlendLuxCore add-on directly from its .ZIP-File (a sketch of both steps follows below this list).
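
For readers who want to reproduce this, here is a minimal shell sketch of those two steps. The archive name and download URL below are placeholders written from memory, not taken from the forum advice, so they should be checked against the official Blender download page:

# Unpack an up-to-date Blender build into a local, user-space folder.
# (The archive name and URL are illustrative; fetch the current ones from blender.org.)
mkdir -p ~/opt
cd ~/opt
wget https://download.blender.org/release/Blender2.79/blender-2.79b-linux-glibc219-x86_64.tar.bz2
tar -xjf blender-2.79b-linux-glibc219-x86_64.tar.bz2

# Download the BlendLuxCore add-on as a .ZIP-File, but do not unpack it.
# Then start the locally-unpacked Blender and, under
# File > User Preferences > Add-ons > Install Add-on from File,
# point it at the .ZIP-File, enable the add-on, and save the user settings.
~/opt/blender-2.79b-linux-glibc219-x86_64/blender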

It now works like a charm, and I am able to define 3D scenes which I want LuxCoreRender to render! :-)  Running the code on the GPU, using OpenCL, also works fully.
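
Before pointing LuxCoreRender at the GPU, one quick and hedged way to confirm that OpenCL actually sees the graphics card from user-space is to query it (this assumes the clinfo package is installed):

# List the OpenCL platforms and devices visible to user-space.
# The graphics card should appear with a Device Type of 'GPU'; if only the
# CPU shows up, the driver's OpenCL ICD is probably not installed.
clinfo | grep -E 'Platform Name|Device Name|Device Type'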

(Screenshot: luxcore_test1)

I suppose that one advantage which ray-tracing affords over raster-based graphics, is that the Material which we give models is not limited to U,V-mapped texture images and their hardware-based shaders. It can instead be a complex Material, which Blender allows us to edit using its Node-Editor, and whose input-textures can be mathematically-based noise-patterns, as shown above.

Dirk

 

Problems getting LuxCoreRender to work on one of my computers.

In This Earlier Posting, I had written that I had used a benchmarking tool named “LuxMark”, to test whether my GPU even works, on the computer which I name ‘Plato’. In the process, I made the discovery that there exists a rendering-engine named ‘LuxCoreRender’, which is a ray-tracer, but which does its ray-tracing with code that runs under OpenCL.

I had assumed that the benchmarking tool was free, but that users would need to be paying customers before they could use ‘LuxCoreRender’ to render their own scenes. To my great approval, I’ve discovered that LuxRender and LuxCoreRender are also Free, Open-Source Software. :-D  But to my disappointment, I’ve learned that there is no feasible way in which I could use it on ‘Plato’.

The reason is fairly straightforward. The devs have renamed their renderer from LuxRender to LuxCoreRender, at which point they also carried out a rebuild of their code, so that version 1.6 was discontinued, and ‘a cleaner build’ was started as version 2.x. In order to make practical use of a rendering engine, I need a GUI with which to create scenes for that engine to render. Well, LuxCoreRender has, as a minimum requirement, Blender v2.79b, while the Blender version I have installed on ‘Plato’ is only v2.78a. The requirement that the devs state is strict, because Blender versions before 2.79 contained a bug which would cause crashes. Not only that, but in this case the crashing, user-space application would be one with considerable processes running on the GPU, which can cause severe memory-leaks, as I wrote Here.

Now, there does exist a stand-alone version of LuxCoreRender, v2.x, which in fact runs on ‘Plato’, but which remains rather useless to me, because it can only load and then render scene-descriptions which have been stored to a file format that is entirely based on Lux, and not on any other standard.


Another Caveat, To GPU-Computing

I had written in previous postings that, on the computer which I name ‘Plato’, I had replaced the open-source ‘Nouveau’ graphics-drivers with the proprietary ‘nVidia’ drivers, which offer more capabilities. In this previous posting, I described a bug that had developed between these recent graphics-drivers and ‘xscreensaver’.

Well, there is more that can go wrong between the CPU and the GPU of a computer, if the computer is operating a considerable GPU.

When applications set up ‘rendering pipelines’ – aka contexts – they are loading data-structures as well as register-values onto the graphics card and into its graphics memory. Well, if the application, which according to older standards would only have resided in system memory, either crashes or gets forcibly closed using a ‘kill -9’ instruction, then the kernel and the graphics driver will fail to clean up whatever data-structures it had set up on the graphics card.

The ideal behavior would be that, if an application crashes, the kernel not only cleans up whatever resources it was using in system memory and within the O/S, but also whatever it had allocated in graphics memory. And for all I know, the programmers of the open-source drivers under Linux may have made this a top priority. But apparently, nVidia did not.

And so a scenario which can take place is that the user needs to kill a hung application that was making heavy use of the graphics card, and that afterward, the state of the graphics card is corrupted, so that, for example, ‘OpenCL’ kernels will no longer run on it correctly.
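
A hedged way to check for this sort of leftover state, assuming the proprietary driver’s nvidia-smi utility is installed, is to look at what is still holding graphics memory after the kill:

# Show the processes that currently hold allocations on the GPU,
# together with how much graphics memory each one is using.
# An application that was killed with 'kill -9' but still appears here,
# or memory that stays allocated with nothing listed as owning it,
# suggests that the driver failed to clean up after it.
nvidia-smi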


I’ve finally installed the proprietary nVidia graphics drivers.

In this earlier posting, I had written about the fact that it was a risky project to switch from the open-source ‘Nouveau’ graphics drivers, which are provided by a set of packages under Debian / Linux that contain the word ‘Mesa’, to the proprietary ‘nVidia’ drivers. So risky, that for a long time I faltered at doing this.

Well, just this evening I made the switch. Under Debian / Stretch – aka Debian 9 – this switch is relatively straightforward to accomplish. What we do is switch to a text-session, using <Ctrl>+<Alt>+F1, and then kill the X-server. From there, we essentially just need to give the command (as root):

apt-get install nvidia-driver nvidia-settings nvidia-xconfig

Giving this command essentially allows the Debian package-managers to perform all the post-install steps, such as black-listing the Nouveau drivers. One should expect this command to do much work as its side-effects, as it pulls in quite a few dependencies.
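
For readers who want the whole sequence in one place, here is a hedged sketch of it, assuming the desktop is running under a display manager such as lightdm (the service name is an assumption; sddm or gdm3 would be stopped the same way):

# From the text-session reached with <Ctrl>+<Alt>+F1, log in as root,
# then stop the display manager, which takes the X-server down with it.
# 'lightdm' is an assumption; substitute the display manager actually in use.
service lightdm stop

# Give the install command shown above; the package manager black-lists
# Nouveau and builds the DKMS kernel module as post-install steps.
apt-get install nvidia-driver nvidia-settings nvidia-xconfig

# Rebooting is the simplest way to make sure Nouveau is fully unloaded
# and the new kernel module is the one actually driving the card.
reboot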

(Edit 04/30/2018 :

In addition, the user must have up-to-date kernel / Linux headers installed, because installing the graphics driver also requires building DKMS kernel modules. But it’s always my assumption that I’d have kernel headers installed myself. )
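
For completeness, a one-line sketch of how those headers would be installed on Debian, if they happened to be missing (this assumes the running kernel is a stock Debian kernel, whose headers are packaged under the usual name):

# Install the headers that match the running kernel, so that DKMS
# can build the nvidia kernel module against them.
apt-get install linux-headers-$(uname -r)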

When I gave this command the first time, apt-get suggested additional packages to me, which I wrote down on a sheet of paper. And then I answered ‘No’ to the question of whether or not to proceed (without those), so that I could add all the suggested packages onto a new command-line.

(Update 05/05/2018 :

The additional, suggested packages which I mentioned above offer the ‘GLVND’ version of GLX. With nVidia, there are actually two ways to deliver GLX, one of which is an nVidia-centered way, and the other of which is a generic way. ‘GLVND’ provides the generic way. It’s also potentially more useful if, later on, we might want to install the 32-bit versions as well.

However, if we fail to add any other packages to the command-line, then the graphics-driver will load, but we won’t have any OpenGL capabilities at all. Some version of GLX must also be installed, and my package manager just happened to suggest the ‘GLVND’ packages.

Without OpenGL at all, the reader will be very disappointed, especially since even his desktop-compositing will not be running – at first.

The all-nVidia packages, which are not the ‘GLVND’ packages, accept certain primitive calls from user-space applications which ‘GLVND’ does not implement, because those instructions are not generically a part of OpenGL. Yet, certain applications do exist which require the non-‘GLVND’ versions of GLX to be installed, and I leave it up to the reader to find out which packages do that – if the reader needs them – and to write their names on a sheet of paper, prior to switching drivers.

It should be noted that, once we’ve decided to install either the ‘GLVND’ or the other version of GLX, trying to change our minds and switch to the other version is yet another nightmare, which I have not even contemplated so far. I’m content with the ‘GLVND’ version of GLX. )
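
A hedged way to find out which packages provide each GLX variant, before committing to one, is simply to ask the package manager; the search terms below are assumptions about how the Debian package names are spelled, not an exact list:

# List the nVidia GLX-related packages known to the package manager,
# so that the 'GLVND' and non-'GLVND' variants can be compared,
# and their names written down before switching drivers.
apt-cache search nvidia | grep -i glx
apt-cache search nvidia | grep -i glvnd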

(Edited 04/30/2018 :

There is one aspect of installing up-to-date nVidia drivers which I should mention. The GeForce GTX460 graphics card does not support 3rd-party frame-buffers. These 3rd-party frame-buffer drivers would normally allow <Ctrl>+<Alt>+F1 to show us not only a text-session, but one with decent resolution. Well, with the older, legacy graphics-chips, what I’d normally do is use the ‘uvesafb’ frame-buffer driver, just to obtain that. With modern nVidia hardware and drivers, this frame-buffer driver is incompatible. It even causes crashes, because with it, essentially, two drivers are trying to control the same hardware.

Just this evening, I tried one more time to get ‘uvesafb’ working, to no avail, even though it does work on the computer I name ‘Phoenix’. )

So, the way it looks for me now, the text-sessions are available, but only in very low resolution. They exist only for emergencies now.
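
For reference, a hedged sketch of how ‘uvesafb’ would be disabled again after an experiment like that; the file name under /etc/modprobe.d is my own choice, not something prescribed by Debian:

# Keep the module from being loaded at boot, then rebuild the initramfs
# so that the change also applies to early boot.
# (If uvesafb was also listed in /etc/initramfs-tools/modules, remove it there too.)
echo "blacklist uvesafb" > /etc/modprobe.d/blacklist-uvesafb.conf
update-initramfs -u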

But this is the net result I obtained, after I had disabled the ‘uvesafb’ kernel module again:

 


dirk@Plato:~$ infobash -v
Host/Kernel/OS  "Plato" running Linux 4.9.0-6-amd64 x86_64 [ Kanotix steelfire-nightly Steelfire64 171013a LXDE ]
CPU Info        8x Intel Core i7 950 @ clocked at Min:1600.000Mhz Max:2667.000Mhz
Videocard       NVIDIA GF104 [GeForce GTX 460]  X.Org 1.19.2  [ 1920x1080 ]
Processes 262 | Uptime 1:16 | Memory 3003.9/12009.6MB | HDD Size 2000GB (6%used) | GLX Renderer GeForce GTX 460/PCIe/SSE2 | GLX Version 4.5.0 NVIDIA 375.82 | Client Shell | Infobash v2.67.2
dirk@Plato:~$

dirk@Plato:~$ clinfo | grep units
  Max compute units                               7
dirk@Plato:~$ clinfo | grep multiple
  Preferred work group size multiple              32
dirk@Plato:~$ clinfo | grep Warp
  Warp size (NV)                                  32
dirk@Plato:~$


 

So what this means in practice, is that I now have OpenGL 4.5 on the computer named ‘Plato’, as well as a fully-functional install of ‘OpenCL’ and ‘CUDA’, contrary to what I had according to this earlier posting.

Therefore, GPU-computing will not just exist in theory for me now, but also in practice.

And this shows that the graphics card on that machine ‘only’ possesses 224 cores after all (7 compute units × 32), not the 7×48 which I had expected earlier, according to a Windows-based tool that is no longer installed.

(Updated 04/29/2018 … )
