Now, I have LuxCoreRender working on one of my computers.

In This Earlier Posting, I wrote that a rendering-engine exists which is actually a (‘software-based’) ray-tracer, making it better in picture-quality than what raster-based rendering can achieve – the latter being what hardware acceleration is more commonly associated with. But this rendering engine has been written in a variant of C, which makes it suitable, both in syntax and in semantics, to run under OpenCL, which in turn is a framework for running massively-parallel computations on our GPU – on our Graphics Processing Unit – assuming we have a sufficiently-strong graphics card.

That ray-tracer is known as “LuxCoreRender”.
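To make that ‘variant of C’ concrete, here is a toy kernel of my own devising – not actual LuxCoreRender code – held as a Python string, the way a host program typically carries kernel source before handing it to OpenCL for compilation:

    # A toy illustration, not LuxCoreRender's code: OpenCL kernel source is
    # carried as a string, to be compiled for the GPU by the OpenCL run-time.
    # The body is essentially C, extended with a few qualifiers.
    KERNEL_SOURCE = """
    __kernel void scale(__global const float *in,   /* device memory */
                        __global float *out,
                        const float factor)
    {
        int i = get_global_id(0);   /* one work-item per array element */
        out[i] = in[i] * factor;
    }
    """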

Well, previously I had spent a long time trying to get this to run on the computer which I name ‘Plato’, using the wrong methodologies, which consumed a whole day of my time and tired me out.

What I finally did was to register with the user forums of LuxCoreRender, and to ask them for advice. The advice they gave me solved that problem within about 30 minutes.

Their advice was:

  1. Download up-to-date Blender 2.79b from the Web-site – not the package-installed 2.78a – and then just unpack the binary into a local, user-space folder on my Linux computer,
  2. Install the binary BlendLuxCore plug-in, by telling Blender 2.79b to install it directly from its .ZIP-File (a sketch of both steps follows this list).
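For the record, here is a minimal sketch of what those two steps amounted to, as a small Python script; the archive file-name is written from memory and may differ from the actual download:

    import tarfile
    from pathlib import Path

    # File-name from memory; check against the archive actually downloaded
    # from blender.org before running this.
    archive = Path.home() / "Downloads" / "blender-2.79b-linux-glibc219-x86_64.tar.bz2"
    dest = Path.home() / "opt"            # a local, user-space folder
    dest.mkdir(parents=True, exist_ok=True)

    with tarfile.open(archive, "r:bz2") as tar:
        tar.extractall(dest)              # no root privileges required

    # Step 2 then happens inside Blender itself, roughly:
    #   File > User Preferences > Add-ons > Install Add-on from File...,
    # pointing it at the BlendLuxCore .ZIP-File, and enabling the add-on.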

It now works like a charm, and I am able to define 3D Scenes which I want LuxCoreRender to render! :-)  And running the code on the GPU – using OpenCL – also works fully.

[Image: luxcore_test1 – a test scene rendered with LuxCoreRender]

I suppose that one advantage which ray-tracing affords, over raster-based graphics, is that the Materials which we give models are not limited to U,V-mapped texture images and their H/W-based shaders, but can rather be complex Materials, which Blender allows us to edit using its Node-Editor, and the input-textures of which can be mathematically-based noise-patterns, as shown above.
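As an aside, such mathematically-based noise-patterns are easy to demonstrate outside Blender. Below is a minimal sketch of my own – a few octaves of value-noise, computed with the ‘numpy’ Python package – analogous in spirit to the procedural textures which the Node-Editor can feed into a Material, though it is not the algorithm Blender actually uses:

    import numpy as np

    def value_noise(size=256, cells=4, octaves=4, seed=0):
        """Sum several octaves of smoothly-interpolated random grids."""
        rng = np.random.default_rng(seed)
        out = np.zeros((size, size))
        amp = 1.0
        for o in range(octaves):
            n = cells * 2 ** o                  # grid gets finer each octave
            grid = rng.random((n + 1, n + 1))
            xs = np.linspace(0.0, n, size, endpoint=False)
            i = xs.astype(int)
            t = xs - i
            t = t * t * (3.0 - 2.0 * t)         # smoothstep easing
            g00 = grid[np.ix_(i,     i)]        # bilinear interpolation of
            g01 = grid[np.ix_(i,     i + 1)]    # the four surrounding grid
            g10 = grid[np.ix_(i + 1, i)]        # points, vectorized over the
            g11 = grid[np.ix_(i + 1, i + 1)]    # whole image at once
            ty, tx = t[:, None], t[None, :]
            out += amp * (g00 * (1 - ty) * (1 - tx) + g01 * (1 - ty) * tx
                          + g10 * ty * (1 - tx) + g11 * ty * tx)
            amp *= 0.5                          # finer octaves contribute less
        return out / out.max()                  # normalize to [0, 1]

    texture = value_noise()                     # a 256x256 grey-scale pattern

Because such a pattern is computed rather than stored, it can be generated at any resolution, which is part of what makes procedural textures attractive as inputs to complex Materials.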

Dirk


Problems getting LuxCoreRender to work on one of my computers.

In This Earlier Posting, I had written that I had used a benchmarking tool named “LuxMark”, to test whether my GPU even works, on the computer which I name ‘Plato’. In the process, I made the discovery that there exists a rendering-engine named ‘LuxCoreRender’, which is a ray-tracer, but which will do its ray-tracing with code that runs under OpenCL.

I had assumed that the benchmarking tool was free, but that users would need to be paying customers before they could use ‘LuxCoreRender’ to render their own scenes. To my great approval, I’ve discovered that LuxRender and LuxCoreRender are also Free, Open-Source Software. :-D  But to my disappointment, I’ve learned that there is no feasible way in which I could use that on ‘Plato’.

The reason is fairly straightforward. The devs have renamed their renderer from LuxRender to LuxCoreRender, at which point they also carried out a rebuild of their code, so that version 1.6 was discontinued, and ‘a cleaner build’ was started as version 2.x. In order to make practical use of a rendering engine, I need a GUI with which to create scenes for that engine to render. Well, LuxCoreRender has Blender v2.79b as a minimum requirement, while the Blender version I have installed on ‘Plato’ is only v2.78a. The requirement that the devs state is strict, because Blender versions before 2.79 contained a bug which would cause crashes. Not only that, but in this case, it would be a user-space application crashing while considerable processes are running on the GPU, which can cause severe memory-leaks, as I wrote Here.
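Incidentally, such a version gate is easy to express in Blender’s own Python. The following guard is a hypothetical one of my own, not BlendLuxCore’s actual code, although bpy.app.version is a real attribute:

    # A hypothetical guard, of my own; Blender's built-in 'bpy' module
    # reports the running Blender version as a tuple such as (2, 79, 0).
    import bpy

    if bpy.app.version < (2, 79, 0):
        raise RuntimeError("This add-on requires Blender 2.79b or newer.")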

Now, there does exist a stand-alone version of LuxCoreRender, v2.x, which in fact runs on ‘Plato’, but which remains rather useless to me, because it can only load and then render scene-descriptions which have been stored to files in Lux’s own format, and not in any other standard format.


I’ve just benchmarked my GPU’s ability to run OpenCL v1.2.

Recently, I’ve come into some doubt about whether the GPU-computing ability of my graphics hardware, specifically, might be defective somehow. But there exist benchmarks which people can run, to test exactly that ability.
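Before running any full benchmark, a quicker sanity-check is simply to ask OpenCL which devices it can see at all. Here is a minimal sketch, assuming the ‘pyopencl’ Python package and an installed OpenCL driver; a defective or driverless GPU would typically fail to appear in the output:

    # A minimal sanity-check: enumerate every OpenCL platform and device
    # that the installed drivers expose.
    import pyopencl as cl

    for platform in cl.get_platforms():
        print("Platform:", platform.name)
        for device in platform.get_devices():
            print("  Device:", device.name)
            print("  OpenCL version:", device.version)
            print("  Compute units:", device.max_compute_units)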

One such benchmark is called “LuxMark”, and I just ran it on the computer I name ‘Plato’.

The way LuxMark works is that it uses software to ray-trace a scene, thereby explicitly not using the standard ‘raster-based rendering’ which graphics hardware is most famous for. But as a twist, this engine compiles the C-code which performs this task using OpenCL, instead of using a general C compiler targeting the CPU. Therefore, this software runs as C, but on the GPU.
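That round trip – C source in, GPU results out – can be sketched minimally, again assuming the ‘pyopencl’ and ‘numpy’ Python packages. This toy vector-addition is my own illustration, and nothing like LuxMark’s actual ray-tracing kernels:

    # A toy illustration of running C-code on the GPU under OpenCL.
    import numpy as np
    import pyopencl as cl

    SRC = """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out)
    {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
    """

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)
    out = np.empty_like(a)

    ctx = cl.create_some_context()        # choose a device, e.g. the GPU
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags

    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

    prog = cl.Program(ctx, SRC).build()   # the run-time C compilation step
    prog.add(queue, a.shape, None, a_buf, b_buf, out_buf)
    cl.enqueue_copy(queue, out, out_buf)  # read the result back from the GPU

    assert np.allclose(out, a + b)

The call to .build() is where the C-for-the-GPU compilation actually takes place, at run-time and for whichever device the context selected, rather than ahead of time for the CPU.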

This is similar to what a demo-program once did, which nVidia used to ship with their graphics cards, and which showed a highly-realistic sports-car – because ray-tracing produces greater realism than raster-based graphics would.

Here is the result:

[Screenshot: screenshot_20180504_140224_c – the LuxMark result]

I suppose that people who are intrigued by CGI – as I am – might eventually be interested in acquiring the LuxCoreRender engine, which would allow software-customers to render scenes which they choose. LuxMark just uses LuxCoreRender, in order to benchmark the GPU with one specific, preset scene.

But what this tells me is that there is essentially still nothing wrong at the hardware-level with my GPU, or with its ability to compute using OpenCL v1.2. And, some version of OpenCL was also what the BOINC Project was using, whose GPU Work Units I had been completing for several recent days.

One question whose answer I’d want to know next, is whether a score of “2280” is good or bad. The site suggests that visitors exist whose GPUs are much stronger. But then, I’d need to have an account with LuxCoreRender to find out… :-D  Actually, the answer to that question is logical. My graphics card is ‘only’ a series-400. Because users exist with series-900 or series-1000 graphics cards, obviously theirs will result in much faster benchmarks.

Dirk