I’ve just benchmarked my GPU’s ability to run OpenCL v1.2.

Recently I’ve come to doubt whether the GPU-computing ability of my graphics hardware, specifically, might be defective somehow. But given that such an ability exists, there are benchmarks which people can run to test it.

One such benchmark is called “LuxMark”, and I just ran it on the computer I name ‘Plato’.

The way LuxMark works is that it ray-traces a scene in software, thereby explicitly not using the standard, raster-based rendering which graphics hardware is most famous for. But as a twist, the engine compiles the C-like code which performs this task using OpenCL, instead of using a general C compiler targeting the CPU. Therefore, this software runs essentially as C, but on the GPU.
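For readers curious what that looks like in practice, below is a minimal sketch of an OpenCL 1.2 host program in C. The point it illustrates is just that the kernel source is a string of C-like code which the graphics driver compiles at runtime, for the GPU. The trivial “add” kernel, the variable names, and the omission of error checking are all my own simplifications for illustration; LuxCoreRender’s actual kernels are of course far more elaborate.

```c
/* Sketch: run C-like code on the GPU via OpenCL 1.2.
   The kernel below is plain text, compiled at runtime by the OpenCL driver. */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <CL/cl.h>

/* A trivial OpenCL C kernel: adds two float arrays element-wise. */
static const char *kernel_src =
    "__kernel void add(__global const float *a,\n"
    "                  __global const float *b,\n"
    "                  __global float *out)\n"
    "{\n"
    "    size_t i = get_global_id(0);\n"
    "    out[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], out[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Here the driver compiles the kernel source, at runtime, for this GPU. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "add", NULL);

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dout = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof out, NULL, NULL);

    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dout, &dout);

    /* Launch one work-item per array element, then copy the result back. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dout, CL_TRUE, 0, sizeof out, out, 0, NULL, NULL);

    printf("out[10] = %f (expected 30.0)\n", out[10]);

    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dout);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
```

On Linux, something like ‘gcc example.c -lOpenCL’ should build it, assuming the OpenCL headers and the ICD loader are installed.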

This is similar to what a demo program once did, which nVidia used to ship with their graphics cards, and which showed a highly realistic sports car, because ray-tracing produces greater realism than raster-based graphics would.

Here is the result:

[Screenshot: LuxMark benchmark result, score of 2280]

I suppose that people who are intrigued by CGI – as I am – might eventually be interested in acquiring the LuxCoreRender engine itself, which allows its users to render scenes of their own choosing. LuxMark just uses LuxCoreRender to benchmark the GPU with one specific, preset scene.

But what this tells me is that there is essentially still nothing wrong, at the hardware level, with my GPU or its ability to compute using OpenCL v1.2. And some version of OpenCL was also what the BOINC project was using, whose GPU Work Units I had been completing over the past several days.

One question I’d want answered next is whether a score of “2280” is good or bad. The site suggests that there are visitors whose GPUs are much stronger. But then, I’d need to have an account with LuxCoreRender to find out… :-D  The answer to that question is really a matter of logic. My graphics card is ‘only’ a series-400. Because there are users with series-900 or series-1000 graphics cards, obviously theirs will produce much faster benchmark results.

Dirk