I just installed Sage (Math) under Debian / Stretch.

One of the mundane limitations I’ve faced in past years, when installing open-source Computer Algebra Systems under Linux, was that the only game in town – almost – was either ‘Maxima’ or ‘wxMaxima’, the latter of which is a fancy GUI, as well as a document exporter, for the former.

Well, one fact which the rest of the computing world has known about for some time, but which I am only now discovering for myself, is that software exists called ‘SageMath’. Under Debian / Stretch, this is straightforward to install, just by installing the meta-package named ‘sagemath’ from the standard repositories. If the reader also wants to install this, then I recommend additionally installing ‘sagemath-doc-en’, as well as ‘sagetex’ and ‘sagetex-doc’. Doing this will literally pull in hundreds of actual packages, so it should only be done on a strong machine, with a fast Internet connection! But once this has been done, the result will be enjoyable:

[Screenshot: the SageMath Notebook running in a browser, 2018-09-15]

I have just clicked around a little bit in the SageMath Notebook viewer, which is browser-based, and which I’m sure only provides a skeletal front-end to the actual software. But there is a feature which I already like: when the user wishes to Print his or her Worksheet, doing so from the browser just opens a secondary browser window, from which we may ‘Save Page As…’, and when we do, we discover that the HTML which gets saved has its own, internal ‘MathJax’. What this seems to suggest at first glance is that the equations will display correctly typeset, without depending on an external CDN. Yay!
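
For anyone curious what a first session looks like, here is a minimal sketch of a Notebook cell or command-line session (Sage uses Python-like syntax; the exact output formatting may differ from what my comments suggest):

    # Declare a symbolic variable and do some symbolic computation.
    var('x')
    f = integrate(sin(x)^2, x)     # symbolic antiderivative of sin(x)^2
    print(f)                       # plain-text form of the result
    show(f)                        # in the Notebook, renders the result as typeset math
    solve(x^2 - 2 == 0, x)         # exact, symbolic roots involving sqrt(2)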

I look forward to getting more use out of this in the near future.


I’ve just benchmarked my GPU’s ability to run OpenCL v1.2.

Recently, I’ve come into some doubt about whether the GPU-computing ability of my graphics hardware, specifically, might be defective somehow. But given that ability, there exist benchmarks which people can run to test it.

One such benchmark is called “LuxMark”, and I just ran it on the computer I name ‘Plato’.

The way LuxMark works is that it ray-traces a scene in software, thereby explicitly not using the standard, ‘raster-based rendering’ which graphics hardware is most famous for. But as a twist, this engine compiles the C code which performs the task using OpenCL, instead of using a general C compiler targeting the CPU. Therefore, this software runs as C, but on the GPU.
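
To illustrate the general idea – and this is only a minimal sketch, not LuxMark’s actual code – here is how OpenCL accepts C-like kernel source and compiles it at run time for the GPU, written here in Python and assuming that the ‘pyopencl’ and ‘numpy’ packages are installed:

    import numpy as np
    import pyopencl as cl

    # C-like OpenCL kernel source, compiled by the GPU driver at run time.
    kernel_src = """
    __kernel void scale(__global const float *src, __global float *dst, const float factor)
    {
        int i = get_global_id(0);   /* one work-item per array element */
        dst[i] = src[i] * factor;
    }
    """

    ctx = cl.create_some_context()                  # choose an OpenCL platform / device
    queue = cl.CommandQueue(ctx)
    program = cl.Program(ctx, kernel_src).build()   # the run-time compilation step

    data = np.arange(16, dtype=np.float32)
    mf = cl.mem_flags
    src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=data)
    dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, data.nbytes)

    program.scale(queue, data.shape, None, src_buf, dst_buf, np.float32(2.0))

    result = np.empty_like(data)
    cl.enqueue_copy(queue, result, dst_buf)
    print(result)                                   # the input array doubled, computed on the GPU

LuxMark’s rendering kernels are, of course, far more elaborate than this toy example, but the compile-at-run-time step is the same basic mechanism.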

What LuxMark does is similar to a demo program that nVidia used to ship with their graphics cards, which showed a highly realistic sports car, because ray-tracing produces greater realism than raster-based graphics would.

Here is the result:

[Screenshot: LuxMark benchmark result, 2018-05-04]

I suppose that people who are intrigued by CGI – as I am – might eventually be interested in acquiring the LuxCoreRender engine itself, which would allow its users to render scenes of their own choosing. LuxMark just uses LuxCoreRender in order to benchmark the GPU with one specific, preset scene.

But what this tells me is that there is essentially still nothing wrong, at the hardware level, with my GPU or with its ability to compute using OpenCL v1.2. And some version of OpenCL was also what the BOINC project was using, whose GPU Work Units I had been completing for several recent days.
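
Also, for anyone who wants to confirm which OpenCL version their drivers report, without running a full benchmark, a short query such as this one (again only a sketch, assuming the ‘pyopencl’ package is installed) will list every platform and device:

    import pyopencl as cl

    # Print every OpenCL platform and device the installed drivers expose,
    # along with the OpenCL version string each one reports.
    for platform in cl.get_platforms():
        print(platform.name, platform.version)
        for device in platform.get_devices():
            print("  ", device.name, device.version)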

One question I’d want answered next is whether a score of “2280” is good or bad. The site suggests that visitors exist whose GPUs are much stronger. But then, I’d need to have an account with LuxCoreRender to find out… :-D  The answer to that question is logical: my graphics card is ‘only’ a series-400 card. Because users exist with series-900 or series-1000 graphics cards, obviously, theirs will produce much faster benchmark scores.

Dirk