How To Install Yafaray Under Linux

One of the computing subtopics I dabble in is 3D-graphics software. Accordingly, I already have “Blender 2.78a”, which has its own built-in software-rendering engine, and I have several other rendering engines installed on my Linux-based computers.

Further, rendering engines by themselves are of little use unless they integrate well with a GUI (such as Blender). And so one undertaking I typically get around to on a given computer is to install “Yafaray”, formerly known as ‘Yafray’, which stood for ‘Yet Another Free Ray-Tracer’. If it’s installed properly, Blender can render its scenes using Yafaray, from within Blender itself.
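
To make “integrates with Blender” a bit more concrete: the exporter add-on registers Yafaray as a selectable render engine inside Blender. Below is a minimal sketch of what that looks like from Blender 2.7x’s Python console; the add-on module name and engine identifier are placeholders, since the real names vary with the Yafaray exporter version that gets installed.

```python
import bpy

# Hypothetical names: "yafaray_v3" and 'YAFA_V3_RENDER' stand in for whatever
# module / engine identifier the installed exporter actually registers.
bpy.ops.wm.addon_enable(module="yafaray_v3")        # enable the exporter add-on (Blender 2.7x operator)
bpy.context.scene.render.engine = 'YAFA_V3_RENDER'  # select Yafaray as the render engine
bpy.ops.render.render()                             # render the current scene with it
```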

Yafray used to be a much simpler piece of software to install than it has become. But I’m sure the effort I put into it this evening will be well worth it eventually. What I’m used to doing is to download a source-tree and, if it’s CMake-based, to run ‘cmake-gui‘ on it, custom-pick my build options, and go. But with Yafaray, this approach led to near chaos. It compiled all the source code properly into libraries, but then installed those libraries to nonsensical locations within my system folders. One reason was that part of the project builds Python 3 bindings; another was the need for Blender integration, since modern Blender versions are scripted with Python 3. In any case I made sure to install all the build dependencies via my package-manager, but doing so was not enough to obtain a working outcome.


Continue reading How To Install Yafaray Under Linux

More about Framebuffer Objects

In the past, when I was writing about hardware-accelerated graphics (i.e., graphics rendered by the GPU), such as in this article, I chose phrasing according to which the Fragment Shader eventually computes the color-values of pixels ‘to be sent to the screen’. I felt that this over-simplification made the topic a bit easier to understand at the time.

A detail which I had deliberately left out was that the rendering target may not be the screen in any given context. What happens is that memory-allocation, even the allocation of graphics-memory, is still carried out by the CPU, not the GPU. And ‘a shader’ is just another way of saying ‘a GPU program’. In the case of a “Fragment Shader”, what this GPU program does can best be visualized as shading, whereas in the case of a “Vertex Shader”, it just consists of computations that affect coordinates, and may therefore be referred to just as easily as ‘a Vertex Program’. Separately, there exists a graphics-card extension that allows the shader language to be the ARB assembly language, which may also be referred to as defining a Vertex Program. ( :4 )
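
To make ‘a GPU program’ concrete, here is a minimal sketch of how the CPU-side code compiles and links a pair of shaders, written against the PyOpenGL bindings for brevity. It assumes an OpenGL context already exists (from GLUT, GLFW, SDL, etc.), and the GLSL is deliberately trivial.

```python
from OpenGL.GL import GL_VERTEX_SHADER, GL_FRAGMENT_SHADER
from OpenGL.GL.shaders import compileProgram, compileShader

# A Vertex Shader: nothing but computations on coordinates.
VERTEX_SRC = """
#version 120
attribute vec3 position;
void main() {
    gl_Position = vec4(position, 1.0);
}
"""

# A Fragment Shader: computes the color of each covered pixel (fragment).
FRAGMENT_SRC = """
#version 120
void main() {
    gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0);
}
"""

# The CPU-side program compiles both and links them into one GPU program.
program = compileProgram(
    compileShader(VERTEX_SRC, GL_VERTEX_SHADER),
    compileShader(FRAGMENT_SRC, GL_FRAGMENT_SHADER),
)
```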

The CPU sets up the context within which the shader is supposed to run, and one of the elements of this context is a buffer, into which the given Fragment Shader is to render its pixels. The CPU sets this up just as it sets up the 2D texture images from which the shader fetches texels.
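
As an illustration of the CPU doing the allocating, here is a small sketch (again PyOpenGL, again assuming a current OpenGL context) that reserves an empty 2D texture image in graphics memory before any shader runs:

```python
from OpenGL.GL import *

WIDTH, HEIGHT = 512, 512

# A CPU-side call that asks the driver to allocate an empty 512x512 RGBA image
# in graphics memory; passing None for the data means "allocate, don't fill".
color_tex = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, color_tex)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, WIDTH, HEIGHT, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, None)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
```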

The rendering target of a given shader-instance may be ‘what the user finally sees on his display’, or it may not. Under OpenGL, the rendering target could just as well be a Framebuffer Object (an ‘FBO’), which the CPU has also set up as an available texture-image, from which another shader-instance samples texels. The result of that would be Render To Texture (‘RTT’).
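
Continuing the sketch above, this is roughly what setting up an FBO for Render To Texture looks like; the names carry over from the previous snippet, and the actual draw calls are elided:

```python
from OpenGL.GL import *

# color_tex, WIDTH, HEIGHT: from the texture-allocation sketch above.

# Create the FBO and attach the texture as its color target.
fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color_tex, 0)

# Optional depth attachment, so the off-screen pass can still depth-test.
depth_rb = glGenRenderbuffers(1)
glBindRenderbuffer(GL_RENDERBUFFER, depth_rb)
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, WIDTH, HEIGHT)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depth_rb)

assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE

# Pass 1: render the scene into the FBO instead of the window;
# the Fragment Shader's output lands in color_tex.
glBindFramebuffer(GL_FRAMEBUFFER, fbo)
glViewport(0, 0, WIDTH, HEIGHT)
# ... draw calls for the off-screen scene ...

# Pass 2: switch back to the default framebuffer (the window) and sample
# color_tex like any other texture, which is Render To Texture.
glBindFramebuffer(GL_FRAMEBUFFER, 0)
glBindTexture(GL_TEXTURE_2D, color_tex)
# ... draw a full-screen quad whose shader samples color_tex ...
```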

Continue reading More about Framebuffer Objects