An Ode to Cinepaint

People who are knowledgeable and up-to-date about Linux will explain that Cinepaint bit the dust of bit-rot several years ago, and is effectively uninstallable on modern Linux computers. I have to accept that. It has no future. The latest Debian version on which it can still be installed, at least in theory, is Debian 8, also called ‘Debian Jessie’. But there is a tiny niche of tasks which it can perform, and which virtually no other open-source graphics application can: being given a sequence of numbered images that make up a video stream, and performing frame-by-frame edits on those images. Mind you, Cinepaint will not even split a video file into such a set of numbered images – I think the best tool for that is ‘ffmpeg’ – but, once given such a set of images, Cinepaint will quickly add them to its ‘Flipbook’, from which they can be processed manually, yet efficiently. I suppose this is a task which users don’t often have, and if they do, they’re probably also in a position to purchase software that will carry it out.
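
For readers who would like to try the same workflow, here is a minimal sketch of how ffmpeg can be asked to split a video into numbered frames, and later to reassemble the edited frames. The file names, frame rate and codec are only assumptions for the example; Cinepaint itself dictates none of them.

    # A minimal sketch (assumed file names and frame rate) of splitting a video
    # into numbered PNG frames with ffmpeg, and joining the edited frames again.
    import subprocess

    # Extract every frame of 'input.mp4' as frame_00001.png, frame_00002.png, ...
    subprocess.run(
        ["ffmpeg", "-i", "input.mp4", "frame_%05d.png"],
        check=True,
    )

    # ... frame-by-frame editing happens here, e.g. in Cinepaint's Flipbook ...

    # Re-encode the edited frames into a new video, assuming 24 frames per second.
    subprocess.run(
        ["ffmpeg", "-framerate", "24", "-i", "frame_%05d.png",
         "-c:v", "libx264", "-pix_fmt", "yuv420p", "output.mp4"],
        check=True,
    )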

But another big advantage which Cinepaint has over ‘GIMP’ is that Cinepaint will process High-Dynamic-Range images, such as ones that store half-precision, 16-bit floating-point numbers for each of their colour channels. And wouldn’t you know it, I still happen to have a Debian / Jessie laptop that’s fully functional! So, honouring glory that once was, I decided to custom-compile Cinepaint one more time, on that laptop, which I named ‘Klystron’. I was still successful today, with the exception that there is one key piece of functionality which I cannot coax out of the application, and which I will mention below. First, here are some screen-shots of what Cinepaint was once able to do…

 

[Screen-shots: Cinepaint_1, Cinepaint_2, Cinepaint_3, Cinepaint_4]

That fourth screen-shot is what one obtains when one chooses ‘Bracketing to HDR’ as the method for importing an image, and then specifies no images – because, like me, the person never uses the bracketed shooting mode of his DSLR.

One of the tasks which would be futile is to try to work seriously with images that have more than 8 bits per channel, without also working with ‘Colour Profiles’, aka ‘Colour Spaces’. Therefore, Cinepaint requires version 1 – not version 2 – of the ‘Little Colour Management System’, i.e. ‘lcms v1.19’. Here begin the hurdles in getting this to compile. A legitimate concern the reader could already have is that Debian Jessie had transitioned to ‘lcms v2’. In certain cases, custom-compiling an older version of a library, while the correct version is already installed from the package manager, could pose a risk to the computer. And so, before proceeding, I verified that the library names, and the names of the header files, of the package-installed ‘lcms v2’, have the major version number appended to their file-names. What this means is that when ‘lcms v1.19’ is installed under ‘/usr/local/lib’ and ‘/usr/local/include’, there is no danger that a future linkage of code could actually link to the wrong development bundle. There is only the danger that some future custom-compile could detect the presence of the wrong development bundle. And this will be true, as long as one is only installing the libraries, and not executables!
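
Here is a quick sketch of the kind of check I mean – listing which lcms libraries and headers are visible, before and after installing the custom build. The directories are the usual Debian / Jessie locations and are only assumptions; adjust them to your own system.

    # A minimal sketch of checking which lcms development bundles are visible,
    # before custom-compiling lcms v1.19 alongside the packaged lcms v2.
    # The directories are the usual Debian / Jessie ones; adjust as needed.
    import glob

    for pattern in (
        "/usr/lib/x86_64-linux-gnu/liblcms*.so*",   # package-installed libraries
        "/usr/include/lcms*.h",                     # package-installed headers
        "/usr/local/lib/liblcms*.so*",              # custom-compiled libraries
        "/usr/local/include/lcms*.h",               # custom-compiled headers
    ):
        for path in sorted(glob.glob(pattern)):
            print(path)

    # On Jessie, the packaged files should show up as 'liblcms2.so.2' and
    # 'lcms2.h', so a custom 'liblcms.so' / 'lcms.h' under /usr/local cannot
    # be confused with them at link time.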

 


Quickie: How 2D Graphics is just a Special Case of 3D Graphics

I have previously written in depth about the rendering pipeline by which 3D graphics are rendered to a 2D, perspective view, as part of computer games, or as part of other applications that require 3D in real time. But one problem with my writing in depth might be that people fail to see the relevance of the words, if the word-count goes beyond 500. :-)

So I’m going to try to summarize it more briefly.

Vertex positions in 3D can be rotated and translated using matrices. Matrices can be composited, meaning that if a sequence of multiplications of a position vector by known matrices accomplishes what we want, then a multiplication by a single, derived matrix can accomplish the same thing.
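
To make that concrete, here is a small sketch using 4×4 homogeneous matrices. The rotation angle, the translation offsets and the vertex are arbitrary example values; the point is only that rotating and then translating, step by step, gives the same result as one multiplication by the pre-composited matrix.

    # A small sketch of matrix composition, using 4x4 homogeneous matrices.
    # The rotation angle, translation and vertex are arbitrary example values.
    import numpy as np

    theta = np.radians(30.0)

    # Rotation about the Z axis.
    R = np.array([
        [np.cos(theta), -np.sin(theta), 0.0, 0.0],
        [np.sin(theta),  np.cos(theta), 0.0, 0.0],
        [0.0,            0.0,           1.0, 0.0],
        [0.0,            0.0,           0.0, 1.0],
    ])

    # Translation by (2, 3, 4).
    T = np.array([
        [1.0, 0.0, 0.0, 2.0],
        [0.0, 1.0, 0.0, 3.0],
        [0.0, 0.0, 1.0, 4.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

    v = np.array([1.0, 0.0, 0.0, 1.0])   # a vertex position, in homogeneous form

    step_by_step = T @ (R @ v)            # rotate, then translate
    composited   = (T @ R) @ v            # one multiplication by the derived matrix

    assert np.allclose(step_by_step, composited)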

Under DirectX 9 or OpenGL 2.x, 3D objects consisted of vertices that formed triangles. The positions and normal vectors of those vertices were transformed and rotated, respectively, and the vertices additionally possessed texture coordinates; all of this could be processed by “Vertex Pipelines”. The output from the Vertex Pipelines was then rasterized and interpolated, and fed to “Pixel Pipelines”, which performed per-screen-pixel computations on the interpolated values, and on how those values were applied to the Texture Images that were sampled.
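
As a rough sketch of the per-vertex part of that work – with made-up vertex data, and with the understanding that a real Vertex Pipeline ran on the GPU, not in Python – positions were transformed by the full matrix, while normal vectors were only rotated:

    # A minimal per-vertex sketch: positions are transformed by the full 4x4
    # matrix, while normal vectors are only rotated by its upper-left 3x3 block
    # (which is valid as long as the matrix contains no scaling).
    # The matrix M is assumed to be a composited model-view matrix.
    import numpy as np

    M = np.eye(4)                                  # placeholder model-view matrix
    positions = np.array([[0.0, 0.0, 0.0, 1.0],
                          [1.0, 0.0, 0.0, 1.0],
                          [0.0, 1.0, 0.0, 1.0]])   # one triangle, homogeneous
    normals   = np.array([[0.0, 0.0, 1.0]] * 3)    # per-vertex normal vectors
    texcoords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

    transformed_positions = positions @ M.T        # what a Vertex Pipeline did
    rotated_normals       = normals @ M[:3, :3].T  # normals: rotation only

    # The rasterizer would then interpolate 'texcoords' (and the other outputs)
    # across the triangle, handing per-pixel values to the Pixel Pipelines.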

All this work was done by dedicated graphics hardware, which is now known as a GPU. It was not done by software.

One difference that exists today is that the specialization of GPU cores into Vertex- and Pixel-Pipelines no longer exists. Due to something called the Unified Shader Model, any one GPU core can act either as a Vertex- or as a Pixel-Shader, and powerful GPUs possess hundreds of cores.

So the practical question does arise, of how any of this applies to 2D applications, such as Desktop Compositing. And the answer would be that it has always been possible to render a single rectangle, as though oriented in a 3D coordinate system. This rectangle, which is also referred to as a “Quad”, first gets Tessellated, meaning that it receives a diagonal subdivision into two triangles, which are still references to the same 4 vertices as before.
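
In code, that subdivision is usually nothing more than an index list over the same four vertices. Here is a minimal sketch; the coordinates are just an example unit square.

    # A minimal sketch of a tessellated Quad: four shared vertices, and two
    # triangles expressed purely as indices into that vertex list.
    import numpy as np

    quad_vertices = np.array([
        [0.0, 0.0],   # 0: bottom-left
        [1.0, 0.0],   # 1: bottom-right
        [1.0, 1.0],   # 2: top-right
        [0.0, 1.0],   # 3: top-left
    ])

    # The diagonal subdivision: two triangles, six indices, still only 4 vertices.
    quad_indices = np.array([
        [0, 1, 2],    # first triangle
        [0, 2, 3],    # second triangle, sharing the diagonal 0-2
    ])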

When an application receives a drawing surface onto which it draws its GUI – using CPU time – the corners of this drawing surface have 2D texture coordinates that are combinations of 0 and +1. The drawing surfaces themselves can be fed to the GPU as though they were Texture Images. And the 4 vertices that define the position of the drawing surface on the desktop can simply result from a matrix – one much simpler than any matrix that performed rotation in 3D etc. would have needed to be, before a screen position could be formed from it. Either way, the Vertex Program only needs to multiply the (notional) positions of the corners of a drawing surface by a single matrix, before a screen position results. In the 2D case, this matrix does not need to be computed from complicated trig functions.
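
Just to show how simple that matrix can get in the 2D case, here is a sketch that places a hypothetical 640×480 window at desktop position (100, 200), using nothing but a scale and a translation – no trig at all. The window size and position are made-up values.

    # A sketch of the 2D case: one matrix, built only from a scale and a
    # translation, positions a window's four corners on the desktop.
    # Window size and window position are made-up values.
    import numpy as np

    win_w, win_h = 640.0, 480.0      # size of the (hypothetical) window
    win_x, win_y = 100.0, 200.0      # its top-left position on the desktop

    # Notional corner positions, matching the texture coordinates (0 or 1).
    corners = np.array([
        [0.0, 0.0, 1.0],
        [1.0, 0.0, 1.0],
        [1.0, 1.0, 1.0],
        [0.0, 1.0, 1.0],
    ])

    # Scale the unit square to the window size, then translate it into place.
    M = np.array([
        [win_w, 0.0,   win_x],
        [0.0,   win_h, win_y],
        [0.0,   0.0,   1.0],
    ])

    desktop_positions = corners @ M.T
    print(desktop_positions[:, :2])
    # -> corners at (100, 200), (740, 200), (740, 680), (100, 680)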

And the GPU renders the scene to a frame-buffer, just as it rendered 3D games.


I just custom-compiled Tupi.

While I have spent a lot of time pursuing the subject of 3D graphics, obviously, 2D graphics also exist. And there exists an application named ‘Tupi’, which is a toolkit for creating 2D animations, in a cel- or storyboard-style way.

I had tried to install the version of Tupi which comes from the package manager for Debian / Jessie, but apparently the Debian Maintainers compiled it and then did not make sure that it worked. That version had a bug which caused the application to crash as soon as a new project was created.

So I felt that the only solution – just on the laptop I name ‘Klystron’ – was to custom-compile a later version, namely ‘version 0.2-git07’. This time around, the custom-compilation was somewhat difficult.

One reason for this difficulty was the fact that the developers specifically neglect Debian builds, and focus on Ubuntu builds. This fact may also have thwarted the Debian Maintainer this time around.

Yet, with much effort, I was able to get the higher version of this application to compile, and also to launch, and to create a new project without crashing. Yaay!

Dirk