## An Ode to Cinepaint

People who are knowledgeable about Linux, and up to date, will explain that Cinepaint succumbed to bit-rot several years ago, and is effectively uninstallable on modern Linux computers. I have to accept that; it has no future. The latest Debian version on which it can still be installed, in theory, is Debian 8, also called ‘Debian Jessie’. But there is a tiny niche of tasks which it can perform, and which virtually no other open-source graphics application can: being given a sequence of numbered images that make up a video stream, and performing frame-by-frame edits on those images. Mind you, Cinepaint will not even split a video file into such a set of numbered images – I think the best tool for that is ‘ffmpeg’ – but, once given such a set of images, Cinepaint will allow them to be added to its ‘Flipbook’ quickly, from which they can be processed manually, yet efficiently. I suppose this is a task which users don’t often have, and if they do, they’re probably also in a position to purchase software that will carry it out.
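For readers who want to try this, the following sketch builds the sort of ‘ffmpeg’ invocation that splits a video into the numbered PNG frames the Flipbook can load. The file-names here (‘input.mp4’, a ‘frames’ directory) are hypothetical placeholders, and the frame-numbering width is my own choice:

```python
# Sketch: construct the ffmpeg command that extracts every frame of a
# video as a numbered PNG image, suitable for Cinepaint's Flipbook.
# The names "input.mp4" and "frames" are hypothetical placeholders.

def frame_extract_cmd(video, out_dir, digits=6):
    """Return the argv list for splitting a video into numbered PNGs."""
    pattern = f"{out_dir}/%0{digits}d.png"
    return ["ffmpeg", "-i", video, pattern]

cmd = frame_extract_cmd("input.mp4", "frames")
print(" ".join(cmd))
# To actually run it (requires ffmpeg on the PATH), pass cmd to
# subprocess.run(cmd, check=True), after creating the output directory.
```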

But another big advantage which Cinepaint has over ‘GIMP’ is that Cinepaint will process High-Dynamic-Range images, such as ones that have half-precision, 16-bit floating-point numbers for each of their colour channels. And wouldn’t the reader have guessed it: I still happen to have a Debian / Jessie laptop that’s fully functional! So, honouring glory that once was, I decided to custom-compile Cinepaint one more time, on that laptop, which I named ‘Klystron’. I was still successful today, with the exception of one key piece of functionality which I cannot coax out of the application, and which I will mention below. First, here are some screen-shots of what Cinepaint was once able to do…

That fourth screen-shot is what one obtains when one chooses ‘Bracketing to HDR’ as the method to import an image, and then specifies no images – because, like me, that person never uses the bracketed shooting mode of his DSLR.

One task which would be futile is to try to work seriously with images that have more than 8 bits per channel, without also working with ‘Colour Profiles’, aka ‘Colour Spaces’. To that end, Cinepaint requires version 1, not version 2, of the ‘Little Colour Management System’ – specifically, ‘lcms v1.19’. Here begin the hurdles in getting this to compile. A legitimate concern that the reader could already have is that Debian Jessie had transitioned to ‘lcms v2’. In certain cases, custom-compiling an older version of a library, while the current version is already installed from the package manager, could pose a risk to the computer. And so, before proceeding, I verified that the library names, and the names of the header files, of the package-installed ‘lcms v2’, have the major version number appended to their file-names. What this means is that when ‘lcms v1.19’ is installed under ‘/usr/local/lib’ and ‘/usr/local/include’, there is no danger that a future linkage of code could actually link against the wrong development bundle. There is only the danger that some future custom-compile could detect the presence of the wrong development bundle. And this will remain true, as long as one is only installing the libraries, and not executables!
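The check described above can be sketched as a simple set comparison. The file-names below are illustrative: to the best of my knowledge, ‘lcms v1’ installs names like ‘liblcms.so.1’ and ‘lcms.h’, while ‘lcms v2’ installs ‘liblcms2.so.2’ and ‘lcms2.h’, but the exact names on any given system should be verified:

```python
# Sketch: confirm that the file names an existing package installs do
# not collide with the names a custom build would add under /usr/local.
# All names below are illustrative, not read from a real system.

def collisions(packaged, custom):
    """Return file names that appear in both installations."""
    return sorted(set(packaged) & set(custom))

packaged = ["liblcms2.so.2", "liblcms2.so.2.0.6", "lcms2.h"]   # lcms v2
custom   = ["liblcms.so.1", "liblcms.so.1.0.19", "lcms.h"]     # lcms v1.19
print(collisions(packaged, custom))  # an empty list means no clash
```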

## One Method to Convert Text to an SVG File

The problem can exist that we want to import text into an application which expects a graphics file, but which is flexible enough to accept SVG files as an available graphics-file format.

In studying this problem, I came to a discovery which was new to me, about what SVG files are. In fact, SVG is an XML-based markup language, similar in appearance to HTML, so that by default, SVG files are actually text files! This also means that if our Web-authoring software offers to embed SVG, this is not done with an `<embed>` tag, as if the file were to be treated as some sort of image, but rather, using an actual `<svg>` tag.

The main difference in standalone SVG files would seem to be that they prepend an XML declaration (`<?xml … ?>`), making the file a self-contained document.

What this also means is that text can be converted into SVG files most efficiently using a text editor, where we’d first set up a template, then copy it to a new file-name every time we need a working SVG file, and then just edit the text…

The following is a type of template which has worked for me, in experiments I carried out:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<!--
Instructions for Windows users:
This file will probably need to be renamed
From: Template.svg.txt (where .txt was hidden)
To:   Template.svg

And then placed in a folder with other images.
A console window would need to be navigated to
the same directory...

Linux Usage:

cp Template.svg TextFile.svg
edit text/*:TextFile.svg

Windows Usage:

copy Template.svg TextFile.svg (Hypothetical Name)

For either Linux or Windows,
assuming Inkscape is installed and in the PATH:

inkscape -z -e TextFile.svg.png TextFile.svg

OR

inkscape -z -T -l TextFile-G.svg TextFile.svg

-->

<svg height="90" width="200">
  <g>
    <text x="10" y="15" style="fill:black;"
          font-size="12" font-family="Liberation">Several lines:
      <tspan x="10" dy="15">Second line.</tspan>
      <tspan x="10" dy="15">Third line.</tspan>
    </text>
  </g>
</svg>
```


One assumption made in creating this template was that Inkscape is installed in such a way as to recognize the stated font-family. This parameter can simply be omitted, in which case Inkscape would use whatever its default font is. But stating such information provides consistent, predictable results. In contrast, I needed to set the font-size. Inkscape could default to an unexpected font-size, which in turn would lead to garbled output in the resulting PNG file. And the default font-size Inkscape uses appears to be the one last set when the GUI was used.
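For readers who would rather stamp out such files programmatically than copy a template by hand, here is a minimal sketch that mirrors the template above. The geometry, font name, and font-size are the same assumptions the template makes; the `xmlns` attribute is my addition, since some renderers insist on it:

```python
# Sketch: generate an SVG text-image file, mirroring the template above.
# Font, sizes, and positions are the same assumptions as the template;
# the xmlns attribute is added here because some renderers require it.
LINE_HEIGHT = 15

def text_to_svg(lines, width=200, height=90, font="Liberation", size=12):
    """Return a standalone SVG document displaying the given lines."""
    first, rest = lines[0], lines[1:]
    spans = "".join(
        f'<tspan x="10" dy="{LINE_HEIGHT}">{t}</tspan>' for t in rest
    )
    return (
        '<?xml version="1.0" encoding="UTF-8" standalone="no"?>'
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'height="{height}" width="{width}">'
        f'<g><text x="10" y="{LINE_HEIGHT}" style="fill:black;" '
        f'font-size="{size}" font-family="{font}">{first}{spans}'
        '</text></g></svg>'
    )

doc = text_to_svg(["Several lines:", "Second line.", "Third line."])
```

The resulting string can be written to a ‘.svg’ file and handed to Inkscape exactly as described in the template’s comments.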

(Edit 03/15/2018 :

By now, this template only serves as a working basis for a shell-script I have written, which allows me to create such text-images with a single command. I have posted the script to my blog. But, if readers are nevertheless interested in understanding the workings of SVG files, I’m leaving my existing ruminations as written below… )

## Why some people might still want to put Polarizers on their Cameras

One concept which exists in digital photography is that we can remove any need for special filters, just by using software to modify or rearrange the colors within a photo or video we have shot. One problem with this claim is the fact that software can only change the contents of an image based on information already stored in its pixels. Hence, the color-vectors of resulting pixels need to be derived from those of captured pixels.

Thus, if we have taken a photo of a gray, hazy day-scene, and if we wanted the sky to look more blue, and features in the scene to look more yellow, then we could be creative in the coding of our software, so that it performs a per-channel gamma-correction, raising the blue channel to an exponent greater than one, while raising the red and green channels to an exponent less than one. We might find that regions within the image which were already more blue will seem blue more strongly, while regions which were not will end up looking more yellow, as if sunlit.
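A minimal sketch of this per-channel gamma idea follows, for one pixel whose channels are floats between 0.0 and 1.0. The exponents are illustrative guesses of mine, not values from any real plug-in:

```python
# Sketch of the per-channel gamma-correction described above: the blue
# channel is raised to an exponent above one, red and green to one
# below. Exponents here are illustrative guesses. Channels are floats
# in the range 0.0 .. 1.0.

def haze_tint(pixel, blue_exp=1.4, warm_exp=0.7):
    """Apply per-channel gamma to one (r, g, b) pixel."""
    r, g, b = pixel
    return (r ** warm_exp, g ** warm_exp, b ** blue_exp)

# A washed-out mid-gray pixel shifts toward yellow, since red and
# green rise while blue falls:
tinted = haze_tint((0.5, 0.5, 0.5))
```

Note that because the channels lie between 0.0 and 1.0, an exponent greater than one actually reduces a channel’s value, so a strongly blue region mainly stays blue relative to its surroundings, while grayish regions drift toward yellow.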

(I suppose that while we are at it, we would also want to normalize each color-vector first, and store its original luminance in a separate register, so that our effect only influences coloration in ways not dependent on luminance, and so that the original luminance can be restored to the pixel afterward.

At that stage of the game, a linear correction could also be computed, with the intent that purely gray pixels should remain gray. )
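One way to realize the luminance-preserving idea from the parenthetical above is to record the pixel’s luminance, apply the per-channel gamma, then rescale the result so the luminance is unchanged. The Rec. 709 luminance weights below are my assumption, not something from the original plug-in idea:

```python
# Sketch of the luminance-preserving variant: store the pixel's
# luminance, apply the per-channel gamma tint, then rescale so that
# luminance is restored. Rec. 709 weights are my own assumption.

def luma(p):
    """Approximate luminance of an (r, g, b) pixel (Rec. 709 weights)."""
    r, g, b = p
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def tint_keep_luma(p, blue_exp=1.4, warm_exp=0.7):
    """Tint a pixel toward blue/yellow without changing its luminance."""
    y0 = luma(p)
    r, g, b = p
    tinted = (r ** warm_exp, g ** warm_exp, b ** blue_exp)
    y1 = luma(tinted)
    if y1 == 0.0:
        return tinted
    k = y0 / y1
    # Rescale, clamping so no channel exceeds 1.0:
    return tuple(min(1.0, c * k) for c in tinted)
```

As long as no channel clips at 1.0, the rescaled pixel’s luminance equals the original’s exactly, so only the coloration changes.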

(Edit 02/24/2018 :

Actually, such an effect plug-in might just as easily keep the other channels (Red and Green, in this case) as they are. )

The problem remains that the entire image could have its colors washed out, so that the sky looks gray, and the subject does as well. Our software would then have nothing on which to base its differentiation.

But light that occurs naturally in scenes tends to be polarized. Light that came from the sky will have an angle of plane-polarization to it, while light which has been scattered by the scene will have more-randomized polarization. Hence, if we have a DSLR camera, we can mount polarization filters which tend to absorb blue light more if it is polarized along one plane, while absorbing yellow light more when it is polarized at right angles to the same plane.

The idea is that the filter could be mounted on our camera-lens in whatever position gives the sky a blue appearance, and we can hope that the entire landscape-photo also looks as if sunlit.

(Edit 02/24/2018 :

After actually giving it some thought, I’d suggest that light which comes from the sky is horizontally polarized, and that the use of this filter will make both the sky and horizontally-facing bodies of water look more blue – which both would, on a sunny day. In comparison, the rest of the scene would end up looking ‘more yellow’, suggesting a sunlit appearance. )

Then, the actual pixels of the camera will have captured information in a way influenced by polarization, which they would normally not do – any more than Human Eyes would normally do so.

(Updated 02/23/2018 : )