An example of something which isn’t AI.

One of the behaviours that has been trending for several years now is to take an arbitrary piece of information and simply call it AI.

This should not be done.

As an example of what I mean, I can give the following image:


This is an image which has been palettized. That means that the colours of its individual pixels have been reduced to ‘some palette of 256 colours’. What any conventional software – including ‘GIMP’ – allows me to do next is to assign a new colour palette to this image, so that all these colours get remapped, as follows:


What I could do – although I won’t – is claim somehow that this has ‘artistic merit’. But what I cannot legitimately do is claim that the second image is ‘an example of AI’.

(Updated 7/06/2021, 16h30… )

(As of 6/30/2021, 15h50: )

Now, I suppose that a question which the reader could ask, which would be closer to legitimate, would be: ‘By what means can an image, perhaps represented by triples of (R,G,B) values – also called a TrueColor image – be simplified into a representation in which each pixel has exactly 1 value out of 256, such that the colours in this palette represent the original image optimally?’ Hence, the question could next be: ‘Why was the first image not an example of AI?’

And the answer I’d give is that, in theory, it would be possible to devise machine learning methodologies to do what is already accessible through conventional methodologies. But why do so, if the results obtained through conventional methodologies are already as close to optimal as possible?


(Update 7/03/2021, 1h15: )

The standard method used to palettize the images is the Median Cut Algorithm.


(Update 7/03/2021, 9h50: )

Based on what I have read in the past few days, the following two source files are the notional C++ which I’d say best describes how a system of (R,G,B,A) colours gets translated into a palettized format in real applications today. This is only approximate, but writing and testing this code for syntax satisfied my own curiosity…


(Update 7/04/2021, 19h55: )

I have just made some revisions to the header file and source file linked to above, some of which were minor, but some of which were critical. The critical changes fixed issues which, at run-time, would have caused infinite recursion…

I am finally satisfied with the code.


(Update 7/05/2021, 0h25: )

I have just added a little exercise to the source code, which populates a bogus 1280×720 pixel image with 24 colour-values, and then palettizes the resulting set. What I found was that the earlier versions of my code contained a grave error. To create STL sets whose keys can be an arbitrary datum, the programmer must define a comparison operator for that datum that sorts it linearly.

If the datum consists of 6 values, then this operator must compare each of them, in case the previous comparisons revealed equality. Failure to do this causes odd behaviour – in my case, one set would not hold elements that have the same colour but differ in screen position. Those were simply ignored as duplicates.

In some more-optimized version of the code, this might in fact be useful. But in its current version, this type of set operation just bogs the program down tremendously. Running the current version of this program causes it to consume about 180 MBytes of RAM, and the test completes after about 18 seconds.


I think that, for some future purpose, it might actually be better to generate the palette in a way that only keeps track of distinct colours – which was, in fact, how my original code was malfunctioning – but which then assigns the original image’s pixels to one of the palette colours according to closeness.


(Update 7/05/2021, 8h20: )

The current version of my algorithm now greatly reduces the total number of pixel-values in each set, to values differentiated only according to (R), (G) and (B). It ‘works’ in that, when indexing a 1280×720 image, it ‘only’ takes up 13MB of RAM and requires ~2 seconds to complete everything, from creating a bogus image to indexing it.

Most of the RAM which my algorithm allocates stems from the fact that a TrueColor image is really being stored, at a pixel-depth of 5×16 bits. Yes, I’m storing an Alpha channel, but not organizing the palette according to it. I’m also storing, per pixel, what the maximum channel-values will be, just in case they are greater than 255.


(Update 7/06/2021, 16h30: )

While working on improving my little code exercise, I discovered quite by accident that there was an instruction which the previous version of the code was executing 1280×720×24 times, when in fact that instruction only needed to be executed 1280×720 times. In other words, I could not quite figure out why my code was determining the supposedly optimal palette so quickly, even from a 1280×720 pixel image, yet taking so long afterwards to index the same image, given a palette.

Now that I have corrected this mistake, which was in no way required by the nature of the exercise, my whole test program finishes in 1.4 seconds, no longer in 2.



