Performed another experiment with Genetic Algorithms.

When working with Genetic Algorithms, I know of essentially two types. In the first, a mutation algorithm generates a unit which is itself an algorithm. A Windows-based program that worked in this way, a long time ago, was named ‘Discipulus’.

The other type of Genetic Algorithm, which can also be described as an ‘Evolutionary Programming’ example, is one which generates a unit that is really just an arbitrary array of data, for which an externally-defined program must determine a fitness level, so that the mutation algorithm can try to find units that achieve the greatest possible fitness. An example of such a system, which is still maintained today, is named ‘µGP3’.
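A minimal sketch of this second type is easy to give. The code below assumes nothing about µGP3’s actual interfaces: the evolved unit is just an array of numbers, the fitness function is supplied externally, and the toy fitness target (a sum close to 10.0) is purely an invention for illustration.

```python
import random

random.seed(1)  # for a reproducible run

# Hypothetical externally-defined fitness function: the GA treats the unit
# as an opaque data array and only ever sees the score returned here.
def fitness(unit):
    # Toy example: reward units whose values sum close to 10.0.
    return -abs(sum(unit) - 10.0)

def mutate(unit, sigma=0.2):
    # Perturb one randomly chosen element of the data array.
    child = list(unit)
    i = random.randrange(len(child))
    child[i] += random.gauss(0.0, sigma)
    return child

def evolve(length=8, pop_size=30, generations=200):
    # Start from random data arrays, then repeatedly keep the fitter half
    # and refill the population with mutated copies of the survivors.
    population = [[random.uniform(0.0, 2.0) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

The point of the design is that `fitness()` could equally well be a simulation of a hypothetical machine; the mutation loop never needs to understand what the data array means.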

I would guess that something like ‘µGP3’ is more useful in Engineering, where a deterministic approach can be taken to determine how well a hypothetical machine would work after being tweaked by the evolved data-set, according to a set of rules which is not known to have an inverse.

‘Discipulus’ might be of greater use in AI, where the Genetic Algorithm is assumed to take a range of input parameters and is required either to take an action based on those, or to arrive at an interpretation of them. For that purpose the AI is trained using numerous examples of input-value sets, where a most-accurate result is known for each (training) set of simultaneous input values. In the case of ‘Discipulus’, there exist two types of training exercise: Approximation and Classification. A real-world example where such a form would be useful is in the computerized recognition of faces, or of shapes from other sorts of images.

Actually, I think the way facial recognition works in practice today is that a 2D Fourier Transform is computed over a rectangle, the dimensions of which in pixels have been tweaked, in such a way that the conformity of that Fourier Transform to known Fourier Transforms pretty well guarantees that a given face will be recognized.
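Whether or not that is exactly how deployed systems work, the comparison itself is easy to sketch. In the toy illustration below, the patches and the ‘conformity’ measure (normalized correlation of magnitude spectra) are my own assumptions, not any production algorithm:

```python
import numpy as np

np.random.seed(0)

# Hypothetical grayscale rectangles: a 'known' face patch, the same patch
# with slight noise added, and an unrelated patch.
known_patch = np.random.rand(32, 32)
candidate = known_patch + 0.05 * np.random.rand(32, 32)
unrelated = np.random.rand(32, 32)

def spectrum(patch):
    # Magnitude of the 2D Fourier Transform of the mean-subtracted patch;
    # discarding phase makes small shifts inside the rectangle matter less.
    return np.abs(np.fft.fft2(patch - patch.mean()))

def conformity(a, b):
    # Normalized correlation between two magnitude spectra, in [0, 1].
    sa, sb = spectrum(a).ravel(), spectrum(b).ravel()
    return float(sa @ sb / (np.linalg.norm(sa) * np.linalg.norm(sb)))
```

With these definitions, the slightly-perturbed candidate conforms much more closely to the known patch’s transform than the unrelated patch does.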

But other examples may exist in which the relationship between input variables and output values is of an initially unknown nature. Then, even if we might not want to embed an actual GA into our AI, the use of GAs may provide some insight into how input values are in fact related to output values, through human interpretation of the GAs which result.

Recently, I’ve been doing experiments with ‘Discipulus’, in which I gave it a set of 24 input values each time, associated in some way with an output value; 100 times the output value was (0), and 100 times the output value was (1), hence, a classification exercise. But my examples were also very particular, in that the input values had a pseudo-random nature to them. This can make it ‘harder’ for a GA to evolve that can interpret the input values correctly, especially since ‘Discipulus’ will only some of the time mutate the GA by adding instructions which actually read a new input value. This results in GAs which only utilize maybe 1/4 of the input values – thus typically ~6 – and, because the one output value is supposed to be based on all 24 of them, all pseudo-random in nature, some rate of misclassification is unavoidable.

‘Discipulus’ actually performs better when fewer input values are offered, and can in that case generate highly accurate GAs.

In this latest example, the 24 input values arose because a series of 24 pseudo-random numbers was passed through either a low-pass filter or a high-pass filter, thus treating them as though they were consecutive signal values. The GA was only expected to determine whether a low-pass or a high-pass filter had been used, and again, only a small subset of input variables ended up being utilized by the winning GA. The URL at which the reader can find the exercise’s text files is here:
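For reference, a data-set of this shape is easy to generate. The sketch below uses two assumed two-tap filters – a moving average for low-pass and a first difference for high-pass; the actual filters used in the exercise may have differed:

```python
import random

random.seed(3)  # for a reproducible data-set

def make_example(low_pass):
    # 25 consecutive pseudo-random 'signal' values yield 24 filtered outputs.
    raw = [random.uniform(-1.0, 1.0) for _ in range(25)]
    if low_pass:
        # Two-tap low-pass filter (moving average).
        values = [(raw[i] + raw[i + 1]) / 2.0 for i in range(24)]
        label = 0
    else:
        # Two-tap high-pass filter (first difference).
        values = [(raw[i + 1] - raw[i]) / 2.0 for i in range(24)]
        label = 1
    return values, label

# 100 examples of each class, as in the exercise described above.
dataset = ([make_example(low_pass=True) for _ in range(100)] +
           [make_example(low_pass=False) for _ in range(100)])
```

The classes really are separable: adjacent samples of the low-pass output are positively correlated, while adjacent samples of the high-pass output are negatively correlated, so in principle a GA reading even a few consecutive inputs has something to work with.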

The best Validation Accuracy was 80%, and the GA which achieved that used only a disappointingly small subset of the input variables.



