What can go wrong, when implementing complex numbers in C++ (Possible Solution).

One of the ideas that exists in computer programming, especially with object-oriented languages such as C++, is that a header file can define a ‘complex’ data-type, which has a non-complex base-type, such that the Mathematical definition of Complex Numbers is observed, which defines them as:

( a + b i )

Where (a) and (b) are of the base-type, which in pure Math is the set of Real Numbers. According to object-oriented programming, a mere header file can then overload the standard math operations for these complex objects, based on those operations already being defined for the base-type. And the complex object can be defined as a template class, to make that as easy as possible.
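
As a quick illustration – a minimal sketch, using a hypothetical ‘Complex’ template that is my own, and not the actual code from any header – the ‘+’ operator for the complex type can be defined entirely in terms of ‘+’ on the base-type:

// Minimal sketch, assuming a hypothetical 'Complex' template;
// the actual <complex> header is more elaborate.

#include <cstdio>

template <class T>
struct Complex {
	T re, im;
};

// (a + bi) + (c + di) = (a + c) + (b + d)i ,
// defined entirely in terms of '+' on the base-type.
template <class T>
Complex<T> operator+ (const Complex<T> &a, const Complex<T> &b) {
	return Complex<T> { a.re + b.re, a.im + b.im };
}

int main () {
	Complex<double> z = Complex<double> {1.0, 2.0} + Complex<double> {3.0, 4.0};
	printf("( %f + %f i )\n", z.re, z.im);
	return 0;
}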

Well, I have already run into a programming exercise, where I discovered that the header files that ship with Debian / Stretch (which was finally based on GCC v6.3.0) botched the job. One way in which a bug can begin is that, according to what I just wrote, (a) and (b) could be of the type ‘integer’, just because all the required math operations can be defined entirely for integers, including the ‘integer square root’, which returns an integer even when its parameter is not a perfect square.

This type of complex object makes no sense according to real math, but does according to the compiler.

One of the things which can go wrong with this is that, when creating a special ‘absolute function’, only a complex object might be specified as the possible parameter-type. But complex objects can have a set of ‘type-conversion constructors’, which accept first an integer, then a single-precision, and then a double-precision floating-point number. Depending on which type the single parameter matches, such a constructor converts it into a temporary complex object, which has this parameter as its real component, and zero as its imaginary component, so that the absolute-function-call can then be computed on the resulting complex object.
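
To illustrate – again as a hypothetical sketch, not the actual header code – the following compiles even though no overload of the absolute function accepts a plain ‘double’; the type-conversion constructor quietly builds the temporary complex object:

// Hypothetical sketch of the mechanism described above.

#include <cstdio>
#include <cmath>

template <class T>
struct Complex {
	T re, im;
	// Type-conversion constructor: one real parameter becomes
	// the real component; the imaginary component is zero.
	Complex(T r = T(), T i = T()) : re(r), im(i) {}
};

// An 'absolute function', accepting only the complex type.
template <class T>
T c_abs (const Complex<T> &z) {
	return std::sqrt(z.re * z.re + z.im * z.im);
}

int main () {
	// The double (-3.0) is implicitly converted into the
	// temporary Complex<double>(-3.0, 0.0) first.
	printf("%f\n", c_abs<double>(-3.0));
	return 0;
}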

When the compiler resorts to “Standard Conversions” (see the first article linked to above), it is willing to perform conversions between built-in types, in addition to programmer-defined conversions.

If somebody did choose this inefficient way of implementing the absolute function of complex objects, in a way that also computes the absolute of ‘real numbers’, then one trap to avoid would be, to define only a type-conversion constructor that can initialize the complex object from an integer, and never from a double-precision floating-point number. In that case, a double-precision parameter would first undergo a Standard Conversion to an integer, which would succeed, and its absolute would be computed, resulting in a non-negative integer.
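
A sketch of that trap – hypothetical code again: because the only conversion constructor accepts an ‘int’, the compiler first truncates the double-precision parameter, and the fractional part is silently lost:

// Hypothetical sketch: the only type-conversion constructor
// accepts an 'int', never a 'double'.

#include <cstdio>
#include <cmath>

struct ComplexI {
	int re, im;
	ComplexI(int r = 0, int i = 0) : re(r), im(i) {}
};

double c_abs (const ComplexI &z) {
	return std::sqrt(double(z.re) * z.re + double(z.im) * z.im);
}

int main () {
	// (-3.7) first undergoes a Standard Conversion to the int (-3),
	// and only then the programmer-defined conversion to ComplexI.
	// This prints 3.000000 , not 3.700000 .
	printf("%f\n", c_abs(-3.7));
	return 0;
}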

This is obviously totally counter to what a programmer would plausibly want his code to do. But one of the first facts taught in Programming Courses is that compilers will choose non-obvious, incorrect ways to behave, if the code gives them an opportunity to do so.

If the programmer wants to do this deliberately, the conversion to ‘integer’ amounts to truncation toward zero – which matches ‘the floor function (of the initial floating-point number)’ for non-negative values.

Yet, this type of error seems less likely in the implementation of square roots of complex numbers, that rely on square roots of real numbers, etc.

The correct thing to do is to declare a template function, which accepts the data-type of the parameter as its template variable. And then the programmer would need to write a series of template specializations, in which this template variable matches certain data-types. Only, in the case of the ‘absolute function’ under Debian / Stretch, the implementers seem to have overlooked a template specialization that computes the absolute of a double-precision floating-point number.
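
What that can look like, as a minimal sketch – my own illustration, not the actual libstdc++ code – including the specialization that must not be overlooked:

// Minimal sketch of a template function with specializations;
// not the actual libstdc++ code.

#include <cstdio>

template <class T>
T my_abs (T x) {
	return (x < T(0)) ? -x : x;
}

// Specializations for certain base data-types...
template <>
int my_abs<int> (int x) { return (x < 0) ? -x : x; }

template <>
float my_abs<float> (float x) { return (x < 0.0f) ? -x : x; }

// ...Including the one that was seemingly overlooked:
template <>
double my_abs<double> (double x) { return (x < 0.0) ? -x : x; }

int main () {
	// Selects the double-precision specialization; no truncation.
	printf("%f\n", my_abs(-2.5));
	return 0;
}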

However, actually solving the problem may often not be so easy, because the template variable could indicate a complex object, which is itself of a template class, with a template variable of its own (the base-type mentioned above).
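
Because function templates cannot be partially specialized, the usual workaround is an overload whose parameter is itself the template class, carrying its own template variable. A hypothetical sketch:

// Hypothetical sketch: an overload for a complex object,
// which is itself of a template class.

#include <cstdio>
#include <cmath>

template <class T>
struct Complex {
	T re, im;
};

// The template variable of my_abs() is here the complex type's
// own template variable - i.e., the base-type.
template <class T>
T my_abs (const Complex<T> &z) {
	return std::sqrt(z.re * z.re + z.im * z.im);
}

int main () {
	printf("%f\n", my_abs(Complex<double> {3.0, 4.0}));  // 5.000000
	return 0;
}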

One fact to note about all this is, that there is not one set of headers. There are many versions of headers, each of which ship with a different compiler version. Further, not all people use the GNU compilers; some people use Microsoft’s Visual Studio for example… I just happened to base much of my coding on GCC v6.3.0.

An additional fact to observe is that the headers to be ‘#include’d are written ‘<complex>’, not ‘<complex.h>’. The missing ‘.h’ does not mean that they lack text; it simply marks them as C++ Standard Library headers, which each compiler version installs in its own, version-specific directories. All this still makes verification difficult. GCC is currently at v9.2, but I was building my projects using ‘-std=c++14’, which was an available command-line option.

Additionally, when programmers go to Web-sites like this one, the information contained is merely meant as a quick guide on how to program using these types of tools, and not an exact match of any code that was ever used to compile my headers.

One way in which I can tell that that code is not literally correct, is the fact that no version information was provided on the Web-site. Another is the fact that, while the site uses data-types such as “double” and “float”, when programmers compile compilers, they additionally tend to use data-types like ‘double_t’, which refers to the exact register-size on some FPUs, and may actually be 80-bit. Further, the types ‘int32_t’ and ‘int64_t’ would be less ambiguous at the binary level than the declarations ‘int’ or ‘long int’ would be, if there was ever any explicit support for signed integers… Hence, if my code got ‘complex<double_t>’ to work, but that type was never specified on the site, then the site can just as easily have overlooked the type ‘int64_t’.

According to what I read, C and C++ compilers are intentionally vague about what the difference between ‘double’ and ‘long double’ is, only guaranteeing that ‘long double’ will give at least as much precision as ‘double’. But, if the contents of an 80-bit (floating-point) register are stored in a 64-bit RAM location, then some least-significant bits of the significand are discarded, in addition to the power of two being given a new offset. In order to implement that, the compiler both uses and offers the type that refers to the exact register-contents, which may be 80 bits, or may be 64 bits, for a 64-bit CPU…
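
One can actually inspect this on a given platform. A small sketch follows; the printed sizes are platform-dependent, and on x86-64 Linux, ‘std::double_t’ is typically just ‘double’, while 32-bit x87 builds may report the 80-bit type instead:

// Prints the storage sizes of the floating-point types discussed.
// Results depend on the platform and on FLT_EVAL_METHOD.

#include <cstdio>
#include <cmath>
#include <cfloat>

int main () {
	printf("FLT_EVAL_METHOD       : %d\n", (int) FLT_EVAL_METHOD);
	printf("sizeof(double)        : %zu\n", sizeof(double));
	printf("sizeof(std::double_t) : %zu\n", sizeof(std::double_t));
	printf("sizeof(long double)   : %zu\n", sizeof(long double));
	return 0;
}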

(Updated 9/17/2019, 18h00 … )


I’ve just custom-compiled OpenCV.

One trend in computing is AI, and a subject related to AI is Computer Image Recognition, which could also be called ‘Computer Vision’. And there exists an open-source library for that, called ‘OpenCV’. While I tend to think of it mainly as a Linux thing, it’s also possible to download and install OpenCV on Windows.

The version of OpenCV which Linux users obtain from the package manager tends to be an outdated version, which under Debian / Stretch is version 2.4.9. I have a Debian / Stretch (Debian 9) computer I name ‘Plato’, and its hardware is decently strong. But one thing I just wanted to do was to install a more up-to-date version of OpenCV on it, that being version 3.4.2. The way to do this under Linux is to custom-compile, and doing so was an overnight project, finishing this morning.

One drawback to using OpenCV remains, that it does not seem to have many working applications out-of-the-box. It offers an API, and one needs to be a very good C++ programmer to make any use of that API. But interestingly enough, there is a demonstration application these days, called “OpenCV Demonstrator”. If one has the appropriate version of OpenCV installed, one can also custom-compile the Demonstrator.

(Screenshot: screenshot_20180826_055750)

I made some observations about these two projects the hard way this morning.


Three types of Constants that exist in C++

In a previous posting, I explained that I am able to write C++ programs that test some aspect of how the CPU performs simplistic computations, even though C++ is a high-level language, whose compiler tries to optimize the code before generating a machine-language program, the latter of which is called the run-time.

As I pointed out in that posting, there exists a danger that the computation may not take place the way in which the code reads to plain inspection – more specifically, that certain computations will be performed at compile-time, instead of at run-time. In reality, both the compile-time and the run-time computations still take place on a CPU. But in addition, a compiler can parse and then examine larger pieces of code, while a CPU needs to produce output usually just based on the contents of a few registers – in this case, based on the contents of up to two input-registers.

I feel that, in contrast with the example which, as I wrote, will work, I should actually provide an example which will not work as expected. The following example is based on the fact that in C++, there exist three basic types of constants:

  1. Literals,
  2. Declared constants,
  3. Constant Expressions.

Constant expressions are any expressions whose parameters are never variables – only operators, functions, or other constants. While they can increase the efficiency with which the program works, the fact about constant expressions which needs to be remembered is that, by definition, the compiler will compute their values at compile-time, so that the run-time can make use of them. The example below is one in which this happens, and it exists as an antithesis of the earlier example:

// This code exemplifies how certain types of
// computations will be performed by the compiler,
// and not by the compiled program at run-time.

// It uses 3 types of constants that exist in C++ .

#include <cstdio>

int main () {
	
	// A variable, initialized with a literal
	double x = 0;
	
	// A declared constant
	const double Pi = 3.141592653589;
	
	// A constant expression, being assigned to the variable
	x = ( Pi / 6 );
	
	printf("This is approximately Pi / 6 : %1.12f\n", x);
	
	return 0;
}

dirk@Plato:~/Programs$ ./simp_const
This is approximately Pi / 6 : 0.523598775598
dirk@Plato:~/Programs$

If this were practical code, there would be no problem with it, purely because it outputs the correct result. But if this example were used to test anything about how the run-time behaves, its innocent Math would suffer from one main problem:

When the computed value is finally assigned to the double-precision floating-point variable ‘x’, the value on the right-hand side of the assignment operator, the ‘=’, is computed by the compiler, before any machine-language program has been generated, because all of its terms are either literals or declared constants.

This program really only tests how well the ‘printf()’ function is able to print a double-precision value, when instructed in exactly what format to do so. That value is contained in the machine-code of the run-time, and not computed by it.

BTW, there exists a type of object in C, called a Compound Literal, which is written as an Initializer List, preceded by a parenthesized data-type, as if to type-cast that initializer list to that data-type. But, because compound literals are distinctly a C99 feature and not standard C++, I’ve never given much attention to them.

(Edit 11/13/2017 : )

In other words, if the code above contained a term of the form

( 1 / ( 1 / Pi ))

then chances are high that, because this also involves a constant expression, the compiler would simplify it to

( Pi )

without giving the programmer any opportunity to test the ability of the CPU to compute any reciprocals.
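
As a minimal sketch of one way around this – my own illustration: declaring the input ‘volatile’ forces the compiler to read it at run-time, so that the divisions must also be performed at run-time, on the CPU:

// A sketch of one way to defeat constant-folding: the 'volatile'
// keyword forces 'Pi' to be read at run-time, so the reciprocals
// below are computed by the run-time, not by the compiler.

#include <cstdio>

int main () {
	volatile double Pi = 3.141592653589;
	double x = 1.0 / (1.0 / Pi);
	printf("This is approximately Pi : %1.12f\n", x);
	return 0;
}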

Dirk


CSEditing

In This Posting, I wrote that I had completed my testing of ‘Crystal Space 2.1’ on the laptop I name ‘Klystron’ – for now, by installing the ‘Blender’ add-on script that would actually allow a user to create content.

But there was yet another facet of this open-source game engine, which I have not yet tested. This is the ‘CSEditing‘ extension. On the surface, this plug-in is supposed to permit in-game editing of content.

Digging a bit deeper reveals a flaw, in what my expectations were.

Crystal Space is not a game, but a set of libraries with an API, which allows its users to create games, but also to create any type of application that will then benefit from complex 3D-graphic output. If such a game or application has in-game editing, it will be because individual users gave their creations this ability. It is not as if any of the CS demos actually show off this ability, and thus, there are few or no 3D applications yet written that use this additional API.

When we compile the libraries that comprise CSEditing, we also compile an executable – a run-time program, which is meant just to prove that the API exists and can be used from an application. This actual run-time is not in itself a comprehensive editor.

In fact, the loading of the shared libraries, which make this feature work, still needs to be taken care of by the user, who wants to use Crystal Space in C++ to create his application. AFAIK, CSEditing does not depend on the Crystal Entity Layer.

The actual CSEditing API has only sparse documentation, even though such documentation would be very valuable in the future, seeing as users will be trying to integrate such an advanced feature into their indie creations. But this also seems to suggest that CSEditing is very much a work in progress.

What the demo application does show, is the stated capability of subdividing the window into panels, each of which either displays a 3D view, or displays certain types of information, and which can be used to select elements of the 3D scene. Based on this last capability, a C++ program should also be able to grab more properties from each of the selected nodes than this scant run-time does in practice.

But then, this run-time only displays certain information about ‘the scene graph’, as it were, without allowing its user to edit anything. It would be up to the user himself, to design a better application that uses this API.

And so, users like me are more likely just to appreciate full-featured applications such as Blender for actual editing, rather than targeting an audience ourselves, with a transferred ability to edit content.
