I’ve finally figured out why I was having so many problems with complex numbers in a C++ program.

In this earlier posting, I had written that many things could go wrong with the way C++ templates define complex numbers. Well, after studying the programming exercise which that posting was based on, I think I’ve finally found out what the single problem in fact was.

In my programming exercise, I had defined the data-type ‘complex<double_t>’. This in itself caused a lot of problems, without it being obvious to me that it was the culprit. The way the C++ templates define complex numbers, they will often derive the base-type of one complex number from the base-type of another, preexisting one, just by transferring a template parameter. However, there are two situations where the templates defined in the headers can run into trouble with this:

  1. They can try to convert a ‘real number’ into a complex number, in a situation where the base-type of the derived complex number was never declared, or
  2. They can try to mix mathematical operations between complex numbers and ‘real numbers’, in such a way that the type of the real number must match the base-type of the complex number exactly.

Specifically, in situation (1) above, the templates will try the specializations for ‘float’, ‘double’, and ‘long double’, as educated guesses for what type of complex number is required. And the problem may well be that only these 3 template-specializations are attempted, not ‘double_t’.

And in situation (2) above, I was not taking into consideration that my code was often providing literal numbers that were of type ‘double’ by nature, not of type ‘double_t’. This created a mismatch.
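
As a minimal sketch of that mismatch (assuming GCC and the standard ‘<complex>’ header; the variable names are my own), the commented-out line below is the one that fails to compile on platforms where ‘double_t’ is actually ‘long double’, and the cast on the last assignment shows one way to remove the ambiguity:

#include <complex>
#include <cmath>   // defines std::double_t

int main()
{
    std::complex<std::double_t> z(1.0, 2.0);

    // If double_t happens to be long double on this platform, the next line
    // fails to compile: the operator* template deduces its type parameter as
    // long double from 'z', but as double from the literal 1.5, and the two
    // deductions conflict.
    // std::complex<std::double_t> w = z * 1.5;

    // Casting the literal to the base-type of the complex number removes the
    // mismatch, whatever double_t turns out to be:
    std::complex<std::double_t> w = z * static_cast<std::double_t>(1.5);

    return (w.real() > 0.0) ? 0 : 1;
}
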

In any case, now that I realize what my mistake was, and now that I’ve removed all the ‘mixed computations between complex and real numbers’, resulting in code that no longer generates errors, I am more confident than I was only days ago in the C++ template-definitions for complex numbers.


(Update 9/17/2019, 10h50 : )

An added observation would be that, when my code tried to find the absolute of the real component of a complex number, that real component was also a ‘double_t’ (deterministically). But the absolute function that’s predefined only recognizes the parameter-types ‘int’, ‘float’, ‘double’, and ‘long double’, in certain versions of the GCC compiler, which can again result in the call matching the wrong version of the absolute function, rather than a working one.
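
As a hedged illustration of that pitfall: since ‘double_t’ can only ever be an alias for ‘double’ or ‘long double’, ‘std::fabs()’ from ‘<cmath>’ will always match one of its floating-point overloads, whereas the C-style ‘abs()’ from ‘<cstdlib>’ takes an ‘int’. The function name below is mine, just for illustration:

#include <complex>
#include <cmath>

std::double_t abs_of_real_part(const std::complex<std::double_t>& z)
{
    // std::fabs() has overloads for float, double and long double, so it
    // matches whichever of those double_t is an alias for, and never
    // truncates.  The C-style ::abs() from <cstdlib> takes an int, and in
    // some setups a floating-point argument can silently be converted to it.
    return std::fabs(z.real());
}
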

Dirk

 

What can go wrong, when implementing complex numbers in C++ (Possible Solution).

One of the ideas that exists in computer programming, and in object-oriented languages such as C++, is that a header file can define a ‘complex’ data-type which has a non-complex base-type, such that the Mathematical definition of Complex Numbers is observed, which defines them as:

( a + b i )

Where (a) and (b) are of the base-type, which in pure Math is the set of Real Numbers. According to object-oriented programming, a mere header file can then overload the standard math operations for these complex objects, relying on the required math operations already being defined for the base-type. And the complex object can be defined as a template class, to make that as easy as possible.
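
To make that idea concrete, here is a minimal sketch of such a template class, written by me for illustration rather than taken from any actual library header; it only shows how one operation (multiplication) is built entirely out of operations that the base-type T already supports:

// (a + bi)(c + di) = (ac - bd) + (ad + bc)i, using only T's own '*', '-', '+'.
template <typename T>
class my_complex {
public:
    my_complex(T re = T(), T im = T()) : re_(re), im_(im) {}

    T real() const { return re_; }
    T imag() const { return im_; }

private:
    T re_, im_;
};

template <typename T>
my_complex<T> operator*(const my_complex<T>& x, const my_complex<T>& y)
{
    return my_complex<T>(x.real() * y.real() - x.imag() * y.imag(),
                         x.real() * y.imag() + x.imag() * y.real());
}
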

Well, I have already run into a programming exercise where I discovered that the header files that ship with Debian / Stretch (which was finally based on GCC v6.3.0) botched the job. The way in which a bug can begin is that, according to what I just wrote, (a) and (b) could be of the type ‘integer’, just because all the required math operations can be defined to exist entirely for integers, including the ‘integer square root’, which returns an integer even when its parameter is not a perfect square.

This type of complex object makes no sense according to real math, but does according to the compiler.

One of the things which can go wrong with this is that, when creating a special ‘absolute function’, only a complex object could be specified as the possible parameter-type. But complex objects can have a set of ‘type-conversion constructors’, which accept first an integer, then a single-precision, and then a double-precision floating-point number. Depending on which of these types the parameter can match, such a constructor converts that single parameter into a temporary complex object, which has the parameter as its real component and zero as its imaginary component, so that the absolute-function-call can then be computed on the resulting complex object.

When the compiler resorts to “Standard Conversions” (see the first article linked to above), it is willing to perform conversions between built-in types as well as programmer-defined conversions.

If somebody did choose this inefficient way of implementing the absolute function of complex objects, in a way that also computes the absolute of ‘real numbers’, then one trap to avoid would be to define only a type-conversion constructor that can initialize the complex object from an integer, and never one that accepts a double-precision floating-point number. A floating-point argument would then first be converted to an integer, whose absolute would be computed, resulting in a non-negative integer.
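
A short sketch of that trap, using a hypothetical class of my own (‘naive_complex’) rather than any real library code:

#include <cmath>
#include <iostream>

struct naive_complex {
    naive_complex(int re) : re_(re), im_(0) {}   // the only conversion offered
    int re_, im_;
};

// Accepts only the complex object; a 'real number' must be converted first.
double absolute(const naive_complex& z)
{
    return std::sqrt(double(z.re_) * z.re_ + double(z.im_) * z.im_);
}

int main()
{
    // -3.9 is first converted to the int -3, then wrapped as (-3 + 0i),
    // so this prints 3 rather than 3.9.
    std::cout << absolute(-3.9) << '\n';
}
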

This is obviously totally counter to what a programmer would plausibly want his code to do, but one of the first facts taught in Programming Courses is that compilers will choose non-obvious, incorrect ways to behave, if the code gives them an opportunity to do so.

If the programmer wants to do this deliberately, the conversion to ‘integer’ truncates toward zero, which amounts to ‘the floor function (of the initial floating-point number)’ for non-negative values.
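
A small demonstration of that distinction, since C++’s conversion from floating-point to integer only coincides with ‘std::floor()’ when the value is non-negative:

#include <cmath>
#include <iostream>

int main()
{
    std::cout << int(3.9)  << ' ' << std::floor(3.9)  << '\n';  // prints: 3 3
    std::cout << int(-3.9) << ' ' << std::floor(-3.9) << '\n';  // prints: -3 -4
}
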

Yet, this type of error seems less likely in the implementation of square roots of complex numbers, that rely on square roots of real numbers, etc.

The correct thing to do is to declare a template function which accepts the data-type of the parameter as its template variable. And then the programmer would need to write a series of template specializations, in which this template variable matches certain data-types. Only, in the case of the ‘absolute function’ under Debian / Stretch, the implementers seem to have overlooked a template specialization to compute the absolute of a double-precision floating-point number.

However, actually solving the problem may often not be so easy, because the template-variable could indicate a complex object, which is itself of a template class, with a template variable of its own (that mentioned base-type).
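
As a hedged sketch of that structure, again my own illustration rather than the library’s actual code: a primary template can carry explicit specializations for the scalar types, while a separate overload handles complex objects of any base-type.

#include <cmath>
#include <complex>

// Primary template, deliberately left undefined: using it with a type that
// has no specialization or overload below results in an error.
template <typename T>
T absolute(T x);

// Explicit specializations for the scalar base-types.
template <> float       absolute(float x)       { return std::fabs(x); }
template <> double      absolute(double x)      { return std::fabs(x); }
template <> long double absolute(long double x) { return std::fabs(x); }

// The parameter can also be a complex object, which is itself a template
// class with a template variable of its own (the base-type T); one overload
// covers every base-type.
template <typename T>
T absolute(const std::complex<T>& z)
{
    return std::sqrt(z.real() * z.real() + z.imag() * z.imag());
}
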

One fact to note about all this is that there is not one set of headers. There are many versions of the headers, each of which ships with a different compiler version. Further, not all people use the GNU compilers; some people use Microsoft’s Visual Studio, for example… I just happened to base much of my coding on GCC v6.3.0.

An additional fact to observe is that the headers to be ‘#include’d are written ‘<complex>’, not ‘<complex.h>’. What the missing ‘.h’ means is that these are C++ standard-library headers rather than C headers; their actual text is buried in the compiler’s own include directories, and implementations may additionally precompile them, all of which makes verification difficult. GCC is currently at v9.2, but I was building my projects against the C++2014 standard (the ‘-std=c++14’ command-line option), which was available to me.

Additionally, when programmers go to Web-sites like this one, the information contained is merely meant as a quick guide on how to program using these types of tools, and not as an exact match of the code that actually makes up my headers.

One way in which I can tell that that code is not literally correct is the fact that no version information was provided on the Web-site. Another is the fact that, while the site uses data-types such as “double” and “float”, when programmers compile compilers, they additionally tend to use data-types like ‘double_t’, which refers to the exact register-size on some FPUs, and may actually be 80-bit. Further, the types ‘int32’ and ‘int64’ would be less ambiguous at the binary level than the declarations ‘int’ or ‘long int’ would be, if there was ever any explicit support for signed integers… Hence, if my code got ‘complex<double_t>’ to work, but that type was never specified on the site, then the site can just as easily have overlooked the type ‘int64’.

According to what I have read, C and C++ compilers are intentionally vague about the difference between ‘double’ and ‘long double’, only guaranteeing that ‘long double’ will give at least as much precision as ‘double’. But, if the contents of an 80-bit (floating-point) register are stored to a 64-bit RAM location, then some least-significant bits of the significand are discarded, and the exponent is re-encoded with a different bias. In order to implement that, the compiler both uses and offers a type that refers to the exact register-contents, which may be 80 bits, or may just be 64 bits, on a 64-bit CPU…
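
One way to see what ‘double_t’ actually is on a given machine is to look at ‘FLT_EVAL_METHOD’ from ‘<cfloat>’, which ‘<cmath>’ uses to define ‘double_t’ (0 means it is plain ‘double’; 2, as on x87-style FPUs, means it is ‘long double’). The following few lines simply print the relevant sizes:

#include <cfloat>
#include <cmath>
#include <iostream>

int main()
{
    std::cout << "FLT_EVAL_METHOD     = " << FLT_EVAL_METHOD       << '\n';
    std::cout << "sizeof(double)      = " << sizeof(double)        << '\n';
    std::cout << "sizeof(double_t)    = " << sizeof(std::double_t) << '\n';
    std::cout << "sizeof(long double) = " << sizeof(long double)   << '\n';
}
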

(Updated 9/17/2019, 12h10 … )
