What can go wrong, when implementing complex numbers in C++ (Possible Solution).

One of the ideas which exists in computer programming, especially with object-oriented languages such as C++, is that a header file can define a ‘complex’ data-type, which has a non-complex base-type, such that the Mathematical definition of Complex Numbers is observed, which defines them as:

( a + b i )

Where (a) and (b) are of the base-type, which in pure Math is the set of Real Numbers. According to object-oriented programming, a mere header file can then overload how the standard math operations are performed on these complex objects, based on a superset of math operations which is already defined for the base-type. And the complex object can be defined as a template class, to make that as easy as possible.
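As a minimal sketch of the idea (the class name ‘my_complex’ is hypothetical, and the real ‘<complex>’ header is far more elaborate), such a template class might look like this:

```cpp
#include <iostream>

template<typename T>
class my_complex {
public:
    my_complex(T re = T(0), T im = T(0)) : re_(re), im_(im) {}

    T real() const { return re_; }
    T imag() const { return im_; }

    // (a + bi) + (c + di) = (a + c) + (b + d)i
    friend my_complex operator+(const my_complex &x, const my_complex &y) {
        return my_complex(x.re_ + y.re_, x.im_ + y.im_);
    }

    // (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    friend my_complex operator*(const my_complex &x, const my_complex &y) {
        return my_complex(x.re_ * y.re_ - x.im_ * y.im_,
                          x.re_ * y.im_ + x.im_ * y.re_);
    }

private:
    T re_, im_;
};

int main() {
    my_complex<double> a(1.0, 2.0), b(3.0, -1.0);
    my_complex<double> c = a * b;
    std::cout << c.real() << " + " << c.imag() << "i\n";   // prints 5 + 5i
    return 0;
}
```

Notice that the class itself only assumes that the base-type T supports +, - and *.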

Well, I have already run into a programming exercise, where I discovered that the header files which ship with Debian / Stretch (which was finally based on GCC v6.3.0) botched the job. The way in which a bug can begin is that, according to what I just wrote, (a) and (b) could be of the type ‘integer’, just because all the required math operations can be defined to exist entirely for integers, including the ‘integer square root’, which returns an integer even when its parameter is not a perfect square.

This type of complex object makes no sense according to real math, but does according to the compiler.
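For example, an ‘integer square root’ can be written using nothing but integer arithmetic, so that, as far as the compiler is concerned, the base-type ‘integer’ is not missing any required operation. A sketch (the function name ‘isqrt’ is my own):

```cpp
#include <cstdint>
#include <iostream>

// Integer square root: the largest n such that n * n <= x,
// computed entirely with integer arithmetic (binary search).
std::int64_t isqrt(std::int64_t x) {
    if (x < 0) return 0;          // no real square root; clamp for simplicity
    if (x < 2) return x;
    std::int64_t lo = 1, hi = x / 2 + 1;
    while (lo < hi) {
        std::int64_t mid = lo + (hi - lo + 1) / 2;
        if (mid <= x / mid)       // equivalent to mid * mid <= x, without overflow
            lo = mid;
        else
            hi = mid - 1;
    }
    return lo;
}

int main() {
    std::cout << isqrt(10) << '\n';   // prints 3, even though 10 is not a perfect square
    return 0;
}
```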

One of the things which can go wrong with this is that, when creating a special ‘absolute function’, only a complex object might be specified as the possible parameter-type. But complex objects can have a set of ‘type-conversion constructors’, which accept first an integer, then a single-precision, and then a double-precision floating-point number. Depending on which type the parameter can match, such a constructor converts that single parameter into a temporary complex object, which has the parameter as its real component, and zero as its imaginary component, so that the absolute-function call can be computed on the resulting complex object.

When the compiler resorts to “Standard Conversions” (see the first article linked to above), it is willing to perform conversions between built-in types as well as programmer-defined conversions.

If somebody did choose this inefficient way of implementing the absolute function of complex objects, in a way that also computes the absolute of ‘real numbers’, then one trap to avoid would be to define only a type-conversion constructor that can initialize the complex object from an integer, and never one that accepts a double-precision floating-point number. In that case, a call with a floating-point argument would first be converted to an integer, then to a complex object, and its absolute would be computed from there, resulting in a non-negative integer.
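A sketch of how that trap plays out follows, in which the class ‘naive_complex’ and the function ‘my_abs’ are hypothetical names of my own, only meant to illustrate the conversion sequence:

```cpp
#include <cmath>
#include <iostream>

template<typename T>
struct naive_complex {
    T re, im;

    // The only type-conversion constructor provided: from int.
    // There is deliberately no constructor that accepts a double.
    naive_complex(int r) : re(T(r)), im(T(0)) {}
    naive_complex(T r, T i) : re(r), im(i) {}
};

// An 'absolute function' that only accepts the complex object as a parameter.
template<typename T>
double my_abs(const naive_complex<T> &z) {
    return std::sqrt(double(z.re) * double(z.re) + double(z.im) * double(z.im));
}

int main() {
    // The caller intends to compute |-2.7| = 2.7, but the compiler's only route is:
    // double -> int (truncation to -2) -> naive_complex<double> -> my_abs().
    std::cout << my_abs<double>(-2.7) << '\n';   // prints 2, not 2.7
    return 0;
}
```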

This is obviously totally counter to what a programmer would plausibly want his code to do, but one of the first facts taught in Programming Courses is that compilers will choose non-obvious, incorrect ways to behave, if the code gives them an opportunity to do so.

If the programmer wants to do this deliberately, the conversion to ‘integer’ truncates toward zero, which only matches ‘the floor function (of the initial floating-point number)’ when that number is non-negative.

Yet, this type of error seems less likely in the implementation of square roots of complex numbers, that rely on square roots of real numbers, etc.

The correct thing to do is to declare a template function, which accepts the data-type of its parameter as a template parameter. And then the programmer would need to write a series of template specializations, in which this template parameter matches certain data-types. Only, in the case of the ‘absolute function’ under Debian / Stretch, the implementers seem to have overlooked the template specialization which computes the absolute of a double-precision floating-point number.

However, actually solving the problem may often not be so easy, because the template parameter could indicate a complex object, which is itself of a template class, with a template parameter of its own (the base-type mentioned above).
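A simplified sketch of the layered definitions needed might look as follows. This is only an illustration of the technique, not the actual ‘libstdc++’ code, and it deliberately includes the definition for ‘double’ which seems to have been overlooked:

```cpp
#include <cmath>
#include <complex>
#include <iostream>

// Primary template: works for any type that can be compared with 0 and negated.
template<typename T>
T my_abs(T x) {
    return (x < T(0)) ? -x : x;
}

// Explicit specialization for double, forwarding to the library routine.
template<>
double my_abs<double>(double x) {
    return std::fabs(x);
}

// A separate overload for the complex case, where the parameter is itself
// of a template class, with a base-type T of its own.
template<typename T>
T my_abs(const std::complex<T> &z) {
    return std::hypot(z.real(), z.imag());
}

int main() {
    std::cout << my_abs(-3)                             << '\n';   // 3
    std::cout << my_abs(-2.7)                           << '\n';   // 2.7
    std::cout << my_abs(std::complex<double>(3.0, 4.0)) << '\n';   // 5
    return 0;
}
```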

One fact to note about all this is that there is not one set of headers. There are many versions of headers, each of which ships with a different compiler version. Further, not all people use the GNU compilers; some people use Microsoft’s Visual Studio, for example… I just happened to base much of my coding on GCC v6.3.0.

An additional fact to observe is that the header to be ‘#include’d is written ‘<complex>’, not ‘<complex.h>’ . What the missing ‘.h’ means is that this is a C++ Standard Library header, whose actual file is buried in the compiler’s own include directories, and whose heavily templated contents change from one compiler version to the next. All this makes verification very difficult. GCC is currently at v9.2, but I was building my projects in C++ 2014 mode (‘-std=c++14’), which was an available command-line option.

Additionally, when programmers go to Web-sites like this one, the information contained is merely meant as a quick guide on how to program using these types of tools, and not an exact match of any code that was ever used to compile my headers.

One way in which I can tell that that code is not literally correct, is by the fact that no version information was provided on the Web-site. Another is by the fact that, while the site uses data-types such as “double” and “float”, when programmers compile compilers, they additionally tend to use data-types like ‘double_t’, which refers to the exact register-size on some FPUs, and which may actually be 80-bit. Further, the types ‘int32_t’ and ‘int64_t’ would be less ambiguous at the binary level than the declarations ‘int’ or ‘long int’ would be, if there was ever any explicit support for signed integers… Hence, if my code got ‘complex<double_t>’ to work, but that type was never specified on the site, then the site can just as easily have overlooked the type ‘int64_t’.

According to what I have read, C and C++ compilers are intentionally vague about what the difference between ‘double’ and ‘long double’ is, only guaranteeing that ‘long double’ will give at least as much precision as ‘double’. But, if the contents of an 80-bit (floating-point) register are stored in a 64-bit RAM location, then some least-significant bits of the significand are discarded, in addition to the exponent being encoded with a different bias. In order to implement that, the compiler both uses and offers the type which refers to the exact register contents, and which may be 80 bits, or may be 64 bits, on a 64-bit CPU…
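One quick way to see what a given compiler actually does is simply to print the relevant sizes and precisions. The results differ between platforms; on x86-64 Linux with GCC, ‘long double’ is typically the 80-bit x87 format padded out to 16 bytes, while ‘double_t’ usually turns out to be plain ‘double’:

```cpp
#include <cfloat>
#include <cmath>
#include <iostream>
#include <limits>

int main() {
    std::cout << "sizeof(double)      = " << sizeof(double)        << '\n';
    std::cout << "sizeof(long double) = " << sizeof(long double)   << '\n';
    std::cout << "sizeof(double_t)    = " << sizeof(std::double_t) << '\n';

    // FLT_EVAL_METHOD tells how intermediate results are evaluated:
    // 0: in the declared type, 1: at least double, 2: at least long double.
    std::cout << "FLT_EVAL_METHOD     = " << FLT_EVAL_METHOD << '\n';

    // Number of bits in the significand of each type (53 vs. 64 for 80-bit x87).
    std::cout << "digits(double)      = " << std::numeric_limits<double>::digits      << '\n';
    std::cout << "digits(long double) = " << std::numeric_limits<long double>::digits << '\n';
    return 0;
}
```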

(Updated 9/17/2019, 18h00 … )


Simplifying the approach, to finding roots of polynomials.

In some cases, the aim of my postings is to say, ‘I am able to solve a certain problem – more or less – and therefore, the problem is solvable.’ It follows from this position that my solutions are not assumed to be better, by any means, than mainstream solutions. So recently, I suggested an approach to finding the roots of polynomials numerically, again just to prove that it can be done. And then one observation which my readers might have made would be, that my approach is only accurate to within (10^-12), while mainstream solutions are accurate to within (10^-16). And one possible explanation for this would be, that the mainstream solutions polish their roots, which I did not get into. (:1)
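For what it’s worth, ‘polishing’ usually just means taking each approximate root and applying a few iterations of Newton’s method against the original polynomial. A sketch follows, in which the function names are my own and not taken from any mainstream library:

```cpp
#include <complex>
#include <iostream>
#include <vector>

using cplx = std::complex<double>;

// Evaluate p(z) and p'(z), with coefficients stored from the highest power
// down to the constant term, using Horner's rule.
static void eval_poly(const std::vector<cplx> &coeffs, cplx z, cplx &p, cplx &dp) {
    p  = coeffs[0];
    dp = 0.0;
    for (std::size_t i = 1; i < coeffs.size(); ++i) {
        dp = dp * z + p;          // derivative accumulates first
        p  = p  * z + coeffs[i];
    }
}

// Polish an approximate root with a few Newton steps: z <- z - p(z) / p'(z).
static cplx polish_root(const std::vector<cplx> &coeffs, cplx z, int iterations = 3) {
    for (int i = 0; i < iterations; ++i) {
        cplx p, dp;
        eval_poly(coeffs, z, p, dp);
        if (std::abs(dp) == 0.0) break;   // avoid dividing by zero
        z -= p / dp;
    }
    return z;
}

int main() {
    // p(x) = x^2 - 2, with an approximate root that is only good to ~1e-5.
    std::vector<cplx> p = {1.0, 0.0, -2.0};
    cplx rough(1.41422, 0.0);
    std::cout.precision(16);
    std::cout << polish_root(p, rough).real() << '\n';   // ~1.414213562373095
    return 0;
}
```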

(Edit 2/8/2019, 6h40 : )

A detail which some of my readers might have missed is, that when I refer to a ‘numerical solution’, I’m generally referring to an approximation.

(End of Edit, 2/8/2019, 6h40 . )

But another observation which I made was that mainstream code examples are much tighter than what I suggested, which poses the obvious question: ‘Why can mainstream programmers do so much, with much less code complexity?’ And I think I know one reason.

The mainstream example I just linked to bypasses a concept which I had suggested, which was to combine conjugate complex roots into quadratic terms, which could then be factorized out of the original polynomial as such. What the mainstream example does instead is to assume that the coefficients of the derived polynomials could be complex, even though the original polynomial only has real coefficients. And then, if a complex root has been found, factorizing it out results in such a polynomial with complex coefficients, after which factorizing out the conjugate causes the coefficients of the quotient to become real again.
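A short sketch using ‘std::complex<double>’ for all the coefficients shows the effect: after dividing a real polynomial by one complex root via synthetic division, the quotient’s coefficients are complex, and after dividing that quotient by the conjugate root, they become (numerically) real again. The helper function below is my own illustration, not taken from the example linked to:

```cpp
#include <complex>
#include <iostream>
#include <vector>

using cplx = std::complex<double>;

// Synthetic division: divide the polynomial (coefficients from the highest
// power down to the constant) by (x - root), returning the quotient.
static std::vector<cplx> deflate(const std::vector<cplx> &coeffs, cplx root) {
    std::vector<cplx> quotient(coeffs.size() - 1);
    cplx carry = coeffs[0];
    for (std::size_t i = 0; i + 1 < coeffs.size(); ++i) {
        quotient[i] = carry;
        carry = carry * root + coeffs[i + 1];
    }
    return quotient;   // 'carry' would be the remainder, ~0 if 'root' really is a root
}

int main() {
    // p(x) = x^3 - 3x^2 + 4x - 2 = (x - 1)(x^2 - 2x + 2), with roots 1 and 1 +/- i.
    std::vector<cplx> p = {1.0, -3.0, 4.0, -2.0};
    cplx root(1.0, 1.0);

    std::vector<cplx> q1 = deflate(p, root);              // complex coefficients
    std::vector<cplx> q2 = deflate(q1, std::conj(root));  // real again: x - 1

    for (const cplx &c : q1) std::cout << c << ' ';
    std::cout << '\n';
    for (const cplx &c : q2) std::cout << c << ' ';
    std::cout << '\n';
    return 0;
}
```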

(Edited 1/30/2019, 8h50… )

(Updated 2/9/2019, 23h50… )

I’ve just written some source-code of my own, to test my premises…
