One of the ideas which exists in computer programming, particularly with object-oriented languages such as C++, is that a header file can define a ‘complex’ data-type, which has a non-complex base-type, such that the Mathematical definition of Complex Numbers is observed, which defines them as:

( a + b *i* )

Where (a) and (b) are of the base-type, which in pure Math is the set of Real Numbers. According to object-oriented programming, a mere header file can then overload how to perform the standard math operations on these complex objects, based on a super-set of math operations already being defined for the base-type. And the complex object can be defined as a template class, to make that as easy as possible.

Well, I have already run into a programming exercise, where I discovered that the header files that ship with Debian / Stretch **(which was finally based on GCC v6.3.0)**, ~~botched the job~~. The way in which a bug can begin is that, according to what I just wrote, (a) and (b) could be of the type ‘integer’, just because all the required math operations can be defined entirely for integers, including the ‘integer square root’, which returns an integer even when its parameter is not a perfect square.

This type of complex object makes no sense according to real math, but does according to the compiler.

One of the things which can go wrong with this is that, when creating a special ‘absolute function’, only a complex object could be specified as the possible parameter-type. But complex objects can have a set of ‘type-conversion constructors’, which accept first an integer, then a single-precision, and then a double-precision floating-point number. Depending on which type the parameter can match, such a constructor converts that single parameter into a temporary complex object, which has this parameter as its real component, and zero as its imaginary component, so that the absolute-function-call can be computed on the resulting complex object.

**When the compiler resorts to “Standard Conversions” (see first article linked to above), it is willing to perform conversions between internal types as well as programmer-defined conversions**.

If somebody did choose this inefficient way of implementing the absolute function of complex objects, in a way that also computes the absolute of ‘real numbers’, then *one* trap to avoid would be, only to define a type-conversion constructor that can initialize the complex object from an integer, and never from a double-precision floating-point number. In that case, a double-precision parameter would first undergo a standard conversion *to* an integer, discarding its fractional part, and the code would then compute *the integer’s* absolute, *resulting in a non-negative integer*.

This is obviously totally counter to what a programmer would plausibly want his code to do, but one of the first facts which is taught in Programming Courses, is that compilers will choose non-obvious, incorrect ways to behave, if their code gives them an opportunity to do so.

If the programmer wants to do this deliberately, the conversion to ‘integer’ truncates toward zero, which, for non-negative numbers, matches ‘the floor function (of the initial floating-point number)’.

Yet, this type of error seems less likely in the implementation of square roots of complex numbers, which rely on square roots of real numbers, etc.

The correct thing to do is to declare a template function, which accepts the data-type of the parameter as its template variable. And then the programmer would need to write a series of template specializations, in which this template variable matches certain data-types. Only, in the case of the ‘absolute function’ under Debian / Stretch, the implementers seem to have overlooked a template specialization, to compute the absolute ~~of a double-precision floating-point number~~.

However, actually solving the problem may often not be so easy, because the template-variable could indicate a complex object, which is itself of a template class, with a template variable of its own (that mentioned base-type)…

*One fact to note about all this is, that there is not one set of headers. There are many versions of headers, each of which ship with a different compiler version. Further, not all people use the GNU compilers; some people use Microsoft’s Visual Studio for example… I just happened to base much of my coding on GCC v6.3.0.*

*An additional fact to observe is, that the headers to be ‘#include’d are written ‘&lt;complex&gt;’, not, ‘&lt;complex.h&gt;’ . The missing ‘.h’ simply reflects the naming convention of the C++ Standard Library headers; the actual text of such a header can still be found on disk (under Debian, below ‘/usr/include/c++/’), but tracing which of its templates the compiler actually selected makes verification very difficult. The current GCC release is v9.2, but I was building my projects using the ‘-std=c++14’ command-line option.*

*Additionally, when programmers go to Web-sites like this one, the information contained is merely meant as a quick guide, on how to program, using these types of tools, and not an exact match of any code that was ever used to compile my headers.*

*One way in which I can tell that that code is not literally correct, is by the fact that no version information was provided on the Web-site. Another is by the fact that while the site uses data-types such as “double” and “float”, when programmers compile compilers, they additionally tend to use data-types like ‘double_t’, which will refer to the exact register-size on some FPUs, and may actually be 80-bit. Further, the types ‘int32’ and ‘int64’ would be less ambiguous at the binary level than the declarations ‘int’ or ‘long int’ would be, if there were ever any explicit support for signed integers… Hence, if my code got ‘complex&lt;double_t&gt;’ to work, but that type was never specified on the site, then the site can just as easily have overlooked the type ‘int64’*…

According to what I read, C and C++ compilers are intentionally vague about what the difference between ‘double’ and ‘long double’ is, only guaranteeing that ‘long double’ will give at least as much precision as ‘double’. But, if the contents of an 80-bit (floating-point) register are stored in a 64-bit RAM location, then some least-significant bits of the significand are discarded, in addition to the power of two being given a new offset. In order to implement that, the compiler both uses and offers a type that refers to the exact register-contents, which may be 80 bits, or may be 64 bits, for a 64-bit CPU…

(Updated 9/17/2019, 12h10 … )

(As of 9/14/2019, 19h15 : )

I think I can be more precise, as to where I imagine this bug may stem from.

When C++ relies on a series of template specializations to decide its behaviour, the compiler will go through them in the order in which they were specified, until it finds one that matches all the required patterns, and applies that one. If, in such a case, more than one specialization would have matched the required patterns, then this does not even result in ‘an ambiguous type-conversion’, differently from how it would be if there could just be more than one matching, simple function prototype.

*The consistency with which this is followed is such, that template definitions can be made recursive, and specializations can be used to control the recursion. Needless to say, programmers do not receive error messages, for having written recursive template-definitions, as long as those do not contain errors*.

I suppose that I should admit that my own experience with this subject was limited to a one-file program. In (more mainstream) cases where programming is divided between declarations in header files, and implementations in source files, the specializations need to be declared in the header files. And then, the order in which they are tried, is the order in which they appear *in the header file*. (:1)

In other words, it might be possible to write only one constructor for a template-class, that receives a single parameter, and that uses the same template parameter for the function, that was also used for the definition of the template class. If that were done, then the exact data-type of the resulting object would always follow in a constant way, from the data-type of this one parameter.

But the implementers of the ‘complex’ data-type did not have the option of doing things this way, because a constructor with a single parameter could need to act as a copy-constructor just as easily as it could need to act as a type-conversion constructor. If its parameter was another complex object, but the coders were expecting a simple variable, then the data-type of the resulting complex object would be misconstructed, as containing two complex objects. *Further, I’ll assume that the assignment operator would no longer be defined for the base-type, when that base-type has uselessly become a complex object itself*…

And so a solution which strikes me as more likely would be, to use one template parameter for the class, and another for a certain type of member function, that being the type-conversion constructor. And then, because there would be two template parameters, the behaviour would next need to follow from specializations…

The danger does not really exist, that the compiler would offer to convert a complex object into a simple variable.

But what this would also seem to imply, is that no function to be performed on real numbers, should ever need to rely on those being converted into complex numbers first, for which the function is finally defined.

```
// A potential source of problems, if
// a function relies on standard conversion,
// to receive an actual 'complex<double>',
// because a programmer supplied a 'double'.
template<class T>
class complex {
public:
    complex(const complex<T> &Copied_Object);
    complex(T Base_Variable);
private:
    T real;
    T imag;
};

// The problem? The compiler could get
// 'complex<int>' to work, even if the programmer
// had supplied a 'double'...
template<class T>
complex<T>::complex(const complex<T> &X) {
    real = X.real;
    imag = X.imag;
}

template<>
complex<int>::complex(int X) {
    real = X;
    imag = 0;
}

template<>
complex<float>::complex(float X) {
    real = X;
    imag = 0.0f;
}

template<>
complex<double>::complex(double X) {
    real = X;
    imag = 0.0;
}

// An alternative: instead of the three
// specializations above, one generic definition
// of the type-conversion constructor:
template<class T>
complex<T>::complex(T X) {
    real = X;
    imag = 0;
}
```

Now, simply removing the complex type that was based on an integer will not solve the problem fully, because as long as the base-type of the complex number is not specified by the program which is using the template, the compiler will still select the first specialization which matches, which could next be a ‘complex&lt;float&gt;’, even though a ‘double’ was provided.

My own source code (see second link at the beginning of this posting), actually defined the base-type of the complex number, *when initializing it using a simple number*, in which case, had I put an ‘int’ as the simple number, that would simply have been converted into a ‘double’.

Finally, I think the problem can be solved by reversing the order of certain specializations in the header file / class declaration, so that, *if the base-class of the complex number was not defined*, the widest simple variable will match first, and will also set the complex number to the one with the corresponding base-type.

(Update 9/15/2019, 15h25 : )

**1:)**

Since I’m making this posting into a kind of guide, as to what can go wrong, when relying on C++ template classes to provide complex numbers, I suppose that there’s another potential source of errors which I should point out.

For every *declared* template specialization, there must be exactly one matching *implementation*. The question of whether one does exist is orthogonal to the question of whether a given template specialization is selected or not.

Therefore, it can happen that an implementer is conscientiously creating template specializations for all the cases he knows are valid mathematically, like so:

```
// Another potential problem, in
// providing complex numbers, using C++
// template-classes (copied and pasted):
// Essential:
template<class T> constexpr complex<T> operator/(const complex<T>&, const complex<T>&);
// Potential source of a problem:
template<class T> constexpr complex<T> operator/(const complex<T>&, const T&);
```

What can go wrong here, is that a programmer making use of the templates can specify a division of a complex number by a real number, but the implementer failed to provide an implementation for the second template specialization declared above. This would result in an error message, and an implementer aware of such an error message would basically have two ways to remedy it:

1. Provide an implementation for the second template, in which he actually defines how to divide a complex number by a real one, or
2. Simply get rid of the second template.

The reason why solution (2) would at least give mathematically correct results is the fact that the compiler is asking for a complex number to be provided as the second parameter, and the *notion* that standard type-conversions will produce one. And, the basic flaw which I outlined above, in how complex numbers would be derived from real numbers, does not even become actual in this case, because the code defines what kind of complex number is expected:

One that matches the complex number on the LHS of the operator, according to the reuse of the template parameter ‘T’.

The real advantage to actually using solution (1) above would be that the formal way to divide a complex number by one real number is faster to compute, than converting the real number into a complex one first, and then dividing between two complex numbers.

(Update 9/15/2019, 16h35 : )

There is yet another thing which can go wrong. Depending on what version of header files a programmer is relying on, for no apparent reason, the multiplication operator ‘*’ may be defined with a complex number on the LHS and a real number on the RHS, but not a corresponding operator for which those parameter-types are reversed. I have actually written the following source-code:

```
// CC xb = exp(CC(0.0, M_PI / (N + 1)));
CC xb = exp((M_PI / (N + 1)) * 1i);
```

Those lines occur near line 160 of my program. The simple fact that my program compiles, as those lines are written above, means that my header files do not contain the omission to which I’m referring. Yet, if this omission was present in another person’s header files, then that person should uncomment the first of those two lines, and comment out the second, and that problem should go away.

(Update 9/17/2019, 12h10 : )

According to a recent realization about how complex numbers are defined in C++, I also have a better idea now, as to why the last snippet of code above really did not generate any sort of error message. The mathematical constant ‘`M_PI`’ expands to a ‘double’ literal in the GNU headers. Dividing it by an integer produces another ‘double’, and the literal constant ‘`1i`’ is of the type ‘complex&lt;double&gt;’. What that means is that, between the parentheses, a multiplication between a type of real number and a matching complex number is taking place, that produces no error messages.

Next, when the ‘exp()’ function of this is computed, an object of type ‘complex&lt;double&gt;’ is generated. But assigning this to one of type ‘complex&lt;double_t&gt;’ causes no problems, because the implementation of a simple assignment is a question of assigning the real part, and then the imaginary part, and GNU GCC has never had any problem assigning a simple ‘double’ to a simple ‘double_t’, or vice-versa, even on platforms where ‘double_t’ refers to a wider register format.

BTW, when C++ coders simply write a floating-point number as a literal, the data-type it defaults to is ‘double’.

Dirk
