Certain wrong places to put recursion into a program?

One of the subjects I was pursuing recently was not just why, before March of 2019, I had gotten some error messages when trying to compile a certain program, but also why, or whether, other people, using other types of computers, might continue to obtain error messages long after I was no longer obtaining them.

And a basic concept which I had referenced was that C++ compilers, when not given an exact match between the data-type named in a function prototype and the data-type of the argument passed to that function, will first try to perform a type-conversion referred to as a “promotion”, and, if that fails, will attempt what’s referred to as a “standard conversion”, the latter of which can transform a ‘higher’ type of built-in number to a ‘lower’ type, etc. There was a basic question which I had not answered in any way, nor even acknowledged explicitly: What happens when more than one type-conversion is able to go from the argument-type to the parameter-type of a function prototype?
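As a small illustration of that ranking (a sketch of my own, not taken from either article), overload resolution will prefer a promotion over a standard conversion whenever both could satisfy the call:

```cpp
#include <iostream>

void f(int)  { std::cout << "f(int): the promotion was chosen\n"; }
void f(long) { std::cout << "f(long): the standard conversion was chosen\n"; }

int main() {
    short s = 7;
    // 'short' -> 'int' is a promotion, while 'short' -> 'long' is a standard
    // conversion; the promotion outranks the conversion, so f(int) is selected.
    f(s);
    return 0;
}
```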

Theoretically, it would be possible to design a compiler such that, every time a type-conversion is being sought, both kinds are tried, in a pattern which is referred to as ‘recursion’, but which can also just be referred to as an ‘exhaustive search’. There’s every possibility that this is not how the compiler is programmed in fact. What can happen instead is that the compiler is willing to perform a promotion in the service of a potential standard conversion, but that it will go no further.
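A minimal sketch of that limit, using invented class names: a built-in conversion may be performed on the way to a single programmer-defined conversion, but two programmer-defined conversions will not be chained:

```cpp
#include <iostream>

struct Meters {
    double value;
    Meters(double v) : value(v) {}       // programmer-defined conversion from double
};

struct Trip {
    Meters distance;
    Trip(Meters m) : distance(m) {}      // programmer-defined conversion from Meters
};

void report(Meters m) { std::cout << m.value << " m\n"; }
void plan(Trip t)     { std::cout << t.distance.value << " m planned\n"; }

int main() {
    report(5);                  // OK: int -> double (built-in), then double -> Meters (one user-defined step)
    // plan(5);                 // Error: would require two user-defined conversions (int -> Meters -> Trip)
    plan(Trip(Meters(5.0)));    // Explicit construction side-steps the limit
    return 0;
}
```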

And in that context, as well as in the mentioned context of recursive template definitions, casting a derived class to one of its parent classes counts as a promotion.

There’s every possibility that, if recursion were placed in the code of the compiler, to re-attempt one standard conversion prior to another standard conversion that will not fit yet, the result could become some sort of endless loop, while the real behaviour of a compiler needs to be more stable. And this would be a valid reason for which certain standard template declarations will first try to instantiate the templates using a ‘float’, then a ‘double’, and then a ‘long double’, in the form of specializations. The programmers will assume that a standard conversion needs to receive a value that can be the result of a promotion. In that case, the first template specialization may not work while a later one might, just because to convert, say, a ‘double_t’ to a ‘long double’ will be a promotion, while to convert a ‘double_t’ to a ‘float’ would not be; it would in fact be a standard conversion in the service of another standard conversion (which should not happen).
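A hedged sketch of what such a set of specializations could look like; the function name is invented here, and the actual library headers are organized differently:

```cpp
#include <cmath>
#include <iostream>

// Primary template, declared but deliberately left undefined.
template <typename T>
T abs_of(T x);

// Explicit specializations, one per floating-point type.
template <> float       abs_of<float>(float x)             { return std::fabs(x); }
template <> double      abs_of<double>(double x)           { return std::fabs(x); }
template <> long double abs_of<long double>(long double x) { return std::fabs(x); }

int main() {
    std::double_t r = -2.5;    // double_t: the FPU's natural evaluation type (often plain double)
    // Forcing the widest specialization is always safe, since widening
    // double_t to long double is lossless:
    std::cout << abs_of<long double>(r) << '\n';
    return 0;
}
```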

Dirk

 

What can go wrong when implementing complex numbers in C++ (Possible Solution).

One of the ideas which exists in computer programming, and in object-oriented languages such as C++, is that a header file can define a ‘complex’ data-type which has a non-complex base-type, such that the Mathematical definition of Complex Numbers is observed, which defines them as:

( a + b i )

Where (a) and (b) are of the base-type, which in pure Math is the set of Real Numbers. According to object-oriented programming, a mere header file can then overload how the standard math operations are performed on these complex objects, provided a sufficient set of math operations is already defined for the base-type. And the complex object can be defined as a template class, to make that as easy as possible.
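The following is a minimal sketch of that idea, purely for illustration; the class name and the operator set are my own, not the library’s actual definition:

```cpp
#include <iostream>

// A template complex type over an arbitrary base-type T.
template <typename T>
class Complex {
public:
    Complex(T re = T(), T im = T()) : re_(re), im_(im) {}
    T real() const { return re_; }
    T imag() const { return im_; }
private:
    T re_, im_;
};

// The operations on Complex<T> are defined entirely in terms of operations on T.
template <typename T>
Complex<T> operator+(const Complex<T>& a, const Complex<T>& b) {
    return Complex<T>(a.real() + b.real(), a.imag() + b.imag());
}

template <typename T>
Complex<T> operator*(const Complex<T>& a, const Complex<T>& b) {
    return Complex<T>(a.real() * b.real() - a.imag() * b.imag(),
                      a.real() * b.imag() + a.imag() * b.real());
}

int main() {
    Complex<double> z(3.0, 4.0), w(1.0, -2.0);
    Complex<double> p = z * w;
    std::cout << p.real() << " + " << p.imag() << "i\n";   // prints 11 + -2i
    return 0;
}
```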

Well, I have already run into a programming exercise where I discovered that the header files that ship with Debian / Stretch (which was finally based on GCC v6.3.0) botched the job. The way in which a bug can begin is that, according to what I just wrote, (a) and (b) could be of the type ‘integer’, just because all the required math operations can be defined to exist entirely for integers, including the ‘integer square root’, which returns an integer even when its parameter is not a perfect square.
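For example, a hypothetical ‘integer square root’ can be written so that nothing but integers is ever involved:

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical integer square root: returns the floor of the real square root,
// so every operation a Complex<int> would need stays entirely within integers.
std::uint32_t isqrt(std::uint64_t n) {
    std::uint64_t lo = 0, hi = 0xFFFFFFFFull;
    while (lo < hi) {
        std::uint64_t mid = (lo + hi + 1) / 2;   // binary search on the answer
        if (mid * mid <= n) lo = mid; else hi = mid - 1;
    }
    return static_cast<std::uint32_t>(lo);
}

int main() {
    std::cout << isqrt(10) << '\n';   // prints 3, not 3.1622...
    return 0;
}
```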

This type of complex object makes no sense according to real math, but does according to the compiler.

One of the things which can go wrong with this is that, when creating a special ‘absolute function’, only a complex object might be specified as the possible parameter-type. But complex objects can have a set of ‘type-conversion constructors’, accepting first an integer, then a single-precision, and then a double-precision floating-point number. Depending on which type the argument matches, such a constructor converts that single argument into a temporary complex object, which has this argument as its real component and zero as its imaginary component, so that the absolute-function-call can be computed on the resulting complex object.
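A hedged sketch of that mechanism, with invented names (the actual header is laid out differently):

```cpp
#include <cmath>
#include <iostream>

// A complex type whose type-conversion constructors allow a plain number to be
// passed where a complex object is expected.
struct ComplexD {
    double re, im;
    ComplexD(int r)    : re(r), im(0.0) {}   // type-conversion constructor from an integer
    ComplexD(float r)  : re(r), im(0.0) {}   // ... from a single-precision float
    ComplexD(double r) : re(r), im(0.0) {}   // ... from a double-precision float
    ComplexD(double r, double i) : re(r), im(i) {}
};

// An 'absolute function' that only names the complex type as its parameter:
double my_abs(const ComplexD& z) {
    return std::sqrt(z.re * z.re + z.im * z.im);
}

int main() {
    std::cout << my_abs(-3.5) << '\n';   // -3.5 -> ComplexD(-3.5, 0) -> 3.5
    std::cout << my_abs(7)    << '\n';   //    7 -> ComplexD(7, 0)    -> 7
    return 0;
}
```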

When the compiler resorts to “Standard Conversions” (see the first article linked to above), it is willing to perform conversions between built-in types as well as programmer-defined conversions.

If somebody did choose this inefficient way of implementing the absolute function of complex objects, in a way that also computes the absolute of ‘real numbers’, then one trap to avoid would be to define only a type-conversion constructor that can initialize the complex object from an integer, and never one from a double-precision floating-point number. A double-precision argument would then first be converted to an integer, discarding its fractional part, then to a complex object, and the absolute computed would be a non-negative integer.
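A sketch of the trap itself, again with invented names, but with the conversion sequence a compiler would actually apply:

```cpp
#include <cmath>
#include <iostream>

// Only an integer type-conversion constructor is provided.
struct ComplexBad {
    double re, im;
    ComplexBad(int r) : re(r), im(0.0) {}        // the only converting constructor
    ComplexBad(double r, double i) : re(r), im(i) {}
};

double my_abs(const ComplexBad& z) {
    return std::sqrt(z.re * z.re + z.im * z.im);
}

int main() {
    // -2.9 is silently converted: double -> int (a standard conversion, which
    // truncates), then int -> ComplexBad (the programmer-defined conversion).
    // The printed result is 2, not 2.9.
    std::cout << my_abs(-2.9) << '\n';
    return 0;
}
```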

This is obviously totally counter to what a programmer would plausibly want his code to do, but one of the first facts taught in Programming Courses is that compilers will choose non-obvious, incorrect ways to behave, if their code gives them an opportunity to do so.

If the programmer wants to do this deliberately, the conversion to ‘integer’ truncates toward zero, which, for non-negative values, is the same as ‘the floor function (of the initial floating-point number)’.
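A two-line check of the difference, for a negative value:

```cpp
#include <cmath>
#include <iostream>

int main() {
    std::cout << static_cast<int>(-2.9) << '\n';  // -2 : truncation toward zero
    std::cout << std::floor(-2.9)       << '\n';  // -3 : the mathematical floor
    return 0;
}
```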

Yet, this type of error seems less likely in the implementation of square roots of complex numbers, which rely on square roots of real numbers, etc.

The correct thing to do is to declare a template function which accepts the data-type of the parameter as its template variable. The programmer would then need to write a series of template specializations, in which this template variable matches certain data-types. Only, in the case of the ‘absolute function’ under Debian / Stretch, the implementers seem to have overlooked the template specialization that computes the absolute of a double-precision floating-point number.
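A hedged sketch of the intended arrangement, and of the hole that the overlooked case leaves (names invented):

```cpp
#include <cmath>
#include <iostream>

// A primary template, declared but not defined, with every supported
// parameter type meant to receive its own explicit specialization.
template <typename T>
T absolute(T x);

template <> int   absolute<int>(int x)     { return x < 0 ? -x : x; }
template <> float absolute<float>(float x) { return std::fabs(x); }
// If the specialization below were omitted, absolute(-2.9) would still deduce
// T = double, but the program would fail to link, because the primary template
// has no definition:
template <> double absolute<double>(double x) { return std::fabs(x); }

int main() {
    std::cout << absolute(-2.9) << '\n';   // deduces T = double, prints 2.9
    return 0;
}
```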

However, actually solving the problem may often not be so easy, because the template-variable could indicate a complex object, which is itself of a template class, with a template variable of its own (that mentioned base-type).
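One hedged way of handling that nesting is to give the function a second overload whose parameter is the complex template itself, so that the base-type is still deduced (again, the names are invented):

```cpp
#include <cmath>
#include <iostream>

// A minimal complex template, plus two overloads of the same function template.
template <typename T>
struct Complex { T re, im; };

// The plain, real-valued case:
template <typename T>
T absolute(T x) { return x < T(0) ? -x : x; }

// The nested case: the parameter is itself a template instance, Complex<T>,
// and the base-type T is deduced from it.
template <typename T>
T absolute(const Complex<T>& z) {
    return static_cast<T>(std::sqrt(z.re * z.re + z.im * z.im));
}

int main() {
    std::cout << absolute(-4.0) << '\n';   // prints 4
    Complex<double> z{3.0, 4.0};
    std::cout << absolute(z) << '\n';      // prints 5
    return 0;
}
```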

One fact to note about all this is that there is not one set of headers. There are many versions of headers, each of which ships with a different compiler version. Further, not all people use the GNU compilers; some people use Microsoft’s Visual Studio, for example… I just happened to base much of my coding on GCC v6.3.0.

An additional fact to observe is that the headers to be ‘#include’d are written ‘<complex>’, not ‘<complex.h>’. The missing ‘.h’ simply follows the C++ standard-library naming convention; these headers are still text files, but their exact contents differ with every compiler version, which makes verification very difficult. GCC is currently at v9.2, but I was actually building my projects with the ‘-std=c++14’ command-line option.

Additionally, when programmers go to Web-sites like this one, the information contained is merely meant as a quick guide on how to program using these types of tools, and not an exact match of any code that was ever used to compile my headers.

One way in which I can tell that that code is not literally correct is the fact that no version information was provided on the Web-site. Another is the fact that, while the site uses data-types such as “double” and “float”, when programmers write compilers and their libraries, they additionally tend to use data-types like ‘double_t’, which refers to the exact register-size on some FPUs, and which may actually be 80-bit. Further, the types ‘int32_t’ and ‘int64_t’ would be less ambiguous at the binary level than the declarations ‘int’ or ‘long int’ would be, if there was ever any explicit support for signed integers… Hence, if my code got ‘complex<double_t>’ to work, but that type was never specified on the site, then the site can just as easily have overlooked the type ‘int64_t’.
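A quick, platform-dependent probe of what the ambiguous versus the exact-width type-names actually mean on a given build:

```cpp
#include <cstdint>
#include <cmath>
#include <iostream>

// The output differs between platforms, compilers and options; sizeof reports
// storage, which for extended floating-point types can include padding.
int main() {
    std::cout << "int:           " << sizeof(int) * 8           << " bits\n";
    std::cout << "long int:      " << sizeof(long int) * 8      << " bits\n";
    std::cout << "std::int32_t:  " << sizeof(std::int32_t) * 8  << " bits\n";
    std::cout << "std::int64_t:  " << sizeof(std::int64_t) * 8  << " bits\n";
    std::cout << "double:        " << sizeof(double) * 8        << " bits\n";
    std::cout << "std::double_t: " << sizeof(std::double_t) * 8 << " bits\n";
    return 0;
}
```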

According to what I read, C and C++ compilers are intentionally vague about what the difference between ‘double’ and ‘long double’ is, only guaranteeing that ‘long double’ will give at least as much precision as ‘double’. But, if the contents of an 80-bit (floating-point) register are stored in a 64-bit RAM location, then some least-significant bits of the significand are discarded, in addition to the power of two being given a new offset. In order to deal with that, the compiler both uses and offers the type that refers to the exact register-contents, which may be 80 bits, or may be 64 bits, for a 64-bit CPU…
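A small probe of that difference, whose output depends on the platform; on x86, ‘long double’ is typically the 80-bit x87 format, with a 64-bit significand:

```cpp
#include <cfloat>
#include <iostream>

int main() {
    // How many significand bits each type carries on this platform:
    std::cout << "double significand bits:      " << DBL_MANT_DIG  << '\n';  // usually 53
    std::cout << "long double significand bits: " << LDBL_MANT_DIG << '\n';  // 64 for the x87 format

    // A value at the limit of long double precision:
    long double wide = 1.0L + LDBL_EPSILON;
    double narrowed  = static_cast<double>(wide);   // low-order significand bits are discarded
    std::cout << (narrowed == 1.0 ? "bits were lost\n" : "no bits were lost\n");
    return 0;
}
```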

(Updated 9/17/2019, 18h00 … )
