Installing Visual Studio Code under Linux.

Linux users have often been avid followers of their OS, but have been left thirsting for the ability to run some of the proprietary applications that Windows and Mac users have had access to since the beginning of Computing for the Masses. In fact, the narrow supply of open-source applications has been aggravated by the sheer number of Linux distributions that exist. Followed to its smallest detail, every Linux computer evolves into a slightly different version of Linux, because each one can be configured slightly differently, and some users will configure their boxes in their own, personalized way. (That is actually not a very good thing to do, unless you really know what you’re doing.) But the mere fact that many, professionally configured Linux distributions exist has also meant that packages destined for one distribution would either not install on another, or that packages which were not meant for a given distribution could actually break it, if the user supplied his ‘root’ privileges to install the package anyhow.

At the same time, the total amount of programming time available to open-source development has always been scarce, which means, for the purposes of this posting, that the available programming hours ended up divided between the different Linux distributions. (:2)

In recent Linux distributions, two main mechanisms have been developed over the years to reduce the severity of this problem. In fact, since Debian 9 / Stretch, both of these solutions have been available:

  • Flatpaks,
  • Snaps.

For the moment, I’m going to ignore the fact that Flatpaks exist as a very viable way to install software, simply because the main purpose of Flatpaks has been to install purely Linux software, but on a wider variety of Linux distributions. So, why do both ‘Flatpak’ and ‘Snap’ exist? I suppose one reason is simply that each was invented, and that in principle, both work. But another reason is the fact that ‘Snaps’ are really disk images, which get mounted as loopback devices, and that therefore, Snaps can install software which is binary in nature and not open-source, yet install it on a Linux computer, where the emphasis has traditionally been on open-source software. (:3)

Both mechanisms for installing software expose only a limited set of interfaces, which determine which features of the host computer the guest application is allowed to access, since, if either method of installing software were completely unrestricted, Linux users would lose the security which they initially gained through their choice of Linux. As I understand it, ‘Snaps’ tend to have more-severe security restrictions than ‘Flatpaks’ do, and this is also how it should be.

What all of this inspired in Linux users was the hope that, eventually, they would also be able to install certain proprietary applications. And the main goal of this posting is to assess to what extent that hope seems to have been materializing. I’m also going to ignore, for the moment, the fact that some ‘Snaps’ are really just Linux applications which their programmers compiled to the additional target of being a Snap, and that for this reason, some Snaps just don’t work, usually because their programmers did not take into consideration that, on an eventual host computer, each Snap only has access to the Interfaces which the security model allows, even though the same application ‘just works fine’ when it resides on a Linux computer natively. For the sake of argument, software developers do exist who are professional enough at what they do to compile Snaps as Snaps, which in turn do work as intended.

An idea which could make some Linux users uneasy would be that the supply of proprietary software available as Snaps may not have grown as much as hoped, and that Linux users could be looking at a bleak future. Well, in order to get a full idea of how many Snaps are presently available, users can just visit ‘the Snap store’ and browse what it has to offer. This would be the URL:

https://snapcraft.io/

What most Computer Users would seem to notice is the fact that there is not a huge abundance of software, at least according to my tastes, and at the time I’m writing this. Also, users do not need to pay for anything at the so-called Snap store itself. However, I have at least one Snap installed of which I know that, if I activated it, I’d need to make a one-time payment to its developers before it would actually function, as one user-license.

What I’d just like to explore for the moment is the possibility that a user might want to write and compile code of his own, in his favourite language, such as C / C++ or C#, and that additionally, said user might prefer “Visual Studio Code” as his Editor, as well as his IDE. In reality, Linux users do not depend very strongly on the ability to use ‘VSCode’, as it’s also called, because under Linux, we actually have numerous IDEs to choose between. But let’s say I wanted to write code in these two (or three) languages, and to use VSCode to do so…

(Updated 5/04/2020, 17h50… )

Continue reading Installing Visual Studio Code under Linux.

Certain wrong places to put recursion into a program?

One of the subjects I was pursuing recently was not just why, before March of 2019, I had been getting some error messages when trying to compile a certain program, but also why, or whether, other people, using other types of computers, might continue to obtain error messages long after I was no longer obtaining them.

And a basic concept which I had referenced was that C++ compilers, when the data-type of an argument does not exactly match the data-type in a function prototype, will first try to perform a type-conversion which is referred to as a “promotion”, and if that fails, will attempt what’s referred to as a “standard conversion”, the latter of which can transform a ‘higher’ type of built-in number into a ‘lower’ type, etc. However, there was a basic question to which I had not provided any sort of answer, nor even acknowledged explicitly could exist: What happens when more than one type-conversion has the ability to go from the argument-type to the parameter-type of a function prototype?
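
To make the ranking concrete, here is a minimal sketch of my own (not code taken from any header), in which both a promotion and a standard conversion could reach one of the declared parameter-types, and the promotion wins:

#include <iostream>

void f(int)    { std::cout << "f(int)\n"; }
void f(double) { std::cout << "f(double)\n"; }

int main()
{
    short s = 1;
    f(s);      // 'short' to 'int' is a promotion, so f(int) wins over
               // the standard conversion 'short' to 'double'.

    float x = 2.0f;
    f(x);      // 'float' to 'double' is a promotion, so f(double) wins over
               // the standard conversion 'float' to 'int'.
    return 0;
}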

Theoretically, it would be possible to design a compiler such that, every time a type-conversion is being sought, both kinds are tried, in a pattern which is referred to as ‘recursion’, but which can also just be called an ‘exhaustive search’. There’s every possibility that this is not how the compiler is programmed in fact. What can happen instead is that the compiler is willing to perform a promotion in the service of a potential, standard conversion, but that it will go no further.
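
By way of illustration, here is a minimal sketch of my own of the related, documented C++ rule, that an implicit argument-conversion may combine standard conversions with at most one programmer-defined conversion, but will never chain two programmer-defined conversions:

struct A { A(double) {}   };   // programmer-defined conversion: double -> A
struct B { B(const A&) {} };   // programmer-defined conversion: A -> B

void g(const B&) {}

int main()
{
    g(B(A(1.0f)));   // fine: both conversions requested explicitly
    g(A(1.0f));      // fine: 'float' -> 'double' (a promotion), then the
                     // explicit A(...), then one implicit conversion A -> B
    // g(1.0f);      // error: would require two programmer-defined
                     // conversions (float -> double -> A -> B)
    return 0;
}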

And in that context, as well as in the mentioned context of recursive template definitions, casting a derived class to one of its parent classes counts as a promotion.

There’s every possibility that, if recursion were placed in the code of the compiler, so as to re-attempt one standard conversion prior to another standard conversion that does not fit yet, the result could become some sort of endless loop, while the real behaviour of a compiler needs to be more stable. And this would be a valid reason for which certain standard template declarations will first try to instantiate their templates using a ‘float’, then a ‘double’, and then a ‘long double’, in the form of specializations. The programmers will assume that a standard conversion needs to receive a value that can be the result of a promotion. In that case, the first template specialization may not work, while a later one might, just because converting, say, a ‘double_t’ to a ‘long double’ will be a promotion, while converting a ‘double_t’ to a ‘float’ would not be; it would in fact be a standard conversion in the service of another standard conversion, which should not happen.

Dirk

 

I’ve finally figured out why I was having so many problems with complex numbers, in a C++ program.

In this earlier posting, I had written that many things could go wrong with the way C++ templates define complex numbers. Well, after studying the programming exercise which that posting was based on, I think I’ve finally found out what the single problem in fact was.

In my programming exercise, I had defined the data-type ‘complex<double_t>’. This in itself caused a lot of problems, without being obvious to me as the culprit. The way C++ templates define complex numbers will often derive the base-type of one complex number from the base-type of another, preexisting one, just by transferring a template parameter. However, there are two situations where the templates defined in the headers can run into trouble with this:

  1. They can try to convert a ‘real number’ to a complex number, where the base-type of the derived complex number was never declared,
  2. They can try to mix mathematical operations between complex numbers and ‘real numbers’, in such a way that the type of the real number must match the base-type of the complex number exactly.

Specifically in situation (1) above, the templates will try the specializations for ‘float’, ‘double’, and ‘long double’, as educated guesses for what type of complex number is required. And the problem may well be that only these 3 template-specializations are attempted, not ‘double_t’.

And in situation (2) above, I was not taking into consideration that my code was often providing literal numbers that were of type ‘double’ by nature, not of type ‘double_t’. This created a mismatch.
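
A minimal sketch of that second situation, assuming the GNU implementation of ‘<complex>’ (the names ‘real_t’, ‘z’, ‘two’ and ‘w’ are my own):

#include <cmath>
#include <complex>
#include <iostream>

int main()
{
    typedef std::double_t real_t;         // may be 'double', or 'long double'
    std::complex<real_t> z(3.0, -4.0);

    // Mixing 'z' with a bare literal such as 2.0 only compiles when real_t
    // happens to be plain 'double', because the operator templates in
    // <complex> pair complex<T> strictly with a scalar of the same T.
    real_t two = 2.0;                     // force the scalar to match the base-type
    std::complex<real_t> w = z * two;     // exact match, compiles either way

    std::cout << std::abs(w) << '\n';     // prints 10
    return 0;
}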

In any case, now that I realize what my mistake was, and having removed all of the ‘mixed computations between complex and real numbers’, with the result that the code no longer generates errors, I am more confident than I was only days ago in the C++ template-definitions for complex numbers.

 


 

(Update 9/18/2019, 12h55 : )

An added observation would be that, when my code tried to find the absolute of the imaginary component of a complex number, that component was also a ‘double_t’ (deterministically), but the absolute function that’s predefined only recognizes the parameter-types ‘int’, ‘float’, ‘double’, and ‘long double’ in certain versions of the GCC compiler, which can again result in an incorrect match with a working version of the absolute function.
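
To make the risk concrete, here is a hypothetical stand-in for an absolute function that was only ever declared for ‘int’ (it mimics the C library’s integer abs(); the name ‘int_only_abs’ is mine):

#include <cmath>
#include <iostream>

int int_only_abs(int v) { return v < 0 ? -v : v; }

int main()
{
    std::double_t x = -2.75;

    // The call compiles, because 'double_t' to 'int' is a valid standard
    // conversion, but the fraction is silently discarded:
    std::cout << int_only_abs(x) << '\n';   // prints 2, not 2.75

    // Matching a floating-point parameter-type keeps the value intact:
    std::cout << std::fabs(x) << '\n';      // prints 2.75
    return 0;
}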

Dirk

 

What can go wrong, when implementing complex numbers in C++ (Possible Solution).

One of the ideas which exist in computer programming, particularly in object-oriented languages such as C++, is that a header file can define a ‘complex’ data-type, which has a non-complex base-type, such that the Mathematical definition of Complex Numbers is observed, defining them as:

( a + b i )

Where (a) and (b) are of the base-type, which in pure Math is the set of Real Numbers. According to object-oriented programming, a mere header file can then overload how the standard math operations are performed on these complex objects, based on the math operations already defined for the base-type. And the complex object can be defined as a template class, to make that as easy as possible.
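
As a minimal sketch of that idea (a toy type of my own, not the actual code from ‘<complex>’), assuming only that ‘+’, ‘-’ and ‘*’ already exist for the base-type:

#include <iostream>

template <typename T>
struct basic_complex {
    T re, im;
    basic_complex(T r = T(), T i = T()) : re(r), im(i) {}
};

// (a + b i) + (c + d i) = (a + c) + (b + d) i
template <typename T>
basic_complex<T> operator+(const basic_complex<T>& x, const basic_complex<T>& y)
{
    return basic_complex<T>(x.re + y.re, x.im + y.im);
}

// (a + b i)(c + d i) = (ac - bd) + (ad + bc) i
template <typename T>
basic_complex<T> operator*(const basic_complex<T>& x, const basic_complex<T>& y)
{
    return basic_complex<T>(x.re * y.re - x.im * y.im,
                            x.re * y.im + x.im * y.re);
}

int main()
{
    basic_complex<double> z(1.0, 2.0), w(3.0, -1.0);
    basic_complex<double> p = z * w;
    std::cout << p.re << " + " << p.im << "i\n";   // prints 5 + 5i
    return 0;
}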

Well, I have already run into a programming exercise where I discovered that the header files which ship with Debian / Stretch (which was finally based on GCC v6.3.0) botched the job. The way in which a bug can begin is that, according to what I just wrote, (a) and (b) could be of type ‘integer’, just because all the required math operations can be defined to exist entirely for integers, including the ‘integer square root’, which returns an integer even when its parameter is not a perfect square.

This type of complex object makes no sense according to real math, but does according to the compiler.
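
For concreteness, here is a sketch of my own of the kind of ‘integer square root’ just mentioned (a deliberately slow, linear search, named ‘isqrt’ here):

#include <iostream>

// Returns the largest integer whose square does not exceed n.
// (Would overflow for n close to UINT_MAX, but it keeps the idea visible.)
unsigned isqrt(unsigned n)
{
    unsigned r = 0;
    while ((r + 1) * (r + 1) <= n)
        ++r;
    return r;
}

int main()
{
    std::cout << isqrt(10) << '\n';   // prints 3, even though 10 is not a perfect square
    std::cout << isqrt(16) << '\n';   // prints 4
    return 0;
}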

One of the things which can go wrong with this is that, when creating a special ‘absolute function’, only a complex object could be specified as the possible parameter-type. But complex objects can have a set of ‘type-conversion constructors’, that accept first an integer, then a single-precision, and then a double-precision floating-point number, and which, depending on which type the parameter can match, convert that single parameter into a temporary complex object, that has this parameter as its real component and zero as its imaginary component, so that the absolute-function-call can be computed on the resulting complex object.

When the compiler resorts to “Standard Conversions” (see the first article linked to above), it is willing to perform conversions between internal types, as well as programmer-defined conversions.

If somebody did choose this inefficient way of implementing the absolute function of complex objects, in a way that also computes the absolute of ‘real numbers’, then one trap to avoid would be to define only a type-conversion constructor that can initialize the complex object from an integer, and never from a double-precision floating-point number. In that case, a double-precision argument would first undergo a standard conversion to an integer, which would succeed, and the absolute would be computed from that, resulting in a non-negative integer.

This is obviously totally counter to what a programmer would plausibly want his code to do, but one of the first facts taught in Programming Courses is that compilers will choose non-obvious, incorrect ways to behave, if their code gives them an opportunity to do so.

If the programmer wants this behaviour deliberately, the conversion to ‘integer’ amounts to truncation towards zero of the initial floating-point number (which agrees with ‘the floor function’ for non-negative values).
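
Here is a hypothetical illustration of that trap; the names ‘Cplx’ and ‘cplx_abs’ are mine, and the type is deliberately over-simplified:

#include <cmath>
#include <iostream>

struct Cplx {
    int re, im;                               // deliberately integer-based
    Cplx(int r, int i = 0) : re(r), im(i) {}  // the only type-conversion constructor
};

double cplx_abs(const Cplx& z)
{
    return std::sqrt(static_cast<double>(z.re) * z.re +
                     static_cast<double>(z.im) * z.im);
}

int main()
{
    // The caller believes he is asking for the absolute of -2.9.  Instead,
    // -2.9 is first converted to 'int' (value -2, by truncation), then wrapped
    // into a temporary Cplx, so the answer comes out as 2 rather than 2.9.
    std::cout << cplx_abs(-2.9) << '\n';   // prints 2
    return 0;
}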

Yet, this type of error seems less likely in the implementation of square roots of complex numbers, that rely on square roots of real numbers, etc.

The correct thing to do is to declare a template function, which accepts the data-type of its parameter as its template variable. Then, the programmer would need to write a series of template specializations, in which this template variable matches certain data-types. Only, in the case of the ‘absolute function’ under Debian / Stretch, the implementers seem to have overlooked a template specialization to compute the absolute of a double-precision floating-point number.

However, actually solving the problem may often not be so easy, because the template-variable could indicate a complex object, which is itself of a template class, with a template variable of its own (that mentioned base-type).
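
What follows is a sketch of that layered approach, under the assumption that the programmer controls the function being declared (the name ‘my_abs’ is mine, not the header’s):

#include <cmath>
#include <complex>
#include <iostream>

template <typename T>
T my_abs(T x);                                               // general declaration

// Specializations for the base-types that are meant to be supported:
template <> float       my_abs<float>(float x)              { return std::fabs(x); }
template <> double      my_abs<double>(double x)            { return std::fabs(x); }
template <> long double my_abs<long double>(long double x)  { return std::fabs(x); }

// A complex object is itself of a template class, with a template variable of
// its own (the base-type), so it needs its own overloaded template:
template <typename T>
T my_abs(const std::complex<T>& z) { return std::abs(z); }

int main()
{
    std::cout << my_abs(-1.5) << '\n';                            // prints 1.5
    std::cout << my_abs(std::complex<double>(3.0, 4.0)) << '\n';  // prints 5
    return 0;
}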

One fact to note about all this is that there is not one set of headers. There are many versions of the headers, each of which ships with a different compiler version. Further, not all people use the GNU compilers; some people use Microsoft’s Visual Studio, for example… I just happened to base much of my coding on GCC v6.3.0.

An additional fact to observe is that the headers to be ‘#include’d are written ‘<complex>’, not ‘<complex.h>’. The missing ‘.h’ simply follows the naming convention for the C++ standard library’s own headers; the text of the header still exists on disk (under Debian / Stretch, in a directory such as ‘/usr/include/c++/6/’), but wading through the library’s internal template code makes verification very difficult. The current GNU compiler release is GCC v9.2, but I was building my projects against the C++14 language standard (the ‘-std=c++14’ command-line option), which was available.

Additionally, when programmers go to Web-sites like this one, the information given there is merely meant as a quick guide on how to program using these types of tools, and is not an exact match of whatever code was actually used to implement my headers.

One way in which I can tell that that code is not literally correct is the fact that no version information was provided on the Web-site. Another is the fact that, while the site uses data-types such as “double” and “float”, when programmers write the code that ships with compilers, they additionally tend to use data-types like ‘double_t’, which refers to the exact register-size on some FPUs, and which may actually be 80-bit. Further, the types ‘int32’ and ‘int64’ would be less ambiguous at the binary level than the declarations ‘int’ or ‘long int’ would be, if there was ever any explicit support for signed integers… Hence, if my code got ‘complex<double_t>’ to work, but that type was never specified on the site, then the site can just as easily have overlooked the type ‘int64’.

According to what I have read, C and C++ compilers are intentionally vague about what the difference between ‘double’ and ‘long double’ is, only guaranteeing that ‘long double’ will give at least as much precision as ‘double’. But, if the contents of an 80-bit (floating-point) register are stored in a 64-bit RAM location, then some least-significant bits of the significand are discarded, in addition to the power of two being given a new offset. In order to implement that, the compiler both uses and offers a type that refers to the exact register-contents, which may be 80 bits, or may be 64 bits, for a 64-bit CPU…
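
One way to see what a given compiler and machine actually offer is a small test program; this only inspects the standard macro ‘FLT_EVAL_METHOD’ and the ‘double_t’ type-alias, and does not prove anything about the registers themselves:

#include <cfloat>
#include <cmath>
#include <iostream>
#include <type_traits>

int main()
{
    // FLT_EVAL_METHOD reports how floating-point expressions are evaluated:
    // 0 = in the declared type, 1 = float promoted to double,
    // 2 = everything in long double (the classic 80-bit x87 behaviour).
    std::cout << "FLT_EVAL_METHOD = " << FLT_EVAL_METHOD << '\n';

    // double_t names the register-sized type referred to above:
    std::cout << std::boolalpha
              << "double_t is double:      "
              << std::is_same<std::double_t, double>::value << '\n'
              << "double_t is long double: "
              << std::is_same<std::double_t, long double>::value << '\n';
    return 0;
}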

(Updated 9/17/2019, 18h00 … )

Continue reading What can go wrong, when implementing complex numbers in C++ (Possible Solution).