Some Observations about Roots with Multiplicities.

One of the subjects I had visited in the past was writing a C++ program that approximates the roots of polynomials, and which needed to deal with ‘Doubled Roots’ in a special way, because the search algorithm within that program cannot handle doubled roots directly. And what I found was, roots can not only be doubled, but can have multiplicities higher than (2). After not having pondered that programming project for some time, I now come back to rethink it, and find that it can be given a lecture of sorts, all to itself.
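To give a quick example of what is meant (my own illustration, not taken from the work-sheet): the cubic

(x - 1)^3 = x^3 - 3*x^2 + 3*x - 1 = 0

has only one distinct root, (x = 1), but that root has a multiplicity of (3). At such a root the polynomial’s derivative is also zero, which is exactly the sort of situation that can trip up a simple search algorithm.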

So, the following links lead to a work-sheet, in which I try to explain Roots with Multiplicities, perhaps to readers with limited knowledge of Calculus:

Link to Letter-Sized PDF

Link to EPUB File, for viewing on Mobile Devices

And as those two documents mention, the following is a link to the earlier blog posting, from which readers can download the C++ program, and where they can find instructions on how to compile it on Linux computers:

Link to Earlier Blog Posting

Hence, the C++ program linked to in the earlier blog posting needed to make use of the subject that the two documents above describe.

N.B.:

(Updated 5/06/2019, 13h15 … )

(As of 4/24/2019 : )

The C++ program I linked to is not foolproof. For example, it will fail, oddly, to compute the square root(s) of 100,000,000, yet it will easily compute the square root(s) of 10,000,000,000. The mechanism behind this malfunction is known to me…

I have set the search algorithm to allow for a maximum number of ‘misses’ – in which the most-recently computed value of abs(y) exceeds the previous value – to be (N + 2), where (N) is the degree of the current polynomial. This results in a very tight budget for quadratics.

For square roots, the behaviour can be to start with a suggested root whose absolute value is merely (1). But then, successive guesses for (x) will need to become larger than that, each time counting as a miss. The search algorithm can overshoot its budget of such misses in some trivial examples. (The sketch below illustrates the general idea.)
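Purely to illustrate the miss-budget mechanism, here is a minimal, self-contained sketch. It is not the actual program’s code: the test polynomial, the plain Newton step, and all variable names are assumptions made for this example.


// A minimal sketch of the miss-budget idea (not the actual program's code):
// each time the newest abs(y) fails to improve on the previous one, it
// counts as a miss, and the search gives up once (N + 2) misses are spent.
#include <complex>
#include <cmath>
#include <cstdio>

typedef std::complex<double> CC;

int main() {
    const int N = 2;                        // degree of the quadratic
    const int max_misses = N + 2;           // the tight budget mentioned above

    // p(x) = x^2 - 1e8, whose roots are +/- 10,000 (a hypothetical test case)
    auto p  = [](CC x) { return x * x - CC(1.0e8, 0.0); };
    auto dp = [](CC x) { return CC(2.0, 0.0) * x; };

    CC x(1.0, 0.0);                         // starting guess with absolute value 1
    double prev = std::abs(p(x));
    int misses = 0;

    while (misses <= max_misses && prev > 1.0e-6) {
        x -= p(x) / dp(x);                  // a plain Newton step, for illustration
        double y = std::abs(p(x));
        if (y >= prev) ++misses;            // a 'miss': abs(y) did not shrink
        prev = y;
    }

    printf("x = (%g, %g), misses used: %d\n", x.real(), x.imag(), misses);
    return 0;
}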

(4/24/2019, 22h50 : )

Solved.

(Update 4/25/2019, 7h15 : )

I had previously used “wxMaxima”, the Computer Algebra System, as if it were a typesetting program, to create the two documents linked above. The results were sub-optimal.

Just yesterday evening, I recomposed both work-sheets, using the ‘LyX’ integration of ‘SageTex’. This resulted in a better PDF, as well as in an EPUB File, the latter of which should be viewable on most mobile devices.

By doing this, I have proved that there is in fact some practical benefit to integrating SageTex with LyX. But I had to follow my own advice, from my own blog posting, on how to edit the .TEX File for use with ‘other than pdflatex’, and then to run the commands twice…


(Update 4/26/2019, 20h30 : )

I seem to have found an interesting weakness in my root-approximation program.

When told to find the square roots of 10⁻⁶, or the cube roots of -10⁻⁹, the first root(s) found are accurate, but the last root is slightly inaccurate. After some pondering, I think I know why this is happening.

The first root(s) that the program finds are polished, and then factored out of the original polynomial, so that what is left is a linear equation, from which the last root must be found. For that last step, my program turns polishing off. But that, by itself, would not be enough of a reason for the last-found root to become inaccurate.

According to pure Math, this should not happen. But because each factoring-out is carried out with finite-precision complex arithmetic, small round-off errors seem to accumulate in what remains of the polynomial nevertheless.
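For readers unfamiliar with the term, ‘factoring out’ a found root is ordinarily done by synthetic division, along these lines. This is a general sketch of the technique, not necessarily the program’s actual code:


// A general sketch of deflation (factoring a found root out of a polynomial)
// by synthetic division; the program's own implementation may differ.
#include <complex>
#include <vector>

typedef std::complex<double> CC;

// coeffs[0] is the leading coefficient. Divides the polynomial by (x - r),
// in place, discarding the remainder (which should be near zero when r
// really is a root). Round-off from this step accumulates in the quotient.
void deflate(std::vector<CC> &coeffs, CC r) {
    CC b = coeffs[0];
    for (size_t i = 1; i < coeffs.size(); ++i) {
        coeffs[i] += r * b;
        b = coeffs[i];
    }
    coeffs.pop_back();  // the last entry is the remainder; drop it
}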

Because a linear equation’s root is never polished, this results in a last-found root only accurate to within 10⁻¹², while 14 significant digits are being printed.

 


dirk@Phosphene:~$ poly_solve
Usage:
./roots_2 [!] coeff1 coeff2 coeff3 coeff4 ...
In the above case, prints the solutions to:
coeff1*x^3 + coeff2*x^2 + coeff3*x + coeff4 = 0.0
All command-line arguments must be numbers,
separated by spaces. The only way to omit a term,
is to put a zero.
Minimum degree: 1 (a linear equation).

Because all the polynomials need to be normalized,
i.e., the coefficient of the highest exponent
made equal to 1.0, values deemed too low as the first
argument, will arbitrarily be set to +1.0 .

Precede coefficients-list with an exclamation-mark,
i.e., with a ! , to turn polishing off.
Sometimes, polishing can break finding roots.

Imaginary components can be added to coefficients, by typing
a comma after the real component, followed without space
by the imaginary component(s).

dirk@Phosphene:~$ poly_solve 1 0 -1e-4

-0.01  +  0I
0.01  +  0I


dirk@Phosphene:~$ poly_solve 1 0 -1e-6

-0.001  +  0I
0.00099999999999989  +  0I


dirk@Phosphene:~$ poly_solve 1 -1

1  +  0I


dirk@Phosphene:~$ poly_solve 1 -1e-3

0.00099999999999989  +  0I


dirk@Phosphene:~$ 

 

The only way to solve this problem would be to write a separate solver, just for linear equations.

(Update 4/26/2019, 21h50 : )

Done.
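For completeness, such a dedicated solver only needs to evaluate the root directly. The following is a minimal sketch, assuming the same CC typedef for a complex type that appears in the excerpts further down; the program’s actual function name and code may differ:


// A minimal sketch of a dedicated linear-equation solver: for
// coeff0*x + coeff1 = 0, the root is computed directly, with a single
// complex division and no iterative search at all.
#include <complex>

typedef std::complex<double> CC;

CC solve_linear(CC coeff0, CC coeff1) {
    return -coeff1 / coeff0;
}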


OTOH, if I just wanted to brag, I could show the world what the twelve twelfth roots of (+1) are:

 


dirk@Phosphene:~$ poly_solve 1 0 0 0 0 0 0 0 0 0 0 0 -1

-1  +  0I
-0.5  +  0.86602540378444I
-0.5  +  -0.86602540378444I
1  +  0I
0.5  +  0.86602540378444I
0.5  +  -0.86602540378444I
0.86602540378444  +  0.5I
0.86602540378444  +  -0.5I
-0.86602540378444  +  0.5I
-0.86602540378444  +  -0.5I
0  +  1I
0  +  -1I


dirk@Phosphene:~$

 

(Observation 4/27/2019, 15h10 : )

There is a detail which I already mentioned in the earlier blog posting, but which I should reiterate here. In C++, the ability to work with complex numbers, as simple as it may look in the resulting source code, is not built into the language itself, but is rather provided by library header files. This means that, while in standard C++ the behaviour is completely predictable as to what happens when we divide a ‘double’ by an ‘int’, with complex numbers this behaviour is inherently implementation-dependent.

According to certain header files, the operation to divide an object of type ‘complex’ by an ‘int’ may be defined, but according to other compilers, and therefore other header files, it may be undefined. And this will mostly result in compiler error messages, if other people, working on other platforms, try to compile my suggested source code, which I can really only attest compiles fine on my own platforms: ‘Debian / Jessie’ and ‘Debian / Stretch’ (as previously stated).

I have taken care of most of the arithmetic operations where a real number needs to be mixed with a complex object, by creating a temporary complex object first, for which the arithmetic operation is defined, as existing between two complex objects. (A small sketch of this workaround appears after the list below.)

Creating temporary objects improves the reliability with which the source code will compile, but also slows down execution. Therefore, whether to use temporary objects or not represents a choice between priorities for me:

  • Do I want to make sure that the code can be compiled by everybody?
  • Do I want the code, which compiles fine on my own platforms, to execute as fast as possible?
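Here is that sketch. It is a hypothetical example; only the CC typedef is taken from the actual source code:


// A hypothetical illustration of the temporary-object workaround described
// above; only the CC typedef is taken from the actual source code.
#include <complex>

typedef std::complex<double> CC;

CC halve(CC a) {
    int n = 2;
    // return a / n;               // may or may not compile, depending on headers
    return a / CC(double(n), 0.0); // portable: division between two complex objects
}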

There is a certain situation in which my source code sacrifices compatibility for speed: assignment and initialization / construction. I have found that under ‘Debian / Stretch’, it’s perfectly fine to assign a ‘double’ to a ‘complex’, even though it was not fine to divide one by the other! Furthermore, with the ‘Debian / Stretch’ compiler, it’s equally fine to initialize a ‘complex’ object, or to assign to one, from an ‘int’!

And so, what I finally chose to do this afternoon was to replace assignment from ‘int’ to ‘complex’ with assignment from ‘double’, in all cases. Firstly, this improves speed further, and secondly, it demands less of the header files that are pre-installed with any compiler.
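In other words (a hypothetical fragment, written only to show the distinction):


// A hypothetical fragment contrasting the three forms of assignment
// discussed above; not taken from the actual source code.
#include <complex>

typedef std::complex<double> CC;

int main() {
    CC z(0.0, 0.0);
    z = 1;             // assignment from 'int': fine on some platforms only
    z = 1.0;           // assignment from 'double': what the code now uses
    z = CC(1.0, 0.0);  // a temporary object: the most portable form of all
    return 0;
}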

However, if my readers find that the code still won’t compile, then this is where they should next try to adapt it: Look for any place where a ‘double’ is either being assigned to a ‘complex’ object, or being used to initialize one, and put a temporary object instead, such as

CC(1.0, 0.0)

or

CC(0.0, 0.0)

or

CC power(1.0, 0.0);

 

(Update 4/28/2019, 15h35 : )

Even though it should follow implicitly from what I wrote above, I should now state explicitly that adding the specialized solver, for the case of linear equations, did in fact get rid of the inaccuracy which I had commented on above, as follows:

 


dirk@Phosphene:~$ poly_solve 1 0 -1e-4

-0.01  +  0I
0.01  +  0I


dirk@Phosphene:~$ poly_solve 1 0 -1e-6

-0.001  +  0I
0.001  +  0I


dirk@Phosphene:~$ poly_solve 1 0 -1e-8

-0.0001  +  0I
0.0001  +  0I


dirk@Phosphene:~$ poly_solve 2 0 -2e-8

-0.0001  +  0I
0.0001  +  0I


dirk@Phosphene:~$ poly_solve 1 0 0 1e-9

0.0005  +  0.00086602540378444I
0.0005  +  -0.00086602540378444I
-0.001  +  0I


dirk@Phosphene:~$ poly_solve 2 0 0 2e-9

0.0005  +  0.00086602540378444I
0.0005  +  -0.00086602540378444I
-0.001  +  0I


dirk@Phosphene:~$ poly_solve 17 0 0 1.7e-8

0.0005  +  0.00086602540378444I
0.0005  +  -0.00086602540378444I
-0.001  +  0I


dirk@Phosphene:~$

 

Hint:

Even though I did so above for testing purposes, I would not generally expect coefficients smaller than 10⁻⁶ to yield accurate results.

Added Observation:

When my program ‘normalizes’ a polynomial, it replaces the coefficient of the highest exponent with (1.0), and divides all the others by what that coefficient first was. But because all the coefficients are stored as complex numbers, of which real numbers are merely a special case, their division is carried out by the general complex-division formula. The question could be asked: ‘Because floating-point fractions are being stored in binary form, will round-off errors take place, if the complex number being divided by is something other than a power of two?’

Well, for complex divisions whose imaginary components are zero, this is in fact what gets computed (along with numerous multiplications by, and additions of, zero):

 


n = numerator
d = denominator
q = quotient

q := (n · d) / d²

 

In the special case where (d) above is an integer, and (n) is already some binary multiple of (d), the answer is No: there should not be any round-off error that wasn’t present before. There should only be the wasted CPU cycles to contend with, especially in the form of those multiplications by zero, and the additions of the resulting zeroes, which do not get optimized out of the code.
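Spelled out as code, the general division looks roughly like this. It is a sketch of the textbook formula; actual library implementations may differ, for instance by scaling to avoid intermediate overflow:


// A sketch of the textbook complex division; real library implementations
// may differ (for instance, by scaling to avoid intermediate overflow).
#include <complex>

typedef std::complex<double> CC;

CC divide(CC n, CC d) {
    double denom = d.real() * d.real() + d.imag() * d.imag();  // |d|^2
    return CC((n.real() * d.real() + n.imag() * d.imag()) / denom,
              (n.imag() * d.real() - n.real() * d.imag()) / denom);
}
// With both imaginary components equal to zero, this reduces to
// q = (n.real() * d.real()) / (d.real() * d.real()), plus the
// multiplications by zero, and additions of zero, mentioned above.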

The fact must duly be noted, however, that simply the term (10⁻⁹) contains some error after being converted into binary, because any negative power of (10), just like any negative power of (5), is a repeating fraction in binary, (5) being coprime with (2), the base of binary numbers. That repeating fraction gets truncated. Yet, if (10⁻⁹) did not break the system, multiplying it by an integer twice, and then dividing it by the same integer twice, should not break the system either.
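This truncation is easy to demonstrate (a trivial test program, not part of the root-finder):


// A trivial demonstration (not part of the root-finder): 10^-9 has no exact
// binary representation, while a negative power of two is stored exactly.
#include <cstdio>

int main() {
    printf("%.20g\n", 1e-9);          // prints a value slightly off from 10^-9
    printf("%.20g\n", 0.0009765625);  // exactly 2^-10, stored without error
    return 0;
}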


(Update 5/06/2019, 13h15 : )

A question which I’ve blogged about before, and which some of my readers may have read my postings about, is how, or when, it’s possible for me to generate an EPUB File that contains typeset Math, and which users can generally view on their mobile devices. According to This Posting, the problem I had encountered was that the typeset Math glyphs can be converted into small PNG images, which can be flowed with the HTML text. But if I do that, there is an issue with the alignment of the images to the text: the bottoms of the images always line up with the text. Certain forms used in Math, such as Matrices, must not be displayed in this way, because the expression will end up looking as though the Matrix has been raised to some power, or been given a subscript. A Matrix must always be aligned by its middle to the line of HTML text it is flowed into.

And so, one way to bypass this problem is to try to export the LaTeX as HTML, or as XHTML with MathML, so that the resulting formatting can be viewed in a mobile device’s EPUB3 reader that supports MathML. But I ran into a snag.

When I create LaTeX documents using LyX, and specifically when using the SageMath integration with LyX, none of the mechanisms inherent in my build of LaTeX are capable of generating valid MathML.

And so, in this posting, I have offered an EPUB File which does not attempt to use MathML, instead using flowed images. I have not solved any problems in how to typeset Math; it just so happens that these work-sheets contain no Matrices, so that flowing each image with its bottom aligned to the text produces correct results.

The question remains unanswered as to what I will do with future SageMath work-sheets typeset within LyX. They will again not generate any valid MathML, and I’ll need to deal with them on a case-by-case basis.

Dirk

 
