One weakness of my algorithm, regarding polynomials.

Several months ago, I wrote a C++ program (with a C coding style) that computes numerical approximations of the roots of an arbitrary polynomial. Even though I’ve changed the code slightly since March, mainly to improve the chances that it will compile on other people’s computers, the basic algorithm hasn’t changed. Yet, because I wrote the algorithm, I know that it has certain weaknesses, and I’m about to divulge one of those…

The main search algorithm isn’t able to determine whether (x) is sufficiently close to a root. It’s really only able to determine whether (y) is sufficiently close to zero. One idea an implementer could have about this would be to name the maximum allowable error in (x) ‘epsilon’, to compute the derivative of the polynomial at the given value of (x), and then to multiply ‘epsilon’ by this derivative, perhaps naming the derived, allowable error in (y) ‘epsilond’. (A rough sketch of that idea appears after the list below.) But I consciously chose not to go that route because:

  • Computing the real derivative of the polynomial each time, though certainly doable, would have made my code more complex than I was willing to tolerate, and
  • Even had I done so, certain polynomials have a uselessly low derivative near one of their roots. This can happen because of a doubled root, or simply because two roots are too close together. ‘epsilon’ is already as small as is practical, given the number of bits the variables have to work with, which was a 52-bit significand.
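
A rough sketch of that rejected idea, with purely illustrative names and values that are not taken from my actual program:

#include <cmath>
#include <cstdio>
#include <vector>

/* Derivative of a polynomial whose coefficients are given highest power first. */
double poly_deriv_at(const std::vector<double> &coeffs, double x) {
    int n = (int) coeffs.size() - 1;               /* degree of the polynomial */
    double result = 0.0;
    for (int i = 0; i < n; ++i)
        result += coeffs[i] * (n - i) * std::pow(x, n - i - 1);
    return result;
}

int main() {
    /* (x - 100)^2 = x^2 - 200x + 10000 : a doubled root at x = 100 */
    std::vector<double> p = {1.0, -200.0, 10000.0};
    double epsilon = 1e-12;                        /* allowable error in (x) */
    double epsilond = epsilon * std::fabs(poly_deriv_at(p, 100.0));
    /* Near the doubled root the derivative is ~0, so 'epsilond' collapses
       to a uselessly small tolerance on (y). */
    std::printf("epsilond = %g\n", epsilond);
    return 0;
}

As the doubled-root example in main() shows, this approach would demand an impossibly small error in (y) exactly where the search is hardest.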

So my program instead computes a ‘notional derivative’ of the polynomial near (x), which is really just the derivative of the highest-exponent term by itself. This little detail could actually make it difficult for some observers to understand my code. At the same time, my program limits the lower extent of this derivative to an arbitrary, modest fraction, just so that (x) won’t need to be impossibly precise, merely to satisfy the computation of how precise (y) would need to be.
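
A minimal sketch of what that amounts to; the names, and the exact clamping fraction, are chosen here only for illustration and aren’t copied from my program:

#include <cmath>
#include <cstdio>
#include <vector>

/* 'Notional derivative': differentiate only the leading term, and clamp the
   result from below, so that the tolerance on (y) never becomes impossibly
   strict.  The clamp value 0.125 is just an example of "an arbitrary,
   modest fraction". */
double notional_deriv(const std::vector<double> &coeffs, double x) {
    int n = (int) coeffs.size() - 1;               /* degree of the polynomial */
    double d = std::fabs(coeffs[0] * n * std::pow(x, n - 1));
    const double floor_value = 0.125;
    return (d < floor_value) ? floor_value : d;
}

int main() {
    /* The first example polynomial below: (x-100)(x-101)(x-102) */
    std::vector<double> p = {1.0, -303.0, 30602.0, -1030200.0};
    double epsilon = 1e-12;                        /* allowable error in (x) */
    std::printf("epsilond = %g\n", epsilon * notional_deriv(p, 101.0));
    return 0;
}

For that polynomial, the notional derivative near x = 101 comes out to roughly 30600, while the real derivative there is only 1, which is exactly the kind of discrepancy described next.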

But then, a predictable problem arises: the real derivative of a given polynomial might be much smaller than the derivative of this highest-exponent term alone. And if it is, then satisfying … for (y) will no longer satisfy … for (x).

The following output illustrates an example of this behaviour:

 


"Maxima" Preparation:

(%i1) expand((x-100)*(x-101)*(x-102));
(%o1) x^3-303*x^2+30602*x-1030200

(%i2) expand((x-100)*(x-110)*(x-120));
(%o2) x^3-330*x^2+36200*x-1320000

(%i3) expand((x-100)*(x-101));
(%o3) x^2-201*x+10100


Approximations from Program:

dirk@Phosphene:~$ poly_solve 1 -303 30602 -1030200

101.00000000013  +  0I
99.999999999938  +  0I
101.99999999993  +  0I


dirk@Phosphene:~$ poly_solve 1 -330 36200 -1320000

100  +  0I
110  +  0I
120  +  0I


dirk@Phosphene:~$ poly_solve 1 -201 10100

100  +  0I
101  +  0I


dirk@Phosphene:~$ 



(Updated 9/22/2019, 5h40 … )

(As of 9/21/2019, 19h05 : )

As of today, it has become possible for anybody who might be using my program to change the base type of the complex numbers by changing only one line of code. I.e., the observer may put ‘typedef complex<long double> CC;’ on line 22, change nothing else, and the program will compile.

The way it still was only yesterday, I had one remaining line of code which would subtract a ‘complex<double_t>’ from a ‘double’. That offence has been removed.
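
To illustrate the kind of line that was at fault (this is a constructed example, not the actual line from my program): std::complex’s arithmetic operators are templated on a single base type, so once the typedef names ‘complex<long double>’, mixing in a plain ‘double’ operand makes template deduction fail, and the subtraction no longer compiles unless the scalar is converted first.

#include <complex>
using std::complex;

typedef complex<long double> CC;     /* the one-line change on Line 22 */

int main() {
    CC z(1.0L, 0.0L);
    double d = 0.5;

/*  CC bad = z - d;     Will not compile: operator- for std::complex
                        requires both operands to share the same base type. */

    CC good = z - (long double) d;   /* converting the scalar first is fine */
    (void) good;
    return 0;
}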


(Update 9/22/2019, 5h40 : )

In addition to what I did yesterday, I have now improved the program somewhat, in that if the observer decides to change Line 22 to:

 


typedef complex<long double> CC;

 

Not only will the code still compile, but it will now choose ‘epsilon’, as well as the printed precision, automatically, so that it can display results that are more accurate.

 


"Maxima" Preparation:

(%i1) expand((x-100)*(x-101)*(x-102));
(%o1) x^3-303*x^2+30602*x-1030200


Approximations from Program:

dirk@Phosphene:~$ poly_solve 1 -303 30602 -1030200

101  +  0I
100  +  0I
102  +  0I


dirk@Phosphene:~$ poly_solve 1 0 0 -1e-12

-5e-05  +  8.660254037844386e-05I
-5e-05  +  -8.660254037844386e-05I
0.0001  +  0I


dirk@Phosphene:~$ poly_solve 1 0 0 0 0 0 0 0 0 0 0 0 -1

-1  +  0I
-0.5  +  0.8660254037844386I
-0.5  +  -0.8660254037844386I
1  +  0I
0.5  +  0.8660254037844386I
0.5  +  -0.8660254037844386I
0.8660254037844386  +  0.5I
0.8660254037844386  +  -0.5I
-0.8660254037844386  +  0.5I
-0.8660254037844386  +  -0.5I
0  +  1I
0  +  -1I


dirk@Phosphene:~$

 

In this process I’ve also learned something important. On a typical x86-64 PC, even though ‘long double’ is longer than ‘double’, it does not actually give the standard, 128-bit quadruple-precision arithmetic. A ‘long double’ does occupy 128 bits (16 bytes) of RAM, but much of that is padding for alignment: the value that the Floating-Point Unit (the ‘FPU’) computes with, and that gets stored, is its 80-bit extended-precision format, with a 64-bit significand. Performing true 128-bit, quadruple-precision calculations in hardware is still not standard on PCs and laptops today.

What this means is that, even when ‘long double’ is declared and working as designed, the maximum number of reliable significant decimal digits is actually 18, not 33. The latest version of my program will therefore choose a value for ‘epsilon’ of 1.0E-15, and a maximum number of printed digits of 16.

Yet, as the second set of results above demonstrates, I have something to show for it.

As long as Line 22 of the code is left as first published, it will only print results to 14 significant digits, which are not always accurate.
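
For what it’s worth, the following sketch shows one way ‘epsilon’ and the printed precision could be derived automatically from whichever base type Line 22 selects, using std::numeric_limits. The exact offsets are chosen here only so that the numbers match the values I just quoted; they are not necessarily the ones my program uses:

#include <cmath>
#include <complex>
#include <iostream>
#include <limits>

typedef std::complex<long double> CC;    /* or std::complex<double> */
typedef CC::value_type FLOAT;            /* the underlying real type */

int main() {
    /* digits10: the number of decimal digits the type keeps exactly;
       15 for 'double', 18 for the 80-bit 'long double' format. */
    const int digits = std::numeric_limits<FLOAT>::digits10;

    /* Leave a few digits of headroom for accumulated rounding error. */
    const FLOAT epsilon = std::pow((FLOAT) 10, (FLOAT) -(digits - 3));

    std::cout.precision(digits - 2);     /* maximum number of printed digits */
    std::cout << "epsilon = " << epsilon << std::endl;
    return 0;
}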

Dirk

 
