Simplifying the approach to finding roots of polynomials.

In some cases, the aim of my postings is to say, ‘I am able to solve a certain problem – more or less – and therefore, the problem is solvable.’ It follows from this position that my solutions are not assumed to be better, by any means, than mainstream solutions. So recently, I suggested an approach to finding the roots of polynomials numerically, again just to prove that it can be done. And then, one observation which my readers might have made would be that my approach is only accurate to within (10⁻¹²), while mainstream solutions are accurate to within (10⁻¹⁶). And one possible explanation for this would be that the mainstream solutions polish their roots, which I did not get into. (:1)

(Edit 2/8/2019, 6h40 : )

A detail which some of my readers might have missed is that, when I refer to a ‘numerical solution’, I’m generally referring to an approximation.

(End of Edit, 2/8/2019, 6h40 . )

But another observation which I made was that mainstream code examples are much tighter than what I suggested, which poses the obvious question: ‘Why can mainstream programmers do so much, with much less code complexity?’ And I think I know one reason.

The mainstream example I just linked to bypasses a concept which I had suggested, which was to combine conjugate complex roots into quadratic terms, which could be factorized out of the original polynomial as such. What the mainstream example does instead is to assume that the coefficients of the derived polynomials can be complex, even though the original one only has real coefficients. Then, once a complex root has been found, factorizing it out results in such a polynomial with complex coefficients, after which factorizing out the conjugate causes the coefficients of the quotient to become real again.

(Edited 1/30/2019, 8h50… )

(Updated 2/9/2019, 23h50… )

I’ve just written some source-code of my own, to test my premises…

(As of 1/26/2019, 20h15: )

At first glance this might cause the object-code to seem more complicated. But it results in simpler source code, because a variable can simply be declared to be of complex type, both in the FORTRAN example linked to, and in other high-level programming languages, such as C++.
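To illustrate, below is a minimal sketch in C++ of the deflation idea just described – my own illustration, not code taken from the mainstream example – in which the coefficients are allowed to be complex throughout, and in which ‘CC’ names the complex type, as in my own source-files:

#include <complex>
#include <vector>

typedef std::complex<double> CC;

// Synthetic division of P(x) by (x - r), with complex coefficients
// throughout.  Coefficients are stored highest power first.  Since r is
// assumed to be a root, the remainder is ~0 and is discarded here.
std::vector<CC> deflate(const std::vector<CC> &p, CC r) {
    std::vector<CC> q;
    CC acc = p[0];
    for (size_t i = 1; i < p.size(); ++i) {
        q.push_back(acc);
        acc = acc * r + p[i];
    }
    return q;
}

int main() {
    // x^3 - 3x^2 - 2x + 15, one of the test polynomials from the output below:
    std::vector<CC> p = { CC(1,0), CC(-3,0), CC(-2,0), CC(15,0) };
    CC r(2.4768359735688, -1.2422292374924);

    std::vector<CC> q1 = deflate(p, r);             // complex coefficients
    std::vector<CC> q2 = deflate(q1, std::conj(r)); // real again, within rounding
    // q2 now holds the remaining factor, approximately (x + 1.9536719471376).
    return 0;
}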


 

1:) “Polishing a root” is usually a practical trick used when solving polynomials. It uses the remainder after the synthetic division (see “Deflation”, according to the reference document linked to above), which would be zero if the divisor were exact, in order to make the divisor more exact, rather than relying entirely on the search algorithm to generate a root as accurately as possible.

(Edit 1/30/2019, 8h50 : )

I realize that this way of polishing the root differs from what the reference document, which I linked to above, suggests. (:3)

I have just gotten the full concept to work. The trick that I needed was to use the root implied by the divisor, and to plug that in to the quotient, as my basis for estimating what the derivative of the remainder could be, with respect to the divisor. I make two passes.

This results in yet another situation where a doubled root would prevent my underlying strategies from working – since it would result in a derivative of zero as well – except for the fact that I’ve dealt with doubled roots explicitly, according to my own ‘Hypothetical Algorithm’, linked to above, which is no longer hypothetical.

The reason for which this works, could be as follows:

At the root of the divisor, the divisor should be equal to zero. But, assuming that the root is not doubled, the quotient should be non-zero. In theory, the two would be multiplied, to result in the dividend. Hence, the effect of small changes in the divisor on the dividend should be equal to the quotient. The only fact which makes this harder to understand is that the divisor does not exactly correspond to a root of the dividend, for which reason there is a non-zero remainder.
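In symbols – my own restatement, where (r) is the root implied by the divisor, (Q) the quotient, and (R) the remainder:

\[
P(x) = (x - r)\,Q(x) + R
\quad\Longrightarrow\quad
P(r) = R, \qquad
P'(r) = Q(r) + (r - r)\,Q'(r) = Q(r)
\]

So a single Newton-style correction would be r ← r − R / Q(r), which is the recipe described next.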

What can be observed in the case of a first-order dividend – that the derivative is always (-1) when the quotient is just the constant term – is also to be applied in non-trivial cases, by using the negative of this quotient-value, at the root implied by the divisor. The remainder needs to be divided by this derivative, and then either subtracted from the divisor, or added to the estimated root. (:2)
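And as a concrete sketch of one such polishing pass – my own restatement, not the exact code in ‘roots_1.cpp’:

#include <complex>
#include <vector>

typedef std::complex<double> CC;

// One polishing pass: P(x) = (x - r)*Q(x) + R, so P(r) = R and
// P'(r) ~= Q(r), provided r is not a doubled root.  The Newton-style
// correction is then  r := r - R / Q(r).  Assumes degree >= 1.
CC polish_once(const std::vector<CC> &p, CC r) {
    // Synthetic division of P by (x - r): Horner's scheme yields the
    // quotient's coefficients, and ends with the remainder.
    std::vector<CC> q;
    CC acc = p[0];
    for (size_t i = 1; i < p.size(); ++i) {
        q.push_back(acc);
        acc = acc * r + p[i];
    }
    CC remainder = acc;

    // Evaluate the quotient at r, again by Horner's scheme.
    CC q_at_r = q[0];
    for (size_t i = 1; i < q.size(); ++i)
        q_at_r = q_at_r * r + q[i];

    // Guard against a (near-)doubled root, where Q(r) ~= 0.
    if (std::abs(q_at_r) < 1e-300)
        return r;

    return r - remainder / q_at_r;
}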

What results is output accurate to within (10⁻¹⁴), instead of only to within (10⁻¹²).

http://dirkmittler.homeip.net/text/roots_1.cpp

(Code Updated 2/1/2019, 13h45. )

The source-file ‘roots_1.cpp’ is hard-coded to produce only the test output below.

 


dirk@Plato:~/Programs/Dirk_Roots$ ./roots_1
x^3 -3x^2 -2x + 6 = 0 :

1.4142135623731  +  0I
-1.4142135623731  +  0I
3  +  0I


x^3 -3x^2 -2x + 15 = 0 :

2.4768359735688  +  -1.2422292374924I
2.4768359735688  +  1.2422292374924I
-1.9536719471376  +  0I


x^4 +2x^2 + 1 = 0 :

0  +  1I
0  +  1I
0  +  -1I
0  +  -1I


x^5 = 0 :

0  +  0I
0  +  0I
0  +  0I
0  +  0I
0  +  0I


x^8 + 3x^6 -3x^5 +x^4
 +x^3 +2x^2 + 1 = 0 :

-0.44236729147741  +  1.8678073968046I
-0.44236729147741  +  -1.8678073968046I
0.12390731304389  +  0.59992017540847I
0.12390731304389  +  -0.59992017540847I
-0.60417361259701  +  0.49571073126293I
-0.60417361259701  +  -0.49571073126293I
0.92263359103053  +  -0.57704637832846I
0.92263359103053  +  0.57704637832846I


dirk@Plato:~/Programs/Dirk_Roots$ 

 

Now, in case the reader prefers a C++ program which, when compiled, gives a command-line from which to type in arbitrary polynomials, I have created a rudimentary command-line version of the same program:

http://dirkmittler.homeip.net/text/roots_2.cpp

(Link Added 2/1/2019, 21h25. )

(Comment 2/5/2019, 20h35 : )

Some readers might find it odd that I used the header file <complex.h> instead of <complex> in roots_1.cpp, along with the old function ‘cexp()’. The reason I did that is that, as of December 2016, the default standard under which Linux builds was changed to GNU++14, replacing C++14, and creating issues.

I’m fully able to compile the code with the more-correct function overloads, if I set the flag ‘-std=c++14’. But what I really wanted was some quick-and-dirty source-code, which would compile without my readers having to set compiler flags. If using the header file <complex>, then in the function ‘search_root()’, near Line 148, the relevant line of code actually needs to be changed to:

 


	CC xb = exp(M_PI * 1i / CC(N + 1.0, 0.0));

 

Along the same lines, dividing a complex number by an integer, or multiplying one by an integer, is not always a defined operation. Whether that’s defined or not may depend on one specific header file… Therefore, the more-correct thing to do is to convert the integer to a complex number first, and then to divide by it…
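For example – a sketch with my own variable names, where (z) is some complex value, (N) an integer, and ‘CC’ is the typedef from above:

// May or may not compile, depending on which headers are in effect:
// CC xb = z / (N + 1);

// Portable: promote the integer to a complex value first.
CC xb = z / CC(N + 1.0, 0.0);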

(End of Comment, 2/5/2019, 20h35 . )

(Comment, 3/2/2019, 19h10 : )

In case the reader did not understand: the way to compile the second of the two source-files, which is the useful one, on an up-to-date Linux computer, is:

 


$ g++ -std=c++14 -Wall -o roots_2 roots_2.cpp

 

(End of Comment, 3/2/2019, 19h10. )

(Comment, 3/2/2019, 23h00 : )

The source code in ‘roots_1.cpp’ previously compiled properly only on an up-to-date, Debian / Stretch computer, with ‘-std=gnu++14’. Because it’s generally important to be able to reproduce even test results, I have now edited this code slightly, so that it will also compile and run properly on a Debian / Jessie machine, with:

 


$ g++ -std=gnu++11 -Wall -o roots_1 roots_1.cpp

 

(End of Comment, 3/2/2019, 23h00. )

(Warning, 2/9/2019, 23h50 : )

There exists a bizarre bug in the header files and/or libraries of some Linux computers, when trying to use the (correct) include <complex>. The overloaded absolute function ‘abs()’ then works for complex parameters, but no longer for real parameters! This affected how the secondary polishing works on Debian / Jessie computers. The intended effect is that imaginary or real components less than ‘epsilon’ be replaced with zero. The actual effect was that components less than 1.0 were replaced with zero.

Just in case the same bug exists on other architectures, I’ve written a workaround.
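For what it’s worth, the symptom is consistent with the C library’s ‘int abs(int)’ overload being selected for a real argument, which would truncate it. A minimal sketch of the pitfall, and of one possible workaround – not necessarily the workaround in ‘roots_1.cpp’:

#include <complex>
#include <cmath>
#include <iostream>

int main() {
    double component = 0.5;

    // If the integer overload were selected, abs(0.5) would truncate to
    // abs(0) == 0, while abs(1.7) would become 1 -- matching the symptom
    // above, where only components less than 1.0 compared below epsilon.

    // These spellings cannot pick the integer overload:
    double a1 = std::fabs(component);
    double a2 = std::abs(std::complex<double>(component, 0.0));

    std::cout << a1 << " " << a2 << std::endl;  // prints: 0.5 0.5
    return 0;
}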

(End of Warning, 2/9/2019, 23h50 . )

 


(Update 2/2/2019, 10h55 : )

2:)

I suppose that a valid question which my reader could have – since the art of guessing the derivative of the remainder, with respect to the divisor, for the purpose of polishing the root, is admittedly guesswork on my part – is how I can be reasonably confident that it ‘works’.

And the answer is relatively straightforward. My C++ program actually tests each root it estimates, and does not just estimate it: the program plugs each root back in to the polynomial, and verifies that the value of (y) which results is extremely small – small to whatever extent matches the level of accuracy with which (x) was to be found. If the value of (y) is not as small as the situation requires, the algorithm stops computing any roots, and also stops outputting any results, for the given polynomial. The result is then no output.
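A sketch of that acceptance test – the function name is my own, and I again reuse the ‘CC’ typedef:

#include <complex>
#include <vector>

typedef std::complex<double> CC;

// Accept an estimated root only if |P(root)| is small enough.
bool root_is_acceptable(const std::vector<CC> &p, CC root, double epsilon) {
    CC y = p[0];
    for (size_t i = 1; i < p.size(); ++i)
        y = y * root + p[i];          // Horner evaluation of P(root)
    return std::abs(y) < epsilon;     // e.g. epsilon == 1e-12
}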

Well, when I had tried other methods of estimating the derivative of the remainder, and had set the Boolean ‘polish’ to ‘true’, I was getting no output for certain polynomials. But if I set ‘polish’ to ‘false’, I got output. What this means is that the earlier attempts to polish were throwing the value of (x) further away from the exact root than (x) already was, so that a value of (y) followed which was no longer as close to zero as it was supposed to be. Those roots were then omitted from the output, because polishing them had actually made them inaccurate.

So what I know now is that my present method of polishing the roots does not make them less accurate than they were before, even though I’m applying it twice.

As for knowing that my accuracy has improved:

I can set the level of precision with which the roots are printed, to 14 decimal digits. Without polishing, even in cases where I knew digits would be repeated, the last 2 digits of output, of real roots, were not repeated exactly. This would be consistent with having set ‘epsilon’ to (10⁻¹²). But if I set ‘polish’ to ‘true’, then all 14 printed decimal digits are consistent.
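For reference, a minimal sketch of printing at that precision – my own example, not the output code from ‘roots_1.cpp’:

#include <iostream>
#include <iomanip>

int main() {
    double root = 1.4142135623730951;   // an estimate of sqrt(2)
    std::cout << std::setprecision(14) << root << std::endl;
    // Prints: 1.4142135623731 -- 14 significant digits, as in the output above.
    return 0;
}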


 

(Update 2/3/2019, 5h50 : )

3:)

I suppose that a valid question to ask might be: ‘Why doesn’t Dirk use the method described in the document linked to, to polish the roots?’

The answer is that the document recommends that the last root found by the search algorithm be plugged in to a second invocation of the search algorithm, as the neighbourhood of the root to find. The problem with that is that not all search algorithms become more accurate when this is done. It just so happens that the Laguerre method outputs a slightly inaccurate result, if the initial guess as to where the root might be was greatly inaccurate.

But my custom search algorithm is not more or less accurate based on whether the initial guess was close to the exact root or not. So in order to polish the root, I’d either need to implement two search algorithms, or devise a different way to polish the root – my chosen way being to use the remainder of the deflation / factorization, which is not a foolproof method by nature.

For this reason, the command-line version of my C++ program allows the user to turn polishing off.


 

Similarly, the question could be asked of why I use the complicated search algorithm to solve a linear equation, i.e., a polynomial of degree (1), as this was not what that algorithm was initially meant to do.

And the answer would be the fact that, although a separate solver could easily be written just for the case of degree-1 equations, I’d also need to add code that tests for this case, so that the additional solver would get used. This would tend to make my source-code longer overall, even though it would ultimately cause fewer CPU cycles to get spent.

Because my complicated search-algorithm can solve such linear equations in one iteration, its output for them is more exact than the output would be for equations of a higher degree, and that solver can remain in use, while also keeping my source-code simpler.
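For comparison, the dedicated degree-1 solver which I chose not to include would only amount to a few lines – a sketch, with my own names, of what the extra branch would have computed:

#include <complex>
#include <vector>

typedef std::complex<double> CC;

// a*x + b == 0  implies  x == -b / a .  Using this would also require
// code that tests the degree, to route degree-1 polynomials here.
CC solve_linear(const std::vector<CC> &p) {
    return -p[1] / p[0];   // coefficients stored highest power first
}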

Dirk
