Revisiting the subject of approximating roots of polynomials.

In an earlier posting, I had written about an approach for finding approximations of the roots of polynomials of arbitrary degree, in such a way as also to find all the complex roots. (:1)

But with such strategies, there are issues. One concept was that a home-grown search algorithm would get close to an actual root. Next, polishing would make the result more accurate. And then, an augmented division would be computed, which is also referred to as “Deflation”, resulting in a deflated polynomial. This sequence would be repeated as many times as the original polynomial’s degree, minus one.
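To make the deflation step concrete, here is a minimal Python sketch, which is only an illustration and not the actual code from my program. The function name, the coefficient list, and the convention of storing the highest-degree coefficient first are all assumptions made for this example. Dividing a polynomial by (x − r) with synthetic division yields the quotient’s coefficients, plus a remainder that should be near zero whenever r is close to a root:

    def deflate(coeffs, r):
        # Divide a polynomial by (x - r) using synthetic division.
        # coeffs holds complex coefficients, highest degree first.
        quotient = [coeffs[0]]
        for c in coeffs[1:]:
            quotient.append(c + r * quotient[-1])
        # The last accumulated value equals p(r), the remainder.
        remainder = quotient.pop()
        return quotient, remainder

For example, deflating (x² − 3x + 2) by the root (1) yields the quotient (x − 2) and a remainder of (0).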

Pondering this issue today, I realized that there was still a conceptual weakness in that approach: some small amount of error is tolerated in the first root found, so that each successive deflated polynomial contains a progressively greater degree of error. What effectively happens next is that accurate roots are found of increasingly inaccurate polynomials, and there appeared to be few ways to detect and correct the resulting errors in the roots found afterwards. Theoretically, this problem could progress to the point where doubt is evoked over whether the roots found later were even roots of the original polynomial, since by that time, the object whose roots are being found is no longer that original polynomial.

(Update 6/08/2020, 18h35… )

(As of 6/05/2020, 19h10: )

And so today, I had the idea that, once a list of ostensible roots has been computed, they could be factored out of the original polynomial in the reverse of the order in which they were originally found, both to test their validity and, in some few cases, to make them more accurate.

The result of this idea can be found, both as source code and as binaries, in this folder on my site:

https://dirkmittler.homeip.net/binaries/

More specifically, the version of the program which re-verifies the roots in reversed order can be found in the two archive files named ‘RRoots_Dirk_Mittler…’

One big disappointment which resulted from this idea, though, was the observation that, in the few cases where there were provable inaccuracies before:

  • My roots were found in order of ascending magnitude, and
  • Factoring them out again, in the reversed order, actually made them less accurate and not more accurate.

However, I’m led to believe that, if the roots had first been found in order of descending magnitude, then factoring them out, the smallest first, would again lead to better accuracy.

This feature is not enabled by default, but can easily be invoked, as my entire program is used from the command-line.

I have yet to build polynomials which have roots with magnitudes successively smaller than (1.0), in order to test my premise in a complete way.
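Building such a test polynomial amounts to multiplying out one factor of (x − r) per prescribed root. Here is a hedged Python sketch, reusing the coefficient convention from the earlier sketch, and again not taken from my actual program:

    def poly_from_roots(roots):
        # Expand the product of (x - r) over all given roots,
        # returning coefficients, highest degree first.
        coeffs = [1.0 + 0.0j]
        for r in roots:
            # Multiply the running polynomial by (x - r):
            coeffs = ([coeffs[0]] +
                      [coeffs[i] - r * coeffs[i - 1]
                       for i in range(1, len(coeffs))] +
                      [-r * coeffs[-1]])
        return coeffs

Calling poly_from_roots([0.9, 0.5, 0.1]) would produce a cubic whose roots shrink in magnitude below (1.0), which is the sort of test case meant above.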


 

(Update 6/05/2020, 22h45: )

I have optimized what the latest version of the program does, by also including a simple bubble-sort function that sorts the roots by descending magnitude. After that, the second deflation of the polynomial can take place, from the smallest root to the largest. That way, accuracy is maximized, and validation also takes place.
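In outline, and only as a sketch of the logic rather than my actual source code, that pass could look as follows, where deflate() is the helper sketched earlier, and polish() stands in for whichever refinement function is in use:

    def reverify(coeffs, roots, polish):
        # Bubble-sort the roots by descending magnitude:
        roots = list(roots)
        n = len(roots)
        for i in range(n):
            for j in range(n - 1 - i):
                if abs(roots[j]) < abs(roots[j + 1]):
                    roots[j], roots[j + 1] = roots[j + 1], roots[j]
        # Factor the roots out of the original polynomial again,
        # smallest-magnitude first, polishing each against the
        # current, still mostly intact, polynomial:
        p = list(coeffs)
        improved = []
        for r in reversed(roots):
            r = polish(p, r)
            p, remainder = deflate(p, r)
            # A remainder that is not small would flag an invalid root.
            improved.append(r)
        return improved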


 

(Update 6/06/2020, 1h10: )

A valid question which the reader might ask next would be, ‘What has become of multiplicities, when this re-deflation takes place?’ Previously, there was a mechanism which computed the full sequence of derivatives of the original polynomial, normalizing each time, specifically because both the search function and the polishing method were vulnerable to roots with multiplicity. The trick was to solve the resulting linear equation first, then the quadratic, and then to work all the way back up to the original polynomial, and, with each root found, to verify within the parent polynomials whether the same root was also a root there more than once.
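For readers who would like to see the shape of that earlier mechanism, here is a hypothetical sketch of its two central ingredients: differentiating a coefficient list, and counting a root’s multiplicity by testing successive derivatives. The helper names and the tolerance are assumptions of this example, not details of my program:

    def horner(coeffs, x):
        # Evaluate the polynomial at x by Horner's rule.
        acc = 0.0 + 0.0j
        for c in coeffs:
            acc = acc * x + c
        return acc

    def derivative(coeffs):
        # Differentiate; coeffs are highest degree first.
        n = len(coeffs) - 1
        return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

    def multiplicity(coeffs, r, tol=1e-8):
        # r is a root of multiplicity m exactly when it is also
        # a root of the first (m - 1) derivatives.
        m = 0
        p = list(coeffs)
        while len(p) > 1 and abs(horner(p, r)) < tol:
            m += 1
            p = derivative(p)
        return m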

The answer takes somewhat the following form: ‘Even though the polishing function treats the value of the quotient, at the found root of the dividend, as the negative of the derivative of the remainder with respect to that root, the polishing function already had a safeguard built in, to test whether the derivative was small with respect to the remainder, in such a way that a derivative of zero was certain to fail the test. Therefore, with doubled roots, polishing internally turns itself off.’ The already-found roots will not be altered if they have multiplicity.

My ‘magic formula’ was that the magnitude of the derivative needs to be larger than the square root of the magnitude of the remainder. Thus, if the remainder was as small as (10^-10), then the derivative would need to be larger than (10^-5), in order for the division of the remainder by the derivative to proceed, and for polishing to take place. As a consequence, in that case, the maximum adjustment to the next estimate of the root would remain smaller than (10^-5).
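Expressed as one more hypothetical Python sketch, a single polishing step with that safeguard might read as follows; the simultaneous evaluation of the polynomial and its derivative by Horner’s rule is my choice for this illustration, not necessarily what my program does internally:

    import math

    def polish_step(coeffs, r):
        # Evaluate p(r) (the 'remainder') and p'(r) (the 'derivative')
        # in one pass of Horner's rule.
        value = 0.0 + 0.0j
        deriv = 0.0 + 0.0j
        for c in coeffs:
            deriv = deriv * r + value
            value = value * r + c
        # The 'magic formula': only divide when |p'(r)| > sqrt(|p(r)|).
        # A zero derivative, as at a doubled root, always fails this test.
        if abs(deriv) > math.sqrt(abs(value)):
            return r - value / deriv
        return r  # polishing turns itself off

Notice that, whenever the step is taken, its size |p(r) / p'(r)| is smaller than sqrt(|p(r)|), which reproduces the bound stated above.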

The search function itself is not reused.


 

(Update 6/07/2020, 20h40: )

1:)

An important question which some readers might have about this sort of approximation is, under what circumstances it might be useful. And the short answer is, ‘When the exact, analytical solution is not available.’

Exact, analytical solutions for quadratics are considered to be easy, in the form of their general solution. Even though it is considerably more complex, the general solution to the cubic equation is also known. The general solution to the quartic was taught to me as being too complex to be printed out. And polynomials of degree higher than (4) do not have a general solution in radicals. But the fact that they don’t have a general solution does not mean that they do not have real roots.

There exist special cases where polynomials of degree (5) and higher do have exact, analytical solutions. The question of whether an analytical solution exists is actually separate from the question of whether that solution takes the form of rational roots, but in many common situations, people search for rational roots. ‘Rational numbers’ are numbers expressible exactly as fractions (hence, ‘ratios’), and they include the ‘integers’, which in turn are the whole numbers, including the negatives and zero. (:2)
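To illustrate that kind of search, which is not what my program does, a rational-root finder for a polynomial with integer coefficients can apply the Rational Root Theorem: any rational root p/q, in lowest terms, must have p dividing the constant term and q dividing the leading coefficient. A sketch, again only for illustration:

    from fractions import Fraction

    def rational_roots(coeffs):
        # coeffs: integer coefficients, highest degree first.
        def divisors(n):
            n = abs(n)
            return [d for d in range(1, n + 1) if n % d == 0]

        if coeffs[-1] == 0:
            # Zero is a root; factor out x and keep searching.
            return [Fraction(0)] + rational_roots(coeffs[:-1])
        found = set()
        for p in divisors(coeffs[-1]):
            for q in divisors(coeffs[0]):
                for cand in (Fraction(p, q), Fraction(-p, q)):
                    value = sum(c * cand ** (len(coeffs) - 1 - i)
                                for i, c in enumerate(coeffs))
                    if value == 0:
                        found.add(cand)
        return sorted(found)

For example, rational_roots([1, -3, 2]) returns [1, 2], the roots of (x² − 3x + 2).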

But hypothetically, a person could be ‘stuck’ with a polynomial of the 6th degree which has no rational roots, and be required to find approximations of its roots. This is exactly what my program is meant to do.

Using it to find the solutions to quadratics and cubics, can serve testing purposes, but not other real purposes.


 

(Update 6/08/2020, 18h35: )

2:)

A question which my paragraph above poses is, ‘What is the actual difference between an exact, analytical solution and a rational solution?’ And the short answer would be that rational solutions are a subset of exact, analytical solutions.

Exact, analytical solutions may contain radicals – i.e., square-root symbols – that cannot be expressed exactly as fractions. But in theory, they may also contain other functions that have been accepted into the vocabulary of defined functions, and that return real numbers. A posting from several years ago describes an example.

Readers will need to have JavaScript enabled, to be able to view the Math in that earlier posting. And, the positive and negative square roots of (2) belong to the set of real numbers. Further, the set of rational numbers is just a subset, of the real numbers.

The example I linked to above is one, in which an established Computer Algebra System (‘CAS’), that being ‘SageMath’, proceeds on the assumption that the ‘LambertW’ function is defined and that some of its values are real numbers. SageMath just shortens the name of this function to ‘W()’, and offers a series of solutions to two tests that I typed in, none of which appear to be rational. All of them qualify as exact, analytical solutions.

Whether a solution is valid has a lot to do with whether certain functions are accepted within it. For example, a person could argue that a square root is acceptable, but nothing further. Well, in that case, the sine of an angle and the cosine of the same angle would not be admissible as part of the answer, because neither of them is a square root, for every angle. And yet, most people would agree that these trigonometric functions are also defined. Their values are also real numbers. The example which SageMath output above just assumed that, equally, the LambertW function is acceptable in the output.

The correct definition of the ‘LambertW()’ function(s), which can also just be shortened to ‘W()’, is:

x = y · e^y

W(x) := y

Alternatively, one can use Set-Builder Notation, and define that:

W(x) ∈ {y : x = y · e^y}
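As a numerical aside, a value of the principal branch of W() can be approximated by applying Newton’s method to f(y) = y·e^y − x, which is nothing more than the definition above, rearranged. This sketch is my own illustration and has no connection to how SageMath computes the function; the starting guess and tolerance are assumptions:

    import math

    def lambert_w(x, tol=1e-12, max_iter=50):
        # Principal branch; real values exist for x >= -1/e, but this
        # sketch requires x > -1/e to keep Newton's method stable.
        assert x > -1.0 / math.e
        # A rough but stable starting guess:
        y = math.log(1.0 + x) if x >= 0.0 else x
        for _ in range(max_iter):
            ey = math.exp(y)
            step = (y * ey - x) / (ey * (y + 1.0))  # f(y) / f'(y)
            y -= step
            if abs(step) < tol:
                break
        return y

For example, lambert_w(1.0) returns approximately (0.567143), and indeed 0.567143 · e^0.567143 ≈ 1.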

Dirk

 
