Secondary Polishing

When a project is undertaken to write programs that would be sub-components of Computer Algebra Systems, but that produce floating-point numerical output, an unwanted side effect of how those work is that values can be output in place of integers (whole numbers), yet differ from those integers by some very small fractional amount. Thus, instead of outputting (1) exactly, such a program might output:

0.9999999999999994

The problem is that such output can be visually misleading and confusing, because a Human user wants to know that the answer to the problem was (1). And so a possible step in the refinement of such programs is “Secondary Polishing”, which does not change the actual computations, but which makes the output ‘look nicer’.

I recently completed a project that approximates the roots of arbitrary polynomials, and also looked into the need for secondary polishing. There was one specific situation in which this was not required: when the root’s real or imaginary component has an absolute value of (1/10) or greater. In that case, the fact that I had set the precision of the printed output to (14), while the roots found are accurate to within (10^-14), at least after the actual, primary polishing, which does affect computed values, together with the way the standard output functions work in C++, will cause the example above to be printed as a single-digit (1), even though what was stored internally might differ from that by less than (10^-14). But a special case exists, within the norms of C++, when the absolute value of the numerical term to be output is less than (1/10).
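As a small, self-contained sketch (not the actual project code), the snippet below reproduces both behaviours with the C++ standard output functions, the precision set to (14): a value within (10^-14) of 1 prints as a single-digit 1, while a tiny residue whose absolute value is far below (1/10) still prints in full, because the precision counts significant digits rather than digits after the decimal point. The polish() function and its tolerance of (10^-14) are only an illustration of what secondary polishing could look like:

#include <cmath>
#include <iomanip>
#include <iostream>

// One hypothetical form of 'secondary polishing': snap a component to the
// nearest integer, when it is already within the printed precision of that
// integer, without touching the underlying computation.
static double polish(double x, double tolerance = 1.0e-14)
{
    double nearest = std::round(x);
    return (std::fabs(x - nearest) < tolerance) ? nearest : x;
}

int main()
{
    std::cout << std::setprecision(14);

    std::cout << 0.9999999999999994 << '\n';   // prints: 1      (rounded by the output routine)
    std::cout << 6.0e-17            << '\n';   // prints: 6e-17  (the special case below 1/10)
    std::cout << polish(6.0e-17)    << '\n';   // prints: 0      (after secondary polishing)

    return 0;
}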

(Updated 2/11/2019, 19h35 … )

Continue reading Secondary Polishing

Simplifying the approach to finding roots of polynomials.

In some cases, the aim of my postings is to say, ‘I am able to solve a certain problem – more or less – and therefore, the problem is solvable.’ It follows from this position that my solutions are not assumed to be better, by any means, than mainstream solutions. So recently, I suggested an approach to finding the roots of polynomials numerically, again just to prove that it can be done. And then, one observation which my readers might have made would be that my approach is only accurate to within (10^-12), while mainstream solutions are accurate to within (10^-16). And one possible explanation for this would be that the mainstream solutions polish their roots, which I did not get into. (:1)

(Edit 2/8/2019, 6h40 : )

A detail which some of my readers might have missed is, that when I refer to a ‘numerical solution’, I’m generally referring to an approximation.

(End of Edit, 2/8/2019, 6h40 . )

But another observation which I made was that mainstream code examples are much tighter than what I suggested, which poses the obvious question: ‘Why can mainstream programmers do so much, with much less code complexity?’ And I think I know one reason.

The mainstream example I just linked to bypasses a concept which I had suggested, which was to combine conjugate complex roots into quadratic terms, which could then be factorized out of the original polynomial as such. What the mainstream example does is to assume that the coefficients of the derived polynomials can be complex, even though the original one only has real coefficients. Then, once a complex root has been found, factorizing it out results in such a polynomial with complex coefficients, after which factorizing out the conjugate causes the coefficients of the quotient to become real again.
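Below is a minimal sketch of that idea (my own illustration, not the mainstream example linked to): a real-coefficient cubic is deflated by one complex root using synthetic division, which yields a quotient with complex coefficients, and deflating again by the conjugate leaves a quotient whose coefficients are real once more. The deflate() helper and the example cubic are assumptions made for the sake of the demonstration:

#include <complex>
#include <iostream>
#include <vector>

using cplx = std::complex<double>;

// Synthetic division: divide p(x) by (x - r), coefficients highest-degree first.
// If r is (close to) a root, the final carry, i.e. the remainder, is (near) zero.
static std::vector<cplx> deflate(const std::vector<cplx> &p, cplx r)
{
    std::vector<cplx> q(p.size() - 1);
    cplx carry = p[0];
    for (std::size_t i = 1; i < p.size(); ++i) {
        q[i - 1] = carry;
        carry = p[i] + carry * r;
    }
    return q;                                // 'carry' holds the (discarded) remainder
}

int main()
{
    // x^3 - 3x^2 + 4x - 2, whose roots are 1, 1+i and 1-i.
    std::vector<cplx> p = { 1.0, -3.0, 4.0, -2.0 };
    cplx root(1.0, 1.0);

    auto q1 = deflate(p, root);              // quotient has complex coefficients
    auto q2 = deflate(q1, std::conj(root));  // imaginary parts collapse again

    for (const auto &c : q2)
        std::cout << c << ' ';               // prints (1,0) (-1,0), i.e. (x - 1)
    std::cout << '\n';
    return 0;
}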

(Edited 1/30/2019, 8h50… )

(Updated 2/9/2019, 23h50… )

I’ve just written some source-code of my own, to test my premises…

Continue reading Simplifying the approach to finding roots of polynomials.

A Hypothetical Algorithm…

One of the ideas which I’ve written about often is that, when certain Computer Algebra Software needs to compute the root of an equation, such as of a polynomial, an exact Algebraic solution (which is also referred to as the analytical solution, or symbolic Math) may not be at hand, and that therefore, the software uses numerical approximation, in a way that never churned out the Algebraic solution in the first place. And while it might sound disappointing, often, the numerical solution is what Engineers really need.

But one subject which I haven’t analyzed in depth before was how this art might work. This is a subject which some people may study in University, and which I never studied. I can see that, in certain cases, an obvious pathway suggests itself. For example, if somebody knows an interval for (x), and if the polynomial function of (x), that being (y), happens to be positive at one end of the interval and negative at the other end, then it becomes feasible to keep bisecting the interval: if (y) is positive at the point of bisection, that value of (x) replaces the ‘positive’ end of the interval, while if (y) is negative there, that value of (x) replaces the ‘negative’ end. This can be repeated until the interval has become smaller than some amount, by which the root is allowed to be inaccurate.
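A minimal sketch of that bisection idea follows, assuming a hypothetical example polynomial y = x^3 - 2x - 5, and assuming that an interval [2, 3] is already known at whose ends (y) has opposite signs:

#include <cmath>
#include <cstdio>

// Hypothetical example polynomial: y = x^3 - 2x - 5.
static double y(double x)
{
    return x * x * x - 2.0 * x - 5.0;
}

// Keep bisecting [lo, hi] until it is narrower than 'tol'.
// Precondition: y(lo) and y(hi) have opposite signs.
static double bisect(double lo, double hi, double tol)
{
    while (hi - lo > tol) {
        double mid = 0.5 * (lo + hi);
        if ((y(mid) > 0.0) == (y(hi) > 0.0))
            hi = mid;   // 'mid' replaces the end that shares its sign
        else
            lo = mid;
    }
    return 0.5 * (lo + hi);
}

int main()
{
    std::printf("%.14g\n", bisect(2.0, 3.0, 1.0e-14));  // approx. 2.0945514815423
    return 0;
}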

But there exist certain cases in which the path forward is not as obvious, such as what one should do if one were given a polynomial of an even degree that only has complex roots, and yet these complex roots nevertheless needed to be found. Granted, in practical terms such a problem may never present itself in the lifetime of the reader. But if it does, I just had lots of idle time, and have contemplated an answer.

(Updated 1/30/2019, 13h00 … )

Continue reading A Hypothetical Algorithm…

How the general solution of polynomials, of any degree greater than 2, is extremely difficult to compute.

There are certain misconceptions that exist about Math, and one of them could be that, if a random system of equations is written down, the Algebraic solution of those equations is at hand. In fact, equations can arise easily which are known to have numerical answers, but for which the exact, Algebraic (= analytical) answer is nowhere in sight. And one example where this happens is with polynomials ‘of a high degree’. We are taught what the general solution to the quadratic is in High-School. But what I learned was that the general solution to a cubic is extremely difficult to comprehend, while that of a 4th-degree polynomial – a quartic – is too large to be printed out. These last two have in fact been computed, but not on consumer-grade hardware or software.

In fact, I was taught that for polynomials of degree greater than 4, there is no general solution in radicals.

This seems to fly in the face of two known facts:

  1. Some of those equations have the full number of real roots,
  2. Computers have made finding numerical solutions relatively straightforward.

But the second fact is really only a testament to how computers can perform numerical approximations, as their main contribution to Math and Engineering. In the case of polynomials, the approach used is to find at least one root, closely enough, and then to use long division or synthetic division, to divide by (x minus that root), arriving at a new polynomial which has been reduced in degree by 1. This is because (x minus a root) must be a factor of the original polynomial.

Once the polynomial has been reduced to a quadratic, computers will eagerly apply the general solution of the quadratic to what remains, thereby perhaps also generating 2 complex roots.
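As a small sketch of that last step, the quadratic formula can be applied with a complex-valued square root, so that a negative discriminant simply yields a conjugate pair; the example quadratic x^2 - 2x + 2 is an assumption for illustration:

#include <complex>
#include <iostream>

int main()
{
    double a = 1.0, b = -2.0, c = 2.0;          // example: x^2 - 2x + 2

    std::complex<double> disc(b * b - 4.0 * a * c, 0.0);
    std::complex<double> s = std::sqrt(disc);   // imaginary when the discriminant is negative

    std::complex<double> r1 = (-b + s) / (2.0 * a);
    std::complex<double> r2 = (-b - s) / (2.0 * a);

    std::cout << r1 << ' ' << r2 << '\n';       // prints (1,1) (1,-1), i.e. 1+i and 1-i
    return 0;
}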

In the case of a cubic, a trick which people use is first to normalize the cubic, so that the coefficient of its first term is 1. Then, the 4th, constant term is examined. Any one of its factors, or the factors of the terms it is a product of, taken positively or negatively, could be a root of the cubic. And if one of them works, the equation has been cracked.

In other words, if this constant term is a product of square roots of integers, the corresponding products of the square roots of the factors of those integers could lead to roots of the cubic.
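The simpler, integer case of that trial can be sketched as follows, assuming a monic cubic whose constant term is an integer; the example x^3 - 6x^2 + 11x - 6 is an assumption for illustration, and each divisor of the constant term is tested with both signs:

#include <cmath>
#include <cstdio>
#include <cstdlib>

// Evaluate the monic cubic x^3 + b x^2 + c x + d at x (Horner's scheme).
static double cubic(double b, double c, double d, double x)
{
    return ((x + b) * x + c) * x + d;
}

int main()
{
    // Example cubic: x^3 - 6x^2 + 11x - 6, whose constant term is -6.
    double b = -6.0, c = 11.0, d = -6.0;
    long n = std::labs(static_cast<long>(d));

    for (long k = 1; k <= n; ++k) {
        if (n % k != 0)
            continue;                            // only divisors of the constant term
        for (int sign = -1; sign <= 1; sign += 2) {
            double x = sign * static_cast<double>(k);
            if (std::fabs(cubic(b, c, d, x)) < 1.0e-12)
                std::printf("found root: %g\n", x);   // prints 1, 2 and 3
        }
    }
    return 0;
}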

(Updated 1/12/2019, 21h05 … )

Continue reading How the general solution of polynomials, of any degree greater than 2, is extremely difficult to compute.