A Hypothetical Algorithm…

One of the ideas which I’ve written about often is that, when certain Computer Algebra Software needs to compute the root of an equation, such as of a polynomial, an exact Algebraic solution (also referred to as the analytical solution, or symbolic Math) may not be at hand, and that the software therefore resorts to numerical approximation, never churning out the Algebraic solution in the first place. And while that might sound disappointing, often, the numerical solution is what Engineers really need.

But one subject which I haven’t analyzed in depth before is how this art might work. This is a subject which some people study in University, and I never studied it. I can see that, in certain cases, an obvious pathway suggests itself. For example, if somebody knows an interval for (x), and if the polynomial function of (x), that being (y), happens to be positive at one end of the interval and negative at the other, then it becomes feasible to keep bisecting the interval: if (y) is positive at the point of bisection, that value of (x) replaces the ‘positive’ end of the interval, while if (y) is negative there, that value of (x) replaces the ‘negative’ end. This can be repeated until the interval has become smaller than some amount, by which the root is allowed to be inaccurate.
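
(As an illustration, here is a minimal sketch of that bisection idea, written in Python; the function names and the tolerance are illustrative choices of mine, not taken from any particular piece of software:)

```python
# A minimal sketch of interval bisection, assuming y(x) changes
# sign over the interval. The names and the tolerance below are
# my own illustrative choices.

def bisect_root(y, x_neg, x_pos, tol=1e-12):
    """Find x such that y(x) is approximately 0, given that
    y(x_neg) < 0 and y(x_pos) > 0."""
    while abs(x_pos - x_neg) > tol:
        x_mid = (x_neg + x_pos) / 2.0
        if y(x_mid) > 0.0:
            x_pos = x_mid   # Midpoint replaces the 'positive' end.
        else:
            x_neg = x_mid   # Midpoint replaces the 'negative' end.
    return (x_neg + x_pos) / 2.0

# Example: a root of (x^3 - 2) between 1 and 2.
print(bisect_root(lambda x: x**3 - 2.0, 1.0, 2.0))  # ~1.259921
```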

But there exist certain cases in which the path forward is not as obvious, such as what one should do if one was given a polynomial of an even degree that only has complex roots, yet these complex roots nevertheless needed to be found. Granted, in practical terms such a problem may never present itself in the lifetime of the reader. But in case it does, I recently had lots of idle time, and have contemplated an answer.

(Updated 1/30/2019, 13h00 … )


How the general solution of polynomials, of any degree greater than 2, is extremely difficult to compute.

There are certain misconceptions that exist about Math, and one of them could be that, if a random system of equations is written down, the Algebraic solution of those equations is at hand. In fact, equations can arise easily which are known to have numerical answers, but for which the exact, Algebraic (= analytical) answer is nowhere in sight. And one example where this happens is with polynomials ‘of a high degree’. We are taught in High-School what the general solution to the quadratic is. But what I learned was that the general solution to a cubic is extremely difficult to comprehend, while that of a 4th-degree polynomial – a quartic – is too large to be printed out conveniently. These last two have in fact been computed, but not on consumer-grade hardware or software.

In fact, I was taught that for polynomials of degree greater than 4, there is no general solution in radicals (a result known as the Abel–Ruffini Theorem).
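
(And this fact can be witnessed on a home computer today. The following is a brief sketch using the Python library ‘SymPy’ – assuming that it’s installed – in which a cubic still yields an answer in radicals, while a quintic such as (x^5 - x + 1) only yields symbolic placeholders for its roots:)

```python
# A brief demonstration, assuming the SymPy library is installed.
# A cubic still has a general solution in radicals; the quintic
# (x^5 - x + 1) does not, and SymPy can only name its roots
# abstractly, as 'CRootOf' objects.

from sympy import symbols, solve

x = symbols('x')

print(solve(x**3 - 2, x))      # Radical expressions involving 2**(1/3).
print(solve(x**5 - x + 1, x))  # [CRootOf(x**5 - x + 1, 0), ...]
```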

This seems to fly in the face of two known facts:

  1. Some of those equations have the full number of real roots,
  2. Computers have made finding numerical solutions relatively straightforward.

But the second fact is really only a testament to how computers can perform numerical approximations, as their main contribution to Math and Engineering. In the case of polynomials, the approach used is to find at least one root – closely enough – and then, to use long division or synthetic division, to divide by (x minus that root), to arrive at a new polynomial, which has been reduced in degree by 1. This is because (x minus a root) must be a factor of the original polynomial.
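
(Synthetic division is easy enough to sketch; the following minimal Python example assumes that a root (r) has already been found closely enough, and amounts to Horner’s scheme:)

```python
# Synthetic division: divide a polynomial by (x - r), assuming r
# is (approximately) a root. Coefficients are listed from the
# highest power down. This is Horner's scheme; the names are my own.

def deflate(coeffs, r):
    """Return the coefficients of coeffs / (x - r), dropping the
    (near-zero) remainder."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    # out[-1] is the remainder, ~0 if r really was a root.
    return out[:-1]

# Example: x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3).
# Dividing out the root at x = 1 leaves x^2 - 5x + 6.
print(deflate([1, -6, 11, -6], 1.0))  # [1.0, -5.0, 6.0]
```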

Once the polynomial has been reduced to a quadratic, computers will eagerly apply the quadratic’s general solution to what remains, thereby perhaps also generating 2 complex roots.
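
(And that last step can be sketched using Python’s standard ‘cmath’ module, whose complex square root allows a negative discriminant to yield the 2 complex roots directly:)

```python
# The general solution of a quadratic a*x^2 + b*x + c = 0, using
# complex square roots so that a negative discriminant still works.

import cmath

def quadratic_roots(a, b, c):
    d = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

print(quadratic_roots(1, 0, 1))  # (1j, -1j), the roots of x^2 + 1.
```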

In the case of a cubic, a trick which people use is first to normalize the cubic, so that the coefficient of its leading term is 1. Then, the 4th, constant term is examined. Any one of its factors, taken positively or negatively, could be a root of the cubic (this is the rational root theorem, applied to a monic polynomial). And if one of them works, the equation has been cracked.

Similarly, if this constant term happens to be a product of square roots of integers, then the corresponding products of square roots, of the factors of those integers, could also lead to roots of the cubic.
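
(Here is a hedged Python sketch of the integer version of this trick, in which only the positive and negative divisors of the constant term are tried, against a monic cubic; the helper names are my own:)

```python
# Trying the (positive and negative) divisors of the constant term
# as candidate roots of a monic cubic with integer coefficients.
# Helper names are my own, for illustration.

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def integer_roots_of_monic_cubic(b, c, d):
    """Roots of x^3 + b*x^2 + c*x + d among the divisors of d."""
    found = []
    for k in divisors(d):
        for cand in (k, -k):
            if cand**3 + b * cand**2 + c * cand + d == 0:
                found.append(cand)
    return found

# Example: x^3 - 6x^2 + 11x - 6 has the constant term -6, and
# the candidates 1, 2, 3, 6 (each tried with both signs) reveal
# the roots 1, 2 and 3.
print(integer_roots_of_monic_cubic(-6, 11, -6))  # [1, 2, 3]
```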

(Updated 1/12/2019, 21h05 … )


An observation about computing the Simpson’s Sum, as a numerical approximation of an Integral.

One of the subjects which I posted about before is the fact that, in practical computing, there is often an advantage to using numerical approximations over trying to compute Algebraically exact solutions. And one place where this happens is with integrals. The advantage can lie anywhere between achieving greater simplicity and making a solution possible at all. Therefore, in Undergraduate Education, emphasis is already placed in Calculus 2 courses, not just on computing certain integrals Algebraically, but also on understanding which numerical approximations will give good results.

One numerical approximation that ‘gives good results’ is the so-called Simpson’s Sum. What it does is perform a weighted summation of the most recent 3 data-points, into an accumulated result. And the version of it which I was taught placed a weight of 2/3 on the Midpoint, as well as a weight of 1/3 on the Trap Sum.

In general, the direction in which the Midpoint is inaccurate will be opposite to the direction in which the Trap Sum is inaccurate. I.e., if the curve is concave-up, then the Midpoint will tend to underestimate the area under it, while the Trap Sum will tend to overestimate it. And in general, with integration, consistent under- or over-estimation is more problematic than random under- or over-estimation would be.

But casual inspection of the link above will reveal that this is not the only way to weight the three data-points. In effect, it’s also possible to place a weight of 1/2 on the Midpoint, plus a weight of 1/2 on the Trap Sum.
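
(The following is a minimal Python sketch of both weightings, over one panel of 3 equally spaced data-points; with weights of 2/3 and 1/3, the combination works out to the classic Simpson coefficients of (1, 4, 1)·h/3:)

```python
# Two ways to weight the Midpoint against the Trap Sum, over one
# panel of 3 equally spaced data-points (f0, f1, f2), spaced h apart.
# The function names are my own, for illustration.

def midpoint(f0, f1, f2, h):
    # A rectangle of width 2h through the middle data-point.
    return 2.0 * h * f1

def trap_sum(f0, f1, f2, h):
    # A trapezoid of width 2h, resting on the outer data-points.
    return h * (f0 + f2)

def simpson(f0, f1, f2, h):
    # 2/3 Midpoint + 1/3 Trap Sum == (h/3) * (f0 + 4*f1 + f2).
    return (2.0 * midpoint(f0, f1, f2, h) + trap_sum(f0, f1, f2, h)) / 3.0

def averaged(f0, f1, f2, h):
    # 1/2 Midpoint + 1/2 Trap Sum.
    return (midpoint(f0, f1, f2, h) + trap_sum(f0, f1, f2, h)) / 2.0

# Example: the area under x^2, from 0 to 2, is exactly 8/3 = 2.666...
print(simpson(0.0, 1.0, 4.0, 1.0))   # 2.666..., exact for a parabola.
print(averaged(0.0, 1.0, 4.0, 1.0))  # 3.0, biased toward the Trap Sum.
```

The example also hints at why the 2/3 + 1/3 weighting suits gradual curves: the Trap Sum’s leading error is roughly twice the Midpoint’s, and of opposite sign, so that these particular weights cancel it.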

And so a logical question to ask would be, ‘Which method of summation is best?’

The answer has to do mainly with whether the stream of data-points is ‘critically sampled’ or not. When I was taking ‘Cal 2’, the Professor drew curves which were gradual, and the frequencies of which, had the data-points formed a stream, would have been below half the Nyquist Frequency. With those curves, the Midpoint was visibly more accurate than the Trap Sum.

But one premise according to which Electrical Engineers have been designing circuits more recently not only states that an analog signal is to be sampled in the time domain, but also that the resulting stream of samples will be critically sampled. This means considerable signal-energy all the way up to the Nyquist Frequency. What results, effectively, is that the 3 data-points will behave as if random, with great variance even between those 3.

Under those conditions, there is a slight advantage to computing the average between the Midpoint and the Trap Sum.

(Update 12/08/2018, 20h25 : )

I’ve just prepared a small ‘wxMaxima’ work-sheet, to demonstrate the accuracy of the 2/3 + 1/3 method, applied to a sine-wave at 1/2 Nyquist Frequency:

Worksheet

Worksheet in EPUB2 for Phones
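
(For anyone who can’t open the work-sheets, the following is a rough Python equivalent of the same experiment – my own sketch, not the work-sheet itself. At 1/2 the Nyquist Frequency, a sine-wave completes one cycle every 4 samples, so that a half-cycle spans just 3 data-points:)

```python
# A rough re-creation of the experiment: integrate one half-cycle of
# sin(x), sampled at 1/2 the Nyquist Frequency (4 samples per cycle,
# so the panel [0, pi] holds just 3 data-points). My own sketch.

import math

h = math.pi / 2.0                    # Sample spacing: 4 samples per cycle.
f0, f1, f2 = math.sin(0.0), math.sin(h), math.sin(2.0 * h)

mid = 2.0 * h * f1                   # Midpoint estimate.
trap = h * (f0 + f2)                 # Trap Sum estimate.

simpson = (2.0 * mid + trap) / 3.0   # The 2/3 + 1/3 weighting.
exact = 2.0                          # Integral of sin(x) over [0, pi].

print(simpson)                       # ~2.0944 (2*pi/3), off by ~4.7%.
print(exact)
```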

(Update 1/6/2019, 13h55 : )
