# How an exact solution can sometimes be found, without using the general solution.

One of the facts which I’ve been writing about is that the general solution to a polynomial of degree higher than (4), which would be expected to produce algebraically exact results, cannot be used because none exists. At the same time, I make a distinction between an exact solution and the general solution. This distinction can also be explained in greater detail…

We are sometimes given a polynomial which has at least one “rational root”, meaning a root that can be stated either as a whole number – properly referred to as an “integer” – or as a fraction. The following is an example:

x^3 -3*x^2 -2*x + 6 = 0

In this case it can be observed that the coefficient of (x^3), which is not stated, corresponds to a (1), and that the constant term, which is visible as (+6), is an integer. What can be done here is that all the factors of (6) – not only the prime factors – can be tried both positively and negatively, and plugged in to see whether they do in fact constitute a root. Again, they do if and only if the equation is satisfied, resulting in zero.

Thus, as potential candidates, ±1, ±2, ±3, ±6 can all be tried.
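As a quick sketch of this candidate test – in Python, which is my own choice here, not the language of anything mentioned in this posting – one could simply evaluate the polynomial at each candidate:

```python
# Rational-root test for x^3 - 3*x^2 - 2*x + 6 = 0.

def p(x):
    return x**3 - 3*x**2 - 2*x + 6

# Because the leading coefficient is 1, the candidates are just the
# positive and negative factors of the constant term (+6).
candidates = [1, -1, 2, -2, 3, -3, 6, -6]

roots = [c for c in candidates if p(c) == 0]
print(roots)  # [3] -- only +3 satisfies the equation
```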

(Updated 3/2/2019, 16h30 … )

(As of 2/9/2019, 19h40 : )

In this case, (+3) ‘just works’. Therefore, the corresponding factor can be divided out of the original polynomial, to result in the following equation:

(x – 3)*(a*x^2 + b*x + c) = 0

This essential step is also named “deflation”, and generally results in a polynomial one degree lower than the original one. Because (x = 3) was a working root, the division produces no remainder either, and the problem has been cracked, because the remaining polynomial is a quadratic, which can always be solved.
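The deflation step can be sketched as a synthetic division – a generic illustration of my own, assuming the coefficients are listed from the highest degree down:

```python
# Deflation by synthetic division: divide x^3 - 3*x^2 - 2*x + 6 by (x - 3).

def deflate(coeffs, r):
    """Divide a polynomial (highest-degree coefficient first) by (x - r).
    Returns (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]

quotient, remainder = deflate([1, -3, -2, 6], 3)
print(quotient, remainder)  # [1, 0, -2] 0, i.e. x^2 - 2 with no remainder
```

The zero remainder confirms that (x = 3) really was a root, and the quotient (x^2 – 2) is the quadratic that remains to be solved.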

I happen to know that, if I defined ‘sqrt(2)’ to stand for ‘the square root of (2)’, then this would be the ultimate solution:

(x – 3)*(x + sqrt(2))*(x – sqrt(2))

I know this because again, I constructed the original polynomial to have these 3 roots.

Why does this trick work? Well, if we assume for the moment that the coefficient of the term, with the highest exponent, was just an implicit (1), the constant term needs to follow from the expression:

(1)*(x – r1)*(x – r2)*(x – r3)

And what this also means is that the constant term, which was (+6), must equal the product:

(±r1)*(±r2)*(±r3)

Where, (r1 == +3).
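This relationship can be checked numerically for the three constructed roots (3), (+sqrt(2)) and (-sqrt(2)) – a quick sanity check of my own:

```python
import math

# Checking that (-r1)*(-r2)*(-r3) reproduces the constant term (+6):
r1, r2, r3 = 3.0, math.sqrt(2), -math.sqrt(2)
product = (-r1) * (-r2) * (-r3)
print(product)  # approximately 6.0, within floating-point error
```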

Now, the original polynomial can be modified like so:

x^3 -3*x^2 -2*x + 5 = 0

I’ve replaced the (+6) with (+5).

Even though this polynomial has three real roots, all of them irrational, I cannot guess any of them. So in order to produce an exact solution, I’d need to apply the general solution of a cubic, which is already an ugly, impractical set of solutions to work with. And this would be even more so, in the case of a quartic, a polynomial with degree (4). As I keep saying, this option no longer exists, when the degree is greater than (4).

But, I can nevertheless generate numerical approximations all I want, like so:


dirk@Klystron:~$ poly_solve 1 -3 -2 5
1.2016396757234 + 0I
-1.330058739568 + 0I
3.1284190638446 + 0I
dirk@Klystron:~$
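A minimal numerical sketch of how such approximations can be obtained – using plain bisection, which is my own choice of method here and not necessarily the algorithm inside ‘poly_solve’ – looks like this:

```python
# Approximating the three real roots of x^3 - 3*x^2 - 2*x + 5 = 0.

def f(x):
    return x**3 - 3*x**2 - 2*x + 5

def bisect(lo, hi, steps=60):
    """Bisection on an interval [lo, hi] over which f changes sign."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Sign changes of f bracket the three real roots:
roots = [bisect(-2, -1), bisect(1, 2), bisect(3, 4)]
print(roots)  # approximately -1.3300587..., 1.2016396..., 3.1284190...
```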



(Observation Added 2/9/2019, 12h20 : )

I gave the exact problem cited above to “SageMath” to approximate, in an earlier posting. SageMath generated 3 solutions as output, each containing 16 decimal places. If the last two of those decimal places are rounded correctly, the result is a sequence of 14 digits, 3 times over, that exactly matches what my algorithm generated as output above.

The reader may notice that the negative numeral output above only seems to contain 13 decimal places, not 14. The reason for which this happens is that the built-in C++ facility for generating stream output from floating-point numbers, if the result after rounding contains redundant, trailing zeroes, shortens the output by omitting those zeroes. This is also the real reason for which my approximation engine seems to output exact integers in certain cases. Those ‘integers’ are presumably still stored within my program as floating-point numbers, containing an error on the order of (10^-16), but they are being simplified as output, to 1 decimal place instead of 14.

That negative root was an example, where SageMath had only output 15 decimal places.
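The same trimming of trailing zeroes can be demonstrated with printf-style ‘%g’ formatting – shown here in Python, on the assumption that the default C++ stream output behaves similarly in this respect:

```python
# '%g'-style formatting drops redundant trailing zeroes after rounding:
print(format(3.14, '.14g'))  # '3.14', not '3.1400000000000'
print(format(3.0, '.14g'))   # '3' -- a float that prints like an integer
```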

(End of Observation, 2/9/2019, 12h20 . )

For me to have implemented an approximation engine, I also needed to consider what would happen if the coefficient of the highest exponent was something other than (1). In that case, if the goal were to preserve the exact values of the polynomial, that coefficient would first need to be ‘put in front’ of a derived polynomial, which in turn only keeps a (1) in its place. This is done by dividing all the following coefficients by that first coefficient. This is why the roots could be fractions instead of just integers. The same rules apply when looking for tricks to solve them – potentially.

Yet, if all we wanted to know was what values of (x) produce zero, then a fact which simplifies the exercise is that the entire polynomial can just be multiplied or divided by any constant except zero, and the polynomial will be zero for the same values of (x). I.e., it will still have the same roots. But then, the derived coefficients will still be the same fractions as before – as when wanting to preserve the value of the whole thing.
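A quick check of that fact, using a small quadratic of my own choosing:

```python
# Dividing every coefficient by the leading one changes the polynomial's
# values, but not its roots: 2*x^2 - 4*x - 6 = 0 versus x^2 - 2*x - 3 = 0.

def original(x):
    return 2*x**2 - 4*x - 6

def normalized(x):  # divided through by the leading coefficient, 2
    return x**2 - 2*x - 3

# Same roots (x = 3 and x = -1), even though the values differ elsewhere:
print(original(3), normalized(3))  # 0 0
print(original(1), normalized(1))  # -8 -4
```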

I suppose that one aspect of this question worth noting would be that some Students are given an exercise simpler than that, in which only integers are to be ‘tested’ as possible roots, and where the fractions result from the deflation.

But if the search were more exhaustive, to include fractions, then for Human processing, the next issue would be legibility. It might seem illegible to write out the constant term as a fraction, and then to try to factorize both the numerator and the denominator. It might just be more legible to write down the original constant term and the highest-order coefficient as two separate integers, to factorize each of them, and then to test all the fractions which can result when the factors of one are divided by the factors of the other…
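That more exhaustive search can be sketched as follows – a generic illustration of my own, where the candidates are all fractions (p/q) such that (p) divides the constant term and (q) divides the leading coefficient:

```python
from fractions import Fraction

def divisors(n):
    """All positive divisors of |n|."""
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_candidates(leading, constant):
    """All candidate rational roots p/q, tried both positively and negatively."""
    cands = set()
    for p in divisors(constant):
        for q in divisors(leading):
            cands.add(Fraction(p, q))
            cands.add(Fraction(-p, q))
    return sorted(cands)

# For a polynomial with leading coefficient 2 and constant term 3:
print(rational_candidates(2, 3))
# [-3, -3/2, -1, -1/2, 1/2, 1, 3/2, 3]
```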

(Update 2/9/2019, 19h30 : )

Another observation that I can add is the fact that the Factor Theorem itself continues to work, even if the highest-order coefficient is something other than (1). In other words, if (r1) was a root, then the following synthetic division would continue to produce valid results:

a1*x^3 + b1*x^2 + c1*x + d1 =

(1*x – r1)*(a1*x^2 + b2*x + c2)

Even though (a1) was not equal to (1). It would eventually lead to the expression:

(1*x – r1)*(1*x – r2)*(a1*x + b3)

As equal to the original polynomial. (r3) would then follow as:

-b3 / a1
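A worked example of my own, with a leading coefficient other than (1): the cubic 2*x^3 - 3*x^2 - 3*x + 2 happens to have the roots (2), (-1) and (1/2):

```python
# Synthetic division keeps working when the leading coefficient is not 1.

def deflate(coeffs, r):
    """Divide a polynomial (highest-degree coefficient first) by (x - r)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]

q1, rem1 = deflate([2, -3, -3, 2], 2)   # divide out (x - 2)
q2, rem2 = deflate(q1, -1)              # divide out (x + 1)
print(q1, rem1)  # [2, 1, -1] 0
print(q2, rem2)  # [2, -1] 0 -> final factor 2*x - 1, so r3 = -(-1)/2 = 1/2
```

The leading coefficient (2) is carried all the way down into the last factor, from which the final root follows as (-b3 / a1).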

The reason for which my approximation engine needed to ‘normalize’ the polynomials, into having a highest-degree coefficient of (1), had nothing to do with that.

(Update 2/9/2019, 21h40 : )

Challenge:

If the reader assumed that all that was required was to test a cubic equation for integer roots, so that the deflation would reveal the fractional roots, then I’d challenge that reader to solve the following example:

12*x^3 - 16*x^2 - 5*x + 3

Hint: In this case, all the roots are rational.

(Update 3/2/2019, 16h30 : )

1: )

Actually, under the circumstances which I wrote about, the constant term still needs to follow as:

(-r1)*(-r2)*(-r3)

But unless two of the roots have already been found, this decrease in the indeterminacy of the known constant coefficient does not help to solve the equation, because some unexpected combination of roots could be negative or positive and still result in the constant term with its known sign. Therefore, both positive and negative candidates for the roots need to be tried, initially.
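A small enumeration, of my own, makes the point concrete: the sign of the constant term only fixes the product of the roots’ signs, not the individual signs.

```python
from itertools import product

# With constant term +6, the product r1*r2*r3 must equal -6, i.e. the
# signs must multiply to -1.  Several sign patterns satisfy that:
patterns = [s for s in product([1, -1], repeat=3)
            if s[0] * s[1] * s[2] == -1]
print(patterns)  # 4 of the 8 possible sign patterns remain
```

Since four distinct sign patterns remain possible, each candidate magnitude must indeed be tried with both signs.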

Sincerely,
Dirk