## Performing a familiar task, just using the built-in packages of Maxima.

In an earlier posting, I had suggested a recipe for ‘Perpendicularizing’ a matrix that represents a Quadric Equation, according to methods which I learned in “Linear 1”. That approach used the application ‘wxMaxima’, which is actually a fancy front-end for the application ‘Maxima’. But the main drawback of the direct approach I had suggested was that it depended on the package ‘lapack’, which, as I had written, takes a long time to compile.

Since writing that posting, I have discovered that some users cannot even get ‘lapack’ to compile, making it a broken, unusable package for them. Yet the desire could still exist to carry out the same project. Therefore, I have now expanded on this by using the package ‘eigen’, which is built in to Maxima, and which should work for more users, assuming there is no bug in the way Maxima was built.

The following work-sheet explains what initially goes wrong when using the package ‘eigen’, and how to remedy that initial problem…
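Independently of Maxima, the core of ‘Perpendicularizing’ such a matrix is an eigenvalue decomposition of the symmetric matrix that represents the quadric. As a minimal sketch of the idea, here is the 2-variable (conic) case in pure Python, using the closed-form eigenvalues of a symmetric 2×2 matrix. (The matrix `[[5, 2], [2, 5]]` below is a made-up example, not one from the earlier posting.)

```python
import math

def eigen_2x2_symmetric(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]],
    which represents the quadratic form a*x^2 + 2*b*x*y + c*y^2."""
    mean = (a + c) / 2.0
    radius = math.hypot((a - c) / 2.0, b)
    return mean + radius, mean - radius

# Example conic: 5x^2 + 4xy + 5y^2 = 1, i.e. the matrix [[5, 2], [2, 5]].
l1, l2 = eigen_2x2_symmetric(5.0, 2.0, 5.0)
# In rotated ('perpendicularized') coordinates, the same conic reads
# l1*u^2 + l2*v^2 = 1, with no cross-term left.
```

The work-sheet referred to above carries out the corresponding steps inside Maxima itself, for the full-sized problem.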

(Updated 6/17/2020, 14h35… )

## Why the general solution of polynomials, of any degree greater than 2, is extremely difficult to compute.

There are certain misconceptions that exist about Math, and one of them could be that, if a random system of equations is written down, the Algebraic solution of those equations is at hand. In fact, equations can arise easily which are known to have numerical answers, but for which the exact, Algebraic (= analytical) answer is nowhere in sight. And one example where this happens is with polynomials ‘of a high degree’. We are taught in High School what the general solution to the quadratic is. But what I learned was that the general solution to a cubic is extremely difficult to comprehend, while that of a 4th-degree polynomial – a quartic – is too large to be printed out. These last two have in fact been computed, but not on consumer-grade hardware or software.

In fact, I was taught that for polynomials of degree greater than 4, there is no general solution in terms of radicals (this is the Abel–Ruffini Theorem).

This seems to fly in the face of two known facts:

1. Some of those equations have the full number of real roots,
2. Computers have made finding numerical solutions relatively straightforward.

But the second fact is really only a testament to how computers can perform numerical approximations, as their main contribution to Math and Engineering. In the case of polynomials, the approach used is to find at least one root – closely enough – and then to use long division or synthetic division, to divide by (x minus that root), to arrive at a new polynomial which has been reduced in degree by 1. This is because (x minus a root) must be a factor of the original polynomial.

Once the polynomial has been reduced to a quadratic, computers will eagerly apply the quadratic formula to what remains, thereby perhaps also generating 2 complex roots.
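The find-one-root-then-deflate procedure just described can be sketched in Python (a hypothetical illustration, not code from this posting): Newton’s Method supplies one root closely enough, synthetic division divides by (x minus that root), and the quadratic formula handles what remains.

```python
import cmath

def poly_eval(coeffs, x):
    # Horner evaluation; coeffs run from the highest degree down to the constant.
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

def newton_root(coeffs, x0, tol=1e-12, iters=100):
    # Find one root numerically, starting from the guess x0.
    n = len(coeffs) - 1
    deriv = [c * (n - i) for i, c in enumerate(coeffs[:-1])]
    x = x0
    for _ in range(iters):
        fx = poly_eval(coeffs, x)
        if abs(fx) < tol:
            break
        x -= fx / poly_eval(deriv, x)
    return x

def deflate(coeffs, r):
    # Synthetic division by (x - r); returns the quotient's coefficients.
    out = [coeffs[0]]
    for c in coeffs[1:-1]:
        out.append(c + r * out[-1])
    return out

coeffs = [1.0, -6.0, 11.0, -6.0]   # x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
r = newton_root(coeffs, 0.0)       # one root, found numerically
q = deflate(coeffs, r)             # remaining quadratic: q[0]x^2 + q[1]x + q[2]
disc = cmath.sqrt(q[1] ** 2 - 4 * q[0] * q[2])
r2 = (-q[1] + disc) / (2 * q[0])   # quadratic formula finishes the job,
r3 = (-q[1] - disc) / (2 * q[0])   # possibly yielding complex roots
```

For this example the sketch recovers the three real roots 1, 2 and 3.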

In the case of a cubic, a trick which people use is first to normalize the cubic, so that the coefficient of its leading term is 1. Then, the 4th, constant term is examined. Any one of its factors, or the factors of the terms it is a product of, taken positively or negatively, could be a root of the cubic. And if one of them works, the equation has been cracked.

In other words, if this constant term is a product of square roots of integers, then the corresponding products of the square roots of the factors of those integers could lead to roots of the cubic.
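For the simplest situation, the trial-of-factors trick can be sketched as follows, restricted here to an integer constant term (again a hypothetical illustration; the more general case with products of square roots works the same way in spirit):

```python
def cubic_root_by_trial(b, c, d):
    """Try the positive and negative factors of the integer constant term d
    as candidate roots of the normalized cubic x^3 + b*x^2 + c*x + d."""
    candidates = set()
    for k in range(1, abs(int(d)) + 1):
        if int(d) % k == 0:
            candidates.update({k, -k})
    for x in sorted(candidates):
        if x ** 3 + b * x ** 2 + c * x + d == 0:
            return x
    return None   # no integer factor of d is a root

# x^3 - 4x^2 + x + 6 has the constant term 6, so the candidates
# tried are +/-1, +/-2, +/-3 and +/-6.
root = cubic_root_by_trial(-4, 1, 6)
```

Once a candidate succeeds, deflating by (x minus that root) leaves a quadratic, and the equation has been cracked.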

(Updated 1/12/2019, 21h05 … )

## Trying to bridge the gap to mobile-friendly reading of typeset equations, using EPUB3?

One of the sad facts about this blog is that it’s not very mobile-friendly. The actual WordPress Theme that I use is very mobile-friendly, but I have the habit of inserting links into postings that open typeset Math, in the form of PDF Files. And the real problem with those PDF Files is the fact that, when people try to view them on, say, smart-phones, the Letter-Sized page format forces them to pinch-zoom the document, and then to drag it around on their phone, never getting a good view of the overall document.

And so eventually I’m going to have to look for a better solution. One ‘solution’ that works, after a fashion, is just to output a garbled PDF File. But something better is in order.

A solution that works in principle is to export my LaTeX-typeset Math to EPUB3 format, with MathML. (The other EPUB and MOBI formats just don’t work for this.) But the main downside, after all that work on my part, is the fact that although there are many ebook-readers for Android, only very few can do everything which EPUB3 is supposed to be able to do, including MathML. Instead, the format is better suited for distributing prose.

One ebook-reader that does support EPUB3 fully is called “Infinity Reader”. But if I did publish my Math in EPUB3 format, then I’d be doing the uncomfortable deed of practically requiring that my readers install this one ebook-reader on their smart-phones, for which they’d next need to pay a small in-app purchase, just to get rid of the ads. I’d be betraying all those people who, like me, prefer open-source software. For many years, some version of ‘FBReader’ has remained sufficient for most users.

Thus, if readers only get to read this typeset Math because they installed that one ebook-reader, then the experience could end up becoming very disappointing for them. And, I don’t get any kick-back from ImeonSoft for having encouraged this.

I suppose that this cloud has a silver lining. There does exist a Desktop- / Laptop-based ebook-reader which is capable of displaying all these EPUB3 ebooks, and which is as free as one could wish for: the Calibre Ebook Manager. When users install it, either under Linux or under Windows, they will also be able to view the sample document I created and linked to above.

(Updated 1/6/2019, 6h00 … )

## An observation about computing Simpson’s Sum, as a numerical approximation of an Integral.

One of the subjects which I have posted about before is the fact that, in practical computing, there is often an advantage to using numerical approximations over trying to compute Algebraically exact solutions. And one place where this happens is with integrals. The advantage can lie somewhere between achieving greater simplicity and making a solution possible at all. Therefore, in Undergraduate Education, emphasis is already placed in Calculus 2 courses not just on computing certain integrals Algebraically, but also on understanding which numerical approximations will give good results.

One numerical approximation that ‘gives good results’ is the so-called Simpson’s Sum. What it does is to perform a weighted summation of the most recent 3 data-points into an accumulated result. And the version of it which I was taught placed a weight of 2/3 on the Midpoint, as well as a weight of 1/3 on the Trap Sum.

In general, the direction in which the Midpoint is inaccurate will be opposite to the direction in which the Trap Sum is inaccurate. I.e., if the curve is concave-up, then the Midpoint will tend to underestimate the area under it, while the Trap Sum will tend to overestimate it. And in general, with integration, consistent under- or over-estimation is more problematic than random under- or over-estimation would be.
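This opposite-sign behaviour is easy to verify with a concave-up curve. A minimal Python sketch (a hypothetical example, not part of the linked work-sheet), using a single panel of x² over [0, 1]:

```python
def f(x):
    return x * x  # concave-up everywhere

exact = 1.0 / 3.0                        # true integral of x^2 over [0, 1]
midpoint = f(0.5) * 1.0                  # one midpoint rectangle: underestimates
trap = (f(0.0) + f(1.0)) / 2.0           # one trapezoid: overestimates
simpson = (2.0 * midpoint + trap) / 3.0  # the 2/3 + 1/3 weighting
```

Here the Midpoint gives 0.25 and the Trap Sum gives 0.5, bracketing the exact value of 1/3 from opposite sides; and the 2/3 + 1/3 combination lands exactly on 1/3, since Simpson’s Rule is exact for polynomials up to degree 3.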

But casual inspection of the link above will reveal that this is not the only way to weight the three data-points. In effect, it’s also possible to place a weight of 1/2 on the Midpoint, plus a weight of 1/2 on the Trap Sum.

And so a logical question to ask would be, ‘Which method of summation is best?’

The answer has to do, mainly, with whether the stream of data-points is ‘critically sampled’ or not. When I was taking ‘Cal 2’, the Professor drew curves which were gradual, and the frequencies of which, if the data-points did form a stream, would have been below half the Nyquist Frequency. With those curves, the Midpoint was visibly more accurate than the Trap Sum.

But one assumption under which Electrical Engineers have been designing circuits more recently states not only that an analog signal is to be sampled in the time domain, but also that the resulting stream of samples will be critically sampled. This means considerable signal-energy all the way up to the Nyquist Frequency. The effective result is that all 3 data-points will be as if random, with great variance even between those 3.

Under those conditions, there is a slight advantage to computing the average between the Midpoint and the Trap Sum.
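For a smooth, well-sampled curve, by contrast, the 2/3 + 1/3 weighting clearly beats the plain average. A minimal Python sketch (hypothetical; the linked work-sheets themselves are in Maxima), integrating sin(x) over [0, π] with only 4 panels:

```python
import math

def trap_sum(f, a, b, n):
    # Composite Trap Sum over n panels.
    h = (b - a) / n
    s = (f(a) + f(b)) / 2.0 + sum(f(a + i * h) for i in range(1, n))
    return s * h

def mid_sum(f, a, b, n):
    # Composite Midpoint sum over the same n panels.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

a, b, n = 0.0, math.pi, 4
m = mid_sum(math.sin, a, b, n)
t = trap_sum(math.sin, a, b, n)
simpson = (2.0 * m + t) / 3.0   # 2/3 Midpoint + 1/3 Trap Sum
average = (m + t) / 2.0         # equal weighting
# The exact integral of sin over [0, pi] is 2.
```

With this gradual curve, the 2/3 + 1/3 result is off by only a few parts in ten thousand, while the equal-weight average is off by roughly one percent; the posting’s point is that the comparison shifts once the stream of samples is critically sampled.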

(Update 12/08/2018, 20h25 : )

I’ve just prepared a small ‘wxMaxima’ work-sheet, to demonstrate the accuracy of the 2/3 + 1/3 method, applied to a sine-wave at 1/2 Nyquist Frequency:

Worksheet

Worksheet in EPUB2 for Phones

(Update 1/6/2019, 13h55 : )