I take the unusual approach of hosting my site and blog on my private PC at home. This is not a recommendation for other people to do the same thing; it’s just how I do it. But a side effect of what I do is that the availability of this blog is only as good as the (DSL) network connection of my PC at home.

In recent weeks, Quebec, the province I live in, has been pounded by extremely cold weather (below -20⁰C), as well as more than one snow-storm. And this has played havoc with many customers’ service, as ice and water can penetrate the cables.

For that reason the availability of my site and blog has also experienced many issues. I apologize to my readers if this has inconvenienced them.

For the moment it looks like my ISP has been able to stabilize my connection, by reducing the speed with which I connect, and I’m fine with that, since a stable but slower connection is preferable to no connection at all.

I’m looking forward to some better solution in the distant future, but am confident that in the short term, the connection has been made stable again.

Dirk

]]>

One of the ideas which I’ve written about often is that, when certain Computer Algebra Software needs to compute the root of an equation, such as of a polynomial, an exact Algebraic solution (also referred to as the analytical solution, or symbolic Math) may not be at hand, and that the software therefore uses numerical approximation, in a way that never churns out the Algebraic solution in the first place. And while that might sound disappointing, often, the numerical solution is what Engineers really need.

But one subject which I haven’t analyzed in depth before is how this art might work. This is a subject which some people may study in University, and I never studied that. I can see that in certain cases, an obvious pathway suggests itself. For example, if somebody knows an interval for (x), and if the polynomial function of (x), that being (y), happens to be positive at one end of the interval and negative at the other end, then it becomes feasible to keep bisecting the interval: if (y) is positive at the point of bisection, that value of (x) replaces the ‘positive’ end of the interval, while if (y) is negative there, that value of (x) replaces the ‘negative’ end. This can be repeated until the interval has become smaller than the amount by which the root is allowed to be inaccurate.
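The bisection scheme just described can be sketched in a few lines. (The function names and the example polynomial here are my own choices, not anything from the software under discussion.)

```python
def bisect_root(f, x_pos, x_neg, eps=1e-12):
    """Find a root of f between x_pos (where f > 0) and x_neg (where f < 0),
    by repeated bisection, until the interval is narrower than eps."""
    while abs(x_pos - x_neg) > eps:
        mid = (x_pos + x_neg) / 2.0
        if f(mid) > 0:
            x_pos = mid   # mid replaces the 'positive' end of the interval
        else:
            x_neg = mid   # mid replaces the 'negative' end of the interval
    return (x_pos + x_neg) / 2.0

# Example: y = x^3 - 2x - 5 has a real root near x = 2.0946 .
root = bisect_root(lambda x: x**3 - 2.0*x - 5.0, 3.0, 2.0)
```

Each pass halves the interval, so roughly 50 passes suffice to reach double-precision accuracy.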

But there exist certain cases in which the path forward is not as obvious, such as what one should do if one were given a polynomial of an even degree that only has complex roots, yet these complex roots nevertheless needed to be found. Granted, in practical terms such a problem may never present itself in the lifetime of the reader. But in case it does, I just had lots of idle time, and have contemplated an answer.

(Updated 1/22/2019, 9h45 … )

(As of 1/21/2019 : )

I observed that a method which gets used to blend two values together in CGI, is as follows:

x = xa + t (xb – xa)

x = (1 – t) xa + t xb

And that for blend-values outside the range [0 .. +1], this method still works; it only generates an extrapolation instead of an interpolation. Thus, inversely, (t) can be found like so:

t = (x – xa) / (xb – xa)

Or, in some cases, if (x=0):

t = -xa / (xb – xa)
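These blend operations can be sketched as follows. (The function names are mine, and the values of (xa) and (xb) are arbitrary illustrations.)

```python
def blend(xa, xb, t):
    """Blend two values; t in [0, 1] interpolates, while t outside extrapolates."""
    return xa + t * (xb - xa)

def inverse_blend(xa, xb, x):
    """Recover the blend-value t which would have produced x."""
    return (x - xa) / (xb - xa)

# For the special case x = 0, this reduces to t = -xa / (xb - xa):
t0 = -4.0 / (6.0 - 4.0)                 # with xa = 4 and xb = 6
assert blend(4.0, 6.0, t0) == 0.0       # an extrapolation, since t0 = -2
```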

Well, I puzzled together a hypothetical algorithm, which would be far too complicated for me to implement, and I apologize in advance if this does not conform to some people’s ideas of pseudo-code. But this is the schema in which I think.

Work-Sheet, Formatted in Letter-Sized PDF

Work-Sheet in EPUB3 Format, for Phones

N.B.

There exists a question which the work-sheet above does not answer, which would be, What to do about doubled roots, aka multiplicities, in which 2 or more roots occur at the same value of (x). These tend to be important in the real-number domain because, if the curve changes direction before reaching the x-axis, complex roots exist, while, if the curve changes direction after crossing the x-axis, two different, real roots exist. What makes doubled roots problematic, is the fact that they have a derivative of zero, exactly where the root is. And while measured values should not work out that way, Mathematical examples exist that do.

In order to detect the doubled roots, what needs to be done is that the derivative of the polynomial must be computed, which in itself is easy to do. Then, the roots of the derivative (using only real numbers) can be found, and the original polynomial tested where its derivative is zero, to see whether the original one is sufficiently close to zero at the same value of (x). If so, a doubled root has been found.
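A minimal sketch of this test, using plain coefficient lists ordered from highest degree down. (All names and the epsilon value here are my own choices.)

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial; coefficients ordered from highest degree down."""
    y = 0.0
    for c in coeffs:
        y = y * x + c
    return y

def poly_derivative(coeffs):
    """Differentiate a polynomial given in the same highest-first ordering."""
    n = len(coeffs) - 1                     # the degree
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

# (x - 2)^2  =  x^2 - 4x + 4  has a doubled root at x = 2.
p  = [1.0, -4.0, 4.0]
dp = poly_derivative(p)                     # 2x - 4, which is zero at x = 2
x0 = -dp[1] / dp[0]                         # the root of the (linear) derivative
is_doubled = abs(poly_eval(p, x0)) < 1e-9   # p is also zero there: doubled root
```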

The problem with the summary above is that it needs to be made recursive. I.e., while the original polynomial might have a doubled root at (x), its derivative might also have one, but at a different value of (x)… The main problem with this last situation is the fact that it could interfere with the ability to compute the roots of the derivative at all.

If an easier opportunity doesn’t present itself, to find the roots of the first-order derivative, then the next thing to try, is to compute *all the derivatives* down to the linear equation, and for each root of a derivative found, to factorize that out of the representation of the parent equation, if the parent equation is sufficiently close to zero at the same value of (x), so that *its* doubled root, no longer needs to be solved for. (:1)

The doubled root needs to be factorized out twice, from an equation ‘one up’ of the derivative, if the parent equation is close to zero where the derivative is. But from an equation ‘two up’ of the derivative, the root needs to be factorized out three times, and from an equation ‘three up’ it needs to be factorized four times… Assuming that all the parent equations were zero, where the lowest derivative was also zero. (:2)

And then, in the parts of all the equations which remain after factorizing, *again, the roots need to be found, and potentially tested for in the respective parent equations*. But, this does not require that more derivatives be computed.

For such reasons I can expect, that Computer Algebra Systems either

- Don’t check for multiplicities *fully*, or
- Only offer doing so as an option, which the user must enable.

The latter seems to be the case with “Sage”.

1: )

Another approach which could work would be, ‘only’ to compute the derivatives down to a quadratic, since the quadratic can always be solved using its general solution. In that case, whether it has doubled roots or not, can be determined according to whether:

b^2 == 4*a*c

The peculiarity of this approach would be, that in addition to solving the quadratic, the CAS would also need to record the multiplicity of its root(s) as just belonging to the quadratic, since next, the parent equation must be checked at those roots. And then, whether the quadratic produced two different, or one double root, needs to be treated differently when processing the parent equations(s).
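A sketch of a quadratic solver along those lines, which also reports whether its root was doubled, judging b² against 4ac with a tolerance. (The names and the epsilon value are my own.)

```python
import cmath

def solve_quadratic(a, b, c, eps=1e-9):
    """Solve ax^2 + bx + c = 0, also reporting whether the root is doubled,
    judged by whether b^2 is within eps of 4ac."""
    disc = b * b - 4.0 * a * c
    if abs(disc) < eps:
        r = -b / (2.0 * a)
        return (r, r), True                 # one doubled root
    s = cmath.sqrt(disc)                    # complex, whenever disc < 0
    return ((-b + s) / (2.0 * a), (-b - s) / (2.0 * a)), False

roots, doubled = solve_quadratic(1.0, -4.0, 4.0)    # (x - 2)^2
```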

(Update 1/22/2019, 9h45 : )

2: )

As another way to organize what this posting is explaining, a subroutine can be devised, which can be given a suggested root of a polynomial as a parameter, and which may factorize that root out of the polynomial as long as the remaining equation is close to zero, at the suggested value of (x). This subroutine may have as return value, the total number of times this was true.

Once the return-value is zero, the operation of applying the suggested root to successive parent equations of the present derivative can be discontinued.

If this is done, then that subroutine must also be given the task which my work-sheet above described, which is, in case a complex root is suggested, to factorize it out of the polynomial along with its conjugate, in the form of a quadratic equation, causing an equation to remain that only has real coefficients.
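A sketch of such a subroutine, under my own naming: it divides out (x - r) by synthetic division, for as long as the remaining polynomial stays close to zero at the suggested root, and returns the multiplicity removed. A complex suggestion is divided out together with its conjugate, which amounts to dividing by the real quadratic x² - 2·Re(r)·x + |r|², so that real coefficients remain.

```python
def deflate(coeffs, r):
    """Synthetic division by (x - r); coefficients are ordered highest-first.
    Returns the quotient's coefficients, and the remainder."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + out[-1] * r)
    remainder = out.pop()
    return out, remainder

def factor_out_root(coeffs, r, eps=1e-6):
    """Factor a suggested root out of the polynomial, as long as the remaining
    equation stays close to zero at x = r; return the new coefficients, and
    the multiplicity that was removed.  A complex suggestion is removed along
    with its conjugate, so that real coefficients remain."""
    count = 0
    while len(coeffs) > 1:
        if isinstance(r, complex) and abs(r.imag) > eps:
            if len(coeffs) < 3:
                break
            q1, rem = deflate(coeffs, r)
            if abs(rem) > eps:
                break
            coeffs, _ = deflate(q1, r.conjugate())
        else:
            q, rem = deflate(coeffs, r)
            if abs(rem) > eps:
                break
            coeffs = q
        count += 1
    return coeffs, count

# (x - 1)^2 (x + 3)  =  x^3 + x^2 - 5x + 3 :  the root at 1 is doubled.
quotient, mult = factor_out_root([1.0, 1.0, -5.0, 3.0], 1.0)
```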

At some point when writing actual code, the decision would need to be made, which subroutines are responsible for what. And in many high-level languages, subroutines take the form of so-called ‘functions’.

It can be observed however, that:

- In my worksheet I defined a derived value for epsilon, which takes the present polynomial’s derivative into account, so that the original epsilon will define accuracy of (x) as much as possible, rather than accuracy of (y). But, some version of epsilon must be used to compare the absolute of (y) with, when testing the roots of the respective derivative-equation.
- If the roots of the derivatives have already been tested for in the present polynomial, and any occurrence as roots here factorized out, then none of the roots which remain to be found, should be doubled,
- If the amount of error in the derivative is much smaller than (1), because the present polynomial is its integral, the amount of deviation of (y) from zero, in the present polynomial, that exists in spite of a doubled root, should be even smaller.

Therefore, an approach should still be possible, that uses a derived value of epsilon, actually to find roots.

Dirk

]]>

When I was a young teenager, I sometimes spoke to tech professionals, who were working on power-lines and/or telephone cables, the latter of which were strung above-ground from the usual telephone poles. Sometimes, those tech professionals were disposed to answer my curious questions.

What above-ground telephone cables had, or have, is refrigeration stations at some of their connection-points, which refrigerate air to “-20⁰C”, which also makes the air very dry, and which then feed that air into the cables in compressed form. The purpose of this exercise is to prevent moisture buildup inside the telephone cables, which have hundreds of wires, if not thousands.

*Assuming that such a unit is being used, the question remains unanswered* of how it’s supposed to work, if the outside air temperature is below -20⁰C. If the process continues, then air will be fed into the cables at a higher temperature than the ambient temperature, at which point technically, the air being fed in is also moister, than the saturation point of the ambient air. (:1) What could follow, is ice build-up in the cables, and, when the temperature outside rises suddenly, the ice can melt.

I’m not sure what the exact conductivities are, but think that liquid water conducts better than ice, so that liquid water can cause shorting of the telephone wires inside the cables. I suppose that if the ambient air stays warm long enough, continued feeding of cold, dry air into the cables can dry out the cables again…

(Updated 1/20/2019, 7h40 … )

(As of 1/16/2019 : )

But I think that this could be one way in which our infrastructure is ultimately not designed to handle Climate Change. ~~And it may be affecting me as a DSL user~~. Because of Climate Change, we’re seeing more prolonged stretches of time, during which the outside temperature is -20⁰C or colder. Tonight, January 16, is supposed to be yet another night during which this happens. (:2)

When I was a young teenager, we also had occasional days, when the thermometer dipped below -20⁰C. But those were exceptional days, while in the modern era, this weather comes in prolonged spells.

I suppose it might have been wise if Scientists had had the foresight to equip these refrigeration systems with a ‘shutoff’, so that if the ambient temperature dips below -15⁰C or something like that, the unit just stops feeding air of any sort into the cables.

(Update 1/17/2019, 7h35 : )

In fact, something which nobody may be monitoring, is whether all those refrigeration-stations are actually reaching -20⁰C. Just as it goes with household air-conditioners, those stations could eventually suffer from refrigerant loss, which means that they won’t be able to achieve anything colder than -10⁰C to -15⁰C. And, while it was true in my youth that even for-profit companies would spend the time and money to check each station periodically, the economic realities today could be such that these stations run unattended for years, meaning that the telephone company may not even know how well each one is running.

*And this may also present a reason for the company not to put one*.

What this would mean is, that the telephone cables could start icing up when the temperature outside drops to -15⁰C, and not even at -20⁰C.

(Update 1/17/2019, 14h20 : )

I suppose that one question which some readers might have, would be How such a refrigeration unit can collect moisture from the air, at -20⁰C, when all water would be ice. And the best answer I can think of would be, that such units ‘cycle’. In other words, such a unit would need to turn itself off periodically, so that an electric heating element can turn on, and heat up the block on which the ice collects, so that the water can run off in liquid form.

I think that a more important question would be, what such a unit does with the run-off, especially since, liquid water being drained into a sub-freezing exterior would freeze in the wrong places. And the answer, for hypothetical operation at external temperatures below freezing, would be, There could be a collection container, with another heating element, this time an element which applies gentler heat 100% of the time. And this tank could convert all the water into vapour, which could get vented into the exterior air, through a hose.

But when I was a young teenager, *I never even bothered to ask the telephone technician*, whether these “Air Dryers” were in fact meant to operate, when the external air was below freezing. I just accepted the sparse information which he had to give me.

(Update 1/19/2019, 13h00 : )

1: )

For the sake of brevity, I made some unspoken assumptions about how such a unit would work. What some readers who do not have a Scientific background might ask, could be: ‘How can such an air-dryer feed compressed air into the cable, which is warmer than the exterior temperature, if the unit does not have a heating component?’

The fully-correct explanation would be, that the unit must at first compress the ambient air, and would then refrigerate the resulting compressed air, ostensibly to “-20⁰C”, at which point some moisture would be collected from it in the form of ice. After that, the dryer air would be fed into the cable, with whatever amount of pressure it has left.

When one compresses a gas, the initial result is adiabatic compression, which means that the gas has also become warmer, or hotter in some cases, than it initially was. It’s only after the compressed gas transfers heat to some type of exterior surface, that the compressed air can be brought down to the ambient temperature again, or to a lower temperature, at which point the compression has become diabatic.

So yes, just because it’s compressing air, such a unit *could be* feeding air into the cable, at a higher temperature than it first had.

And in practice, all adiabatic compression of gases is partially diabatic, because some loss of heat, following the high end of the compressor, is unavoidable.
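To put a rough number on that adiabatic warming, one can use the ideal-gas relation T2 = T1 * (P2/P1)^((gamma - 1)/gamma). The 2:1 pressure ratio below is purely an illustration on my part, not a known specification of any real air-dryer:

```python
# Ideal-gas adiabatic compression:  T2 = T1 * (P2/P1)**((gamma - 1)/gamma)
gamma = 1.4                      # heat-capacity ratio of dry air
T1 = 273.15 - 25.0               # ambient air at -25 C, in Kelvin
ratio = 2.0                      # an arbitrary, illustrative pressure ratio
T2 = T1 * ratio ** ((gamma - 1.0) / gamma)
# The compressed air comes out at roughly +29 C, before any heat is shed.
```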

(Update 1/20/2019, 7h40 : )

2: )

A different type of question which the subject of this sort of air-dryer might suggest is, ‘Should it not be more-important, that any refrigerant loop can fail, during the hottest Summer day, when the outside temperature reaches +35⁰C?’

And my thoughts on that subject suggest that indeed, any sort of refrigerant loop is most likely to fail under such conditions, in addition to which the refrigerant compressor can burn out. But I’ve been focusing my attention on the possibility of ‘partial refrigerant loss’, *the consequences of which* might not be as bad, when the telephone cables are actually +35⁰C warm.

When dried air at any temperature different from the ambient temperature enters a cable, then it will only travel a few feet, before its temperature adjusts to that of the cable. As long as that temperature is actually higher than the temperature with which the air entered, there will be no condensation or crystallization of ice.

It’s at any point in a pipeline, where the temperature of the air *decreases*, that condensation will take place… This would be the reason for which the concept was, to cool the compressed air below ambient temperature, before allowing it to enter the cable. That way, moisture will either condense or crystallize in the air-dryer and not in the cable.

As long as the compressed air is only slightly colder than the cable, before it enters, there should be fewer problems inside the cable. And this is *still* likely to be the case with *a compromised* refrigerant loop, when the cable is +35⁰C warm.

Dirk

]]>

My ISP is Bell Canada, and my LAN connects to a service of theirs called ‘Fibe 50’, which stands for a 50 Mbps DSL connection, to a Local Node, which in turn is connected to Bell via Fibre Optics. And this connection of my LAN is accomplished through a Home Hub 3000 Modem / Router.

The fact that I use my home PC as my Web-server, also means that a stable Internet connection is especially important to me, even though officially, I’m just a Home User. Just recently, my site experienced some down-time, due to problems with my DSL. And I’d like to weigh in on how good or bad the Home Hub 3000 might be, based on personal experiences.

First off, this Modem / Router once had a very bad reputation, when it was first released for public use, in the year 2016. But because that release of the modem preceded my personal range of experiences, I’m going to ignore this piece of History for the moment. It could very well be that in the year 2016 the modem was not ready to be released yet, but that in the year 2019, it is. This would be one example, where the service provider did their best to patch the behaviour of the modem, with many firmware updates, but without any actual modifications to the circuitry being possible.

Just recently I experienced down-time with it, during which the modem’s 3 lights displayed changing colours, especially through the circled ”i” button changing from yellow to red, and through the modem displaying terse error messages in its message window.

My first phone-call to a tech support person, on January 15, resulted in a brief exercise of disconnecting the low-voltage power-supply cable between the power-adapter and the modem, as well as disconnecting the actual 2 DSL cables, and then waiting for 30 seconds or so, before reconnecting everything and waiting for the modem to reboot. What this seemed to do was to cause the modem to start working again, with 3 solid white lights. But unfortunately, within 30 minutes to 1 hour, the malfunction just came back.

So what I needed to do next, within the same day, was to phone tech support a second time. And this time, I reported an error code to tech support, which was ‘Code 1101’, displayed in the modem’s message window. Apparently, Code 1101 means that there is a problem with the line, between the modem at my end, and the local node at Bell’s end. Because of that, the tech support person told me that he’d actually need to send a line-man to my location, to continue troubleshooting.

That was around 17h00, on January 15. I left everything turned on, without trying to manipulate anything in the meantime. This first line-man was due to arrive between 12h00 and 15h00, January 16. The next morning, by 9h00, I discovered that the problem seemed to have resolved itself. And in fact this behaviour does *not* strike me as strange enough, to be classified as ‘paranormal’. So what I did next was to cancel the visit from the line-man, because I don’t believe in wasting the resources of Bell Canada, or, eventually, my own resources. This may have been a bad decision.

But around 10h00 on January 16, and continuing to keep an eye on the Home Hub 3000, I formed a summary of this modem, the way its firmware allows it to work in the year 2019:

There is a huge merit, in the fact that the modem displays an error code, which can be looked up, and which actually means something. Apparently, when the unit is working properly, it can diagnose an issue with the DSL connection. The only fear I have about this, is that some tech support people might be tempted just to ignore the error code, and to apply a stereotyped solution to the problem. Yet, to apply a standard solution to the problem, when it’s reported for the first time, still ‘makes sense’.

As for the idea that there was a line problem, which within a few days just seemed to go away, this was also explainable. Until this past weekend, that is, until January 13, my neighbourhood was experiencing some extremely cold weather, meaning at or below -20⁰C. During such weather, water cannot be liquid outside, and such a malfunction would be difficult to explain. But from January 14 until January 16, the temperature outside actually became very mild and above freezing.

And this means that liquid water could easily have entered the phone cable, which in the suburbs of Canadian cities is often above-ground. And that water had enough time to seep out again – around 4h00 in the wee hours of January 16. If any liquid water had entered the outdoor cable, it could easily have short-circuited the DSL wires.

I was just very lucky that the weather stayed mild long enough, for the water to clear out.

So far, the modem has scored well. The real problem that I see, is with the fact that the many high-tech services which we receive in Canada, still depend on such old, outdated infrastructure as above-ground telephone wires.

But there is finally a dark lining, to my rose-coloured view of the world of DSL modems. Between 13h00 and 14h00 on January 16, we had a blizzard again. And again, the modem’s LEDs started to flash slightly, showing problems with the line, and showing error code ‘1101’. So what I actually found myself doing next, was to phone the tech support at Bell *again*, and to make a new appointment for the line-man to come in the afternoon of January 17.

What I can’t really size up, is whether it will be the new norm, that in extreme weather situations, my DSL gets buggy. This did not happen while I had 12 Mbps service, but may be much more the norm, for 50 Mbps service. One idea which I’m still convinced about though, is that this set of malfunctions is due to the poor quality of the twisted-pairs of wires outside, and not due to the modem itself.

Dirk

]]>

I take the somewhat unusual approach, of Hosting my Web-site and my blog, from a PC at home, acting as my Web-server. I don’t really recommend that everybody do it this way. This is merely how I do it.

This has as consequence that the visibility of my blog and site, are only as good as that of my home Internet connection. Therefore, from time to time my blog will be inaccessible, simply because I’m having a networking problem.

Most people would be less affected by such problems because most people aren’t running a Web-server.

Well from about 22h00 EDT, January 14, until about 9h00, January 16, I was experiencing such issues.

I apologize to my readers, for any inconvenience which I might have caused them.

Dirk

]]>

According to an earlier posting, I had suggested a recipe for ‘Perpendicularizing’ a matrix that was to represent a Quadric Equation, according to methods which I learned in “Linear 1”. That approach used the application ‘wxMaxima’, which is actually a fancy front-end for the application ‘Maxima’. But the main drawback with the direct approach I had suggested was that it depended on the package ‘lapack’, which, as I had written, takes a long time to compile.

Since writing that posting, I discovered that some users cannot even get ‘lapack’ to compile, making that a broken, unusable package for them. Yet, the desire could still exist, to carry out the same project. Therefore, I have now expounded on this, by using the package ‘eigen’, which is built in to Maxima, and which should work for more users, assuming there is no bug in the way Maxima was built.

The following work-sheet explains what initially goes wrong when using the package ‘eigen’, and, how to remedy the initial problem…

Work-Sheet Formatted as a Letter-Sized PDF

Work-Sheet in EPUB3 for Phones

(Readers may need to enable JavaScript from ‘mathjax.org’ to be able to view the work-sheet below: )

I suppose that there’s an observation I should add. Using just a matrix of unit eigenvectors has as its caveat a possible outcome, in which the eigenvectors are still not orthogonal. If that’s the case, then to use the transpose in place of the inverse is not acceptable.

If the reader is familiar with the exercise which I linked to at the top of this posting, he or she will notice that the matrix which I’m diagonalizing is *diagonally symmetrical*. This is because coefficients belonging to the quadric it represents, have either been given to one diagonal element, or distributed between two elements of the matrix equally.

In that case, the matrix of eigenvectors will be orthogonal.

Dirk

]]>

I’ve talked to people who did not distinguish, between a Quartic, and a Quadric.

The following is a Quartic:

y = ax^{4} + bx^{3} + cx^{2} + dx + e

It follows in the sequence from a linear equation, through a quadratic, through a cubic, to arrive at the quartic. What follows it is called a “Quintic”.

The following is a *Quadric*:

a1 x^{2} + a2 y^{2} + a3 z^{2} +

a4 (xy) + a5 (yz) + a6 (xz) +

a7 x + a8 y + a9 z – C = 0

The main reason quadrics are important, is the fact that they represent 3D shapes such as Hyperboloids, Ellipsoids, and Mathematically significant, but mundanely insignificant shapes, that radiate away from 1 axis out of 3, but that are symmetrical along the other 2 axes.

If the first-order terms of a quadric are zero, then the mixed terms merely represent rotations of these shapes, while, if the mixed terms are also zero, then these shapes are aligned with the 3 axes. Thus, if (C) was simply a positive constant, such as (5), then according to the signs of the 3 single, squared terms by themselves:

+x^{2} +y^{2} +z^{2} = C : Ellipsoid .

+x^{2} -y^{2} -z^{2} = C : Hyperboloid .

+x^{2} +y^{2} – z^{2} = C : ‘That strange shape’ .

The way in which quadrics can be manipulated with Linear Algebra is of some curiosity, in that we can have a regular column vector (X), which represents a coordinate system, and we can state the transpose of the same vector, (X^{T}), which forms the corresponding row-vector, for the same coordinate system. And in that case, the quadric can also be stated by the matrix product:

X^{T} M X = C

(Updated 1/13/2019, 21h35 : )

(As of yesterday : )

Where (C) is really just a simplification of something else, yet Mathematically valid. It implies that all the vector-elements resulting from the matrix multiplication, should add up, usually, to (+1). Therefore, if we had to state the quadric:

+x^{2} -2y^{2} +2(yz) -z^{2} +4(xz) = 1

As the matrix (M), this would be the matrix which follows:

```
|+1 0 +2|
| 0 -2 +1|
|+2 +1 -1|
```
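One can verify, say with NumPy, that this matrix reproduces the quadric: the squared coefficients sit on the diagonal, and each mixed coefficient has been split equally between two symmetric off-diagonal elements. (The test point is arbitrary.)

```python
import numpy as np

# Quadric:  +x^2 -2y^2 +2(yz) -z^2 +4(xz) = 1
# Squared coefficients sit on the diagonal; each mixed coefficient is
# split equally between two symmetric off-diagonal elements.
M = np.array([[ 1.0,  0.0,  2.0],    # xz coefficient 4 -> 2 + 2
              [ 0.0, -2.0,  1.0],    # yz coefficient 2 -> 1 + 1
              [ 2.0,  1.0, -1.0]])

X = np.array([0.7, -1.2, 0.4])       # an arbitrary test point
quadric = (X[0]**2 - 2.0*X[1]**2 + 2.0*X[1]*X[2]
           - X[2]**2 + 4.0*X[0]*X[2])
assert np.isclose(X @ M @ X, quadric)   # X^T M X reproduces the quadric
```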

And then, it would be possible to compute the Perpendicularization (P) of (M), through “Eigendecomposition”, yielding:

```
| -0.8313 -0.4449 -0.3333 |
| -0.1258  0.7347 -0.6667 |
| -0.5415  0.5122  0.6667 |
```

With the intent that:

D = P^{T} M P

```
| +2.3028  0.0000  0.0000 |
|  0.0000 -1.3028  0.0000 |
|  0.0000  0.0000 -3.0000 |
```

X = P Y

And then:

Y^{T} P^{T} M P Y = C

Y^{T} D Y = C

Where (D) is a diagonal matrix, hence, a matrix representing the same shape aligned with the axes of (Y), that results when the coordinate system has been rotated by (P). What is interesting is that through computing (P), we can find the signs of the diagonal, non-zero elements of (D), and therefore, finally, determine the shape in (M). Because, as long as there are mixed terms, it’s ambiguous what shape is defined; the signs of the single, squared terms by themselves are not enough to do so.

Therefore, in the above example, the shape is a Hyperboloid ( + – – ) .
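The same decomposition can be cross-checked with NumPy’s `eigh()`, which is intended for symmetric matrices and returns unit eigenvectors as the columns of (P). Its column order and signs may differ from my work-sheet above, but the eigenvalues, and hence their signs, agree:

```python
import numpy as np

M = np.array([[ 1.0,  0.0,  2.0],
              [ 0.0, -2.0,  1.0],
              [ 2.0,  1.0, -1.0]])

# eigh() returns unit eigenvectors as the columns of P, so P is
# orthonormal, and its transpose serves as its inverse.
eigvals, P = np.linalg.eigh(M)
D = P.T @ M @ P                        # should come out diagonal

# eigh() sorts eigenvalues ascending: about -3.0000, -1.3028, +2.3028.
# Two negative and one positive squared term: a Hyperboloid ( + - - ).
```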

If the first-order terms are also non-zero, then we have a translation, as well as the assumed rotation, of (X) with respect to (Y)… We compute the rotation first, assuming that it is still a rotation, from the mixed and the squared terms. And then, we use (P) in order to rotate the first-order terms, to arrive at the derived first-order terms, that are valid within (Y), not within (X), as the original first-order terms were:

Y = P^{T} X

And then, since (D) has no mixed terms, we can add the rotated first-order terms to the equation with (D), and ‘complete the square’ 3 times, in order to find the translation according to (Y)…

(Update 1/13/2019, 21h35 : )

One fact which I’ve learned about ‘Eigendecomposition’ is that there exists more than one form of it. But only one specific type of matrix (P) will work for this exercise. This happens to be the type of matrix which, in my ‘Linear 1’ course, was just called a Perpendicular matrix, but we were given specific instructions on how it must be computed.

Each column of (P) must be an Eigenvector of (M), but must also be made a unit vector. This is also called the “*Right* Eigenvector Matrix”, as opposed to the “~~Left Eigenvector Matrix~~“.

When using Maxima to compute (P), the packages must be loaded:

```
load(diag)$
load(lapack)$
```

This last instruction will take very long to execute, when given for the first time by one user, because ‘lapack’ takes a long time to compile. But when that has finished, the function to compute (P) becomes:

```
PL: dgeev(M, true, false);
P: PL[2];
```

‘PL’ will be a list of three elements:

- (Always), The list of Eigenvalues of (M),
- (If the second parameter was set to True), The Right Eigenvectors, else False,
- (If the third parameter was set to True), The Left Eigenvectors, else False.

This exercise suggests the additional question, of whether situations that call for the inverse of a matrix to be used, allow the use of the transpose instead. And the answer is ‘Usually, not. ‘ The exception arises when the matrix is orthonormal, in which case the transpose is also the inverse. In this exercise, if (P) is computed correctly, it will be orthonormal.

The additional question seems appropriate, of whether the Mode Matrix of (M) can simply be used, and a Gram-Schmidt Orthonormalization performed on it. The short answer is ‘No’. There are two reasons why not:

1. Gram-Schmidt will ensure that all the remaining vectors in the matrix are orthogonal, but not unit vectors,
2. Gram-Schmidt has as its main idiosyncrasy, to take the direction of the first vector read from the matrix as being completely accurate, while modifying the directions of the following vectors successively more.

Problem (2) has the side effect that it matters, whether the first vector read, is actually the first row, or the first column of the matrix. Gram-Schmidt goes row-by-row, even though (P) used in this exercise needs to be accurate, column-by-column.

Dirk

]]>

There are certain misconceptions that exist about Math, and one of them could be, that if a random system of equations is written down, the Algebraic solution of those equations is at hand. In fact, equations can arise easily, which are known to have numerical answers, but for which the exact, Algebraic (= analytical) answer, is nowhere in sight. And one example where this happens, is with polynomials ‘of a high degree’ . We are taught what the general solution to the quadratic is in High-School. But what I learned was, that the general solution to a cubic is extremely difficult to comprehend, while that of a 4th-degree polynomial – a quartic – is too large to be printed out. These last two have in fact been computed, but ~~not on consumer-grade hardware or software~~.

In fact, I was taught that for degrees of polynomials greater than 4, there is no general solution.

This seems to fly in the face of two known facts:

- Some of those equations have the full number of real roots,
- Computers have made finding *numerical* solutions relatively straightforward.

But the second fact is really only a testament to how computers can perform numerical approximations, as their main contribution to Math and Engineering. In the case of polynomials, the approach used is to find at least one root – closely enough – and then to use long division or synthetic division, to divide by (x minus that root), arriving at a new polynomial whose degree has been reduced by 1. This is because (x minus a root) must be a factor of the original polynomial.
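
The deflation step just described can be sketched in a few lines of Python. This is a hypothetical helper for illustration, not the internal routine of any CAS:

```python
# Synthetic division: given one root r of a polynomial whose
# coefficients are listed from the highest power down, dividing by
# (x - r) yields a polynomial of degree one less.

def deflate(coeffs, r):
    """Divide the polynomial by (x - r); return (quotient, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]   # remainder is ~0 when r really is a root

# x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3.  Divide out the root x = 1:
quotient, remainder = deflate([1, -6, 11, -6], 1)
print(quotient, remainder)   # [1, -5, 6] 0  -> x^2 - 5x + 6 remains
```

The remaining quadratic can then be finished off with the quadratic formula.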

Once the polynomial has been reduced to a quadratic, computers will eagerly apply the quadratic formula to what remains, thereby perhaps also generating 2 complex roots.

In the case of a cubic, a trick which people use is first to normalize the cubic, so that the coefficient of its leading term is 1. Then, the 4th, constant term is examined. Any one of its factors, or of the factors of the terms it is a product of, positive or negative, *could be* a root of the cubic. And if one of them works, the equation has been cracked.

In other words, if this constant term is a product of square roots of integers, the corresponding products of the square roots, *of the factors of* those integers, could lead to roots of the cubic.
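
The simplest case of this trial, for a monic cubic with an integer constant term, can be sketched in plain Python. The example cubic is made up for illustration and happens to have integer roots:

```python
# For a monic polynomial with integer coefficients, any rational root
# must be an integer divisor of the constant term, positive or negative.

def rational_root_candidates(constant):
    n = abs(constant)
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return [s * d for d in divisors for s in (1, -1)]

def eval_poly(coeffs, x):
    # Horner's rule; coefficients from the highest power down.
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# x^3 - 4x^2 + x + 6: try every divisor of 6, keep those that hit zero.
coeffs = [1, -4, 1, 6]
roots = [r for r in rational_root_candidates(coeffs[-1])
         if eval_poly(coeffs, r) == 0]
print(sorted(roots))   # expected: [-1, 2, 3]
```

When none of the candidates works, as with the altered cubic discussed below, this shortcut fails and the general solution must be used.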

(Updated 1/12/2019, 21h05 … )

Well, this is also what the Computer Algebra System “Maxima” does, and because “Sage” uses a version of Maxima to do its actual Algebra, this is also what Sage does.

(Edit 1/11/2019, 15h50 : )

But I have learned that indeed, *this is not the only way* in which Maxima and Sage ‘know’ how to solve cubics (see below).

(End of Edited Insert. )

The following work-sheet tested this premise, and thereby I also tested this weakness in Computer Algebra today. If none of the factors of the mentioned constant term acts as a root, an unwieldy list of roots results, ~~which Jupyter can’t even coerce into complex numbers~~. The reader may need to enable JavaScript from ‘mathjax.org’, in order to view the page correctly:

Work-Sheet in EPUB3 for Phones

(Edit 1/11/2019, 13h50 : )

The above work-sheet shows that trying to solve the altered cubic, in which I replaced a (+6) constant term with a (+5), causes Jupyter to output a highly complex set of 3 equations that are humanly incomprehensible. But the alternative can be chosen, to write:

F1(x).roots(x, ring=CC, multiplicities=False)

Then the Computer Algebra System seems to serve its original purpose, because this ‘.roots()’ function performs a numerical approximation, which may be more practical than what I got before.

*What I needed to do* was to determine whether the earlier output was in fact an accurate, ‘True’ answer. If it was, then Jupyter and Sage are in fact able to apply the general solution. The way I tested this was to extract elements [2], [0] and [1] from the list (S), *to extract the Right-Hand Side* of each list item, and to compute a numerical approximation from it. What I discovered finally was that this numerical approximation was almost identical to the one that was asked for directly. The only difference in List Item [2] seems to be a vestigial, non-zero imaginary component, with an order of magnitude of about (10^-16), which is irrelevant.

So what this means is that the extreme expression which the CAS outputs, consisting of 3 listed items, *is in fact correct*. The only reason I failed to determine this last night was my forgetting to specify the Right-Hand Side of the list items *using the correct syntax*, which is:

((S[2]).rhs())

What I had forgotten to do last night, when trying to confirm this, was to put an empty set of parentheses after the ‘rhs’ member-name, which turns it into a function call, instead of an abstract object placeholder.

I will not bother the reader with yet another work-sheet, only to prove that the work-sheet above contains only True statements, from Jupyter.

(Update 1/12/2019, 21h05 : )

One happenstance of using Computer Algebra Systems is that, when ‘coerced’ into producing numerical answers to complicated equations, they will sometimes output small, non-zero components where the exact answer possesses a zero. These are round-off errors. If the term in question is at least 10 decimal orders of magnitude smaller than the main terms of the solution, a Human operator needs to keep in mind that this could just be the way the CAS has attempted to arrive at zero exactly, especially if the term in question is an imaginary term, in a solution that’s known to have only real numbers.
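
A reader could apply the same judgment mechanically. The following is a hypothetical helper sketched in plain Python, not a function from Sage or Maxima:

```python
# Discard an imaginary component that is many orders of magnitude
# smaller than the real part, treating it as round-off error rather
# than as a genuine feature of the solution.

def chop(z, ratio=1e-10):
    """Return a float if z's imaginary part is negligible, else z."""
    if z.imag == 0 or (z.real != 0 and abs(z.imag / z.real) < ratio):
        return z.real
    return z

print(chop(1.5 + 4.4e-16j))   # the residue is round-off; a plain 1.5
print(chop(1.5 + 0.75j))      # kept as a genuinely complex value
```

The threshold of 10 orders of magnitude is a judgment call, of course, and should be compared against the working precision of the CAS.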

I have delved further into the question of why the solution-set (S) produced by Jupyter above, as an exact, algebraic solution to a cubic equation that has no rational roots, is in a form so difficult for Humans to understand. And I think I found my answer. In general, what Jupyter’s solution set states is of the form:

In fact, the cubic needs to possess coefficients that are numbers and not symbolic variables, and after failing to factor it, this software applies such a template.

*Further, the real solution set seems to contain an H1 and an H2 expression, each of which is being rotated in opposite directions*. The cause of this is explainable: H1 results as (Z), while H2 results as (1/Z). If the imaginary component of (Z) becomes positive, then that of (1/Z) will naturally become negative, and vice-versa. (Z) is consistently the cube root of the same thing. But this also results in an expression which, itself, can only be computed to arrive at a numeric value, and which cannot be analyzed algebraically by ordinary means.

The third line in the figure above states what I call a ‘rotator’, by which I mean a complex number that merely rotates another number in the complex plane. The rotator above gives 3 terms derived from (H), 120⁰ apart in the complex plane, one of which has simply not been rotated, but to all 3 of which (1) must be added. The imaginary part of this term contains (±) the cosine of 30⁰, when not zero. The real part then contains (−) the sine of 30⁰.
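
The rotator idea can be demonstrated numerically. The following is a sketch in plain Python, not the CAS’s own code, using the cube roots of 8 as a made-up example:

```python
import cmath

# The principal cube root Z of a number, multiplied by the three cube
# roots of unity, yields all three cube roots, 120 degrees apart in
# the complex plane.
w = cmath.exp(2j * cmath.pi / 3)   # the rotator: real part -sin(30 deg),
                                   # imaginary part +cos(30 deg)
z = 8 ** (1.0 / 3.0)               # principal cube root of 8
roots = [z, z * w, z * w ** 2]
for r in roots:
    print(r, r ** 3)               # each r satisfies r**3 ~ 8
```

Only one of the three results here is real; which of the rotated terms becomes real for a given cubic is exactly what the symbolic form does not reveal at a glance.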

An earlier posting of mine already describes this phenomenon, for finding complex roots, but the reader will need to be able to view page-sized PDF-Files, to appreciate that posting.

The reason why the solution-set (S) above counts as ‘True’ is the fact that, when the CAS running under “Jupyter” is given the command:

numerical_approx((S[2]).rhs())

All this command will do is arrive at one numeric solution, which happens to be close to a real number. In fact, the way Jupyter and Maxima happen to work with fractional powers may have as an advantage, that only one answer is generated each time.

Each of the solutions proposed by Jupyter as part of (S) *already implies* 3 roots, but the term which I called their ‘rotator’ positions each of those roots closer to the Real number line, and thus further from the Imaginary axis. But then, this form of a general solution to a cubic has 2 main disadvantages which I can think of:

- It does not reveal at first glance whether the cubic equation in question does in fact have 3 real roots, or only 1, or even 2. It could be that neither of the other two translates into a real number, when the two rotations are applied…
- If any of these terms is plugged back into the cubic equation in question, ‘F1(x)’, doing so fails to generate a satisfactory value of zero, just because each of the list items by itself defines 3 theoretical values. Plugging all of them back into ‘F1(x)’ at once will produce one zero, plus two complex values that are in fact extraneous.

Therefore, even the output of a CAS, when told to solve difficult problems, needs some level of skill from the Human operator, to interpret.

Dirk

]]>

According to This preceding posting, I was experiencing some frustration over trying to typeset Math for publication in EPUB2 format. EPUB3 format with MathML support was a viable alternative, though potentially hard on any readers I might have.

Well, a situation exists in which either EPUB2 or MOBI can be used to publish typeset Math: each lossless image can claim the entire width of a column of text, and each image can represent an entire equation. That way, the content of the document can alternate vertically between text and typeset Math.

In fact, if an author were to choose to do this, he or she could also use the Linux-based solutions ‘LyX’, ‘ImageMagick’, and ‘tex4ebook’.

(Edited 1/9/2019, 15h35 … )

(As of 1/8/2019, 22h10 : )

One way to proceed would be to create a single document using LyX, formatted to whatever the maximum size is, of the card that is to hold each equation, and *in which each page holds one equation*. This document can be exported to a PDF-File.

(Updated 1/9/2019, 15h35 : )

*For this part of the exercise it might be good, to set the Page Style to ‘Empty’ within LyX, just to suppress failed attempts to print page numbers, as well as to use ‘ps2pdf’ to generate the actual PDF-File, just to force that document to accept arbitrary sizes*.

And then, the following shell script can be run on it:

```
#!/bin/bash
# pdf2figs.sh
# By Dirk Mittler

CONVERT='/usr/bin/convert'

if [ -z "$1" ] ; then
    echo "Usage: ./pdf2figs.sh basename [color]"
    echo "Input: basename.pdf"
    echo "Output: basename-####.png"
    exit 0
fi

PATTERN=$'[ \'"\t\n\r]'

if [[ $1 =~ $PATTERN ]] ; then
    echo "Error: Parameter must not contain spaces."
    exit 1
fi

if [ -n "$2" ] && [[ $2 =~ $PATTERN ]] ; then
    echo "Error: Parameter must not contain spaces."
    exit 1
fi

if ! [ -e "${1}.pdf" ] || [ -d "${1}.pdf" ] ; then
    echo "Error: Cannot locate File ${1}.pdf"
    exit 1
fi

$CONVERT -density 300 "${1}.pdf" -define png:compression-level=9 \
    +profile '*' -define png:format=png32 -fuzz 10 \
    -crop 95% -trim +repage -transparent white \
    +level-colors "${2:-black}","${2:-black}" "${1}-%04d.png"

# Delete any page that rendered as a single, uniform colour (i.e., blank).
for IMAGEX in "${1}"-????.png
do
    if [ "$(convert "$IMAGEX" -define histogram:unique-colors=true \
            -format %c histogram:info:- | wc -l)" -eq 1 ]
    then
        rm -f "$IMAGEX"
    fi
done
```

Then, whatever software one prefers to create an EPUB or a MOBI -File can be used to write text, and to insert these images. That software could be LyX again, this time together with ‘tex4ebook’ .

The problem which I was reporting in my previous posting was merely that such realizations don’t enable the linear processing of my existing LaTeX Files, to obtain the desired result.

(Update 1/9/2019, 14h25 : )

The earlier recommendation above, on how to craft that ImageMagick command-line, suffered from 4 separate bugs:

1. The density with which the *native* PDF Files produced by LyX were to be scanned was not set, and was therefore extremely low. It is currently set to 300DPI, which could be correct for viewing on some modern smart-phones,
2. The images output were not anti-aliased,
3. The notation of using ‘-alpha off -fill black -alpha on’ did not succeed at filling the actual colour of the glyph,
4. Auto-cropping was not working at first.

Error (3) proved the hardest for me to correct, and the reason it was happening was the fact that the ‘-fill’ command requires that the image be input – not output – in RGB format, in order to work. Simply setting the output format to ‘PNG32’ doesn’t solve this problem, because the input started out as grey-scale. The actual notation of ‘-alpha off … -alpha on’ works just fine.

What I finally had to do, among other things, was to use the ‘+level-colors’ command-line option, which creates a gradient between the two colours specified, and which uses the brightness of the input pixel to select a colour along this gradient. To set both endpoints of this gradient to the same colour might seem wasteful, but at least doing it this way provides functionality when the input isn’t in the RGB pixel-format.

Alas, only recent, up-to-date ImageMagick versions accept ‘+level-colors … ‘ .

Problem (4) above seemed to have a simple cause: The ‘-trim’ command only works in whichever directions the page received a non-trivial ‘-crop’ parameter, which can be anything we like, as long as we don’t crop away any part of the glyph.

Dirk

]]>

One of the sad facts about this blog is that it’s not very mobile-friendly. The actual WordPress Theme that I use is *very* mobile-friendly, but I have the habit of inserting links into postings that open typeset Math, in the form of PDF Files. And the real problem with those PDF Files is the fact that, when people try to view them on, say, smart-phones, the Letter-sized page format forces them to pinch-zoom the document, and then to drag it around on their phone, never getting a good view of the overall document.

And so eventually I’m going to have to look for a better solution. One solution that works, is just to output a garbled PDF-File. But something better is in order.

A solution that works in principle is to export my LaTeX-typeset Math to EPUB3 format, with MathML; the other EPUB and/or MOBI formats just don’t work for this. But the main downside, after all that work, is the fact that although there are many ebook-readers for Android, only very few can do everything which EPUB3 is supposed to be able to do, including MathML. Instead, the format is better-suited for distributing prose.

One ebook-reader that does support EPUB3 fully is called “Infinity Reader“. But if I did publish my Math using EPUB3 format, then I’d be doing the uncomfortable deed of practically requiring that my readers install this ebook-reader on their smart-phones, for which they’d next need to pay a small in-app purchase, just to get rid of the ads. I’d be betraying all those people who, like me, prefer open-source software. For many years, some version of ‘FBReader’ has remained sufficient for most users.

Thus, if readers get to read This Typeset Math only because they installed that one ebook-reader, the experience could end up becoming very disappointing for them. And, I don’t get any kick-back from ImeonSoft for having encouraged this.

I suppose that this cloud has a silver lining. There does exist a Desktop-based / Laptop-based ebook-reader, which is capable of displaying all these EPUB3 ebooks, and which is as free as one could wish for: The Calibre Ebook Manager. When users install this either under Linux or under Windows, they will also be able to view the sample document I created and linked to above.

(Updated 1/6/2019, 6h00 … )

(As of 1/4/2019 : )

Hence, I could use the following logic:

- If people were able to read my PDF documents fully, they were already using either a Desktop or a Laptop,
- If this is so, then they can also just install Calibre, at no financial cost,
- The added benefit of my using EPUB3 format now remains, that some people who were using smart-phones to read my blog *have the option* of installing and paying for a proprietary solution.

But in practice, I don’t really see this as much of a win-win situation for my readers because, instead of telling them ‘Install this one piece of software,’ I’m telling them, ‘Install either *this* piece of software, or *that* piece of software, to take in my blog fully!’

I can appreciate how annoyed users get when a Web-site requires actions from them. The ability to view PDF Files is always present on a Desktop or a Laptop. Even the request that my readers install Free Software would be overbearing of me. And so I’m balancing this scenario in my head, of generally offering these sorts of documents in two formats.

It’s just a shame that the effort required for me to go back and offer an EPUB3 version of all the PDFs which I already authored, would be too great an undertaking for me to embark on.

(Update 1/5/2019, 15h25 : )

I suppose that one question which some of my readers might want an answer to would be: ‘How can users generate EPUB3 ebooks on a Linux platform?’ And yesterday I tried several solutions, the first of which had as its goal that I would generate EPUB2 ebooks, which my readers could view either with the app ‘FBReader’ or with ‘Aldiko’. Those solutions gave inconsistent results. I was making use of the Linux command-line:

‘tex4ebook’

Only to find out that this command was ill-suited for typesetting Math. This command would generate small PNG (Image) Files, which contained the actual Math glyphs, and would first insert those into flowed HTML, before creating an EPUB document from the same HTML and, if the option was given, deriving a MOBI-File document from the generated EPUB-File, for which the proprietary command-line tool called ‘kindlegen‘ needs to be installed from Amazon.

Regardless of how I modified my efforts yesterday, the best result I could get was that the Math would display in ‘FBReader’, but with the line of text aligned to the PNG Image’s bottom. In some cases, the flowed text would display in such a way that the line of text was actually aligned with *the top* of the image! By itself this is a deal-breaker, because one cannot afford the confusion of the reader perhaps interpreting an appended variable as being either a subscript, or even an exponent, when the correct meaning is for the right-hand variable just to be multiplied by whatever is to its left. Even if the item to the left of the variable is an expanded matrix, the author cannot leave it to the imagination of the reader to figure out that what was meant should only be a matrix multiplication.

When opening the same files with ‘Aldiko’, I found that this ebook-reader would insist on always displaying the pixels of the PNG-File at a 1:1 ratio with the device’s / phone’s physical display, and no manipulation I could think of, of the generated Styles or of the HTML itself, would convince Aldiko to do otherwise. Also, the ‘-r’ command-line argument for ‘tex4ebook’ seemed to have no effect on this behaviour – perhaps a bug in the script? This tends to result in ‘a very small dot’ where the Math figure should appear, too small for people to recognize, and certainly not an acceptable result. And this unacceptable result would take place not because of an error which the author might have made, but entirely due to which ebook-reader the reader had chosen.

So after attempting to achieve the unachievable for several hours, I finally conceded, and selected the ‘EPUB3’ format, which the ‘tex4ebook’ command-line does not generate in any special way, such as, maybe, ‘With MathML instead of those PNG Images?’ The following shell-script was instrumental in creating the desired result:

```
#!/bin/bash
# /opt/dirk/Make_XHTML.sh
# Permissions: chmod a+rx
# Input:   1 Parameter with no spaces,
#          LaTeX File, with name ending in .tex
# Output:  XHTML File, with name ending in .html,
#          Containing MathML
# Warning: Many temporary files generated in pwd,
#          With same base-name !
# Symlink to this script to be placed in /usr/local/bin

if [ -z "$1" ] || [ "${1: -4}" != ".tex" ] ; then
    echo "Usage: Make_XHTML.sh <myfile.tex>"
    echo "Output: <myfile.html>"
    echo "Also generates many auxiliary files <myfile*>"
    echo "Places all output in PWD."
    exit 0
fi

PATTERN=$'[ \'"\t]'

if [[ $1 =~ $PATTERN ]] ; then
    echo "Error: Parameter must not contain spaces."
    exit 1
fi

if ! [ -e "$1" ] || [ -d "$1" ] ; then
    echo "Error: Could not locate File $1"
    exit 1
fi

htlatex "$1" 'xhtml,charset=utf-8,mathml' ' -cunihtf -utf8 -cvalidate'
```

There are a few things which the reader should know about this script. One is the fact that a sufficiently complete installation of LaTeX is needed in order for the script to work. Another is the fact that what is output is not an EPUB-File, but rather a file with a name ending in .html, which is in fact an XHTML-File, as well as numerous auxiliary files.

But then, using the GUI *of the installed application ‘Calibre’*, it’s possible to import the XHTML-File that results into our library of ebooks. This will cause an entry to appear, which Calibre displays as only existing in ZIP-File format. And Calibre offers no options to view this format, other than as a bundle of files. But what this enables us to do next is to Convert the ebook in question to “EPUB Format”, again using the GUI. After that, EPUB will be one of the displayed formats for the book, which can be viewed as well as copied to another folder, and which is a self-contained file.

If the XHTML contained MathML, then an EPUB3 document will result, which the reader can either display correctly, or not at all. And with Math, that should be the desired result.

I suppose that I should add another comment, on how the LaTeX document can be generated. In principle, many ways exist to output a TEX-File, along with all the supporting files, and the method chosen results in small differences in the generated LaTeX. I like to use ‘LyX’, a GUI-based, ‘WYSIWYM’ editor, mainly suited for exporting to LaTeX. There are some facts about how to use LyX, with which users familiar with regular word-processors may not be familiar.

The most important, relevant guideline is: All LyX or LaTeX documents have a “Document Class”. LyX defaults to the class ‘Article’, and many of the PDF-Files I’ve generated for this blog were generated with that document class. It’s suitable for creating brief pieces of text, as well as figures that simply seem to float in space. **‘Article’ is not a suitable document-class for EPUB documents!** The most suitable document class which I’ve found for the moment is “Book (Standard Class)”. What this document class will provide is a context for generating a Title, which the GUI of LyX allows us to do, and from which LyX will enter a ‘\maketitle’ Tag into the generated LaTeX document.

Such meta-data will eventually become important if the resulting document is to be imported into Calibre. The reason is the fact that Calibre does not base its naming of the document on file-names. Instead, Calibre will generate folders and file-names derived from this meta-data. What I actually found was that I needed to specify the ‘Author’ meta-data *again*, from within the GUI of Calibre.

(Update 1/6/2019, 6h00 : )

One of the feats which I was able to accomplish was to edit the LaTeX-File so that, when using ‘tex4ebook’ to output an EPUB2 ebook, all the Math looked typeset *almost* correctly when viewed using the Calibre ebook-reader, even though some of its figures had first been converted into PNG Images. But the resulting EPUB2 ebook did not look typeset correctly in any smart-phone app:

So here the issue remained: ‘A collection of 3 items stacked vertically cannot be encoded, so that a first item must instead be encoded as having both a subscript and a super-script. This *would still be correct* mathematically, *if* the resulting super-script *were not* additionally displaced horizontally, with respect to where the subscript is.’

The question could be regarded as meaningless, whether such an ebook format is still ‘valid’. Calibre could have the ability to render it ‘correctly’, simply because that reader has the entire resources of the PC it’s running on to do so. The display of EPUB documents (without MathML) is ultimately based on how well the platform libraries render HTML, which means that very convoluted EPUB documents will display on a PC, that do not display on some smart-phones.

For that reason I’m *not* going to waste my readers’ time, explaining what LaTeX formatting led to this result.

Dirk

]]>