Performing a familiar task, just using the built-in packages of Maxima.

In an earlier posting, I suggested a recipe for ‘Perpendicularizing’ a matrix that represents a Quadric Equation, according to methods which I learned in ‘Linear 1’. That approach used the application ‘wxMaxima’, which is actually a fancy front-end for the application ‘Maxima’. But the main drawback of the direct approach I suggested was that it depended on the package ‘lapack’, which, as I had written, takes a long time to compile.

Since writing that posting, I have discovered that some users cannot even get ‘lapack’ to compile, which makes it a broken, unusable package for them. Yet the desire could still exist to carry out the same project. Therefore, I have now expanded on the subject by using the package ‘eigen’, which is built into Maxima, and which should work for more users, assuming there is no bug in the way Maxima was built.

The following work-sheet explains what initially goes wrong when using the package ‘eigen’, and how to remedy the initial problem…


The Difference Between a Quartic, and a Quadric

I’ve talked to people who did not distinguish between a Quartic and a Quadric.

The following is a Quartic:

y = ax⁴ + bx³ + cx² + dx + e

It follows in the sequence from a linear equation, through a quadratic, through a cubic, to arrive at the quartic. What follows it is called a “Quintic”.

The following is a Quadric:

a₁x² + a₂y² + a₃z² +

a₄(xy) + a₅(yz) + a₆(xz) +

a₇x + a₈y + a₉z − C = 0

The main reason quadrics are important is the fact that they represent 3D shapes such as Hyperboloids, Ellipsoids, and a Mathematically significant, but mundanely insignificant shape, that radiates away from 1 axis out of 3, but that is symmetrical about the other 2 axes.

If the first-order terms of a quadric are zero, then the mixed terms merely represent rotations of these shapes, while, if the mixed terms are also zero, these shapes are aligned with the 3 axes. Thus, if (C) were simply equal to (5), the signs of the 3 single, squared terms, by themselves, would determine the shape:

+x² +y² +z² = C : Ellipsoid.


+x² −y² −z² = C : Hyperboloid.


+x² +y² −z² = C : ‘That strange shape’.

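This classification can be spot-checked with a short sketch, written here in Python with NumPy as an assumption (the postings themselves use Maxima): encode each diagonal quadric as a symmetric matrix, and count the signs of its eigenvalues.

```python
import numpy as np

# Hypothetical diagonal quadrics x^2 +/- y^2 +/- z^2 = C, written as
# symmetric matrices, so that the sign pattern of the eigenvalues
# classifies the surface.
ellipsoid   = np.diag([1.0,  1.0,  1.0])   # +x^2 +y^2 +z^2 = C
hyperboloid = np.diag([1.0, -1.0, -1.0])   # +x^2 -y^2 -z^2 = C
one_sheet   = np.diag([1.0,  1.0, -1.0])   # +x^2 +y^2 -z^2 = C

def signature(M):
    """Count the positive and negative eigenvalues of a symmetric matrix."""
    vals = np.linalg.eigvalsh(M)
    return int(np.sum(vals > 0)), int(np.sum(vals < 0))

print(signature(ellipsoid))    # (3, 0): all positive -> Ellipsoid
print(signature(hyperboloid))  # (1, 2): one positive -> Hyperboloid (two sheets)
print(signature(one_sheet))    # (2, 1): two positive -> 'That strange shape'
```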


The way in which quadrics can be manipulated with Linear Algebra is of some interest, in that we can have a regular column vector (X), which represents a coordinate system, and we can state the transpose of the same vector, (Xᵀ), which forms the corresponding row-vector for the same coordinate system. In that case, the quadric can also be stated by the matrix product:

Xᵀ M X = C
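A minimal sketch of this product, assuming NumPy and hypothetical coefficients: the diagonal of M holds the squared-term coefficients, and each mixed-term coefficient is split in half across the two symmetric off-diagonal slots.

```python
import numpy as np

# Hypothetical quadric: 2x^2 + 3y^2 + z^2 + 4xy = C
M = np.array([
    [2.0, 2.0, 0.0],   # 2x^2, and half of the 4xy term
    [2.0, 3.0, 0.0],   # the other half of 4xy, and 3y^2
    [0.0, 0.0, 1.0],   # z^2
])

X = np.array([1.0, 2.0, 3.0])                # a sample point (x, y, z)
left = X.T @ M @ X                           # X^T M X
direct = 2*1**2 + 3*2**2 + 1*3**2 + 4*1*2    # the polynomial, evaluated directly
print(left, direct)                          # both equal 31
```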

(Updated 1/13/2019, 21h35 : )


Whether the Columns of Matrices have a Natural Order

This article is meant for readers who, like me, have studied Linear Algebra and who, like me, are curious about Quantum Mechanics.

Are the columns of matrices in a given, natural order, as we write them? Well, if we are using the matrix as a rotation matrix in CGI – i.e., if its elements are derived from the trig functions of Euler Angles – then the column order depends on the order in which we have labeled the coordinates X, Y and Z. We are not free to change this order in the middle of our calculations, but if we decide that X, Y and Z are supposed to form a different set, then we need to use different matrices as well.

(Edited 02/15/2018 :

On the other hand, we also know that a matrix can be an expression of a system of simultaneous equations, which can be solved manually through Gauss-Jordan Elimination on the matrix. If we have found that our system has infinitely many solutions, then we are inclined to say that certain variables are the “Leading Variables”, while the others are the “Free Variables”. It is taught today that the Free Variables can also be made our parameters, so that the set of values for the Leading Variables follows from those parameters. But wait: Should it not be arbitrary, for certain combinations of variables, which follows from which?

The answer is that, if we simply use Gauss-Jordan Elimination, and if two variables are connected as having possibly infinite combinations of values, then it will always be the variables stated earlier in the equations that become the Leading Variables, while the ones stated later become the Free Variables. We could restate the entire system with the variables in some other order, and then, surely enough, the variable that used to be a Free one will have become a new Leading one, and vice-versa. (And if we do so, the parametric equations for the other Leading Variables will generally also change.)
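This behaviour can be demonstrated with a small sketch, assuming SymPy (the posting itself names no tool for this step): rref() performs Gauss-Jordan Elimination and reports the pivot columns, which correspond to the Leading Variables.

```python
import sympy as sp

# A hypothetical 2-equation system in (x, y, z) with infinitely
# many solutions.  Pivot columns = Leading Variables.
A = sp.Matrix([
    [1, 2, 1, 4],   # x + 2y + z = 4
    [2, 4, 3, 9],   # 2x + 4y + 3z = 9
])
R, pivots = A.rref()
print(pivots)   # columns 0 and 2 pivot: x and z lead, y is free

# Restate the same system with the variables reordered as (y, x, z):
B = sp.Matrix([
    [2, 1, 1, 4],
    [4, 2, 3, 9],
])
R2, pivots2 = B.rref()
print(pivots2)  # columns 0 and 2 again: now y leads and x is free
```

The formerly Free variable (y), once stated first, becomes Leading, and x becomes Free, just as described above.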

As of 02/15/2018:

This is an observation which I once made, based on certain exercises in Linear Algebra, as taught, which had been simplified in the way I described. Eventually, systems will come up in the real world in which a Free Variable actually precedes a Leading Variable, both in the order the variables get mentioned in the equations, and according to the order of the matrix-columns. In such a case, a row with a single 1 in the column of that Free Variable will not occur, so that it also cannot be added to or subtracted from what will be the earlier row.

The latter observation follows from the fact that such systems have infinitely many solutions:

  1. If the Free Variables are simply given a set of values that follows from the solution-set, then the Leading Variables would still need to have definite values, as defined by the same solution-matrix;
  2. If the Free Variables are given a set of values that no longer follows from the solution-set, it will not follow that some different set of values for the Leading Variables will make such extraneous solutions viable.

End of edit, 02/15/2018 )

The order of the columns has become the order of discovery.

This could also have ramifications for Quantum Mechanics, where matrices are sometimes used. QM used matrices at first in an effort to be empirical, and to acknowledge that we as Humans can only observe a subset of the properties which particles may have. And then what happens in QM is that some of the matrices used are computed to have Eigenvalues, and if those turn out to be real numbers, they are also thought to correspond to observable properties of particles, while complex Eigenvalues are stated – modestly enough – not to correspond to observable properties of the particle.

Even though this system seems straightforward, it is not foolproof. According to Classical Principles, a Magnetic North Pole corresponds to a pole around which an assumed current is always flowing in one arbitrary direction, clockwise or counter-clockwise. It should follow, then, that from a different perspective, a current which was flowing clockwise before should always be flowing counter-clockwise. And yet, according to QM, monopoles should exist.


Self-Educating about Perpendicular Matrices with Complex Elements

One of the key reasons for which my class was taught Linear Algebra, including how to compute the Eigenvalues and Eigenvectors of Matrices, was so that we could Diagonalize Symmetrical Matrices, in Real Numbers. What this did was to compute the ‘Perpendicular Matrix’ of a given matrix, in which each column was one of its Eigenvectors, and which was an example of an Orthogonal Matrix. (It might be the case that what was once referred to as a Perpendicular Matrix may now be referred to as the Orthogonal Basis of the given matrix?)

(Edit 07/04/2018 :

In fact, what we were taught, is now referred to as The Eigendecomposition of a matrix. )

Having computed the perpendicular matrix P of M, it was known that the matrix product

Pᵀ M P = D,

which gives a Diagonal Matrix ‘D’. But a key problem my Elementary Linear class was not taught to solve was what to do if ‘M’ had complex Eigenvalues. In order to be taught that, we would have needed to be taught, in general, how to combine Linear Algebra with Complex Numbers. After that, the Eigenvectors could have been computed as easily as before, using Gauss-Jordan Elimination.
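A minimal sketch of the diagonalization, assuming NumPy rather than Maxima: eigh() returns the Eigenvalues of a symmetric matrix, together with an orthogonal matrix P whose columns are its Eigenvectors.

```python
import numpy as np

# A hypothetical symmetric matrix M, with eigenvalues 1 and 3.
M = np.array([
    [2.0, 1.0],
    [1.0, 2.0],
])
vals, P = np.linalg.eigh(M)   # columns of P are the eigenvectors

D = P.T @ M @ P               # P^T M P = D, the diagonal of eigenvalues
print(np.round(D, 10))                   # off-diagonal entries vanish
print(np.allclose(P.T @ P, np.eye(2)))   # P is orthogonal -> True
```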

I have brushed up on this in my old Linear Algebra textbook, whose last chapter writes about Complex Numbers. Key facts which need to be understood about Complex Vector Spaces are:

  • The Inner Product needs to be computed differently from before, in a way that borrows from the fact that complex numbers naturally have conjugates. It is now the sum of each element of one vector, multiplied by the conjugate of the corresponding element of the other vector.
  • Orthogonal and Symmetrical Matrices are relatively unimportant with Complex Elements.
  • A special operation is defined for matrices, called the Conjugate Transpose, A* .
  • A Unitary Matrix now replaces the Orthogonal Matrix, such that A⁻¹ = A* .
  • A Hermitian Matrix now replaces the Symmetrical Matrix, such that A = A* , and the elements along the main diagonal are Real. Hermitian Matrices are also easy to recognize by inspection.
  • Not only Hermitian Matrices can be diagonalized. They have a superset, known as Normal Matrices, such that A A* = A* A . Normal Matrices can be diagonalized.
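These facts can be spot-checked with a short sketch, assuming NumPy (np.vdot conjugates its first argument, giving the complex Inner Product; the matrix A below is a hypothetical Hermitian example):

```python
import numpy as np

# Complex inner product: sum of conj(u_i) * v_i.
u = np.array([1 + 1j, 2.0])
v = np.array([1j, 1.0])
print(np.vdot(u, v))          # (1-1j)*1j + 2*1 = 3+1j

# A Hermitian matrix: A equals its Conjugate Transpose A*,
# and its main diagonal is real.
A = np.array([
    [2.0,    1 + 1j],
    [1 - 1j, 3.0   ],
])
A_star = A.conj().T
print(np.allclose(A, A_star))                 # True: A is Hermitian

# Every Hermitian matrix is also Normal: A A* = A* A.
print(np.allclose(A @ A_star, A_star @ A))    # True

# eigh() diagonalizes A with a Unitary matrix U, where U^-1 = U*.
vals, U = np.linalg.eigh(A)
print(np.allclose(np.linalg.inv(U), U.conj().T))  # True: U is unitary
print(vals)   # the eigenvalues of a Hermitian matrix are real
```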

This could all become important in Quantum Mechanics, considering the general issue known to exist, by which the bases that define how particles can interact somehow need to be multiplied by complex numbers, in order to describe accurately how particles do interact.
