I’ve talked to people who did not distinguish between a Quartic and a Quadric.
The following is a Quartic:
y = ax⁴ + bx³ + cx² + dx + e
It follows in the sequence from a linear equation, through a quadratic and a cubic, to arrive at the quartic. What follows it is called a “Quintic”.
The following is a Quadric:
a₁x² + a₂y² + a₃z² + a₄(xy) + a₅(yz) + a₆(xz) + a₇x + a₈y + a₉z - C = 0
The main reason quadrics are important is the fact that they represent 3D shapes such as Hyperboloids, Ellipsoids, and a Mathematically significant but mundanely less familiar shape, which radiates away from one axis out of three, while being symmetrical about the other two.
If the first-order terms of a quadric are zero, then the mixed terms merely represent rotations of these shapes, while, if the mixed terms are also zero, these shapes are aligned with the 3 axes. Thus, if (C) were simply some positive constant, such as (5), and if the signs of the 3 single, squared terms, by themselves, were:
+x² +y² +z² = C : Ellipsoid .
+x² -y² -z² = C : Hyperboloid (of two sheets) .
+x² +y² -z² = C : ‘That strange shape’ (a hyperboloid of one sheet) .
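As a small sketch of the idea (the helper name is my own invention, not anything standard), the sign pattern of the three squared terms can be read off programmatically, assuming (C) is positive:

```python
# Hypothetical helper: classify an axis-aligned quadric
# s_x*x^2 + s_y*y^2 + s_z*z^2 = C (with C > 0) by the signs of its terms.
def classify_quadric(signs):
    pos = sum(1 for s in signs if s > 0)
    neg = sum(1 for s in signs if s < 0)
    if pos == 3:
        return "Ellipsoid"
    if pos == 1 and neg == 2:
        return "Hyperboloid of two sheets"
    if pos == 2 and neg == 1:
        return "Hyperboloid of one sheet"   # 'that strange shape'
    return "Other / degenerate"

print(classify_quadric((+1, +1, +1)))  # Ellipsoid
print(classify_quadric((+1, -1, -1)))  # Hyperboloid of two sheets
print(classify_quadric((+1, +1, -1)))  # Hyperboloid of one sheet
```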
The way in which quadrics can be manipulated with Linear Algebra is of some curiosity: we can have a regular column vector (X), which represents a coordinate system, and we can state the transpose of the same vector, (Xᵀ), which forms the corresponding row vector for the same coordinate system. In that case, the quadric can also be stated by the matrix product:
Xᵀ M X = C
(Updated 1/13/2019, 21h35 : )
(As of yesterday : )
Where (C) is really just a simplification of something else, yet Mathematically valid: it implies that all the vector elements resulting from the matrix multiplication should usually add up to (+1). Therefore, if we had to state the quadric:
+x² -2y² +2(yz) -z² +4(xz) = 1
As the matrix (M), this would be the matrix which follows:
| +1   0  +2 |
|  0  -2  +1 |
| +2  +1  -1 |
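A quick sketch of how that matrix is assembled (each mixed-term coefficient is split evenly between the two symmetric off-diagonal entries), together with a numeric check that Xᵀ M X reproduces the quadric at a sample point:

```python
import numpy as np

# Symmetric matrix M for the quadric x^2 - 2y^2 + 2(yz) - z^2 + 4(xz) = 1.
# Diagonal entries are the squared-term coefficients; each mixed-term
# coefficient is halved into the two matching off-diagonal entries.
M = np.array([[ 1.0,  0.0,  2.0],
              [ 0.0, -2.0,  1.0],
              [ 2.0,  1.0, -1.0]])

# Check X^T M X against the left-hand side at an arbitrary point.
x, y, z = 0.5, -1.0, 2.0
X = np.array([x, y, z])
lhs = x**2 - 2*y**2 + 2*y*z - z**2 + 4*x*z
assert np.isclose(X @ M @ X, lhs)
```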
And then, it would be possible to compute the Perpendicularization (P) of (M) through “Eigendecomposition”, yielding:
| -0.8313  -0.4449  -0.3333 |
| -0.1258   0.7347  -0.6667 |
| -0.5415   0.5122   0.6667 |
With the intent that:
D = Pᵀ M P
| +2.3028   0.0000   0.0000 |
|  0.0000  -1.3028   0.0000 |
|  0.0000   0.0000  -3.0000 |
X = P Y
Yᵀ Pᵀ M P Y = C
Yᵀ D Y = C
Where (D) is a diagonal matrix, hence, a matrix describing the same shape aligned with the axes of (Y), which results when the coordinate system has been rotated by (P). What is interesting is that through computing (P), we can find the signs of the diagonal, non-zero elements of (D), and therefore, finally, determine the shape in (M). Because as long as there are mixed terms, it’s ambiguous what shape is defined; the signs of the single, squared terms by themselves are not enough to do so.
Therefore, in the above example, the shape is a Hyperboloid ( + – – ), i.e. of two sheets.
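The same decomposition can be sketched in NumPy (my substitution; the post itself uses Maxima). For a symmetric (M), np.linalg.eigh returns the eigenvalues in ascending order together with an orthonormal matrix of column eigenvectors, which plays the role of (P) here, though the column ordering differs from the matrices shown above:

```python
import numpy as np

# The matrix of the quadric x^2 - 2y^2 + 2(yz) - z^2 + 4(xz) = 1.
M = np.array([[ 1.0,  0.0,  2.0],
              [ 0.0, -2.0,  1.0],
              [ 2.0,  1.0, -1.0]])

vals, P = np.linalg.eigh(M)   # eigenvalues ascending; columns of P are unit eigenvectors
D = P.T @ M @ P               # diagonal, up to rounding error

# vals is approximately [-3.0, -1.3028, +2.3028]: one positive and two
# negative signs, hence the Hyperboloid ( + - - ) named in the post.
```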
If the first-order terms are also non-zero, then we have a translation, as well as the assumed rotation, of (X) with respect to (Y)… We compute the rotation first, from the mixed and the squared terms, assuming that it is still a pure rotation. And then, we use (P) to rotate the first-order terms, arriving at derived first-order terms that are valid within (Y), not within (X), as the original first-order terms were:
Y = Pᵀ X
And then, since (D) has no mixed terms, we can add the rotated first-order terms to the equation with (D), and ‘complete the square’ 3 times, in order to find the translation according to (Y)…
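A sketch of those two steps, under assumed notation: the quadric is taken as Xᵀ M X + aᵀ X = C, with (a) being hypothetical first-order terms I have picked for illustration. Completing the square in each (Y) coordinate then hands over the centre of the shape:

```python
import numpy as np

M = np.array([[ 1.0,  0.0,  2.0],
              [ 0.0, -2.0,  1.0],
              [ 2.0,  1.0, -1.0]])
a = np.array([4.0, -2.0, 6.0])   # hypothetical first-order terms
C = 1.0

lam, P = np.linalg.eigh(M)       # D = diag(lam), with X = P Y
b = P.T @ a                      # first-order terms rotated into (Y)

# lam_i * (y_i + b_i / (2 lam_i))^2 absorbs the linear term, so the
# centre in (Y) coordinates is y_i = -b_i / (2 lam_i), and the constant
# on the right-hand side grows by b_i^2 / (4 lam_i) for each coordinate.
centre_Y = -b / (2.0 * lam)
C_shifted = C + np.sum(b**2 / (4.0 * lam))

# The same centre, rotated back into the original (X) coordinates:
centre_X = P @ centre_Y
```

As a cross-check, this centre agrees with the stationary point of the quadric, X = -M⁻¹a / 2, obtained by setting the gradient 2MX + a to zero.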
(Update 1/13/2019, 21h35 : )
One fact which I’ve learned about ‘Eigendecomposition’ is that there exists more than one form of it. But only one specific type of matrix (P) will work for this exercise. This happens to be the type of matrix which, in my ‘Linear 1’ course, was just called a Perpendicular matrix, and we were given specific instructions on how it must be computed.
Each column of (P) must be an Eigenvector of (M), but must also be made a unit vector. This is also called the “Right Eigenvector Matrix”, as opposed to the “Left Eigenvector Matrix”.
When using Maxima to compute (P), the relevant package must be loaded first:

load("lapack");

This last instruction will take very long to execute, when given for the first time by one user, because ‘lapack’ takes a long time to compile. But when that has finished, the function to compute (P) becomes:
PL: dgeev(M, true, false);
‘PL’ will be a list of three elements:
- (Always), The list of Eigenvalues of (M),
- (If the second parameter was set to True), The Right Eigenvectors, else False,
- (If the third parameter was set to True), The Left Eigenvectors, else False.
This exercise suggests the additional question of whether situations that call for the inverse of a matrix allow the use of the transpose instead. The answer is ‘Usually not.’ The exception arises when the matrix is orthonormal, in which case the transpose is also the inverse. In this exercise, if (P) is computed correctly, it will be orthonormal.
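A two-line numeric illustration of that exception (with an arbitrary non-orthonormal matrix of my own choosing for contrast):

```python
import numpy as np

M = np.array([[ 1.0,  0.0,  2.0],
              [ 0.0, -2.0,  1.0],
              [ 2.0,  1.0, -1.0]])
_, P = np.linalg.eigh(M)          # orthonormal columns

print(np.allclose(P.T @ P, np.eye(3)))   # True: here P^T really is P^-1

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])        # invertible, but not orthonormal
print(np.allclose(A.T @ A, np.eye(2)))   # False: A^T is not A^-1
```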
The additional question seems appropriate, of whether the Modal Matrix of (M) can simply be used, with a Gram-Schmidt Orthonormalization performed on it. The short answer is ‘No’. There are two reasons why not:
1. Gram-Schmidt will ensure that all the remaining vectors in the matrix are orthogonal, but not unit vectors,
2. Gram-Schmidt has, as its main idiosyncrasy, that it takes the direction of the first vector read from the matrix as completely accurate, while modifying the directions of the following vectors successively more.
Problem (2) has the side effect that it matters whether the first vector read is actually the first row or the first column of the matrix. Gram-Schmidt goes row by row, even though the (P) used in this exercise needs to be accurate column by column.
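That order dependence is easy to demonstrate. Below is a sketch of classical Gram-Schmidt over the rows of a matrix (with normalization folded in, purely for the demonstration): the first row processed only gets rescaled, while every later row is bent to fit, so feeding the rows in the opposite order produces a genuinely different basis.

```python
import numpy as np

# Classical Gram-Schmidt over the rows of A, normalizing as it goes.
def gram_schmidt_rows(A):
    out = []
    for v in A.astype(float):
        for u in out:
            v = v - (v @ u) * u          # remove the component along u
        out.append(v / np.linalg.norm(v))
    return np.array(out)

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

Q1 = gram_schmidt_rows(A)
Q2 = gram_schmidt_rows(A[::-1])          # same rows, opposite order

# The first processed row keeps its exact direction...
print(np.allclose(Q1[0], A[0] / np.linalg.norm(A[0])))   # True
# ...but the two results are not just reversals of each other.
print(np.allclose(Q1, Q2[::-1]))                         # False
```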