The Difference Between a Quartic, and a Quadric

I’ve talked to people who did not distinguish between a Quartic and a Quadric.

The following is a Quartic:

y = ax⁴ + bx³ + cx² + dx + e

It follows in the sequence from a linear equation, through a quadratic, through a cubic, to arrive at the quartic. What follows it is called a “Quintic”.

The following is a Quadric:

a₁x² + a₂y² + a₃z² + a₄(xy) + a₅(yz) + a₆(xz) + a₇x + a₈y + a₉z – C = 0

The main reason quadrics are important is that they represent 3D shapes such as Hyperboloids and Ellipsoids, as well as shapes which are Mathematically significant but mundanely insignificant, which flare away from one axis out of three, while remaining symmetrical about the other two.

If the first-order terms of a quadric are zero, then the mixed terms merely represent rotations of these shapes, while, if the mixed terms are also zero, these shapes are aligned with the 3 axes. Thus, if (C) were simply equal to (5), the signs of the 3 single, squared terms, by themselves, would determine the shape:

+x² +y² +z² = C : Ellipsoid.

+x² –y² –z² = C : Hyperboloid (of two sheets).

+x² +y² –z² = C : ‘That strange shape’ (a Hyperboloid of one sheet).


The way in which quadrics can be manipulated with Linear Algebra is of some interest. We can have a regular column vector (X), which represents a point in the coordinate system, and we can state the transpose of the same vector, (Xᵀ), which forms the corresponding row vector. In that case, the quadric can also be stated by the matrix product:

Xᵀ M X = C
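Assuming (X) holds only the point’s coordinates, this product covers the second-order part of the quadric (the first-order terms would require homogeneous, 4-component coordinates). A minimal NumPy sketch, with arbitrary example coefficients, confirms that a symmetric matrix (M) reproduces the squared and mixed terms, provided each mixed coefficient is split in half across the two matching off-diagonal entries:

```python
import numpy as np

# Purely second-order quadric:  a1*x^2 + a2*y^2 + a3*z^2
#                             + a4*xy  + a5*yz  + a6*xz  = C
# (the coefficient values are arbitrary examples)
a1, a2, a3, a4, a5, a6 = 1.0, 2.0, 3.0, 0.5, -0.25, 0.75

# Each mixed coefficient is split in half across the two
# symmetric off-diagonal entries of M.
M = np.array([
    [a1,     a4 / 2, a6 / 2],
    [a4 / 2, a2,     a5 / 2],
    [a6 / 2, a5 / 2, a3    ],
])

X = np.array([1.0, -2.0, 0.5])          # an arbitrary point (x, y, z)
x, y, z = X

direct = (a1 * x**2 + a2 * y**2 + a3 * z**2
          + a4 * x * y + a5 * y * z + a6 * x * z)
via_matrix = X @ M @ X                   # the product  X^T M X

print(direct, via_matrix)                # the two values agree
```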

(Updated 1/13/2019, 21h35 : )


Understanding the 2×2 Rotation Matrix

When students have taken their first Linear Algebra course, they should have been taught that a column vector can be multiplied by a matrix, resulting in another column vector. They should also have been taught that, when matrices are used to multiply a column vector more than once, to result in a final column vector, the operation proceeds from right to left, and that because the operation is associative, the matrices which do so can themselves be multiplied into a single matrix. This works as long as the number of rows of each right-hand matrix equals the number of columns of the matrix to its left.
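A short NumPy sketch of that associativity (the matrix sizes here are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2, 3))   # left-hand matrix: 2 rows, 3 columns
B = rng.random((3, 4))   # right-hand matrix: its 3 rows match A's 3 columns
v = rng.random(4)        # the input column vector

# Applying B first, then A ...
step_by_step = A @ (B @ v)

# ... equals applying the single combined matrix (A @ B):
combined = (A @ B) @ v

print(np.allclose(step_by_step, combined))  # True
```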

One subject which does not usually get taught in beginning Linear Algebra courses is that, when the input and output vectors belong to the same coordinate system, a matrix is equally capable of defining a rotation. What tends to get taught first are transformations that scale or skew the space, while keeping lines parallel.

The worksheet below is intended to show that the correct choice of elements in a matrix can also define a rotation:
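As a minimal numerical sketch of the same fact, using NumPy (the 30⁰ angle is an arbitrary example):

```python
import numpy as np

theta = np.radians(30.0)  # rotate by 30 degrees, counter-clockwise

# The standard 2x2 rotation matrix:
R = np.array([
    [np.cos(theta), -np.sin(theta)],
    [np.sin(theta),  np.cos(theta)],
])

v = np.array([1.0, 0.0])  # a unit vector along the x-axis
rotated = R @ v

# The length is preserved, and the vector now sits 30 degrees
# above the x-axis:
print(rotated)                                         # ≈ [0.866, 0.5]
print(np.degrees(np.arctan2(rotated[1], rotated[0])))  # ≈ 30.0
```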


(Edit 12/30/2018, 10h30 : )

The work-sheet has been updated, also to give a hint as to how 3D Euler Angles may be translated into a 3×3 matrix.

(Edit 1/4/2019, 8h05 : )

I have now created a version of the same work-sheet, which can be viewed on a smart-phone. Please excuse the formatting errors which result:

Work-Sheet for Phones



A Clarification on Polynomial Approximations… Refining the Exercise

Some time ago, I posted an idea on how the concept of a polynomial approximation can be simplified, in terms of the real-time Math that needs to be performed, in order to produce 4x oversampling, in which the positions of the interpolated samples with respect to time are fixed.

In order for the reader to understand the present posting, which is a reiteration, he or she would need to read the posting I linked to above. Without reading that posting, the reader will not understand the matrix which I included below.

There was something clearly wrong with the idea I linked to above, but what is wrong is not the fact that I computed, or assumed the usefulness of, a product between two matrices. What is wrong with the idea as first posted is that the order of the approximation is only 4, thus implying a polynomial of the 3rd degree. This is a source of poor approximations close to the Nyquist Frequency.

But as I wrote before, the idea of using anything based on polynomials can be extended to 7th-order approximations, which imply polynomials of the 6th degree. Further, there is no reason why a 7×7 matrix cannot be pre-multiplied by a 3×7 matrix. The result will simply be another 3×7 matrix.

Hence, if we were to assume that such a matrix is to be used, this is the worksheet which computed what that matrix would have to be:


The way this would be used in a practical application is that a vector of input samples is formed, corresponding to

t = [ -3, -2, -1, 0, +1, +2, +3 ]

And the interpolated results would correspond to

t = [ 0, 1/4, 1/2, 3/4 ]

Further, the interpolation at t = 0 does not need to be recomputed, as it was already provided by the 4th element of the input vector. So the input vector would only need to be multiplied by the suggested matrix, to arrive at the other 3 values. After that, a new sample can be shifted in as the new 7th element of the vector, while the old 1st element is dropped, so that the next 3 interpolated samples can be computed.
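A hedged sketch of how such a 3×7 matrix could be computed and applied, using NumPy rather than the worksheet’s computer algebra: the matrix is derived from a 7×7 Vandermonde matrix, and then checked against a polynomial that a 6th-degree fit reproduces exactly. The test function is an arbitrary example.

```python
import numpy as np

t_in = np.arange(-3, 4, dtype=float)   # the 7 sample instants, -3 .. +3
t_out = np.array([0.25, 0.5, 0.75])    # the 3 interpolated instants

# 7x7 Vandermonde matrix: maps polynomial coefficients to
# the 7 sample values; its inverse fits a 6th-degree
# polynomial through any 7 input samples.
V = np.vander(t_in, 7, increasing=True)

# 3x7 matrix of powers of the output instants, which evaluates
# a coefficient vector at t = 1/4, 1/2, 3/4.
E = np.vander(t_out, 7, increasing=True)

# The pre-multiplied 3x7 matrix described in the text:
M = E @ np.linalg.inv(V)

# Check against a signal the approximation reproduces exactly:
# any polynomial of degree <= 6, here f(t) = t^3 - 2t + 1.
f = lambda t: t**3 - 2 * t + 1
samples = f(t_in)                  # the 7-element input vector
interpolated = M @ samples         # the 3 in-between values

print(interpolated)
print(f(t_out))                    # the two rows agree
```

In a streaming implementation, only M @ samples would run per input sample; M itself is computed once, offline.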

This would be an example of an idea which does not work out well according to a first approximation, but which will produce high-quality results, when the method is applied more rigorously.



About +90⁰ Phase-Shifting

I have run into people who believe that a signal cannot be phase-advanced in real-time, only phase-delayed. And as far as I can tell, this idea stems from the misconception that, in order for a signal to be given a phase-advance, some form of prediction would be needed. The fact that this is not true can best be visualized when we take an analog signal, and derive another signal from it, which is the short-term derivative of the first signal. ( :1 ) Because the derivative is most positive at points where the input has its most positive slope, and zero where the input is at its peak, we would already have derived, for an input sine-wave, an output sine-wave that is phase-advanced 90⁰ with respect to it.


But the main reason this is not done is the fact that a short-term derivative also acts as a high-pass filter, whose output amplitude doubles for every octave of frequency.
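Both facts, the 90⁰ advance produced by differentiation and the doubling of amplitude per octave, can be checked numerically. The sample rate and test frequencies below are arbitrary assumptions:

```python
import numpy as np

fs = 48000.0                       # assumed sample rate
t = np.arange(0, 0.01, 1 / fs)

f = 1000.0                         # a 1 kHz test tone
x = np.sin(2 * np.pi * f * t)

# A short-term derivative, normalized so that a 1 kHz tone
# keeps roughly unit amplitude:
dx = np.gradient(x, t) / (2 * np.pi * f)

# The derivative of sin is cos, i.e. the same wave advanced 90 degrees:
advanced = np.sin(2 * np.pi * f * t + np.pi / 2)
print(np.allclose(dx[10:-10], advanced[10:-10], atol=5e-3))  # True, away from the edges

# Doubling the frequency doubles the derivative's amplitude,
# which is why a plain differentiator also acts as a high-pass filter:
x2 = np.sin(2 * np.pi * 2 * f * t)
dx2 = np.gradient(x2, t) / (2 * np.pi * f)
print(dx2.max())                   # ≈ 2.0
```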

What can be done in the analog domain, however, is that a signal can be phase-delayed 90⁰ with the frequency response kept uniform, and then simply inverted. The phase-diagram of each of the signal’s frequency components will then show that the entire signal has been phase-advanced 90⁰.


(Updated 11/29/2017 : )
