In this posting, I elaborated at length on polynomial approximation that is not overdetermined, but rather exactly determined, by a set of unknown (y) values at a set of known time-coordinates (x). To summarize: if the sample time-points are known to be the arbitrary x-coordinates 0, 1, 2 and 3, then a matrix (X1) can state the powers of these coordinates, and if, additionally, a vector (A) states the coefficients of a polynomial, then the product (X1 * A) produces the four y-values as the vector (Y). X1 can be computed before the algorithm is designed, and its inverse, (X1^-1), is such that (X1^-1 * Y = A). Hence, given a prepared matrix, a single linear multiplication derives a set of coefficients easily from a set of variable y-values.
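As a sketch of that step: the sample points 0, 1, 2, 3 come from the text, while the example y-values and the helper functions are my own. I use exact rational arithmetic so the inverse carries no rounding error.

```python
from fractions import Fraction

def vandermonde(xs):
    # Row i holds the powers x_i^0 .. x_i^3 of sample point x_i.
    return [[Fraction(x) ** k for k in range(4)] for x in xs]

def mat_inv(m):
    # Gauss-Jordan elimination on the augmented matrix [M | I].
    n = len(m)
    aug = [row[:] + [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def mat_vec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

X1 = vandermonde([0, 1, 2, 3])
X1_inv = mat_inv(X1)

# Four arbitrary sample values; A holds the coefficients of the
# unique cubic passing through them (A = X1^-1 * Y).
Y = [Fraction(v) for v in (1, 3, 2, 5)]
A = mat_vec(X1_inv, Y)

# Evaluating the polynomial back at the original x-coordinates recovers Y.
print(mat_vec(X1, A) == Y)   # True
```

The matrix X1 is what linear algebra texts call a Vandermonde matrix; it is invertible whenever the four x-coordinates are distinct, which is what makes the scheme exactly determined rather than overdetermined.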
Well, this idea can be driven further. There could be another arbitrary set of x-coordinates, 1.0, 1.25, 1.5, 1.75, meant as a 4-point interpolation within the first set. Another matrix, (X2), stating the powers of this sequence, could likewise be prepared before the algorithm is committed. Then (X2 * A = Y'), where (Y') is the set of interpolated samples.
What follows from this is that (X2 * X1^-1 * Y = Y'). But wait a moment. Before the algorithm is ever burned onto a chip, the matrix (X2 * X1^-1) can be computed by human programmers. We could call that our constant matrix (X3).
So a really cheap interpolation scheme could start with a vector of 4 samples (Y) and derive the 4 interpolated samples (Y') with a single matrix multiplication: (X3 * Y = Y'). It would just happen that (Y'1 = Y2), because the first interpolation point, x = 1.0, coincides with the second original sample point. And so we could already guess, off the top of our heads, that the first row of X3 should be equal to ( 0, 1, 0, 0 ).
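That guess can be checked directly. The following sketch computes X3 = X2 * X1^-1 from the two coordinate sets given above (the helper functions and the example sample values are mine, again in exact rational arithmetic):

```python
from fractions import Fraction

def vandermonde(xs):
    # Row i holds the powers x_i^0 .. x_i^3 of sample point x_i.
    return [[Fraction(x) ** k for k in range(4)] for x in xs]

def mat_inv(m):
    # Gauss-Jordan elimination on the augmented matrix [M | I].
    n = len(m)
    aug = [row[:] + [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def mat_vec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

X1 = vandermonde([0, 1, 2, 3])
X2 = vandermonde([Fraction(1), Fraction(5, 4), Fraction(3, 2), Fraction(7, 4)])

# The constant matrix a programmer would compute ahead of time:
X3 = mat_mul(X2, mat_inv(X1))

# x' = 1.0 coincides with the sample at x = 1, so the first row of X3
# simply picks out the second input sample.
print(X3[0] == [0, 1, 0, 0])      # True

Y = [Fraction(v) for v in (1, 3, 2, 5)]   # arbitrary example samples
Y_prime = mat_vec(X3, Y)
print(Y_prime[0] == Y[1])         # True
```

At run time, only the final `mat_vec(X3, Y)` step remains: 16 multiplications and 12 additions per block of 4 output samples, which is what makes the scheme so cheap.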
While this idea would certainly be considered obsolete by today's standards, it corresponds roughly to the amount of computational power a single digital chip would have had, in real time, in the late 1980s.
I suppose an important question to ask would be: 'Aside from just stating that this interpolation smooths the curve, what else does it cause?' My answer would be that, although it allows (undesirable) aliasing of frequencies to occur during playback when the encoded frequencies are close to the Nyquist frequency, very little aliasing takes place when the encoded frequencies are about half that or lower. And so, over most of the audible spectrum, this will still act as a kind of low-pass filter, even though over-sampling has taken place.