Polynomial Interpolation: Practice Versus Theory

I have posted several times that it is possible to pre-compute a matrix such that multiplying a set of input samples by it yields the coefficients of a polynomial, and that a fractional position within the center-most interval of that polynomial can then be evaluated (as a polynomial) to arrive at a smoothing function. There is a difference, however, between how I represented this subject and how it would actually be implemented.

I assumed non-negative values for the time parameter, from 0 to 3 inclusive, such that the interval from 1 … 2 can be smoothed. This might work well for degrees up to 3, i.e. for orders up to 4. But in order to compute the matrices accurately, even using computers, when the degree of the polynomial is anything greater than 3, it makes sense to assume x-coordinates centered on zero, such as -1 … +2 or -3 … +3. The reason is that we can instruct a computer to divide by 3 to the 6th power more accurately than by 6 to the 6th power.

And then in general, the evaluation of the polynomial will take place over the interval 0 … +1 .

The results can easily be shifted anywhere along the x-axis, as long as we only interpolate within the interval closest to the center. But the computation of the inverse matrix cannot be shifted in the same way; it is tied to the x-coordinates chosen above.
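
To make that concrete, here is a minimal numerical sketch (Python with NumPy, written for this note rather than taken from any implementation). It builds the matrix of powers for a degree-6 polynomial twice, once with x-coordinates 0 … 6 and once with the centered coordinates -3 … +3, compares how well each matrix inverts, and then evaluates the centered polynomial at a fractional position over 0 … +1. The sample values are invented for illustration.

```python
import numpy as np

def power_matrix(xs):
    """One row of (1, x, x^2, ..., x^degree) per sample point."""
    xs = np.asarray(xs, dtype=float)
    return np.vander(xs, len(xs), increasing=True)

X_plain = power_matrix(range(0, 7))     # x = 0 ... 6, entries grow as large as 6^6
X_centred = power_matrix(range(-3, 4))  # x = -3 ... +3, entries only grow as large as 3^6

print(np.linalg.cond(X_plain))    # larger condition number: less accurate inverse
print(np.linalg.cond(X_centred))  # smaller condition number: more accurate inverse

# Given 7 hypothetical samples y, the coefficients are a = X^-1 * y, and the
# polynomial is then evaluated at fractional positions t over 0 ... +1.
y = np.sin(np.linspace(-3.0, 3.0, 7))   # invented input samples
a = np.linalg.solve(X_centred, y)       # polynomial coefficients
t = 0.25                                # a fractional position within 0 ... +1
print(sum(c * t**k for k, c in enumerate(a)))
```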

My Example

Also, if it was our goal to illustrate the system to a reader who is not used to math, then the hardest fact to prove would be that the matrix of terms still has a non-zero determinant, and is therefore still invertible, when some of its terms are negative, just as it was before.


A Cheapo Idea To Throw Out There, On Digital Oversampling

In This Posting, I elaborated at length on polynomial approximation that is not overdetermined, but rather exactly defined, by a set of unknown (y) values at a set of known time-coordinates (x). Just to summarize: if the sample time-points are known to be the arbitrary x-coordinates 0, 1, 2 and 3, then the matrix (X1) can state the powers of these coordinates, and if additionally the vector (A) states the coefficients of a polynomial, then the product ( X1 * A ) produces the four y-values as the vector (Y).

X1 can be computed before the algorithm is ever run, and its inverse, ( X1^-1 ), is such that ( X1^-1 * Y = A ). Hence, given a prepared matrix, one linear multiplication can easily derive a set of coefficients from a set of variable y-values.
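
A minimal sketch of that relationship, in Python with NumPy; the coefficient and sample values are invented for illustration:

```python
import numpy as np

# X1, A and Y as named above, for sample points at x = 0, 1, 2, 3.
X1 = np.vander([0.0, 1.0, 2.0, 3.0], 4, increasing=True)  # rows of (1, x, x^2, x^3)

A = np.array([2.0, -1.0, 0.5, 0.25])   # invented cubic coefficients
Y = X1 @ A                             # the four sample values of that cubic

X1_inv = np.linalg.inv(X1)             # prepared once, ahead of time
print(np.allclose(X1_inv @ Y, A))      # True: X1^-1 * Y recovers the coefficients
```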

Well, this idea can be driven further. There could be another arbitrary set of x-coordinates, 1.0, 1.25, 1.5, 1.75, which are meant to be a 4-point interpolation within the first set. Another matrix, called (X2), could then be prepared before the algorithm is committed, stating the powers of this sequence, so that ( X2 * A = Y' ), where (Y') is the set of interpolated samples.

What follows from this is that ( X2 * X1^-1 * Y = Y' ). But wait a moment. Before the algorithm is ever burned onto a chip, the matrix ( X2 * X1^-1 ) can be computed by Human programmers. We could call that our constant matrix (X3).

So a really cheap interpolation scheme could start with a vector of 4 samples (Y), and derive the 4 interpolated samples (Y') just by doing one matrix-multiplication ( X3 * Y = Y' ). It would just happen that

Y'[1] = Y[2]

And so we could already guess off the top of our heads, that the first row of X3 should be equal to ( 0, 1, 0, 0 ).
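
Here is a short sketch of that pre-computation, again in Python with NumPy and with an invented sample vector. It builds X1 and X2 as described, forms X3 = X2 * X1^-1, and confirms that the first row comes out as ( 0, 1, 0, 0 ). (Python indexes from 0, so the statement Y'[1] = Y[2] above appears below as Y_prime[0] matching Y[1].)

```python
import numpy as np

X1 = np.vander([0.0, 1.0, 2.0, 3.0], 4, increasing=True)    # powers of the input positions
X2 = np.vander([1.0, 1.25, 1.5, 1.75], 4, increasing=True)  # powers of the interpolation positions

X3 = X2 @ np.linalg.inv(X1)   # computable long before the algorithm ever runs

print(np.round(X3[0], 6))     # first row: ( 0, 1, 0, 0 ), up to rounding error

Y = np.array([0.0, 0.3, 0.8, 0.5])   # invented input samples
Y_prime = X3 @ Y                     # all four interpolated samples from one multiplication
print(np.isclose(Y_prime[0], Y[1]))  # True: the first interpolated sample is the sample at x = 1
```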

While this idea would certainly be considered obsolete by today's standards, it corresponds roughly to the amount of computational power a single digital chip could have delivered in real time, in the late 1980s.

I suppose that an important question to ask would be, ‘Aside from just stating that this interpolation smooths the curve, what else does it cause?’ And my answer would be that although it allows (undesirable) aliasing of frequencies to occur during playback when the encoded frequencies are close to the Nyquist Frequency, very little aliasing takes place if the encoded frequencies are about 1/2 that or lower. And so, over most of the audible spectrum, this will still act as a kind of low-pass filter, even though over-sampling has taken place.

Dirk


A Note on Polynomial Fitting

In This Posting, I wrote that Polynomial Smoothing / Fitting / Approximation can be used in conjunction with a Sinc Filter, to convert Sample-Rates. I should note that sometimes, Polynomial functions can be used by themselves.

This could be an explanation for how early CD players eventually replaced the sinc filter with “Mathematical Sound Shaping” (M.A.S.H.). The sinc filters were much maligned for their lack of immediacy, while in their place, M.A.S.H. suffered from some level of distortion at frequencies near the Nyquist Frequency.

Further, experiments I have yet to carry out with ‘QTractor’ may show that it will convert

44.1 kHz -> 48 kHz

using the exact same method as

48 kHz -> 44.1 kHz

Assuming, that is, that I set the global filtering method to a (slower) non-default setting.

Dirk

 

A Note on Sample-Rate Conversion Filters

One type of (low-pass) filter which I learned about some time ago is the sinc filter. And by now, I have forgiven the audio industry for placing the cutoff frequencies of various sinc filters directly at the relevant Nyquist Frequency. Apparently, it does not bother them that a sinc filter passes the cutoff frequency itself at an amplitude of 1/2, and that a sampled audio stream can therefore result with signal energy directly at its Nyquist Frequency.
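
That 1/2-amplitude behaviour at the cutoff is easy to check numerically. The sketch below (Python with NumPy) designs an arbitrary 257-tap Hamming-windowed sinc filter, purely for illustration, and evaluates its gain at DC, at its own cutoff, and well into the stopband:

```python
import numpy as np

# An arbitrary 257-tap low-pass, built as a Hamming-windowed sinc with its
# cutoff at 0.25 cycles/sample (i.e. half of the Nyquist frequency of 0.5).
cutoff = 0.25
taps = 257
n = np.arange(taps) - (taps - 1) / 2
h = 2.0 * cutoff * np.sinc(2.0 * cutoff * n)   # ideal sinc impulse response
h *= np.hamming(taps)                          # truncate it gracefully
h /= h.sum()                                   # unity gain at DC

def gain_at(f):
    """Magnitude of the filter's frequency response at f cycles/sample."""
    return abs(np.sum(h * np.exp(-2j * np.pi * f * np.arange(taps))))

print(gain_at(0.0))      # ~1.0: the passband
print(gain_at(cutoff))   # ~0.5: the cutoff frequency itself is only half-attenuated
print(gain_at(0.4))      # ~0.0: well into the stopband
```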

There are more details about sinc filters worth knowing that are relevant to the Digital Audio Workstation named ‘QTractor’, as well as to other DAWs. Apparently, if we want to resample an audio stream from 44.1 kHz to 48 kHz, in theory this corresponds to a “rational” conversion of 147:160, which means that if our low-pass filter is supposed to be a sinc filter, it would need to have 160 * (n) coefficients in order to work ideally.
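
The quoted ratio can be confirmed in a couple of lines of Python:

```python
from fractions import Fraction

# Confirming the 147:160 figure quoted above.
print(Fraction(48000, 44100))      # 160/147: 160 output samples per 147 input samples
print(44100 * 160 == 48000 * 147)  # True: both equal the common rate of 7,056,000 Hz
```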

But, since audio experts are not usually serious about devising such a filter, what they will try next in such a case is just to oversample the original stream by some reasonable factor, such as a factor of 4 or 8, then to apply the sinc filter at this higher sample rate, and after that to achieve the down-sampling by just picking out samples whose sample-numbers have been rounded down. This is also referred to as an “Arbitrary Sample-Rate Conversion”.

Because one oversampled interval then corresponds to only 1/4 or 1/8 of the real sampling interval of the source, the artifacts can be reduced in this way. Yet, this use of a sinc filter is known to produce some loss of accuracy, because the oversampling factor is finite, and that sets a limit on quality.
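
As a sketch of the index-picking step just described (Python with NumPy; the function and variable names are invented, and the oversampling and sinc filtering are assumed to have been done already):

```python
import numpy as np

def pick_nearest_below(oversampled, oversample_factor, fs_in, fs_out):
    """Arbitrary-rate conversion by choosing, for each output instant,
    the oversampled value whose (fractional) index was rounded down."""
    over_rate = fs_in * oversample_factor
    n_out = int(len(oversampled) * fs_out / over_rate)
    # Output sample k lands at time k / fs_out; express that in oversampled indices.
    idx = np.floor(np.arange(n_out) * over_rate / fs_out).astype(int)
    return oversampled[idx]

# Invented usage: one second of 44.1 kHz audio, already oversampled by 8
# (to 352.8 kHz) and low-pass filtered, being converted to 48 kHz.
oversampled = np.random.randn(352800)   # stand-in for the filtered, oversampled stream
out = pick_nearest_below(oversampled, 8, 44100.0, 48000.0)
print(len(out))                         # 48000 samples
```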

Now, I have read that a type of filter also exists, which is called a “Farrow Filter”. But personally, I know nothing about Farrow Filters.

As an alternative to cherry-picking samples at rounded-down positions, it is possible to perform a polynomial smoothing of the oversampled stream (after applying a sinc filter, if set to the highest quality), and then to ‘pick’ points along the (now continuous) polynomial that correspond to the output sampling rate. This can be reduced to a system of linear equations, in which the powers of the known input-stream positions become the constants, and the variable input samples become the values that those constants multiply. At some computational penalty, it should be possible to reduce output artifacts greatly.
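
Here is a sketch of that alternative, under the same assumptions as the previous snippet, using a 4-point cubic fitted through the x-coordinates -1 … +2 via a pre-computed inverse matrix, in the spirit of the matrices discussed earlier; all names and parameters are illustrative:

```python
import numpy as np

# Pre-computed once: the inverse of the matrix of powers for x = -1, 0, +1, +2,
# so that (cubic coefficients) = INV @ (4 neighbouring samples), with the
# centre-most interval of the cubic evaluated at t in 0 ... 1.
INV = np.linalg.inv(np.vander([-1.0, 0.0, 1.0, 2.0], 4, increasing=True))

def polynomial_pick(oversampled, oversample_factor, fs_in, fs_out):
    over_rate = fs_in * oversample_factor
    n_out = int(len(oversampled) * fs_out / over_rate)
    out = np.empty(n_out)
    for k in range(n_out):
        pos = k * over_rate / fs_out                 # exact position, in oversampled indices
        i = int(np.floor(pos))                       # oversampled sample just below that position
        i = min(max(i, 1), len(oversampled) - 3)     # keep the 4-sample window in range
        t = pos - i                                  # fractional position within the centre interval
        a = INV @ oversampled[i - 1 : i + 3]         # cubic through the 4 surrounding samples
        out[k] = a[0] + t * (a[1] + t * (a[2] + t * a[3]))   # evaluate the cubic at t
    return out

# Invented usage, mirroring the previous snippet (44.1 kHz oversampled by 8, to 48 kHz):
oversampled = np.random.randn(352800)
print(len(polynomial_pick(oversampled, 8, 44100.0, 48000.0)))   # 48000 samples
```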
