Polynomial Interpolation: Practice Versus Theory

I have posted several times that it is possible to pre-compute a matrix, such that multiplying a set of input samples by this matrix results in the coefficients of a polynomial, and that next, a fractional position within the center-most interval of this polynomial can be evaluated – as a polynomial – to arrive at a smoothing function. There is a difference, however, between how I represented this subject and how it would actually be implemented.

I assumed non-negative values for the Time parameter, from 0 to 3 inclusive, such that the interval from 1 … 2 can be smoothed. This might work well for degrees up to 3, i.e. for orders up to 4. But in order to compute the matrices accurately, even using computers, when the degree of the polynomial is anything greater than 3, it makes sense to assume x-coordinates from -1 … +2, or from -3 … +3, because it is easier for a computer to divide accurately by 3 to the 6th power than by 6 to the 6th power.
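As a rough, informal check of that claim – a sketch of my own in NumPy, not something taken from my earlier postings – one can compare how well-conditioned the matrix of powers is for the one-sided nodes 0 … 6, versus the symmetric nodes -3 … +3, in the degree-6 case. The better-conditioned matrix is the one whose inverse can be computed more accurately:

```python
import numpy as np

# Matrices of powers (Vandermonde matrices) for a degree-6 polynomial,
# once with one-sided nodes and once with symmetric nodes.
one_sided = np.vander(np.arange(0.0, 7.0), increasing=True)    # nodes 0 ... 6
symmetric = np.vander(np.arange(-3.0, 4.0), increasing=True)   # nodes -3 ... +3

# A smaller condition number means the inversion loses less precision.
print(np.linalg.cond(one_sided))
print(np.linalg.cond(symmetric))   # noticeably smaller
```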

And then in general, the evaluation of the polynomial will take place over the interval 0 … +1 .

The results can easily be shifted anywhere along the x-axis, as long as we only interpolate within the interval closest to the center. But the computation of the inverse matrix cannot be shifted in the same way.

My Example

Also, if it were our goal to illustrate the system to a reader who is not used to Math, then the hardest fact to prove would be that the matrix of terms has a non-zero determinant, and is therefore invertible, when some of the terms are negative, rather than all non-negative as before.

Continue reading Polynomial Interpolation: Practice Versus Theory

There do in fact exist detailed specs about the Scarlett Focusrite 2i2.

One fact which I have written about before is that I own a Scarlett Focusrite 2i2 USB sound device, and that I have tested whether it can be made to work on several platforms not considered standard, such as under Linux with the JACK sound daemon, and under Android.

One fact which has reassured me is that the company Web-site does, by now, publish full specifications for it.

One conclusion which I can reach from this is that the idea of having set my Linux software to a sample-rate of 192kHz was simply a false memory. According to my own earlier blog entry, I only noticed a top sample-rate of 96kHz at the time. And my Android software only offered me a top sample-rate of 48kHz with this device.

The official specs state that its analog input frequency-response is a very high-quality version of 20Hz-20kHz, while its conversion is stated at 96kHz. What this implies is that, when set to deliver audio at 44.1 or 48kHz, it must apply its own internal down-sampling, i.e. a digital low-pass filter, while at 88.2 or 96kHz it must be applying the same analog filter, but not down-sampling its digital stream.

And so, whether we should be using it to record at 96kHz or at 48kHz may depend on whether we think that our audio software will perform the down-sampling using higher-quality filters than the device's internal processing does. But there can be an opposite point of view on that.

Just as some uses of computers see work offloaded from the main CPU to external acceleration hardware, we could just as easily decide that the processing power built in to this external sound device can ease the workload on our CPU. After all, just because I got no buffer underruns during a simple test does not necessarily imply that I would get no sound drop-outs if I were running a complex audio project in real time.

Dirk

(Edit 03/21/2017 : )

Continue reading There do in fact exist detailed specs about the Scarlett Focusrite 2i2.

The Advantage of Linear Filters – ie Convolutions

A question might come to the minds of readers who are not familiar with this subject, as to why the subset of ‘Morphologies’ known as ‘Convolutions’ – i.e. ‘Linear Filters’ – is advantageous in filtering signals.

This is because, even though such a static system of coefficients, applied constantly to the input samples, will often produce spectral changes in the signal, it will not produce frequency components that were not present before. If new frequency components are produced, this is referred to as ‘distortion’, while otherwise all we get are spectral errors – i.e. ‘coloration of the sound’. The latter type of error is gentler on the ear.
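As a small illustration of that point – a sketch of my own in NumPy, with arbitrarily chosen numbers – convolving a pure tone with a static set of coefficients changes its amplitude, but the spectrum still peaks at the original frequency, and no new sinusoidal components appear:

```python
import numpy as np

fs = 48000                                      # an assumed sample-rate
t = np.arange(4096) / fs
tone = np.sin(2.0 * np.pi * 1000.0 * t)         # a 1kHz test tone

coefficients = np.array([0.25, 0.5, 0.25])      # an arbitrary, static FIR kernel
filtered = np.convolve(tone, coefficients, mode='same')

# The peak of the spectrum is still at (approximately) 1kHz; the filter has
# only changed the amplitude, i.e. it has 'colored' the sound.
spectrum = np.abs(np.fft.rfft(filtered * np.hanning(len(filtered))))
peak_bin = int(np.argmax(spectrum))
print(peak_bin * fs / len(filtered))
```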

For this reason, the mere realization that certain polynomial approximations can be converted into systems which produce nothing but linear combinations of the input samples makes those approximations more interesting.

OTOH, if each sampling of a continuous polynomial curve took place at a random, irregular point in time – thus truly revealing it to be a polynomial – then additional errors would be introduced, which might resemble ‘noise’, because they may not have deterministic frequencies with respect to the input.

And the fact that the output samples are being generated at a frequency which is a multiple of the original sample-rate also means that new frequency components will be generated, reaching up to the same multiple of the original Nyquist Frequency.
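To give a concrete, if simplified example – again a sketch of my own, with arbitrary numbers – zero-stuffing a 1kHz tone to 4x its original sample-rate, without any interpolation filter, produces image frequencies that extend up toward the new Nyquist Frequency:

```python
import numpy as np

fs = 8000                                   # an assumed original sample-rate
t = np.arange(1024) / fs
tone = np.sin(2.0 * np.pi * 1000.0 * t)     # a 1kHz tone

# Insert 3 zeros between samples, i.e. raise the sample-rate to 32kHz,
# without applying any interpolation filter yet.
upsampled = np.zeros(len(tone) * 4)
upsampled[::4] = tone

spectrum = np.abs(np.fft.rfft(upsampled * np.hanning(len(upsampled))))
freqs = np.fft.rfftfreq(len(upsampled), d=1.0 / (4 * fs))

# The tone reappears as 'images' near 7kHz, 9kHz and 15kHz, i.e. new
# components reaching up toward the new Nyquist Frequency of 16kHz.
print(freqs[spectrum > 0.7 * spectrum.max()])
```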

In the case of digital signal processing, the most common type of distortion is ‘Aliasing’, while with analog methods it used to be ‘Total Harmonic Distortion’, followed by ‘Intermodulation Distortion’.

If we up-sample a digital stream and apply a filter which consistently underestimates the sub-sampled signal, then the resulting distortion will consist of unwanted modulations of the higher Nyquist Frequency.

Continue reading The Advantage of Linear Filters – ie Convolutions

A Cheapo Idea To Throw Out There, On Digital Oversampling

In This Posting, I elaborated at length about Polynomial Approximation that is not overdetermined, but rather exactly defined, by a set of unknown (y) values along a set of known time-coordinates (x). Just to summarize: if the sample-time-points are known to be the arbitrary x-coordinates 0, 1, 2 and 3, then the matrix (X1) can state the powers of these coordinates, and if additionally the vector (A) states the coefficients of a polynomial, then the product ( X1 * A ) would produce the four y-values as the vector (Y).

X1 can be computed before the algorithm is designed, and its inverse, ( X1^-1 ), would be such that ( X1^-1 * Y = A ). Hence, given a prepared matrix, one matrix-multiplication can easily derive a set of coefficients from a set of variable y-values.

Well, this idea can be driven further. There could be another arbitrary set of x-coordinates, 1.0, 1.25, 1.5, 1.75, which are meant to be a 4-point interpolation within the first set. And then another matrix, called (X2), could be prepared before the algorithm is committed, which states the powers of this sequence. And then ( X2 * A = Y' ), where (Y') is the set of interpolated samples.

What follows from this is that ( X2 * X1^-1 * Y = Y' ). But wait a moment. Before the algorithm is ever burned onto a chip, the matrix ( X2 * X1^-1 ) can be computed by Human programmers. We could call that our constant matrix (X3).

So a really cheap interpolation scheme could start with a vector of 4 samples (Y), and derive the 4 interpolated samples (Y') just by doing one matrix-multiplication ( X3 * Y = Y' ). It would just happen that

Y'[1] = Y[2]

And so we could already guess off the top of our heads, that the first row of X3 should be equal to ( 0, 1, 0, 0 ).
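Here is a minimal sketch of that pre-computation in NumPy – my own code, with the matrix names taken from the description above – which also confirms the guess about the first row of X3:

```python
import numpy as np

# The known input x-coordinates, and the four interpolation positions.
x_in  = np.array([0.0, 1.0, 2.0, 3.0])
x_out = np.array([1.0, 1.25, 1.5, 1.75])

# X1 and X2 state the powers of those coordinates: each row is [1, x, x^2, x^3].
X1 = np.vander(x_in, 4, increasing=True)
X2 = np.vander(x_out, 4, increasing=True)

# X3 = X2 * X1^-1 can be computed once, long before any samples arrive.
X3 = X2 @ np.linalg.inv(X1)
print(np.round(X3[0], 6))        # effectively (0, 1, 0, 0), i.e. Y'[1] = Y[2]

# At run-time, each group of 4 samples needs only one matrix-multiplication.
Y = np.array([0.2, 0.7, 0.4, -0.1])
Y_prime = X3 @ Y
print(Y_prime)                   # Y_prime[0] is (essentially) Y[1] == 0.7
```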

While this idea would certainly be considered obsolete by today's standards, it would correspond roughly to the amount of computational power a single digital chip would have had available in real-time, in the late 1980s, perhaps?

I suppose that an important question to ask would be, ‘Aside from just stating that this interpolation smooths the curve, what else does it cause?’ And my answer would be that, although it allows for (undesirable) aliasing of frequencies to occur during playback when the encoded ones are close to the Nyquist Frequency, very little aliasing will take place if the encoded frequencies are about 1/2 that or lower. And so, over most of the audible spectrum, this will still act as a kind of low-pass filter, even though over-sampling has taken place.
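Here is a rough numerical check of that answer – again my own sketch, with arbitrarily chosen frequencies, and with the sliding-window application of X3 being my own assumption about how the scheme would be run continuously. It over-samples a tone well below the Nyquist Frequency and a tone close to it, and reports the level of the strongest unwanted spectral component in each case:

```python
import numpy as np

def oversample_4x(y, X3):
    """Slide a 4-sample window along y, interpolating the center interval each time."""
    out = []
    for n in range(1, len(y) - 2):
        window = y[n - 1:n + 3]       # samples at relative x = 0, 1, 2, 3
        out.extend(X3 @ window)       # interpolated at x = 1.0, 1.25, 1.5, 1.75
    return np.array(out)

# The pre-computed matrix, as described above.
X1 = np.vander(np.array([0.0, 1.0, 2.0, 3.0]), 4, increasing=True)
X2 = np.vander(np.array([1.0, 1.25, 1.5, 1.75]), 4, increasing=True)
X3 = X2 @ np.linalg.inv(X1)

fs = 48000
t = np.arange(2048) / fs
for f in (3000.0, 21000.0):           # well below, and close to, the Nyquist Frequency
    y = np.sin(2.0 * np.pi * f * t)
    up = oversample_4x(y, X3)
    spectrum = np.abs(np.fft.rfft(up * np.hanning(len(up))))
    freqs = np.fft.rfftfreq(len(up), d=1.0 / (4 * fs))
    wanted = spectrum[np.abs(freqs - f) <= 500.0].max()
    unwanted = spectrum[np.abs(freqs - f) > 500.0].max()
    print(f, 20.0 * np.log10(unwanted / wanted))   # in dB; far worse for the 21kHz tone
```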

Dirk

Continue reading A Cheapo Idea To Throw Out There, On Digital Oversampling