A Note on Polynomial Fitting

In This Posting, I wrote that Polynomial Smoothing / Fitting / Approximation can be used in conjunction with a Sinc Filter to convert Sample-Rates. I should note that, sometimes, Polynomial functions can be used by themselves.

This could be an explanation for how early CD Players eventually replaced the sinc filter with “Multi-stage Noise Shaping” (M.A.S.H.). The sinc filters were much maligned for their lack of immediacy. M.A.S.H., in their place, suffered instead from some level of distortion at frequencies near the Nyquist Frequency.

Further, experiments I have yet to carry out with ‘QTractor’ may show that it will convert

44.1 kHz -> 48 kHz

using the exact same method as

48 kHz -> 44.1 kHz

Assuming, that is, that I set the Global filtering method to a (slower) non-default setting.

Dirk

 

A Note on Sample-Rate Conversion Filters

One type of (low-pass) filter which I learned about some time ago is the Sinc Filter. And by now, I have forgiven the audio industry for placing the cutoff frequencies of various sinc filters directly equal to a relevant Nyquist Frequency. Apparently, it does not bother them that a sinc filter will pass the cutoff frequency itself at an amplitude of 1/2, and that therefore a sampled audio stream can result with signal energy directly at its Nyquist Frequency.
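The following is a minimal sketch of that last point (my own illustration, not anything taken from an actual CD player or DAW), showing that a windowed-sinc low-pass filter passes its own cutoff frequency at roughly half amplitude. The sampling rate, cutoff and tap count are arbitrary choices:

```python
# Minimal sketch: a windowed-sinc low-pass filter passes its cutoff frequency
# at roughly half amplitude.  All numbers here are arbitrary.
import numpy as np

fs = 48000.0        # sampling rate, Hz
fc = 12000.0        # cutoff frequency, Hz (fs / 4, chosen arbitrarily)
num_taps = 101      # odd, so the impulse response is symmetric about a centre tap

n = np.arange(num_taps) - (num_taps - 1) / 2
h = 2.0 * (fc / fs) * np.sinc(2.0 * (fc / fs) * n)   # ideal sinc, truncated
h *= np.hamming(num_taps)                            # window the truncated sinc
h /= h.sum()                                         # unity gain at DC

# Evaluate the frequency response directly at the cutoff frequency.
w = 2.0 * np.pi * fc / fs
H_fc = np.sum(h * np.exp(-1j * w * np.arange(num_taps)))
print(abs(H_fc))    # prints a value close to 0.5
```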

There are more details to know about sinc filters, which are relevant to the Digital Audio Workstation named ‘QTractor’, as well as to other DAWs. Apparently, if we want to resample an audio stream from 44.1 kHz to 48 kHz, in theory this corresponds to a “Rational” filter of 147:160, which means that if our Low-Pass Filter is supposed to be a sinc filter, it would need to have 160 * (n) coefficients in order to work ideally.
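As a quick sanity-check of that ratio (not tied to QTractor in any way), Python’s standard library reduces the fraction for us:

```python
# 44.1 kHz -> 48 kHz reduces to interpolating by 160 and decimating by 147.
from fractions import Fraction

ratio = Fraction(48000, 44100)
print(ratio)              # 160/147
print(ratio.numerator)    # 160: the notional up-sampling (interpolation) factor
print(ratio.denominator)  # 147: the subsequent decimation factor
```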

But, since audio experts are not usually serious about devising such a filter, what they will try next in such a case is just to oversample the original stream by some reasonable factor, such as a factor of 4 or 8, then to apply the sinc filter at this higher sample rate, and after that to achieve the down-sampling by just picking out samples, the sample-numbers of which have been rounded down. This is also referred to as an “Arbitrary Sample-Rate Conversion”.

Because 1 oversampled interval then corresponds to only 1/4 or 1/8 of the real sampling interval of the source, the artifacts can be reduced in this way. Yet, this use of a sinc filter is known to produce some loss of accuracy, due to the oversampling, which sets a limit on quality.
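Below is a rough, hedged sketch of this oversample-then-pick approach, written purely for illustration and not taken from QTractor or any other DAW. SciPy’s resample_poly() stands in for the oversampling-plus-sinc-filter stage, and the helper name arbitrary_src() is my own invention:

```python
# Hedged sketch of "Arbitrary Sample-Rate Conversion" by oversampling and
# rounded-down index picking.  Not production code.
import numpy as np
from scipy.signal import resample_poly

def arbitrary_src(x, fs_in, fs_out, oversample=8):
    """Convert x from fs_in to fs_out by oversampling, then index truncation."""
    # 1) Oversample by an integer factor; resample_poly's internal polyphase
    #    filter plays the role of the (windowed-) sinc low-pass.
    y = resample_poly(x, up=oversample, down=1)
    fs_over = fs_in * oversample

    # 2) For each output instant, pick the oversampled sample whose index,
    #    rounded down, lands at (or just before) that instant.
    n_out = int(len(x) * fs_out / fs_in)
    t_out = np.arange(n_out) / fs_out              # output sample instants, seconds
    idx = np.floor(t_out * fs_over).astype(int)    # rounded-down sample numbers
    idx = np.clip(idx, 0, len(y) - 1)
    return y[idx]

# Example: a 1 kHz tone converted from 44.1 kHz to 48 kHz.
fs_in, fs_out = 44100, 48000
t = np.arange(fs_in) / fs_in
tone = np.sin(2 * np.pi * 1000.0 * t)
converted = arbitrary_src(tone, fs_in, fs_out)
print(len(tone), len(converted))                   # 44100 -> 48000 samples
```

Picking y[idx] is exactly the rounded-down selection described above; the remaining error comes from the fact that the true output instant can fall up to one oversampled interval away from the sample that actually gets picked.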

Now, I have read that a type of filter also exists, which is called a “Farrow Filter”. But personally, I know nothing about Farrow Filters.

As an alternative to cherry-picking samples at rounded-down positions, it is possible to perform a polynomial smoothing of the oversampled stream (after applying a sinc filter, if set to the highest quality), and then to ‘pick’ points along the (now continuous) polynomial that correspond to the output sampling rate. This can be simplified into a system of linear equations, in which the powers of the input-stream positions become the constants, and the coefficients that multiply them are determined from the input stream. At some computational penalty, it should be possible to reduce output artifacts greatly.
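As a hedged sketch of that alternative (again mine, with the made-up helper name polynomial_pick() and an arbitrary choice of a cubic fitted through 4 neighbouring points), the rounded-down index lookup could be replaced by evaluating a local polynomial at the exact fractional position:

```python
# Hedged sketch: replace index truncation with a local polynomial evaluated at
# the exact (fractional) output position along the oversampled stream.
import numpy as np

def polynomial_pick(y_over, positions):
    """Evaluate a local cubic fit of y_over at each (possibly fractional) position."""
    out = np.empty(len(positions))
    for k, pos in enumerate(positions):
        i = int(np.floor(pos))
        i = int(np.clip(i, 1, len(y_over) - 3))      # keep the 4-point window in range
        xs = np.arange(i - 1, i + 3)                 # 4 neighbouring sample indices
        coeffs = np.polyfit(xs, y_over[xs], deg=3)   # exact cubic through those 4 points
        out[k] = np.polyval(coeffs, pos)             # 'pick' the continuous curve here
    return out
```

In the hypothetical arbitrary_src() sketch from earlier, this would amount to replacing y[idx] with polynomial_pick(y, t_out * fs_over), at the computational penalty mentioned above.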


My Distinction Between Variables And Constants

The way I process information, applied to ‘Computer Algebra Systems’, defines the difference between constants and variables in a context-sensitive way. It’s for the purpose of solving one problem that certain symbols in an expression become variables, others constants, and others yet, function names. The fact that a syntax has been defined to store these symbols does not affect the fact that their status can be changed from constant to variable and vice-versa.

I’ll give an example. For most purposes, a Univariate Polynomial has the single variable (x), uses powers of (x) as its base terms, and multiplies each of the base terms by a constant coefficient. To some people, this might seem immutable.

But if the purpose of the exercise is to compute a Statistical, Polynomial Regression – which is “an overdetermined system” – then we must find optimal values for the prospective coefficients. We can use this as the basis to form a “Polynomial Approximation” of a system, which could be of the 8th degree for example. Yet this polynomial must fit a data-set as closely as possible, and that data-set could consist of a list of 20 values of (x), each associated with a real value of (y), which our optimized set of coefficients is supposed to approximate from the powers of (x), including the power (0), which always yields the base value (1).

In order to determine our 9 coefficients, we need to decide that all the powers of (x) have become constants. The coefficients we’re trying to determine have now become the variables in our problem. Thus, we have a column-vector of real (y)s (still variables), and a matrix which states the powers of (x) which supposedly led to those values of (y). I believe that this is a standard guide for doing so:

Regression Analysis Guide
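As a minimal sketch of that set-up (mine, not taken from the guide above, and with made-up data), the 20 data points and 9 unknown coefficients of the example become a 20-by-9 matrix of powers of (x), solved in the least-squares sense:

```python
# Hedged sketch of an 8th-degree polynomial regression over 20 data points.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 20)                        # 20 values of x
y = np.cos(3.0 * x) + 0.05 * rng.standard_normal(20)  # 20 observed values of y (made up)

degree = 8
X = np.vander(x, N=degree + 1, increasing=True)  # columns: x^0, x^1, ..., x^8 (constants)
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)   # the 9 coefficients are the variables
print(X.shape)                                   # (20, 9): overdetermined
print(coeffs)                                    # the optimized coefficients
```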

Well, another conclusion we can reach is that the base values which need to be correlated with real (y) aren’t limited to powers of (x). They could just as easily be some other functions of (x). It’s just that one advantage which polynomials have is that, if there is some scaling of (x), it’s possible to define a scaled parameter (t = ux) such that a corresponding polynomial in terms of (t) can do what our polynomial in terms of (x) did. If the base value was ( sin(x) ), then ( sin(t) ) could not simply take its place. This is important to note if we are trying to approximate the orbital motions of planets, for example.
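A small numerical check of that scaling remark (with made-up coefficients and an arbitrary scale factor): a polynomial in (x) can be re-expressed exactly as a polynomial in (t = ux) just by rescaling its coefficients, which is something a (sin(x)) base term does not offer:

```python
# Hedged check: rescaling the coefficients turns a polynomial in x into the
# same function expressed as a polynomial in t = u*x.
import numpy as np

a = np.array([1.0, -2.0, 0.5, 3.0])        # coefficients of x^0 .. x^3 (made up)
u = 2.5                                    # arbitrary scale factor
b = a / u ** np.arange(len(a))             # coefficients of t^0 .. t^3

x = np.linspace(-1.0, 1.0, 5)
t = u * x
p_of_x = np.polyval(a[::-1], x)            # polyval wants the highest power first
p_of_t = np.polyval(b[::-1], t)
print(np.allclose(p_of_x, p_of_t))         # True: the scaled polynomial matches
```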

But then, as soon as we’ve computed our best-fitting vector of coefficients, we can treat them as constants again, so that plugging in different values of (x) which did not occur in the original data-set will also yield the corresponding, predicted values of (y’). So now (x) and (y’) are our variables again.
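Tying the cycle together, here is a compact, hedged sketch using NumPy’s polynomial helpers: fit once (the coefficients are the variables), then freeze the coefficients as constants and evaluate at values of (x) that were not in the data-set. The data and the new (x) values are made up for illustration:

```python
# Hedged sketch of the full cycle: coefficients are variables during the fit,
# constants afterwards, when predicting y' for new values of x.
import numpy as np
from numpy.polynomial import polynomial as P

x = np.linspace(-1.0, 1.0, 20)
y = np.cos(3.0 * x)                 # made-up data-set, as before
coeffs = P.polyfit(x, y, deg=8)     # 9 coefficients, lowest power first

x_new = np.array([0.05, 0.65])      # values of x not in the original data-set
y_pred = P.polyval(x_new, coeffs)   # predicted y', with the coefficients held fixed
print(y_pred)
```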

Dirk