One type of (low-pass) filter which I learned about some time ago is the Sinc Filter. And by now, I have forgiven the audio industry for placing the cutoff frequencies of various sinc filters directly at a relevant Nyquist Frequency. Apparently, it does not bother them that a sinc filter passes the cutoff frequency itself at an amplitude of 1/2, and that a sampled audio stream can therefore result with signal energy directly at its Nyquist Frequency.
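
To illustrate that point, here is a minimal Python sketch (my own, not taken from any audio package) that builds a windowed-sinc low-pass filter and evaluates its frequency response exactly at the cutoff frequency. The function name, tap count and choice of a Hamming window are my own illustrative assumptions:

```python
import cmath
import math

def windowed_sinc(num_taps, cutoff):
    """Low-pass FIR by the windowed-sinc method.
    cutoff is normalized to the sampling rate (0.5 == Nyquist Frequency)."""
    mid = (num_taps - 1) / 2.0
    taps = []
    for i in range(num_taps):
        n = i - mid
        x = 2.0 * cutoff * n
        # The ideal (infinite) sinc kernel, truncated to num_taps samples:
        ideal = 2.0 * cutoff * (1.0 if n == 0 else math.sin(math.pi * x) / (math.pi * x))
        # Hamming window, to taper the truncation:
        window = 0.54 - 0.46 * math.cos(2.0 * math.pi * i / (num_taps - 1))
        taps.append(ideal * window)
    s = sum(taps)
    return [t / s for t in taps]  # normalize for unity gain at DC

h = windowed_sinc(101, 0.25)
# Evaluate the frequency response exactly at the cutoff frequency:
w = 2.0 * math.pi * 0.25
response = abs(sum(t * cmath.exp(-1j * w * k) for k, t in enumerate(h)))
print(round(response, 2))  # close to 0.5, i.e. half amplitude at the cutoff
```

So a cutoff placed at the Nyquist Frequency really does let that frequency through at roughly half amplitude.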

There are more details about sinc filters to know, that are relevant to the Digital Audio Workstation named ‘`QTractor`‘, as well as to other DAWs. Apparently, if we want to resample an audio stream from 44.1 kHz to 48 kHz, in theory this corresponds to a “Rational” resampling ratio of 160:147 (since 48000 / 44100 = 160 / 147), which means that if our Low-Pass Filter is supposed to be a sinc filter, it would need to have 160 * (n) coefficients in order to work ideally.

But, since audio experts are not usually serious about devising such a filter, what they will try next in such a case is just to oversample the original stream by some reasonable factor, such as 4 or 8, then to apply the sinc filter at that sample-rate, and after that to down-sample, by picking out the samples whose (fractional) positions have been rounded down. This is also referred to as an “Arbitrary Sample-Rate Conversion”.

Because one oversampled interval then corresponds to only 1/4 or 1/8 of the real sampling interval of the source, the artifacts can be reduced in this way. Yet, this use of a sinc filter is known to produce some loss of accuracy, due to the rounded sample-picking, which sets a limit on quality.
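
A minimal sketch of this pick-the-rounded-position scheme, assuming the oversampled stream has already been low-pass filtered (the function name and parameters are mine, for illustration only):

```python
import math

def pick_nearest(oversampled, src_rate, dst_rate, oversample=4):
    """Arbitrary Sample-Rate Conversion by picking, for each output sample,
    the oversampled sample whose position has been rounded down.
    `oversampled` is assumed to be already low-pass filtered."""
    n_out = int(len(oversampled) * dst_rate / (src_rate * oversample))
    out = []
    for m in range(n_out):
        # Time of output sample m, expressed in oversampled positions:
        pos = m * src_rate * oversample / dst_rate
        out.append(oversampled[int(math.floor(pos))])
    return out

# E.g. converting a 44.1 kHz stream (oversampled 4x) to 48 kHz:
converted = pick_nearest(list(range(400)), 44100, 48000, oversample=4)
```

The rounding error here is at most one oversampled interval, which is why raising the oversampling factor reduces the artifacts but never removes them entirely.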

Now, I have read that a type of filter also exists, which is called a “Farrow Filter”. But personally, I know nothing about Farrow Filters.

As an alternative to cherry-picking samples at rounded-down positions, it is possible to perform a polynomial smoothing of the oversampled stream (*after* applying a sinc filter, *if* set to the highest quality), and then to ‘pick’ points along the (now continuous) polynomial that correspond to the output sampling rate. This reduces to a system of linear equations, in which the powers of the input-stream positions become the known constants, and the coefficients multiplying them are solved for from the input stream. At some computational penalty, it should be possible to reduce output artifacts greatly.

Practical constraints will limit the degree of the smoothing polynomial, but the inverse of a 4×4 matrix can be computed once in advance, to reveal the 3rd-degree polynomial that satisfies any input vector of 4 samples…

```
a1 + a2 x + a3 x^2 + a4 x^3 = y
x = (0, 1, 2, 3)
| 1 0 0 0 | | a1 | | y1 |
| 1 1 1 1 | | a2 | = | y2 |
| 1 2 4 8 | | a3 | | y3 |
| 1 3 9 27 | | a4 | | y4 |
X A = Y
X^-1 Y = A
F(x) := A[1] + A[2]x + A[3]x^2 + A[4]x^3
s = F( 1 + q ), 0 <= q < 1
```
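
As a sketch, the precomputed inverse P = X^-1 can be applied directly to any window of 4 samples, so that per output sample only a matrix-vector product and one polynomial evaluation remain (the function names are mine, for illustration):

```python
from fractions import Fraction

# Inverse of the 4x4 matrix X for the sample positions x = (0, 1, 2, 3),
# computed once in advance.
P = [
    [Fraction(1),      Fraction(0),     Fraction(0),     Fraction(0)],
    [Fraction(-11, 6), Fraction(3),     Fraction(-3, 2), Fraction(1, 3)],
    [Fraction(1),      Fraction(-5, 2), Fraction(2),     Fraction(-1, 2)],
    [Fraction(-1, 6),  Fraction(1, 2),  Fraction(-1, 2), Fraction(1, 6)],
]

def cubic_through(y):
    """Coefficients (a1..a4) of the cubic passing through (0,y1)..(3,y4):
    A = P Y."""
    return [sum(P[r][c] * y[c] for c in range(4)) for r in range(4)]

def interpolate(y, q):
    """Evaluate the fitted cubic at x = 1 + q, 0 <= q < 1 -- the middle
    interval, where the approximation is most trustworthy."""
    a = cubic_through(y)
    x = 1 + q
    return float(a[0] + a[1] * x + a[2] * x ** 2 + a[3] * x ** 3)
```

For a constant input (1, 1, 1, 1) this yields the coefficients (1, 0, 0, 0), and for the alternating input (-1, 1, -1, 1) it yields (-1, 20/3, -6, 4/3), matching the exercise further down.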

Further, there is a special case of sinc filter, known as a “Half-Band Filter”, which offers a computational bonus, because half of its coefficients are actually zeroes, and therefore do not need to be multiplied by. Sample-Rate Conversions at ratios that are powers of two are often best implemented as a series of half-band filters.
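
A bare sketch of why this works: when the cutoff sits at exactly 1/4 of the sampling rate, the ideal sinc kernel lands on a zero at every even offset from the centre tap, so those multiplications can simply be skipped. This is an unwindowed, illustrative version, not a production design:

```python
import math

def halfband_taps(num_taps):
    """Truncated ideal half-band low-pass (cutoff at 1/4 of the sampling
    rate): every even-offset coefficient except the centre one is zero."""
    assert num_taps % 4 == 3  # e.g. 7, 11, 15: ends on nonzero taps
    mid = num_taps // 2
    taps = []
    for i in range(num_taps):
        n = i - mid
        if n == 0:
            taps.append(0.5)             # centre tap
        elif n % 2 == 0:
            taps.append(0.0)             # the free (skipped) multiplications
        else:
            taps.append(math.sin(math.pi * n / 2) / (math.pi * n))
    return taps

def decimate2(signal, taps):
    """Filter and keep every second sample, skipping the zero taps."""
    mid = len(taps) // 2
    out = []
    for m in range(0, len(signal), 2):
        acc = 0.0
        for k, t in enumerate(taps):
            if t == 0.0:
                continue                 # half the work avoided here
            j = m + k - mid
            if 0 <= j < len(signal):
                acc += t * signal[j]
        out.append(acc)
    return out
```

So a 2:1 (or 1:2) conversion costs only about half the multiplications of a general sinc filter of the same length, and a chain of these handles any power-of-two ratio.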

This set of facts can be interesting to know, if we are setting up `JACK`, and need to choose a sample rate. Most sound cards which are capable of 44.1 kHz will also be capable of 48 kHz, and we would want to know if the higher rate is of any benefit.

The answer seems to be that the higher sample-rate is better, as long as we are recording all our own samples, as well as sequencing tracks from a MIDI-based virtual instrument, which then runs at 48 kHz. But if we are just dropping loops and prerecorded samples into our audio project, which have been supplied at 44.1 kHz, then what some Digital Audio Workstations will do is to leave them encoded at that sample-rate, but to perform a *real-time* Sample-Rate Conversion when we are monitoring our project.

(Edit 07/19/2016 : ) If a four-factor oversampling sinc filter extended to include 5+5 zero-crossings, this would *already* require 39 coefficients, and I do not see how it would be much ‘faster’ than a polynomial filter to compute. Therefore, I may make the simplifying assumption, that when the Global settings of ‘`QTractor`‘ state a “`Sinc Filter (Fastest)`“, which is the default, this application may just double the sampling rate twice, and apply a Half-Band Filter after each doubling.

And, when we Export our final mix-down, is the option offered to us by our DAW, to use a superior filter… ?

On that note: One disadvantage in using `QTractor` is the fact that the only way to do a mix-down, is to bounce the entire project in real-time. This means:

- The process is vulnerable to any buffer-underruns produced by `JACK`,
- The type of filter chosen for monitoring will also be applied to the bounce,
- If that type of filter is very HQ, this can increase the load on the CPU, and can thus increase the risk of buffer-underruns in practice,
- If `QTractor` has set its Session Sample-Rate to 48 kHz, in accordance with how `JACK` was running, the simplest option may be to Export our ‘Golden Copy’ / final results to a 48 kHz output file, but then to use an external program such as `Audacity`, to produce a 44.1 kHz version of it… ?

Dirk

BTW: If we are preparing audio files for mobile devices, such as smart-phones and tablets, I have read that using 48 kHz is not recommended, because many of the devices will do an automatic down-conversion to 44.1 kHz, without applying a Low-Pass Filter at all.

Apparently, the way some people listen to sound is such, that this does not even seem to bother them much. But then, there is certainly no advantage to offering those listeners a 48 kHz-sampled sound file either.

Any such down-conversion would need to take place in real-time, which also limits the type of filter that can be used, given real constraints on available CPU power, so that *even if* a Low-Pass Filter is applied by the device, it is likely to be of the non-ideal variety.

(Edit 07/21/2016 : ) In case the reader might be wondering what the Inverse of that Matrix looks like, here it is. Predictably, the approximation is closest to accuracy between sample-positions x=1 and x=2, out of x = (0, 1, 2, 3). Hence, that interval should be probed every time. In the experiment below, I chose to apply the Nyquist Frequency, which is also the worst-case scenario. And, extending the polynomial to the 5th degree, so that an additional sample could be added at each end of the curve, poses the question of whether sample-positions (-1) and (4) *would even be relevant* to what the interval (1, 2) should contain. Not so. ~~And we would need to start raising (4) to the power of (5)~~ ! :

```
This is Yacas version '1.3.3'.
Yacas is Free Software--Free as in Freedom--so you can redistribute Yacas or
modify it under certain conditions. Yacas comes with ABSOLUTELY NO WARRANTY.
See the GNU General Public License (GPL) for the full conditions.
Type ?license or ?licence to see the GPL; type ?warranty for warranty info.
See http://yacas.sf.net for more information and documentation on Yacas.
Type ?? for help. Or type ?function for help on a function.
To exit Yacas, enter Exit(); or quit or Ctrl-c.
Type 'restart' to restart Yacas.
To see example commands, keep typing Example();
In> X := {{1,0,0,0},{1,1,1,1},{1,2,4,8},{1,3,9,27}};
Out> {{1,0,0,0},{1,1,1,1},{1,2,4,8},{1,3,9,27}}
In> PrettyForm(X);
/ \
| ( 1 ) ( 0 ) ( 0 ) ( 0 ) |
| |
| ( 1 ) ( 1 ) ( 1 ) ( 1 ) |
| |
| ( 1 ) ( 2 ) ( 4 ) ( 8 ) |
| |
| ( 1 ) ( 3 ) ( 9 ) ( 27 ) |
\ /
Out> True
In> P := Inverse(X);
Out> {{1,0,0,0},{(-11)/6,3,(-3)/2,1/3},{1,(-5)/2,2,(-1)/2},{(-1)/6,1/2,(-1)/2,1/6}}
In> PrettyForm(P);
/ \
| ( 1 ) ( 0 ) ( 0 ) ( 0 ) |
| |
| / -11 \ ( 3 ) / -3 \ / 1 \ |
| | --- | | -- | | - | |
| \ 6 / \ 2 / \ 3 / |
| |
| ( 1 ) / -5 \ ( 2 ) / -1 \ |
| | -- | | -- | |
| \ 2 / \ 2 / |
| |
| / -1 \ / 1 \ / -1 \ / 1 \ |
| | -- | | - | | -- | | - | |
| \ 6 / \ 2 / \ 2 / \ 6 / |
\ /
Out> True
In> A1 := P * {1,1,1,1};
Out> {1,0,0,0}
In> A2 := P * {-1,1,-1,1};
Out> {-1,20/3,-6,4/3}
In> Y2 := X * A2;
Out> {-1,1,-1,1}
In> F(x) := -1 + ((20/3) * x) - (6 * x^2) + ((4/3) * x^3);
Out> True
In> {F(0), F(1), F(2), F(3)}
Out> {-1,1,-1,1}
In> dF(x) := D(x) F(x);
Out> True
In> dF(x);
Out> 20/3-12*x+4*x^2
In> x := 1;
Out> 1
In> Eval(dF(x));
Out> (-4)/3
In> x := 1.5;
Out> 1.5
In> Eval(dF(x));
Out> (-7.)/3
In> x := 2;
Out> 2
In> Eval(dF(x));
Out> (-4)/3
In> F(0.5);
Out> 1
In> F(1);
Out> 1
In> F(2);
Out> -1
In> F(0.75);
Out> 3.5625/3
In> F(2.25);
Out> (-10.6875)/9
In> 3.5625 * 3;
Out> 10.6875
In> Solve(dF(x) == 0, x);
Out> {x==(Sqrt(112/3)+12)/8,x==(12-Sqrt(112/3))/8}
In> 12/8 = 3/2;
Out> True
In> N(3/2);
Out> 1.5
In> N( (12-Sqrt(112/3))/8 , 7);
Out> 0.7362381
In> N( F((12-Sqrt(112/3))/8) , 7);
Out> 1.1880752
In> F(1);
Out> 1
In> Exit();
Out> True
Quitting...
```
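
For readers without Yacas, the same worst-case numbers can be reproduced in a few lines of Python; the cubic F(x) and its derivative are copied from the session above:

```python
import math

def F(x):
    """The cubic fitted to the Nyquist-Frequency samples y = (-1, 1, -1, 1)."""
    return -1 + (20 / 3) * x - 6 * x ** 2 + (4 / 3) * x ** 3

def dF(x):
    """Its derivative: dF(x) = 20/3 - 12 x + 4 x^2."""
    return 20 / 3 - 12 * x + 4 * x ** 2

# The smaller root of dF(x) = 0 locates the peak of the curve:
root = (12 - math.sqrt(112 / 3)) / 8
print(round(root, 4), round(F(root), 4))  # peak near x = 0.74, overshooting 1
```

The overshoot to roughly 1.188, on an input that never exceeds ±1, is the price of probing the cubic near its ends rather than in the middle interval (1, 2).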

Hint: If we were to over-sample the above example 2x, and then still only apply a Linear Interpolation, would the oversampling have changed the accuracy of that Linear Interpolation?

The following is what Polynomial Smoothing produces, at 1/2 Nyquist Frequency, which it will also produce if we over-sample the above example 2x, again to focus on the interval from x = 1 to x = 2:

Enjoy.
