About The Applicability of Over-Sampling Theory

One fact which I have described in my blog is that, when Audio Engineers set the sampling rate at 44.1kHz, they were taking into account a maximum perceptible frequency of 20kHz. But if the signal were converted from analog to digital format, or the other way around, directly at that sampling rate, strong aliasing would be its main feature. And so a concept arose which was named ‘over-sampling’, under which the sample-rate was at first quadrupled, and by now can simply be doubled, so that all the analog filters still have to be able to do is suppress a frequency twice as high as the frequencies which they need to pass.

The interpolation of the added samples exists digitally as a low-pass filter, the highest-quality variety of which would be a sinc-filter.
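
To illustrate, here is a minimal numpy sketch of that idea (my own example, not code from any actual player), which doubles the sample-rate by zero-stuffing, and then interpolates the added samples with a Hann-windowed sinc-filter:

```python
import numpy as np

def upsample_2x(signal, taps=33):
    """Double the sample-rate: insert a zero after every sample, then
    interpolate with a Hann-windowed sinc low-pass filter, whose cutoff
    sits at the original Nyquist Frequency."""
    stuffed = np.zeros(2 * len(signal))
    stuffed[::2] = signal                        # the new stream runs at 2x
    n = np.arange(taps) - (taps - 1) / 2
    kernel = np.sinc(n / 2.0) * np.hanning(taps)
    kernel *= 2.0 / kernel.sum()                 # gain of 2 offsets the stuffed zeros
    return np.convolve(stuffed, kernel, mode='same')

# A 1 kHz tone at 44.1 kHz, raised to 88.2 kHz:
tone = np.sin(2 * np.pi * 1000.0 * np.arange(441) / 44100.0)
doubled = upsample_2x(tone)
```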

All of this fun and wonderful technology has one main weakness: It actually needs to be incorporated into the devices, in order to have any bearing on them. That MP3-player which you just bought at the dollar-store? It has no sinc-filter. And therefore, whatever a sinc-filter would have done gets lost on the consumer.

Continue reading About The Applicability of Over-Sampling Theory

Guessing at the Number of Coefficients Filters Might Need

There probably exist mathematically more rigorous ways to derive the following information. But just in order to understand concepts clearly, I often find that I need to do some estimating, to get some idea of how many zero-crossings, for example, a Sinc Filter should realistically have on each side of its center sample. Or, of what kind of cutoff-performance the low-pass part of a Daubechies Wavelet will have, if it only has 8 coefficients…

If the idea is accepted that a low-pass filter is supposed to be of some type based on the ‘Sinc Function’, including filters that only have 2x / 1-octave over-sampling, then a question which Electronics Experts will face is what number of zero-crossings is appropriate. This question is especially difficult to answer precisely, because the sum of the coefficients’ absolute values does not converge: their envelope falls off as (1/n), so the series behaves like the divergent harmonic sum, Σ (1/n).
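
A quick numerical check of that divergence (a sketch of my own): for 2x over-sampling, the coefficients are sinc(n/2), whose non-zero values fall off as 2/(πn), so the running total of their absolute values just keeps growing, logarithmically:

```python
import numpy as np

n = np.arange(1, 100001)             # coefficient index, one side of the center
coeff = np.abs(np.sinc(n / 2.0))     # 2x over-sampling; essentially zero at even n
running = np.cumsum(coeff)           # sum of absolute values so far
for k in (10, 100, 1000, 100000):
    print(k, round(running[k - 1], 3))   # grows like (1/pi) * ln(k); never settles
```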

Just to orient ourselves within the Sinc Function when applied this way: the center sample is technically one of the zero-crossings, but is equal to 1, because it has the only coefficient of the form (0/0), which is taken at its limit. After that, every second coefficient away from the center is a zero-crossing, and the coefficients in between those are the standard, non-zero examples.
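
Listed out for a 2x over-sampling filter, the pattern looks like this (again, a small sketch of my own):

```python
import numpy as np

n = np.arange(-8, 9)            # sample offsets around the center
h = np.sinc(n / 2.0)            # numpy defines sinc(x) = sin(pi*x) / (pi*x)
for offset, value in zip(n, h):
    print(f"{offset:+d}: {value:+.4f}")
# Offset 0 prints 1.0000 (the 0/0 case, taken at its limit);
# the other even offsets print essentially 0.0000, the zero-crossings;
# the odd offsets print the standard, non-zero coefficients.
```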

Continue reading Guessing at the Number of Coefficients Filters Might Need

About The History of Sinc Filters

A habit of mine which betrays my age is to use the term ‘Sinc Filter’. I think that, according to terminology today, there is no such thing. But there does exist a continuous function called ‘the Sinc Function’.

When I use the term ‘Sinc Filter’, I am referring to a convolution – a linear filter – the discrete coefficients of which are derived from the Sinc Function. But I think that a need exists to explain why such filters were ever used.

The Audio CDs that are by now outdated were also the beginning of popular digital sound. And as such, CD players needed to have a Digital-to-Analog converter, a D/A converter. But even back when Audio CDs were first invented, listeners would not have been satisfied to listen to the rectangular wave-patterns that would come out of the D/A converter itself, directly at the 44.1 kHz sample-rate of the CD. Instead, those wave-patterns needed to be put through a low-pass filter, which also acted to smooth them.

But there was a problem endemic to these early Audio CDs. In order to minimize the number of bits that they would need to store, Electronic Engineers decided that Human Hearing stops above 20 kHz, and so they chose their sampling rate to be just greater than twice that frequency. And indeed, when the sample-rate is 44.1 kHz, the Nyquist Frequency, the highest frequency that can be recorded, is exactly equal to 22.05 kHz.

What this meant in practice was that the low-pass filters used needed to have an extremely sharp cutoff-curve, effectively passing 20 kHz, but blocking anything higher than 22.05 kHz. With analog circuits, this was next to impossible to achieve without also destroying the sound quality. And so it was here that Electronics Experts first invented the concept of ‘Oversampling’.

Simply put, Oversampling in the early days meant that each analog sample from a D/A converter would be repeated several times – such as 4 times – and then passed through a more complex filter, which was implemented at first on an analog IC.

This analog IC had a CCD delay-line, and at each point in the delay-line it had the IC-equivalent of ‘a potentiometer setting’, which ‘stored’ the corresponding coefficient of the linear filter to be implemented. The products of the delayed signal with these settings were summed by an analog amplifier – on the same IC.
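
In today’s terms, what that IC computed was just a convolution. Here is a hypothetical, software rendition of the same delay-line structure, written by me for illustration:

```python
import numpy as np

def delay_line_filter(samples, coefficients):
    """Software rendition of the CCD delay-line: each stored
    'potentiometer setting' multiplies one delayed copy of the input,
    and the products are summed, as the analog amplifier did."""
    delay_line = np.zeros(len(coefficients))
    output = np.empty(len(samples))
    for i, x in enumerate(samples):
        delay_line = np.roll(delay_line, 1)    # everything shifts one stage down
        delay_line[0] = x                      # the newest sample enters the line
        output[i] = np.dot(coefficients, delay_line)
    return output
```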

Because the Sinc Function defines a brick-wall low-pass filter, if a 4x oversampling factor was used, then this linear filter would have its cutoff-frequency at 1/4 the new, oversampled Nyquist Frequency.

What this accomplished was to allow an analog filter to follow, which had 2 octaves of frequency-separation within which to pass the lower frequencies, but to block this oversampled Nyquist Frequency.
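
Those numbers can be checked with a short sketch (the tap-count and window are parameters of my own choosing): a windowed sinc kernel for 4x oversampling passes the old audio band nearly unchanged, while the response near the new Nyquist Frequency, 2 octaves above the cutoff, is strongly suppressed:

```python
import numpy as np

taps = 255
n = np.arange(taps) - (taps - 1) / 2
kernel = np.sinc(n / 4.0) * np.hanning(taps)    # 4x over-sampling kernel
kernel /= kernel.sum()                           # unity gain at DC

response = np.abs(np.fft.rfft(kernel, 4096))     # bin i sits at i/4096 cycles/sample
passband = response[int(round(0.1134 * 4096))]   # 20 kHz, when the new rate is 176.4 kHz
stopband = response[int(round(0.5 * 4096)) - 1]  # just below the new Nyquist Frequency
print(passband, stopband)                        # close to 1, versus nearly 0
```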

Now, there is a key point to this which Electronics Experts were aware of, but which the googly-eyed buyers of CD players often were not: This type of filtering was needed more before the Analog-to-Digital conversion took place, when CDs were mastered, than it needed to take place in the actual players that consumers bought.

The reason was a known phenomenon, by which, if a signal is fed to a sample-and-hold circuit running at 44.1 kHz, and if the analog input frequency exceeds the Nyquist Frequency, these excessive input frequencies get mirrored by the sample-and-hold circuit, so that where the input frequencies continued to increase, the frequencies in the digitized stream would be reflected back down – to somewhere below the Nyquist Frequency.

And what this meant was that, if there was any analog input at a supposedly-inaudible 28.05 kHz for example, it would wind up in the digital stream at a very audible 16.05 kHz (44.1 − 28.05 = 16.05 kHz). And then, having an oversampling CD player would no longer help to separate that from any intended signal content actually at 16.05 kHz.
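
This mirroring is easy to verify numerically. The sketch below shows that, at the 44.1 kHz sample instants, a 28.05 kHz tone takes on exactly the same values (with the sign flipped) as a 16.05 kHz tone:

```python
import numpy as np

rate = 44100.0
t = np.arange(1024) / rate                   # the sample-and-hold instants
above = np.sin(2 * np.pi * 28050.0 * t)      # 28.05 kHz, beyond the Nyquist Frequency
mirror = np.sin(2 * np.pi * 16050.0 * t)     # 44100 - 28050 = 16050 Hz
print(np.max(np.abs(above + mirror)))        # ~1e-11: the sampled tones cancel,
                                             # so once digitized they cannot be told apart
```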

Therefore, in the studios where CDs were mastered, it was necessary to have the sample-and-hold circuit also run at 4x or 8x the final sample-rate, so that its output could be put through a homologous low-pass filter, of which only 1/4 or 1/8 of the samples would actually be converted to digital through the A/D converter, and then stored…
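
Here is a sketch of that mastering-side arrangement in modern, numerical terms (the coefficients are chosen by me for illustration, not taken from any actual product):

```python
import numpy as np

def decimate_4x(oversampled, taps=63):
    """The stream was captured at 4x the final sample-rate; a windowed
    sinc low-pass filter removes everything above the final Nyquist
    Frequency, and only every 4th sample is then kept."""
    n = np.arange(taps) - (taps - 1) / 2
    kernel = np.sinc(n / 4.0) * np.hanning(taps)
    kernel /= kernel.sum()                     # unity gain at DC
    filtered = np.convolve(oversampled, kernel, mode='same')
    return filtered[::4]                       # only 1/4 reach the A/D output
```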

Now today, that sort of filter design has been replaced completely, through the availability of better chips that do all the processing numerically, and therefore digitally. Hence, if 4x oversampling is being used, the digital version of the signal, and not its analog version, is being ‘filtered’, through specialized digital chips.

Back in the 1980s, the types of chips and the scale of integration required, were not yet available.

Continue reading About The History of Sinc Filters

libsamplerate

In This Posting, I gave much thought to how the ‘Digital Audio Workstation’ named QTractor might hypothetically do a sample-rate conversion.

I thought of several combinations of “Half-Band Filters” that are based on the Sinc Function, and ‘Polynomial Smoothing’. The latter possibility would often have incurred a computational penalty. But there was one, simpler combination of methods which I did not think of.

QTractor uses a GPL Linux library named ‘libsamplerate‘. Its premise starts out with the idea that a number of Half-Band Filters can be applied, in the correct sequences, with 2x oversampling or 2x down-sampling, to achieve a variety of effects.

But then, ‘libsamplerate‘ does something ingenious in its simplicity: A Linear Interpolation! Linear interpolation will not offer as clean a spectrum as polynomial smoothing will in one step. But this library makes up for that, by simply offering a finer resolution of oversampling, if the client application chooses it.
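
As a sketch of the idea only (this is not libsamplerate’s actual code, and the names are my own), once the stream has been over-sampled finely enough, each output instant can be read off by interpolating linearly between its two nearest neighbours:

```python
import numpy as np

def read_interpolated(oversampled, over_rate, out_rate, out_count):
    """Sketch of the idea, not libsamplerate's actual code: pick each
    output sample off an already over-sampled stream, by linear
    interpolation between the two nearest stored values.  Assumes the
    buffer extends at least one sample past the last position read."""
    positions = np.arange(out_count) * (over_rate / out_rate)
    left = positions.astype(int)          # nearest stored sample at or before
    frac = positions - left               # fractional distance to the next one
    return (1.0 - frac) * oversampled[left] + frac * oversampled[left + 1]
```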

This library offers three quality levels:

  1. SRC_SINC_FASTEST
  2. SRC_SINC_MEDIUM_QUALITY
  3. SRC_SINC_BEST_QUALITY


Now, in This Posting, I identified an additional issue which arises when we are doing an “Arbitrary Re-Sampling” that is also a down-sampling. This issue was, that the source stream contains frequency components which are higher than the output stream’s Nyquist Frequency, and which need to be eliminated, even though the output stream is not in sync with the source stream.

To the best of my understanding, this problem can be solved by making a temporary output stream 2x as fast as the final output stream, and then down-sampling by a factor of 2 again…
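
A minimal sketch of that last stage (my own construction, under the assumption that a half-band sinc-filter is acceptable there): the temporary stream, running at 2x the final rate, is low-passed to the final Nyquist Frequency, and every other sample is then dropped:

```python
import numpy as np

def final_downsample_2x(temporary, taps=63):
    """The temporary stream runs at 2x the final output rate; a
    half-band sinc filter removes everything above the final Nyquist
    Frequency, after which every other sample can safely be dropped."""
    n = np.arange(taps) - (taps - 1) / 2
    kernel = np.sinc(n / 2.0) * np.hanning(taps)
    kernel /= kernel.sum()                     # unity gain at DC
    return np.convolve(temporary, kernel, mode='same')[::2]
```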

Sincerely,

Dirk

(Edit 07/21/2016 : ) The ‘GPL’ requires that this library be kept as free software, because it is in the nature of the GPL license that any work derived from the code must also stay GPL, which stands for the “General Public License”.

But, because the possibility exists of some commercial exploitation being sought, the Open-Source Software movement allows for a type of license which is called the ‘LGPL’, which stands for the “Lesser General Public License”. The LGPL allows some software to be derived from the original code and then taken proprietary, so that the author of the derived code may close their source-code and sell their product for profit.

There exists a library similar to this one, named ‘libresample‘, created with the express purpose that it be LGPL code.

Yet, the authors of ‘libsamplerate‘ believe that this GPL version of the library is the superior one, which is why they have kept it under the GPL.

Continue reading libsamplerate