A Cheapo Idea To Throw Out There, On Digital Oversampling

In this posting, I elaborated at length about polynomial approximation that is not overdetermined, but rather exactly determined by a set of unknown (y) values at a set of known time-coordinates (x). Just to summarize: if the sample-time-points are known to be the arbitrary x-coordinates 0, 1, 2 and 3, then a matrix (X1) can state the powers of those coordinates, and if additionally a vector (A) states the coefficients of a polynomial, then the product ( X1 * A ) produces the four y-values as the vector (Y).
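
To make that summary concrete, here is roughly what the forward computation looks like in numpy. This is only a sketch, with the variable names X1, A and Y chosen to mirror the notation above:

    import numpy as np

    # Powers of the known x-coordinates 0, 1, 2, 3: row i holds (1, x_i, x_i^2, x_i^3).
    x_known = np.array([0.0, 1.0, 2.0, 3.0])
    X1 = np.vander(x_known, 4, increasing=True)

    # Coefficients (a0, a1, a2, a3) of some polynomial, chosen arbitrarily for the example.
    A = np.array([1.0, -2.0, 0.5, 0.25])

    # Evaluating the polynomial at all four x-coordinates is one matrix product: X1 * A = Y.
    Y = X1 @ A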

X1 can be computed before the algorithm is designed, and its inverse, ( X1^-1 ), is such that ( X1^-1 * Y = A ). Hence, given a prepared matrix, a set of polynomial coefficients can be derived from a set of variable y-values by a single matrix multiplication.
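
Continuing the same sketch, the inverse of X1 can likewise be prepared ahead of time, so that recovering the coefficients from any four y-values is one matrix-by-vector multiplication:

    # Precomputed once, before the algorithm ever runs.
    X1_inv = np.linalg.inv(X1)

    # Given four variable sample values Y, the polynomial coefficients follow directly.
    Y = np.array([0.3, 0.8, 0.1, -0.4])   # arbitrary sample values for the example
    A = X1_inv @ Y                        # ( X1^-1 * Y = A )

    # Sanity check: the recovered polynomial reproduces the samples.
    assert np.allclose(X1 @ A, Y)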

Well, this idea can be driven further. There could be another arbitrary set of x-coordinates, 1.0, 1.25, 1.5, 1.75, which are meant to be a 4-point interpolation within the first set. Another matrix, called (X2), could then be prepared before the algorithm is committed, stating the powers of this sequence. And then ( X2 * A = Y' ), where (Y') is the set of interpolated samples.

What follows from this is that ( X2 * X1^-1 * Y = Y' ). But wait a moment. Before the algorithm is ever burned onto a chip, the matrix ( X2 * X1^-1 ) can be computed by human programmers. We could call that our constant matrix (X3).
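
In the same sketch, preparing that constant matrix would look something like this:

    # Powers of the interpolation points 1.0, 1.25, 1.5, 1.75.
    x_interp = np.array([1.0, 1.25, 1.5, 1.75])
    X2 = np.vander(x_interp, 4, increasing=True)

    # The constant matrix X3 = X2 * X1^-1, computed once, offline.
    X3 = X2 @ np.linalg.inv(X1)
    print(np.round(X3, 4))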

So a really cheap interpolation scheme could start with a vector of 4 samples (Y), and derive the 4 interpolated samples (Y') just by doing one matrix-multiplication ( X3 * Y = Y' ). It would just happen that

Y'[1] = Y[2]

And so we could already guess, off the top of our heads, that the first row of X3 should be equal to ( 0, 1, 0, 0 ).
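
A quick check, continuing the sketch, confirms both the one-step interpolation and the guess about the first row:

    Y = np.array([0.3, 0.8, 0.1, -0.4])   # four input samples
    Y_prime = X3 @ Y                      # four interpolated samples, in one multiplication

    # The first interpolation point (x = 1.0) coincides with the second input sample,
    # so the first row of X3 is (0, 1, 0, 0), and (1-based) Y'[1] = Y[2].
    assert np.allclose(X3[0], [0.0, 1.0, 0.0, 0.0])
    assert np.isclose(Y_prime[0], Y[1])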

While this idea would certainly be considered obsolete by today's standards, it corresponds roughly to the amount of computational power a single digital chip could have delivered in real time, in the late 1980s, I suspect.

I suppose that an important question to ask would be, ‘Aside from just stating that this interpolation smooths the curve, what else does it cause?’ And my answer would be that, although it allows (undesirable) aliasing of frequencies to occur during playback when the encoded frequencies are close to the Nyquist frequency, if the encoded frequencies are about 1/2 that or lower, very little aliasing will take place. And so, over most of the audible spectrum, this will still act as a kind of low-pass filter, even though oversampling has taken place.
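
To put a rough number on that claim, here is a small numerical check of my own, not part of the original argument. It assumes the 4-sample window slides forward one input sample at a time, so the hypothetical helper oversample4x below simply transcribes that scheme, reusing the X3 matrix from the earlier sketches; feeding an impulse through it yields the composite filter that the interpolation implements at the oversampled rate:

    def oversample4x(y):
        # The window (y[n-1], y[n], y[n+1], y[n+2]) yields the 4 interpolated
        # samples lying between y[n] and y[n+1].
        out = []
        for n in range(1, len(y) - 2):
            out.extend(X3 @ y[n - 1:n + 3])
        return np.array(out)

    # Composite impulse response of the scheme, at the 4x-oversampled rate.
    impulse = np.zeros(9)
    impulse[4] = 1.0
    h = oversample4x(impulse)

    # Frequency response, normalized so that 0.5 is the new (4x) Nyquist frequency
    # and 0.125 is the original Nyquist frequency.
    H = np.abs(np.fft.rfft(h, 1024))
    H /= H[0]
    f = np.fft.rfftfreq(1024)

    # Image (alias) bands lie above the original Nyquist frequency (f > 0.125).
    # The attenuation there is weakest just above 0.125, which is where the images
    # of content encoded near the original Nyquist frequency land; images of much
    # lower frequencies land near the deep nulls around 0.25 and 0.5.
    for probe in (0.15, 0.25, 0.375):
        print(probe, H[np.argmin(np.abs(f - probe))])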

Dirk


About The History of Sinc Filters

A habit of mine which betrays my age is to use the term ‘Sinc Filter’. I think that according to today's terminology, there is no such thing. But there does exist a continuous function called ‘the Sinc Function’.

When I use the term ‘Sinc Filter’, I am referring to a convolution – a linear filter – whose discrete coefficients are derived from the Sinc Function. But I think a need exists to explain why such filters were ever used.

The Audio CDs that are by now outdated were also the beginning of popular digital sound. And as such, CD players needed to have a Digital-to-Analog converter, a D/A converter. But even back when Audio CDs were first invented, listeners would not have been satisfied to listen to the rectangular wave-patterns that came out of the D/A converter itself, directly at the 44.1 kHz sample-rate of the CD. Instead, those wave-patterns needed to be put through a low-pass filter, which also acted to smooth the rectangular wave-pattern.

But there was a problem endemic to these early Audio CDs. In order to minimize the number of bits they would need to store, electronic engineers decided that human hearing stops above 20 kHz, and so they chose their sampling rate to be just greater than twice that frequency. And indeed, when the sample-rate is 44.1 kHz, the Nyquist frequency, the highest that can be recorded, is exactly 22.05 kHz.

What this meant in practice was that the low-pass filters used needed to have an extremely sharp cutoff-curve, effectively passing 20 kHz but blocking anything higher than 22.05 kHz. With analog circuits, this was next to impossible to achieve without also destroying the sound quality. And so here, electronics experts first invented the concept of ‘Oversampling’.

Simply put, oversampling in the early days meant that each analog sample from a D/A converter would be repeated several times – such as 4 times – and then passed through a more complex filter, which was implemented at first on an analog IC.

This analog IC had a CCD delay-line, and at each point in the delay-line it had the IC equivalent of ‘a potentiometer setting’, which ‘stored’ the corresponding coefficient of the linear filter to be implemented. The products of the delayed signal with these settings were summed with an analog amplifier – on the same IC.

Because the Sinc Function defines a brick-wall low-pass filter, if a 4x oversampling factor was used, then this linear filter would have its cutoff-frequency at 1/4 the new, oversampled Nyquist frequency.
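
Just as an illustration of what such a filter amounts to in today's digital terms (not the original analog design), the coefficients can be generated from the Sinc Function directly; sinc_lowpass below is a hypothetical helper, and the input stream is stood in for by random numbers:

    import numpy as np

    def sinc_lowpass(num_taps, cutoff):
        # Windowed-sinc FIR coefficients; cutoff is in cycles per sample (0.5 = Nyquist).
        n = np.arange(num_taps) - (num_taps - 1) / 2.0
        h = 2.0 * cutoff * np.sinc(2.0 * cutoff * n)   # ideal brick-wall impulse response
        h *= np.hamming(num_taps)                      # window it, to make it finite and usable
        return h / h.sum()                             # normalize for unity gain at DC

    # At 4x oversampling, the cutoff sits at 1/4 of the new Nyquist frequency:
    # (4 * 44100 / 2) / 4 = 22050 Hz.
    h = sinc_lowpass(63, 0.5 / 4.0)

    # Oversampling in the early sense: repeat each sample 4 times, then filter.
    x = np.random.randn(1000)              # stand-in for one channel of decoded samples
    x4 = np.repeat(x, 4)
    y = np.convolve(x4, h, mode='same')    # the smoothed, oversampled output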

What this accomplished was to allow an analog filter to follow, which had 2 octaves of frequency-separation within which to pass the lower frequency but block this oversampled Nyquist frequency.

Now, there is a key point to this which electronics experts were aware of, but which the googly-eyed buyers of CD players often were not. This type of filtering was needed more before the Analog-to-Digital conversion took place, when CDs were mastered, than in the actual players that consumers bought.

The reason was a known phenomenon: if a signal is fed to a sample-and-hold circuit running at 44.1 kHz, and if the analog input frequency exceeds the Nyquist frequency, these excessive input frequencies get mirrored by the sample-and-hold circuit, so that as the input frequencies continue to increase, the frequencies in the digitized stream are reflected back down – to somewhere below the Nyquist frequency.

And what this meant was that if there was any analog input at a supposedly-inaudible 28.05 kHz, for example, it would wind up in the digital stream at a very audible 16.05 kHz – the mirror image of 28.05 kHz about 22.05 kHz. And then, even an oversampling CD player would no longer be able to separate that from any intended signal content actually at 16.05 kHz.
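
The reflection is easy to verify numerically: the alias lands at 44.1 kHz minus 28.05 kHz, which is 16.05 kHz. A minimal check of my own:

    import numpy as np

    fs = 44100.0                                   # the sample-and-hold rate
    n = np.arange(1000)

    high = np.sin(2 * np.pi * 28050.0 * n / fs)    # above the Nyquist frequency
    low = np.sin(2 * np.pi * 16050.0 * n / fs)     # its mirror image below it

    # Apart from a phase (sign) inversion, the two sampled streams are identical,
    # so nothing downstream of the sample-and-hold circuit can tell them apart.
    assert np.allclose(high, -low)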

Therefore, in studios where CDs were mastered, it was necessary to have the sample-and-hold circuit also run at 4x or 8x the final sample-rate, so that its output could be put through a corresponding low-pass filter, only 1/4 or 1/8 of whose samples would then actually be converted to digital through the A/D converter, and stored…
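
Expressed in modern, digital terms, and reusing the hypothetical sinc_lowpass helper from the sketch above, that mastering-side chain amounts to filtering at the higher rate and then keeping only every 4th sample:

    # Capture at 4x the final rate, low-pass at the final Nyquist frequency (22.05 kHz),
    # and keep only every 4th sample for storage.
    x_fast = np.random.randn(4000)               # stand-in for the 4x-rate captured signal
    h = sinc_lowpass(63, 0.5 / 4.0)              # cutoff at 1/4 of the capture Nyquist
    x_filtered = np.convolve(x_fast, h, mode='same')
    x_stored = x_filtered[::4]                   # the 44.1 kHz stream actually stored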

Today, that sort of filter design has been replaced completely, through the availability of better chips that do all the processing numerically, and therefore digitally. Hence, if 4x oversampling is being used, the digital version of the signal, not its analog version, is being ‘filtered’, through specialized digital chips.

Back in the 1980s, the types of chips and the scale of integration required were not yet available.
