About the Amplitudes of a Discrete Differential

One of the concepts in digital signal processing is that the difference between two consecutive input samples (in the time-domain) can simply be output, resulting in a differential of sorts, even though the samples of data do not represent a continuous function. But there is a fact which must be observed at (F = N / 2) – i.e. when the frequency is half the Nyquist Frequency of (h / 2) , where (h) is the sampling frequency.

The input signal could be aligned with the samples, to give a sequence of [s0 … s3] equal to

0, +1, 0, -1

This set of (s) is equivalent to a sine-wave at (F = N / 2) . Its discrete differentiation [h0 … h3] would be

+1, +1, -1, -1

At first glance we might think that this output stream has the same amplitude as the input stream. But the problem is that the output stream is, by the same token, not aligned with the samples. There is an implicit peak in amplitude between (h0) and (h1) which is greater than (+1) , and an implicit peak between (h2) and (h3) more negative than (-1) . Any adequate filtering of this stream, as part of a D/A conversion, will reproduce a sine-wave with a peak amplitude greater than (1).

(Edit 03/23/2017 : )

In this case we can see that samples h0 and h1 of the output stream are phase-shifted 45° with respect to the zero crossings, and to the peak amplitude that exists exactly between h0 and h1. Therefore, the amplitude of h0 and h1 will be the sine function of 45° with respect to this peak value, and the actual peak will be (the square root of 2) times the values of h0 and h1.
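This is easy to check numerically. What follows is a small Python sketch, which just forms the cyclic first differences of the input samples, and recovers the implied peak from the 45° phase relationship:

```python
import math

s = [0.0, 1.0, 0.0, -1.0]                       # sine-wave sampled at (F = N / 2)
h = [s[n] - s[n - 1] for n in range(len(s))]    # cyclic first differences (s[-1] wraps around)
print(h)                                        # [1.0, 1.0, -1.0, -1.0]

# The output samples sit 45 degrees away from the true peak of the
# reconstructed sinusoid, so the peak is sqrt(2) times the sample value:
peak = h[0] / math.sin(math.radians(45))
print(round(peak, 6))                           # 1.414214
```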

(Erratum 11/28/2017 —

And so a logical question which anybody might want answered would be, ‘Below what frequency does the filter cross unity gain?’ And the answer to that question is, somewhat obscurely, at (N/3) . This is a darned low frequency in practice. If the sampling rate is 44.1 kHz, this is reached at 7.35 kHz, and music, for which that sampling rate was devised, easily contains sound energy above that frequency.
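The value of (N/3) follows from the magnitude response of a one-sample differencer, which is (2 · sin(π · F / h)) – a standard result. A short Python sketch, to confirm the crossover for a 44.1 kHz sampling rate:

```python
import math

def diff_gain(f, fs):
    """Magnitude response of y[n] = x[n] - x[n-1], i.e. |1 - e^(-j*2*pi*f/fs)|."""
    return 2.0 * math.sin(math.pi * f / fs)

fs = 44100.0
nyquist = fs / 2.0
crossover = nyquist / 3.0            # the (N/3) quoted above
print(crossover)                     # 7350.0 Hz
print(round(diff_gain(crossover, fs), 9))   # 1.0 -- unity gain
```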

Hence, at (N/3) , the sequences which result would be:

s = [ +1, +1/2, -1/2, -1, -1/2, +1/2 ]

h = [ +1/2, -1/2, -1, -1/2, +1/2, +1 ]
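Again, the arithmetic can be verified with a few lines of Python, assuming the same cyclic first differences as in the earlier example:

```python
s = [1.0, 0.5, -0.5, -1.0, -0.5, 0.5]           # cosine sampled at (F = N / 3)
h = [s[n] - s[n - 1] for n in range(len(s))]    # cyclic first differences
print(h)                                        # [0.5, -0.5, -1.0, -0.5, 0.5, 1.0]

# Input and output both peak at 1.0 -- unity gain at (N/3):
assert max(abs(v) for v in s) == max(abs(v) for v in h) == 1.0
```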

What follows is also a reason for which, by itself, DPCM offers poor performance in compressing signals. It usually needs to be combined with other methods of data-reduction, possibly resulting in the lossy ADPCM. Another approach which uses ADPCM is aptX, a proprietary codec which minimizes the loss of quality that might otherwise stem from using ADPCM.
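For readers unfamiliar with it, DPCM in its purest form just stores differences, and the decoder sums them back up. The following is a minimal Python sketch of that idea only, not of any particular codec such as aptX:

```python
def dpcm_encode(samples):
    """Store each sample as its difference from the previous one."""
    prev = 0
    diffs = []
    for x in samples:
        diffs.append(x - prev)
        prev = x
    return diffs

def dpcm_decode(diffs):
    """Recover the samples as the running sum of the differences."""
    acc = 0
    out = []
    for d in diffs:
        acc += d
        out.append(acc)
    return out

# For a high-frequency signal, the differences are no smaller than the
# samples themselves -- which is why DPCM by itself compresses poorly:
signal = [0, 1, 0, -1, 0, 1, 0, -1]
print(dpcm_encode(signal))                      # [0, 1, -1, -1, 1, 1, -1, -1]
assert dpcm_decode(dpcm_encode(signal)) == signal
```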

I believe this observation is also relevant to This Earlier Posting of mine, which implied a High-Pass Filter with a cutoff frequency of 500 Hz, as part of a Band-Pass Filter. My goal was to obtain a gain of at most 0.5 over the entire interval, and to simplify the Math.

— End of Erratum. )

(Posting shortened here on 11/28/2017 . )

Dirk

About The History of Sinc Filters

A habit of mine which betrays my age is to use the term ‘Sinc Filter’. I think that, according to terminology today, there is no such thing. But there does exist a continuous function called ‘the Sinc Function’.

When I use the term ‘Sinc Filter’, I am referring to a convolution – a linear filter – the discrete coefficients of which are derived from the Sinc Function. But I think that a need exists to explain why such filters were ever used.

The Audio CDs that are by now outdated were also the beginning of popular digital sound. As such, CD players needed to have a Digital-to-Analog converter, a D/A converter. But even back when Audio CDs were first invented, listeners would not have been satisfied to listen to the rectangular wave-patterns that would come out of the D/A converter directly, at the 44.1 kHz sample-rate of the CD. Instead, those wave-patterns needed to be put through a low-pass filter, which also acted to smooth the rectangular wave-pattern.

But there was a problem endemic to these early Audio CDs. In order to minimize the number of bits that they would need to store, Electronic Engineers decided that Human Hearing stopped at 20 kHz, and so they chose their sampling rate to be just greater than twice that frequency. And indeed, when the sample-rate is 44.1 kHz, the Nyquist Frequency, the highest that can be recorded, is exactly 22.05 kHz.

What this meant in practice, was that the low-pass filters used needed to have an extremely sharp cutoff-curve, effectively passing 20 kHz, but blocking anything higher than 22.05 kHz. With analog circuits, this was next to impossible to achieve, without also destroying the sound quality. And so here Electronics Experts first invented the concept of ‘Oversampling’.

Simply put, Oversampling in the early days meant that each analog sample from a D/A converter would be repeated several times – such as 4 times – and then passed through a more complex filter, which was at first implemented on an Analog IC.

This analog IC had a CCD delay-line, and at each point in the delay-line it had the IC equivalent of ‘a potentiometer setting’, which ‘stored’ the corresponding coefficient of the linear filter to be implemented. The products of the delayed signal with these settings were summed by an analog amplifier – on the same IC.

Because the Sinc Function defines a brick-wall, low-pass filter, if a 4x oversampling factor was used, then this linear filter would have a cutoff-frequency at 1/4 the new, oversampled Nyquist Frequency.
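In modern, numerical terms, coefficients of this sort can be derived by sampling the Sinc Function and applying a window. The Python sketch below is only illustrative – the Hamming window and the tap count of 63 are my own assumptions, not anything taken from the original ICs:

```python
import math

def sinc_lowpass(num_taps, cutoff):
    """Windowed-sinc low-pass coefficients.

    'cutoff' is a fraction of the sampling rate; with 4x oversampling,
    the original Nyquist Frequency sits at 1/8 of the new sampling rate,
    i.e. 1/4 of the new, oversampled Nyquist Frequency.
    """
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - m / 2.0
        # Ideal (brick-wall) impulse response, sampled at integer offsets:
        core = 2.0 * cutoff if x == 0 else math.sin(2.0 * math.pi * cutoff * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / m)   # Hamming window (assumed)
        taps.append(core * window)
    scale = sum(taps)                 # normalize for unity gain at DC
    return [t / scale for t in taps]

taps = sinc_lowpass(63, 1.0 / 8.0)
print(round(sum(taps), 6))            # 1.0
```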

What this accomplished was to allow an analog filter to follow, which had 2 octaves of frequency-separation within which to pass the lower frequency, but to block the oversampled Nyquist Frequency.

Now, there is a key point to this which Electronics Experts were aware of, but which the googly-eyed buyers of CD players often were not. This type of filtering was needed more before the Analog-to-Digital conversion took place, when CDs were mastered, than it was needed in the actual players that consumers bought.

The reason was a known phenomenon, by which, if a signal is fed to a sample-and-hold circuit running at 44.1 kHz, and if the analog input frequency exceeds the Nyquist Frequency, these excessive input frequencies get mirrored by the sample-and-hold circuit, so that where the input frequencies continue to increase, the frequencies in the digitized stream are reflected back down – to somewhere below the Nyquist Frequency.

And what this meant was that, if there was any analog input at a supposedly-inaudible 28.05 kHz for example, it would wind up in the digital stream at a very audible 16.05 kHz (i.e. 44.1 kHz − 28.05 kHz). And then, an oversampling CD player would no longer be able to separate that from any intended signal content actually at 16.05 kHz.
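This mirroring is easy to demonstrate numerically. In the Python sketch below, a 28.05 kHz cosine and a 16.05 kHz cosine produce indistinguishable sample-streams at 44.1 kHz:

```python
import math

fs = 44100.0
n = range(64)
high = [math.cos(2.0 * math.pi * 28050.0 * k / fs) for k in n]    # above the Nyquist Frequency
alias = [math.cos(2.0 * math.pi * 16050.0 * k / fs) for k in n]   # 44100 - 28050 = 16050 Hz

# Once sampled, the two streams are numerically identical:
assert all(abs(a - b) < 1e-9 for a, b in zip(high, alias))
```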

Therefore, in studios where CDs were mastered, it was necessary to have the sample-and-hold circuit run at 4x or 8x the final sample-rate, so that its output could be put through a homologous low-pass filter, only 1/4 or 1/8 of the samples of which would actually be converted to digital, through the A/D converter, and then stored…
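Expressed in modern, digital terms, that last step – keeping only 1 sample out of every 4 or 8, after the low-pass filter has done its work – could be sketched as:

```python
def decimate(filtered, factor=4):
    """Keep only every 4th (or 8th) sample of a stream that has already
    been low-pass filtered at the oversampled rate; only these samples
    go through the A/D converter and get stored."""
    return filtered[::factor]

print(decimate(list(range(16))))     # [0, 4, 8, 12]
```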

Now today, that sort of filter design has been replaced completely, through the availability of better chips that do all the processing numerically, and therefore digitally. Hence, if 4x oversampling is being used, the digital version of the signal, and not its analog version, is being ‘filtered’, through specialized digital chips.

Back in the 1980s, the types of chips and the scale of integration required, were not yet available.