One of the concepts which exist in digital signal processing is that the difference between two consecutive input samples (in the time-domain) can simply be output, thus resulting in a differential of some sort, even though the samples of data do not represent a continuous function. There is a fact which must be observed to occur at (F = N / 2) – i.e., when the frequency is half the Nyquist Frequency (N), where (N = h / 2) if (h) is the sampling frequency.

The input signal could be aligned with the samples, to give a sequence [s0 … s3] equal to

0, +1, 0, -1

This set of (s) is equivalent to a sine-wave at (F = N / 2) . Its discrete differential [h0 … h3] would be

+1, +1, -1, -1

At first glance we might think that this output stream has the same amplitude as the input stream. But the problem is that the output stream is, by the same token, not aligned with the samples. There is an implicit peak in amplitude between (h0) and (h1) which is greater than (+1) , and an implicit peak between (h2) and (h3) more negative than (-1) . Any adequate filtering of this stream, as part of a D/A conversion, will reproduce a sine-wave with a peak amplitude greater than (1).

(Edit 03/23/2017 :

In this case we can see that samples h0 and h1 of the output stream would be phase-shifted 45° with respect to the zero crossings, and to the peak amplitude that would exist exactly between h0 and h1. Therefore, the amplitude of h0 and h1 will be the sine of 45° times this peak value, and the actual peak will be (the square root of 2) times the values of h0 and h1. )
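That arithmetic can be checked numerically. The sketch below is my own illustration, not code from the posting: it takes the first difference of the four-sample sine, treating the signal as periodic, and confirms that the differences fall on a sine of amplitude √2, phase-shifted 45° off the sample grid.

```python
import math

# One period of a sine at F = N / 2: four samples per cycle
s = [0.0, 1.0, 0.0, -1.0]

# First difference, treating the signal as periodic: h[n] = s[n] - s[n-1]
h = [s[n] - s[n - 1] for n in range(len(s))]
print(h)  # [1.0, 1.0, -1.0, -1.0]

# The same four values lie on a sine of amplitude sqrt(2),
# phase-shifted 45 degrees relative to the sample instants:
peak = math.sqrt(2.0)
fitted = [peak * math.sin(math.pi * n / 2 + math.pi / 4) for n in range(4)]
print(all(abs(a - b) < 1e-9 for a, b in zip(h, fitted)))  # True
```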

And so a logical question which anybody might want an answer to would be, ‘At what frequency does this gain cross unity?’ And the answer to that question is revealed by Differential Calculus. If a sine-wave has a peak amplitude of (1), then its instantaneous derivative at the zero-crossing equals (2 π F) , which is also known as (ω) . It follows that unit gain will only take place at (F = N / π) . This is a darned low frequency in practice. If the sampling rate was 44.1 kHz, this is achieved somewhere around 7 kHz, and music, for which that sampling rate was devised, easily contains sound energy above that frequency.
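As a quick sanity check on that figure – a sketch of my own, using the small-angle slope (2 π F / h) from the text, with the 44.1 kHz rate assumed above:

```python
import math

h = 44100.0      # sampling rate, Hz
N = h / 2.0      # Nyquist Frequency

# Small-angle gain of the first difference is roughly (2 * pi * F) / h.
# Setting that to 1 gives unity gain at F = h / (2 * pi) = N / pi:
f_unity = N / math.pi
print(round(f_unity))  # 7019 -- "somewhere around 7 kHz"
```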

What follows is also a reason for which this, by itself, offers poor performance in compressing signals. It usually needs to be combined with other methods of data-reduction, thus possibly resulting in a lossy codec. Another approach combines it with a proprietary codec, which minimizes the loss of quality that might otherwise stem from using it.

I believe this observation is also relevant to This Earlier Posting of mine, which implied a High-Pass Filter with a cutoff frequency of 1 kHz that would be part of a Band-Pass Filter. My goal was to obtain a gain of *at least* 0.5 over the entire interval, and to simplify the Math.

(Edited 03/21/2017 . )

Apparently, digital high-pass filters in their pure form tend to give a gain at the cutoff frequency which approaches sqrt(1/2) at very low frequencies, but which is lower than that at any reasonable fraction of the Nyquist Frequency – which is then also a reasonable fraction of the sample-rate. Thus, it becomes non-trivial to compensate either the high-pass or the low-pass filter, to arrive at unit gain at their cutoff frequency. Trying to do the same for a band-pass filter causes the problem that whatever normalization we apply to one already gives an unpredictable gain to the other. For example, we would need to compute by how much the normalization of the high-pass filter overshoots (1.0) – by a factor less than (2) – at the cutoff frequency of the low-pass filter, before we could compute by how much to compensate the latter… And at that point, the Math is no longer simple.
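That drop in cutoff-frequency gain can be demonstrated directly. The sketch below is my own; it assumes the discrete first-order high-pass recurrence y[n] = k·(y[n-1] + x[n] − x[n-1]) with k = h / (ω + h), of the kind discussed further down, and evaluates the magnitude response at the intended corner frequency for corners at increasing fractions of the sample rate:

```python
import cmath, math

def gain_at_corner(F, h=1.0):
    """Magnitude response of y[n] = k*(y[n-1] + x[n] - x[n-1]),
    with k = h / (omega + h), evaluated at the corner frequency F."""
    omega = 2 * math.pi * F
    k = h / (omega + h)
    z = cmath.exp(-2j * math.pi * F / h)   # e^{-j Omega} at frequency F
    return abs(k * (1 - z) / (1 - k * z))

print(gain_at_corner(0.001))  # very low corner: close to sqrt(1/2)
print(gain_at_corner(0.2))    # corner near the Nyquist Frequency: well below sqrt(1/2)
```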

The reason this happens is best illustrated in this WiKiPedia Article about a first-order High-Pass Filter, in which an author simply substituted ΔT for dT. According to Calculus, ΔT represents some finite difference in T, while dT represents an infinitesimal difference in T. We are taught that in certain special cases, equations can be solved as limit equations, as ΔT approaches zero, and that more-complex equations can then be solved on the basis of these cases.

The discrete, digital forms of this simple filter are only correct when ΔT remains very small. But because frequencies that are substantial with respect to the sampling interval are assumed, ΔT becomes noticeable in fact. In the example I gave above, it was not apparent that the Discrete Differential was 2πF; it instead just resembled 2F.

Dirk

(Edit 02/20/2017 : ) One reason I reached this conclusion so slowly is the assumption that this numeric approach simulates the old analog filter, which simply consisted of a capacitor in series with a resistor. At a certain frequency, the reactive impedance of the capacitor was equal to the ohmic impedance – aka resistance – of the resistor. And at that frequency, two things would happen:

- The amplitude, where the two components connected, was the square root of (1/2), and
- The phase-shift there was 45°.

~~Because the numeric approach fails to duplicate this, it implies a failure to simulate this circuit exactly. Also, it was already a property of this simple high-pass filter that at no finite frequency would its gain reach (1.0).~~

(Edit 03/26/2017 : )

The numeric approach can be made to duplicate this. In the WiKiPedia article linked to above, there is a set of equations which goes roughly like this:

```
ω = 2 π F
k = h / ( ω + h )
where h is the sampling rate,
and F is the intended corner-frequency.
...
```
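Plugging numbers in – my own illustration, with an assumed 44.1 kHz sample rate and the 1 kHz corner frequency mentioned earlier:

```python
import math

h = 44100.0   # sampling rate, Hz
F = 1000.0    # intended corner frequency, Hz

omega = 2 * math.pi * F
k = h / (omega + h)   # the smoothing constant from the equations above
print(round(k, 4))    # roughly 0.8753
```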

The WiKiPedia article acknowledges that this constant will only give correct results for low frequencies with respect to `N`, the Nyquist Frequency. Practical filters assume that signal-content is encoded at any fraction of the Nyquist Frequency, which must therefore also be available as predictable corner-frequencies. But if we ignore this problem and observe that the approach gives -6 dB at low corner frequencies, it follows that the correct approach would be

```
N = h / 2
ω = π F
k = h / ( ω + h )
```

But again, only for low corner-frequencies, since the -3 dB point of a textbook first-order filter is one octave removed from the -6 dB point, and since the maximum encoded frequency will be `N` and not `h`. I.e., the differential of the sine function is (1) when 1 complete wave takes place over the domain [ 0.0 … 2 π ) , and when θ is small. Thus, if we contract the wave to the domain [ 0.0 … 1.0 ) , it will follow that the differential is 2 π F . But because `N` is not `h`, one wave will never take place over [ 0.0 … 1.0 ) . One full wave will take place maximally over [ 0.0 … 2.0 ) , at which point the differential becomes π F .

Yet, the suggested filter also fails to keep its output within the amplitude bounds of its input – the concern loosely referred to as Bounded-In, Bounded-Out.

The input-signal could begin with a step from zero to the maximum positive amplitude, followed by a wait. Because the filter is a high-pass filter, its output signal will decay back towards zero after some arbitrary amount of time. After that, the input-signal can simply consist of a step from the maximum positive to the maximally-negative amplitude. The output will contain a spike of twice the maximally-negative input amplitude.
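That scenario is easy to simulate. The sketch below is my own; it again assumes the recursive form y[n] = k·(y[n-1] + x[n] − x[n-1]), with k close to 1 (a low corner frequency), and runs the step-wait-step input:

```python
# Step up to +1, wait for the output to decay, then step down to -1.
k = 0.99   # high-pass coefficient close to 1, i.e. a low corner frequency

x = [0.0] + [1.0] * 1000 + [-1.0] * 10   # step, wait, opposite step
y, y_prev, x_prev = [], 0.0, 0.0
for sample in x:
    y_prev = k * (y_prev + sample - x_prev)
    x_prev = sample
    y.append(y_prev)

# The most negative output is a spike approaching -2:
# twice the maximally-negative input amplitude.
print(min(y))
```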

And so it strikes me that this requirement, Bounded-In, Bounded-Out, is not always a serious concern. It would only be a concern if the violation occurred with the specific types of signals which the filter is expected to process.
