One concept in modern digital signal processing is that a simple algorithm can often be written to perform what old-fashioned, analog filters were able to do.
But then, one place where I find progress lacking – at least, in the information posted publicly – is in how to discretize slightly more complicated analog filters. Specifically, if one wants to design 2nd-order low-pass or high-pass filters, one approach often recommended is simply to chain the primitive low-pass or high-pass filters. The problem with that is the highly damped frequency-response curve that follows, which is evident in the attenuated voltage gain at the cutoff frequency itself.
In analog circuitry, a solution to this problem exists in the “Sallen-Key Filter”, which naturally has a gain at the corner frequency of (-6dB), the same as would result if two primitive filters were simply chained. But beyond that, the analog filter can be given (positive) feedback gain, in order to increase its Q-factor.
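The (-6dB) figure can be checked with a line of arithmetic (a sketch of my own, not taken from any circuit analysis here): one first-order stage passes (1/sqrt(2)) of the voltage at its corner frequency, and chaining two such stages multiplies the gains.

```python
import math

# Gain of one first-order low-pass at its own corner frequency:
one_stage = 1.0 / math.sqrt(2.0)      # -3 dB

# Two primitive stages chained: the voltage gains multiply.
two_stages = one_stage ** 2           # = 0.5

db = 20.0 * math.log10(two_stages)    # about -6.02 dB
```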
I set out to write some pseudo-code for how such a filter could also be converted into algorithms…
Second-Order…

LP:

for i from 1 to n
    Y[i] := (k * Y[i-1]) + ((1 - k) * X[i]) + Feedback[i-1]
    Z[i] := (k * Z[i-1]) + ((1 - k) * Y[i])
    Feedback[i] := (Z[i] - Z[i-1]) * k * α
    (output Z[i])

BP:

for i from 1 to n
    Y[i] := (k * Y[i-1]) + ((1 - k) * X[i]) + Feedback[i-1]
    Z[i] := k * (Z[i-1] + Y[i] - Y[i-1])
    Feedback[i] := Z[i] * (1 - k) * α
    (output Z[i])

HP:

for i from 1 to n
    Y[i] := (k * (Y[i-1] + X[i] - X[i-1])) + Feedback[i-1]
    Z[i] := k * (Z[i-1] + Y[i] - Y[i-1])
    Feedback[i] := Z[i] * (1 - k) * α
    (output Z[i])

Where k is the constant that defines the corner frequency via ω, and α is the constant that peaks the Q-factor:

ω = 2 * sin(π * F0 / h)
k = 1 / (1 + ω),  F0 < (h / 4)

h is the sample-rate.
F0 is the corner frequency.

To achieve a Q-factor (Q):

α = 2 + (2 * sin^2(π * F0 / h)) - (1 / Q)

‘Damping Factor’ (ζ) = 1 / (2 * Q)
Maximally flat (Butterworth) damping: ζ = 1 / sqrt(2) (…) Q = 1 / sqrt(2)
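For readers who prefer something runnable, here is a direct transcription of the LP variant into Python (the function and variable names are my own; the parameter computations follow the formulas above):

```python
import math

def second_order_lp(x, f0, h, q):
    """Second-order low-pass: two chained first-order stages plus
    positive feedback, transcribed from the pseudo-code above.
    Requires f0 < h / 4."""
    omega = 2.0 * math.sin(math.pi * f0 / h)
    k = 1.0 / (1.0 + omega)
    alpha = 2.0 + 2.0 * math.sin(math.pi * f0 / h) ** 2 - 1.0 / q
    y = z = feedback = 0.0
    out = []
    for xi in x:
        y = (k * y) + ((1.0 - k) * xi) + feedback
        z_new = (k * z) + ((1.0 - k) * y)
        feedback = (z_new - z) * k * alpha
        z = z_new
        out.append(z)
    return out

# At DC the feedback term vanishes (it is proportional to the
# difference of successive outputs), so a constant input should
# settle to the same constant output, i.e. unity gain.
settled = second_order_lp([1.0] * 4000, f0=1000.0, h=48000.0,
                          q=1.0 / math.sqrt(2.0))[-1]
```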
(Algorithm Revised 2/08/2021, 23h40. )
(Computation of parameters Revised 2/09/2021, 2h15. )
(Updated 2/10/2021, 18h25… )
(As of 2/07/2021, 9h05… )
I would just like to acknowledge that the WiKiPedia defines the Low-Pass Filter (…algorithm) differently from the way I do, and in a way which is not symmetrical with its definition of the High-Pass Filter:
My explanation for the difference in how the WiKiPedia defines its Low-Pass Filter lies in the fact that it uses a different value of (“α”) to define the same corner frequency than the value of (“α”) it uses for its High-Pass Filter. Accounting for the fact that I define the corresponding symbol, (‘k’), the same way both times should yield the same type of Low-Pass Filter as the WiKiPedia yields.
(Update Removed 2/08/2021, 14h45, because inaccurate.)
(Update 2/09/2021, 1h30: )
I found the following Web-article useful in deciphering how second-order low-pass filters generally work:
(Update 2/09/2021, 3h15: )
One observation I made about the primitive building-blocks of discretized filters was that, while the algorithms shown yield accurate frequencies when the corner frequency is low compared with the Nyquist Frequency, as the corner frequency approaches the Nyquist Frequency, a strange behaviour sets in: a gain of (1/2), not of (1/sqrt(2)). I have explained this away in different parts of my blog as the inability of the sampling interval to capture any signal component which has been phase-shifted 90⁰. In short, what any analog low-pass filter is supposed to do at its corner frequency is to yield an amplitude of (1/sqrt(2)), but with a 45⁰ phase-shift. Such a phase-shift is also representable as two components, one in-phase and the other 90⁰ out of phase, each at (1/2) amplitude. As if the sampling interval were ignoring one of the components, these algorithms seem to yield (1/2) amplitude, at 0⁰ phase-shift.
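The decomposition can be verified with complex numbers (my own sketch, not part of the algorithm): a gain of (1/sqrt(2)) at a 45⁰ lag splits exactly into an in-phase component of (1/2) and a quadrature component of (1/2).

```python
import cmath
import math

# The analog corner-frequency response: amplitude 1/sqrt(2), phase -45 degrees.
response = (1.0 / math.sqrt(2.0)) * cmath.exp(-1j * math.pi / 4.0)

in_phase = response.real       # component in phase with the input: 0.5
quadrature = -response.imag    # component lagging by 90 degrees:   0.5
```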
The problematic aspects of this behaviour are twofold:
- This cannot be setting in abruptly when the corner frequency equals or exceeds (h / 4), and
- The open-loop gain in these filters, applied in pairs, needs to be known exactly, so that high Q-factors can be implemented, and the inverse of the gain approached but never exceeded.
Thus, I felt that the best way to visualize this over-damping would be that a 90⁰ phase-shifted vector starts to get added in to the in-phase component as the corner frequency increases, and that the expected gain that analog filters would attain results as a hypotenuse. But this hypotenuse will be missing from the algorithmic filters’ gain, so that it must be multiplied in when determining the parameter that leads to the Q-factor. By itself, this would yield the term:
However, since the algorithmic filters form a chain of pairs, the square root can be omitted. Further, two consecutive halves form a quarter.
This is how my conjecture forms, as to the correct way to achieve the desired Q-factors.
(Update 2/10/2021, 18h25: )
There is an observation about this discretization of the filters which should really be more obvious than the one I described above. It suffers from the main problem that the feedback samples are not being added instantaneously, while feedback in an analog circuit would be instantaneous.
In this emulation, the feedback is effectively being added with a delay of 1 sampling interval, after having passed through a High-Pass and a Low-Pass Filter ‘normally’, which in turn would also make it useless above half the Nyquist Frequency (= h / 4), where each sample deviates from the previous sample in the opposite direction, so that ‘positive’ feedback would only lead to greater cancellation. At half the Nyquist Frequency, feedback samples are being added in with an effective 90⁰ phase-delay (compared to the actual input samples).
What this would seem to suggest is that, in order for the filters to be applied over the span of the original Nyquist Frequency, the inputs must first be up-sampled 2x, perhaps with a linear interpolation, the intended filters applied according to the Math of the new sample-rate, and then half of the resulting output-samples actually used as output, at the original sample-rate.
In this special case, there should be no complicated requirements on how to down-sample, which would come into effect if there could be (relevant) frequency-components present above half the (derived) Nyquist Frequency. The reason such frequency-components would not be present is the fact that the derived sample-rate would only contain signal components present at the original sample-rate.
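A minimal sketch of that up-sample / filter / decimate pipeline (the helper names and the use of linear interpolation are my own assumptions; filt stands for whichever discretized filter is being applied, with its parameters computed for the doubled sample-rate, 2h):

```python
def upsample_2x_linear(x):
    """Insert one linearly interpolated sample between each pair
    of input samples, doubling the effective sample rate."""
    out = []
    for a, b in zip(x, x[1:]):
        out.append(a)
        out.append(0.5 * (a + b))
    if x:
        out.append(x[-1])
    return out

def filter_at_2x(x, filt):
    """Apply 'filt' (parameterized for the doubled sample rate),
    then keep only every second output sample."""
    return filt(upsample_2x_linear(x))[::2]
```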
But then, this observation would also have an effect on what the consequence will be if a fed-back sample is added with a phase-delay of 90⁰ to the input sample. According to how Resonance is formally described, if the Q-factor is high, its phase-delay will get close to 90⁰ but not quite reach it. And, at a phase-delay of exactly 90⁰, a physical resonator will fail to receive any more energy from the excitor, thereby having no reason to become active. At a phase-delay exceeding 90⁰, the resonator will also be losing energy to the excitor, when both are physical components.
According to what I’ve already written, I will leave the exercise up to the reader to compute an adjusted value of (α), which should compensate for this latter phenomenon, which is stronger than the phenomenon I adjusted for earlier.
One thing I can tell the reader not to do is ever to allow (α) to reach or exceed (4.0). The reason is as follows… If (F0) is chosen to be (h / 6)…
(k = (1 - k) = 0.5), and (k * (1 - k) = 0.25),
which will be the maximum open-loop gain, not accounting for (α). Therefore, if (α >= 4.0), a runaway sample-value can and will result…
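The arithmetic behind that bound can be checked directly (a sketch; the choice of h = 48000 is only an example, since only the ratio F0 / h matters here):

```python
import math

h = 48000.0        # example sample rate (an assumption)
f0 = h / 6.0       # the case discussed above

omega = 2.0 * math.sin(math.pi * f0 / h)   # 2 * sin(pi/6) = 1.0
k = 1.0 / (1.0 + omega)                    # = 0.5, so (1 - k) = 0.5
open_loop = k * (1.0 - k)                  # = 0.25, its maximum value

# With alpha >= 4.0, the loop gain (alpha * open_loop) reaches 1.0,
# at which point sample values can grow without bound.
```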