Thoughts About Software Equalizers

If a software equalizer has GUI controls that correspond to approximate octaves, or to repeated 1-2-5 sequences, it is quite likely to be implemented as a set of bandpass filters acting in parallel. However, the simplistic bandpass filters I was contemplating would also have required that the signal be multiplied by a factor of 4, to achieve unit gain where their low-pass and high-pass cutoff frequencies join, as I described in this posting.

(Edit 03/23/2017:

Actually, the parameters that define each digital filter are non-trivial to compute, but they are computable once the translation into the digital domain has been carried out correctly. So a type of equalizer can be achieved, based on derived bandpass filters, on the premise that each bandpass filter has been tuned correctly.

If the filters cross over at their -6 dB points, then one octave lower or higher, one component filter will have reached its -3 dB point, while the other has reached its -12 dB point. So instead of -12 dB, the combination yields -15 dB.

The fact that the signal which has wandered into one adjacent band is at -3 dB with respect to the center of that band does not lead to a simple summation, because there is also a phase shift between the frequency components that wander across.

I suppose the user should be aware that in such a case, the gain of the adjacent bands has not dropped to zero at the peak of the current band, and that perhaps the summed signal will simplify if the corner frequencies have been corrected. That way, a continuous response curve will result from discrete slider settings.

Now, if the intention is to design a digital bandpass filter with a falloff curve steeper than 6 dB/octave, the simplistic approach would be just to put two of the previous stages in series – into a pipeline – resulting in second-order filters (sketched below).

Also, the only way then to preserve the accuracy of the input samples is to convert them into floating-point format first, for use in processing, after which they can be exported to a practical audio format again. )
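As a minimal sketch of both points above – cascading two first-order stages in series, and doing the work in floating-point – here is some hypothetical Python, where the one-pole low-pass recurrence and the 2 kHz cutoff are my assumptions, not the filters from the earlier posting:

```python
import numpy as np

def first_order_lowpass(x, cutoff_hz, rate_hz):
    """One-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / rate_hz)
    y = np.empty_like(x)
    acc = 0.0
    for n, sample in enumerate(x):
        acc += a * (sample - acc)
        y[n] = acc
    return y

rate = 44100
# A stand-in for real input: one second of a 1 kHz tone as 16-bit PCM.
pcm = (20000 * np.sin(2 * np.pi * 1000 * np.arange(rate) / rate)).astype(np.int16)

x = pcm.astype(np.float64) / 32768.0                 # to floating-point first
stage1 = first_order_lowpass(x, 2000.0, rate)        # 6 dB/octave roll-off
stage2 = first_order_lowpass(stage1, 2000.0, rate)   # in series: 12 dB/octave
out = np.clip(stage2 * 32768.0, -32768, 32767).astype(np.int16)  # back to PCM
```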

(Edit 03/25/2017 :

The way simplistic high-pass filters work, they phase-shift the signal by close to +90° far down along the part of the frequency-response curve which represents their roll-off. And simplistic low-pass filters will phase-shift the signal by close to -90° under corresponding conditions.

OTOH, either type of filter is supposed to phase-shift its signal by ±45°, at its -3 dB point.

What this means is that if the output from several band-pass filters is taken in parallel – i.e. summed – then the center frequency of one band will lie along the roll-off part of each adjacent band's curve, combined with the -3 dB point of either that band's high-pass or its low-pass component. But then, if the output of this one central band is set to zero, the outputs from the adjacent bands will be 90° apart from each other. )
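To make the ±45° and ±90° figures concrete, here is a small sketch that evaluates the analytic first-order RC responses; the 1 kHz corner frequency is an arbitrary assumption:

```python
import numpy as np

fc = 1000.0                                     # corner frequency, Hz (assumed)

def lowpass(f):
    return 1.0 / (1.0 + 1j * f / fc)            # first-order low-pass H(jf)

def highpass(f):
    return (1j * f / fc) / (1.0 + 1j * f / fc)  # first-order high-pass H(jf)

for f in (fc / 16, fc, 16 * fc):
    for name, H in (("LP", lowpass(f)), ("HP", highpass(f))):
        print(f"{f:8.1f} Hz  {name}: {20 * np.log10(abs(H)):7.2f} dB, "
              f"{np.degrees(np.angle(H)):7.2f}°")

# At fc, both filters show -3.01 dB with a -45° (LP) or +45° (HP) shift;
# far down the roll-off (fc/16 for the HP, 16*fc for the LP), the shift
# approaches -90° or +90°.
```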

(Edit 03/29/2017 :

A further conclusion of this analysis would seem to be that even to implement an equalizer with 1 slider per octave properly requires that each bandpass filter be a second-order filter instead. That way, when signals wander across to the center frequency of the slider for the next octave, they will be at -6 dB relative to the output of that slider, and 180° phase-shifted with respect to each other. Then, setting the center slider to its minimum position will cause the adjacent ones to form a working Notch Filter, and will thus allow any one band to be adjusted arbitrarily low.

And, halfway between the slider center frequencies, the gain of each will again be -3 dB, resulting in a phase shift of ±90° with respect to the other one, and achieving flat frequency response when all the sliders are in the same position.

The problem becomes that if a 20-band equalizer is attempted, then because the 1-per-octave example already required second-order bandpass filters, the narrower bands will by the same token require 4th-order filters, which would be a headache to program… )
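A rough sketch of such a parallel bank of second-order band-pass sections follows. The octave spacing, the Butterworth design from scipy.signal, and the band edges at fc/√2 and fc·√2 are all my assumptions, and the corner frequencies would still need the careful tuning described above:

```python
import numpy as np
from scipy import signal

RATE = 48000
CENTERS = [31.25 * 2**k for k in range(10)]   # ten octave-spaced bands (assumed)

def bandpass_sos(fc):
    # signal.butter with N=1 and btype="bandpass" yields a second-order filter.
    lo, hi = fc / np.sqrt(2.0), fc * np.sqrt(2.0)   # one octave wide (assumed)
    return signal.butter(1, [lo, hi], btype="bandpass", fs=RATE, output="sos")

def equalize(x, slider_gains):
    """Parallel filter bank: filter each band, scale by its slider, then sum."""
    return sum(g * signal.sosfilt(bandpass_sos(fc), x)
               for g, fc in zip(slider_gains, CENTERS))

# With all sliders equal, the summed response should come out roughly flat.
noise = np.random.default_rng(0).standard_normal(RATE)
flat = equalize(noise, [1.0] * len(CENTERS))
```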


How certain signal-operations are not convolutions.

One concept that exists in signal processing is that a filter can be defined in the time domain, and that this definition can resemble a convolution. And yet, a filter derived from it may no longer be expressible perfectly as a convolution.

For example, the filter in question might add reverb to the signal recursively. In the frequency domain, the closer two frequencies are that need to be distinguished, the longer the interval in the time domain that needs to be considered before an output sample can be computed – roughly, resolving components Δf apart requires a window on the order of 1/Δf seconds.

Well, reverb that is recursive would need to be expressed as a convolution with an infinite number of samples. In the frequency domain, this would result in sharp spikes instead of smooth curves.

I.e., if the time-constant of the reverb was 1/4 millisecond, a 4 kHz sine wave would complete one cycle within this interval, while a 2 kHz sine wave would be inverted 180° in phase. What this can mean is that the representation in the frequency domain may simply have maxima and minima that alternate every 2 kHz. And the task of making the effect recursive might never even be undertaken.
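A minimal sketch of such a recursive effect, assuming a feedback comb filter with a 1/4-millisecond delay; the 0.5 feedback gain and the 48 kHz sample rate are arbitrary choices of mine:

```python
import numpy as np

RATE = 48000
DELAY = RATE // 4000          # 1/4 ms = 12 samples at 48 kHz
G = 0.5                       # feedback gain (assumed)

def recursive_echo(x):
    """y[n] = x[n] + G * y[n - DELAY] -- an IIR, so no finite convolution kernel."""
    y = x.astype(np.float64).copy()
    for n in range(DELAY, len(y)):
        y[n] += G * y[n - DELAY]
    return y

impulse = np.zeros(RATE // 10)
impulse[0] = 1.0
tail = recursive_echo(impulse)    # the impulse response never fully dies out

# Frequency response of the equivalent infinite kernel: 1 / (1 - G e^{-jwD}).
f = np.array([2000.0, 4000.0, 6000.0, 8000.0])
w = 2 * np.pi * f / RATE
H = 1.0 / (1.0 - G * np.exp(-1j * w * DELAY))
print(np.round(np.abs(H), 3))     # minima at 2 and 6 kHz, maxima at 4 and 8 kHz
```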

(Last Edited on 02/23/2017 … )


The Advantage of Linear Filters – i.e. Convolutions

A question might come to mind for readers who are not familiar with this subject, as to why the subset of ‘Morphologies’ known as ‘Convolutions’ – i.e. ‘Linear Filters’ – is advantageous in filtering signals.

This is because, even though such a static system of coefficients, applied uniformly to the input samples, will often produce spectral changes in the signal, it will not produce frequency components that were not present before. If new frequency components are produced, this is referred to as ‘distortion’, while otherwise all we get are spectral errors – i.e. ‘coloration of the sound’. The latter type of error is gentler on the ear.
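A toy demonstration of the difference; the 440 Hz test tone, the small smoothing kernel, and hard clipping as the non-linear counterexample are all assumptions of mine:

```python
import numpy as np

rate = 8000
t = np.arange(rate) / rate
x = np.sin(2 * np.pi * 440 * t)             # a pure 440 Hz tone

kernel = np.array([0.25, 0.5, 0.25])        # a small linear filter (convolution)
linear = np.convolve(x, kernel, mode="same")
clipped = np.clip(x, -0.5, 0.5)             # a non-linear operation, for contrast

def active_bins(sig):
    spectrum = np.abs(np.fft.rfft(sig))     # with N = rate, bin k sits at k Hz
    return np.nonzero(spectrum > 1.0)[0]    # crude threshold on bin magnitude

print(active_bins(linear))        # expect [440]: the tone recolored, nothing new
print(active_bins(clipped)[:4])   # expect new odd harmonics: 1320, 2200, 3080 Hz
```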

For this reason, the mere realization that certain polynomial approximations can be converted into systems that entirely produce linear products of the input samples makes those approximations more interesting.
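As one hedged example of such a conversion – cubic Lagrange interpolation, my choice of polynomial rather than necessarily the one intended – evaluating the fitted polynomial at a fixed fractional offset collapses into a constant 4-tap kernel, i.e. an ordinary convolution:

```python
import numpy as np

def lagrange_weights(t):
    """Cubic-Lagrange weights for samples at n = -1, 0, 1, 2, offset 0 <= t <= 1."""
    return np.array([
        -t * (t - 1.0) * (t - 2.0) / 6.0,
        (t + 1.0) * (t - 1.0) * (t - 2.0) / 2.0,
        -(t + 1.0) * t * (t - 2.0) / 2.0,
        (t + 1.0) * t * (t - 1.0) / 6.0,
    ])

# At a FIXED offset, the weights become constants -- a static 4-tap kernel.
print(lagrange_weights(0.5))      # [-0.0625  0.5625  0.5625 -0.0625]

# Interpolating every half-sample point is then just a convolution
# (this particular kernel is symmetric, so no flipping is needed):
x = np.sin(0.3 * np.arange(20))
halfway = np.convolve(x, lagrange_weights(0.5), mode="valid")
```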

OTOH, if each sampling of the continuous polynomial curve took place at a random, irregular point in time – thus truly revealing it to be a polynomial – then additional errors would be introduced, which might resemble ‘noise’, because they may not have deterministic frequencies with respect to the input.

And the fact that the output samples are generated at a frequency which is a multiple of the original sample rate also means that new frequency components will be generated, going up to the same multiple.

In the case of digital signal processing, the most common type of distortion is ‘Aliasing’, while with analog methods it used to be ‘Total Harmonic Distortion’, followed by ‘Intermodulation Distortion’.

If we up-sample a digital stream and apply a filter which consistently underestimates the sub-sampled values in between, then the resulting distortion will consist of unwanted modulations near the higher, new Nyquist frequency.
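A sketch of the up-sampling step itself, assuming zero-stuffing by a factor of 4 and an FIR interpolation filter from scipy.signal; a filter with too little gain near the old Nyquist frequency would leave exactly the image residues described:

```python
import numpy as np
from scipy import signal

rate, factor = 8000, 4
t = np.arange(rate) / rate
x = np.sin(2 * np.pi * 1000.0 * t)       # a 1 kHz tone at the low rate

# Zero-stuffing: insert factor-1 zeros between samples. On its own, this
# mirrors the spectrum as images around multiples of the old Nyquist frequency.
stuffed = np.zeros(len(x) * factor)
stuffed[::factor] = x

# A low-pass interpolation filter cutting off at the OLD Nyquist frequency
# removes those images; one that underestimates the in-between values leaves
# residues that modulate the signal near the new, higher Nyquist frequency.
taps = signal.firwin(101, cutoff=rate / 2, fs=rate * factor)
upsampled = factor * signal.lfilter(taps, 1.0, stuffed)
```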
