Hypothetically, how an FFT-based equalizer can be programmed.

One of the subjects I only recently posted about was that I had activated an equalizer function that was once integrated into the PulseAudio sound server, but which may now be installed via additional packages, in more recent versions of Debian Linux. As I wrote, activating this under Debian 8 / Jessie was a bit problematic at first, but could ultimately be accomplished. The following is what the controls of this equalizer look like, on the screen:

Equalizer_1

And this is what the newly-created ‘sink’ is named, within the (old) KDE-4 desktop manager’s Settings Panel:

Equalizer_2

What struck me as remarkable about this was its naming, as an “FFT-based Equalizer…”. I had written an earlier posting about How the Fast Fourier Transform differs from the Discrete Fourier Transform. And because I tend to think first about how convolutions may be computed using a Discrete Cosine Transform, it took me a bit of thought to comprehend how an equalizer function could be implemented, based on the FFT.

BTW, that earlier posting which I linked to above has a major flaw: a guess on my part about how MP3 sound compression works, which rests on a false assumption. I have made more recent postings on how sound-compression schemes work, which no longer make the same false assumption. But otherwise, that old posting still explains what the difference is between the FFT and other, discrete transforms.

So the question which may go through some readers’ minds, like mine, would be how a graphic equalizer based on the FFT can be made as computationally efficient as possible. Obviously, when the FFT is only being used to analyze a sampling interval, what results is a (small) number of frequency coefficients, spaced approximately uniformly over a series of octaves. Apparently, such a set of coefficients-as-output needs to be replaced by one stream each, isolating one frequency-component. Each stream then needs to be multiplied by an equalizer setting, before being mixed into the combined equalizer output.

I think that one way to compute that would be to replace the ‘folding’ operation normally used in Fourier Analysis with a procedure that only computes one or more product-sums, of the input signal with reference sine-waves, but, in each case except for the lowest frequency, over only a small fraction of the entire buffer, that fraction becoming shorter according to powers of 2.

Thus, one thing would remain constant: in order for the equalizer to isolate the frequency of ~31Hz, a sine-product with a buffer of 1408 samples needs to be computed, once per input sample. But beyond that, determining the ~63Hz frequency-component really only requires that the sine-product be computed with the most recent 704 samples of the same buffer. Frequency-components that belong to even-higher octaves can all be computed as per-input-sample sine-products with the most recent 352 input samples, etc. (for multiples of ~125Hz). Eventually, as the frequency-components become multiples of a single octave’s base frequency, an interval of 176 input samples can be used for all the frequency-components belonging to the same octave, thus yielding the ~500Hz and ~750Hz components… After that, in order to filter out the ~1kHz and ~1.5kHz components, a section of the buffer only 88 samples long can be used…
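To make that scheme concrete, here is a minimal sketch in Python (the 44.1kHz sampling rate, the function name, and the normalization are my own assumptions, not anything taken from the PulseAudio module): each band’s amplitude is estimated as a product-sum of the most recent N samples with a reference sine / cosine pair, where N halves with each ascending octave.

```python
import numpy as np

FS = 44100   # assumed sampling rate
BUF = 1408   # full buffer length, which isolates ~31Hz

def band_component(buffer, freq, n):
    """Estimate one band's amplitude, as the product-sum of the
    most recent n samples with a reference sine / cosine pair."""
    seg = buffer[-n:]
    t = np.arange(n) / FS
    s = np.sum(seg * np.sin(2.0 * np.pi * freq * t))
    c = np.sum(seg * np.cos(2.0 * np.pi * freq * t))
    # The product-sum of a sine with itself, over n samples, is n/2
    return 2.0 * np.hypot(s, c) / n

# A pure ~63Hz test tone, filling the whole 1408-sample buffer:
t = np.arange(BUF) / FS
x = 0.8 * np.sin(2.0 * np.pi * 63.0 * t)

a63 = band_component(x, 63.0, 704)    # ~63Hz: most recent 704 samples
a31 = band_component(x, 31.0, 1408)   # ~31Hz: the whole buffer
```

Only the lowest band touches the whole buffer; every higher octave touches half as many samples, which is where the computational saving would come from.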

Mind you, one alternative to doing all that would be to apply a convolution of fixed length to the input stream constantly, but to recompute that convolution’s kernel, by first interpolating frequency-coefficients between the GUI’s slider-positions, and then applying one of the Discrete Cosine Transforms to the resulting set of coefficients. The advantage of using a DCT in this way would be that the coefficients would only need to be recomputed once, every time the user changes the slider-positions. But then, to name the resulting equalizer an ‘FFT-based’ equalizer would actually be false.
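Here is a sketch of that alternative. The slider frequencies, kernel length and windowing below are my own assumptions; and instead of a literal DCT routine, I’ve used NumPy’s `irfft`, which, for the purely real (zero-phase) magnitude response constructed here, amounts to evaluating the same cosine series a DCT would.

```python
import numpy as np

FS = 44100    # assumed sampling rate
TAPS = 256    # assumed kernel length

def kernel_from_sliders(slider_freqs, slider_gains):
    """Interpolate the slider gains onto a uniform frequency grid,
    then inverse-transform them into a symmetric FIR kernel."""
    grid = np.linspace(0.0, FS / 2.0, TAPS // 2 + 1)
    mag = np.interp(grid, slider_freqs, slider_gains)
    h = np.fft.irfft(mag, n=TAPS)   # real spectrum -> cosine-built kernel
    h = np.roll(h, TAPS // 2)       # centre the impulse (linear phase)
    return h * np.hanning(TAPS)     # window, to reduce ripple

# With all ten sliders flat (gain 1.0), the kernel should collapse
# to a near-ideal unit impulse:
freqs = [31, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]
flat = kernel_from_sliders(freqs, [1.0] * len(freqs))
```

This kernel would then be convolved with the input stream constantly, but only be recomputed when a slider actually moves.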

(Updated 7/25/2020, 11h15… )


Basic Colpitts Oscillator

One of the concepts which I’ve been exploring on my blog concerns tuned circuits, and another concerns Voltage-Controlled Oscillators (VCOs). As one type of voltage-controlled oscillator, I have considered an Astable Multivibrator, which has the advantage of a wide frequency-range, but the eventual disadvantage of a limited maximum frequency, when the supply voltage is only 3V. There could be other, more-complex types of VCOs that apply when, say, 200MHz is needed, but one basic type of oscillator which will continue to work under such conditions, which has been known for a century, and which will require an actual Inductor – a discrete coil – is called the Colpitts Oscillator. Here is its basic design:

Colpitz_1.svg

In this schematic I’ve left out actual component values, because those will depend on the intended frequency, on the available supply voltage, on whether a discrete transistor or an Integrated Circuit is to be used, and on whether the transistor is bipolar or a MOSFET… But certain constraints on the component-values apply nevertheless. It’s assumed that C1 and C2 form part of the resonant “Tank Circuit” with L1, that in series they define the frequency, and that they are to be made equal. C3 is not a capacitor with a critical value; it is to be chosen just large enough to act as a coupling-capacitor at the chosen frequency (:2) . R2 is to be made consistent with the amount of bias current to flow through Q1, and R1 is chosen so that, as labelled, the correct bias voltage can be applied – in this case, to a MOSFET – without interfering with the signal-frequency supplied through C3.

I’m also making the assumption that everything to the right of the dotted line would be put on a chip, while everything to the left of the dotted line would be supplied as external, discrete components. This is also why C3, a coupling capacitor, becomes possible.

The basic premise of this oscillator is that C1 and C2 do not only act as a voltage-divider but that, when the circuit formed between L1, C1 and C2 resonates with a considerable Q-factor (>= 5), C1 and C2 actually act as though they were a centre-tapped auto-transformer. If this circuit were not resonating, the behaviour of C1 and C2 would not be so. But as long as it is, it’s possible for a driving voltage, together with a driving current, to be supplied to the connection between C1 and C2 – in this case by the Source of Q1 – and the voltage which will form where C1 connects with both L1 and the Gate of Q1 (that last part, through C3) will essentially be the former, driving voltage doubled. Therefore, all that the active component needs to do is form a voltage-follower between its Gate and Source, so that the voltage-deviations at the Source follow from those at the Gate, with a gain greater than (0.5). If that can be achieved, the open-loop gain of this circuit will exceed (1.0), and it will oscillate.
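For the sake of illustration, here is the resonant-frequency arithmetic, with hypothetical component values chosen to land near the 200MHz figure mentioned earlier: the tank’s frequency follows from L1 together with C1 and C2 in series, and with C1 equal to C2, the tap between them sees half the tank voltage.

```python
import math

L1 = 100e-9        # 100nH coil (hypothetical value)
C1 = C2 = 12e-12   # 12pF each (hypothetical values)

# C1 and C2 in series define the tank frequency, together with L1:
Cs = (C1 * C2) / (C1 + C2)                        # 6pF
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L1 * Cs))   # ~205MHz

# With C1 == C2, the tap between them sees half the tank voltage,
# which is why a follower with a gain greater than 0.5 suffices:
boost = (C1 + C2) / C2                            # ideal tap step-up: 2.0
```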

It goes without saying that C1 and C2 will also isolate whatever DC voltage may exist at the Source of Q1, from the DC voltage of L1.


 

There is a refinement to be incorporated, specifically to achieve a VCO. Some type of varactor needs to be connected in parallel with L1, so that low-frequency voltage-changes on the varactor will change the frequency at which this circuit oscillates, because, by definition, a varactor contributes a variable capacitance.
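A small numeric sketch of that premise (component values again hypothetical): the varactor’s capacitance Cv simply adds to the tank’s capacitance, so the oscillation frequency is pulled downward as Cv grows.

```python
import math

L1 = 100e-9    # hypothetical 100nH coil
Cs = 6e-12     # hypothetical C1 and C2 (12pF each), in series

def f_osc(cv):
    """Oscillation frequency, with varactor capacitance cv in
    parallel with L1, hence added to the tank capacitance."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L1 * (Cs + cv)))

f_hi = f_osc(1e-12)   # varactor near 1pF -> ~190MHz
f_lo = f_osc(4e-12)   # varactor near 4pF -> ~159MHz
```

Note that keeping Cv well below C1 and C2 also preserves the capacitive-divider behaviour the oscillator depends on.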

What some sources will suggest is that the best way to add a varactor to this circuit is to put in yet another coupling capacitor, plus a resistor, the latter of which supplies the low-frequency voltage to the varactor. But I would urge my reader to be more creative, in how a varactor could be added. One way I could think of might be to get rid of R1 and C3 and, instead of terminating L1 together with C2 to ground, to terminate them to the supply voltage, thus ensuring that Q1 is biased ‘On’, even though the coupling capacitor C3 would have been removed in that scenario. What would be the advantage in this case? The fact that the varactor could then be implemented on-chip, and not supplied as yet another external, discrete component, many of which would eat up progressively more space on a circuit-board, as a complex circuit is being created.

I should also add that some problems will result if the capacitance to be connected in parallel with L1 becomes as large as either C1 or C2. An eventual situation will result, in which C1 and C2 stop acting as though they formed a (voltage-boosting) auto-transformer. An additional voltage-divider would form, between C1, in this case, and the added, parallel capacitance. And this gives more food for thought. (:1)

 

(Possible Usage Scenario : )

(Updated 7/29/2019, 14h45 … )


A Gap in My Understanding of Surround-Sound Filled: Separate Surround Channel when Compressed

In this earlier posting of mine, I had written about certain concepts in surround-sound, which were based on Pro Logic and the analog days. But I had gone on to write that, in the case of the AC3 or the AAC audio CODEC, the actual surround channel could be encoded separately from the stereo. The purpose in doing so would have been that, if decoded on the appropriate hardware, the surround channel could be sent directly to the rear speakers – thus giving 6-channel output.

While writing what I just linked to above, I had not yet realized that either channel of the compressed stream could have its phase information conserved. This had caused me some confusion. Now that I realize that the phase information could be correct, and not based on the sampling windows themselves, a conclusion comes to mind:

Such a separate, compressed surround-channel would already be 90⁰ phase-shifted with respect to the panned stereo. And what this means could be that, if the software recognizes that only 2 output channels are to be decoded, the CODEC might just mix the surround channel directly into the stereo. The resulting stereo would then also be prepped for Pro Logic decoding.
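A minimal sketch of what such a 2-channel fallback could look like (this is my own illustration, not the CODEC’s actual code; the 0.707 gain and the FFT-based phase shift are assumptions): the surround channel is phase-shifted by 90⁰ and mixed into the two stereo channels with opposite signs, which is the classic matrix-encoding arrangement.

```python
import numpy as np

def shift_90(x):
    """Phase-shift a signal by -90 degrees, via the frequency domain."""
    X = np.fft.rfft(x)
    X[1:] *= -1j                    # rotate every non-DC component
    return np.fft.irfft(X, n=len(x))

def downmix(left, right, surround, gain=0.707):
    """Fold a discrete surround channel into stereo, Pro Logic-style."""
    s90 = shift_90(surround)
    return left + gain * s90, right - gain * s90

# Example: a sine, shifted by -90 degrees, becomes a negative cosine.
n = np.arange(1024)
tone = np.sin(2.0 * np.pi * 8.0 * n / 1024.0)
shifted = shift_90(tone)
```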

Dirk

 

A Word Of Compliment To Audacity

One of the open-source applications which can be used as a sound-editor is named ‘Audacity’. And in an earlier posting, I had written that this application may apply certain effects which first involve performing a Fourier Transform of some sort on sampling-windows, then manipulating the frequency-coefficients, and then inverting the Fourier Transform, to result in time-domain sound samples again.

On closer inspection of Audacity, I’ve recently come to realize that its programmers have avoided going that route as often as possible. They may have designed effects which sound more natural as a result, but which follow how traditional, analog methods used to process sound.

In some places, this has actually led to criticism of Audacity, let’s say because users have discovered that a low-pass or a high-pass filter does not maintain phase-constancy. But in traditional audio work, low-pass and high-pass filters always used to introduce phase-shifts. Audacity simply brings this behaviour into the digital realm.
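To put a number on that: a first-order (‘RC-style’) low-pass has the transfer function H(f) = 1 / (1 + j·f/fc), and its phase lag right at the cutoff frequency is exactly 45 degrees, something a zero-phase, FFT-based filter would never introduce. A small sketch:

```python
import math

def first_order_phase(f, fc):
    """Phase response, in degrees, of the first-order low-pass
    H(f) = 1 / (1 + j*f/fc)."""
    return -math.degrees(math.atan(f / fc))

at_cutoff = first_order_phase(1000.0, 1000.0)   # -45 degrees
well_below = first_order_phase(100.0, 1000.0)   # only about -5.7 degrees
```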

I just seem to remember certain other sound editors, that used the Fourier Transforms extensively.

Dirk