Basic Colpitts Oscillator

Two of the concepts which I’ve been exploring on my blog concern tuned circuits and Voltage-Controlled Oscillators (VCOs). As one type of voltage-controlled oscillator, I have considered an Astable Multivibrator, which has as its advantage a wide frequency-range, but which has as its disadvantage a limited maximum frequency, when the supply voltage is only 3V. There could be other, more-complex types of VCOs that apply when, say, 200MHz is needed, but one basic type of oscillator which will continue to work under such conditions, which has been known for a century, and which requires an actual Inductor – a discrete coil – is called the Colpitts Oscillator. Here is its basic design:

(Figure: Colpitz_1.svg – the basic Colpitts oscillator schematic)

In this schematic I’ve left out actual component values, because those will depend on the intended frequency, on the available supply voltage, on whether a discrete transistor or an Integrated Circuit is to be used, and on whether that transistor is to be bipolar or a MOSFET… But certain constraints on the component-values apply nevertheless. It’s assumed that C1 and C2 form part of the resonant “Tank Circuit” with L1, that in series they define the frequency, and that they are to be made equal. C3 is not a capacitor with a critical value; it is just to be chosen large enough to act as a coupling-capacitor at the chosen frequency (:2) . R2 is to be made consistent with the amount of bias current to flow through Q1, and R1 is chosen so that, as labelled, the correct bias voltage can be applied – in this case, to a MOSFET – without interfering with the signal-frequency supplied through C3.

I’m also making the assumption that everything to the right of the dotted line would be put on a chip, while everything to the left of the dotted line would be supplied as external, discrete components. This is also why C3, a coupling capacitor, becomes possible.

The basic premise of this oscillator is that C1 and C2 do not merely act as a voltage-divider. When the circuit formed by L1, C1 and C2 resonates with a considerable Q-factor (>= 5), C1 and C2 actually act as though they were a centre-tapped auto-transformer. If the circuit were not resonating, C1 and C2 would not behave this way. But as long as it is, a driving voltage, together with a driving current, can be supplied to the connection between C1 and C2 – in this case by the Source of Q1 – and the voltage which forms where C1 connects with both L1 and the Gate of Q1 (that last part, through C3) will essentially be the driving voltage doubled. Therefore, all the active component needs to do is form a voltage-follower between its Gate and Source, so that the voltage-deviations at the Source follow those at the Gate with a gain greater than (0.5). If that can be achieved, the open-loop gain of this circuit will exceed (1.0), and it will oscillate.

It goes without saying that C1 and C2 will also isolate whatever DC voltage may exist at the Source of Q1, from the DC voltage of L1.
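To put a number to the frequency-defining role of L1, C1 and C2: the tank resonates at the frequency set by L1 and the series combination of the two capacitors. A minimal sketch, where the component values are purely my own assumption, since none appear in the schematic:

```python
import math

def colpitts_frequency(L1, C1, C2):
    """Resonant frequency of the tank: L1 together with C1 and C2,
    which the feedback loop sees as their series combination."""
    c_series = (C1 * C2) / (C1 + C2)
    return 1.0 / (2.0 * math.pi * math.sqrt(L1 * c_series))

# Hypothetical values for roughly 200MHz: a 47nH coil and two equal 27pF caps.
print(f"{colpitts_frequency(47e-9, 27e-12, 27e-12) / 1e6:.1f} MHz")
```

With C1 = C2, the series combination is simply half of either capacitor, which is one reason making them equal keeps the arithmetic simple.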


 

There is a refinement to be incorporated, specifically to achieve a VCO. Some type of varactor needs to be connected in parallel with L1, so that low-frequency voltage-changes on the varactor will change the frequency at which this circuit oscillates, because by definition, a varactor adds variable capacitance.

What some sources will suggest is that the best way to add a varactor to this circuit is to add yet another coupling capacitor, plus a resistor, the latter of which supplies the low-frequency voltage to the varactor. But I would urge my reader to be more creative in how a varactor could be added. One way I can think of would be to get rid of R1 and C3, and, instead of terminating L1 together with C2 to ground, to terminate them to the supply voltage, thus ensuring that Q1 is biased ‘On’, even though the coupling capacitor C3 has been removed in that scenario. What would be the advantage in this case? The fact that the varactor could then be implemented on-chip, and not supplied as yet another external, discrete component – many of which would eat up progressively more space on a circuit-board, as a complex circuit is being created.

I should also add that some problems will result if the capacitance connected in parallel with L1 becomes as large as either C1 or C2. Eventually, a situation will result in which C1 and C2 stop acting as though they formed a (voltage-boosting) auto-transformer. An additional voltage-divider would form, between C1 in this case and the added, parallel capacitance. And this gives more food for thought. (:1)
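To sketch the tuning effect numerically: the varactor’s capacitance adds in parallel to what C1 and C2 already present to L1, pulling the frequency down as it grows. The component values here are again hypothetical, not taken from the schematic:

```python
import math

def tank_frequency(L1, C1, C2, Cv):
    """Oscillation frequency with a varactor capacitance Cv in parallel
    with L1; Cv adds to the series combination of C1 and C2."""
    c_tank = (C1 * C2) / (C1 + C2) + Cv
    return 1.0 / (2.0 * math.pi * math.sqrt(L1 * c_tank))

# Hypothetical: 47nH, C1 = C2 = 27pF, varactor swept from 2pF to 12pF.
for cv in (2e-12, 6e-12, 12e-12):
    f = tank_frequency(47e-9, 27e-12, 27e-12, cv)
    print(f"Cv = {cv * 1e12:4.1f} pF -> {f / 1e6:6.1f} MHz")
```

What this simple formula does not capture is the caveat just stated: once Cv rivals C1 or C2, the voltage-boosting behaviour of the capacitive divider degrades, even though a resonant frequency still exists.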

 

(Possible Usage Scenario : )

(Updated 7/29/2019, 14h45 … )


A Gap in My Understanding of Surround-Sound Filled: Separate Surround Channel when Compressed

In this earlier posting of mine, I had written about certain concepts in surround-sound which were based on Pro Logic and the analog days. But I had gone on to write that, in the case of the AC3 or the AAC audio CODEC, the actual surround channel could be encoded separately from the stereo. The purpose in doing so would have been that, if decoded on the appropriate hardware, the surround channel could be sent directly to the rear speakers – thus giving 6-channel output.

While writing what I just linked to above, I had not yet realized that either channel of the compressed stream could have its phase information conserved. This had caused me some confusion. Now that I realize that the phase information could be correct, and not based on the sampling windows themselves, a conclusion comes to mind:

Such a separate, compressed surround-channel would already be 90⁰ phase-shifted with respect to the panned stereo. And what this means could be that, if the software recognizes that only 2 output channels are to be decoded, the CODEC might just mix the surround channel directly into the stereo. The resulting stereo would then also be prepped for Pro Logic decoding.

Dirk

 

A Word Of Compliment To Audacity

One of the open-source applications which can be used as a Sound-Editor is named ‘Audacity’. And in an earlier posting, I had written that this application may apply certain effects which first perform a Fourier Transform of some sort on sampling-windows, then manipulate the frequency-coefficients, and then invert the Fourier Transform, to result in time-domain sound samples again.

On closer inspection of Audacity, I’ve recently come to realize that its programmers have avoided going that route as often as possible. They may have designed effects which sound more natural as a result, because those effects follow how traditional analog methods used to process sound.

In some places, this has actually led to criticism of Audacity – let’s say, because users have discovered that a low-pass or a high-pass filter does not maintain phase-constancy. But in traditional audio work, low-pass and high-pass filters always used to introduce phase-shifts. Audacity simply brings this fact into the digital realm.
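For readers who wonder what that criticism refers to, the phase lag of even an ideal first-order analog-style low-pass can be computed directly. This is textbook filter behaviour, not anything specific to Audacity’s code:

```python
import math

def lowpass_phase_deg(f, fc):
    """Phase shift of a first-order low-pass filter at frequency f,
    given its corner frequency fc: near 0 deg well below the corner,
    -45 deg at the corner, approaching -90 deg well above it."""
    return -math.degrees(math.atan(f / fc))

# A 1 kHz corner frequency, probed at three points:
for f in (100, 1000, 10000):
    print(f"{f:5d} Hz: {lowpass_phase_deg(f, 1000):6.1f} deg")
```

So a filtered waveform cannot line up sample-for-sample with the original across all frequencies; the shift itself is frequency-dependent.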

I just seem to remember certain other sound editors that used Fourier Transforms extensively.

Dirk

 

A Practical Application, that calls for A Uniform Phase-Shift: SSB Modulation

A concept exists in radio-communications which is derived from amplitude-modulation, and which is further derived from balanced modulation: single-sideband (SSB) modulation. This concept existed even back in the 1970s. Its earliest implementations required that a low-frequency signal be passed to a balanced modulator, which would have the effect of producing an upper sideband (the USB) as well as an inverted lower sideband (the LSB), but zero carrier-energy. Next, the brute-force approach to achieving SSB entailed using a radio-frequency filter, to separate either the USB or the LSB.

The mere encumbrance of such high-frequency filters – especially if this method is to be used at frequencies higher than those of the old ‘CB Radio’ sets – sent Engineers looking for a better approach to SSB modulation and demodulation.

And one approach that has existed since the onset of SSB was actually to operate two balanced modulators, in a scheme where one balanced modulator would modulate the original LF signal. The second balanced modulator would be fed an LF signal which had been phase-delayed 90⁰, as well as a carrier which had been given either a +90⁰ or a -90⁰ phase-shift, with respect to whatever the first balanced modulator was being fed.

The concept being exploited here is that in the USB, where the frequencies add, the phase-shifts also add, while in the LSB, where the frequencies subtract, the phase-shifts also subtract. Thus, when the outputs of the two modulators are mixed, one sideband will be in-phase, while the other will be 180⁰ out-of-phase. If the carrier had been given a +90⁰ phase-shift, then the LSB would end up 180⁰ out-of-phase – and cancel – while if the carrier had been given a -90⁰ phase-shift, the USB would end up 180⁰ out-of-phase – and cancel.
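The cancellation can be verified numerically. Below is a minimal sketch of that phasing scheme for a single audio tone, where sin() stands in for the 90⁰-delayed copies, and the frequencies are chosen arbitrarily:

```python
import math

def phasing_ssb(t, fm, fc):
    """Phasing-method SSB for one audio tone fm on carrier fc.
    Modulator 1 gets the tone and carrier directly; modulator 2 gets
    both delayed 90 deg (sin instead of cos).  Mixing, with the second
    product negated, cancels the difference frequency (the LSB)."""
    m1 = math.cos(2 * math.pi * fm * t) * math.cos(2 * math.pi * fc * t)
    m2 = math.sin(2 * math.pi * fm * t) * math.sin(2 * math.pi * fc * t)
    return m1 - m2  # identity: cos(a)cos(b) - sin(a)sin(b) = cos(a+b)

# The output should match a pure upper-sideband tone at fc + fm.
fm, fc = 1e3, 100e3
for n in range(5):
    t = n / 1e6
    assert abs(phasing_ssb(t, fm, fc) - math.cos(2 * math.pi * (fc + fm) * t)) < 1e-9
print("LSB cancelled; only the USB remains")
```

Flipping the sign of the second product – equivalent to the opposite carrier phase-shift – yields cos(a)cos(b) + sin(a)sin(b) = cos(a-b), i.e. the LSB alone.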

This idea hinges on one ability: To phase-shift an audio-frequency signal spanning several octaves, so that a uniform phase-shift results, but also so that the amplitude of the derived signal stays consistent over the required frequency-band. The audio signal could be filtered to reduce the number of octaves that need to be phase-shifted, but in that case it would need to be filtered once, to achieve the constrained frequency-range, before being used twice.

And so a question can arise, as to how this was achieved historically, given analog filters.

My best guess would be that one stage which was used involved a high-pass and a low-pass filter acting in parallel, with the same corner-frequency, the outputs of which were subtracted – with the high-pass filter negative, for -90⁰. At the corner-frequency, the phase-shifts would have been +/- 45⁰. This stage would achieve approximately uniform amplitude-response, as well as its ideal phase-shift of -90⁰, at the one center-frequency. However, this would also imply that the stage reaches -180⁰ (full inversion) at higher frequencies, because there, the high-pass component that takes over is still being subtracted !
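This guess can actually be checked. For first-order sections, subtracting the high-pass from the low-pass gives H = (1 - jx)/(1 + jx), with x = f/fc – an all-pass with exactly unit gain, whose phase runs from 0⁰ through -90⁰ at the corner, toward -180⁰, just as described above:

```python
import cmath
import math

def lp_minus_hp(f, fc):
    """Gain and phase (degrees) of a first-order low-pass minus a
    first-order high-pass, both sharing the corner frequency fc."""
    x = f / fc
    h = 1 / (1 + 1j * x) - (1j * x) / (1 + 1j * x)
    return abs(h), math.degrees(cmath.phase(h))

for f in (100, 1000, 10000):
    mag, ph = lp_minus_hp(f, 1000)
    print(f"{f:5d} Hz: gain {mag:.3f}, phase {ph:7.1f} deg")
```

So for the first-order case at least, the ‘approximately uniform’ amplitude comes out exactly uniform; the price paid is the frequency-dependent phase.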

( … ? … )

What can in fact be done is that a multi-band signal can be fed to a bank of 2nd-order band-pass filters, spaced 1 octave apart. The fact that the original signal can be reconstructed from their combined output derives partially from the fact that, at one center-frequency, an attenuated version of the signal is also passed through the filter one octave up, with a phase-shift of +90⁰, and a matching attenuated version is also passed through the filter one octave down, with a phase-shift of -90⁰. This means that the two vestigial signals which pass through the adjacent filters are 180⁰ out-of-phase with respect to each other, and cancel out, at the present center-frequency.

If the output from each band-pass filter was to be phase-shifted, this would need to take place in a way that is not frequency-dependent. And so it might seem to make sense to put an integrator at the output of each band-pass filter, the time-constant of which is chosen to achieve unit gain at the center-frequency of that band. But what I also know is that doing so will deform the actual frequency-response of the amplitudes coming from the one band. What I do not know is whether this blends well with the other bands.

If this were even to produce a semi-uniform -45⁰ shift, then the next thing to do would be to subtract the original input-signal from the combined output.

(Edit 11/30/2017 :

It’s important to note that the type of filter I’m contemplating does not fully achieve a phase-shift of +/- 90⁰ at +/- 1 octave. This is just a simplification which I use, to help me understand filters. According to my most recent calculation, this type only achieves a phase-shift of +/- 74⁰, when the signal is +/- 1 octave from its center-frequency. )
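That figure can be reproduced from a standard 2nd-order band-pass transfer function, although the Q required to land on 74⁰ – about 2.3 – is my own inference here, not a value stated anywhere above:

```python
import cmath
import math

def bandpass_phase_deg(f, f0, Q):
    """Phase of the 2nd-order band-pass
    H(s) = (s/(Q*w0)) / ((s/w0)^2 + s/(Q*w0) + 1), in degrees."""
    s = 1j * 2 * math.pi * f
    w0 = 2 * math.pi * f0
    h = (s / (Q * w0)) / ((s / w0) ** 2 + s / (Q * w0) + 1)
    return math.degrees(cmath.phase(h))

# One octave above a 1 kHz center, for two example Q values:
for q in (1.414, 2.32):
    print(f"Q = {q:5.3f}: {bandpass_phase_deg(2000, 1000, q):6.1f} deg at +1 octave")
```

The sign comes out negative above the center-frequency and positive below it, consistent with the +/- 90⁰ picture used earlier.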

Now, my main thought recently has been whether, and how, this problem could be solved digitally. The application could still exist, that many SSB signals are to be packed into some very high, microwave frequency-band, and the type of filter which will not work would be one that separates a single audible-frequency sideband out of the range of such high frequencies.

And as my earlier posting might suggest, the main problem I’d see is that the discretized versions of the low-pass and high-pass filters available to digital technology in real-time become unpredictable, both in their frequency-response and in their phase-shifts, close to the Nyquist Frequency. Hypothetically, the only solution that I could see to that problem would be for the audio-frequency band to be oversampled first, at least 2x, so that the discretized filters become well-behaved enough to be used in such a context. Then, the corner-frequencies of each will actually be at 1/2 the Nyquist Frequency and lower, where their behavior will start to become acceptable.
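One way to see the problem is to compare a first-order low-pass against its discretized counterpart. The sketch below uses the bilinear transform without prewarping – only one of several discretization methods, and the sampling rate and corner frequency are my own arbitrary choices:

```python
import cmath
import math

def analog_lp(f, fc):
    """Ideal first-order analog low-pass, corner fc."""
    return 1 / (1 + 1j * f / fc)

def bilinear_lp(f, fc, fs):
    """The same filter discretized via the (un-prewarped) bilinear
    transform, evaluated on the unit circle at frequency f."""
    wc = 2 * math.pi * fc
    z = cmath.exp(2j * math.pi * f / fs)
    s = 2 * fs * (1 - 1 / z) / (1 + 1 / z)
    return 1 / (1 + s / wc)

fs = 48000.0
for f in (1000, 12000, 23000):  # 23 kHz sits close to the 24 kHz Nyquist
    a, d = analog_lp(f, 6000), bilinear_lp(f, 6000, fs)
    print(f"{f:5.0f} Hz: analog {abs(a):.3f} / {math.degrees(cmath.phase(a)):6.1f} deg, "
          f"digital {abs(d):.3f} / {math.degrees(cmath.phase(d)):6.1f} deg")
```

Well below Nyquist the two agree closely, but near 23 kHz the digital filter’s gain and phase have drifted far from the analog template – which is exactly why oversampling, and keeping the corner-frequencies low relative to Nyquist, helps.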

The reality of modern technology could well be such that the need for this technique no longer exists. For example, a Quadrature Mirror Filter could be used instead, to achieve a number of side-bands that is a power of two; the sense with which each side-band would either be inverted or not inverted could be made arbitrary; and instead of achieving 2^n sub-bands at once, the QMF could just as easily be optimized to target one specific sub-band at a time.
