A Practical Application, that calls for A Uniform Phase-Shift: SSB Modulation

Single-sideband (SSB) modulation is a concept in radio-communications which is derived from amplitude-modulation, by way of balanced modulation, and this concept already existed back in the 1970s. Its earliest implementations required that a low-frequency signal be passed to a balanced modulator, which in turn would produce an upper sideband (the USB) as well as an inverted lower sideband (the LSB), but zero carrier-energy. The brute-force approach to achieving SSB then entailed using a radio-frequency filter to separate out either the USB or the LSB.

The sheer encumbrance of such high-frequency filters, especially if this method was to be used at RF frequencies higher than those of the old ‘CB Radio’ sets, sent Engineers looking for a better approach to obtaining SSB modulation and demodulation.

And one approach that has existed since the onset of SSB was actually to operate two balanced modulators, in a scheme where one balanced modulator would modulate the original LF signal, while the second balanced modulator would be fed an LF signal which had been phase-delayed 90°, as well as a carrier which had been given either a +90° or a -90° phase-shift, with respect to whatever the first balanced modulator was being fed.

The concept being exploited here is that in the USB, where the frequencies add, the phase-shifts also add, while in the LSB, where the frequencies subtract, the phase-shifts also subtract. Thus, when the outputs of the two modulators were mixed, one sideband would be in-phase, while the other would be 180° out-of-phase. If the carrier had been given a +90° phase-shift, then the LSB would end up 180° out-of-phase and cancel, while if the carrier had been given a -90° phase-shift, the USB would end up 180° out-of-phase and cancel.
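
What follows is a minimal numerical sketch of this cancellation, for a single audio tone. The sample-rate and frequencies are only examples, and numpy stands in for the two balanced modulators and the mixer; delaying both the message and the carrier by -90° turns each cosine into a sine.

import numpy as np

fs = 100000.0                                # sample-rate, Hz (illustrative)
t = np.arange(0, 0.05, 1.0 / fs)
f_m, f_c = 1000.0, 10000.0                   # message and carrier frequencies, Hz

m1 = np.cos(2 * np.pi * f_m * t) * np.cos(2 * np.pi * f_c * t)  # first balanced modulator
m2 = np.sin(2 * np.pi * f_m * t) * np.sin(2 * np.pi * f_c * t)  # both inputs shifted by -90 deg

ssb = m1 + m2                                # mix the two outputs
spectrum = np.abs(np.fft.rfft(ssb))
f_axis = np.fft.rfftfreq(len(ssb), 1.0 / fs)
print(f_axis[np.argmax(spectrum)])           # 9000.0 Hz: only the LSB survives
# Subtracting instead (m1 - m2) keeps the USB at 11000 Hz, matching the +90 deg carrier case.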

This idea hinges on one ability: to phase-shift an audio-frequency signal spanning several octaves, so that a uniform phase-shift results, but also so that the amplitude of the derived signal is consistent over the required frequency-band. The audio signal could be filtered to reduce the number of octaves that need to be phase-shifted, but then it would need to be filtered to achieve a constrained frequency-range before being used twice.

And so a question can arise, as to how this was achieved historically, using analog filters.

My best guess would be that a stage which was used involved a high-pass and a low-pass filter acting in parallel, with the same corner-frequency, the outputs of which were subtracted, with the high-pass output taken as negative, for -90°. At the corner-frequency, the individual phase-shifts would have been +/- 45°. This stage would achieve an approximately uniform amplitude-response, as well as achieving its ideal phase-shift of -90° at the one center-frequency. However, this would also imply that the stage reaches -180° (full inversion) at higher frequencies, because there, the high-pass component that takes over is still being subtracted!
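
In fact, assuming ideal first-order sections that share a corner-frequency w0, the guess works out exactly: LP(s) = w0/(s+w0) minus HP(s) = s/(s+w0) gives (w0-s)/(s+w0), a first-order all-pass, whose amplitude-response is perfectly flat while its phase runs from 0° through -90° at w0, down to -180°. A quick sketch which evaluates that response (the 1 kHz corner is just an example):

import numpy as np
from scipy.signal import freqs

w0 = 2 * np.pi * 1000.0                      # shared corner-frequency, rad/s (example)
b, a = [-1.0, w0], [1.0, w0]                 # numerator and denominator of (w0 - s)/(s + w0)
w, h = freqs(b, a, worN=w0 * np.array([0.1, 0.5, 1.0, 2.0, 10.0]))
for wi, hi in zip(w, h):
    print(f"{wi / w0:5.1f} * w0   gain = {abs(hi):.3f}   phase = {np.degrees(np.angle(hi)):7.1f} deg")
# Gain prints as 1.000 at every frequency; the phase passes through -90 deg at w0.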

( … ? … )

What can in fact be done, is that a multi-band signal can be fed to a bank of 2nd-order band-pass filters, spaced 1 octave apart. The fact that the original signal can be reconstructed from their outputs derives partially from the fact that at one center-frequency, an attenuated version is also passed through the filter one octave up, with a phase-shift of +90°, while a matching attenuated version of that signal is also passed through the filter one octave down, with a phase-shift of -90°. This means that the two vestigial signals that pass through the adjacent filters are at +/- 180° with respect to each other, and cancel out, at the present center-frequency.
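
Here is a numerical look at this cancellation, using the analytic response of a standard 2nd-order (biquad) band-pass section; the Q of 2.33 is only an illustrative choice. As the edit further down notes, the shifts at exactly +/- 1 octave fall short of +/- 90°, so the two vestiges oppose each other without cancelling totally.

import numpy as np

def bp_response(f, f0, Q=2.33):
    # H(s) = (s*w0/Q) / (s^2 + s*w0/Q + w0^2), evaluated on the jw axis
    s = 1j * 2 * np.pi * f
    w0 = 2 * np.pi * f0
    return (s * w0 / Q) / (s ** 2 + s * w0 / Q + w0 ** 2)

f = 1000.0                                   # probe tone at one center-frequency
up = bp_response(f, 2000.0)                  # vestige through the filter one octave up
down = bp_response(f, 500.0)                 # vestige through the filter one octave down
print(np.degrees(np.angle(up)), np.degrees(np.angle(down)))   # about +74 and -74 deg
print(abs(up), abs(down), abs(up + down))    # the sum (~0.15) is below either vestige (~0.28)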

If the output from each band-pass filter was to be phase-shifted, this would need to take place in a way that is not frequency-dependent. And so it might seem to make sense to put an integrator at the output of each band-pass filter, the time-constant of which is chosen to achieve unit gain at the center-frequency of that band. But what I also know is that doing so will deform the actual amplitude frequency-response coming from the one band. What I do not know is whether this blends well with the other bands.
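
The following sketch shows the deformation I mean, reusing the biquad response from above and scaling the integrator 1/s so that it has unit gain at the band's center; the combined amplitude-response acquires a tilt which the plain band-pass did not have. (The frequencies and Q are again just examples.)

import numpy as np

def bp_response(f, f0, Q=2.33):
    s = 1j * 2 * np.pi * f
    w0 = 2 * np.pi * f0
    return (s * w0 / Q) / (s ** 2 + s * w0 / Q + w0 ** 2)

f0 = 1000.0
for f in [500.0, 707.0, 1000.0, 1414.0, 2000.0]:
    h_bp = bp_response(f, f0)
    h_int = f0 / (1j * f)                    # scaled integrator: -90 deg at all frequencies
    h = h_bp * h_int
    print(f"{f:7.1f} Hz   band-pass gain = {abs(h_bp):.3f}   with integrator = {abs(h):.3f}")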

If this were even to produce a semi-uniform -45° shift, then the next thing to do would be to subtract the original input-signal from the combined output.

(Edit 11/30/2017:

It’s important to note that the type of filter I’m contemplating does not fully achieve a phase-shift of +/- 90° at +/- 1 octave. This is just a simplification which I use to help me understand filters. According to my most recent calculation, this type only achieves a phase-shift of +/- 74° when the signal is +/- 1 octave from its center-frequency.)
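
As a cross-check, that figure depends on the Q of the section. For the standard biquad band-pass, the phase at one octave above center works out analytically to -(90° - atan(2/(3Q))), with the mirror-image value one octave below; a Q near 2.3 is what lands at about +/- 74°. The list of Q values below is just illustrative.

import numpy as np

for Q in [0.707, 1.0, 2.0, 2.33]:
    shift = 90.0 - np.degrees(np.arctan(2.0 / (3.0 * Q)))
    print(f"Q = {Q:5.3f}   phase-shift at +/- 1 octave: +/- {shift:.1f} deg")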

Now, my main thought recently has been whether, and how, this problem could be solved digitally. The application could still exist, that many SSB signals are to be packed into some very high, microwave frequency-band, and the type of filter which will not work would be one that separates a single audible-frequency sideband out of such a range of high frequencies.

And as my earlier posting might suggest, the main problem I’d see is that the discretized versions of the low-pass and high-pass filters that are available to digital technology in real time become unpredictable, both in their frequency-response and in their phase-shifts, close to the Nyquist Frequency. And hypothetically, the only solution that I could see to that problem would be that the audio-frequency band would need to be oversampled first, at least 2x, so that the discretized filters become well-behaved enough to be used in such a context. Then, the corner-frequencies of each will actually be at 1/2 the Nyquist Frequency or lower, where their behavior starts to become acceptable.
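
To illustrate, here is a sketch comparing a 1st-order analog low-pass to its bilinear-transform discretization (which is what scipy’s butter() produces when given a sample-rate); the 3 kHz corner and the sample-rates are just examples. At fs = 8 kHz the corner sits close to the Nyquist Frequency and the two responses diverge, while at fs = 16 kHz the same corner sits below 1/2 the Nyquist Frequency, and the digital filter starts to track its analog prototype.

import numpy as np
from scipy.signal import butter, freqs, freqz

fc = 3000.0                                  # corner-frequency, Hz (example)
b_a, a_a = butter(1, 2 * np.pi * fc, btype='low', analog=True)
probe = np.array([1000.0, 2000.0, 3000.0, 3500.0])
_, h_a = freqs(b_a, a_a, worN=2 * np.pi * probe)

for fs in (8000.0, 16000.0):
    b_d, a_d = butter(1, fc, btype='low', fs=fs)     # digitized via the bilinear transform
    _, h_d = freqz(b_d, a_d, worN=probe, fs=fs)
    for f, hd, ha in zip(probe, h_d, h_a):
        print(f"fs = {fs:5.0f}   f = {f:4.0f}   digital {abs(hd):.3f} @ {np.degrees(np.angle(hd)):6.1f} deg"
              f"   analog {abs(ha):.3f} @ {np.degrees(np.angle(ha)):6.1f} deg")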

The reality of modern technology could well be such, that the need for this technique no longer exists. For example, a Quadrature Mirror Filter could be used instead, to achieve a number of sub-bands that is a power of two. The sense in which each sub-band would either be inverted or not inverted could be made arbitrary, and instead of achieving 2^n sub-bands at once, the QMF could just as easily be optimized to target one specific sub-band at a time.


An Elaboration on Quadrature Mirror Filter

This was an earlier posting of mine, in which I wrote about a “Quadrature Mirror Filter”. But the above posting may not make it clear to all readers why a QMF approach will actually result in two streams, each of which has half the sample-rate of the original stream.

A basic premise which gets used is the Daubechies Wavelet, according to which there exists a Scaling Function that later gets named ‘H1’, and a corresponding Wavelet that gets named ‘H0’. It could also be thought that H1 is a low-pass filter with a corner frequency of 1/2 the Nyquist Frequency, while H0 is a band-pass filter derived from H1. Also, because the upper cutoff frequency of H0 is the Nyquist Frequency, it is not clear to me either why we would not just call that a high-pass filter. But the WiKi page calls it the band-pass filter.

Alright. So we can start with a stream sampled at 44.1 kHz and derive two output streams, one of which contains the lower half of the frequencies, and the other of which contains the upper half. How does the sample-rate of each get halved?

The answer is that after we have filtered the original stream both ways, we pick out every second sample of each.
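
A minimal sketch of that step, using the short Haar pair (normalized here by 1/sqrt(2)), followed by the matching synthesis step, which confirms that the two half-rate streams still carry everything needed to rebuild the original:

import numpy as np

x = np.random.randn(16)                      # any even-length stream

low = (x[0::2] + x[1::2]) / np.sqrt(2.0)     # H1, then keep every second sample
high = (x[0::2] - x[1::2]) / np.sqrt(2.0)    # H0, then keep every second sample

y = np.empty_like(x)                         # synthesis: up-sample and recombine
y[0::2] = (low + high) / np.sqrt(2.0)
y[1::2] = (low - high) / np.sqrt(2.0)
print(np.allclose(x, y))                     # True: perfect reconstruction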

This is also what would get done if we were to use a (more expensive) Half-Band Filter based on ‘the Sinc Function’ to down-sample a stream. In contrast, if we are over-sampling a stream to the highest level of accuracy, we first repeat each sample once, and then apply the (better) low-pass filter. (It should be noted, however, that a 4-coefficient Daubechies Wavelet would be considered ‘deficient’. Those only start to become interesting at maybe 8 coefficients.)

But when it comes to Quadrature Mirror Filters, when we have down-sampled the stream, we have also halved its Nyquist Frequency, both times. But then, in the case of ‘H0’ above, original frequency components above the new Nyquist Frequency are subject to the phenomenon I mentioned in another posting, according to which they get mirrored back down, from the new, lower Nyquist Frequency, all the way to zero (DC). Hence, the output of H0 gets inverted in frequency when it is subsequently down-sampled.
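
A sketch of this frequency inversion: a tone near the old Nyquist Frequency, decimated by simply keeping every second sample (no further filtering, on the assumption that H0 has already isolated the upper band), re-appears near DC. The frequencies are illustrative.

import numpy as np

fs = 8000.0                                  # old Nyquist Frequency: 4000 Hz
n = np.arange(1024)
x = np.sin(2 * np.pi * 3600.0 * n / fs)      # tone near the old Nyquist Frequency

y = x[::2]                                   # down-sample; new sample-rate is 4000 Hz
spectrum = np.abs(np.fft.rfft(y))
f_axis = np.fft.rfftfreq(len(y), 2.0 / fs)
print(f_axis[np.argmax(spectrum)])           # ~400 Hz: 4000 - 3600, mirrored back down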

Dirk


aptX and Delta-Modulation

I am an old-timer. And one of the tricks which once existed in Computing, to compress the amount of memory needed just to store digitized sound, was called “Delta Modulation”. At that time, the only ‘normal’ way to digitize sound was what is now called PCM, which often took up too much memory.

And so a scheme was devised very early, by which only the difference between two consecutive samples would actually be stored. Today, this is called ‘DPCM’. And yet, this method has an obvious, severe drawback: If the signal contains substantial amplitudes, associated with frequencies that are half the Nyquist Frequency or higher, this method will clip that content and produce dull, altered sound.
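
A minimal sketch of that drawback, known as slope overload, using a fixed-step DPCM round-trip; the step size, sample-rate and tones are only examples. The slow tone is tracked well, while the tone above half the Nyquist Frequency changes faster per sample than the stored differential can express, and its content gets clipped.

import numpy as np

def dpcm_roundtrip(x, step=0.05, levels=4):
    y = np.empty_like(x)
    prev = 0.0
    for i, s in enumerate(x):
        code = int(np.clip(round((s - prev) / step), -levels, levels))  # stored differential
        prev += code * step                  # the decoder tracks the same running value
        y[i] = prev
    return y

fs = 8000.0                                  # Nyquist Frequency: 4000 Hz
n = np.arange(256)
for f in (200.0, 2500.0):                    # 2500 Hz lies above half the Nyquist Frequency
    x = np.sin(2 * np.pi * f * n / fs)
    err = np.sqrt(np.mean((x - dpcm_roundtrip(x)) ** 2))
    print(f"{f:6.0f} Hz   rms error = {err:.3f}")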

Well, one welcome fact which I have learned is that this limitation has essentially been overcome. One commercial domain in which this has been overcome is the compression scheme / CODEC named “aptX”. This is a proprietary scheme, owned by Qualcomm, but it is frequently used, since the chips designed and manufactured by Qualcomm are installed in many devices and circuits. One important place this gets used is in the type of Bluetooth headset that now has high-quality sound.

What happens in aptX requires that the band of frequencies which starts out as a PCM stream get ‘beaten down’ into 4 sub-bands, using a type of filter known as a “Quadrature Mirror Filter”. This happens in two stages. I know of a kind of Quadrature Mirror Filter which was possible in the old analog days, but have had problems until now imagining how somebody might implement one using algorithms.
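
As a sketch of just the tree structure, and not of Qualcomm’s actual filters, here is a two-stage split using the short Haar pair; each stage halves the sample-rate, so the 4 sub-bands each run at 1/4 of the input rate.

import numpy as np

def haar_split(x):
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)     # H1: lower half of the band
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)    # H0: upper half of the band
    return low, high

x = np.random.randn(64)
low, high = haar_split(x)                        # stage 1: 2 bands at 1/2 the rate
bands = [*haar_split(low), *haar_split(high)]    # stage 2: 4 bands at 1/4 the rate
print([len(b) for b in bands])                   # [16, 16, 16, 16]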

The analog approach required a local sine-wave, a phase-shifted local sine-wave, a balanced demodulator used twice, and a phase-shifter which was capable of phase-shifting a (wide) band of frequencies without altering their relative amplitudes. This latter feat is a little difficult to accomplish with simple algorithms, and when accomplished, typically involves high latency. aptX is a CODEC with low latency.

The main thing to understand about a Quadrature Mirror Filter, implemented using algorithms in digital signal processing today, is that the hypothetical example the WiKi article above cites, using a Haar Wavelet for H0 and its complementary series for H1, actually fails to implement a quadrature-split in a pure way. The idea that H1( H0(z) ) always equals zero simply suggested that the frequencies passed by these two filters are mutually exclusive, so that in an abstract way, they pass the requirements. After the signal has been passed through H0 and H1 in parallel, the output of each is reduced to half the sampling rate of the input.

What Qualcomm explicitly does is to define a series H0 and a series H1 such that they apply “64 coefficients”, so that the frequency-split is achieved accurately. And it is not clear from the article whether the number of coefficients for each filter is 64, whether their sum for two filters is 64, or whether it is the sum over all six. Either way, this implies a lot of coefficients, which is why dedicated hardware is needed today to implement aptX, and this dedicated hardware belongs to the kind which needs to run its own microprogram.

Back in the early days of Computing, programmers would actually use the Haar Wavelet because of its computational simplicity, even though doing so did not split the spectrum cleanly. And then this wavelet would define the ‘upper sideband’ in a notional way, while its complementary filter would define the notional ‘lower sideband’, when splitting.

But then the result of this becomes 4 channels in the case of aptX, each of which has 1/4 the sampling rate of the original audio. And then it is possible, in effect, to delta-modulate each of these channels separately. The higher frequencies have then been beaten down to lower frequencies…

But there is a catch. In reality, aptX needs to use ‘ADPCM’ and not ‘DPCM’, because it can happen in any case that the amplitudes of the upper-frequency bands are high. ADPCM is a scheme by which the maximum short-term differential is computed for some time-interval, which is allowed to be a frame of samples, and where a simple division is used to compute a scale factor, by which these differentials are to be quantized.
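
A minimal sketch of that block-adaptive idea, and explicitly not Qualcomm’s algorithm; the frame length, bit-depth and test tone are all just examples. Because the scale factor adapts per frame, the error stays bounded even for a strong, high-frequency band, which plain fixed-step DPCM would have clipped.

import numpy as np

def adpcm_frame(x, prev, bits=4):
    top = 2 ** (bits - 1) - 1
    diffs = np.diff(np.concatenate(([prev], x)))
    scale = max(np.max(np.abs(diffs)) / top, 1e-12)  # one division per frame
    y = np.empty_like(x)
    for i, s in enumerate(x):
        code = int(np.clip(round((s - prev) / scale), -top, top))  # transmitted differential
        prev += code * scale                 # decoder-side reconstruction
        y[i] = prev
    return y

fs = 8000.0
n = np.arange(512)
x = 0.8 * np.sin(2 * np.pi * 2500.0 * n / fs)    # strong content in an upper band
y, prev = np.empty_like(x), 0.0
for i in range(0, len(x), 64):                   # 64-sample frames
    y[i:i + 64] = adpcm_frame(x[i:i + 64], prev)
    prev = y[i + 63]
print(np.sqrt(np.mean((x - y) ** 2)))            # rms error stays a fraction of the signal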

This is a special situation, in which the sound is quantized in the time-domain, rather than being quantized in the frequency-domain. Quantizing the higher-frequency sub-bands has the effect of adding background (‘white’) noise to the decoded signal, thus making the scheme lossy. Yet, because the ADPCM stages are adaptive, the degree of quantization keeps the level of this background noise at a certain fraction of the amplitude of the intended signal.

And so it would seem that even old tricks which once existed in Computing, such as delta modulation, have not gone to waste, and have been transformed into something more HQ today.

I think that one observation to add would be that this approach makes the most sense if the number of output samples of each instance of H0 is half as many as the number of input samples, and if the same can be said for H1.

And another observation would be that this approach does not invert the lower sideband, the way real quadrature demodulation would. Instead, it would seem that H0 inverts the upper sideband.

If the intent of down-sampling is to act as a 2:1 low-pass filter, then it remains productive to add successive pairs of samples. Yet, this could just as easily be the definition of H1.

Dirk

(Edit 06/20/2016: ) There is an observation to add about wavelets. The Haar Wavelet is the simplest kind:


H0 = [ +1, -1 ]
H1 = [ +1, +1 ]

And this one guarantees that the original signal can be reconstructed from the two down-sampled sub-bands. But if we remove one of the sub-bands completely, this one results in weird spectral results. This can also be a problem if the sub-bands are modified in ways that do not match.

It is possible to define complementary Wavelets that are also orthogonal, but which, again, result in weird spectral results.

The task of defining Wavelets which are both orthogonal and spectrally neutral has been solved better by the Daubechies series of Wavelets. However, the series of coefficients used there is non-intuitive, and was also beyond my personal ability to figure out spontaneously.

The idea is that there exists a “scaling function”, which also gives rise to the low-pass filter H1. And then, if we reverse the order of the coefficients and negate every second one, we get the high-pass filter H0, which is really a band-pass filter.
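
A sketch of this construction, using the 4-coefficient Daubechies scaling function, whose values are standard and can be written in closed form; the checks at the end confirm the low-pass and high-pass roles, and that the pair is orthogonal.

import numpy as np

s3 = np.sqrt(3.0)
h1 = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))  # scaling function / low-pass
h0 = h1[::-1] * np.array([1, -1, 1, -1])         # reversed, every second coefficient negated

print(np.isclose(h1.sum(), np.sqrt(2.0)))        # True: H1 passes DC (with gain sqrt(2))
print(np.isclose(h0.sum(), 0.0))                 # True: H0 blocks DC entirely
print(np.isclose(np.dot(h0, h1), 0.0))           # True: the two filters are orthogonal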

To my surprise, the Daubechies Wavelets achieve ‘good results’ even with a number of coefficients as low as 4. But for very good audio results, a longer series of coefficients would still be needed.

One aspect of this which is not mentioned elsewhere is that, while a Daubechies Wavelet-set that has a high order of approximation could be used for encoding, it could still be that simple appliances will use the Haar Wavelet for decoding. This could be disappointing, but I would guess that when decoding, the damage done in this way is less severe than when encoding.

The most correct thing to do would be to use the Daubechies Wavelets again for decoding, and the mere time-delays that result from their use still fall within the customary definitions today of “low-latency solutions”. If we needed a Sinc Filter, using it might no longer be considered so, and if we needed to find a Fourier Transform of granules of sound, only to invert it again later, that would certainly not be considered low-latency anymore.

And, when the subject is image decomposition or compression, it is a 2-dimensional application, and the reuse of the Haar Wavelet is more common.