A Basic Limitation in Stereo FM Reproduction

One of the concepts accepted in modern, high-definition sound is that Human sound perception can take place between 20Hz and 20kHz, even though those endpoints are somewhat arbitrary. Some people cannot hear frequencies as high as 20kHz, especially older people, or anybody who just does not have good hearing. Healthy, young children and teenagers can typically hear that entire frequency range.

But, way back when FM radio was invented, sound engineers had flawed data about what frequencies Humans can hear. The data they were given to work with stated that Humans can only hear frequencies from 30Hz to 15kHz. And so, even though the communications authorities had the ability to assign frequencies somewhat arbitrarily, they did so in a way that was based on such data. (:1)

For that reason, the playback of FM Stereo today, using household receivers, is still limited to an audio frequency range from 30Hz to 15kHz. Even very expensive receivers will not be able to reproduce sound outside this frequency range, even if it was once part of the modulated input, although other reference points can be applied, to try to gauge how good the sound quality is.

There is one artifact of this initial standard which was sometimes apparent in early receivers. Stereo FM has a pilot tone at 19kHz, to which a receiver needs to lock an internal oscillator, but in such a way that the internal oscillator runs at 38kHz, so that this internal oscillator can be used to demodulate the stereo part of the sound. Because the pilot signal which is actually part of the broadcast signal is ‘only’ at 19kHz, this gives an additional reason to cut off the audible signal at ‘only’ 15kHz; the pilot is not meant to be heard.

But, back in the 1970s and earlier, Electrical Engineers did not have the type of low-pass filters available to them which they do now, also known as ‘brick-wall filters’, filters that attenuate frequencies above the cutoff frequency very suddenly. Instead, equipment designed to be manufactured in the 1970s and earlier used low-pass filters with gradual ‘roll-off’ curves, which attenuate frequencies progressively more, the further above the cutoff frequency they lie, but in a way that is gentle. And in fact, even today the result seems to be that a gentler roll-off of the higher frequencies results in better sound, when quality is measured in ways other than just the frequency range, such as the temporal resolution of very short pulses of high-frequency sound.
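The way a receiver can derive a 38kHz reference from the 19kHz pilot can be sketched with the double-angle identity: squaring the received pilot yields a DC term plus a tone at exactly twice the pilot frequency. A minimal numeric check (the function name and sample instants are my own, not part of any standard):

```python
import math

PILOT_HZ = 19_000.0

def doubled_pilot(t: float) -> float:
    """Square the unit-amplitude 19 kHz pilot; by the identity
    cos^2(x) = 0.5 + 0.5*cos(2x), the result is a DC offset plus
    a half-amplitude tone at 38 kHz."""
    pilot = math.cos(2.0 * math.pi * PILOT_HZ * t)
    return pilot * pilot

# Confirm the identity at a few arbitrary sample instants:
for t in (0.0, 1e-5, 2.7e-5):
    expected = 0.5 + 0.5 * math.cos(2.0 * math.pi * 38_000.0 * t)
    assert abs(doubled_pilot(t) - expected) < 1e-12
```

In a real tuner this is done with a phase-locked loop rather than literal squaring, but the frequency relationship is the same.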

Generally, very sharp spectral resolution results in worse temporal resolution, and this is a negative side effect of some examples of modern sound technology.

But then sometimes, when listeners in the 1970s and earlier, with high-end receivers and very good hearing, were tuned in to an FM Stereo signal, they could actually hear some residual amount of the 19kHz pilot signal, which was never a part of the original broadcast audio. It was sometimes still audible, just because the low-pass filter that defined 15kHz as the upper cut-off frequency was admitting the 19kHz component to a partial degree.
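How much of the pilot a gentle roll-off admits can be put into numbers. A minimal sketch, assuming a simple first-order low-pass with its cutoff at 15kHz (my own illustrative choice, not the response of any particular receiver):

```python
import math

def first_order_gain_db(f_hz: float, cutoff_hz: float) -> float:
    """Magnitude response of a first-order low-pass, in dB:
    |H(f)| = 1 / sqrt(1 + (f/fc)^2)."""
    mag = 1.0 / math.sqrt(1.0 + (f_hz / cutoff_hz) ** 2)
    return 20.0 * math.log10(mag)

# A single-pole filter cutting off at 15 kHz barely touches the
# 19 kHz pilot: only about -4 dB, easily audible to good ears.
assert -4.5 < first_order_gain_db(19_000.0, 15_000.0) < -3.8
```

A multi-pole filter would do better, but the point stands: a gradual roll-off leaves the pilot only partially attenuated.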

One technical accomplishment that has been possible since the 1970s in consumer electronics, however, was an analog ‘notch filter’, which could suppress one exact frequency, or almost so, and such a notch filter could be calibrated to suppress 19kHz specifically.
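Such a notch can also be sketched numerically. Below is the digital analogue of that idea, a standard biquad notch using the well-known Audio EQ Cookbook coefficient recipe; the 192kHz sample rate and Q of 10 are my own choices for illustration:

```python
import cmath
import math

def notch_gain_db(f_hz: float, f0_hz: float, q: float, fs_hz: float) -> float:
    """Magnitude response, in dB, of a standard biquad notch filter
    (Audio EQ Cookbook coefficients), centred on f0_hz."""
    w0 = 2.0 * math.pi * f0_hz / fs_hz
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = 1.0, -2.0 * math.cos(w0), 1.0
    a0, a1, a2 = 1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha
    z = cmath.exp(1j * 2.0 * math.pi * f_hz / fs_hz)
    h = (b0 + b1 / z + b2 / z ** 2) / (a0 + a1 / z + a2 / z ** 2)
    mag = abs(h)
    return -300.0 if mag < 1e-15 else 20.0 * math.log10(mag)

# Deep rejection exactly at the 19kHz pilot frequency...
assert notch_gain_db(19_000.0, 19_000.0, 10.0, 192_000.0) < -100.0
# ...while audio at 15kHz passes nearly untouched:
assert notch_gain_db(15_000.0, 19_000.0, 10.0, 192_000.0) > -1.0
```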

Modern electronics makes possible such things as analog low-pass filters with a more sudden frequency cutoff, digital filters, etc. So it’s improbable today that even listeners whose hearing would be good enough would still be receiving this 19kHz sound-component in their headphones. In fact, the sound today is likely to seem ‘washed out’, simply because of too many transistors being fit onto one chip. And when I bought an AM/FM radio in recent days, I did not even try the included ear-buds at first, because I have better headphones. When I did try the included ear-buds, their sound quality was worse than with my own, valued headphones. I’d say the included ear-buds did not seem to reproduce frequencies above 10kHz at all. My noise-cancelling headphones clearly continue to do so.

One claim which should be approached with extreme skepticism would be that the sound a listener seemed to be getting from an FM tuner was as good as the sound he was also obtaining from his vinyl turntable. AFAIK, the only way in which this would be possible would be if he was using an extremely poor turntable to begin with.

What has happened, however, is that audibility curves have been accepted since the 1980s which state the upper limit of Human hearing as 20kHz, and all manner of audio equipment designed since then takes this into consideration. This includes Audio CD players, some forms of compressed sound, etc. What some people will claim, in a way that strikes me as credible however, is that the frequency response of the HQ turntables was as good as that of Audio CDs. And the main reason I’ll believe that is the fact that Quadraphonic LPs were sold at some point, which had a sub-carrier for each stereo channel that differentiated that stereo channel front-to-back. This sub-carrier was actually phase-modulated. But in order for Quadraphonic LPs to have worked at all, their actual frequency response needed to go as high as 40kHz. And phase-modulation was chosen because this form of modulation is particularly immune to the various types of distortion which an LP would insert when playing back frequencies as high as 40kHz.

About Digital FM:

(Updated 6/24/2019, 14h50 … )


A Gap in My Understanding of Surround-Sound Filled: Separate Surround Channel when Compressed

In this earlier posting of mine, I had written about certain concepts in surround-sound which were based on Pro Logic and the analog days. But I had gone on to write that, in the case of the AC3 or AAC audio CODECs, the actual surround channel could be encoded separately from the stereo. The purpose in doing so would have been that, if decoded on the appropriate hardware, the surround channel could be sent directly to the rear speakers, thus giving 6-channel output.

While writing what I just linked to above, I had not yet realized that either channel of the compressed stream could have its phase information conserved. This had caused me some confusion. Now that I realize that the phase information could be correct, and not based on the sampling windows themselves, a conclusion comes to mind:

Such a separate, compressed surround channel would already be 90⁰ phase-shifted with respect to the panned stereo. And what this could mean is that, if the software recognizes that only 2 output channels are to be decoded, the CODEC might just mix the surround channel directly into the stereo. The resulting stereo would then also be prepped for Pro Logic decoding.



Not Being Sure about the Sign Bit

One format in which MP3 can encode stereo is “Joint Stereo”. In this form, the signal is sent as a sum-channel and a difference-channel. The left and right channels can be reconstructed from them, just as easily as a sum and a difference could be computed. The reason this is done is to save on the bit-rate of the difference channel, along the argument that human stereo directionality is more limited in frequencies than straightforward hearing is.
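The sum-and-difference relationship is trivially invertible, which a short sketch can confirm (the sample values below are made up for illustration):

```python
def to_mid_side(left, right):
    """Encode: produce sum (mid) and difference (side) channels."""
    return ([l + r for l, r in zip(left, right)],
            [l - r for l, r in zip(left, right)])

def from_mid_side(mid, side):
    """Decode: recover left and right from sum and difference."""
    return ([(m + s) / 2.0 for m, s in zip(mid, side)],
            [(m - s) / 2.0 for m, s in zip(mid, side)])

left = [0.5, -0.25, 1.0]
right = [0.5, 0.25, 0.0]
mid, side = to_mid_side(left, right)

# The round trip is exact:
assert from_mid_side(mid, side) == (left, right)
```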

But this is also one example in which the difference channel needs to have a defined sign: either positive when the left channel is the stronger, or positive when the right channel is the stronger.

And so one way in which this could be encoded, if the decision was made to include it in the standard, could be to encode an additional sign-bit into each non-zero frequency coefficient of the difference channel. But doing so would affect the overall bit-rate of the signal enough that it deters professionals from doing it. Also, the argument is made that keeping the bit-rate down in this way can lead to higher sound quality, because users can increase the overall bit-rate anyway if they want, resulting in greater definition of the more audible components of the sound.

And so, with Joint Stereo, a feature built into MP3 is “Side-Bits”. These side-bits are included if the mode is enabled, and declare as header information how the signal should be panned, either to the left or the right, as well as what the sign of the difference-channel might be. ( :1 )

Well, there is an implementation detail about the side-bits which I do not specifically know: whether they are stored only once per frame, or once per frequency sub-band, since the compression schemes already divide the spectrum into sub-bands. If the side-bits were encoded for each sub-band, of each frame, then a good compromise could be reached in terms of how much data should be spent on that.

The only coefficient which I imagine would have a sign-bit to itself each time would be the zero coefficient, which corresponds to DC, but which also corresponds to F = (Sampling Rate) / 2.

This question becomes more relevant for surround-sound encoding. If, rather than using the Pro Logic method, some other scheme was decided on for defining a ‘Back-Front’ channel, then it would suddenly become critical that this channel have correct sign information. And then it would also become possible for this channel to correlate negatively with frequency components that also belong to the stereo channels. Hence, a method of designing the servos, more familiar from Pro Logic II, would be effective again, rather than what would be needed if none of the cosine transforms could produce negative coefficients. ( :2 )


P.S. When we listen to accurately-reproduced signals on stereo headphones, if the signals are perfectly in-phase, humans tend to hear them as if they came from inside our own heads. If they are 180 degrees out-of-phase, we tend to hear them as coming from a non-specific place outside our heads.

I have had a personal friend complain that when he listens to Classical Music via MP3 files, he cannot ascertain the direction the sounds are coming from (i.e. the positions of the instruments in the orchestra), but can hear panning and this in-versus-out condition. Listening to Classical Music via MP3 is a very bad idea.

What this tells me is that my friend has very good hearing, which is tuned to the needs of Classical Music.

The reason he hears some of the sound as being in the out-of-phase position may well just be that Joint Stereo was being selected by the encoder, and that certain frequency components in the difference-channel did not have substantial counterparts in the sum-channel. Mathematically, this results in a correlation of zero between the encoded channels, but in a correlation of -1 between the reconstructed left and right…
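That last claim can be verified numerically. A sketch with made-up sample data: when a component exists only in the difference channel, the decoded left and right come out as exact inverses of one another.

```python
import math

def correlation(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A frequency component present only in the difference channel:
side = [math.sin(2 * math.pi * k / 16) for k in range(16)]
mid = [0.0] * 16

left = [(m + s) / 2 for m, s in zip(mid, side)]
right = [(m - s) / 2 for m, s in zip(mid, side)]

# Reconstructed left and right are perfectly anti-correlated:
assert abs(correlation(left, right) - (-1.0)) < 1e-9
```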

Subjectively, I would say that I have observed better sound quality in this regard when using OGG compression, and at a higher bit-rate. I found that “Enya” required 192kbps, while “Strauss” did not sound good until I had reached 256kbps.

But I do not know objectively what it is in the OGG files that gives me this better experience. I do not have the precision hearing which said friend has. I have used FLAC to encode some “Beethoven” and “Schubert”, but mainly just in order to archive their music without any loss of information at all, and not as a testament to the listening experience really being that much better.

1: ) In the case of Joint Stereo with MP3, what I would expect is that the ‘pan-number’ will also direct the decoder to set the polarity of the difference-channel to be positive towards whichever side the sum-channel is being panned more strongly. And I expect this to happen regardless of what the phase information was when encoding.

If there was explicit sign information here, such information would first have had to be measured when encoding, relative to the phase-position of the sum-channel, since phase information is generally relative. And I have not heard it said that correlation information is collected first when encoding, between the stereo and difference channels.

2: ) This subject piqued my interest in how OGG compression deals with multi-channel sound. I did an experiment using “Audacity”, in which I prepared a 6-channel project, chose to export it in a custom channel configuration, and then chose different settings in the channel-metadata window.

While AC3 was ‘limited’ to allowing a maximum of 7 channels, OGG allows a compressed stream with up to 32 channels. But I seem to have observed that, when compressing more than 2 channels, OGG forgoes even joint-stereo optimization, instead only compressing each channel individually. This seems to follow from the observation that, if I mix channels 3 and 4, assigned to a hypothetical front-center and LFE, I should have turned those two into the same monaural signal repeated in two channels. But doing so does not improve the OGG file size.

There was a 3m45s stream which took 5.2MB as a 6-channel AC3 file. The same stream takes up 18.1MB as a 6-channel OGG. And these bit-rates result from choosing a rate of 192kbps for the AC3 file, while choosing ‘Quality Level 8/10’ for the OGG.
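Those file sizes can be turned back into average bit-rates for comparison (I am assuming decimal megabytes, which is my own guess about how the sizes were reported):

```python
def avg_kbps(size_mb: float, seconds: float) -> float:
    """Average bit-rate in kbps from a file size in decimal MB."""
    return size_mb * 1_000_000 * 8 / seconds / 1_000

duration = 3 * 60 + 45  # 3m45s = 225 seconds

# 5.2 MB over 225 s works out to ~185 kbps, close to the
# nominal 192 kbps chosen for the AC3 file:
assert round(avg_kbps(5.2, duration)) == 185
# 18.1 MB works out to ~644 kbps, roughly 3.5x the AC3 rate:
assert round(avg_kbps(18.1, duration)) == 644
```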

I think that one reason for the big difference in bit-rates is the fact that my stream originally consisted of a stereo signal, of which there were merely 3 copies. The AC3 file takes advantage of the correlations to compress, while OGG is not as able to do so.

Further, I read somewhere that OGG takes the remarkable approach of converting the stereo into joint stereo after quantization (in the frequency domain), while MP3 does so before quantization. This makes the conversion which OGG performs, of a signal into joint stereo, a lossless process, and it also seems to imply encoding one sign bit with each coefficient of the difference-channel. Any advantage OGG gives to the bit-rate would need to stem from the majority of low-amplitude coefficients in the difference-channel, as well as from limiting its frequencies.
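The claim that sum/difference coding of already-quantized, integer coefficients can be lossless is easy to demonstrate. The sketch below is generic integer mid/side coding (FLAC's mid/side scheme works along these lines); I can't vouch that Vorbis's channel coupling takes exactly this form:

```python
def ms_encode(l: int, r: int) -> tuple[int, int]:
    """Integer mid/side: the mid channel drops one bit, but the
    side channel's low bit carries the same parity, so nothing
    is actually lost."""
    return (l + r) >> 1, l - r

def ms_decode(mid: int, side: int) -> tuple[int, int]:
    total = 2 * mid + (side & 1)  # restore the dropped parity bit
    return (total + side) // 2, (total - side) // 2

# Round-trips exactly for any integer pair, including negatives:
for l in range(-4, 5):
    for r in range(-4, 5):
        assert ms_decode(*ms_encode(l, r)) == (l, r)
```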

By contrast, this would seem to suggest that MP3 will compute an ‘FFT’ of each channel, also in order to determine the side-bits, after which it will compute a sum and a difference channel in the time domain, and then compute the ‘DCT’ of each…


Some Thoughts on Surround Sound

The way I seem to understand modern 5.1 Surround Sound, there exists a complete stereo signal which, for the sake of legacy compatibility, is still played directly to the front-left and the front-right speakers. But what also happens is that a third signal is picked up, which acts as the surround channel, in a way that favors neither the left nor the right asymmetrically.

I.e., if people were to try to record this surround channel with a sideways-facing microphone component, by its nature its positive signal would favor either the left or the right channel, and this would not count as a correct surround-sound mike. In fact, such an arrangement can best be used to synthesize stereo, out of geometries which do not really favor two separate mikes, one for left and one for right.

But, a single, downward-facing, HQ mike would do as a provider of surround information.

If the task becomes to carry out a stereo mix-down of a surround signal, this third channel is first phase-shifted 90 degrees, and then added differentially between the left and right channels, so that it will interfere least with the stereo sound.

In the case where such a mixed-down, analog stereo signal needs to be decoded into multi-speaker surround again, the main component of “Pro Logic” does a balanced summation of the left and right channels, producing the center channel, while at the same time a subtraction is carried out, which is sent rearward.
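That sum-and-difference arrangement can be sketched in a few lines. This is only an illustration: it omits the 90-degree phase shift, the 3dB gain constants, and the steering servos of a real Pro Logic decoder, so the numbers are purely illustrative.

```python
def encode_downmix(left, right, surround):
    """Fold a surround channel differentially into stereo
    (real matrix encoders apply gain constants omitted here)."""
    lt = [l + 0.5 * s for l, s in zip(left, surround)]
    rt = [r - 0.5 * s for r, s in zip(right, surround)]
    return lt, rt

def decode_matrix(lt, rt):
    """Passive matrix decode: the sum gives the centre channel,
    the difference gives the rearward (surround) signal."""
    centre = [(a + b) / 2 for a, b in zip(lt, rt)]
    surround = [a - b for a, b in zip(lt, rt)]
    return centre, surround

# With silent front channels, the difference recovers the surround
# exactly, and nothing leaks into the centre:
lt, rt = encode_downmix([0.0, 0.0], [0.0, 0.0], [1.0, -0.5])
centre, surround = decode_matrix(lt, rt)
assert surround == [1.0, -0.5]
assert centre == [0.0, 0.0]
```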

The advantage which Pro Logic II has over Pro Logic I is that this summation first adjusts the relative gain of both input channels, so that the front-center channel has zero correlation with the rearward surround information, which has presumably been recovered from the adjusted stereo as well.

Now, an astute reader will recognize that, if the surround-sound thus recovered was ‘positive facing left’, its addition to the front-left signal will produce the rear-left signal favorably. But then the thought could come up: ‘How does this also derive a rear-right channel?’ The reason this question can arise is the fact that a subtraction has taken place within the Pro Logic decoder, which is either positive when the left channel is the stronger, or positive when the right channel is the stronger.

(Edit 02/15/2017 : The less trivial answer to this question is: a convention might exist by which the left stereo channel was always encoded as delayed 90 degrees, while the right was always advanced, so that a subsequent 90-degree phase-shift when decoding the surround signal can bring it back to its original polarity, allowing it to be mixed with the rear left and right speaker outputs again. The same could be achieved if the standard stated that the right stereo channel was always encoded as phase-delayed.

However, the obvious conclusion of that would be that, if the mixed-down signal was simply listened to as legacy stereo, it would seem strangely asymmetrical, which we can observe does not happen.

I believe that when decoding Pro Logic, the recovered Surround component is inverted when it is applied to one of the two Rear speakers. )

But what the reader may already have noticed is that, if he or she simply encodes this mixed-down stereo into an MP3 file, later attempts to use a Pro Logic decoder are for naught, and some better means must exist to encode surround-sound onto DVDs, or otherwise into compressed streams.

Well, because I have exhausted my search for any way to preserve the phase-accuracy, at least within highly-compressed streams, the only way in which this happens that makes any sense to me is if, in addition to the ‘joint stereo’ which provides two channels, a 3rd channel was multiplexed into the compressed stream, which, as before, has its own set of constraints for compression and expansion. These constraints can again minimize the added bit-rate needed, let us say because the highest frequencies are not thought to contribute much to human directional hearing…

(Edit 02/15/2017 :

Now, if a computer decodes such a signal, and recognizes that its sound card is only in stereo, the actual player application may do a stereo mix-down as described above, in hopes that the user has a Pro Logic II -capable speaker amp. But otherwise, if the software recognizes that it has 4.1 or 5.1 channels as output, it can do the reconstruction of the additional speaker channels in software, better than Pro Logic I did it.

I think that the default behavior of the AC3 codec when decoding, if the output is only specified to consist of 2 channels, is to output legacy stereo only.

The approach that some software might take is simply to put two stages in sequence: first, AC3 decoding with 6 output channels; secondly, mixing the resulting 6 channels down to stereo in a standard way, such as with a fixed matrix. This might not be as good for movie sound, but would be best for music.


 1.0   0.0
 0.0   1.0
 0.5   0.5
 0.5   0.5
+0.5  -0.5
-0.5  +0.5
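The fixed matrix above can be applied as an ordinary matrix multiplication. In the sketch below, I am assuming the six rows correspond to front-left, front-right, centre, LFE, rear-left and rear-right, in that order; that channel ordering is my own guess, not something the matrix itself states:

```python
# Rows: assumed order FL, FR, C, LFE, RL, RR; columns: (Left, Right).
DOWNMIX = [
    ( 1.0,  0.0),   # front-left
    ( 0.0,  1.0),   # front-right
    ( 0.5,  0.5),   # centre
    ( 0.5,  0.5),   # LFE
    (+0.5, -0.5),   # rear-left
    (-0.5, +0.5),   # rear-right
]

def downmix_sample(channels):
    """Mix one 6-channel sample frame down to (left, right)."""
    left = sum(c * row[0] for c, row in zip(channels, DOWNMIX))
    right = sum(c * row[1] for c, row in zip(channels, DOWNMIX))
    return left, right

# Centre content lands equally in both stereo channels:
assert downmix_sample([0.0, 0.0, 1.0, 0.0, 0.0, 0.0]) == (0.5, 0.5)
# Rear-left content lands differentially, Pro Logic style:
assert downmix_sample([0.0, 0.0, 0.0, 0.0, 1.0, 0.0]) == (0.5, -0.5)
```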


If we expected our software to do the steering, then we might also expect that software to do the 90° phase-shift in the time domain, rather than in the frequency domain. And this option is really not feasible in a real-time context.

The AC3 codec itself would need to be capable of 6-channel output. There is really no blind guarantee that a 6-channel signal is communicated from the codec to the sound system, through an unknown player application… )

(Edit 02/15/2017 : One note which should be made on this subject is that the type of matrix which I suggested above might work for Pro Logic decoding of the stereo, but that, if it does, it will not be heard correctly on headphones.

The separate subject exists of ‘Headphone Spatialization’, and I think this has become relevant in modern times.

A matrix approach to Headphone Spatialization would assume that the 4 elements of the output vector are different from the ones above. For example, each of the crossed-over components might be subject to some fixed time-delay, based on the Inter-Aural Delay, after it is output from the matrix, instead of awaiting a phase-shift… )

(Edit 03/06/2017 : After much thought, I have come to the conclusion that there must exist two forms of the Surround channel, which are mutually exclusive.

There can exist a differential form of the channel, which can be phase-shifted 90⁰ and added differentially to the stereo.

And there can exist a common-mode, non-differential form of it, which either correlates more with the Left stereo or with the Right stereo.

For analog Surround – aka Pro Logic – the differential form of the Surround channel would be used, as it would for compressed files.

But when an all-in-one surround-mike is implemented on a camcorder, this originally provides a common-mode Surround channel. And then it would be up to the audio system of the camcorder to provide steering, according to which this channel correlates more with either the front-left or the front-right. As a result, a differential surround channel can be derived. )

(Updated 11/20/2017 : )
