Some Music May Be Suitable for Surround-Sound.

One question which older members of the population might ask themselves is whether it makes any sense for people to be listening to music in surround-sound.

This question – or rather bias – stems from the way much music was mixed down in the 1970s and 1980s, when artists applied a catch-as-catch-can approach to creating stereo. In fact, back then the goal was often to confuse how the listener hears sound, using phase-shifts, and thus to be psychedelic. And so a basis exists to think that electronic music especially was never meant to be heard in surround-sound.

But the situation has changed since then. For several decades, some FM radio stations have been offering some music in surround-sound. Further, much of the old music from the 1970s has also been remastered, with more modern technical considerations in mind.

When a friend of mine says he is listening to Beethoven's 9th Symphony, I cannot be sure exactly which recording he means, and so I generally give people the benefit of the doubt. And if he says he is listening to Neil Young, does he mean the way Neil Young recorded in the 1970s, or is he referring to a recording which Neil Young personally remastered after the year 2000? ;)

Dirk

 

There exists an argument against Headphone Spatialization.

Headphone Spatialization is also known as ‘Binaural Sound’ or ‘Binaural Audio’. It is based on the idea that when people hear direction, they do not only take into account the relative amplitudes of Left versus Right – aka panning – but that somehow, they also take into account the time-delay which sound requires to reach the more-distant ear, with respect to the closer ear. This time-delay is also known as the Inter-Aural Delay.
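
As a minimal sketch of what that means in practice (assuming numpy, a 48 kHz sample rate, and a nominal maximum delay of about 0.66 ms for an average head; the function name and constants are merely illustrative), imposing such an Inter-Aural Delay on a mono signal could look like this:

```python
import numpy as np

def apply_itd(mono, azimuth_deg, sample_rate=48000, max_delay_s=0.00066):
    """Return (left, right), delaying the ear farther from the source
    by a fraction of max_delay_s proportional to sin(azimuth)."""
    delay_s = max_delay_s * np.sin(np.radians(azimuth_deg))
    n = int(round(abs(delay_s) * sample_rate))
    delayed = np.concatenate([np.zeros(n), mono])    # far ear: starts later
    undelayed = np.concatenate([mono, np.zeros(n)])  # near ear: padded to match
    if delay_s > 0:              # source to the right: left ear is the far ear
        return delayed, undelayed
    return undelayed, delayed    # source to the left (or dead center)

# Example: a 1 kHz tone placed 45 degrees to the right
t = np.arange(48000) / 48000.0
left, right = apply_itd(np.sin(2 * np.pi * 1000 * t), 45.0)
```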

Quite frankly, if people are indeed able to do this, I would like to know how, because the actual cochlea cannot do it. The cochlea only perceives frequency-energies, and the brain is able, in each hemisphere, to make a close comparison of those energies between the sounds perceived by the two ears.

If such an ability exists, it may be due to what happens in the middle ear. And this could be because sound from one source reaches each cochlea over more than one pathway…

But what this also means is that if listeners simply put on headphones and listen to stereo, they too are relatively unable to make out positioning, unless that stereo is very accurate, which it is not after it has been MP3-compressed.

So technology hypothetically exists that will take explicit surround-sound and encode it into stereo – stereo which is not meant to be re-compressed afterward, but which still allows for stereo-perception.
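
As a sketch of the general idea – loosely in the spirit of a passive matrix encoder such as Dolby Surround, though simplified, since a real encoder also phase-shifts the surround feed by 90 degrees, which is omitted here – four channels could be folded into two like this, again assuming numpy:

```python
import numpy as np

def matrix_encode(front_l, front_r, center, surround):
    """Fold 4 channels into Left-total / Right-total, hiding the surround
    channel as out-of-phase content between the two outputs."""
    g = np.sqrt(0.5)                      # -3 dB gain for the shared channels
    lt = front_l + g * center - g * surround
    rt = front_r + g * center + g * surround
    return lt, rt
```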

There exist valid arguments against the widespread use of such technology. The way each person interprets the sound from his two ears is an individual skill which he learns; in many cases, people can only hear direction by moving their heads slightly; and the head of each person is slightly different anatomically. So there might not be any unified way to accomplish this.

What I find is that when there are subtle differences in how this works across a large population, there is frequently a possible simplification, which does not correspond 100% to how any one individual interprets sound, but which works better in general than not applying the effect at all.

Therefore, I would think of this as a “Surround Effect”, rather than as ‘Surround Sound’, the latter of which is meant to be heard over speakers, and where the ability of an individual to make out direction falls back on his ability to make out the direction of his own speakers.

Dirk

 

A Concept About Directionality in Sound Perception

We all understand that, given two ears, we can hear panning when we listen to reproduced stereo, and perhaps also that sounds seem to come ‘from outside’ as opposed to ‘from inside’, corresponding to out-of-phase as opposed to in-phase content. But the reality of human sound perception is that we are supposed to be capable of more subtle perception of where sounds originate. I will call this more subtle perception of direction ‘complete stereo-directionality’.

One idea which some people have pursued is that we do not just hear amplitudes associated with frequencies, but that we might be able to perceive phase-vectors associated with frequencies as well. This idea seems to agree with the fact that at least a part of our complete stereo-directionality seems to be based on Inter-Aural Time-Differences as a basis for perceiving direction. It also seems to agree well with the fact that, in Science and with Machines, the amplitude of any frequency component can be represented by a complex number.
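
To sketch what that last fact means (assuming numpy, with the sample rate, frequency, and delay all invented for illustration), an Inter-Aural Time-Difference shows up, per frequency, as the phase angle of exactly such a complex number:

```python
import numpy as np

fs = 48000                                   # sample rate, Hz
delay = 20                                   # inter-aural delay, samples (~0.42 ms)
t = np.arange(4096) / fs
near = np.sin(2 * np.pi * 1000 * t)                 # signal at the near ear
far = np.sin(2 * np.pi * 1000 * (t - delay / fs))   # same signal, delayed

# Each FFT bin is one complex number: its magnitude is the energy at that
# frequency, and its angle is the phase.
k = int(1000 * 4096 / fs)                    # bin closest to 1 kHz
phase_diff = np.angle(np.fft.rfft(far)[k]) - np.angle(np.fft.rfft(near)[k])
# phase_diff ≈ -2π · f · Δt (modulo 2π), here about -2.6 radians
```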

But this idea does not seem to agree well with the fact that our ultimate organ for perceiving sound is not the outer ear, nor the middle ear, but the inner ear, which is also known as the cochlea. As I understand it, the cochlea is capable of differentiating along frequency-mappings incredibly precisely, but not along phase-relationships.

Now, some reason may exist to think that the middle ear and the skull carry out some sort of mixing of the sounds that enter the outer ear, before those sounds reach the cochlea. But for the moment, I am going to regard this detail as secondary.

I think that what ultimately happens is that on the cerebral cortex, just as it goes with the optical lobes, the aural lobes have a mapping of fingerprint-like ‘ridges’. The long-range mapping may be according to frequency, but the short-range mapping may be such that one set of ridges corresponds to input from one ear, while the negative of that same pattern of ridges represents the input of the opposite ear.

And so what the cerebral cortex can do is make very precise differentiations, in its short-range neural systems, between the amplitude which any one frequency-component has as perceived by one cochlea, as opposed to the other cochlea.
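
A minimal sketch of the comparison I am hypothesizing – per-frequency amplitude differences between the two ears – computed here with numpy over one short, equal-length frame of samples per ear (the function name is merely illustrative):

```python
import numpy as np

def interaural_level_difference_db(left, right, eps=1e-12):
    """Per-FFT-bin level difference, in dB, of left relative to right."""
    window = np.hanning(len(left))           # taper the frame before the FFT
    mag_l = np.abs(np.fft.rfft(left * window))
    mag_r = np.abs(np.fft.rfft(right * window))
    return 20 * np.log10((mag_l + eps) / (mag_r + eps))
```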

When sound events reach our ears, they can follow many paths, and they may also be mixed by our middle ear, so that real phase-positions lead to subtle amplitude-differences, as sensed by our cochlea, and as interpreted by our cerebral cortex with its ridged mappings. Inter-Aural Time-Differences may also lead to subtle differences in per-frequency amplitudes, by the time sounds reach the cochlea.

And I suspect that the latter is what leads to our ‘complete stereo-directionality’.


What this would also mean is that in lossy sound compression, if the programmers decided to compute a Fourier Transform of each stereo channel first – and the Discrete Cosine Transform is one type of Fourier Transform – and then to store the differences between the absolute amplitudes that result, they may quite accidentally have processed the sound closer to how human hearing processes it.
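
As a sketch of this first scheme, assuming numpy and scipy (scipy.fft.dct being one implementation of the Discrete Cosine Transform mentioned above):

```python
import numpy as np
from scipy.fft import dct

def per_channel_magnitude_difference(left, right):
    """Transform each channel separately, then difference the absolute
    amplitudes; phase-cancellation between L and R cannot inflate this."""
    mag_l = np.abs(dct(left, norm='ortho'))
    mag_r = np.abs(dct(right, norm='ortho'))
    return mag_l - mag_r
```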

If instead the programmers chose to compute the L-R component in the time-domain first, and then to perform some Fourier Transform of L+R and L-R afterward, they may have been intending to capture more information than can be captured the other way. But with this method, they may have captured information that human hearing is not able to interpret well.
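
For contrast, a sketch of this second scheme, under the same assumptions:

```python
from scipy.fft import dct

def mid_side_then_transform(left, right):
    """Form Mid (L+R) and Side (L-R) in the time-domain first,
    and only transform them afterward."""
    return dct(left + right, norm='ortho'), dct(left - right, norm='ortho')
```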

This would be especially true in cases where L and R mainly cancel, so that the amplitude of L+R is low, while the Fourier amplitude of L-R would be high.
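
A tiny numpy demonstration of that cancellation case (the tone frequency is invented for illustration):

```python
import numpy as np

t = np.arange(1024) / 48000.0
left = np.sin(2 * np.pi * 440 * t)
right = -left                           # equal but opposite in phase

print(np.max(np.abs(left + right)))     # mid:  ~0.0 – L+R vanishes
print(np.max(np.abs(left - right)))     # side: ~2.0 – L-R carries all the energy
# The first scheme's per-channel magnitude difference would also be ~0 here,
# since |DCT(L)| equals |DCT(R)| bin for bin.
```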

This might sound fascinating, due to whatever our middle ear next does with it, but it does not lead to meaningful interpretations of ‘where that sound even supposedly comes from’. Hence, while this could be psychedelic, it would not enhance our ‘complete stereo-directionality’.

Also, our brain may apply the idea that, relative to whatever sound we are focusing on, ‘all the other sounds’ form a continuous background noise, such that the sound we are focusing on may seem to have negative amplitudes, because its real amplitudes locally become lower than the virtual noise level. And while this may allow us to derive some sort of perception of phase-cancellation, it may not actually be due to our cochlea having picked up phase-cancellation.

Dirk