Why Humans Can Hear the Inter-Aural Delay

I’ve given this subject some attention in the past. It seems to be a fact that, as long as a sound stream is temporally complex, humans can use the Inter-Aural Delay as one of several cues to the direction a sound came from. But as soon as the sound is temporally uniform, we cannot.

The way I’d explain this involves no physical controversy. Depending on how neurons fire, they can be seen to carry either binary or analog information. In short, one firing of a neuron can act like a ‘1’ as opposed to a ‘0’. Alternatively, the steady rate at which a different type of neuron fires can encode an analog level. Well, some neurons seem to operate in both modes: at the onset of a signal they fire a short burst, after which a steady rate indicates a sustained amplitude.

The length of the path that signals from the left auditory nerve need to take, to reach the left auditory cortex, may be exactly the same as the length of the path that signals from the right auditory nerve take, to reach the left auditory cortex.

Therefore, the auditory cortex should be in a good position to discern the order in which pulses of sound, or onsets of sound, reach it, as part of its information for determining direction, and hence to perceive the IAD.

But AFAICT, if the amplitude of a sine-wave is constant, then there is no real way for our cortex to discern the relative phase positions at which it reached our two ears.
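This ambiguity can be illustrated numerically. The following is my own sketch, not anything from the original argument: cross-correlating the two ear signals recovers a single, unambiguous delay for a transient (a click), but for a steady sine-wave the correlation is periodic, so a given shift cannot be told apart from shifts differing by whole periods. All signal parameters (48 kHz rate, 1 kHz tone, 24-sample delay) are arbitrary test values.

```python
import numpy as np

fs = 48000                      # sample rate, Hz (arbitrary choice)
delay = 24                      # inter-aural delay in samples (~0.5 ms)
n = 4800                        # exactly 100 periods of the 1 kHz test tone
t = np.arange(n) / fs

def best_lag(a, b, max_lag=100):
    # Return the circular-correlation lag at which 'a' best matches 'b'.
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(a, np.roll(b, -l)) for l in lags]
    return lags[int(np.argmax(corr))]

# Case 1: a transient, reaching the far ear 'delay' samples later.
click = np.zeros(n)
click[100] = 1.0
left, right = click, np.roll(click, delay)
print(best_lag(left, right))    # 24 - the true delay is recovered

# Case 2: a steady 1 kHz sine. Its period is 48 samples, so a 24-sample
# shift is indistinguishable from a shift of 24 - 48 = -24 samples.
sine = np.sin(2 * np.pi * 1000 * t)
r2 = np.roll(sine, delay)
c_true  = np.dot(sine, np.roll(r2, -24))   # correlation at the true lag
c_alias = np.dot(sine, np.roll(r2, 24))    # correlation at an aliased lag
print(np.isclose(c_true, c_alias, rtol=1e-3))   # True - the peak is ambiguous
```

The onset gives the correlation one distinguished peak; the steady tone gives it a comb of equally tall peaks, which is the same ambiguity the cortex would face.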

(Updated 11/22/2018, 18h55 … )


Dolby Atmos

My new Samsung Galaxy S9 smart-phone exceeds the audio capabilities of the older S-series phones, and its sound-chip has a feature called “Dolby Atmos”. Its main premise is that a movie may have had its audio encoded either according to Dolby Atmos, or according to the older, analog ‘Pro Logic’ system, and that, using headphone spatialization, it can be played back with more or less correct positioning. Further, the playback of mere music can be made richer.

(Updated 11/25/2018, 13h30 … )

Rather than just writing that this feature exists and works, I’m going to use whatever abilities I have to analyze the subject, and to try to form an explanation of how it works.

In This earlier posting, I effectively wrote the (false) supposition that sound compression which works in the frequency domain fails to preserve the phase position of the signal correctly. I explained why I thought so.

But in This earlier posting, I wrote about what the industry has done in practice, which can result in the preservation of the phase-positions of frequency components.

The latter of the two postings above is the more accurate. What follows from it is that, if the resolution of the compressed stream is high, meaning that the quantization step is small, phase position is likely to be preserved well. If the resolution of the sound is poor, meaning that the quantization step is large and the resulting integers small, poor phase information will result as well. It may be so poor as to preserve only the ±180° distinction that follows from recorded, non-zero coefficients being signed values.
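A small numerical sketch of my own can make this concrete (the coefficient magnitude, phase, and step sizes below are arbitrary test values, not taken from any codec): rounding the real and imaginary parts of one frequency-domain coefficient with a fine step barely disturbs its phase, while a step comparable to the coefficient's magnitude leaves little more than the signs of the two parts, i.e. only a coarse, quadrant-level phase.

```python
import numpy as np

true_phase = 0.4                          # radians, arbitrary test value
coeff = 100.0 * np.exp(1j * true_phase)   # one spectral coefficient

def quantized_phase(c, step):
    # Round the real and imaginary parts to the nearest multiple of 'step',
    # then read back the phase of the quantized coefficient.
    q = step * np.round(c.real / step) + 1j * step * np.round(c.imag / step)
    return float(np.angle(q))

fine   = quantized_phase(coeff, 1.0)      # small step: many levels survive
coarse = quantized_phase(coeff, 80.0)     # huge step: roughly sign-only

print(abs(fine - true_phase) < 0.01)      # True - phase well preserved
print(abs(coarse - true_phase) > 0.3)     # True - phase badly distorted
```

With the coarse step the imaginary part rounds all the way to zero, so the recovered phase collapses to 0, illustrating how a large quantization step discards phase information that a small step would keep.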

‘Dolby Atmos’ is a multi-track movie sound system that encodes actual speaker positions, and that is not based on the outdated Pro Logic boxes, which relied on incoming analog wires. In order to understand what was done with Pro Logic, maybe the reader should also read This earlier posting of mine, which explains some of the general principles. In addition, while Pro Logic 1 and 2 had physical speakers as their outputs, Dolby Atmos on the S9 aims to use headphone spatialization to achieve a similar effect.

I should also state from the beginning that the implementation of Dolby Atmos in the Samsung S9 phone allows the user to select between three modes when active:

  1. Movies,
  2. Music,
  3. Voice.

In addition to the actual surround decoding, the Samsung S9 changes the equalizer settings – yes, it also has a built-in equalizer.

(Updated 11/30/2018, 7h30 … )


There exists an argument against Headphone Spatialization.

Headphone Spatialization is also known as ‘Binaural Sound’ or ‘Binaural Audio’. It is based on the idea that, when people hear direction, they do not only take into account the relative amplitudes of Left versus Right – aka panning – but that somehow, they also take into account the time-delay that sound requires to reach the more-distant ear, with respect to the closer ear. This time-delay is also known as the Inter-Aural Delay.
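For a sense of scale, the delay in question is tiny. The sketch below is my own back-of-the-envelope calculation, using the standard Woodworth spherical-head approximation, ITD(θ) = (r/c)·(θ + sin θ), with a nominal head radius of 8.75 cm; none of these numbers come from the post itself.

```python
import math

def itd_seconds(azimuth_rad, head_radius_m=0.0875, c=343.0):
    # Woodworth spherical-head estimate of the inter-aural time delay
    # for a distant source at the given azimuth (0 = straight ahead).
    return (head_radius_m / c) * (azimuth_rad + math.sin(azimuth_rad))

# A source directly to one side (90 degrees) gives the maximum delay:
max_itd = itd_seconds(math.pi / 2)
print(round(max_itd * 1e6))   # ~656 microseconds
```

So the cue being debated here is well under a millisecond, far shorter than one sample period times the path-length differences we deal with in room acoustics.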

Quite frankly, if people are indeed able to do this, I would like to know how, because the actual cochlea cannot. The cochlea only perceives frequency-energies, and the brain, in each hemisphere, is able to do a close comparison of those energies between the sounds perceived by the two ears.

If such an ability exists, it may be due to what happens in the middle ear. And this could be because sound from one source reaches each cochlea over more than one pathway…

But what this also means is that, if listeners simply put on headphones and listen to stereo, they too are relatively unable to make out positioning, unless that stereo is very accurate, which it is not after it has been MP3-compressed.

So, hypothetically, technology exists that will take explicit surround-sound and encode it into stereo that is not meant to be re-compressed afterward, but that allows for stereo-perception.
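To show the shape of what such an encoder does, here is a deliberately crude toy of my own, not any real codec: it folds one mono source at a known azimuth down to two channels by applying an inter-aural delay (via the same Woodworth estimate as above) plus a simple level difference. A real binaural encoder would instead convolve each source with measured head-related transfer functions; the gain formula below is an invented placeholder.

```python
import numpy as np

fs = 48000   # sample rate, Hz (arbitrary choice)

def place_source(mono, azimuth_deg, fs=fs, head_radius=0.0875, c=343.0):
    # Render a mono source at the given azimuth (positive = to the right)
    # as a (left, right) stereo pair, using delay + level cues only.
    th = np.radians(azimuth_deg)
    itd = (head_radius / c) * (abs(th) + np.sin(abs(th)))   # Woodworth estimate
    d = int(round(itd * fs))                                # delay in samples
    near = mono
    far = np.concatenate([np.zeros(d), mono])[:len(mono)]   # delayed copy
    g = 0.5 * (1 + np.cos(th))     # invented, crude head-shadow level cue
    if azimuth_deg >= 0:           # source on the right: left ear is the far ear
        return g * far, near       # (left, right)
    return near, g * far

tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
left, right = place_source(tone, 60)   # place the tone 60 degrees to the right
```

The point of the sketch is only that the surround positions are baked into ordinary two-channel audio, which is exactly the kind of fine inter-channel detail that lossy re-compression would then damage.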

There exist valid arguments against the widespread use of such technology. The way each person interprets the sound from his two ears is an individual skill which he learns; in many cases people can only hear direction by moving their heads slightly; and the head of each person is slightly different anatomically. So there might not be any unified way to accomplish this.

What I find is that, when there are subtle differences in how this works over a large population, there is frequently a possible simplification that does not correspond 100% to how any one person interprets sound, but that works better in general than what would happen if the effect were not applied at all.

Therefore, I would think of this as a “Surround Effect”, rather than as ‘Surround Sound’, the latter of which is meant to be heard over speakers, and where the ability of an individual to make out direction falls back on his ability to also make out the direction of his own speakers.

Dirk