Different types of music, for testing Audio Codecs – Or Maybe Not.

One of my recent activities has been to take Audio CDs from the 1980s and 1990s, the encoding of which was limited only by the 44.1kHz sample rate and the bit-depth, as well as by whatever type of Sinc Filter was once used to master them, but not by any sort of lossy compression, and to “rip” those into different types of lossy compression, in order to evaluate the latter. The two types of compression I recently played with were ‘AAC’ (plain), and ‘OGG Opus’, at 128kbps both times.

One of the apparent facts which I learned was that Phil Collins’ music is not the best type to try this with. The reason is that much of his music was recorded using electronic instruments of his era, whose main function was to emulate standard acoustical instruments, but in a way that was ‘acoustically pure’. The fact that Phil Collins started his career as a drummer did not prevent him from later releasing solo albums.

If somebody is listening to an entire string section of an orchestra, or to a brass section, then one factor which contributes to the nature of the sound is that every Violin is minutely off-pitch, as is every French Horn. But what that also means is that the resulting sound is “Thick”. Its spectral energy is spread in subtle ways, whereas if somebody mixes two sine-waves that have exactly the same frequency, he obtains another sine-wave. If one mixes 10 sine-waves that have exactly the same frequency, one still obtains a single sine-wave.
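To illustrate that point numerically, here is a small Python / NumPy sketch, which is purely my own toy example and not anything taken from a Codec: it builds one mixture out of ten sine-waves at exactly 440Hz with random phases, a second mixture out of ten sine-waves detuned from 440Hz by up to half a percent, and then measures how well each mixture can be reproduced by a single sine-wave at 440Hz:

import numpy as np

fs = 44100                               # CD sampling rate, in Hz
t = np.arange(fs) / fs                   # one second of samples
rng = np.random.default_rng(0)
phases = rng.uniform(0.0, 2.0 * np.pi, 10)

# Ten sine-waves at exactly the same frequency, with random phases:
unison = sum(np.sin(2.0 * np.pi * 440.0 * t + p) for p in phases)

# Ten sine-waves, each minutely off-pitch (detuned by up to 0.5%):
freqs = 440.0 * (1.0 + rng.uniform(-0.005, 0.005, 10))
ensemble = sum(np.sin(2.0 * np.pi * f * t + p) for f, p in zip(freqs, phases))

def single_sine_residual(x, f):
    # Least-squares fit of one sine-wave of frequency f (with free amplitude
    # and phase), returning the relative error of that fit.
    basis = np.column_stack([np.sin(2.0 * np.pi * f * t),
                             np.cos(2.0 * np.pi * f * t)])
    coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return np.linalg.norm(x - basis @ coeffs) / np.linalg.norm(x)

print("unison mixture:   residual =", single_sine_residual(unison, 440.0))
print("detuned ensemble: residual =", single_sine_residual(ensemble, 440.0))

The first residual comes out at essentially zero, confirming that the unison mixture is still just one sine-wave, while the residual of the detuned ensemble stays large, because its energy has been spread over nearby frequencies.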

Having sound to start from which is ‘Thick’ is a good basis to test Codecs. Phil Collins does not provide that. Therefore, if the acoustical nature of the recording is boring, I have no way to know whether it’s the Codec that failed to bring out greater depth, or whether that was the fault of Phil Collins.

(Update 8/03/2019, 12h55 : )

Since the last time I edited this posting, I learned that Debian / Stretch, the Linux version I have installed on the computer I name ‘Phosphene’, only ships with ‘libopus v1.2~alpha2-1’ from the package repositories. Apparently, when using this version, the best one can hope for is an equivalent to 128kbps MP3 quality. This was the true reason for which I was obtaining inferior results, along with the fact that I had given the command to encode my Opus Files using a version of ‘ffmpeg’ that just happened to include marginal support for ‘Opus’, instead of using the actual ‘opus-toolkit’ package.

What I have now done is to download and custom-compile ‘libopus v1.3.1’, as well as its associated toolkit, and to make sure that the programs work. Rumour has it that when this version is used at a bit-rate of 96kbps, virtual transparency will result.

And I’ve written quite a long synopsis as to why this might be so.

(Update 8/03/2019, 15h50 : )

I have now run an altered experiment, encoding my Opus Files at 96kbps, and discovered to my amazement that the sound I obtained seemed better than what I had already obtained above, using 128kbps AAC-encoded Files.


 

(Update 10/02/2019, 12h25 : )

When I use the command ‘opusenc’, which I’ve custom-compiled as written above, it defaults to a frame-size of 20 milliseconds. Given a sampling rate of 48kHz, this amounts to a frame-size – or granule – of 960 samples. This is very different from what the developers were suggesting in their article in 2010 (see posting linked to above). With that granule length, the tonal accuracy will be approximately twice as good as it was with MP3 encoding, or with AAC encoding, without requiring that any “Hadamard Transforms” be used. With the default setting, Opus will generally be able to distinguish frequencies that are 25Hz apart, while it was in the nature of MP3 only to be able to distinguish between frequencies that are roughly 40Hz apart, except for the fact that the lowest distinguishable frequency was near 20Hz.
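As a sanity check of those numbers, the arithmetic can be written out in a few lines of Python. This assumes, as a simplification, that a granule of N samples yields N frequency coefficients spanning 0Hz through the Nyquist frequency, and it ignores how Opus actually groups its coefficients into bands, but it reproduces the figures just quoted:

def bin_spacing_hz(sample_rate, granule_samples):
    # Width of one frequency bin, assuming granule_samples coefficients
    # covering the range from 0Hz up to sample_rate / 2.
    return (sample_rate / 2.0) / granule_samples

opus_granule = int(48000 * 0.020)        # 20ms at 48kHz = 960 samples
mp3_granule = 576                        # MP3's long-block granule, at 44.1kHz

print("Opus granule:", opus_granule, "samples")
print("Opus bin spacing: %.1f Hz" % bin_spacing_hz(48000, opus_granule))   # 25.0
print("MP3 bin spacing:  %.1f Hz" % bin_spacing_hz(44100, mp3_granule))    # ~38.3, i.e. roughly 40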

If Hadamard Transforms are in fact used, then it now strikes me as more likely that they will be used to increase temporal resolution at the expense of spectral resolution, not the reverse. And the reason I think this is the following paragraph in the manpage for the ‘opusenc’ command-line:

 


       --framesize N
              Set maximum frame size in milliseconds (2.5, 5, 10, 20, 40,  60,
              default: 20)
              Smaller  framesizes  achieve lower latency but less quality at a
              given bitrate.
              Sizes greater than 20ms  are  only  interesting  at  fairly  low
              bitrates.

(Update 8/12/2019, 6h00 : )

I have now listened very carefully to the Phil Collins music encoded with AAC at 128kbps, and to the exact same songs encoded with Opus at 96kbps. What I’ve come to find is that Opus seems to preserve spectral complexity better than AAC. However, the AAC-encoded versions of the same music seem to provide a slightly better perception of the positioning, all around the listener, of instruments and voices in the lower-mid-range frequencies, as a result of Stereo. And this would be when the listener is using a good set of headphones.

Dirk

 

Dolby Atmos

My new Samsung Galaxy S9 smart-phone exceeds the audio capabilities of the older S-series phones, and its sound-chip has a feature called “Dolby Atmos”. Its main premise is that a movie may have had its audio encoded according to either Dolby Atmos, or according to the older, analog ‘Pro Logic’ system, and that, using headphone spatialization, it can be played back with more or less correct positioning. Further, the playback of mere music can be made richer.

(Updated 11/25/2018, 13h30 … )

Rather than just writing that this feature exists and works, I’m going to use whatever abilities I have to analyze the subject, and to try to form an explanation of how it works.

In This earlier posting, I effectively wrote the (false) supposition that sound compression which works in the frequency domain fails to preserve the phase position of the signal correctly. I explained why I thought so.

But in This earlier posting, I wrote about what the industry had done in practice, which can result in the preservation of the phase positions of frequency components.

The latter of the above two postings is the more accurate. What follows from that is that, if the resolution of the compressed stream is high, meaning that the quantization step is small, the phase position is likely to be preserved well. If the resolution (of the sound) is poor, meaning that the quantization step is large and the resulting integers small, poor phase information will also result, which may be so poor as only to capture the ±180° difference that follows from recorded, non-zero coefficients being signed values.
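To make that statement a little more concrete, here is a toy numerical example in Python, which is entirely my own illustration and not the arithmetic of any actual codec: it takes one 20-millisecond frame containing a quiet 1kHz tone with a known phase, quantizes the frequency-domain coefficients first with a fine step and then with a coarse step, and reads the phase back out of the 1kHz bin:

import numpy as np

fs = 48000                                # sampling rate, in Hz
n = 960                                   # one 20ms frame
t = np.arange(n) / fs
true_phase = 0.7                          # radians
x = 0.05 * np.cos(2.0 * np.pi * 1000.0 * t + true_phase)   # a quiet 1kHz tone

spectrum = np.fft.rfft(x)
bin_1khz = round(1000 * n / fs)           # the bin holding the 1kHz tone

def quantize(z, step):
    # Round the real and imaginary parts to the nearest multiple of 'step'.
    return (np.round(z.real / step) + 1j * np.round(z.imag / step)) * step

print("true phase: %+.3f rad" % true_phase)
for step in (0.1, 32.0):                  # a fine step, then a coarse step
    recovered = np.angle(quantize(spectrum, step)[bin_1khz])
    print("step %4.1f -> recovered phase: %+.3f rad" % (step, recovered))

With the fine step, the recovered phase matches the original. With the coarse step, the smaller part of the coefficient rounds away to zero, and all that survives is the sign of what remains, which is the ±180° situation described above.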

‘Dolby Atmos’ is a multi-track movie sound system that encodes actual speaker positions, rather than being based on the outdated Pro Logic boxes, which relied on analog wires coming in. In order to understand what was done with Pro Logic, maybe the reader should also read This earlier posting of mine, which explains some of the general principles. In addition, while Pro Logic 1 and 2 had physical speakers as outputs, Dolby Atmos on the S9 aims to use headphone spatialization to achieve a similar effect.

I should also state from the beginning that the implementation of Dolby Atmos in the Samsung S9 phone allows the user to select between three modes when active:

  1. Movies,
  2. Music,
  3. Voice.

In addition to the actual surround decoding, the Samsung S9 changes the equalizer settings – yes, it also has a built-in equalizer.

(Updated 11/30/2018, 7h30 … )

Continue reading Dolby Atmos