In this posting I’m going to do something I rarely do: something like a product review. I have purchased the following two headphones within the past few months:

- A set of Bose “QuietComfort” headphones (wired, with active noise cancellation)
- A set of Bose “AE2 SoundLink” headphones (Bluetooth, without noise cancellation)
The first set of headphones has an analog 3.5mm stereo input cable with a dual-purpose mic / headphone plug, and comes in versions compatible either with Samsung or with Apple phones, while the second uses Bluetooth to connect to either brand of phone. I should add that the phone I use with both sets of headphones is a Samsung Galaxy S9, which supports Bluetooth 5.
The first set of headphones requires a single AAA alkaline battery to work properly. That battery powers not only its active noise cancellation, but also an equalizer chip of the kind that has become standard in many similar mid-priced headphones. The second has a built-in, rechargeable Lithium-Ion battery, which is claimed to be good for 10-15 hours of play-time, a claim I have not yet tested. Like the first, the second has an equalizer chip, but no active noise cancellation.
I should point out right off the bat that I don’t approve of this use of an equalizer chip, effectively to compensate for the sound oddities of the internal voice-coils. More properly, the voice-coils should be designed to deliver the best frequency response possible by themselves. But the reality in the year 2019 is that many headphones ship with an internal equalizer chip instead.
What I’ve found is that the first set of headphones, while having excellent noise cancellation, has two main drawbacks:
- The jack into which the analog cable fits is poorly designed, and can cause bad connections,
- The single AAA battery can only deliver a voltage of 1.5V, and if the actual voltage is any lower, either because a Ni-MH cell (nominally 1.2V) was used in place of an alkaline cell, or because the battery is simply running low, the low-voltage equalizer chip will no longer work fully, resulting in sound that reveals the deficiencies of the voice-coil.
The second set of headphones overcomes both these limitations, and I fully expect that its equalizer chip will behave uniformly, in a way my ears can adjust to in the long term, even when I use the headphones for hours or days. Also, I’d say that the equalizer arrangement in the first set did not fully do its job, even when the battery was fresh. Therefore, if I only had the money to buy one of the two, I’d choose the second set, which I just received today.
But having said that, I should also add that I run two 12,000-BTU air conditioners in the summer months, which really call for the noise cancellation of the first set of headphones, which the second set does not provide.
Also, I have an observation about why the EQ chip in the second set of headphones may work better than the similarly purposed chip in the first set…
(Updated 9/28/2019, 19h05 … )
(As of 9/26/2019 : )
That second set of headphones, in its main mode of operation, receives its audio via the wireless Bluetooth standard, which means that it receives a digital stream that has already been filtered correctly, running at either 44.1kHz or 48kHz – I suspect 48kHz. The main way this can affect the design of the equalizer chip is that the latter can perform all its filtering using numerical methods, while the chip in the first set of headphones is likely an analog chip. (:1)
There is one word of caution I’d give readers of this blog, though. If the reader’s phone does not support Bluetooth 5.0, then the sound quality they get may not be as good as what I’m getting, because it’s probably part of the BT5 standard to add a form of audio encoding suitable for high-definition audio – for music. There have been times in the past when Bluetooth was simply not a good way to connect high-quality headphones to a smart-phone.
As it stands, audio files on my phone that are MP3s are noticeably worse in playback quality than files in other formats that I encoded at 256kbps or higher, when listened to through the second set of headphones. What this means is that, by and large, the sound quality of the music is not being capped by the way it’s encoded into the Bluetooth 5 stream.
Also, I should warn that even though both these headphones contain equalizer chips, what those chips do cannot be switched on or off, or customized, by the user. Those chips are meant to be on constantly, so that the devices deliver their best sound.
And I suppose that one reason I’m writing this posting is to ask whether technology is heading in the right direction, in putting equalizer chips into headphones that in turn have lower-quality voice-coils. The answer I’ve come to is that it all depends on results. I can think of ways in which equalizer chips could be inadequate, as well as ways in which they could be sophisticated, powerful, and up to the job. I’d say that with the second set of headphones listed above, the SoundLink headphones without noise cancellation, technology is ‘back on track’, delivering sound quality that already existed in some high-end headphones in the late 1980s.
(Update 9/27/2019, 5h40 : )
I should mention that both these headphones have an auxiliary mode of operation, in which their active equalizer chips and / or noise cancellation are not operative. With the “QuietComfort” headphones, this is achieved simply by turning the power off, while with the “SoundLink” headphones, it additionally requires inserting a supplied auxiliary cable, which has a 2.5mm plug on the headphone side and a 3.5mm plug on the other. However, since my main interest was in the maximum sound quality these headphones can deliver, with their EQ chips working, I did not focus much on this mode.
This mode simply makes obvious what the ‘spectral personality’ of their internal voice-coils is.
(Update 9/27/2019, 17h40 : )
I suppose that an additional question could be posed: ‘Would the equalizer / decoding function of the Bose AE2 SoundLink headphones include a brick-wall filter, like the Sinc Filters of the 1980s?’
The answer that forms in my head is loosely based on the concept that the equalizer function may have been implemented by stating a series of frequency-component corrections as an objective, a series numbering maybe ?160? Such a series of numbers could have a Type 1 Discrete Cosine Transform computed, whose output samples also number 160.
Using a set of 160 data-points would have as a special result that, if the input stream was in fact running at 48kHz, the resulting frequency-bands would be linear and spaced 300Hz apart, a nice, even number to work with.
Correspondingly, a set of 147 data-points would have as its result that, if the input stream was running at 44.1kHz, the resulting frequency-bands would again be spaced 300Hz apart.
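The band-spacing arithmetic above is easy to verify. Here is a minimal check in Python, in which the point counts of 160 and 147 are the hypothetical ones from this posting, not anything published by Bose:

```python
# Hypothetical DCT point-counts, one per input sample-rate.
# Linear band spacing = sample_rate / n_points.
for sample_rate, n_points in [(48000, 160), (44100, 147)]:
    spacing = sample_rate / n_points
    print(f"{sample_rate} Hz over {n_points} points: {spacing:.0f} Hz per band")
```

Both cases come out to exactly 300 Hz per band, which is what makes those two point-counts attractive.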
If such a DCT has been computed, it could be applied to the sound stream as a “convolution”.
The punch-line to this idea is that the input stream could be over-sampled 2x, resulting in an intermediate sample-rate of 96kHz, and that of the frequency-component corrections, only the first 68 coefficients would actually be set to non-zero values. I.e., the equalizer would only aim to output the frequencies from zero to 20.1kHz inclusively, meaning that the other 92 coefficients would all be set to zero. When the DCT of this data-set is computed and applied as a convolution, it resembles a brick-wall filter approximately but not exactly, so that the applied DCT would then also perform whatever anti-aliasing was originally expected to follow, whenever a digitally sampled stream is converted to analog form.
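To make this concrete, here is a minimal numerical sketch of the hypothesis, using NumPy. The sizes (160 corrections, 68 non-zero) are the hypothetical ones from this posting, and the flat, all-ones settings are my own placeholder:

```python
import numpy as np

N = 160       # hypothetical number of frequency-component corrections
NONZERO = 68  # bands 0 .. 67, i.e., 0 Hz to 20.1 kHz at 300 Hz spacing

# Placeholder equalizer settings: pass the audible bands flat, zero the rest.
corrections = np.zeros(N)
corrections[:NONZERO] = 1.0

# Type-1-style DCT: each time-domain sample n sums the corrections,
# weighted by cosines, yielding a convolution kernel.
k = np.arange(N)
kernel = np.array([np.sum(corrections * np.cos(np.pi * k * n / N))
                   for n in range(N)])
kernel *= 2.0 / N   # normalize the non-zero frequencies by (2/N)
kernel[0] /= 2.0    # the cos(0) term is normalized by (1/N) instead

# With flat pass-band settings, the kernel approximates a truncated sinc,
# i.e., an approximate (not exact) brick-wall low-pass filter.
```

Non-flat settings would simply weight the cosines differently, bending the filter’s frequency response away from a plain low-pass.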
(Update 9/28/2019, 9h20 : )
If the reader noticed that this concept of an equalizer, based on the Discrete Cosine Transform, seems to have frequency-bands that are ‘too wide’ at the lower frequencies and ‘very narrow’ at the higher frequencies, that observation corresponds to what listeners expect of equalizers meant to be adjusted manually. There, logarithmic spacing of the frequency bands ‘makes sense’, because Humans tend to hear sound in terms of octaves of frequency.
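The contrast between the two spacings can be illustrated with a quick count of how many 300 Hz linear bands fall into each octave (a sketch of mine, not anything from the headphones’ documentation):

```python
BAND_HZ = 300
# Each octave doubles in width, so linear spacing covers the low
# octaves coarsely and the high octaves finely.
for low in [150, 300, 600, 1200, 2400, 4800, 9600]:
    high = 2 * low
    n_bands = (high - low) / BAND_HZ
    print(f"octave {low:5d}-{high:5d} Hz: {n_bands:5.1f} bands")
```

The 150-300 Hz octave gets only half a band, while the 9600-19200 Hz octave gets 32 of them.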
However, when describing how physical objects vibrate over a frequency range, linearly spaced frequency-bands like these may ‘make more sense’. It’s very plausible to me that at higher frequencies, vibrating physical objects will have more peaks and valleys in their amplitude response than at lower frequencies. The only question that seems to remain is whether the spacing of 300 Hz / band is in fact narrow enough.
On that subject, if the bands were made much narrower, the question would become whether the measurable amplitudes remain consistent. Small differences in the geometry of a listener’s head can change the resulting amplitudes when the bands are very narrow. This in turn risks leading to an equalizer that applies over a hundred incorrect amplitude-corrections.
Further, I’ve taken into account that when computing any sort of discrete Fourier Transform, including the DCT, a change of one sample in the frequency domain corresponds to a difference of exactly half a cycle, over the entire interval, in the time domain. I’ve taken this into account by assuming, from the beginning of my text, that input sampling rates of 48kHz or 44.1kHz would need to be over-sampled at least 2x, to yield sampling rates of 96kHz or 88.2kHz, before those samples are fed to the convolution, which would have a hypothetical interval of 160 or 147 samples.
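Under that half-cycle convention, the 2x over-sampled rates together with the hypothetical intervals still give the same 300 Hz spacing, which can be checked directly:

```python
# One frequency-domain step = half a cycle across the whole interval,
# so spacing = 0.5 * sample_rate / interval_length.
for fs, interval in [(96000, 160), (88200, 147)]:
    spacing = 0.5 * fs / interval
    print(f"{fs} Hz, {interval}-sample interval: {spacing:.0f} Hz per step")
```

Both combinations again come out to 300 Hz, which is why the 2x over-sampling was assumed from the start.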
What may confuse some people about these transforms is the fact that, in the frequency domain, they seem to include the frequency of zero (the simple average of all the time-domain samples), but to exclude the Nyquist Frequency. Thus, when counting, an interval of [0 … n-1] results in a quantity of (n) samples.
(Update 9/28/2019, 9h50 : )
I just want to reiterate that this design concept of an equalizer is just a hypothesis of mine; I cannot know exactly what kind of equalizer Bose has in fact put into the second set of headphones listed above. But my hypothesis goes so far as to suggest that Electrical Engineers computed both DCTs ahead of time, and that the chips simply load one data-set or the other into registers when functioning as equalizers inside the headphones. My hypothesis does not go so far as to suggest that the equalizer chip itself computes a DCT.
(Update 9/28/2019, 19h05 : )
An observation about Normalizing the Discrete Fourier Transforms:
I tend to conceptualize the normalization of any of the Discrete Fourier Transforms, including the Cosine Transforms, as existing separately from the concept of the transform itself. Therefore, I can also conceptualize that if two synchronized cosine-waves form a continuously varying product, and one of them has unit amplitude, the average of that product will be half the original amplitude of the other wave. Thus, if the product of one of the waves with its reference wave is summed, and the result is to state nothing but the original amplitude by itself, then the factor by which to normalize it follows logically as (2/n), where (n) is the number of samples. An exception exists for the coefficient whose reference wave is cos(0), which merely averages the signal over the interval: if that result is to be applied to the entire output when decoding, the normalization of that one coefficient is actually (1/n).
- This forms an Engineering concept of normalizing the transforms, in which the decoding is simply assumed to be a summation.
- The Mathematically pure concept of normalizing the transforms is such that it forms part of the definition of each transform. Because the inverse of each transform follows as another transform, each application of a transform is given equal responsibility, together with its normalization, to assure that the round-trip result is the original signal. What results is that with each application of the transform, at least the non-zero frequencies have a normalization of the square root of (2/n).
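The (2/n) and (1/n) factors of the Engineering concept can be demonstrated numerically. The following is a small check of mine, using full-cycle cosines over the interval; the signal and its amplitudes are arbitrary test values:

```python
import numpy as np

n = 160
t = np.arange(n)
# Test signal: DC offset of 0.5, plus a cosine of amplitude 0.8
# that completes exactly 3 full cycles over the interval.
signal = 0.5 + 0.8 * np.cos(2 * np.pi * 3 * t / n)

reference = np.cos(2 * np.pi * 3 * t / n)    # unit-amplitude reference wave
amp = (2.0 / n) * np.sum(signal * reference)  # recovers the 0.8 amplitude
dc = (1.0 / n) * np.sum(signal)               # the cos(0) case: recovers 0.5
```

The summed product needs the factor (2/n) to report the amplitude 0.8 exactly, while the DC coefficient needs only (1/n) to report 0.5.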
If the concept were applied seriously, using the Type 1 DCT of a set of coefficients as though those coefficients were equalizer settings, so that the result is a convolution that implements the equalizer, and if the assumption is also made that applying the convolution is a summation, then the first, Engineering concept of normalization above must also be applied. But I’d conclude that certain refinements would improve sound quality:
- The resulting convolution could be ‘folded’, so that a reversed series of convolution-weights precedes the directly computed series. Because convolution-weight zero states the centerpoint exactly, as well as the cos(0) product of the virtual equalizer settings, its normalization would become (2/n), just like the other normalizations.
- The virtual equalizer setting corresponding to a frequency of zero would be ignored, but the others could be positive or negative, relative to an assumed, additive contribution of the original signal (from the centerpoint of the folded DCT, i.e., from convolution-weight zero). Thus, the intent to reduce the amplitude of one of the non-zero frequency-components to zero would actually be introduced to the DCT as a relative, virtual equalizer setting of (-1.0).
- A Kaiser Window might be applied to the resulting convolution.
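The folding and windowing refinements can be sketched together in a few lines of NumPy; the sizes and the flat settings are again my hypothetical placeholders:

```python
import numpy as np

N = 160
corrections = np.zeros(N)
corrections[:68] = 1.0    # hypothetical flat settings over the audible bands

# One-sided kernel from a Type-1-style DCT of the settings:
k = np.arange(N)
half = np.array([np.sum(corrections * np.cos(np.pi * k * n / N))
                 for n in range(N)]) * (2.0 / N)

# Fold: a reversed copy of the weights precedes the directly computed
# series; the centerpoint (weight zero) appears once, keeping (2/N).
folded = np.concatenate([half[:0:-1], half])   # length 2N - 1, symmetric

# Taper with a Kaiser window to suppress ringing at the kernel's edges.
windowed = folded * np.kaiser(len(folded), 8.6)
```

The choice of the Kaiser beta parameter (8.6 here) is arbitrary on my part; it trades stop-band attenuation against transition-band width.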
Hypothetically, if such a filter series was not folded, the result would be similar to applying ‘only the trailing half of the Sinc Function’. The weight of the resulting first sample would be halved. Temporal resolution would appear excellent to most listeners’ ears, because the short-term sound output would be unaltered, and in the long term, the amplitudes of continuous sine-waves would be adjusted correctly. Given some of the realities of Sound Technology today, I’d regard this as an acceptable compromise, as long as I did not pay an exorbitant amount of money for the devices in question.