Some realizations about Digital Signal Processing

One of the realizations I've recently come across, about digital signal processing, is that when up-sampling a digital stream twofold just for the purpose of playing it back, simply performing a linear interpolation (to turn a 44.1kHz stream into an 88.2kHz one, or a 48kHz stream into a 96kHz one) does less damage to the sound quality than I had previously thought. And one reason I think this is the factual realization that doing so achieves the same thing that applying a (low-pass) Haar Wavelet would achieve, after each original sample had been doubled. After all, I had already said that Humans would have a hard time hearing that this has been done.
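That equivalence can be checked with a small sketch (my own code, purely an illustration): 2x up-sampling by linear interpolation, next to sample-doubling followed by the two-tap averaging kernel [1/2, 1/2], the low-pass half of the Haar transform.

```python
# Sketch of the claim above (my own illustration): 2x up-sampling by
# linear interpolation is equivalent to doubling each sample and then
# applying the two-tap, Haar-style averaging kernel [1/2, 1/2].

def upsample_linear(x):
    """Keep each sample, and insert the average of each adjacent pair."""
    out = []
    for a, b in zip(x, x[1:]):
        out.append(a)
        out.append((a + b) / 2.0)
    out.append(x[-1])
    return out

def upsample_doubled_then_averaged(x):
    """Repeat each sample, then convolve with [1/2, 1/2]."""
    doubled = [s for s in x for _ in (0, 1)]
    return [(doubled[i] + doubled[i + 1]) / 2.0
            for i in range(len(doubled) - 1)]

stream = [0.0, 1.0, 0.5, -0.5]              # a toy 44.1kHz excerpt
a = upsample_linear(stream)                 # the 88.2kHz version
b = upsample_doubled_then_averaged(stream)
# The two outputs agree, sample for sample.
```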

But then, given such an assumption, I think I've also come to further realizations, about where I was having trouble understanding what exactly Digital Signal Processors do. It might be Mathematically true to say that a convolution can be applied to a stream after it has been up-sampled. But depending on how many elements the convolution is supposed to have, on whether a single DSP chip is supposed to decode both stereo channels or only one, and on whether that DSP chip is also supposed to perform other steps associated with playing back the audio, such as decoding whatever compression Bluetooth 4 or Bluetooth 5 has put on the stream, it may turn out that realistic Digital Signal Processing chips just don't have enough MIPS (Millions of Instructions Per Second) to do all that.
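To put a rough number on that concern, here is a back-of-envelope sketch; the tap count and the cost model are assumptions of mine, not figures from any datasheet.

```python
# Back-of-envelope sketch (tap count and cost model are my own
# assumptions): multiply-accumulate budget for one convolution, applied
# to a stereo stream after 2x up-sampling.

sample_rate = 96_000     # a 48kHz stream, after 2x up-sampling
channels = 2             # one DSP chip decoding both stereo channels
taps = 256               # an assumed convolution length

# One multiply-accumulate per tap, per sample, per channel:
macs_per_second = sample_rate * channels * taps
print(macs_per_second / 1e6)   # about 49 million MACs per second
# Even before Bluetooth decompression is counted, that could consume a
# large fraction of a modest DSP chip's instruction budget.
```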

Now, I do know that DSP chips exist that have more MIPS, but those chips may also measure 2cm x 2cm, and may require much of the circuit-board they are to be soldered onto. Chips of that type are unlikely to be built in to a mid-price-range set of (Stereo) Bluetooth Headphones that has an equalization function.

But what I can speculate further is that some combination of alterations to these ideas should work.

For example, the convolution could be computed on the stream before it has been up-sampled, after which the stream could be up-sampled 'cheaply', using the linear interpolation. The way I had it before, the half-used virtual equalizer bands would also have accomplished a kind of brick-wall filter, whereas performing the virtual equalizer function on the stream before up-sampling would make use of almost all the bands. Doing it that way would halve the number of MIPS that a DSP chip needs to possess. It would also halve the frequency that linearly separates the bands, which would otherwise have created issues at the low end of the audible spectrum.
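A minimal sketch of that cheaper pipeline (my own illustration; the kernel is an arbitrary stand-in for an equalizer's convolution): the expensive convolution runs at the original rate, and only the linear interpolation runs at the doubled rate.

```python
# Sketch (my own illustration): equalize first, at the original sample
# rate, then up-sample 'cheaply' by linear interpolation, so the costly
# convolution touches only half as many samples.

def convolve(x, h):
    """Direct-form FIR convolution, output trimmed to len(x)."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, coeff in enumerate(h):
            if n - k >= 0:
                acc += coeff * x[n - k]
        y.append(acc)
    return y

def upsample_linear(x):
    """2x up-sampling: keep each sample, insert the pairwise average."""
    out = []
    for a, b in zip(x, x[1:]):
        out.extend([a, (a + b) / 2.0])
    out.append(x[-1])
    return out

stream = [0.0, 1.0, 0.0, -1.0, 0.0]
kernel = [0.25, 0.5, 0.25]      # arbitrary stand-in for an EQ kernel
playback = upsample_linear(convolve(stream, kernel))
```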

Alternatively, a digital 9- or 10-band equalizer, with the bands spaced an octave apart, could be implemented after up-sampling instead of before it, but again, much more cheaply in terms of the computational power required.

Dirk

Comparing two Bose headphones, both of which use active technology.

In this posting I’m going to do something I rarely do, which is, something like a product review. I have purchased the following two headphones within the past few months:

  1. Bose QuietComfort 25 Noise Cancelling
  2. Bose AE2 SoundLink

The first set of headphones has an analog 3.5mm stereo input cable with a dual-purpose Mic / Headphone Jack, and comes compatible either with Samsung or with Apple phones, while the second uses Bluetooth to connect to either brand of phone. I should add that the phone I use with both sets of headphones is a Samsung Galaxy S9, which supports Bluetooth 5.

The first set of headphones requires a single AAA alkaline battery to work properly. This not only powers its active noise cancelling, but also an equalizer chip, which has become standard with many similar middle-price-range headphones. The second has a built-in rechargeable Lithium-Ion Battery, which is rumoured to be good for 10-15 hours of play-time, something I have not yet tested. Like the first, the second has an equalizer chip, but no active noise cancellation.

I think that right off the bat I should point out, that I don’t approve of this use of an equalizer chip, effectively, to compensate for the sound oddities of the internal voice-coils. I think that more properly, the voice-coils should be designed to deliver the best frequency response possible, by themselves. But the reality in the year 2019 is, that many headphones come with an internal equalizer chip instead.

What I’ve found is that the first set of headphones, while having excellent noise cancellation, has two main drawbacks:

  • The jack into which the analog cable fits is poorly designed, and can cause bad connections,
  • The single AAA battery can only deliver a voltage of 1.5V. If the actual voltage is any lower, either because a Ni-MH battery was used in place of an alkaline cell, or because the battery is just plain low, the low-voltage equalizer chip will no longer work fully, resulting in sound that reveals the deficiencies of the voice-coil.

The second set of headphones overcomes both these limitations, and I fully expect that its equalizer chip will have uniform behaviour, which my ears will be able to adjust to in the long term, even when I use the headphones for hours or days. Also, I'd tend to say that the way the equalizer arrangement worked in the first set of headphones was not complete in fulfilling its job, even when the battery was fresh. Therefore, if I only had the money to buy one of the two, I'd choose the second set, which I just received today.

But, having said that, I should also add that I have two 12,000BTU air conditioners running in the Summer months, which really require the noise-cancellation of the first set of headphones, that the second set does not provide.

Also, I have an observation of why the EQ chip in the second set of headphones may work better than the similarly purposed chip in the first set…

(Updated 9/28/2019, 19h05 … )


Thoughts About Software Equalizers

If a software-equalizer possesses GUI controls that correspond to approximate octaves, or to repeated 1-2-5 sequences, it is entirely likely to be implemented as a set of bandpass filters acting in parallel. However, the simplistic bandpass filters I was contemplating would also have required that the signal be multiplied by a factor of 4, to achieve unit gain where their low-pass and high-pass cutoff frequencies join, as I described in this posting.
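As a sketch of one such 'simplistic' band (my own code; the cutoff frequencies are arbitrary), here is a first-order low-pass and a first-order high-pass in series, the kind of stage a parallel bank of filters might be built from:

```python
import math

# Sketch (my own illustration): one 'simplistic' bandpass stage, made of
# a first-order low-pass and a first-order high-pass in series.

def one_pole_coeff(cutoff_hz, sample_rate):
    """Feedback coefficient of a one-pole filter with the given cutoff."""
    return math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def bandpass(x, low_hz, high_hz, sample_rate):
    a_lp = one_pole_coeff(high_hz, sample_rate)  # low-pass sets the top edge
    a_hp = one_pole_coeff(low_hz, sample_rate)   # high-pass sets the bottom
    lp = hp = hp_prev_in = 0.0
    out = []
    for s in x:
        lp = (1.0 - a_lp) * s + a_lp * lp        # low-pass stage
        hp = a_hp * (hp + lp - hp_prev_in)       # high-pass stage
        hp_prev_in = lp
        out.append(hp)
    return out

# A sustained DC input dies away, because the high-pass side rejects it:
settled = bandpass([1.0] * 2000, 200.0, 2000.0, 48_000)
```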

(Edit 03/23/2017:

Actually, the parameters which define each digital filter are non-trivial to compute, but nevertheless computable, once the translation into the digital domain has been carried out correctly. And so a type of equalizer can be achieved, based on derived bandpass-filters, provided that each bandpass-filter has been tuned correctly.

If the filters cross over at their -6 dB point, then one octave lower or higher, one filter will reach its -3 dB point, while the other will reach its -12 dB point. So instead of -12 dB, this combination would yield -15 dB.

The fact that the signal which has wandered into one adjacent band is at -3db with respect to the center of that band, does not lead to a simple summation, because there is also a phase-shift between the frequency-components that wander across.

I suppose that the user should be aware that, in such a case, the gain of the adjacent bands has not dropped to zero at the peak of the current band, so that perhaps the signal will simplify, if the corner-frequencies have been corrected. That way, a continuous curve will result from discrete settings.

Now, if the intention is to design a digital bandpass filter with falloff curves steeper than 6 dB/octave, the simplistic approach would be just to put two of the previous stages in series, into a pipeline, resulting in second-order filters.
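A sketch of that simplistic approach (my own code; the cutoff and test frequencies are arbitrary), showing that running one first-order low-pass stage twice roughly doubles the attenuation, in dB, above the cutoff:

```python
import math

# Sketch (my own illustration): cascading the same first-order low-pass
# twice yields a second-order filter, with roughly double the roll-off.

def lowpass(x, cutoff_hz, sample_rate):
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, state = [], 0.0
    for s in x:
        state = (1.0 - a) * s + a * state
        y.append(state)
    return y

sr, f_c, f_test = 48_000, 1_000, 4_000   # test tone 2 octaves above cutoff
sine = [math.sin(2.0 * math.pi * f_test * n / sr) for n in range(sr)]

first_order = lowpass(sine, f_c, sr)
second_order = lowpass(first_order, f_c, sr)   # same stage, in series

def peak(y):
    return max(abs(v) for v in y[len(y) // 2:])   # skip the transient

db1 = 20.0 * math.log10(peak(first_order))    # roughly -12 dB (6 dB/octave)
db2 = 20.0 * math.log10(peak(second_order))   # roughly -24 dB (12 dB/octave)
```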

Also, the only way then to preserve the accuracy of the input samples is to convert them into floating-point format first, for use in processing, after which they can be exported to a practical audio format again. )

(Edit 03/25/2017 :

The way simplistic high-pass filters work, they phase-shift the signal close to +90°, far down along the part of the frequency-response curve which represents their roll-off. And simplistic low-pass filters will phase-shift the signal close to -90° under corresponding conditions.

OTOH, either type of filter is supposed to phase-shift its signal ±45°, at its -3 dB point.

What this means is that if the output from several band-pass filters is taken in parallel, i.e. summed, then the center-frequency of one band will lie along the roll-off part of the curve of each adjacent band, combined with the -3 dB point from either its high-pass or its low-pass component. But then, if the output of this one central band is set to zero, the outputs from the adjacent bands will be 90° apart from each other. )
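Those phase figures can be checked directly from the analog prototypes of the simplistic filters (my own sketch; I am assuming H(s) = 1/(1 + s/wc) for the low-pass and (s/wc)/(1 + s/wc) for the high-pass):

```python
import cmath, math

# Sketch (my own check of the figures above): phase response of the
# simplistic first-order filters, from their analog transfer functions.

def lp_response(f_hz, cutoff_hz):
    s_over_wc = 1j * f_hz / cutoff_hz      # s / wc, at s = j*2*pi*f
    return 1.0 / (1.0 + s_over_wc)

def hp_response(f_hz, cutoff_hz):
    s_over_wc = 1j * f_hz / cutoff_hz
    return s_over_wc / (1.0 + s_over_wc)

fc = 1000.0
lp_at_fc = lp_response(fc, fc)               # phase -45 degrees, gain -3 dB
hp_at_fc = hp_response(fc, fc)               # phase +45 degrees, gain -3 dB
lp_far_above = lp_response(100.0 * fc, fc)   # phase approaches -90 degrees
```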

(Edit 03/29/2017 :

A further conclusion of this analysis would seem to be, that even implementing an equalizer with 1 slider/octave properly requires that each bandpass-filter be a second-order filter instead. That way, when the signals wander across to the center-frequency of the slider for the next octave, they will be at -6 dB relative to the output of that slider, and 180° phase-shifted with respect to each other. Then, setting the center slider to its minimum position will cause the adjacent ones to form a working Notch Filter, and will thus allow any one band to be adjusted arbitrarily low.

And, halfway between the slider-center-frequencies, the gain of each will again be -3 dB, resulting in a phase-shift of ±90° with respect to the other one, and achieving flat frequency-response when all the sliders are in the same position.

The problem becomes that, if a 20-band equalizer is attempted, then because the 1 slider/octave example already required second-order bandpass-filters, the higher-resolution one will require 4th-order filters by the same token, which would be a headache to program… )


How certain signal-operations are not convolutions.

One concept that exists in signal processing is that there could be a definition of a filter which is based in the time-domain, and that this definition can resemble a convolution. And yet, a filter derived from it could no longer be expressible perfectly as a convolution.

For example, the filter in question might add reverb to the signal recursively. In the frequency-domain, the closer together two frequencies are which need to be distinguished, the longer the interval in the time-domain which needs to be considered before an output sample is computed.

Well, reverb that is recursive would need to be expressed as a convolution with an infinite number of samples. In the frequency-domain, this would result in sharp spikes instead of smooth curves.
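A minimal sketch of that point (my own code; the delay and gain are arbitrary): a recursive one-line reverb, whose equivalent convolution kernel has taps g**k at every multiple of the delay, and therefore never ends.

```python
# Sketch (my own illustration): a recursive reverb y[n] = x[n] + g*y[n-D]
# is equivalent to a convolution with infinitely many taps, g**k, spaced
# D samples apart, so no finite kernel reproduces it exactly.

def recursive_reverb(x, delay, gain):
    y = []
    for n, s in enumerate(x):
        feedback = gain * y[n - delay] if n >= delay else 0.0
        y.append(s + feedback)
    return y

def truncated_kernel(delay, gain, terms):
    """The first few taps of the equivalent, infinite convolution kernel."""
    h = [0.0] * (delay * terms + 1)
    for k in range(terms + 1):
        h[k * delay] = gain ** k
    return h

impulse = [1.0] + [0.0] * 40
echoes = recursive_reverb(impulse, delay=12, gain=0.5)
# The impulse response is 1, 0.5, 0.25, ... at 12-sample intervals,
# and it keeps going for as long as the stream does.
```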

I.e., if the delay of the reverb was 1/4 millisecond, a 4kHz sine-wave would complete one cycle within this interval, while a 2kHz sine-wave would be inverted in phase, by 180°. What this can mean is that a representation in the frequency-domain may simply have maxima and minima that alternate every 2kHz. And, the task might never be undertaken to make the effect recursive.
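The 1/4-millisecond example can be checked numerically (my own sketch, for the non-recursive case of a single added echo):

```python
import cmath, math

# Sketch (my own check of the example above): frequency response of a
# single, non-recursive echo, y(t) = x(t) + x(t - 0.25ms).

delay_s = 0.00025   # 1/4 millisecond

def gain_db(f_hz):
    h = 1.0 + cmath.exp(-1j * 2.0 * math.pi * f_hz * delay_s)
    return 20.0 * math.log10(abs(h)) if abs(h) > 1e-12 else float('-inf')

at_4khz = gain_db(4000.0)   # a maximum: the echo arrives in phase (+6 dB)
at_2khz = gain_db(2000.0)   # a minimum: the echo arrives 180 degrees out
# The maxima and minima keep alternating every 2kHz up the spectrum.
```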

(Last Edited on 02/23/2017 … )
