A single time-delay can also be expressed in the frequency-domain.

Another way to state that a stream of time-domain samples has been given a time-delay is to state that each frequency coefficient has been given a phase shift, one which depends both on the frequency of the coefficient and on the intended time-delay.
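To make that concrete, here is a minimal numpy sketch (the test signal and the 5-sample delay are arbitrary choices for the illustration): circularly shifting a block of samples in the time domain gives the same result as multiplying each DFT coefficient by a phase factor that depends on its frequency index and on the delay.

```python
import numpy as np

# An arbitrary test signal of N samples.
rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
d = 5  # the intended delay, in samples

# Time-domain: a (circular) delay of d samples.
delayed_time = np.roll(x, d)

# Frequency-domain: phase-shift each DFT coefficient X[k]
# by exp(-2*pi*i * k * d / N), then transform back.
k = np.arange(N)
X = np.fft.fft(x)
delayed_freq = np.fft.ifft(X * np.exp(-2j * np.pi * k * d / N)).real

print(np.allclose(delayed_time, delayed_freq))  # True
```

Note that the phase shift applied to coefficient k grows linearly with k, which is exactly the dependence on frequency described above.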

A concern that some readers might have with this is the fact that a number of samples needs to be stored in order for a time-delay to be executed in the time domain. But as soon as the coefficients of a Fourier Transform are spaced more closely together, which in this case allows a longer time-delay to be expressed, computing them also requires that a longer interval of time-domain samples be combined.
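A short back-of-the-envelope sketch of that trade-off, assuming a 44.1 kHz sample rate (the two coefficient spacings are invented for the example): to space DFT coefficients delta_f apart, N = fs / delta_f time-domain samples must be combined, i.e. a window of N / fs seconds.

```python
sample_rate_hz = 44_100

# Finer coefficient spacing requires a longer window of samples.
for delta_f_hz in (100.0, 10.0):
    n_samples = int(sample_rate_hz / delta_f_hz)
    window_ms = n_samples / sample_rate_hz * 1000
    print(delta_f_hz, n_samples, f"{window_ms:.0f} ms")
# 100.0 Hz spacing -> 441 samples (10 ms);
#  10.0 Hz spacing -> 4410 samples (100 ms)
```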

Now, if the reader would like to visualize this as an analogy to a graphic equalizer, he would need to imagine an equalizer whose sliders can be set to negative values, i.e. one that can command that a given frequency come out inverted. If he then set his sliders into the exact shape of a sine-wave, going both positive and negative in its settings, he should obtain a simple time-delay.

But there is one more reason for which this analogy would be flawed. The type of Fourier Transform best suited for this is the Discrete Fourier Transform, not one of the Discrete Cosine Transforms, because the DFT accepts complex numbers as its terms. And so the reader would also have to imagine that his equalizer not only has sliders that move up and down, but sliders with little wheels on them, with which he can give a phase-shift to one frequency without changing its amplitude. Obviously, graphic equalizers for music are not made that way.
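A tiny numpy sketch of what one such "wheel" does (the coefficient value and the 90-degree rotation are invented for the example): multiplying a complex DFT coefficient by a unit-magnitude complex exponential rotates its phase while leaving its amplitude untouched.

```python
import numpy as np

# One DFT coefficient with amplitude 3.0 (an arbitrary example value).
X_k = 3.0 + 0.0j

# The "wheel" on the slider: rotate the phase by 90 degrees
# without touching the slider's height (the amplitude).
phi = np.pi / 2
X_k_shifted = X_k * np.exp(1j * phi)

print(abs(X_k_shifted))       # 3.0 -- amplitude unchanged
print(np.angle(X_k_shifted))  # ~1.5708, i.e. pi/2 -- phase rotated
```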


How Low-Latency CODECs can have a Time-Delay on PCs and Mobile Devices.

aptX is a CODEC that uses two stages of “Linear Filters”, which are also known as “Convolutions”. And aptX gets used by some Bluetooth headphones, which offer it as a special feature, to be able to play HiFi music.

If we assume for the moment that each of the filters used by aptX is a 6-tap filter, meaning a filter with 6 coefficients, which is a realistic assumption, then ‘low latency’ would seem to be implied.

aptX subdivides the uncompressed spectrum into 4 sub-bands, each about 5 kHz wide, and each of which has been converted into a parallel 11.025 kHz stream of samples for further processing. At first glance, one would assume that the latency of such a filter is then the amount of time it takes for an input sound sample to make it past 6 coefficients. This would mean that the latency of one filter stage is less than 1 millisecond. And so the next question a casual observer might ask would be, ‘Why, then, is there a noticeable time-delay when I listen to my Bluetooth Media?’
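Under the assumptions above (6 taps operating on 11.025 kHz sub-band streams; actual aptX filter lengths may differ), the per-stage arithmetic would look like this:

```python
# Rough per-stage latency estimate, under the assumed parameters:
taps = 6
subband_rate_hz = 11_025  # 44.1 kHz / 4 sub-bands

# Time for one sample to make it past all 6 coefficients.
latency_ms = taps / subband_rate_hz * 1000
print(f"{latency_ms:.3f} ms")  # 0.544 ms per filter stage
```

Even with two such stages in series, the filters themselves would account for barely a millisecond.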

In this case, the time-delay has nothing to do with the Wavelets used, or with the low-pass and band-pass filters themselves. When we listen to a stream on a PC, a Laptop, or a consumer Mobile Device such as a smart-phone, there are stages involved which most users do not see, and which have nothing to do with these individual 6-tap filters.

aptX is actually implemented in hardware, within the Bluetooth chip-set of the source of the stream. It does not even rely on the CPU for the processing.

But what happens to any audio streamed on a consumer PC, Laptop or Mobile Device is that, first, a user-space application needs to transfer the audio into kernel space; next, a kernel module needs to transfer the stream, essentially, to the hardware interface; and then, firmware for the chip-set allows the latter to compress the stream on its way out via the Bluetooth antenna.

In consumer computing, every time an audio stream needs to be transferred from one process to another, there is a buffer that stores a minimum interval of sound. And the number of buffered samples can be higher than we might imagine, because software specialists try to compensate for possible multi-tasking, which could cause the stream to skip or drop whenever the CPU has been called away to do some background processing other than playing back the audio stream. When that happens, the condition is called a “Buffer Underflow”. So the delay of each of these buffers is commonly kept high, so that all the audio we hear has been delayed as a unit, and so that the CPU can still perform additional tasks without necessarily causing the audio to skip.
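As a rough illustration only (the chain of buffer sizes below is invented for the example; real systems differ widely), a few such buffers at 44.1 kHz already add up to a very audible delay:

```python
sample_rate_hz = 44_100

# A hypothetical chain of buffers, in samples, between the
# user-space application and the Bluetooth hardware.
buffers = [2048, 1024, 4096]

# Each buffer of N samples delays the stream by N / fs seconds.
total_delay_ms = sum(buffers) / sample_rate_hz * 1000
print(f"{total_delay_ms:.1f} ms")  # 162.5 ms
```

Compared with the sub-millisecond filter latency computed earlier, buffering of this sort clearly dominates.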

Such buffering does not just happen once, in consumer electronics, but several times.

The situation is different when aptX is built into professional equipment that gets used in concerts, etc. Here, the chips are not embedded in all-purpose computing devices, but rather in dedicated devices. And here, the buffering has essentially been eliminated, so that the technology can be used live.

Dirk