About the Amplitudes of a Discrete Differential

One of the concepts which exist in digital signal processing is that the difference between two consecutive input samples (in the time-domain) can simply be output, thus resulting in a differential of some sort, even though the samples of data do not represent a continuous function. There is a fact which must be observed to occur at (F = N / 2) – i.e., when the frequency is half the Nyquist frequency (N), where (N = h / 2) and (h) is the sampling frequency.

The input signal could be aligned with the samples, to give a sequence of [s0 … s3] equal to

0, +1, 0, -1

This set of (s) is equivalent to a sine-wave at (F = N / 2). Its discrete differential [h0 … h3] would be

+1, +1, -1, -1

At first glance we might think that this output stream has the same amplitude as the input stream. But the problem is that the output stream, by the same token, is not aligned with the samples. There is an implicit peak in amplitude between (h0) and (h1) which is greater than (+1), and an implicit peak between (h2) and (h3) which is more negative than (-1). Any adequate filtering of this stream, as part of a D/A conversion, will reproduce a sine-wave with a peak amplitude greater than (1).

(Edit 03/23/2017 :

In this case we can see that samples h0 and h1 of the output stream would be phase-shifted 45⁰ with respect to the zero crossings and to the peak amplitude, which would exist exactly between h0 and h1. Therefore, the amplitude of h0 and h1 will be the sine-function of 45⁰ with respect to this peak value, and the actual peak would be (the square root of 2) times the values of h0 and h1. )
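Below is a minimal sketch (assuming Python with numpy; my own illustration, not part of the original argument) which confirms this numerically: it takes the discrete differential of the four-sample sequence above, and reads the amplitude of the sinusoid each sequence represents off a 4-point DFT. The input comes out with an amplitude of 1, while the differential comes out with an amplitude of roughly 1.414, i.e. (the square root of 2).

```python
import numpy as np

# One period of a sine-wave at F = N / 2 (a quarter of the sampling rate),
# aligned so that it lands exactly on the values 0, +1, 0, -1:
s = np.array([0.0, +1.0, 0.0, -1.0])

# Its discrete differential, treating the signal as periodic:
h_out = s - np.roll(s, 1)
print(h_out)                                              # [ 1.  1. -1. -1.]

# Amplitude of the single sinusoid each sequence represents,
# read off bin k = 1 of a 4-point DFT:
amp_in  = 2 * np.abs(np.fft.fft(s)[1]) / len(s)           # 1.0
amp_out = 2 * np.abs(np.fft.fft(h_out)[1]) / len(h_out)   # ~1.414  (sqrt of 2)
print(amp_in, amp_out)
```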

And so a logical question which anybody might want answered would be, ‘Below what frequency does this operation stay below unity gain?’ And the answer to that question is revealed by differential calculus. If a sine-wave has a peak amplitude of (1), then its instantaneous differential at the zero-crossing equals (2 π F), which is also known as (ω). The discrete differential only accumulates this slope over one sample interval of (1 / h), so its gain is approximately (2 π F / h) = (π F / N), and it follows that unity gain will only take place at (F = N / π). This is a darned low frequency in practice. If the sampling rate is 44.1 kHz, this is reached somewhere around 7 kHz, and music, for which that sampling rate was devised, easily contains sound energy above that frequency.
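As a further check (again assuming numpy; the closed-form gain of the first-difference operation, 2·sin(π F / h), is a standard result I am supplying here, not something derived above), the unity-gain crossing for 44.1 kHz material does land near 7 kHz, close to the (N / π) estimate:

```python
import numpy as np

h = 44100.0                    # sampling frequency (Hz)
N = h / 2                      # Nyquist frequency

# Exact amplitude gain of the first-difference operation at frequency F:
#   |1 - exp(-i * 2*pi * F / h)|  =  2 * sin(pi * F / h)
F = np.linspace(1.0, N, 200_000)
gain = 2.0 * np.sin(np.pi * F / h)

# Frequency at which that gain crosses unity:
f_unity = F[np.argmin(np.abs(gain - 1.0))]
print(f_unity)                 # ~7350 Hz (exactly h / 6)
print(N / np.pi)               # ~7019 Hz, the estimate from the calculus argument
```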

What follows is also a reason for which […], by itself, offers poor performance in compressing signals. It usually needs to be combined with other methods of data-reduction, thus possibly resulting in the lossy […]. And another approach which uses […] is […], the last of which is a proprietary codec, which minimizes the loss of quality that might otherwise stem from using […].

I believe this observation is also relevant to This Earlier Posting of mine, which implied a High-Pass Filter with a cutoff frequency of 1 kHz, that would form part of a Band-Pass Filter. My goal was to obtain a gain of at least 0.5 over the entire interval, and to simplify the Math.

(Edited 03/21/2017 . )


How certain signal-operations are not convolutions.

One concept that exists in signal processing is that a filter may be defined in the time-domain, in a way that resembles a convolution. And yet, a filter derived from that definition may no longer be expressible perfectly as a convolution.

For example, the filter in question might add reverb to a signal recursively. In the frequency-domain, the closer together two frequencies are which need to be distinguished, the longer the interval in the time-domain which needs to be considered before an output sample is computed.

Well, reverb that is recursive would need to be expressed as a convolution with an infinite number of samples. In the frequency-domain, this would result in sharp spikes instead of smooth curves.
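As an illustration of that point (a hypothetical numpy sketch, with an assumed delay of 12 samples and a feedback gain of 0.5), the impulse response of a recursive reverb of the form y[n] = x[n] + g · y[n − D] never dies out exactly, so the equivalent convolution kernel would need infinitely many taps:

```python
import numpy as np

D, g = 12, 0.5        # assumed delay (in samples) and feedback gain of the reverb

# Impulse response of the recursive reverb  y[n] = x[n] + g * y[n - D] :
# the impulse keeps re-circulating, so the equivalent convolution kernel
# has a tap of g**k at every multiple k*D of the delay, without ever ending.
taps_shown = 8
kernel = np.zeros(taps_shown * D + 1)
kernel[::D] = g ** np.arange(taps_shown + 1)
print(kernel[::D])    # 1, 0.5, 0.25, ... only approaching zero, never reaching it
```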

I.e., if the time-constant of the reverb was 1/4 millisecond, a 4 kHz sine-wave would complete a full cycle within this interval, while a 2 kHz sine-wave would be inverted in phase by 180⁰. What this can mean is that a representation in the frequency-domain may simply have maxima and minima that alternate every 2 kHz. In practice, the task might never be undertaken to make such an effect recursive.
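That behaviour can also be checked numerically. In the sketch below (assuming numpy, a sampling rate of 48 kHz so that 1/4 millisecond is exactly 12 samples, and a feedback gain of 0.5 – all values of my own choosing), the magnitude response of the recursive reverb peaks at 0, 4, 8 kHz… and dips at 2, 6, 10 kHz…, i.e. maxima and minima alternating every 2 kHz:

```python
import numpy as np

h = 48000                     # assumed sampling rate, so 1/4 ms is exactly 12 samples
D = int(0.25e-3 * h)          # reverb delay in samples (12)
g = 0.5                       # assumed feedback gain (< 1, for stability)

# Frequency response of the recursive reverb  y[n] = x[n] + g * y[n - D] :
#   H(f) = 1 / (1 - g * exp(-i * 2*pi * f * D / h))
f = np.linspace(0.0, h / 2, 2401)            # 10 Hz steps, up to the Nyquist frequency
H = 1.0 / (1.0 - g * np.exp(-2j * np.pi * f * D / h))

# Maxima at 0, 4, 8 kHz ... and minima at 2, 6, 10 kHz ...
for freq in (2000, 4000, 6000, 8000):
    i = np.argmin(np.abs(f - freq))
    print(freq, round(abs(H[i]), 3))         # dips ~0.667, peaks 2.0
```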

(Last Edited on 02/23/2017 … )


A Thought on SRS

Today, when we buy a laptop, we assume that its internal speakers offer inferior sound by themselves, but that through the use of a feature named ‘SRS’, they are enhanced, so that sound which simply comes from two speakers in front of us seems to fill the space around us, somewhat the way surround-sound would.

The immediate problem with Linux computers is that they do not offer this enhancement. However, technophiles have known for a long time that this problem can be solved.

The underlying assumption here is that the stereo being sent to the speakers should act as if each channel were sent to one ear in an isolated way, as if we were using headphones.

The sound that leaves the left speaker reaches our right ear with a slightly longer time-delay than the time-delay with which it reaches our left ear, and a converse truth exists for the right speaker.

It has always been possible to time-delay and attenuate the sound that came from the left speaker in total, before subtracting the result from the right speaker-output, and vice versa. That way, the added signal that reaches the left ear from the left speaker cancels with the sound that reached it from the right speaker…
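A single-pass version of that idea might look like the following sketch (Python with numpy; the 0.2-millisecond delay and the attenuation of 0.7 are placeholder values, not measured ones). Note that this is only the non-recursive, first-order form of the effect: the correction signals themselves also cross over to the opposite ear, which is one reason the full effect becomes recursive, as described below.

```python
import numpy as np

def crosstalk_cancel(left, right, h=44100, delay_s=0.0002, atten=0.7):
    """Single-pass (non-recursive) crosstalk cancellation: delay and
    attenuate each channel, then subtract it from the opposite speaker's
    output, so that it opposes the crosstalk reaching the 'wrong' ear."""
    d = int(round(delay_s * h))                  # inter-aural delay, in samples
    def delayed(x):
        return np.concatenate([np.zeros(d), x[:len(x) - d]])
    out_left  = left  - atten * delayed(right)
    out_right = right - atten * delayed(left)
    return out_left, out_right
```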

The main problem with that effect is that it will mainly seem to work when the listener is positioned in front of the speakers, in exactly one position.

I have just represented a hypothetical setup in the time-domain. There can exist a corresponding representation in the frequency-domain. The only problem is that this effect cannot truly be achieved with just one graphical-equalizer setting, because it affects (L+R) differently from how it affects (L-R). (L+R) would be receiving some recursive, negative reverb, while (L-R) would be receiving some recursive, positive reverb. But reverb can also be expressed by a frequency-response curve, as long as that curve has sufficiently fine resolution.
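To make that observation concrete, the single-pass sketch above can be rewritten on the sum and difference signals: (L+R) receives a delayed copy with a negative sign, while (L-R) receives one with a positive sign – the non-recursive analogue of the negative and positive reverb just described. (This continues the hypothetical crosstalk_cancel() sketch from earlier in this posting, with the same assumed delay and attenuation.)

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal(1000)
R = rng.standard_normal(1000)

out_L, out_R = crosstalk_cancel(L, R)            # the sketch from earlier

# The identical operation, written directly on the sum and difference signals:
d = int(round(0.0002 * 44100))
delayed = lambda x: np.concatenate([np.zeros(d), x[:len(x) - d]])
M = (L + R) - 0.7 * delayed(L + R)               # "negative reverb" applied to (L+R)
S = (L - R) + 0.7 * delayed(L - R)               # "positive reverb" applied to (L-R)

print(np.allclose(out_L + out_R, M), np.allclose(out_L - out_R, S))   # True True
```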

This effect will also work well with MP3-compressed stereo, because with Joint Stereo, an MP3 stream is spectrally complex in its reproduction of the (L-R) component.

I expect that when companies package SRS, they do something similar, except that they may tweak the actual frequency-response curves into something simpler, and they may also incorporate a compensation, for the inferior way the speakers reproduce frequencies.

Simplifying the curves would allow the effect to break down less, when the listener is not perfectly positioned.

We do not have it under Linux.

(Edit 02/24/2017 : A related effect is possible, by which 2 or more speakers are converted into an effectively-directional speaker-system. I.e., the intent could be that sound which reaches our filter as the (L) channel should predominantly leave the speaker-set at one angle, while sound which reaches our filter as the (R) channel should leave the speaker-set at an opposing angle.

In fact, if we have an entire array of speakers – i.e. a speaker-bar – then we can apply the same sort of logic to them, as we would apply to a phased-array radar system.

The main difference with such a system, as opposed to one based on the Inter-Aural Delay, is that this one would absolutely require that we know the distance between the speakers. And then we would use that distance as the basis for our time-delays… )
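As a rough sketch of that last point (my own illustration; the six speakers, 5 cm spacing and 30⁰ steering angle are made-up numbers), the per-speaker time-delays for a linear array follow directly from the spacing and the desired angle, exactly as they would for a phased-array:

```python
import numpy as np

def steering_delays(n_speakers, spacing_m, angle_deg, c=343.0):
    """Per-speaker time-delays (in seconds) which steer a linear array of
    speakers toward angle_deg off the array's broadside, phased-array style."""
    angle = np.radians(angle_deg)
    delays = np.arange(n_speakers) * spacing_m * np.sin(angle) / c
    return delays - delays.min()                 # keep every delay non-negative

# E.g., a 6-speaker bar with 5 cm spacing, steering the (L) channel 30 degrees to one side:
print(steering_delays(6, 0.05, 30.0))            # steps of roughly 73 microseconds
```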
