Today, when we buy a laptop, we assume that its internal speakers offer inferior sound by themselves, but that through a feature named ‘SRS’ they are enhanced, so that sound which simply comes from two speakers in front of us seems to fill the space around us, somewhat the way surround-sound would.
The immediate problem with Linux computers is that they do not offer this enhancement. However, technophiles have known for a long time that this problem can be solved.
The underlying assumption here is that the stereo being sent to the speakers should act as if each channel were sent to one ear in an isolated way, as if we were using headphones.
The sound that leaves the left speaker reaches our right ear with a slightly longer time-delay than it reaches our left ear, and the converse holds for the right speaker.
It has always been possible to time-delay and attenuate the total sound that came from the left speaker, before subtracting the result from the right speaker-output, and vice versa. That way, the added signal that reaches the left ear from the left speaker, cancels with the sound that reached it from the right speaker…
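As a sketch of that recursive cross-subtraction (not the actual SRS algorithm), the idea could look like this, where the attenuation `a` and the delay `d` in samples are both hypothetical values:

```python
import numpy as np

def crosstalk_cancel(left, right, a=0.5, d=30):
    """Subtract a delayed, attenuated copy of each output channel from the
    opposite input channel; the subtraction is recursive, because each
    output feeds back into the other (a and d are hypothetical values)."""
    out_l = np.zeros_like(left)
    out_r = np.zeros_like(right)
    for n in range(len(left)):
        fb_r = out_r[n - d] if n >= d else 0.0  # delayed right output
        fb_l = out_l[n - d] if n >= d else 0.0  # delayed left output
        out_l[n] = left[n] - a * fb_r
        out_r[n] = right[n] - a * fb_l
    return out_l, out_r
```

Feeding an impulse into the left channel shows the recursion at work: the right output carries a negative echo after `d` samples, which in turn produces a positive echo in the left output after `2d` samples, and so on.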
The main problem with that effect is that it will mainly seem to work when the listener is positioned in front of the speakers, in exactly one position.
I have just represented a hypothetical setup in the time-domain. There can exist a corresponding representation in the frequency-domain. The only problem is that this effect cannot truly be achieved with just one graphical-equalizer setting, because it affects (L+R) differently from how it affects (L-R). (L+R) would receive some recursive, negative reverb, while (L-R) would receive some recursive, positive reverb. But reverb can also be expressed by a frequency-response curve, as long as that curve has sufficiently fine resolution.
This effect will also work well with MP3-compressed stereo, because with Joint Stereo, an MP3 stream is spectrally complex in its reproduction of the (L-R) component.
I expect that when companies package SRS, they do something similar, except that they may tweak the actual frequency-response curves into something simpler, and they may also incorporate compensation for the inferior way the speakers reproduce frequencies.
Simplifying the curves would allow the effect to break down less, when the listener is not perfectly positioned.
We do not have it under Linux.
(Edit 02/24/2017 : A related effect is possible, by which 2 or more speakers are converted into an effectively-directional speaker-system. I.e., the intent could be, that sound which reaches our filter as the (L) channel, should predominantly leave the speaker-set at one angle, while sound which reaches our filter as the (R) channel, should leave the speaker-set at an opposing angle.
In fact, if we have an entire array of speakers – i.e. a speaker-bar – then we can apply the same sort of logic to them, as we would apply to a phased-array radar system.
The main difference with such a system, as opposed to one based on the Inter-Aural Delay, is that this one would absolutely require we know the distance between the speakers. And then we would use that distance, as the basis for our time-delays… )
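As a sketch under assumed values, the per-speaker time-delays for such a phased array would follow from the speaker spacing, the steering angle, and the speed of sound (taken here as roughly 343 m/s; the spacing and sample rate are hypothetical):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def steering_delays(num_speakers, spacing_m, angle_deg, sample_rate=44100):
    """Per-speaker delays, in whole samples, that steer the combined
    wavefront toward angle_deg, phased-array style: each successive
    speaker is delayed by spacing * sin(angle) / c relative to the last."""
    dt = spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND
    delays = [i * dt * sample_rate for i in range(num_speakers)]
    offset = min(delays)  # shift so that no delay is negative
    return [round(d - offset) for d in delays]
```

For a hypothetical 4-speaker bar with 5 cm spacing, steering to 0° yields no relative delays at all, while steering off-axis produces a linearly increasing delay ramp across the array.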
(Edit 02/13/2017 : ) According to my description of this in the time-domain, if an attempt was made to find a simple implementation which does not depend on a specific set of speakers, the main problem would not be in guessing the delay. There is a time-delay which is sometimes taken as constant for human listeners, called the Inter-Aural Delay (IAD).
The bigger problem would be in trying to set the correct attenuation.
If two speakers were placed in front of the listener at +/- 30°, then some substantial attenuation, such as a multiplier of 0.5, might seem appropriate. In that case, the time-delay should also be the sine of 30°, times the IAD, i.e. 0.5 times the IAD. But often, the distance from the speakers will be greater, so that the angle between them will be narrower.
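To put a number to that: assuming a hypothetical IAD of about 0.63 ms, the delay for speakers at +/- 30° works out to roughly 14 samples at 44.1 kHz:

```python
import math

IAD_S = 0.00063  # assumed Inter-Aural Delay of ~0.63 ms (hypothetical value)

def crossfeed_delay_samples(angle_deg, sample_rate=44100):
    """Delay, in whole samples, for speakers placed at +/- angle_deg:
    the sine of the angle, times the IAD, times the sample rate."""
    return round(math.sin(math.radians(angle_deg)) * IAD_S * sample_rate)
```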
Choosing multipliers closer to 1.0 will cause a crescendo according to the infinite series I suggested, so that at 1.0 the limit of amplitudes would be infinite. At 0.5, the maximum amplitude would have an upper bound of 2x the original amplitude.
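The series in question is the geometric series 1 + a + a² + …, whose limit is 1/(1 − a); a quick numerical check:

```python
def amplitude_bound(a, terms=1000):
    """Partial sum of the geometric series 1 + a + a^2 + ..., which
    bounds the amplitude of the recursive crossfeed with multiplier a."""
    total, term = 0.0, 1.0
    for _ in range(terms):
        total += term
        term *= a
    return total
```

At a = 0.5 the bound converges to 2.0, i.e. twice the original amplitude; at a = 0.9 it already reaches 10.0, and as a approaches 1.0 it diverges.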
Thus, if the speakers are closer together, the system would try to work them harder, just to achieve the apparent effect of surround-sound. And then, what the listener will mainly hear is the level of imperfection, which will sound like meaningless reverb.
Also, the pathway the sound takes to our ears is not always a straight line, due to details in the speaker-setup, as well as the fact that the space in front of the speakers may add more reverb. I.e., walls in the room that reflect sound will feed off the exaggerated (L-R) component, and will reverberate at different time-constants that are not predictable to the people who attempt a generic implementation.
A multiplier slightly higher than 0.5 might otherwise seem reasonable, but then, even this wording suggests that some amount of guesswork should go into an implementation.
(Edit 02/24/2017 :
My reasoning is as follows:
If the crossed-over sound-pressure is reduced by 1/2, due to cancellation, then the crossed-over sound-energy is actually reduced to 1/4. And then the question which remains is whether an intended signal can mask an unintended signal-component at the same frequency, which only has 1/4 the energy, i.e. an unintended component at -6 dB.
Also, it seems clear that one would apply a simplistic band-pass filter to the crossed-over signals, which admits everything from 1-6 kHz if based on the IAD, but which might be required to admit a narrower band of frequencies, if based on the distance between the speakers. )
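The -6 dB figure above follows from the fact that sound-energy scales as the square of sound-pressure:

```python
import math

pressure_ratio = 0.5                # crossed-over pressure halved by cancellation
energy_ratio = pressure_ratio ** 2  # energy goes as pressure squared: 1/4 remains
level_db = 10.0 * math.log10(energy_ratio)  # approximately -6.02 dB
```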
The highest-possible frequencies are also most likely to have a random phase-position with respect to what was intended, when distances change very slightly. And below some frequency, this effect should really not make any sense.
h = Sampling Frequency
N = h / 2

ω ≠ 2 π F ;
ω = 2 sin( πF / h )
k = 1 / (ω + 1)

a = k  →  α_N < 1
a = (1 + k) / 2  →  α_N = 1
a = √2 · k  →  α_F = 1

(α_N being the gain at the Nyquist Frequency, α_F the gain at the cutoff frequency F.)

F1 = 500 Hz
F2 = 5000 Hz
a1 = 0.5 · k1

h_0 = l_0 = 0
h_n = k1 · h_(n-1) + a1 · (s_n - s_(n-1))
l_n = k2 · l_(n-1) + (1 - k2) · h_n

If h < 22.05 kHz, omit the Low-Pass Filter and F2.
(Above Equations Revised 11/29/2017 . )
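A direct, tentative transcription of those recurrences into Python might look as follows (the sampling frequency is renamed sample_rate here, since h_n also denotes the high-pass output in the equations):

```python
import math

def one_pole_k(f_cutoff, sample_rate):
    """k = 1 / (omega + 1), with omega = 2*sin(pi*F/h), as in the
    equations above (h being the sampling frequency)."""
    omega = 2.0 * math.sin(math.pi * f_cutoff / sample_rate)
    return 1.0 / (omega + 1.0)

def crossfeed_bandpass(s, sample_rate=44100, f1=500.0, f2=5000.0):
    """High-pass at F1 followed by a low-pass at F2, per
    h_n = k1*h_(n-1) + a1*(s_n - s_(n-1)), a1 = 0.5*k1, and
    l_n = k2*l_(n-1) + (1 - k2)*h_n.
    The low-pass stage and F2 are omitted below 22.05 kHz."""
    k1 = one_pole_k(f1, sample_rate)
    a1 = 0.5 * k1                       # folds in the 0.5 attenuation
    use_lowpass = sample_rate >= 22050.0
    k2 = one_pole_k(f2, sample_rate) if use_lowpass else 0.0
    h_prev = l_prev = s_prev = 0.0      # h_0 = l_0 = 0
    out = []
    for x in s:
        h = k1 * h_prev + a1 * (x - s_prev)
        y = k2 * l_prev + (1.0 - k2) * h if use_lowpass else h
        out.append(y)
        s_prev, h_prev, l_prev = x, h, y
    return out
```

As a sanity check, a constant (DC) input should decay toward zero at the output, since the high-pass stage only responds to changes in the input.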
(Edit 02/14/2017 : ) Some designers of high-pass filters might casually think that they should be normalized to have a gain equal to 1 at the Nyquist Frequency. But a Mathematician would instead argue that the cutoff-frequency could hypothetically have been chosen much higher than that, such that (F >> N), in which case a gain of 1 at (N) would no longer follow. For that reason, the general assumption changes to a gain slightly less than 1 at (N), but one which is systematically consistent with the Math of the high-pass filter.
The Nyquist Frequency is then just another frequency.
Yet, all these normalizations really just act as a single gain-multiplier, pre-applied to the filter.
The corresponding issue does not exist for low-pass filters, because frequencies lower than zero were never even hypothetically considered.