One concept that exists in signal processing, is that a filter can be defined in the time-domain, in a way that resembles a convolution, and yet a filter derived from that definition may no longer be expressible perfectly as a (finite) convolution.
For example, the filter in question might add reverb to a signal recursively. In general, the closer two frequencies are that need to be distinguished in the frequency-domain, the longer the interval in the time-domain that needs to be considered before an output sample is computed.
Well, reverb that is recursive would need to be expressed as a convolution with an infinite number of samples. In the frequency-domain, this results in sharp spikes instead of smooth curves.
I.e., if the time-constant of the reverb, i.e. its delay, was 1/4 millisecond, a 4kHz sine-wave would complete a full cycle within this interval, while a 2kHz sine-wave would arrive inverted in phase, by 180°. What this means is that the representation in the frequency-domain would simply have maxima and minima, that alternate every 2kHz. For that reason, the task might simply never be undertaken, to make the effect recursive.
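This alternation is easy to verify numerically. The sketch below computes the magnitude response of a recursive reverb with a 1/4-millisecond delay; the sample rate of 16 kHz and the feedback gain of 0.5 are assumed values for the demo, not anything prescribed above:

```python
import numpy as np

fs = 16000          # sample rate in Hz (an assumed value for this demo)
delay_s = 0.00025   # the 1/4-millisecond time-constant from the text
D = int(round(delay_s * fs))   # delay in samples (4 samples here)
g = 0.5             # feedback gain of the recursive reverb (assumed)

def comb_gain(f):
    """Magnitude response of y[n] = x[n] + g*y[n-D] at frequency f (Hz)."""
    w = 2 * np.pi * f / fs
    return 1.0 / abs(1.0 - g * np.exp(-1j * w * D))

# A 4 kHz sine completes a whole cycle within the delay -> reinforced.
# A 2 kHz sine arrives 180 degrees out of phase -> attenuated.
print(comb_gain(4000))   # a maximum
print(comb_gain(2000))   # a minimum
```

With these assumed values, the maxima land at multiples of 4 kHz and the minima at the odd multiples of 2 kHz, so the extrema do in fact alternate every 2 kHz.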
(Last Edited on 02/23/2017 … )
Likewise, the exercise can exist, in which it is known that a signal has been convolved by a finite number of elements, and in which the goal is to deconvolve it. The time-domain representation of that would be, to run a local model of how the assumed convolution affects every output sample, and then to subtract that effect from the current output sample, which gets fed in to the local model anyway.
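That feedback loop can be sketched as follows. This is a minimal illustration, assuming a made-up 3-element convolution whose first element is 1, so that the current sample passes through at unit gain:

```python
import numpy as np

# The known, finite convolution we want to undo (the values are made up;
# h[0] is assumed to be 1 to keep the sketch short).
h = np.array([1.0, 0.6, 0.2])

def deconvolve(x, h):
    """Feed each output sample back into a local model of the assumed
    convolution, and subtract the model's effect from the current
    input sample."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        model = sum(h[k] * y[n - k] for k in range(1, len(h)) if n - k >= 0)
        y[n] = x[n] - model
    return y

clean = np.array([1.0, 0.0, -0.5, 0.25, 0.0, 0.0])
blurred = np.convolve(clean, h)[:len(clean)]   # simulate the convolution
recovered = deconvolve(blurred, h)             # undo it sample by sample
```

Because every output sample is fed back into the local model, this loop exactly undoes the finite convolution, at the cost of being recursive rather than a finite convolution itself.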
Such an exercise, if expressed as a convolution itself, would need to possess an infinite number of elements. A simplification which can then be undertaken, is to compute only a finite set of weights, that are meant to belong to the deconvolution, and to use the resulting (finite) convolution.
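Those finite weights can be computed directly by a recursion, again assuming a made-up 3-element convolution as the example. Each weight cancels the effect of the known elements on one output sample:

```python
import numpy as np

h = np.array([1.0, 0.6, 0.2])   # the known, finite convolution (assumed)
N = 16                          # how many deconvolution weights to keep

# The exact inverse has an infinite number of elements; compute only its
# first N weights:  w[0] = 1/h[0],  w[n] = -(1/h[0]) * sum h[k]*w[n-k].
w = np.zeros(N)
w[0] = 1.0 / h[0]
for n in range(1, N):
    acc = sum(h[k] * w[n - k] for k in range(1, min(n, len(h) - 1) + 1))
    w[n] = -acc / h[0]

# Using the truncated weights as an ordinary, finite convolution:
approx_identity = np.convolve(h, w)   # ideally, a single unit impulse
```

Convolving the original elements with the truncated weights yields something close to a unit impulse, with only a tiny residue where the infinite tail was cut off.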
This latter approach may also be helpful in voice synthesis, where Linear Predictive Coding has been acquired for a voice. The resonance the LPC prescribes could continue indefinitely, since the LPC response to past output samples is added to some multiple of the current input sample. But to have the voice seem to reverberate could be unacceptable. In such a case as well, a finite number of elements can be computed, derived from the LPC data captured from a real voice, so that the synthesis of this voice will be better-defined in time.
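One way to derive such a finite set of elements, sketched below, is to take the impulse response of the all-pole LPC synthesis filter and simply truncate it. The 6 coefficients here are hypothetical values invented for the illustration, not captured from any real voice:

```python
import numpy as np

# Hypothetical LPC coefficients for one voice frame (6 elements, as in
# the example above; the actual values here are made up).
a = np.array([1.2, -0.9, 0.4, -0.15, 0.05, -0.01])

M = 64   # keep this many samples of the resonance

# Impulse response of the all-pole synthesis filter
#   y[n] = x[n] + sum_k a[k] * y[n-k],
# truncated to M elements, giving a finite convolution kernel whose
# ringing is strictly bounded in time.
fir = np.zeros(M)
for n in range(M):
    impulse = 1.0 if n == 0 else 0.0
    fir[n] = impulse + sum(a[k - 1] * fir[n - k]
                           for k in range(1, len(a) + 1) if n - k >= 0)
```

The resulting `fir` array can then be applied as an ordinary convolution, so the synthesized voice cannot reverberate past M samples no matter what the captured coefficients prescribe.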
In such a situation it would also make sense, to apply a multiplier less than one to the product of the local model, before adding or subtracting this product from the input sample recursively. Such a multiplier would work similarly to how a simple high-pass filter does, except that the cutoff-frequency would be replaced with the length, in samples, of the interval of elements:
k = n / (n + 1)
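As a sketch of how that multiplier might be wired in, assuming the same made-up 3-element convolution as before, with n taken as the number of elements that get fed back:

```python
import numpy as np

h = np.array([1.0, 0.6, 0.2])      # known convolution (assumed example)
n_len = len(h) - 1                  # interval of elements fed back
k = n_len / (n_len + 1.0)           # the multiplier k = n / (n + 1)

def deconvolve_damped(x, h, k):
    """Recursive deconvolution, with the local model's product scaled
    by k < 1 before it is subtracted, so the recursion stays leaky."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        model = sum(h[j] * y[n - j] for j in range(1, len(h)) if n - j >= 0)
        y[n] = x[n] - k * model
    return y

# The response to a unit impulse dies away instead of ringing forever:
imp = np.zeros(32)
imp[0] = 1.0
damped = deconvolve_damped(imp, h, k)
```

The leakage trades exactness of the deconvolution for a guarantee that the recursion decays, which is the same trade a simple high-pass filter's coefficient makes.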
(Edit : ) One effect which I have heard numerous times in the past, was of a computer-generated voice that sounded exaggeratedly shrill, even though an attempt must have been underway to deliver the benefits of maybe 6 elements of Linear Predictive Coding or more. The most probable reason for this would have been that programmers had fed back the delayed signal that was meant to reverberate, with some arbitrary multiplier closer to 0.5 , thus turning their series into a harsh high-pass filter.
Interestingly enough, I have not noticed this artifact with simulated voices, programmed in more-recent years.
(Edit 02/23/2017 : )
Another situation in which such reasoning applies, is in my recently-repeated subject of ‘achieving a surround-sound effect from a pair of cheap laptop speakers, which the user is merely sitting in front of.’ As I have written, it is easy to express this problem in the time-domain. But then, the simple-and-dirty solution suffers from the fact that simple-and-dirty high-pass and low-pass filters already attenuate their cutoff frequencies to (1/2) amplitude. This effectively means that the gain near the cutoff frequency will be too low, while midway between the two endpoints, the gain will be too high.
As I had already written, a preferred solution might be to design two equalizers, one applied to the (L+R) component, and the other applied to the (L-R) component. Maxima in one equalizer would correspond to minima in the other. Each equalizer could have settings spaced approximately 2 kHz apart, such that there would be a total of 11 settings, from 0 to 20 kHz, under the fictitious assumption that the sampling rate was 44 kHz, when we know it was in fact 44.1 kHz.
Where do I get the idea, that I can suddenly consider and/or design a high-quality filter, where before I was considering a low-quality one?
(Edited 02/28/2017 : I can additionally pretend that each equalizer-setting was 1 coefficient of a Type 1 Discrete Cosine Transform, of an interval of 11 samples. It should be possible to compute this Discrete Cosine Transform as its own inverse, and arrive at a convolution… In fact, it should be possible to recompute that convolution every time a user changes an equalizer-setting from a GUI, and then to re-apply it in mid-stream. )
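A quick sketch of that last idea, with 11 hypothetical equalizer gains invented for the demonstration: when the Type 1 DCT is written as a matrix with half-weighted endpoint samples, the matrix is its own inverse up to the constant factor 2·(N−1), so the same matrix that reads out the settings also turns them back into an 11-element convolution:

```python
import numpy as np

N = 11   # one coefficient per equalizer setting, 0 through 20 kHz

# Type 1 Discrete Cosine Transform as a matrix, with the endpoint
# samples given half weight.  With this convention the matrix is its
# own inverse, up to the constant factor 2*(N-1).
wgt = np.full(N, 2.0)
wgt[0] = wgt[-1] = 1.0
j, n = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
M = wgt[n] * np.cos(np.pi * j * n / (N - 1))

# Hypothetical equalizer settings (linear gains), one per 2 kHz band:
gains = np.array([1.0, 1.0, 0.8, 0.6, 0.8, 1.0, 1.2, 1.0, 0.9, 0.7, 0.5])

# Treating the settings as DCT coefficients, the inverse transform
# (the same matrix, rescaled) yields an 11-element convolution kernel:
kernel = (M @ gains) / (2 * (N - 1))

# Applying the forward transform to the kernel recovers the settings:
recovered = M @ kernel
```

Since the kernel is only 11 elements long, recomputing it whenever the user moves a slider, and swapping it in mid-stream, is cheap.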