In This Posting, I elaborated at length about polynomial approximation that is not overdetermined, but rather exactly determined, by a set of unknown (`y`) values along a set of known time-coordinates (`x`). To summarize: if the sample-time-points are known to be the arbitrary X-coordinates 0, 1, 2 and 3, then a matrix (`X1`) can state the powers of these coordinates, and if, additionally, a vector (`A`) states the coefficients of a polynomial, then the product (` X1 * A `) would produce the four y-values as the vector (`Y`).

`X1` can be computed before the algorithm is designed, and its inverse, (` X1^-1 `), would be such that (` X1^-1 * Y = A `). Hence, given a prepared matrix, a linear multiplication can easily derive a set of coefficients from a set of variable y-values.

Well, this idea can be driven further. There could be *another* arbitrary set of x-coordinates, 1.0, 1.25, 1.5, 1.75, which are meant to be a 4-point interpolation within the first set. Another matrix, (`X2`), which states the powers of *this* sequence, could likewise be prepared before the algorithm is committed. Then (` X2 * A = Y' `), where (`Y'`) is a set of interpolated samples.

What follows from this is that (` X2 * X1^-1 * Y = Y' `). But wait a moment. Before the algorithm is ever burned onto a chip, the matrix (` X2 * X1^-1 `) can be computed by human programmers. We could call that our constant matrix (`X3`).

So a really cheap interpolation scheme could start with a vector of 4 samples (`Y`), and derive the 4 interpolated samples (`Y'`) just by doing one matrix-multiplication (` X3 * Y = Y' `). It would just happen that

`Y'[1] = Y[2]`

And so we could already guess, off the top of our heads, that the first row of `X3` should be equal to ( 0, 1, 0, 0 ).
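This can be checked directly. In the sketch below (my own illustration; it only reuses the exact `X1^-1` and `X2` that appear in the Yacas transcript later in this posting), `X3` is formed and its first and third rows are verified:

```python
from fractions import Fraction as F

# X1^-1, copied from the Yacas transcript (the matrix P).
P = [[F(1), F(0), F(0), F(0)],
     [F(-11, 6), F(3), F(-3, 2), F(1, 3)],
     [F(1), F(-5, 2), F(2), F(-1, 2)],
     [F(-1, 6), F(1, 2), F(-1, 2), F(1, 6)]]

# X2: powers of the interpolation coordinates 1, 5/4, 3/2, 7/4.
X2 = [[F(x) ** k for k in range(4)]
      for x in (F(1), F(5, 4), F(3, 2), F(7, 4))]

# X3 = X2 * X1^-1, the constant interpolation matrix.
X3 = [[sum(X2[i][k] * P[k][j] for k in range(4)) for j in range(4)]
      for i in range(4)]

assert X3[0] == [0, 1, 0, 0]   # so Y'[1] = Y[2], as guessed
assert X3[2] == [F(-1, 16), F(9, 16), F(9, 16), F(-1, 16)]
```

The remaining two rows come out as (-7/128, 105/128, 35/128, -5/128) and its mirror image, agreeing with the transcript.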

While this idea would certainly be considered obsolete by today's standards, it would correspond roughly to the amount of computational power a single *digital* chip could have delivered in real time, in the late 1980s.

I suppose that an important question to ask would be, ‘Aside from just stating that this interpolation smooths the curve, what else does it cause?’ My answer would be that although it allows (undesirable) aliasing of frequencies to occur during playback, when the encoded frequencies are close to the Nyquist frequency, very little aliasing will take place if the encoded frequencies are about 1/2 that or lower. And so, over most of the audible spectrum, this will still act as a kind of low-pass filter, even though over-sampling has taken place.

Dirk

P.S. If it were our goal *today* to design a 2x / 1-octave oversampling method, it would not be set in stone that we would always use the highly accurate ‘Half-Band Filter’, which is really still most suited to analog processing, although of course the half-band filter also has a numerical implementation. A situation could require a solution that is much cheaper computationally.

And then what we *could* do is just extract the 3rd row from (`X3`) above, and compute our single interpolated point each time as the dot-product of the nearest 4 input samples with this constant 3rd row of the matrix.
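A minimal sketch of that cheap scheme (the function name `oversample_2x` and the ramp input are my own illustration): each input sample is passed through, and the midpoint that follows it is the dot-product of the surrounding 4 samples with the 3rd row of `X3`.

```python
# Midpoint row of X3, read off the Yacas transcript below.
ROW3 = (-1.0 / 16, 9.0 / 16, 9.0 / 16, -1.0 / 16)

def oversample_2x(samples):
    out = []
    for i in range(1, len(samples) - 2):
        window = samples[i - 1 : i + 3]   # nearest 4 input samples
        out.append(samples[i])            # original sample passes through
        out.append(sum(w * s for w, s in zip(ROW3, window)))  # interpolated midpoint
    return out

# A linear ramp is only degree 1, so the cubic fit reproduces its midpoints exactly.
print(oversample_2x([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]))
# -> [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
```

Note that each output midpoint needs one sample of look-ahead, i.e. one input sample of latency.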

```
This is Yacas version '1.3.3'.
In> X := {{1, 0, 0, 0},{1, 1, 1, 1},{1, 2, 4, 8},{1, 3, 9, 27}};
Out> {{1,0,0,0},{1,1,1,1},{1,2,4,8},{1,3,9,27}}
In> P := Inverse(X);
Out> {{1,0,0,0},{(-11)/6,3,(-3)/2,1/3},{1,(-5)/2,2,(-1)/2},{(-1)/6,1/2,(-1)/2,1/6}}
In> PrettyForm(P);
/ \
| ( 1 ) ( 0 ) ( 0 ) ( 0 ) |
| |
| / -11 \ ( 3 ) / -3 \ / 1 \ |
| | --- | | -- | | - | |
| \ 6 / \ 2 / \ 3 / |
| |
| ( 1 ) / -5 \ ( 2 ) / -1 \ |
| | -- | | -- | |
| \ 2 / \ 2 / |
| |
| / -1 \ / 1 \ / -1 \ / 1 \ |
| | -- | | - | | -- | | - | |
| \ 6 / \ 2 / \ 2 / \ 6 / |
\ /
Out> True
In> P * {1, 1, 1, 1};
Out> {1,0,0,0}
In> 7 * 49;
Out> 343
In> X2 := {{1,1,1,1},{1,5/4,25/16,125/64},{1,3/2,9/4,27/8},\
In> {1,7/4,49/16,343/64}};
Out> {{1,1,1,1},{1,5/4,25/16,125/64},{1,3/2,9/4,27/8},{1,7/4,49/16,343/64}}
In> PrettyForm(X2);
/ \
| ( 1 ) ( 1 ) ( 1 ) ( 1 ) |
| |
| ( 1 ) / 5 \ / 25 \ / 125 \ |
| | - | | -- | | --- | |
| \ 4 / \ 16 / \ 64 / |
| |
| ( 1 ) / 3 \ / 9 \ / 27 \ |
| | - | | - | | -- | |
| \ 2 / \ 4 / \ 8 / |
| |
| ( 1 ) / 7 \ / 49 \ / 343 \ |
| | - | | -- | | --- | |
| \ 4 / \ 16 / \ 64 / |
\ /
Out> True
In> X3 := X2 * P;
Out> {{0,1,0,0},{(-7)/128,105/128,35/128,(-5)/128},{(-1)/16,9/16,9/16,(-1)/16},{(-5)/128,35/128,105/128,(-7)/128}}
In> PrettyForm(X3);
/ \
| ( 0 ) ( 1 ) ( 0 ) ( 0 ) |
| |
| / -7 \ / 105 \ / 35 \ / -5 \ |
| | --- | | --- | | --- | | --- | |
| \ 128 / \ 128 / \ 128 / \ 128 / |
| |
| / -1 \ / 9 \ / 9 \ / -1 \ |
| | -- | | -- | | -- | | -- | |
| \ 16 / \ 16 / \ 16 / \ 16 / |
| |
| / -5 \ / 35 \ / 105 \ / -7 \ |
| | --- | | --- | | --- | | --- | |
| \ 128 / \ 128 / \ 128 / \ 128 / |
\ /
Out> True
In> Y1 := {1, N(Cos(0.75*Pi)), N(Cos(1.5*Pi)), N(Cos(2.25*Pi))};
Out> {1,-0.7071067811,0.0572269e-11,0.7071067811}
In> A1 := P * Y1;
Out> {1.,(-0.1338822509e3)/36,9.6568542495/4,(-0.5794112549e2)/144}
```

(Edit 07/31/2016 : ) It might strike the reader as a little surprising that this Math produces a 3rd row for the matrix (`X3`) in which samples `Y[1]` and `Y[4]` carry only 1/9 the significance of `Y[2]` and `Y[3]`, in finding `Y'(1.5)`. Yet, according to my own common sense, this is warranted, because `Y[1]` is also 3x as distant along the X-axis from `Y'(1.5)` as `Y[2]` is… Since this row has no cubic but only quadratic terms, `3^2 = 9`, and hence `Y[1]` and `Y[4]` *should* only be 1/9 as significant.
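The two properties claimed here can be checked directly from the 3rd row as printed in the transcript (this check is my own sketch, not part of the original posting):

```python
from fractions import Fraction as F

# Midpoint row of X3, read off the Yacas transcript above.
row3 = [F(-1, 16), F(9, 16), F(9, 16), F(-1, 16)]

# The outer taps carry 1/9 the weight (in magnitude) of the inner taps,
# matching the 3^2 = 9 distance argument.
assert abs(row3[0]) / abs(row3[1]) == F(1, 9)

# The weights also sum to 1, so a constant signal is reproduced exactly.
assert sum(row3) == 1
```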

(Edit 08/02/2016 : ) On the basis of only 4 samples, it is difficult in general to ascertain how well an interpolation ‘works’, when it is supposed to distinguish between frequency-components that are close. Doing so generally requires that a larger number of input samples be given.

For example, if we were using a simple Quadrature Mirror Filter with a Daubechies Wavelet, it is assumed that 8 coefficients form a good basis. Yet even those corresponding samples would have been derived from an initial 4 samples, by up-sampling the real input data. So perhaps good results *are* possible starting with only 4 real samples?

I did some Computer Algebra this afternoon, in which I allowed “Maxima” to compute the definite integral of the product of the polynomial at 3/4 the Nyquist Frequency with the aliased cosine function at 5/4 the Nyquist Frequency, over the defined interval from (0) to (3). What I found was that the correlation was only about 7.8% ! I also did the corresponding computation, with the intended signal at 3/4 the Nyquist Frequency, in This little Typesetting Exercise, to find 97.7%.

However, I think that in practice, the amount of suppression equals the factor by which the interpolated value is closer to the ideal value than the aliased wave would be.

I seem to have discovered that, at or above 3/4 the Nyquist Frequency, this method does not reject aliased frequency components much better than a simpler, linear interpolation would. However, at or below 1/2 the Nyquist Frequency, it does, as shown in the earlier posting.

Dirk

(Edit 08/21/2016 : ) If the interpolated sample lies exactly between the intended sine-wave and the aliased one, then the rejection is actually NULL, because both are being represented equally. And trust me, at 0.9 the Nyquist Frequency, i.e. if we are reproducing a 20 kHz intended signal at a 44.1 kHz sampling rate, this situation will occur in an obvious way, during intervals of 10 up-sampled samples, in which the apparent output amplitude will temporarily seem low.
