Some time ago, I posted an idea on how the concept of a polynomial approximation can be simplified, in terms of the real-time Math that needs to be performed, in order to produce 4x oversampling, in which the positions of the interpolated samples with respect to time are fixed.
In order for the reader to understand the present posting, which is a reiteration, he or she would need to read the posting I linked to above. Without reading that posting, the reader will not understand the matrix which I included below.
There was something clearly wrong with the idea I wrote above, but what is wrong is not the fact that I computed, or assumed the usefulness of, a product between two matrices. What is wrong with the idea as first posted is that the order of the approximation is only 4, thus implying a polynomial of the 3rd degree. This is a source of poor approximations close to the Nyquist Frequency.
But as I wrote before, the idea of using anything based on polynomials can be extended to 7th-order approximations, which imply polynomials of the 6th degree. Further, there is no reason why a 7×7 matrix cannot be pre-multiplied by a 3×7 matrix; the result will simply be another 3×7 matrix.
Hence, if we were to assume that such a matrix is to be used, this is the worksheet which computed what that matrix would have to be:
The way this would be used in a practical application is that a vector of input samples is formed, corresponding to
t = [ -3, -2, -1, 0, +1, +2, +3 ]
And that the interpolation should result corresponding to
t = [ 0, 1/4, 1/2, 3/4 ]
Further, the interpolation at t = 0 does not need to be recomputed, as it is already provided by the 4th element of the input vector. So the input vector only needs to be multiplied by the suggested matrix to arrive at the other 3 values. After that, a new sample can be shifted in as the new 7th element of the vector, while the old 1st element is dropped, so that another 3 interpolated samples can be computed.
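As a sketch of how such a 3×7 matrix could be pre-computed and then applied to a sliding window of samples – using exact rational arithmetic; the helper functions and the test signal here are my own, not taken from the original worksheet:

```python
from fractions import Fraction

def inverse(m):
    """Invert a square matrix of Fractions by Gauss-Jordan elimination."""
    n = len(m)
    a = [row[:] + [Fraction(int(i == j)) for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        # find a row with a non-zero pivot in this column, and swap it up
        piv = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

# X1: powers of the 7 sample positions t = -3 ... +3 (a 7x7 Vandermonde matrix)
ts = [Fraction(t) for t in range(-3, 4)]
X1 = [[t ** k for k in range(7)] for t in ts]

# X2: powers of the 3 interpolation positions t = 1/4, 1/2, 3/4
us = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]
X2 = [[u ** k for k in range(7)] for u in us]

# X3 = X2 * X1^-1 is the constant 3x7 matrix applied at run-time
X3 = matmul(X2, inverse(X1))

# Sliding window of 7 input samples; here, y = (t+3)^2 as a test signal
window = [Fraction(y) for y in (0, 1, 4, 9, 16, 25, 36)]
interp = [sum(c * y for c, y in zip(row, window)) for row in X3]
print(interp)
```

Because a 7-point fit reproduces any polynomial up to the 6th degree exactly, feeding it samples of a parabola returns the exact parabola values at t = 1/4, 1/2, 3/4.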
This would be an example of an idea which does not work out well in its first approximation, but which produces high-quality results when the method is applied more rigorously.
I have already mentioned in this blog that I use an application named ‘Celestia’, which basically anybody can download and ‘play with’. This is an Astronomy application which displays to the user graphically, not only what the sky above the Earth would have looked like at some arbitrary point in time, but also what the sky – i.e. the star field – would look like as seen from elsewhere in the near regions of space, such as anywhere within our own solar system, or from the closest neighbors to our own sun.
In fact, I even uploaded a YouTube video which explains the basic usage of this program to anybody.
This is another video which I uploaded at 1920×1080 resolution, but which the viewer may have to play with somewhat, even after making it full-screen, to switch its resolution to True HD.
(Edit 11/07/2017 :
When recording the above screen-cast, I nearly sidetracked the main subject – of how to navigate the program – with the question of how to change the Field Of View, the ‘FOV’, indicated in the bottom-right-hand corner of the screen. I do know from experience that when ‘in synchronous orbit around the Moon’ and looking back at the Earth, using the scroll-wheel of the mouse does not make the Earth look any larger or smaller, because the scroll-wheel will then only change the distance at which my camera-position is orbiting the Moon.
The way to adjust the FOV, finally, is to hold down the <Shift> key, and then to click-and-drag with the left mouse button.
Also, the distinction is known to me between how this program defines ‘a synchronous orbit’, and what a synchronous orbit would be according to Physics. A synchronous orbit needs to have one specific altitude, at which a stable, circular orbit has the same rotational period as the body we’re orbiting. In the case of the Moon, this may not even be achievable. Yet, the way ‘Celestia’ defines a synchronous orbit is as my screen-cast shows. )
The problem with Kepler’s Laws is that they not only assume Newtonian Gravity, but they also assume that each orbit is only affected by the gravitational interaction of two bodies: a body assumed to be orbiting, and the body around which the first is assumed to be orbiting. The problem with this is the fact that in the real Universe, every body that causes gravitation eventually exerts that gravitation on every other body – out to infinite distance. The fact that each and every person standing on the surface of the Earth experiences the gravitation of the Earth is a kind of proof of that. In theory, the gravitation of the Earth not only affects the orbit of the Moon, but to a lesser extent, also the orbits of Mars and Venus – except that at the distance of Mars, for example, the gravitation of the Earth would be negligible in strength. The effect of a car that passes on the street in front of an Inhabitant of Earth is in fact stronger than the effect of the gravitation of Mars on the same Inhabitant. And this is simply because the strength of gravitation decreases as the reciprocal of distance squared.
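The car-versus-Mars comparison can be checked with Newton’s law ( a = G*M / r^2 ). The figures below are illustrative assumptions on my part – a 1500 kg car at 5 meters, and Mars at a typical (not closest-approach) Earth–Mars distance:

```python
# Rough comparison of gravitational accelerations felt by a person on Earth,
# using a = G*M / r^2. Car mass/distance and the Earth-Mars distance are
# illustrative assumptions, not measured values.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2

M_mars = 6.42e23         # mass of Mars, kg
r_mars = 2.25e11         # a typical Earth-Mars distance, m (assumed)
a_mars = G * M_mars / r_mars ** 2

M_car = 1500.0           # a passing car, kg (assumed)
r_car = 5.0              # distance to the car, m (assumed)
a_car = G * M_car / r_car ** 2

print(f"Mars: {a_mars:.2e} m/s^2, car: {a_car:.2e} m/s^2")
# The nearby car dominates, because gravitation falls off as 1/r^2.
```

At Mars’s closest approaches the comparison tightens considerably, but over most of the orbit the nearby car wins.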
But what this means is that over longer time-frames, the orbits of the planets become ‘perturbed.’ Mostly, this is due to the effects of the gravitation of Gas Giants, on the smaller planets of our solar system, but it really applies generally, wherever gravitation exists.
Well, the programmers of Celestia took this into consideration when they created that program. What they did was to assume that Kepler’s laws generally apply when they are fed a single parameter – time – and that they predict elliptical orbits as a linear function of time. But, because real orbits are perturbed, it was at some point thought best that time first be fed through a polynomial, to arrive at the parameters which are then fed into Kepler’s Model – such as the one parameter that states in what phase of its orbit a planet is, as it orbits our sun.
In reality, this method of applying a polynomial does not reflect any physical realities of ‘how the orbits work’. What it reflects is that Real Astronomers, at some time in the past, used very powerful computers to compute gravitational interactions, and that the result of their simulation was a continuous sequence of planetary positions, which next needed to be stated somehow. The reader might think that nothing but a series of numerical values would be needed, except that one problem with that idea would be that, effectively, an infinite series of numerical values would be needed, because any time-interval can be magnified, and the motion of planets is supposed to remain continuous.
And so, what Astronomers did was to express the results of their simulation as a polynomial approximation, which consumers’ computers can next turn into real values for what the position of a planet is supposed to be, at any precise point in time.
In other words, the use of a polynomial approximation served essentially as a type of data compression, and not as a physical model of gravity.
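A hypothetical sketch of what consuming such compressed results looks like: the coefficients below are invented placeholders, not real ephemeris data – only the evaluation scheme (Horner’s method, which is how polynomial ephemerides are typically evaluated) is the point.

```python
def horner(coeffs, t):
    """Evaluate a polynomial, with coefficients ordered from highest power down."""
    acc = 0.0
    for c in coeffs:
        acc = acc * t + c
    return acc

# Invented placeholder coefficients: the mean longitude of a planet, in degrees,
# as a polynomial in t (centuries since some epoch). Real ephemerides publish
# compact tables of such coefficients instead of endless lists of positions.
mean_longitude = [0.0000003, -0.0001, 35999.37, 100.46]  # highest power first

for t in (0.0, 0.5, 1.0):
    print(t, horner(mean_longitude, t) % 360.0)
```

Any time-interval can be magnified, and the polynomial still returns a continuous position – which is exactly what a raw table of sampled values could not do.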
I have posted several times that it is possible to pre-compute a matrix, such that multiplying a set of input samples by this matrix will result in the coefficients of a polynomial, and that next, a fractional position within the center-most interval of this polynomial can be computed – as a polynomial – to arrive at a smoothing function. There is a difference between how I represented this subject, and how it would be implemented.
I assumed non-negative values for the Time parameter, from 0 to 3 inclusive, such that the interval from 1 … 2 can be smoothed. This might work well for degrees up to 3, i.e. for orders up to 4. But in order to compute the matrices accurately, even using computers, when the degree of the polynomial is anything greater than 3, it makes sense to assume x-coordinates from -1 … +2, or from -3 … +3, because we can instruct a computer to divide by 3 to the 6th power more easily than by 6 to the 6th power.
And then in general, the evaluation of the polynomial will take place over the interval 0 … +1 .
The results can easily be shifted anywhere along the x-axis, as long as we only do the interpolation closest to the center. But the computation of the inverse matrix cannot be shifted in the same way.
Also, if it were our goal to illustrate the system to a reader who is not used to Math, then the hardest fact to prove would be that the matrix of terms has a non-zero determinant, and is therefore invertible, when some of the terms are negative, as was the case before.
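That determinant can in fact be checked directly. The matrix of terms is a Vandermonde matrix, whose determinant is known to equal the product of all pairwise differences of the x-coordinates – which cannot be zero as long as the coordinates are distinct, negative or not. A quick check for the nodes -3 … +3, in exact integer arithmetic:

```python
from itertools import combinations
from math import prod

xs = [-3, -2, -1, 0, 1, 2, 3]

# Vandermonde determinant: product of (x_j - x_i) over all pairs with i < j.
det = prod(xj - xi for xi, xj in combinations(xs, 2))
print(det)  # non-zero, so the 7x7 matrix of powers is invertible
```

Shifting all the nodes by a constant leaves every pairwise difference – and hence the determinant – unchanged, which is why the nodes 0 … 6 give the same value.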
In This Posting, I elaborated at length about Polynomial Approximation that is not overdetermined, but rather exactly defined by a set of unknown (y) values along a set of known time-coordinates (x). Just to summarize: if the sample time-points are known to be arbitrary x-coordinates 0, 1, 2 and 3, then the matrix (X1) can state the powers of these coordinates, and if additionally the vector (A) states the coefficients of a polynomial, then the product ( X1 * A ) would produce the four y-values as vector (Y).
X1 can be computed before the algorithm is designed, and its inverse, ( X1^-1 ), would be such that ( X1^-1 * Y = A ). Hence, given a prepared matrix, a linear multiplication can derive a set of coefficients easily from a set of variable y-values.
Well, this idea can be driven further. There could be another arbitrary set of x-coordinates 1.0, 1.25, 1.5, 1.75, which are meant to be a 4-point interpolation within the first set. And then another matrix could be prepared before the algorithm is committed, called (X2), which states the powers of this sequence. And then ( X2 * A = Y' ), where (Y') is a set of interpolated samples.
What follows from this is that ( X2 * X1^-1 * Y = Y' ). But wait a moment. Before the algorithm is ever burned onto a chip, the matrix ( X2 * X1^-1 ) can be computed by Human programmers. We could call that our constant matrix (X3).
So a really cheap interpolation scheme could start with a vector of 4 samples (Y), and derive the 4 interpolated samples (Y') just by doing one matrix-multiplication ( X3 * Y = Y' ). It would just happen that
( Y'1 = Y2 )
And so we could already guess, off the top of our heads, that the first row of X3 should be equal to ( 0, 1, 0, 0 ).
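That guess can be verified by actually computing ( X3 = X2 * X1^-1 ). A minimal sketch in exact rational arithmetic – the helper functions are my own:

```python
from fractions import Fraction

def inverse(m):
    """Gauss-Jordan inverse over exact Fractions."""
    n = len(m)
    a = [row[:] + [Fraction(int(i == j)) for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        piv = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

# X1: powers of the sample positions x = 0, 1, 2, 3
X1 = [[Fraction(x) ** k for k in range(4)] for x in range(4)]

# X2: powers of the interpolation positions x = 1, 5/4, 3/2, 7/4
xs2 = (Fraction(1), Fraction(5, 4), Fraction(3, 2), Fraction(7, 4))
X2 = [[x ** k for k in range(4)] for x in xs2]

X3 = matmul(X2, inverse(X1))
print(X3[0])   # the first row simply picks out the 2nd input sample
```

Since the first interpolation position x = 1 coincides with the second sample position, the first row of X2 is identical to the second row of X1, and multiplying it by ( X1^-1 ) necessarily yields ( 0, 1, 0, 0 ).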
While this idea would certainly be considered obsolete by today’s standards, it would correspond roughly to the amount of computational power a single digital chip would have had in real-time, in the late 1980s.
I suppose that an important question to ask would be, ‘Aside from just stating that this interpolation smooths the curve, what else does it cause?’ And my answer would be that, although it allows for (undesirable) aliasing of frequencies to occur during playback when the encoded ones are close to the Nyquist Frequency, if the encoded frequencies are about 1/2 that or lower, very little aliasing will take place. And so, over most of the audible spectrum, this will still act as a kind of low-pass filter, even though over-sampling has taken place.