A Clarification on Polynomial Approximations… Refining the Exercise

Some time ago, I posted an idea on how the concept of a polynomial approximation can be simplified, in terms of the real-time Math that needs to be performed, in order to produce 4x oversampling in which the positions of the interpolated samples with respect to time are fixed.

In order for the reader to understand the present posting, which is a reiteration, he or she would need to read the posting I linked to above. Without reading that posting, the reader will not understand the Matrix, which I included below.

There was something clearly wrong with the idea I described above, but what is wrong is not the fact that I computed, or assumed the usefulness of, a product between two matrices. What is wrong with the idea as first posted is that the order of the approximation is only 4, implying a polynomial of the 3rd degree. This is a source of poor approximations close to the Nyquist Frequency.

But as I wrote before, the idea of using anything based on polynomials can be extended to 7th-order approximations, which imply polynomials of the 6th degree. Further, there is no reason why a 7×7 matrix cannot be pre-multiplied by a 3×7 matrix. The result will simply be a 3×7 matrix.

Hence, if we assume that such a matrix is to be used, this is the worksheet which computed what that matrix would have to be:

Work-Sheet

The way this would be used in a practical application is that a vector of input-samples would be formed, corresponding to

t = [ -3, -2, -1, 0, +1, +2, +3 ]

And that the interpolation would result at positions corresponding to

t = [ 0, 1/4, 1/2, 3/4 ]

Further, the interpolation at t = 0 does not need to be recomputed, as it was already provided by the 4th element of the input vector. So the input-vector would only need to be multiplied by the suggested matrix, to arrive at the other 3 values. After that, a new sample can be taken in as the new 7th element of the vector, while the old 1st element is dropped, so that another 3 interpolated samples can be computed.
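As a concrete illustration, here is a minimal numpy sketch of how such a 3×7 matrix could be derived and then applied to a sliding window of samples. The construction (fitting a 6th-degree polynomial through the 7 input samples via a Vandermonde matrix) is my assumption of what the worksheet computes, and the test signal is only a placeholder:

import numpy as np

t_in  = np.arange(-3, 4)                    # input-sample positions [ -3 ... +3 ]
t_out = np.array([0.25, 0.5, 0.75])         # positions of the 3 new samples

V7 = np.vander(t_in,  7, increasing=True)   # 7x7: polynomial coefficients -> samples
V3 = np.vander(t_out, 7, increasing=True)   # 3x7: polynomial coefficients -> new samples

M = V3 @ np.linalg.inv(V7)                  # the suggested 3x7 matrix

# Streaming use: keep the last 7 input samples.  Each time a new sample
# arrives, drop the oldest one and compute the 3 interpolated values that
# lie between the 4th and 5th elements of the window.
input_stream = np.sin(0.3 * np.arange(64))  # placeholder test signal
window = np.zeros(7)
for new_sample in input_stream:
    window = np.append(window[1:], new_sample)
    three_new = M @ window                  # interpolations at t = 1/4, 1/2, 3/4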

This would be an example of an idea which does not work out well as a first approximation, but which will produce high-quality results when the method is applied more rigorously.

Dirk

 

About +90⁰ Phase-Shifting

I have run into people who believe that a signal cannot be phase-advanced in real-time, only phase-delayed. And as far as I can tell, this idea stems from the misconception that, in order for a signal to be given a phase-advance, some form of prediction would be needed. The fact that this is not true can best be visualized when we take an analog signal and derive another signal from it, which would be the short-term derivative of the first signal. ( :1 ) Because the derivative would be most positive at points in the waveform where the input had the most positive slope, and zero where the input was at its peak, we would already have derived, from an input sine-wave for example, a signal that is phase-advanced 90⁰ with respect to it.
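For a pure sine-wave, this can be written out explicitly (ω being the angular frequency, a detail left implicit above):

d/dt [ sin(ωt) ]  =  ω · cos(ωt)  =  ω · sin(ωt + 90⁰)

So the derived signal leads the input by 90⁰, but with an amplitude proportional to ω.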

[Image: 90-deg-phase-y]

But the main reason this is not done is the fact that a short-term derivative also acts as a high-pass filter, whose output amplitude doubles with every octave of frequency.

What can be done in the analog domain, however, is that a signal can be phase-delayed 90⁰ while its frequency-response is kept uniform, and then simply inverted. The phase-diagram of each of the signal's frequency-components will then show that the entire signal has been phase-advanced 90⁰.

[Image: 90-deg-phase]
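Again, for any single frequency-component, the identity behind the delay-and-invert trick is simply:

- sin(ωt - 90⁰)  =  cos(ωt)  =  sin(ωt + 90⁰)

so a uniform 90⁰ delay followed by an inversion amounts to a 90⁰ advance, this time without the amplitude changing with frequency.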

(Updated 11/29/2017 : )

Continue reading About +90⁰ Phase-Shifting

Observations about the Z-Buffer

Any game-engine currently on the market uses the GPU of your computer (or your tablet) to do most of the work of rendering 3D scenes to a 2D screen, which also represents a virtual camera-position. There are two constants about this process which the game-engine defines: the closest distance at which fragments are allowed to be rendered, which I will name ‘clip-near’, and the maximum distance to which rendering is to be extended, which I will name ‘clip-far’.

Therefore, what some users might expect is that the Z-buffer, which determines the final outcome of the occlusion of the fragments, should contain a simple value from [ clip-near … clip-far ). However, this is not truly how the Z-buffer works, and the reason why has to do with its origins. The Z-buffer belonging to the earliest rendering-hardware was only a 16-bit value, associated with each output pixel! And so a system needed to be developed that could make use of this extremely low resolution, according to which distances closer to (clip-near) would be spaced closer together, and distances closer to (clip-far) would receive a smaller number of Z-values, since at that distance the ability of the player even to distinguish differences in distance was also diminished.

And so the way hardware-rendering began was with this Z-buffer-value representing a fractional value in the range [ 0.0 … 1.0 ). In other words, it was decided early on that these 16 bits followed a decimal point (even though they were binary ones and zeros), and that while (0.0) could be reached exactly, (1.0) could never be reached. And, because game-engine developers love to use 4×4 matrices, there could exist a matrix which defines the conversion from the model-view matrix to the model-view-projection matrix, just so that, minimally, a single matrix could be sent to the graphics card for any one model to render, which would do all the necessary work, including to determine screen-positions and Z-buffer-values.
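To spell out what that conversion amounts to (this is only the usual convention, assuming column vectors): because matrix multiplication is associative,

model-view-projection  =  projection × model-view

so the one product matrix, computed once per model on the CPU, has the same effect on every vertex as applying (model-view) and then (projection) separately.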

The rasterizer is given a triangle to render, and rasterizes the 2D space between its vertices, to include all the pixels and to interpolate all the parameters, according to an algorithm which does not need to be specialized for one sort of parameter or another. The pixel-coordinates it generates are then sent to any Fragment Shader (in modern times), and there are three main reasons why their number does not actually equal the number of screen-pixels:

  1. Occlusion obviates the need for many FS-calls.
  2. Either Multi-Sampling or Super-Sampling changes the true number of fragments that need to be computed, and in the case of Multi-Sampling, in a non-constant way.
  3. “Alpha Entities”, whose textures have an Alpha channel in addition to R, G, B per texel, are translucent and do not write the Z-buffer, thereby requiring that Entities behind them additionally be rendered.

And so there exists a projection-matrix which I can suggest which will do this (vertex-related) work:

 


| 1.0 0.0 0.0 0.0 |
| 0.0 1.0 0.0 0.0 |
| 0.0 0.0 1.0 0.0 |
| 0.0 0.0  a   b  |

a = clip-far / (clip-far - clip-near)
b = - (clip-far * clip-near) / (clip-far - clip-near)


 

One main assumption I am making is that a standard, 4-component position-vector, with components named X, Y, Z and W, and whose (W) component equals (1.0) just as it should, is to be multiplied by this matrix. But as you can see, the output-vector now has a (W) component which will no longer equal (1.0).

The other assumption which I am making here is that the rasterizer will divide (W) by (Z), once for every output fragment. This last request is not unreasonable. In the real world, when objects move further away from us, they seem to get smaller in the distance. Well, in the game-world, we can expect the same thing. Therefore, by default, we would already be dividing (X) and (Y) by (Z), to arrive at screen-coordinates from ( -1.0 … +1.0 ), regardless of what real-world distances from the camera led to the (Z) values.

This gives the game-engine something which photographic cameras fail to achieve at wide angles: a flat field. The position away from the center of the screen becomes the tangent-function of the view-angle away from the Z-axis.
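Stated as a formula (with θ being the horizontal view-angle, measured away from the Z-axis):

X / Z  =  tan(θ)

so the screen-position is exactly proportional to tan(θ), and the same relation holds for (Y) vertically.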

Well, to divide (X) by (Z), and then to divide (Y) by (Z), would actually be two GPU-operations, whereas to scalar-multiply the entire output-vector (X, Y, Z, W) by (1 / Z) would only be one GPU-operation.

Well in the example above, as (Z -> clip-far), the operation would compute:

 



W = a * Z + b

  = (clip-far * clip-far) / (clip-far - clip-near) -
    (clip-far * clip-near) / (clip-far - clip-near)

  = clip-far * (clip-far - clip-near) /
            (clip-far - clip-near)

  = clip-far

Therefore,
  (W / Z) = (W / clip-far) = 1.0


 

And, when (Z == clip-near), the operation would compute:

 



W = a * Z + b

  = (clip-far * clip-near) / (clip-far - clip-near) -
    (clip-far * clip-near) / (clip-far - clip-near)

  = 0.0

Therefore,
  (W / Z) = (0.0 / clip-near) = 0.0


 

Of course I understand that a modern graphics card will have a 32-bit Z-buffer. But then all that needs to be done, for backwards-compatibility with the older system, is to receive a fractional value that has 32 bits instead of 16.
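A minimal numpy sketch of this projection, assuming the column-vector convention described above and two arbitrary example values for the clipping distances, confirms the two worked cases numerically:

import numpy as np

clip_near, clip_far = 0.1, 100.0                       # example values only
a = clip_far / (clip_far - clip_near)
b = -(clip_far * clip_near) / (clip_far - clip_near)

P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0,   a,   b]])

def depth_value(Z):
    v = np.array([0.0, 0.0, Z, 1.0])   # eye-space position, W = 1.0
    out = P @ v                        # output W = a*Z + b, output Z = Z
    return out[3] / out[2]             # the rasterizer's (W / Z) division

print(depth_value(clip_near))   # -> 0.0
print(depth_value(clip_far))    # -> 1.0, the limiting value at clip-far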

Now, there are two main variations on this approach, which some game-engines offer as features, but which can be achieved just by feeding a slightly different set of constants into the same matrix, which the GPU can work with in an unchanging way:

  • Rendering to infinite world coordinates,
  • Orthogonal camera-views.

The values that are needed for the same matrix will be:

Continue reading Observations about the Z-Buffer

Whether the Columns of Matrices have a Natural Order

This article is meant for readers who, like me, have studied Linear Algebra and who, like me, are curious about Quantum Mechanics.

Are the columns of matrices in a given, natural order, as we write them? Well, if we are using the matrix as a rotation matrix in CGI (i.e., its elements are derived from the trig functions of Euler Angles), then the column order depends on the order in which we have labeled the coordinates to be X, Y and Z. We are not free to change this order in the middle of our calculations, but if we decide that X, Y and Z are supposed to form a different set, then we need to use different matrices as well.
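As a small sketch of that dependence (the angle and the relabeling chosen here are arbitrary): a rotation about the third-listed axis has its trig entries in different rows and columns, depending on which physical axis we have decided to list third. Conjugating by a permutation matrix expresses the same physical rotation under the new labeling:

import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)

# Rotation about the third-listed axis, under the labeling (X, Y, Z):
R_xyz = np.array([[  c,  -s, 0.0],
                  [  s,   c, 0.0],
                  [0.0, 0.0, 1.0]])

# Permutation that relabels the coordinates in the order (Z, X, Y):
Perm = np.array([[0.0, 0.0, 1.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])

# The same physical rotation, written for the new column order:
R_zxy = Perm @ R_xyz @ Perm.T
print(R_zxy)     # the trig entries now occupy different rows and columns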

(Edited 02/15/2018 :

OTOH, we also know that a matrix can be an expression of a system of simultaneous equations, which can be solved manually through Gauss-Jordan Elimination on the matrix. If we have found that our system has infinitely many solutions, then we are inclined to say that certain variables are the “Leading Variables” while the others are the “Free Variables”. It is being taught today that the Free Variables can also be made our parameters, so that the set of values for the Leading Variables follows from those parameters. But wait: should it not be arbitrary, for certain combinations of variables, which follows from which?

The answer is that, if we simply use Gauss-Jordan Elimination, and if two variables are connected as having possibly infinite combinations of values, then it will always be the variables stated earlier in the equations which become the Leading ones, while the ones stated later in the equations become the Free Variables. We could restate the same equations with the variables in some other order, and then, sure enough, the variable that used to be a Free one will have become a new Leading one, and vice-versa. (And if we do so, the parametric equations for the other Leading Variables will generally also change.)
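A small worked example (the particular system is mine, chosen only to illustrate the point) shows this with sympy's rref(), which also reports which columns hold the Leading Variables:

# The system   x + 2y +  z = 4
#             2x + 4y + 3z = 11
# has infinitely many solutions.
from sympy import Matrix

A = Matrix([[1, 2, 1, 4],
            [2, 4, 3, 11]])
print(A.rref())
# (Matrix([[1, 2, 0, 1], [0, 0, 1, 3]]), (0, 2))
# Pivot columns (0, 2): x and z are Leading, y is Free.

# Restate the same system with the variables ordered (y, x, z):
B = Matrix([[2, 1, 1, 4],
            [4, 2, 3, 11]])
print(B.rref()[1])
# (0, 2): now y and z are Leading, and x has become the Free Variable.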

As of 02/15/2018:

This is an observation which I once made, based on certain exercises in Linear Algebra which, as taught, had been simplified in the way I described. Eventually, systems of equations will come up in the real world in which a Free Variable actually precedes a Leading Variable, both in the order they get mentioned in the equations and according to the order of the matrix-columns. In such a case, the row containing a single 1 that would correspond to that Free Variable does not occur, so that it also cannot be added to or subtracted from what will be the earlier row.

The latter observation follows from the fact that such systems have infinitely many solutions:

  1. If the Free Variables are just given a set of values that follow from the solution-set, then the Leading Variables would still need to have definite values, as defined by the same solution-matrix;
  2. If the Free Variables are given a set of values that no longer follow from the solution-set, it will not follow that a different set of values for the Leading Variables will make such extraneous solutions viable.

End of edit, 02/15/2018 )

The order of the columns has become the order of discovery.

This could also have ramifications for Quantum Mechanics, where matrices are sometimes used. QM used matrices at first in an effort to be empirical, and to acknowledge that we as Humans can only observe a subset of the properties which particles may have. And then what happens in QM is that some of the matrices used are computed to have Eigenvalues, and if those turn out to be real numbers, they are also thought to correspond to observable properties of particles, while complex Eigenvalues are stated, modestly enough, not to correspond to observable properties of the particle.

Even though this system seems straightforward, it is not foolproof. A Magnetic North Pole corresponds, according to Classical Principles, to an angle from which an assumed current is always seen flowing in one sense, arbitrarily clockwise or counter-clockwise. It should follow, then, that from a different perspective, a current which was flowing clockwise before should always be flowing counter-clockwise. And yet, according to QM, magnetic monopoles should exist.

Continue reading Whether the Columns of Matrices have a Natural Order