The Cumulative Effect, Of Adding Many Random Numbers

The question must have crossed many people’s minds: what is the cumulative effect of taking the same calculated risk many times, i.e., of adding a series of numbers, each of which is random and, for the sake of argument, each of which has the same standard deviation?

The formal answer to that question is explained in this Wikipedia article. What the article states is that ‘If two independently-random numbers are added, their expected values are added, as well as their variances, to give the expected value and the variance of the sum.’

But what I already know is that standard deviation is the square root of variance; equivalently, variance is standard deviation squared. So suppose the standard deviation of the individual numbers is known in advance, and that (n) such random numbers are to be added. The variance of the sum is then (n) times the variance of any one number, and because standard deviation is the square root of variance, the standard deviation of the sum will increase as the square root of (n), times whatever the standard deviation of any one number in the series was.
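Written out, with σ denoting the standard deviation of any one number in the series, and assuming the (n) numbers are independent:

\[ \operatorname{Var}(X_1 + X_2 + \cdots + X_n) = n\,\sigma^2 \quad\Longrightarrow\quad \operatorname{SD}(X_1 + X_2 + \cdots + X_n) = \sqrt{n}\,\sigma \]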

This realization should be important to anyone who has a gambling problem, because people may have a tendency to think that, if they had ‘bad luck’ at a gambling table, ‘future good luck’ will come to cancel out the bad luck they’ve already experienced. This is generally untrue: as (n) increases, the square root of (n) keeps increasing, and so it just takes the sum – of individual bets, if the reader wishes – further and further away from the expected value. On average!

But if we are to consider the case of gambling, then we must also take into account the expected value, which is just the average return of one bet. In the real-world case of gambling, this value is biased against the player, and earns the gambling establishment its profit. And according to what I wrote above, expected values simply add, so the expected loss over (n) bets grows linearly with (n).
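As a rough sketch of both effects at once – the house edge of 1/37 on an even-money European-roulette bet is used purely for illustration, and is not something discussed above – a short simulation shows the expected loss growing linearly in (n), while the spread of outcomes only grows as the square root of (n):

```python
import numpy as np

# Illustrative only: an even-money bet on a European roulette wheel,
# which wins with probability 18/37 (expected return -1/37 per unit bet).
rng = np.random.default_rng(0)
p_win = 18 / 37
trials = 20_000            # independent gambling "sessions" to simulate

for n in (100, 1_000, 10_000):
    wins = rng.binomial(n, p_win, size=trials)
    totals = 2 * wins - n                      # net result of n one-unit bets
    print(f"n={n:6d}"
          f"  mean={totals.mean():9.1f} (≈ n*(-1/37) = {-n/37:8.1f})"
          f"  std={totals.std():7.1f} (≈ sqrt(n) = {np.sqrt(n):6.1f})")
```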

Now, the question which may come to mind next is what effect such a summation of data has on averages. And the answer lies in the fact that the square root of (n) is a half-power of (n). The first power of (n) grows linearly with (n), while the zeroth power of (n) just stays constant.

And so the first effect of summing many random numbers is that the maximum and the minimum results theoretically possible will be (n) times as far apart as they were for any one random number. This reflects the possibility that, ‘if (n) dice were rolled’, they could theoretically all come up as the maximum value possible, or all come up as the minimum value possible. What this does to the graph of the distribution is initially to make its domain linearly wider along the x-axis, as a function of (n) – as the first power of (n).
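A small numerical check, using the ordinary six-sided dice mentioned above: the theoretically possible range of the sum grows as the first power of (n), while its standard deviation grows only as the half-power, so the spread becomes ever narrower relative to the domain:

```python
import numpy as np

faces = np.arange(1, 7)                     # one ordinary six-sided die
sd_one = faces.std()                        # ≈ 1.71, the SD of a single roll
range_one = int(faces.max() - faces.min())  # 5

for n in (1, 10, 100, 1_000):
    total_range = n * range_one             # grows as the first power of n
    total_sd = np.sqrt(n) * sd_one          # grows only as the half-power of n
    print(f"n={n:5d}  range={total_range:6d}  sd={total_sd:8.2f}"
          f"  sd/range={total_sd / total_range:.4f}")
```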

(Updated 05/16/2018 … )


The wording ‘Light Values’ can play tricks on people.

What I wrote before was that, between (n) real, 2D photos, 1 light-value can be sampled.

Some people might infer that I meant always to use the brightness value. But this would actually be wrong: I am assuming that color footage is being used.

And if I wanted to compare pixel-colors, to determine best-fit geometry, I would most want to go by a single hue-value.

If the color being mapped averages to ‘yellow’ – which facial colors do – then hue would be best-defined as ‘the difference between the Red and Green channels’.

But the way this works out negatively is that the actual photographic film used around 1977 differentiated most poorly between Red and Green, as did any chroma / video signal. And Peter Cushing was being filmed in 1977, so that a reconstruction of him might appear in today’s movies.

So then an alternative might be, ‘Normalize all the pixels to have the same luminance, and then pick whichever primary channel the source was best able to resolve into minute details, on a physical level.’

Maybe 1977 photographic projector-emulsions differentiated the Red primary channel best?

Further, given that there are 3 primary colors in most forms of graphics digitization, and that I would remove the overall luminance, it would follow that maybe 2 actual remaining color channels could be used, the variance of each computed separately, and the variances added?
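As a sketch of how that could look numerically – the equal-weight luminance and the particular pair of chroma differences below are my own assumptions, not something specified above – the (n) pixel-colors sampled for one candidate point could be normalized to a common luminance, reduced to two remaining color channels, and the per-channel variances added into a single mismatch score:

```python
import numpy as np

def chroma_mismatch(samples: np.ndarray) -> float:
    """Mismatch score for one candidate surface point, given the n pixel-colors
    sampled from n real photos.  samples has shape (n, 3), in RGB order.

    Assumptions (illustrative only): luminance is a simple equal-weight average
    of R, G and B; the two channels kept after normalization are a red-green
    difference and a blue-yellow difference."""
    samples = samples.astype(float)
    luminance = samples.mean(axis=1, keepdims=True)      # per-sample brightness
    normalized = samples / np.maximum(luminance, 1e-9)   # same luminance for all

    red_green = normalized[:, 0] - normalized[:, 1]      # ~ hue, for yellowish colors
    blue_yellow = normalized[:, 2] - 0.5 * (normalized[:, 0] + normalized[:, 1])

    # Adding the variances (not the deviations) keeps the score well-behaved,
    # in the same way that energies, rather than potentials, add.
    return red_green.var() + blue_yellow.var()

# n = 6 hypothetical pixel-colors of the same surface point, one per photo
photos = np.array([[200, 180, 150],
                   [190, 172, 140],
                   [210, 188, 158],
                   [198, 179, 149],
                   [205, 184, 152],
                   [195, 176, 147]])
print(chroma_mismatch(photos))   # small value -> the photos agree on this point
```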

In general, it is Mathematically safer to add Variances, than it would be to add Deviations, where Variance corresponds to Deviation squared, and where Variance therefore also corresponds to Energy, if Deviation corresponded to Potential. It is more generally agreed that Energy and its homologues are conserved quantities.

Dirk

 

Modern Photogrammetry

Modern Photogrammetry makes use of a Geometry Shader – i.e. a Shader which starts with a coarse grid in 3D, and which interpolates a fine grid of micropolygons, again in 3D.

The principle goes that a first-order, approximate 3D model provides per-vertex “normal vectors” – i.e. vectors that stand out at exact right angles from the 3D model’s surface, in 3D – and that a Geometry Shader actually renders many interpolated points to several virtual camera positions. These virtual camera positions correspond, in 3D, to the assumed positions from which real cameras photographed the subject.

The Geometry Shader displaces each of these points, but only along its interpolated normal vector, derived from the coarse grid, until the positions those points render to take light-values from the real photos that correlate to the closest extent. I.e. the premise is that, at some exact position along the normal vector, a point generated by the Geometry Shader will project to positions in all the real camera-views at which all the real, 2D cameras photographed the same light-value. Finding that point is a 1-dimensional process, because it only takes place along the normal vector, and can thus be achieved with successive approximation.
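A minimal sketch of that one-dimensional successive approximation, in which the camera-projection and photo-sampling functions are placeholders assumed for illustration, not part of any particular shader API:

```python
import numpy as np

def best_displacement(point, normal, cameras, photos,
                      t_min=-1.0, t_max=1.0, steps=64, refinements=4):
    """Successive approximation along one interpolated normal vector.

    point, normal : numpy 3-vectors taken from the coarse model
                    (normal is assumed to be unit-length)
    cameras       : list of placeholder functions, each projecting a 3D point
                    to the 2D pixel coordinates of one real photo
    photos        : list of placeholder functions, each sampling a light-value
                    at given 2D coordinates of the corresponding photo
    Returns the displacement t at which the sampled light-values agree best.
    """
    lo, hi = t_min, t_max
    best_t = 0.0
    for _ in range(refinements):               # coarse-to-fine 1-D search
        ts = np.linspace(lo, hi, steps)
        scores = []
        for t in ts:
            candidate = point + t * normal     # move only along the normal
            values = [photo(*camera(candidate))          # light-value seen by
                      for camera, photo in zip(cameras, photos)]   # each camera
            scores.append(np.var(values))      # small variance = close agreement
        best_t = float(ts[int(np.argmin(scores))])
        half = (hi - lo) / steps               # narrow the interval around best_t
        lo, hi = best_t - half, best_t + half
    return best_t
```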

(Edit 01/10/2017 : To make this easier to visualize: if the original geometry was just a rectangle, then all the normal vectors would be parallel. Then, if we subdivided this rectangle finely enough and projected each micropolygon some variable distance along that vector, there would be no point in the volume in front of the rectangle which would not eventually be crossed. At a point corresponding to a 3D surface, all the cameras viewing the volume should in principle have observed the same light-value.

Now, if the normal-vectors are not parallel, then these paths will be more dense in some parts of the volume and less dense in others. But then the assumption becomes that their density should never actually reach zero, and finer subdivision of the original geometry can also counteract this to some extent.

But there can exist many 3D surfaces which would occupy more than one point along the projected path of one micropolygon – such as a simple sphere in front of an initial rectangle. Many paths would enter the sphere at one distance, and exit it again at another. There could exist a whole, complex scene in front of the rectangle. In those cases, starting with a coarse mesh which approximates the real geometry in 3D is more of a help than a hindrance, because then, optimally, there is again only one distance of projection of each micropolygon which corresponds to the exact geometry.)

Now, one observation which some people might make is that the initial, coarse grid might be inaccurate to begin with. But surprisingly, this type of error cancels out. This is because each micropolygon-point will have been displaced from the coarse grid enough that the coarse grid is finally no longer recognizable from the positions of the micropolygons. And the way the micropolygons are displaced is also such that they never cross paths – since their paths are interpolated normal vectors – and so no Mathematical contradictions can result.

That is, this holds to whatever extent geometric occlusion has been explained by the initial, coarse model.

Granted, if the initial model was partially concave, then projecting all the points along their normal vectors will eventually cause their paths to cross. But then this also defines the point at which the system no longer works.

But, according to what I just wrote, even the lighting needs to be consistent across one set of 2D photos, so that any match between their light-values actually has the same meaning. And really, it’s preferable to have about 6 such photos…

Yet, there are some people who would argue that superior Statistical Methods could still find the optimal correlations in 1-dimensional light-values, between a higher number of actual photos…

One main limitation to providing photogrammetry in practice is the fact that the person doing it may have the strongest graphics card available, but eventually needs to export his data to users who do not. So one way it works for public consumption is that the actual photogrammetry gets done on a remote server – perhaps a GPU farm – but then simplified data can be downloaded onto our tablets or phones, which the mere GPU of that tablet or phone is powerful enough to render.

But the GPU of the tablet or phone is itself not powerful enough to do the actual successive approximation of the micropolygon-points.

I suppose that Hollywood might not have that latter limitation. As far as they are concerned, their CGI specialists could all have the most powerful GPUs, all the time…

Dirk

P.S. There exists a numerical approach which simplifies computing Statistical Variance in such a way that Variance can effectively be computed over ‘an infinite number of sample-points’, at a computational cost which is ‘only proportional to the number of sample-points’. And the equation is not so complicated:

s² = Mean(X²) − ( Mean(X) )²
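A minimal sketch of that approach, accumulating the two running sums in a single pass, so that the cost stays proportional to the number of sample-points:

```python
def one_pass_variance(samples):
    """Variance via  Mean(X^2) - (Mean(X))^2,  accumulated in a single pass."""
    n = 0
    sum_x = 0.0
    sum_x2 = 0.0
    for x in samples:           # 'samples' may be any iterable, even a stream
        n += 1
        sum_x += x
        sum_x2 += x * x
    mean = sum_x / n
    return sum_x2 / n - mean * mean

print(one_pass_variance([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))   # -> 4.0
```

One caveat: in floating-point arithmetic this form can lose precision when the mean is large compared to the spread, for which Welford’s method is the usual remedy; but the equation above is the one being described here.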

