The wording ‘Light Values’ can play tricks on people.

What I wrote before was that, between (n) real 2D photos, 1 light-value can be sampled.

Some people might infer that I meant always to use the brightness value. But that would actually be wrong, because I am assuming that color footage is being used.

And if I wanted to compare pixel-colors to determine best-fit geometry, I would most want to go by a single hue-value.

If the color being mapped averages to ‘yellow’, as facial colors do, then hue would be best defined as ‘the difference between the Red and Green channels’.
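As a minimal sketch of what such a comparison could look like (the use of NumPy and the function name are my own assumptions, not anything taken from an actual production pipeline):

```python
import numpy as np

def yellowish_hue(rgb):
    """Given an (H, W, 3) float array of RGB values in [0, 1],
    return a per-pixel 'hue' signal defined as Red minus Green.
    For colors that average to yellow (Red and Green both high,
    Blue low), this difference is where the useful variation lives."""
    return rgb[..., 0] - rgb[..., 1]
```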

But where this works out negatively is in the fact that the actual photographic film in use around 1977 differentiated most poorly between Red and Green, as did any chroma / video signal. And Peter Cushing was filmed in 1977, so it is that footage from which any reconstruction of him for today’s movies would have to be built.

So an alternative might be: ‘Normalize all the pixels to have the same luminance, and then pick whichever primary channel the source was best able to resolve into minute details, on a physical level.’
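A minimal sketch of that normalization, assuming NumPy, Rec. 601 luma weights (my guess; I don’t know what weighting would really suit 1977 film stock), and the guess that Red is the channel to keep:

```python
import numpy as np

# Rec. 601 luma weights; an assumption on my part.
LUMA = np.array([0.299, 0.587, 0.114])

def normalized_channel(rgb, channel=0, target_luma=0.5):
    """Scale each pixel of an (H, W, 3) RGB array so that every pixel
    ends up with the same luminance, then return one primary channel
    (channel=0 selects Red, on the guess that Red resolved best)."""
    luma = rgb @ LUMA                           # per-pixel luminance
    scale = target_luma / np.maximum(luma, 1e-6)
    return (rgb * scale[..., None])[..., channel]
```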

Maybe 1977 photographic projector-emulsions differentiated the Red primary channel best?

Further, given that there are 3 primary colors in most forms of graphics digitization, and that I would remove the overall luminance, it would follow that maybe the 2 remaining, actual color channels could be used, the variance of each computed separately, and the variances added?

In general, it is mathematically safer to add Variances than to add Deviations, where Variance corresponds to Deviation squared, and where Variance therefore also corresponds to Energy, if Deviation corresponds to Potential. It is generally agreed that Energy and its homologues are conserved quantities.
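A minimal sketch of the resulting mismatch score, under my own assumptions about the data layout (an (n, 3) NumPy array holding the n photos’ luminance-normalized RGB samples of one candidate surface point, with Red and Blue kept as the two remaining channels, which is only one possible choice):

```python
import numpy as np

def chroma_mismatch(samples):
    """samples: (n, 3) array, one luminance-normalized RGB sample per
    photo, all supposedly of the same 3D surface point.
    Compute the variance of each remaining channel separately across
    the n photos and add the variances; for independent errors,
    variances add cleanly, whereas deviations do not.
    A lower total means the candidate geometry explains the photos
    better."""
    return np.var(samples[:, 0]) + np.var(samples[:, 2])
```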

Dirk

 

There are situations in which Photogrammetry won’t do the job.

In place of painting a human actor with a laser-grid, there now exists an alternative, which is called “Photogrammetry”. This is a procedure by which multiple 2D photos of the same subject, taken from different angles, are combined by a computer program into an optimal 3D model.

The older photogrammetry required humans in the loop, while the newer approaches do not.

With the 3D laser grid-lines, a trick they use is to have their cameras take two sets of photos: first with the grid off, and then with the grid on. The second set is used to build the 3D model, while the first is used to create the texture-images.

One main problem with photogrammetry, by contrast, is that the subject must have exactly the same 3D geometry shared between 4, 5, 6 photos etc., depending on how high we want the level of quality to be.

Peter Cushing, for example, would need to have stood once in a circle of cameras that all fired simultaneously, in order to have been recreated in “Star Wars – Rogue One”.

Instead, the stock footage consists of many 2D views, each from one perspective, each with the subject in a different pose, each with the subject bearing a different facial expression, each with his hair done slightly differently…

That type of footage tends to be the least useful for photogrammetry.

So what they probably did was try to create a 3D model of him ‘to the best of their human ability’. And the way human vision works, that data only needs to be wrong by one iota for the viewer ‘not to see the same person’.

Similarly, I still don’t think that a 3D Texture, as opposed to a 2D Texture, can simply be photographed. 3D, Tangent-Mapped Textures need to have normal-maps generated, which derive from depth-maps, and those depth-maps tend to be the work of human Texture Artists, who Paint them (yes, Paint them).
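For what it’s worth, the purely mechanical step of deriving a normal-map from a painted depth-map can be sketched as follows (a hypothetical, minimal version using NumPy; real texture tools offer far more control):

```python
import numpy as np

def depth_to_normal_map(depth, strength=1.0):
    """depth: (H, W) float array, a hand-painted height/depth map.
    Returns an (H, W, 3) tangent-space normal map, built from the
    image-space gradient of the depth and then renormalized.
    The painted depth, not a photograph, is where the fine 3D detail
    actually comes from."""
    dz_dy, dz_dx = np.gradient(depth)
    normals = np.dstack((-dz_dx * strength,
                         -dz_dy * strength,
                         np.ones_like(depth, dtype=float)))
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / norm
```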

They can sometimes also be ‘off’. The Texture Artist may exaggerate certain scars or pimples that the real Actor had, and so cause the 3D model, once again, not to look real.

Dirk
