In this YouTube Video:
I told potential viewers that there is more than one way in which a 2D Image can be transformed into some form of 3D Geometry, in the form of a mesh suitable for CGI. This took the form of Terrain Objects.
Some of my readers may already know a lot about Terrain Objects, but then again, some may not.
There was a detail about Terrain Objects which my screen-cast failed to explain. A serious game engine, or other 3D rendering engine, will offer its content-developer a variety of different objects out of which he or she can build a game, etc. And most game engines will actually implement Terrain Objects as Entities distinct from generic Models. Not only that, but Convex Models exist in addition to the types of Models that would be used to represent Actors… And the exact way in which this is organized usually depends on the game engine.
What most game engines will do is allow their content-developer simply to specify a height-map, which is a 2D image whose pixel-values are the heights, and convert this into a 3D mesh behind the scenes for him. Not only that, but powerful game engines will actually support ‘Chunked Terrain’ in some way, meaning that the Terrain of a given Game Level is subdivided into Chunks, not all of which need to be loaded onto the graphics card at once.
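As a rough sketch of what such an engine does behind the scenes when it converts a height-map into a mesh, the following Python illustrates the idea. The function name and the scale parameters are my own, not any particular engine's API; the point is only that an N×N height-map yields N×N vertices and (N−1)×(N−1) quads:

```python
# Illustrative sketch: turn a height-map (2D array of pixel heights) into
# a 3D mesh of vertices and quad faces, as a game engine might do
# behind the scenes. Names and scale factors here are assumptions.

def heightmap_to_mesh(heights, cell_size=1.0, height_scale=1.0):
    """heights: list of rows; an N x N height-map yields (N-1) x (N-1) quads."""
    n = len(heights)
    # One vertex per pixel: (x, y, z), with z taken from the pixel value.
    vertices = [(x * cell_size, y * cell_size, heights[y][x] * height_scale)
                for y in range(n) for x in range(n)]
    # One quad per grid cell, indexing its four corner vertices.
    quads = [(y * n + x, y * n + x + 1, (y + 1) * n + x + 1, (y + 1) * n + x)
             for y in range(n - 1) for x in range(n - 1)]
    return vertices, quads

# A 3x3 height-map produces 9 vertices and 2x2 = 4 quads.
verts, quads = heightmap_to_mesh([[0, 1, 0], [1, 2, 1], [0, 1, 0]])
```

This is also why a 257×257 image is the natural companion to a 256×256-quad Terrain: the pixel count matches the vertex count exactly.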
The reason why this is done is the fact that the actual 3D mesh consumes far more graphics memory than a 2D Image would, especially in the case of Terrains. Not having to load the geographical definition of an entire world at once has its benefits.
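A back-of-envelope calculation makes the difference concrete. The sizes below are assumptions on my part (a 16-bit grayscale pixel, versus a typical GPU vertex of position + normal + UV stored as 8 floats, plus 32-bit triangle indices), but the order of magnitude is the point:

```python
# Rough memory comparison for a 257x257 Terrain.
# Assumed formats: 16-bit grayscale pixels; vertices of 8 floats
# (position + normal + UV) at 4 bytes each; 32-bit triangle indices.
side = 257
image_bytes = side * side * 2             # height-map as a 2D image
vertex_bytes = side * side * 8 * 4        # one vertex per pixel
index_bytes = (side - 1) ** 2 * 6 * 4     # 2 triangles (6 indices) per quad
mesh_bytes = vertex_bytes + index_bytes
ratio = mesh_bytes / image_bytes          # mesh is roughly 28x larger
```

Under these assumptions the mesh occupies well over twenty times the memory of the image it was derived from, which is why streaming Chunks in and out pays off.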
But I also felt that it was beyond the scope of my video, to explain that.
(Update 05/08/2018, 15h35 … )
(As of 05/08/2018, 9h00 : )
There was a bug in how the Terrain Object was being deformed by Blender, which I did comment on in the screen-cast. But the initial idea which I voiced in the screen-cast as to why this bug was taking place (certain pixels in the original 2D Image being equal to zero) turned out to be the wrong explanation.
Instead, when the first program produced a 257×257 pixel image, the intent was that the number of pixels correspond exactly to the number of vertices that result, when the number of (3D) quads is a power of two (256). The real problem is that the Blender feature fails to map its 3D Model perfectly to the Image that is to deform it. And so in the ±Y direction, Blender overshoots the 2D Image by exactly 1 pixel, in each direction. This is just a minor bug within Blender 2.79b.
The quick fix for this is, to set the Texture Mapping to ‘Extend’, as shown below:
The reader might expect ‘normal’ texture-mapping rules to apply here. But because this texture is not actually being mapped as such, only used as input for a Blender tool, normal texture-mapping rules do not apply.
Further down the road, a problem will ensue if a 257×257 2D Image is being used to displace 256×256 quads, but overshoots by 1+1 pixels: the positions at which the 2D Image is sampled will also be out of alignment over the entire 3D Model. Blender will perform an interpolation, to correspond to where, between the pixels, the image has been sampled. But the mere fact that an interpolation is being performed will also tend to reduce the sharpness of the height-map.
And so even though it seemed logical to use a 257×257 Image, in the future I'd need to produce a 513×513 Image when the quads I'm to deform number 256×256, just so that the damage caused by the aliasing is kept under control. This issue was actually why the Terrain Object which resulted at first was just a bit smoother than it was supposed to be.
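A one-dimensional toy example shows why off-pixel sampling smooths the result. When a sample lands exactly on a pixel, linear interpolation returns that pixel's value; when the sample lands half a pixel off, interpolation averages two neighbours, and a sharp step loses half its amplitude. The function and values below are purely illustrative, not Blender's actual sampling code:

```python
# 1D illustration of why misaligned sampling blurs a height-map:
# linear interpolation between two neighbouring pixels averages them,
# so a sharp step in the data loses amplitude. Values are illustrative.

def sample_linear(row, x):
    """Sample a 1D row at fractional position x with linear interpolation."""
    i = min(int(x), len(row) - 2)
    frac = x - i
    return row[i] * (1.0 - frac) + row[i + 1] * frac

row = [0.0, 0.0, 1.0, 1.0]        # a sharp step between index 1 and 2
exact = sample_linear(row, 2.0)   # on-pixel sample: the step is preserved
offset = sample_linear(row, 1.5)  # half-pixel misalignment: the step is halved
```

Doubling the image resolution relative to the quads keeps each misaligned sample closer to a real pixel, which is the reasoning behind moving to 513×513.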
(Update 05/08/2018, 15h35 : )
I’ve learned that this is specifically a bug in how Blender imports the poorly-supported .SGI File format.
Even when staying with a 257×257 pixel Image for a 256×256 quad Terrain, the following command, which makes use of ImageMagick, solves the problem:
convert -define png:compression-level=9 terrain2.sgi terrain2.png