Quickie: How 2D Graphics Is Just a Special Case of 3D Graphics

I have previously written in depth about the rendering pipeline: the process by which 3D graphics are rendered to a 2D, perspective view in real time, as part of computer games or of other applications that require 3D. But one problem with writing in depth is that readers may fail to see the relevance of the words, once the word-count goes beyond 500.

So I’m going to try to summarize it more briefly.

Vertex positions in 3D can be rotated and translated using matrices. Matrices can also be composed, meaning that if a sequence of multiplications of position-vectors by known matrices accomplishes what we want, then a multiplication by a single, derived matrix can accomplish the same thing.
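As a minimal sketch of that composition, in Python with numpy (the angle and offsets here are arbitrary):

```python
import numpy as np

# A rotation of 30 degrees about the Z axis, as a 4x4 homogeneous matrix.
a = np.radians(30.0)
R = np.array([[np.cos(a), -np.sin(a), 0, 0],
              [np.sin(a),  np.cos(a), 0, 0],
              [0,          0,         1, 0],
              [0,          0,         0, 1]])

# A translation by (2, 0, -5).
T = np.array([[1, 0, 0,  2],
              [0, 1, 0,  0],
              [0, 0, 1, -5],
              [0, 0, 0,  1]])

# Composition: one derived matrix that rotates first, then translates.
M = T @ R

v = np.array([1.0, 0.0, 0.0, 1.0])   # A vertex position, homogeneous.

# Multiplying by M once gives the same result as applying R, then T.
assert np.allclose(M @ v, T @ (R @ v))
```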

Under DirectX 9 or OpenGL 2.x, 3D objects consisted of vertices that formed triangles. The vertices’ positions and normal-vectors were transformed and rotated, respectively, and vertices additionally possessed texture coordinates; all of this could be processed by “Vertex Pipelines”. The output from the Vertex Pipelines was then rasterized and interpolated, and fed to “Pixel Pipelines”, which performed per-screen-pixel computations on the interpolated values, including how those values were applied to the Texture Images being sampled.
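The division of labour might be sketched as follows – a toy illustration in Python, with function names of my own choosing; the real pipelines ran on dedicated hardware, of course:

```python
import numpy as np

def vertex_stage(position, normal, uv, mvp, rotation):
    """Per-vertex work, as a Vertex Pipeline would have done it:
    transform the position by the combined model-view-projection
    matrix, rotate the normal, and pass the texture coordinates on."""
    return mvp @ position, rotation @ normal, uv

def interpolate(a, b, c, bary):
    """What the rasterizer does: blend the three vertices' outputs
    at one pixel, using barycentric weights."""
    return bary[0] * a + bary[1] * b + bary[2] * c

def pixel_stage(uv, texture):
    """Per-pixel work, as a Pixel Pipeline would have done it:
    sample the Texture Image at the interpolated coordinates
    (nearest-texel sampling, for brevity)."""
    h, w, _ = texture.shape
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    return texture[y, x]
```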

All of this work was done by dedicated graphics hardware, now known as a GPU. It was not done by software.

One difference today is that the specialization of GPU cores into Vertex- and Pixel-Pipelines no longer exists. Under the Unified Shader Model, any one GPU core can act either as a Vertex-Shader or as a Pixel-Shader, and powerful GPUs possess hundreds of such cores.

So the practical question arises, of how any of this applies to 2D applications, such as Desktop Compositing. The answer is that it has always been possible to render a single rectangle, as though it were oriented in a 3D coordinate system. This rectangle, also referred to as a “Quad”, first gets tessellated, meaning that it receives a diagonal subdivision into two triangles, both of which still reference the same 4 vertices as before.
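In index form, that tessellation is trivial – a sketch:

```python
# The 4 corners of a Quad, and its diagonal subdivision into two
# triangles.  Each triple indexes into the same list of 4 vertices;
# the two triangles share the diagonal between vertices 0 and 2.
quad_vertices  = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
quad_triangles = [(0, 1, 2), (0, 2, 3)]
```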

When an application receives a drawing surface onto which it draws its GUI – using CPU time – the corners of this drawing surface have 2D texture coordinates that are combinations of (0) and (+1). The drawing surfaces themselves can be input to the GPU as though they were Texture Images. And the 4 vertices that define the position of the drawing surface on the desktop can simply result from a matrix – one much simpler than any matrix that performed rotation in 3D, etc., would have needed to be, before a screen position could be derived from it. Either way, the Vertex Program only needs to multiply the (notional) positions of the corners of a drawing surface by a single matrix, for a screen position to result. In the 2D case, this matrix does not need to be computed from complicated trig functions.
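A sketch of the kind of matrix this involves – placing a unit quad at desktop position (x, y) with size (w, h) requires only a scale and a translation (Python; the function name is mine):

```python
import numpy as np

def placement_matrix(x, y, w, h):
    """Map a unit quad -- whose corner positions, like its texture
    coordinates, are combinations of 0 and +1 -- onto a w-by-h
    window at desktop position (x, y).  Only a scale and a
    translation; no trig functions are needed."""
    return np.array([[w,   0.0, x],
                     [0.0, h,   y],
                     [0.0, 0.0, 1.0]])

# The 4 corner positions, in homogeneous 2D coordinates.
corners = np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]]).T
M = placement_matrix(100, 200, 640, 480)
print((M @ corners).T)   # each row: one corner, in desktop pixels
```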

And the GPU renders the scene to a frame-buffer, just as it would render a 3D game.

Wings3D Has a GUI Problem on My Builds of Linux

In keeping with my recent theme of testing various 3D editing applications available under Linux, I next tried out “Wings3D”, which is available in the Debian repositories / package manager.

And one detail which I found frustrating was that some of the dialog boxes – if not all of them – have problems displaying correctly on my current build of Linux.

I get the impression that Wings3D is actually quite powerful. But unless Right-Mouse-Button (‘RMB’) clicks reveal their full Context Menus, it can be hard to get started with this application.

There was a specific exercise which I undertook this evening: to try assigning a custom material – and therefore an arbitrary texture image – to an arbitrary 3D model, even in a situation where the texture image made no sense, just to get a feel for how it’s done. While getting started, I learned that objects must be U,V-mapped first – which is actually nothing new. And advanced 3D editors such as Wings3D do have a semi-automatic U,V-mapping option. The trick, for a person who has never used Wings3D before, is actually to find it!

We can select a model to work on, and then we’re supposed to RMB and, from the context menu, pick the ‘UV-Editor’. The problem with my build of Linux is that this option is the last entry of the context menu, and it doesn’t display correctly! Instead, in its place, all we get to see is a dot. When I’m looking for a UV-Editor option in a context menu, in order to UV-edit a specific model, I don’t usually assume that the entry displaying as a dot is the one I want.

Once I discovered this detail, I was able to make progress by trial-and-error.

There is a way for users in general to make out the entries in the Wings3D context menus that are not being displayed correctly: hover over those entries, and observe what the green bar at the bottom tells us. It will update and display information about how to use the menu entry being hovered over – including what that entry is called!

Dirk

(Edit 08/24/2017 : )

For Debian / Jessie, the package manager only offers version 1.5.3. But we can actually install version 2.1.5 quite well, from the project’s Web-site.

Widening Our 3D Graphics Capabilities under FOSS

Just so that I can say that my 3D Graphics / Model Editing capabilities are not strictly limited to “Blender”, I have just installed the following Model Editors, which are not available through my package manager, on the Linux computer I name ‘Klystron’:

I felt that it might help others for me to note the URLs above, since correct and useful URLs can be hard to find.

In addition, I installed the following Ray-Tracing Software-Rendering Engines, which do not come with their own Model Editors:

Finally, the following were always available through my package manager:

• Blender
• K-3D
• MeshLab
• Wings3D

And, as a ray-tracing engine:

• PovRay

In order to get ‘Ayam’ to run properly – i.e., to be able to load its plugins, and therefore to load ‘Aqsis’ shaders – I needed to create a number of symlinks like so:

( Last Updated on 10/31/2017, 7h55 )

The role Materials play in CGI

When content-designers work with their favorite model editors or scene editors, in 3D, towards producing either a 3D game or another type of 3D application, they will often not map their 3D models directly to texture images. Instead, they will often connect each model to one Material, and the Material will then base its behavior on zero or more texture images. A friend of mine has asked what this describes.

Effectively, these Materials replace what a programmed shader would do, to define the surface properties of the simulated 3D model. They tend to have a greater role in CPU-rendering / ray-tracing than they do with raster-based, DirectX- or OpenGL-based graphics, but high-level editors may also be able to apply Materials to hardware-rendered graphics, IF they can provide some type of predefined shader that implements what the Material is supposed to implement.
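One way to picture a Material, then, is as a bundle of surface parameters plus references to zero or more texture images, which the engine maps onto some predefined shader. A hypothetical sketch – the field names here are mine, not those of any particular editor:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Material:
    """A hypothetical Material: scalar surface parameters plus the
    texture images its behavior is based on, all to be interpreted
    by some predefined shading algorithm."""
    gloss: float = 10.0
    specular: float = 0.5
    metallicity: float = 0.0
    textures: List[str] = field(default_factory=list)  # zero or more
    shading_model: str = "phong"  # which predefined shader applies
```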

A Material will often state such parameters as Gloss, Specular, Metallicity, etc.. When a camera-reflection vector is computed, this reflection vector will land in some 3D direction relative to the defined light sources. Hence, a dot product can be computed between it and the direction of each light source. Gloss represents the power to which this dot product needs to be raised; the higher the power, the narrower the resulting specular highlights. Often, Gloss must be compensated for the fact that the integral of a power function shrinks as the exponent grows, and that therefore, the average brightness of a glossy surface would otherwise seem to decrease as Gloss increases…
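That computation might be sketched like this, in Python. The (gloss + 2) / 2 factor is one common, normalized-Phong-style compensation – an assumption on my part, not something every Material system uses:

```python
import numpy as np

def specular_term(reflection, to_light, gloss, compensate=True):
    """Phong-style highlight: the dot product between the (unit)
    camera-reflection vector and the (unit) direction toward the
    light, raised to the Gloss power.  Higher Gloss -> narrower
    highlight."""
    d = max(float(np.dot(reflection, to_light)), 0.0)
    term = d ** gloss
    if compensate:
        # The integral of a power function shrinks as the exponent
        # grows, so scale the term back up, to keep the surface's
        # average brightness roughly constant as Gloss increases.
        term *= (gloss + 2.0) / 2.0
    return term
```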

But, if a content-designer enrolls a programmed shader, especially a Fragment Shader, then this shader replaces everything that a Material would otherwise have provided. It is often less practical, though not impossible, to implement a programmed shader in software-rendered contexts, which is largely why the use of Materials still prevails there.

Also, the notion often occurs to people – however unproven – that Materials will only provide basic shading options, such as ‘DOT3 Bump-Mapping’, so that programmed shaders need to be used if more sophisticated shading options are required, such as Tangent-Mapping. Yet, as I just wrote, every blend-mode a Material offers is being defined by some sort of predefined shader – i.e., by a pre-programmed algorithm.
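As an example of such a ‘basic’ predefined algorithm, DOT3 bump-mapping just decodes a unit normal from an RGB texel and takes its dot product with the light direction – a sketch, assuming the light vector has already been brought into the same space as the stored normals:

```python
import numpy as np

def dot3_bump(texel_rgb, to_light):
    """DOT3 bump-mapping: an RGB texel in [0, 1] encodes a unit
    normal in [-1, +1]; the lighting term is its dot product with
    the (unit) light direction, clamped at zero."""
    normal = np.asarray(texel_rgb) * 2.0 - 1.0
    return max(float(np.dot(normal, to_light)), 0.0)
```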

OGRE is an open-source rendering system which requires that content-designers assign Materials to their models, even though hardware rendering is being used; these Materials then cause shaders to be loaded. Hence, if an OGRE content-designer wants to code his own shader, he must first also define his own Material, which will then load his custom shader.