More about Framebuffer Objects

In the past, when I was writing about hardware-accelerated graphics – i.e., graphics rendered by the GPU – such as in this article, I chose phrasing according to which the Fragment Shader eventually computes the color-values of pixels ‘to be sent to the screen’. I felt that this over-simplification made my topics a bit easier to understand at the time.

A detail which I had deliberately left out was that the rendering target may not be the screen in any given context. What happens is that memory-allocation, even the allocation of graphics-memory, is still carried out by the CPU, not the GPU. And ‘a shader’ is just another way to say ‘a GPU program’. In the case of a “Fragment Shader”, what this GPU program does can best be visualized as shading, whereas in the case of a “Vertex Shader”, it consists only of computations that affect coordinates, and may therefore just as easily be referred to as ‘a Vertex Program’. Separately, there exists a graphics-card extension that allows the language to be the ARB language, which may also be referred to as defining a Vertex Program. ( :4 )

The CPU sets up the context within which the shader is supposed to run, and one element of this context is a buffer, to which the given Fragment Shader is to render its pixels. The CPU sets this up, just as it sets up the 2D texture images from which the shader fetches texels.

The rendering target of a given shader-instance may be ‘what the user finally sees on his display’, or it may not. Under OpenGL, the rendering target could just be a Framebuffer Object (an ‘FBO’), which the CPU has also set up as an available texture-image, from which another shader-instance samples texels. The result of that would be Render To Texture (‘RTT’).
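
To make this more concrete, here is a minimal, hypothetical sketch of the CPU-side setup under OpenGL – assuming an OpenGL 3.x context, a loader such as GLEW, and omitting depth attachments and error checks – in which a texture is allocated, attached to an FBO as the rendering target, and left available for a second shader-instance to sample from:

#include <GL/glew.h>   // assumes GLEW (or a similar loader) and an active GL context

// Sketch: create a texture, attach it to a Framebuffer Object,
// and outline the two passes of Render To Texture (RTT).
GLuint setupRenderTarget(GLsizei width, GLsizei height, GLuint *outFbo)
{
    GLuint tex = 0, fbo = 0;

    // The CPU allocates the texture that the first pass will render into.
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // The CPU also creates the FBO and attaches the texture as its color buffer.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    // While 'fbo' is bound, the Fragment Shader renders into 'tex', not the screen.
    // Binding framebuffer 0 again makes the default framebuffer (the screen) the
    // target, and 'tex' can then be sampled by another shader-instance.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    *outFbo = fbo;
    return tex;
}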

Understanding that The GPU Is Real

A type of graphics hardware which once existed was an arrangement by which a region of memory was formatted to correspond directly to screen-pixels, and by which a primitive set of chips would rasterize that memory-region, sending the analog equivalent of pixel-values to an output device, such as a monitor, even while the CPU was writing changes to the same memory-region. This type of graphics arrangement is currently referred to as “A Framebuffer Device”. Since the late 1990s, such graphics have been replaced by graphics hardware that possesses a ‘GPU’ – a Graphics Processing Unit. The acronym GPU is formed similarly to the acronym ‘CPU’, the latter of which stands for Central Processing Unit.

A GPU is essentially a kind of co-processor, which does a lot of the graphics-work that the CPU once needed to do, back in the days of framebuffer devices. The GPU has been optimized, where present, to give real-time 2D, perspective renderings of 3D scenes, which are fed to the GPU in a language that is either some version of DirectX or some version of OpenGL. But modern GPUs are also capable of performing certain 2D tasks, such as accelerating the playback of compressed video-streams at very high resolutions, and performing Desktop Compositing.

(Screenshots: wayland_1 and wayland_2.)

What they do is called raster-based rendering, as opposed to ray-tracing, which usually cannot be accomplished in real time.

And modern smart-phones and tablets also typically have GPUs, which give them some of their smooth home-screen effects and animations, all of which would be prohibitive to program under software-based graphics.

The fact that some phone or computer has been designed and built by Apple does not mean that it has no GPU. Apple presently uses OpenGL as its main language for communicating 3D to its GPUs.

DirectX, by contrast, is owned entirely by Microsoft.

The GPU of a general-purpose computing device often possesses additional protocols for accepting data from the CPU, other than DirectX or OpenGL. The accelerated playback of decompressed 2D video-streams would be an example of that, which is possible under Linux if a graphics-driver supports ‘vdpau’ …

Dirk

 

The role Materials play in CGI

When content-designers work with their favorite model editors or scene editors in 3D, towards providing either a 3D game or another type of 3D application, they will often not map their 3D models directly to texture images. Instead, they will often connect each model to one Material, and the Material will then base its behavior on zero or more texture images. A friend of mine has asked what this describes.

Effectively, these Materials replace what a programmed shader would do, to define the surface properties of the simulated 3D model. They tend to have a greater role in CPU-based rendering / ray-tracing than they do in raster-based / DirectX- or OpenGL-based graphics, but high-level editors may also be able to apply Materials to hardware-rendered graphics, if they can provide some type of predefined shader that implements what the Material is supposed to implement.

A Material will often state such parameters as Gloss, Specular, Metallicity, etc.. When a camera-reflection vector is computed, this reflection vector will land in some 3D direction relative to the defined light sources. Hence, a dot-product can be computed between it and the direction of the light source. Gloss represents the power to which this dot-product needs to be raised, which makes the specular highlights narrower. Often, Gloss must be compensated for the fact that the integral of the power-function becomes smaller as the power increases, and that therefore the average brightness of a glossy surface would otherwise seem to decrease…
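
As a small worked example of that arithmetic – the function name and the (gloss + 2) / 2 compensation factor are my own choices for illustration, not anything a particular Material system mandates – the specular term just described could be computed like this:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Phong-style specular term, given unit-length vectors:
//   R     - the camera-reflection vector
//   L     - the direction towards the light source
//   gloss - the exponent the Material states
// The (gloss + 2) / 2 factor compensates for the shrinking integral of the
// power-function, so that a higher Gloss narrows the highlight without
// making the surface look dimmer overall.
float specularTerm(const Vec3 &R, const Vec3 &L, float gloss) {
    float cosAngle = std::max(dot(R, L), 0.0f);
    float normalization = (gloss + 2.0f) / 2.0f;
    return normalization * std::pow(cosAngle, gloss);
}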

But, if a content-designer enrolls a programmed shader, especially a Fragment Shader, then this shader replaces everything that a Material would otherwise have provided. It is often less practical, though not impossible, to implement a programmed shader in software-rendered contexts, and mainly for this reason, the use of Materials still prevails there.

Also, the notion often occurs to people, however unproven, that Materials will only provide basic shading options, such as ‘DOT3 Bump-Mapping’, so that programmed shaders need to be used if more sophisticated shading options are required, such as Tangent-Mapping. Yet, as I just wrote, every blend-mode a Material offers is being defined by some sort of predefined shader – i.e., by a pre-programmed algorithm.

OGRE is an open-source rendering system which requires that content-designers assign Materials to their models, even though hardware-rendering is being used, and these Materials then cause shaders to be loaded. Hence, if an OGRE content-designer wants to code his own shader, he must first also define his own Material, which will then load his custom shader.

d3dcompiler_47.dll

Microsoft has stipulated that, in the future, all DirectX applications will need to use a DLL named D3DCOMPILER_47, if not some greater version. What this library does seems quite useful.

In the past, it had always been a limitation in the design of raster-based graphics that actual shader code needed to be fed to the graphics drivers directly as text. As an alternative, D3DCOMPILER allows the shader code to be compiled into bytecode, which DirectX drivers can understand. This also has uses in obfuscating copyrighted content. But mainly, it provides performance improvements, because many games today have highly complex “Compute Shaders”, etc..
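
As a rough sketch of how an application can hand HLSL source text to this library and receive bytecode back – the entry-point name ‘main’ and the ‘ps_5_0’ target profile here are placeholders of my choosing – the call looks roughly like this:

#include <d3dcompiler.h>   // link against d3dcompiler.lib
#include <cstdio>

// Compile HLSL source text into bytecode using D3DCompile().
// Returns the bytecode blob on success, or nullptr on failure.
ID3DBlob *compilePixelShader(const char *hlslSource, size_t length)
{
    ID3DBlob *bytecode = nullptr;
    ID3DBlob *errors = nullptr;

    HRESULT hr = D3DCompile(
        hlslSource, length,
        nullptr,            // optional source-file name, used in error messages
        nullptr, nullptr,   // no preprocessor macros, no #include handler
        "main",             // entry-point name (placeholder)
        "ps_5_0",           // target profile: Shader Model 5 pixel shader
        0, 0,               // compile flags
        &bytecode, &errors);

    if (FAILED(hr)) {
        if (errors) {
            std::printf("%s\n", (const char *)errors->GetBufferPointer());
            errors->Release();
        }
        return nullptr;
    }
    if (errors) errors->Release();
    return bytecode;        // pass GetBufferPointer()/GetBufferSize() to Direct3D
}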

When we install the Windows SDK on a Windows 7 or a Windows 10 computer, the same libraries and header files will not be installed each time.

(Some Text Deleted Because Erroneous Before.)

Dirk

(Edit 09/18/2016 : )

I have learned that the library D3DCOMPILER_47.DLL is supposed to reside in both the System32 and the SysWOW64 folders (on a 64-bit Windows system), and is placed there automatically, much as D3DX9_43.DLL was, by the DirectX system on the computer which is supposed to be running the game in question.

Because the DirectX system on Windows 7 will generate at maximum D3DCOMPILER_43.DLL, some game developers have been including their own version of the _47 DLL in the same directory as the EXE file, which is also the first directory in which the EXE file tends to search for it.

But a problem with that is that some versions of this library are built specifically for Windows 8.1, for example, and will therefore not run properly on Windows 7. This produces an obscure error message, whose exact wording I do not recall.

Yet, the onus is not on the OGRE developers to supply their own version of this library. They do supply a version with their SDKs specifically, but that one is now built to run on Windows 8.1 as well.

Further, version _47 may only be needed for DirectX 11 applications, while _43 may still be good enough for legacy DirectX 9. Yet, if OGRE was coded to use _47, then it will use that version throughout, even if I were only to try building their DirectX 9 rendering system.

The correct path for this DLL on my Windows 7 box ‘Mithral’ is:

C:\Program Files (x86)\Windows Kits\8.1\Redist\D3D\x86

Contrary to what the spelling of this path may suggest, the DLL version I find there does load correctly on Windows 7. And, I am working with the conventional 32-bit build of OGRE. The above path is related to another, which also reveals the 64-bit DLL.

As for OGRE still requiring the legacy DirectX SDK to build, this is a question of which version of the header files it has been programmed to expect, as well as of providing both the DXGUID and D3DX static libraries, whose file-names end in .LIB, as opposed to any DLL files, which are the dynamically linked libraries.

It is common for the compilation of software to rely on such static libraries, whereas running the software under Windows should only require the DLL files.