More about Framebuffer Objects

In the past, when writing about hardware-accelerated graphics – i.e., graphics rendered by the GPU – such as in this article, I chose phrasing according to which the Fragment Shader eventually computes the color-values of pixels ‘to be sent to the screen’. I felt that this over-simplification made my topics a bit easier to understand at the time.

A detail which I had deliberately left out was that the rendering target may not be the screen in any given context. What happens is that memory-allocation, even the allocation of graphics-memory, is still carried out by the CPU, not the GPU. And ‘a shader’ is just another way to say ‘a GPU program’. In the case of a “Fragment Shader”, what this GPU program does can be visualized better as shading, whereas in the case of a “Vertex Shader”, it just consists of computations that affect coordinates, and may therefore be referred to just as easily as ‘a Vertex Program’. Separately, there exists a graphics-card extension that allows the language to be the ARB-language, which may also be referred to as defining a Vertex Program. ( :4 )

The CPU sets up the context within which the shader is supposed to run, and one of the elements of this context is a buffer, to which the given Fragment Shader is to render its pixels. The CPU sets this up, just as it sets up the 2D texture images from which the shader fetches texels.

The rendering target of a given shader-instance may be, ‘what the user finally sees on his display’, or it may not. Under OpenGL, the rendering target could just be a Framebuffer Object (an ‘FBO’), which has also been set up by the CPU as an available texture-image, from which another shader-instance samples texels. The result of that would be Render To Texture (‘RTT’).
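
What follows is a minimal sketch of how such an FBO could be set up under OpenGL (assuming a context of at least version 3.0); the texture, size and variable names are only my own illustration, not taken from any particular program:

    /* Sketch: create a texture and attach it to a Framebuffer Object, so that
     * a Fragment Shader renders into the texture instead of the screen
     * (Render To Texture).  Names and sizes are hypothetical. */
    GLuint colorTex = 0, fbo = 0;
    const GLsizei width = 1024, height = 1024;

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* allocate, no data yet */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle the error */
    }

    /* Pass 1: render the scene into the FBO, and therefore into colorTex. */
    glViewport(0, 0, width, height);
    /* ... draw ... */

    /* Pass 2: switch back to the default framebuffer, and let another
     * shader-instance sample colorTex as an ordinary 2D texture. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    /* ... draw a textured quad, etc. ... */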


Understanding that The GPU Is Real

A type of graphics hardware which once existed was an arrangement by which a region of memory was formatted to correspond directly to screen-pixels, and by which a primitive set of chips would rasterize that memory-region, sending the analog equivalent of pixel-values to an output device such as a monitor, even while the CPU was writing changes to the same memory-region. This type of graphics arrangement is currently referred to as “A Framebuffer Device”. Since the late 1990s, these types of graphics have been replaced by graphics hardware that possesses a ‘GPU’ – a Graphics Processing Unit. The acronym GPU is formed similarly to the acronym ‘CPU’, the latter of which stands for Central Processing Unit.

A GPU is essentially a kind of co-processor, which does a lot of the graphics-work that the CPU once needed to do, back in the days of framebuffer-devices. The GPU has been optimized, where present, to give real-time 2D, perspective renderings of 3D scenes, which are fed to the GPU in a language that is either some version of DirectX, or some version of OpenGL. But modern GPUs are also capable of performing certain 2D tasks, such as accelerating the playback of compressed video-streams at very high resolutions, and performing Desktop Compositing.

( Screen-shots: wayland_1, wayland_2 )

What they do is called raster-based rendering, as opposed to ray-tracing, the latter of which cannot usually be accomplished in real-time.

And modern smart-phones and tablets also typically have GPUs, which give them some of their smooth home-screen effects and animations, all of which would be prohibitive to program under software-based graphics.

The fact that some phone or computer has been designed and built by Apple does not mean that it has no GPU. Apple presently uses OpenGL as its main language to communicate 3D to its GPUs.

DirectX is totally owned by Microsoft.

The GPU of a general-purpose computing device often possesses additional protocols for accepting data from the CPU, other than DirectX or OpenGL. The accelerated decompression of 2D video-streams would be an example of that, which is possible under Linux, if a graphics-driver supports ‘vdpau’ …

Dirk

 

About the Black Borders Around some of my Screen-Shots

One practice I have is to take simple screen-shots of my Linux desktop, using the KDE-compatible utility named ‘KSnapshot’. It can usually be activated just by tapping on the ‘Print-Screen’ keyboard-key, and if not, KDE can be customized with a hot-key combination to launch it just as easily.

If I use this utility to take a snapshot of one single application-window, then it may or may not happen that the screen-shot of that window has a wide, black border. And the appearance of this border may confuse my readers.

The reason this border appears has to do with the fact that I have Desktop Compositing activated, which on my Linux systems is based on a version of the Wayland Compositor that has been built specifically to work together with the X-server.

One of the compositing effects I have enabled is to draw a bluish halo around the active application-window. Because this is introduced as much as possible at the expense of GPU power and not CPU power, it has its own way of working, specific to OpenGL 2 or OpenGL 3. Essentially, the application draws its GUI-window into a specifically-assigned memory region, called a ‘drawing surface’, and not directly to the screen-area to be seen. Instead, the drawing surface of any one application-window is taken by the compositor to be a Texture Image, just as 3D Models would have Texture Images. And then, the way Wayland organizes its scene essentially just simplifies the computation of coordinates. Because OpenGL versions are optimized for 3D, they have a specialized way to turn 3D coordinates into 2D screen-coordinates, which the Wayland Compositor bypasses for the most part, by feeding the GPU some simplified matrices, where the GPU would be able to accept much more complex matrices.
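
As a rough illustration of what such ‘simplified matrices’ can amount to (my own sketch, not code from any actual compositor), positioning one window’s texture on the screen needs nothing more than a scale and a translation, packed into the same 4x4 matrix slot that a 3D application would normally fill with a full perspective projection:

    /* Hypothetical sketch: build the 4x4 matrix that places one window, given
     * in pixels, onto a screen of (screenW x screenH), in the column-major
     * order that OpenGL expects.  A real compositor's code will differ; this
     * only shows how little the matrix needs to contain, when the scene is
     * effectively 2D. */
    static void window_matrix(float *m, float x, float y, float w, float h,
                              float screenW, float screenH)
    {
        /* Scale a unit quad (0..1 on both axes) to the window's size, then map
         * pixel coordinates into OpenGL clip space (-1..+1 on both axes). */
        float sx = 2.0f * w / screenW;
        float sy = 2.0f * h / screenH;
        float tx = 2.0f * x / screenW - 1.0f;
        float ty = 1.0f - 2.0f * (y + h) / screenH;  /* screen Y grows downward */

        const float mat[16] = {
            sx,   0.0f, 0.0f, 0.0f,
            0.0f, sy,   0.0f, 0.0f,
            0.0f, 0.0f, 1.0f, 0.0f,
            tx,   ty,   0.0f, 1.0f
        };
        for (int i = 0; i < 16; ++i)
            m[i] = mat[i];
    }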

In the end, in order for any one application-window to receive a blue halo, to indicate that it is the one active application in the foreground, its drawing surface must be made larger to begin with than what the one window-size would normally require. And then, the blue halo exists statically within this drawing-surface, but outside the normal set of coordinates of the drawn window.

The halo appears over the desktop layout, and over other application windows, through the simple use of alpha-blending on the GPU, using a special blending-mode (a sketch of which follows the list below):

  • The inverse of the per-texel alpha determines by how much the background should remain visible.
  • If the present window is not the active window, the background simply replaces the foreground.
  • If the present window is the active window, the two color-values add, causing the halo to seem to glow.
  • The CPU can decide to switch the alpha-blending mode of an entity, without requiring the entity be reloaded.
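
Expressed in OpenGL terms, and only as my own guess at what such a compositor effect amounts to, the cases above correspond roughly to the following standard blending settings:

    /* General case: the inverse of the per-texel alpha decides how much of the
     * background remains visible ('over' blending):
     *   result = src * srcAlpha + dst * (1 - srcAlpha)                      */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    /* Halo region of an inactive window: the background simply replaces the
     * foreground, i.e. the source contributes nothing:
     *   result = dst                                                        */
    glBlendFunc(GL_ZERO, GL_ONE);

    /* Halo region of the active window: the two color-values add, so that the
     * halo seems to glow:
     *   result = src * srcAlpha + dst                                       */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);

    /* Switching between these modes is only a change of GPU state; the
     * window's texture (the 'entity') does not need to be reloaded.         */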

KSnapshot sometimes recognizes that, if instructed to take a screen-shot of one window, it should copy a sub-rectangle of the drawing surface. But in certain cases the KSnapshot utility does not recognize the need to do this, and just captures the entire drawing surface, minus whatever alpha-channel the drawing surface might have, since screen-shots are supposed to be without alpha-channels. So the reader will not be able to make out the effect, because by the time a screen-shot has been saved to my hard-drive, it is without any alpha-channel.

And there are two ways I know of by default, to reduce an image that has an alpha-channel, to one that does not:

  1. The non-alpha, output-image can cause the input image to appear, as though in front of a checkerboard-pattern, taking its alpha into account,
  2. The non-alpha, output-image can cause the input image to appear, as though just in front of a default-color, such as ‘black’, but again taking its alpha into account.

This choice would be made by a library, resulting in a screen-shot that has a wide black border around it. This border represents the maximum extent, by which static, 2D effects can be drawn in – on the assumption that those effects were defined on the CPU, and not on the GPU.
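
A minimal sketch of the second option, flattening against black, written in C on the assumption of an 8-bit RGBA buffer (the function and parameter names are my own, not those of any particular library):

    #include <stdint.h>
    #include <stddef.h>

    /* Flatten an 8-bit RGBA image against a solid background color, producing
     * an RGB image without an alpha-channel.  With the background set to
     * black, any region that was fully transparent in the source, such as the
     * margin reserved for the halo, comes out as a solid black border. */
    void flatten_rgba(const uint8_t *src, uint8_t *dst, size_t pixels,
                      uint8_t bgR, uint8_t bgG, uint8_t bgB)
    {
        for (size_t i = 0; i < pixels; ++i) {
            uint8_t a = src[4 * i + 3];
            /* out = src * alpha + background * (1 - alpha), per channel */
            dst[3 * i + 0] = (uint8_t)((src[4 * i + 0] * a + bgR * (255 - a)) / 255);
            dst[3 * i + 1] = (uint8_t)((src[4 * i + 1] * a + bgG * (255 - a)) / 255);
            dst[3 * i + 2] = (uint8_t)((src[4 * i + 2] * a + bgB * (255 - a)) / 255);
        }
    }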

So, just as the actual application could be instructed to draw its window into a sub-rectangle of the whole desktop, it can be instructed to draw its window into a sub-rectangle of its assigned drawing-surface. And with this effect enabled, this is indeed how it’s done.

Dirk

 

I now own a Kindle PaperWhite (eBook Reader).

I know some friends who are about my own age, and who would swear that, as long as they own a tablet on which the Amazon Kindle app can be installed and run, they see no need for a physical Kindle Device. And the main reason seems to be their old-school thinking, that one highly-versatile device need not be replaced by more-specialized devices, whose features are a subset of its own.

( Updated below on 07/30/2017 … )

My friends date back to the era before Lithium-Ion batteries, when the more-versatile devices were simply plugged into an A/C outlet and assumed to run indefinitely. They would probably ask me: ‘You own a more-versatile Tablet. Why did you go ahead and buy a Kindle Device?’

I can think of two answers:

  1. I want to have the technology, and
  2. I actually want the leisure, of being able to read an entire book and relax while doing so.

A long time ago, essentially all forms of 2D displays were active displays. There existed LED and LCD, which had in common that they had their own light-sources, which during full sunlight needed to overpower the sunlight, in order to define white as anything brighter than black. While the origin of LCDs was to overcome this – during the last century – the fact that LCDs needed to be transformed into high-res, full-color displays meant that they needed backlights, approximately 50% of whose light-energy they did not absorb but allowed through, for typical images. Their claim to being ‘transflective’ was long on the transmissive, but short on the reflective. When this sort of an ‘improved’ LCD was required to act as a reflective display, it generally scattered back less than 50% of the incident light-energy, and with the typical glass shields in front of them, the glare during bright light was made even worse.

As some of my readers already understand, when the Kindle was first invented, it also pioneered the use of a kind of passive display, which was at some point in time named ‘e-Paper’. It’s quite apart from LCD-technology, in that in reflective mode, the pixels of it that are meant to be white actually scatter back more than 80% of the incident light. And black pixels are truly dark. So the surface is as readable by default as a sheet of paper would be, with ink printed on it. And it requires about as much battery-charge to run as a sheet of paper would (exaggeration intentional here), since it’s not generally required to act as a light-source.

Now, in the way some people think, this fact might get obscured, by the fact that modern Kindles employ a kind of e-Paper, with an additional backlight. In theory, I can turn the backlight completely down, to conserve the battery-life as much as possible, at which point the bright pixels take on a slightly yellowish tint, much like older, browned paper would. But the text is just slightly more readable, when there is non-zero backlight. And, if I am ever to read in a dark room, I’ll need non-zero backlight for sure.

Because I’m a slightly older man, I also have slightly poorer vision than I did as a teenager, and so I think I actually need slightly more backlight (a level numbered ’10’) than an average teenager would need, to read in a partially-lit environment. In a completely dark room, I’d turn up the backlight even higher than that.

To be completely up-to-date about it, the back-lit Kindles are not even the most-modern, because by now, there exist Kindles with e-Paper and a Front-Light. But on my own terms, I actually consider the slightly more-basic Kindles, such as the PaperWhite, to be better, than the most-recent models, that offer endlessly-more features, and that consume more battery-charge than mine would.

On an Android tablet, the battery actually prevents us from reading anything for more than a few hours – maybe 2 or 3 tops – at a time. This used to stand in my way of rediscovering reading. Now, a Kindle will allow me to read more than a whole book, at whatever time of day seems convenient, and without interrupting me with a depleting battery.
