About the Black Borders Around some of my Screen-Shots

One practice I have is to take simple screen-shots of my Linux desktop, using the KDE-compatible utility named ‘KSnapshot’. It can usually be activated just by tapping the ‘Print Screen’ key, and if not, KDE can be customized with a hot-key combination that launches it just as easily.

If I use this utility to take a snapshot of one single application-window, then it may or may not happen that the screen-shot of that window has a wide, black border. And the appearance of this border may confuse my readers.

The reason this border appears has to do with the fact that I have Desktop Compositing activated, which on my Linux systems is based on a version of the Wayland Compositor that has been built specifically to work together with the X-server.

One of the compositing effects I have enabled is to draw a bluish halo around the active application-window. Because this is done as much as possible at the expense of GPU power and not CPU power, it has its own way of working, specific to OpenGL 2 or OpenGL 3. Essentially, the application draws its GUI-window into a specifically-assigned memory region called a ‘drawing surface’, but not directly to the screen-area to be seen. Instead, the drawing surface of any one application window is taken by the compositor to be a Texture Image, just as 3D Models have Texture Images. And then the way Wayland organizes its scene essentially just simplifies the computation of coordinates. Because OpenGL versions are optimized for 3D, they have specialized ways to turn 3D coordinates into 2D screen-coordinates, which the Wayland Compositor bypasses for the most part, by feeding the GPU simplified matrices, where the GPU would be able to accept much more complex ones.
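As an illustration of what such a simplified matrix might look like, the following is a hypothetical sketch (not the compositor's actual code): an orthographic mapping that takes window coordinates straight to normalized device coordinates, with no perspective math involved.

```python
# Hypothetical sketch: a 2D orthographic matrix of the simplified kind
# a compositor might feed the GPU, in place of a full 3D projection.

def ortho_2d(width, height):
    # Maps x in [0, width] to [-1, +1] and y in [0, height] to [+1, -1],
    # the usual top-left-origin convention for window coordinates.
    return [
        [2.0 / width,  0.0,           0.0, -1.0],
        [0.0,          -2.0 / height, 0.0,  1.0],
        [0.0,          0.0,           1.0,  0.0],
        [0.0,          0.0,           0.0,  1.0],
    ]

def transform(m, x, y):
    # Apply the matrix to the point (x, y, 0, 1); return the 2D result.
    v = (x, y, 0.0, 1.0)
    return tuple(sum(m[row][col] * v[col] for col in range(4)) for row in (0, 1))
```

With a 512 × 256 surface, the corner (0, 0) lands at (-1, 1) and the corner (512, 256) at (1, -1), so the GPU never needs the full 3D coordinate machinery.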

In the end, in order for any one application-window to receive a blue halo, to indicate that it is the one active application in the foreground, its drawing surface must be made larger to begin with, than what the one window-size would normally require. And then the blue halo exists statically within this drawing-surface, but outside the normal set of coordinates of the drawn window.

The halo appears over the desktop layout, and over other application windows, through the simple use of alpha-blending on the GPU, using a special blending-mode:

  • The inverse of the per-texel alpha determines by how much the background should remain visible.
  • If the present window is not the active window, the background simply replaces the foreground.
  • If the present window is the active window, the two color-values add, causing the halo to seem to glow.
  • The CPU can decide to switch the alpha-blending mode of an entity, without requiring the entity be reloaded.
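The cases above can be sketched as follows, as hypothetical pseudocode in Python rather than the compositor's actual blending code, with color channels as floats in [ 0.0 … 1.0 ]:

```python
# Hypothetical sketch of the special blending-mode described above.
# 'src_rgb' / 'src_alpha' describe a halo texel; 'dst_rgb' is the
# background already on screen behind it.

def blend(src_rgb, src_alpha, dst_rgb, active):
    if not active:
        # Inactive window: the background simply replaces the foreground.
        return dst_rgb
    # Active window: the two color-values add, clamped to 1.0, so that
    # the halo appears to glow; the inverse of the per-texel alpha is
    # what lets the background remain visible where the halo is faint.
    return tuple(min(1.0, s * src_alpha + d) for s, d in zip(src_rgb, dst_rgb))
```

For example, a bluish texel (0.5, 0.5, 1.0) with alpha 0.5, over a mid-grey background, yields (0.75, 0.75, 1.0) when the window is active, and the untouched grey otherwise.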

KSnapshot sometimes recognizes that, if instructed to take a screen-shot of one window, it should copy a sub-rectangle of the drawing surface. But in certain cases the KSnapshot utility does not recognize the need to do this, and just captures the entire drawing surface, minus whatever alpha-channel the drawing surface might have, since screen-shots are supposed to be without alpha-channels. So the reader will not be able to make out the effect, because by the time a screen-shot has been saved to my hard-drive, it is without any alpha-channel.

And there are two ways I know of, by default, to reduce an image that has an alpha-channel to one that does not:

  1. The non-alpha output-image can cause the input image to appear as though in front of a checkerboard-pattern, taking its alpha into account,
  2. The non-alpha output-image can cause the input image to appear as though just in front of a default-color, such as ‘black’, but again taking its alpha into account.
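The second option can be sketched as follows, as a minimal illustration assuming float channels in [ 0.0 … 1.0 ] (the actual library code will differ):

```python
# Hypothetical sketch: flattening an RGBA pixel against a solid
# background color, which is option 2 above.

def flatten(rgba, bg=(0.0, 0.0, 0.0)):
    r, g, b, a = rgba
    # Each channel is weighted by the pixel's alpha; the background
    # shows through by the inverse, (1 - alpha).
    return tuple(c * a + bg_c * (1.0 - a) for c, bg_c in zip((r, g, b), bg))
```

A fully transparent pixel over the default black becomes pure black, which is exactly where the wide black border around the captured drawing surface comes from.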

This would be decided by a library, resulting in a screen-shot that has a wide black border around it. This border represents the maximum extent by which static, 2D effects can be drawn in – on the assumption that those effects were defined on the CPU, and not on the GPU.

So, just as the actual application could be instructed to draw its window into a sub-rectangle of the whole desktop, it can be instructed to draw its window into a sub-rectangle of its assigned drawing-surface. And with this effect enabled, this is indeed how it’s done.




The concept by which a single object or entity can be translucent seems rather intuitive. But another concept, which is less intuitive, is that the degree to which it is so can be stated once per pixel, through an alpha-channel.

Just as every pixel can possess one channel for each of the three additive primary colors: Red, Green and Blue, it can possess a 4th channel named Alpha, which states, on a scale from [ 0.0 … 1.0 ], how opaque it is.

This does not just apply to the texture images, whose pixels are named texels, but also to Fragment Shader output, as well as to the pixels actually associated with the drawing surface, which provide what is known as destination alpha, since the drawing surface is also the destination of the rendering, or its target.

Hence, there exist images whose pixels have a 4-channel format, as opposed to others with a mere 3-channel format.

Now, there is no clear way for a display to display alpha. In certain cases, alpha in an image being viewed is hinted at by software, as a checkerboard pattern. But what we see is nevertheless color-information and not transparency. And so a logical question is: what is the function of this alpha-channel which is being rendered to?

There are many ways in which content from numerous sources can be blended, but most of the high-quality ones require that much communication take place between rendering-stages. A strategy is desired in which output from rendering-passes is combined without requiring much communication between the passes. And alpha-blending is a de-facto strategy for that.
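That de-facto strategy can be written down as the standard ‘over’ operator. What follows is a minimal sketch using straight (non-premultiplied) colors, not any particular library's API:

```python
# Hypothetical sketch of the standard 'over' operator: composite a
# closer fragment 'src' over a more distant 'dst', each as (r, g, b, a).

def over(src, dst):
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    # The result's coverage: the source's, plus whatever of the
    # destination's coverage the source leaves visible.
    out_a = sa + da * (1.0 - sa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    out_rgb = tuple((s * sa + d * da * (1.0 - sa)) / out_a
                    for s, d in zip((sr, sg, sb), (dr, dg, db)))
    return out_rgb + (out_a,)
```

Crucially, each pass only needs the accumulated (r, g, b, a) values of the passes before it – no other communication between stages.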

By default, closer entities, according to the position of their origins in view space, are rendered first. What this does is put closer values into the Z-buffer as soon as possible, so that the Z-buffer can prevent the rendering of the more distant entities as efficiently as possible. 3D rendering starts when the CPU gives the command to ‘draw’ one entity, which has an arbitrary position in 3D. This may be contrary to what 2D graphics might teach us to predict.
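The benefit of that ordering can be shown with a toy Z-buffer – a hypothetical illustration, not how a real GPU is programmed:

```python
# Hypothetical sketch: a toy Z-buffer that counts how many fragments
# actually get shaded, depending on the order entities are drawn in.

def shaded_count(fragments):
    # Each fragment is ((x, y), depth); a smaller depth means closer.
    zbuf = {}
    shaded = 0
    for pixel, depth in fragments:
        # The depth test: only shade if nothing closer was drawn yet.
        if pixel not in zbuf or depth < zbuf[pixel]:
            zbuf[pixel] = depth
            shaded += 1
    return shaded
```

For two fragments covering the same pixel at depths 1.0 (near) and 5.0 (far), drawing near-first shades only one fragment, while drawing far-first shades both – which is why the closer entities go first.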

Alas, alpha-entities – i.e., entities that possess alpha textures – do not write to the Z-buffer, because if they did, they would prevent more-distant entities from being rendered. And then there would be no point in the closer ones being translucent.

The default way in which alpha-blending works is that the alpha-channel of the display records the extent to which entities have been left visible, by previous entities which have been rendered closer to the virtual camera.
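That bookkeeping can be sketched with front-to-back compositing – a hedged illustration using premultiplied colors, in which the destination's alpha records how much coverage the closer entities have already claimed:

```python
# Hypothetical sketch: front-to-back compositing. 'dst' accumulates what
# has been drawn so far (premultiplied r, g, b, a); 'src' is the next,
# more distant entity (also premultiplied).

def composite_front_to_back(dst, src):
    dr, dg, db, da = dst
    sr, sg, sb, sa = src
    # The inverse of the accumulated alpha is the extent to which
    # the closer entities have left this entity visible.
    k = 1.0 - da
    return (dr + sr * k, dg + sg * k, db + sb * k, da + sa * k)
```

Compositing a half-covering red entity first and an opaque blue one behind it gives (0.5, 0.0, 0.5, 1.0) – the same result that back-to-front blending of the same two entities would produce.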
