About the Black Borders Around some of my Screen-Shots

One practice I have is to take simple screen-shots of my Linux desktop, using the KDE-compatible utility named ‘KSnapshot’. It can usually be activated just by tapping the ‘Print Screen’ key, and if not, KDE can be customized with a hot-key combination that launches it just as easily.

If I use this utility to take a snapshot of one single application-window, then the screen-shot of that window may or may not have a wide, black border. And the appearance of this border may confuse my readers.

The reason this border appears has to do with the fact that I have Desktop Compositing activated, which on my Linux systems is based on a version of the Wayland Compositor that has been built specifically to work together with the X-server.

One of the compositing effects I have enabled is to draw a bluish halo around the active application-window. Because this effect is implemented as much as possible at the expense of GPU power rather than CPU power, it has its own way of working, specific to OpenGL 2 or OpenGL 3. Essentially, the application draws its GUI-window into a specially-assigned memory region called a ‘drawing surface’, and not directly to the visible screen-area. Instead, the drawing surface of any one application window is taken by the compositor to be a Texture Image, just as 3D Models would have Texture Images. And then the way Wayland organizes its scene essentially just simplifies the computation of coordinates. Because OpenGL versions are optimized for 3D, they have specialized ways to turn 3D coordinates into 2D screen-coordinates, which the Wayland Compositor bypasses for the most part, by feeding the GPU some simplified matrices, where the GPU would be able to accept much more complex matrices.
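To illustrate what such a ‘simplified matrix’ could look like, below is a minimal sketch in C of a plain orthographic projection, which maps desktop pixel-coordinates straight to the GPU’s clip space. This is only my own illustration, not the compositor’s actual code, and the function name is of my own choosing.

    /* A plain orthographic projection: desktop pixels -> clip space.
     * Column-major 4x4 layout, as OpenGL expects. */
    #include <string.h>

    static void ortho_2d(float m[16], float width, float height)
    {
        memset(m, 0, 16 * sizeof(float));
        m[0]  =  2.0f / width;   /* x: [0..width]  -> [-1..+1] */
        m[5]  = -2.0f / height;  /* y: [0..height] -> [+1..-1], y-down like the desktop */
        m[10] = -1.0f;           /* z: effectively unused for 2D compositing */
        m[12] = -1.0f;           /* translate x */
        m[13] =  1.0f;           /* translate y */
        m[15] =  1.0f;
    }

    /* Such a matrix could then be uploaded once, e.g. with:
     * glUniformMatrix4fv(location, 1, GL_FALSE, m); */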

In the end, in order for any one application-window to receive a blue halo, indicating that it is the one, active application in the foreground, its drawing surface must be made larger to begin with than what the one window-size would normally require. And then the blue halo exists statically within this drawing-surface, but outside the normal set of coordinates of the drawn window.

The halo appears over the desktop layout, and over other application windows, through the simple use of alpha-blending on the GPU, using a special blending-mode (sketched in code below this list):

  • The inverse of the per-texel alpha determines by how much the background should remain visible.
  • If the present window is not the active window, the background simply replaces the foreground.
  • If the present window is the active window, the two color-values add, causing the halo to seem to glow.
  • The CPU can decide to switch the alpha-blending mode of an entity, without requiring that the entity be reloaded.
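Below is a minimal sketch in C, using standard OpenGL calls, of the two blending-modes just listed. It is my own illustration of the idea, not the compositor’s actual source code.

    #include <GL/gl.h>

    /* Switch the blending-mode per window; a cheap state-change,
     * requiring nothing to be reloaded. */
    void set_window_blending(int is_active_window)
    {
        glEnable(GL_BLEND);
        if (is_active_window) {
            /* Additive blending: foreground and background color-values
             * add, which makes the halo appear to glow. */
            glBlendFunc(GL_SRC_ALPHA, GL_ONE);
        } else {
            /* Conventional alpha-blending: the inverse of the per-texel
             * alpha determines how much of the background remains visible. */
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }
    }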

KSnapshot sometimes recognizes that, if instructed to take a screen-shot of one window, it should copy a sub-rectangle of the drawing surface. But in certain cases the KSnapshot utility does not recognize the need to do this, and just captures the entire drawing surface, minus whatever alpha-channel the drawing surface might have, since screen-shots are supposed to be without alpha-channels. So the reader will not be able to make out the effect, because by the time a screen-shot has been saved to my hard-drive, it is without any alpha-channel.

And there are two ways I know of by default, to reduce an image that has an alpha-channel to one that does not (the second of which is sketched in code below this list):

  1. The non-alpha output-image can show the input image as though in front of a checkerboard-pattern, taking its alpha into account,
  2. The non-alpha output-image can show the input image as though just in front of a default color, such as ‘black’, but again taking its alpha into account.
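Below is a minimal sketch in C of option (2), flattening an RGBA image against a black background. It is only my own illustration of the arithmetic involved.

    #include <stdint.h>
    #include <stddef.h>

    /* src: RGBA, 4 bytes per pixel; dst: RGB, 3 bytes per pixel. */
    void flatten_against_black(const uint8_t *src, uint8_t *dst, size_t pixels)
    {
        for (size_t i = 0; i < pixels; i++) {
            uint8_t alpha = src[4*i + 3];
            for (int c = 0; c < 3; c++) {
                /* out = foreground * alpha + background * (1 - alpha);
                 * the background is black (0), so only one term survives. */
                dst[3*i + c] = (uint8_t)((src[4*i + c] * alpha) / 255);
            }
        }
    }

Anywhere the drawing surface’s alpha was zero (everywhere outside the window and its halo), the output pixel becomes pure black, which is exactly the wide black border in question.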

This choice would be made by a library, resulting in a screen-shot that has a wide black border around it. The border represents the maximum extent within which static, 2D effects can be drawn – on the assumption that those effects were defined on the CPU, and not on the GPU.

So, just as the actual application could be instructed to draw its window into a sub-rectangle of the whole desktop, it can be instructed to draw its window into a sub-rectangle of its assigned drawing-surface. And with this effect enabled, this is indeed how it’s done.

Dirk


Alpha-Blending

The concept by which a single object or entity can be translucent seems rather intuitive. A less intuitive concept is that the degree to which it is translucent can be stated once per pixel, through an alpha-channel.

Just as every pixel can possess one channel for each of the three additive primary colors (Red, Green and Blue), it can possess a 4th channel named Alpha, which states on a scale of [ 0.0 … 1.0 ] how opaque the pixel is.

This does not just apply to texture images, whose pixels are named texels, but also to Fragment Shader output, as well as to the pixels actually associated with the drawing surface. The latter provide what is known as destination alpha, since the drawing surface is also the destination of the rendering, or its target.

Hence, there exist images whose pixels have a 4-channel format, as opposed to others with a mere 3-channel format.

Now, there is no clear way for a display to display alpha. In certain cases, alpha in an image being viewed is hinted at by software, as a checkerboard pattern. But what we see is nevertheless color-information and not transparency. And so a logical question would be, what the function is of this alpha-channel that is being rendered to.

There are many ways in which the content from numerous sources can be blended, but most of the high-quality ones require that much communication take place between rendering-stages. A strategy is desired in which output from rendering-passes is combined without requiring much communication between the passes. And alpha-blending is the de-facto strategy for that.

By default, closer entities, according to the position of their origins in view space, are rendered first. What this does is put closer values into the Z-buffer as soon as possible, so that the Z-buffer can prevent the rendering of the more-distant entities as efficiently as possible. 3D rendering starts when the CPU gives the command to ‘draw’ one entity, which has an arbitrary position in 3D. This may be contrary to what 2D graphics might teach us to expect.

Alas, alpha-entities – aka entities that possess alpha textures – do not write to the Z-buffer, because if they did, they would prevent more-distant entities from being rendered. And then there would be no point in the closer ones being translucent.

The default way in which alpha-blending works is that the alpha-channel of the display records the extent to which entities have been left visible, by previous entities which were rendered closer to the virtual camera.
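Below is a minimal sketch in C, with standard OpenGL calls, of the rendering order just described: opaque entities write to the Z-buffer, while alpha-entities are blended with depth-writes disabled. The two drawing functions named in the comments are hypothetical placeholders for an actual scene.

    #include <GL/gl.h>

    void render_scene(void)
    {
        glEnable(GL_DEPTH_TEST);

        /* Opaque pass: depth-writes on, so the Z-buffer can reject
         * more-distant fragments as early as possible. */
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
        /* draw_opaque_entities();  (hypothetical) */

        /* Alpha pass: blended, with depth-writes off, so translucent
         * entities do not prevent more-distant ones from rendering. */
        glDepthMask(GL_FALSE);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        /* draw_alpha_entities();  (hypothetical) */

        glDepthMask(GL_TRUE);  /* restore the default */
    }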


The laptop ‘Klystron’ suspends to RAM half-decently.

One subject which I have written a lot about is that, as soon as I close the lid of the laptop I name ‘Klystron’, it seems to lose its WiFi signal, and that this can get in the way of comfortable use, because closing the lid also helps shield the keyboard from dust etc.

This Linux laptop boots decently fast, yet it is still a hassle to reboot very often. And so I needed to come up with a different way of solving my problem on a practical level. My solution for now is to tell the laptop to Suspend To RAM as soon as I close the lid. That way, the WiFi signal is gone more cleanly, and when the laptop resumes its session, the scripts that govern this behavior also re-initialize the WiFi chipset and its status on my LAN. This causes less confusion with running Samba servers etc. on my other computers.

There is a bit of terminology which I do not think the whole population understands, but which I think people are simply using differently from how it was used in my past.

It used to be that, under Linux, we had ‘Suspend To RAM’ and ‘Suspend To Disk’. In the Windows world, these terms corresponded to ‘Standby’ and ‘Hibernate’ respectively. Well, in today’s terminology, they stand for ‘Sleep’ and ‘Hibernate’, borrowing those terms from mobile devices.

In any case, there are two types of Suspend at work.

In past days of Linux, we could not cause a laptop just to Hibernate. We needed to install special packages and modify the Grand Unified Bootloader before we could even Suspend To Disk. And Suspending To RAM used to be less reliable. Well, one development of modern Linux which I welcome, among many, is the fact that Sleep and Hibernate should, in most cases, work out-of-the-box.

I just tried Sleep mode tonight, and it works 90%.

One oddity: When we Resume on this laptop, a CPU Error message is displayed on the screen numerous times. But after a few seconds of CPU errors, apparently the session is restored without corruption. Given that I have 300 (+) processes, I cannot be 100% sure that the Restore is perfectly without corruption. But I am reasonably sure, with one exception:

The second oddity is of greater relevance. After Waking Up, the clock of the laptop seems to be displaced 2 days and a certain number of hours into the future. This bug has been observed on some other devices, and I needed to add a script to the configuration files as a workaround, which simply sets the system clock back that many days and that many hours after Waking. Thankfully, I believe that doing so was as much of a workaround as was needed.
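My actual workaround was a script, but the idea can be sketched minimally in C as below. The hour component of the offset is a placeholder here, since the exact value is specific to the device, and setting the clock requires root privileges.

    #include <stdio.h>
    #include <time.h>

    #define EXTRA_HOURS 0  /* placeholder; the real offset is device-specific */

    int main(void)
    {
        struct timespec ts;
        const time_t offset = (2 * 24 + EXTRA_HOURS) * 3600L;  /* 2 days + hours */

        if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
            perror("clock_gettime");
            return 1;
        }
        ts.tv_sec -= offset;  /* set the clock back by the displacement */
        if (clock_settime(CLOCK_REALTIME, &ts) != 0) {
            perror("clock_settime");  /* needs root / CAP_SYS_TIME */
            return 1;
        }
        return 0;
    }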

One side-effect of not having done so before being aware of the problem, was that the ‘KNotify’ alarms for the next two days all sounded at once, so that it will take another two days before personal organizer (PIM) notifications may sound for me again.

The fact that numerous CPU errors are displayed does not bother me. What it means is that the way the CPU goes to sleep and then wakes up involves power-cycling, in ways that do not guarantee the integrity of data throughout. But it would seem that good programming of the kernel does provide data integrity, with the exception of the system-clock issue.

But the fact that the hardware is a bit testy when using the Linux version of Sleep also suggests that maybe this is the kind of laptop that powers down its VRAM. It is a good thing, then, that I disabled the advanced compositing effects that involve vertex arrays.



There is a side-note I wanted to make, on the desktop cube animation.

In general, when raster-rendering a complex scene with models, each model is defined by a vertex array, an index array, one or more texture images etc., and the vertex array stores the model geometry statically, relative to the coordinate-origin of the model. Then, a model-view-projection matrix is applied – or just a rotation matrix for the normal vectors – to position it with respect to the screen. Moving the models is then a question of the CPU updating only the model-view matrix.
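As a minimal sketch of that arrangement, in C: the vertex data stays static, and the CPU re-uploads only a matrix each frame. The uniform location is an assumption of the illustration, and an OpenGL 2+ context is assumed.

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>

    void animate_model(GLint mvp_location, const float mvp[16])
    {
        /* CPU-side: only the 16 floats of the matrix are re-uploaded;
         * the vertex array itself is never touched. */
        glUniformMatrix4fv(mvp_location, 1, GL_FALSE, mvp);
    }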

Well, when a desktop cube animation is the only model in the scene, as part of compositing, I think that the way in which this is managed differs slightly. I think that what happens here is that, instead of the cube having vertex coordinates of +/- 1 all the time, the model-view matrix is kept as an identity matrix.

Instead, the actual vertex data is rewritten to the vertex array, to reposition the vertices with respect to the view.
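A minimal sketch in C of that alternative: the matrix stays at identity, and the CPU overwrites the vertex data in place each frame. Buffer objects and an x,y,z-per-vertex layout are assumptions of the illustration.

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>

    void reposition_cube(GLuint vbo, const float *new_xyz, int vertex_count)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        /* CPU-computed positions overwrite the vertex array in VRAM. */
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        (GLsizeiptr)vertex_count * 3 * sizeof(float),
                        new_xyz);
    }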

Why is this significant? Well, if it were true that Suspending the session To RAM also cut power to the VRAM, it would be useful to know which types of data stored therein will seem corrupted after a resume, and which will not.

Technically, texture images can also get garbled. But if all it takes is one frame-cycle for texture images to get refreshed, the net result is that the displayed desktop will look normal again by the time the user unlocks it.

Similarly, if the vertex array of the only model is being rewritten by the CPU, doing so will also rewrite the header information in the vertex array that tells the GPU how many vertices there are, as well as rewriting the normal vectors, as when they are a part of any normal vertex animation, etc. So anything resulting from the vertex array should still not look corrupted.

But one element which generally does not get rewritten is the index array. The index array states in its header information whether the array is a point list, a line list, a triangle list, a line strip, a triangle strip… It then states how many primitives exist for the GPU to draw. And then it states sets of elements, each of which is a vertex number.

The only theoretical reason for which the CPU would rewrite that, would be if the topology of the model were to change, which happens as good as never in practice. And so, if the VRAM gets garbled, what was stored in the index array gets lost – and not refreshed.
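For illustration, here is a minimal sketch in C of how such an index array drives an actual OpenGL draw call; the primitive type and element count correspond to the ‘header information’ described above.

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>

    void draw_indexed_model(GLuint index_buffer, GLsizei index_count)
    {
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
        /* If the VRAM behind index_buffer were garbled, these indices
         * would point at arbitrary vertices, producing the nonsensical
         * triangles described below. */
        glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, (void *)0);
    }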

And this can lead to the view of numerous nonsensical triangles on the screen, which many of us have learned to associate with a GPU crash.


Dirk


I have now chosen against keeping the desktop cube animations.

In This Posting, I wrote that I had enabled a ‘desktop cube’, a compositing effect, on the laptop I name ‘Klystron’. In fact, this KDE effect consisted of two separate effects, the ‘Desktop Cube’ and the ‘Cube Animation’, which look very similar to each other.

Since then, I have discovered that enabling this much compositing (1) prevents me from turning compositing off quickly, with <Alt>+<Shift>+F12, and (2) prevents the screensavers from activating, reducing my screen-locking capability to a simple screen-locker.

It did not result in any crashes or errors; but presumably to prevent such errors, KDE just did not launch the screensaver.

So even though I felt that it was fun for a while to have these effects enabled, they added rather little to the functionality. I am in favor of using compositing to whatever extent doing so causes the GPU to take work off the hands of the CPU. But beyond that, once increased use of these effects interferes in any way with the reliability of the S/W, I will choose against further increases.

And so now I have reversed these desktop effect settings, and gotten my screensaver back.

However, I also double-checked what I had run into in This Posting, to see whether the GL 3+ Render System belonging to OGRE 1.10 now works. And alas, the behavior of the OGRE GL 3+ Render System is unchanged. I did not push it to a full crash, but read the debug output again before it came to that.

Dirk

(Edit 4/24/2016 : ) This behavior, of not having a screensaver, was logical. With the Desktop Cube effect, the usual KDE output was being rendered to 4 of the faces of a cube, defined as target texture images in OpenGL (Render To Texture, ‘RTT’). This cube was then rendered as a 3D Entity, to the actual screen displayed.
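For illustration, here is a minimal sketch in C of Render To Texture with standard OpenGL 3+ calls: a texture is attached to a framebuffer object, so that ordinary rendering lands in the texture instead of on the screen. The cube effect would need one such render target per active cube face.

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <stddef.h>

    GLuint make_render_target(GLuint *out_texture, int width, int height)
    {
        GLuint fbo, tex;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        /* While this FBO is bound, draw calls render into 'tex'. */
        *out_texture = tex;
        return fbo;
    }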

This 3D scene poses the question of where the screensaver should be inserted. In theory, it would have been possible for the KDE screensaver to be rendered to each of these 4 active cube faces, but not to the actual screen that acted as the virtual camera position. A simple lock-screen was rendered there instead.

This was similar to how the KDE wallpaper was also potentially different from the wallpaper of the Desktop Cube effect. With this effect, there is the standard KDE wallpaper, which is seen on each of the active cube faces, and which is different from the wallpaper one might choose for the whole scene.

It is a good thing that the developers added, as default behavior, a regular lock-screen for the display, for when the real screensaver was running and potentially rendering its animation to 4 of the faces of the cube. Because in the latter place, that screensaver would also have been failing to lock the display.

And one of the behaviors which I have read about states that some computers can have problems going into and out of Suspend Mode, if there is active 3D rendering (on the GPU) taking place in the background. A Desktop Cube would be an example, where a 3D rendering pipeline has been inserted in such a way that it is not removed or deactivated when the effect seems passive, or when we try to put the laptop into Suspend Mode.

This has sometimes resulted in crashes, because the graphics driver was not up to the Debian version of Suspend Mode, or even because the Index Buffer on the GPU contained garbled data, because the VRAM of the GPU was losing power in Debian Suspend Mode.