About the Black Borders Around some of my Screen-Shots

One practice I have is to take simple screen-shots of my Linux desktop, using the KDE-compatible utility named ‘KSnapshot’. It can usually be activated by just tapping the ‘Print-Screen’ key, and if not, KDE can be customized with a hot-key combination to launch it just as easily.

If I use this utility to take a snapshot of one single application-window, the resulting screen-shot may or may not have a wide, black border. And the appearance of this border may confuse my readers.

The reason this border appears has to do with the fact that I have Desktop Compositing activated, which on my Linux systems is based on a version of the Wayland Compositor that has been built specifically to work together with the X-server.

One of the compositing effects I have enabled is to draw a bluish halo around the active application-window. Because this is done as much as possible at the expense of GPU power and not CPU power, it has its own way of working, specific to OpenGL 2 or OpenGL 3. Essentially, the application draws its GUI-window into a specially-assigned memory region, called a ‘drawing surface’, and not directly to the visible screen-area. Instead, the drawing surface of any one application window is taken by the compositor to be a Texture Image, just as 3D Models have Texture Images. And then the way Wayland organizes its scene essentially just simplifies the computation of coordinates. Because OpenGL versions are optimized for 3D, they have specialized ways to turn 3D coordinates into 2D screen-coordinates, which the Wayland Compositor bypasses for the most part, by feeding the GPU some simplified matrices, where the GPU would be able to accept much more complex ones.
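The simplified matrices mentioned here can be pictured as a flat, orthographic-style mapping from desktop pixel coordinates to OpenGL clip space. As a rough sketch of that idea (my own illustration, not the compositor's actual code):

```python
def ortho2d(width, height):
    """A simplified 'projection' matrix, of the kind a compositor might
    feed the GPU: it merely maps desktop pixel coordinates (y pointing
    down) onto OpenGL clip space, with no 3D perspective involved."""
    return [
        [2.0 / width, 0.0,           0.0, -1.0],
        [0.0,        -2.0 / height,  0.0,  1.0],
        [0.0,         0.0,           1.0,  0.0],
        [0.0,         0.0,           0.0,  1.0],
    ]

def transform(m, point):
    """Apply a 4x4 matrix to a 2D point given as (x, y)."""
    x, y = point
    vec = (x, y, 0.0, 1.0)
    return tuple(sum(m[r][c] * vec[c] for c in range(4)) for r in range(4))

# On a 1920x1080 desktop, the top-left pixel maps to clip-space (-1, +1),
# and the bottom-right corner to approximately (+1, -1):
m = ortho2d(1920, 1080)
print(transform(m, (0, 0))[:2])        # (-1.0, 1.0)
print(transform(m, (1920, 1080))[:2])  # ≈ (1.0, -1.0)
```

The GPU would accept arbitrary perspective matrices here; the compositor simply has no use for them.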

In the end, in order for any one application-window to receive a blue halo, indicating that it is the one active application in the foreground, its drawing surface must be made larger to begin with than what the window-size alone would require. The blue halo then exists statically within this drawing-surface, but outside the normal set of coordinates of the drawn window.

The halo appears over the desktop layout, and over other application windows, through the simple use of alpha-blending on the GPU, using a special blending-mode:

  • The inverse of the per-texel alpha determines by how much the background should remain visible.
  • If the present window is not the active window, the background simply replaces the foreground.
  • If the present window is the active window, the two color-values add, causing the halo to seem to glow.
  • The CPU can decide to switch the alpha-blending mode of an entity, without requiring that the entity be reloaded.

KSnapshot sometimes recognizes that, if instructed to take a screen-shot of one window, it should copy only a sub-rectangle of the drawing surface. But in certain cases the KSnapshot utility does not recognize the need to do this, and just captures the entire drawing surface, minus whatever alpha-channel the drawing surface might have, since screen-shots are supposed to be without alpha-channels. So the reader will not be able to make out the effect, because by the time a screen-shot has been saved to my hard-drive, it is without any alpha-channel.

And there are two ways I know of, by default, to reduce an image that has an alpha-channel to one that does not:

  1. The non-alpha output-image can show the input image as though in front of a checkerboard-pattern, taking its alpha into account, or
  2. The non-alpha output-image can show the input image as though in front of a default color, such as ‘black’, again taking its alpha into account.

This would be decided by a library, resulting in a screen-shot that has a wide black border around it. The border represents the maximum extent by which static, 2D effects can be drawn in – on the assumption that those effects were defined on the CPU, and not on the GPU.
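The second flattening strategy can be sketched like this (a hypothetical flatten function of my own, not any particular library's actual code), which also shows why the border comes out black:

```python
def flatten(pixel, background):
    """Composite one RGBA pixel over an opaque background color,
    discarding the alpha-channel in the process."""
    r, g, b, a = pixel
    return tuple(a * c + (1.0 - a) * bc
                 for c, bc in zip((r, g, b), background))

BLACK = (0.0, 0.0, 0.0)

# A fully-transparent texel from the enlarged drawing-surface,
# flattened against black, comes out pure black -- hence the border:
print(flatten((0.2, 0.4, 1.0, 0.0), BLACK))  # (0.0, 0.0, 0.0)

# A fully-opaque texel keeps its own color:
print(flatten((0.2, 0.4, 1.0, 1.0), BLACK))  # (0.2, 0.4, 1.0)
```

Under the first, checkerboard strategy, the background color would simply alternate per-tile, e.g. between white and grey.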

So, just as the actual application could be instructed to draw its window into a sub-rectangle of the whole desktop, it can be instructed to draw its window into a sub-rectangle of its assigned drawing-surface. And with this effect enabled, this is indeed how it’s done.
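The sub-rectangle arithmetic that a screen-shot utility would need is simple; as a sketch (my own, with a hypothetical halo margin of 16 pixels, not a value taken from any real compositor):

```python
def window_subrect(surface_w, surface_h, margin):
    """Given an enlarged drawing-surface and the static margin reserved
    around it for effects such as the halo, return the (x, y, w, h)
    sub-rectangle that the actual window occupies -- the region a
    screen-shot utility ought to copy."""
    return (margin, margin,
            surface_w - 2 * margin,
            surface_h - 2 * margin)

# A 832x632 surface with a hypothetical 16-pixel halo margin
# holds an 800x600 window:
print(window_subrect(832, 632, 16))  # (16, 16, 800, 600)
```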

Dirk


I now own a Kindle PaperWhite (eBook Reader).

I know some friends, about my own age, who would swear that, as long as they own a tablet on which the Amazon Kindle app can be installed and run, they see no need for a physical Kindle Device. And the main reason seems to be their old-school thinking, that one highly-versatile device need not be replaced by more-specialized devices, whose features are a subset of its own.

( Updated below on 07/30/2017 … )

My friends date back to the era before Lithium-Ion batteries, when the more-versatile devices were simply plugged into an A/C outlet and assumed to run indefinitely. They would probably ask me, ‘You own a more-versatile Tablet. Why did you go ahead and buy a Kindle Device?’

I can think of two answers:

  1. I want to have the technology, and
  2. I actually want the leisure of being able to read an entire book, and to relax while doing so.

A long time ago, essentially all forms of 2D displays were active displays. There existed LED and LCD displays, which had in common that they had their own light-sources, which during full sunlight needed to overpower the sunlight, in order to define white as anything brighter than black. While the origin of LCDs was to overcome this – during the last century – the fact that LCDs needed to be transformed into high-res, full-color displays meant that they needed backlights, approximately 50% of whose light-energy they did not absorb and allowed through, for typical images. Their claim to being ‘transflective’ was long on the transmissive, but short on the reflective. When this sort of ‘improved’ LCD was required to act as a reflective display, it generally scattered back less than 50% of the incident light-energy, and with the typical glass shields in front of them, the glare during bright light was made even worse.

As some of my readers already understand, when the Kindle was first invented, it also pioneered the use of a kind of passive display, which was at some point named ‘e-Paper’. It’s quite distinct from LCD-technology, in that in reflective mode, the pixels that are meant to be white actually scatter back more than 80% of the incident light. And black pixels are truly dark. So the surface is as readable by default as a sheet of paper would be, with ink printed on it. And it requires about as much battery-charge to run as a sheet of paper does (exaggeration intentional here), since it’s not generally required to act as a light-source.

Now, in the way some people think, this fact might get obscured by the fact that modern Kindles employ a kind of e-Paper with an additional backlight. In theory, I can turn the backlight completely down, to conserve battery-life as much as possible, at which point the bright pixels take on a slightly yellowish tint, much like older, browned paper would. But the text is just slightly more readable when there is non-zero backlight. And, if I am ever to read in a dark room, I’ll need non-zero backlight for sure.

Because I’m a slightly older man, I also have slightly poorer vision than I did as a teenager, and so I think I actually need slightly more backlight (a level numbered ‘10’) than an average teenager would need, to read in a partially-lit environment. In a completely dark room, I’d turn up the backlight even higher than that.

To be completely up-to-date about it, the back-lit Kindles are not even the most modern, because by now there exist Kindles with e-Paper and a Front-Light. But on my own terms, I actually consider the slightly more-basic Kindles, such as the PaperWhite, to be better than the most-recent models, which offer endlessly more features, and which would consume more battery-charge than mine does.

On an Android tablet, the battery actually prevents us from reading anything for more than a few hours – maybe 2 or 3, tops – at a time. This used to stand in the way of my rediscovering reading. Now, a Kindle will allow me to read more than a whole book, at whatever time of day seems convenient, and without interrupting me with a depleting battery.


The PC Graphics Cards have specifically been made Memory-Addressable.

Please note that this posting does not describe

  • Android GPUs, or
  • Graphics Chips on PCs and Laptops, which use shared memory.

I am writing about the big graphics cards which power-users and gamers install into their PCs, which have a special bus-slot, and which cost as much in themselves as some whole computers do.

The way those are organized physically, they possess one or more GPUs, plus DDR Graphics RAM, which loosely correspond to the CPU and RAM on the motherboard of your PC.

The GPU itself contains registers, which are essentially of two types:

  • Per-core, and
  • Shared

When coding shaders for 3D games, the GPU-registers do not fulfill the same function as addresses in GRAM. The addresses in Graphics RAM typically store texture images, vertex arrays in their various formats, and index buffers, as well as frame-buffers for the output. In other words, the GRAM typically stores model-geometry and 2D or 3D images. The registers on the GPU are typically used as temporary storage-locations for the work of shaders, which are, again, separately loaded onto the GPU after they are compiled by the device-drivers.

A major feature which the designers of graphics cards have given them is to extend the system memory of the PC onto the graphics card, in such a way that most of the card’s memory actually has hardware-addresses as well.

This might not include the GPU-registers that are specific to one core, but I think it does include the shared GPU-registers.


“Hardware Acceleration” is a bit of a Misnomer.

It gets mentioned quite frequently that certain applications offer the user services with “Hardware Acceleration”. This terminology can in fact be misleading – in a way that has no real consequences – because computations that are hardware-accelerated are still executed according to software, which has either been compiled or assembled into micro-instructions. Only, those micro-instructions are not executed on the main CPU of the machine.

Instead, those micro-instructions are executed either on the GPU, or on some other coprocessor, which provides the accelerating hardware.

Often, the compiling of code meant to run on a GPU – even though it is, in theory, the same as regular software – has its own special considerations. For example, this code often consists of only a few micro-instructions, over which great care must be taken to make sure that they run correctly on as many GPUs as possible. I.e., when we are coding a shader, KISS is often a main paradigm. And the possibility crops up often in practice, that even though the code is technically correct, it does not run correctly on a given GPU.

I do not really know how it is with SIMD coprocessors.

But this knowledge would be useful to have, in order to understand this posting of mine.

Of course, OpenCL and CUDA represent a major exception to what I just wrote.
