Setting Up VESAFB Under GRUB2

In this earlier posting, I had written that I switched to the proprietary nVidia graphics-drivers on the computer I name ‘Plato’, but that, for the purposes of managing several console-sessions using

  • <Ctrl>+<Alt>+F1,
  • <Ctrl>+<Alt>+F7

my customary solution of setting up ‘uvesafb’ no longer works. What happens is that everything runs fine, until the command is given to switch back to the X-server session, at which point the system crashes. Thus, as I had first left it, console-sessions were available, but at some horribly-low default resolution (without ‘uvesafb’). This had to be remedied, and the way I chose to solve it was actually to use the older ‘vesafb’, which is not a 3rd-party frame-buffer ‘device’, but rather a set of kernel-instructions / kernel-settings, which can be specified in the file ‘/etc/default/grub’.

Because my computers use ‘GRUB2’, the most-elegant way to solve this problem would have been to put the following two lines into that file – or to uncomment and adapt the existing ones – like so:

GRUB_GFXMODE=1920x1080
GRUB_GFXPAYLOAD_LINUX="keep"

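In either case, it should be remembered that edits to ‘/etc/default/grub’ only take effect after the boot configuration has been regenerated, using the standard command (as root):

update-grub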
But, on ‘Plato’, this solution was not available, because 1920×1080 was not an available frame-buffer resolution. On this machine, I would first have had to set the highest-possible VESA resolution, and then to state whether to use “keep” or some other, available resolution, with which actually to start Linux.

This might have resulted in a ‘lightdm’ log-in screen set to an unsuitable resolution, which would have persisted until the user logged in, and the Plasma 5 desktop manager re-established his or her personal desktop-resolution – just because 1920×1080 was not available from GRUB.

Instead, there is a first command which reveals which frame-buffer resolutions are available on any one machine, after which it’s still possible today to give the kernel the option “vga=#”, using the exact code which was provided by that first command:
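What follows is a minimal sketch of how that can look, assuming that the ‘hwinfo’ package is installed, and that the specific mode-number shown below is hypothetical – the reader’s machine will report its own list (as root):

hwinfo --framebuffer

# A sample line of output (hypothetical):
#   Mode 0x0365: 1920x1080 (+1920), 24 bits

# Then, in /etc/default/grub, quote that exact code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet vga=0x0365"

After which, ‘update-grub’ needs to be run again.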


I’ve finally installed the proprietary nVidia graphics drivers.

In this earlier posting, I had written about the fact that it was a risky project, to switch from the open-source ‘Nouveau’ graphics-drivers – which are provided by a set of packages under Debian / Linux that contain the word ‘Mesa’ – to the proprietary ‘nVidia’ drivers. So risky, in fact, that for a long time I hesitated to do it.

Well, just this evening, I made the switch. Under Debian / Stretch – aka Debian 9 – this switch is relatively straightforward to accomplish. What we do is to switch to a text-session, using <Ctrl>+<Alt>+F1, and then kill the X-server. From there, we essentially just need to give the command (as root):

apt-get install nvidia-driver nvidia-settings nvidia-xconfig

Giving this command essentially allows the Debian package manager to perform all the post-install steps, such as blacklisting the Nouveau drivers. One should expect this command to do much work as a side-effect, as it pulls in quite a few dependencies.
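As a minimal sketch of the whole procedure – assuming, as on my machines, that ‘lightdm’ is the display manager:

# At the text-session obtained with <Ctrl>+<Alt>+F1, as root:
systemctl stop lightdm        # kills the X-server session
apt-get install nvidia-driver nvidia-settings nvidia-xconfig
systemctl reboot              # a clean reboot loads the new kernel modules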

(Edit 04/30/2018 :

In addition, the user must have up-to-date kernel- / Linux-headers installed, because installing the graphics driver also requires building DKMS kernel modules. But, it’s always my assumption that I’d have kernel headers installed myself. )
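Under Debian, making sure of those headers is a one-liner – a sketch, as root:

apt-get install linux-headers-$(uname -r)

# Or, on a stock amd64 kernel, the metapackage which also tracks future kernel updates:
apt-get install linux-headers-amd64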

When I gave this command the first time, apt-get suggested additional packages to me, which I wrote down on a sheet of paper. I then answered ‘No’ to the question of whether to proceed (without those), so that I could add all the suggested packages onto a new command-line.

(Update 05/05/2018 :

The additional, suggested packages which I mentioned above offer the ‘GLVND’ version of GLX. With nVidia, there are actually two ways to deliver GLX, one of which is an nVidia-centered way, and the other of which is a generic way. ‘GLVND’ provides the generic way. It’s also potentially more useful if, later on, we might want to install the 32-bit versions as well.

However, if we fail to add any other packages to the command-line, then the graphics-driver will load, but we won’t have any OpenGL capabilities at all. Some version of GLX must also be installed, and my package manager just happened to suggest the ‘GLVND’ packages.
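For reference, the Stretch package name which I recall for the GLVND flavour appears below, but the reader should treat it as an assumption, and verify the exact names on his own system first:

# Verify the exact package names available:
apt-cache search nvidia | grep -i glx

# The GLVND flavour of GLX (package name as I recall it; an assumption):
apt-get install libgl1-nvidia-glvnd-glx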

Without OpenGL at all, the reader will be very disappointed, especially since even his desktop-compositing will not be running – at first.
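A quick way to confirm that hardware OpenGL is actually working after the reboot – assuming the ‘mesa-utils’ package, which provides ‘glxinfo’:

glxinfo | grep -i "opengl renderer\|opengl version"

# Healthy output names the GeForce card, not the ‘llvmpipe’ software rasterizer.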

The all-nVidia packages – the ones which are not the ‘GLVND’ packages – offer certain primitive inputs from user-space applications, which ‘GLVND’ does not implement, because those instructions are not generically a part of OpenGL. Yet, certain applications do exist which require the non-‘GLVND’ versions of GLX to be installed, and I leave it up to the reader to find out which packages provide those – if the reader needs them – and to write their names on a sheet of paper, prior to switching drivers.

It should be noted that, once we’ve decided to install either the ‘GLVND’ or the other version of GLX, trying to change our minds and switch to the other version is yet another nightmare, which I have not even contemplated so far. I’m content with the ‘GLVND’ version of GLX. )

(Edited 04/30/2018 :

There is one aspect of installing up-to-date nVidia drivers which I should mention. The GeForce GTX460 graphics card does not support 3rd-party frame-buffers. These 3rd-party frame-buffer drivers would normally allow <Ctrl>+<Alt>+F1 to show us not only a text-session, but one with decent resolution. Well, with the older, legacy graphics-chips, what I’d normally do is use the ‘uvesafb’ frame-buffer drivers, just to obtain that. With modern nVidia hardware and drivers, this frame-buffer driver is incompatible. It even causes crashes because, with it, two drivers are essentially trying to control the same hardware.

Just this evening, I tried one more time to get ‘uvesafb’ working, to no avail – even though the same setup does work on the computer I name ‘Phoenix’. )

So, the way it looks for me now, the text-sessions are available, but only in very low resolution. They only exist for emergencies.
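For reference, disabling a kernel module such as ‘uvesafb’ under Debian usually amounts to something like the following sketch – my exact file-name here is hypothetical:

# /etc/modprobe.d/blacklist-uvesafb.conf  (hypothetical file-name)
blacklist uvesafb

# Then, as root, rebuild the initramfs, so that the module is no longer loaded at boot:
update-initramfs -u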

But this is the net result I obtained, after I had disabled the ‘uvesafb’ kernel module again:

dirk@Plato:~$ infobash -v
Host/Kernel/OS  "Plato" running Linux 4.9.0-6-amd64 x86_64 [ Kanotix steelfire-nightly Steelfire64 171013a LXDE ]
CPU Info        8x Intel Core i7 950 @ clocked at Min:1600.000Mhz Max:2667.000Mhz
Videocard       NVIDIA GF104 [GeForce GTX 460]  X.Org 1.19.2  [ 1920x1080 ]
Processes 262 | Uptime 1:16 | Memory 3003.9/12009.6MB | HDD Size 2000GB (6%used) | GLX Renderer GeForce GTX 460/PCIe/SSE2 | GLX Version 4.5.0 NVIDIA 375.82 | Client Shell | Infobash v2.67.2
dirk@Plato:~$

dirk@Plato:~$ clinfo | grep units
  Max compute units                               7
dirk@Plato:~$ clinfo | grep multiple
  Preferred work group size multiple              32
dirk@Plato:~$ clinfo | grep Warp
  Warp size (NV)                                  32
dirk@Plato:~$


So what this means in practice is that I now have OpenGL 4.5 on the computer named ‘Plato’, as well as a fully-functional install of ‘OpenCL’ and ‘CUDA’, contrary to what I had according to this earlier posting.

Therefore, GPU-computing will not just exist in theory for me now, but also in practice.

And this displays that the graphics card on that machine ‘only’ possesses 7×32 = 224 cores after all – 7 compute units, times the warp size of 32 – not the 7×48 which I had expected earlier, according to a Windows-based tool that is no longer installed.

(Updated 04/29/2018 … )


How SDL Accelerates Video Output under Linux.

What we might know about the Linux X-server is that it offers the pure X-protocol, to render such features efficiently to the display as text with fonts, simple GUI-elements, and small bitmaps such as icons… But then, when moving pictures need to be sent to the display, we need extensions, which serious Linux-users take for granted. One such extension is the Shared-Memory extension (‘MIT-SHM’).

Its premise is that the X-server shares a region of RAM with the client application, into which the client-application can draw pixels, which the X-server then transfers to Graphics Memory.

For moving pictures, this offers one way in which they can also be ‘accelerated’, because that memory-region stays mapped, even when the client-application redraws it many times.

But this extension does not make significant use of the GPU, only of the CPU.
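The reader can check whether his own X-server advertises this extension – assuming the ‘x11-utils’ package, which provides ‘xdpyinfo’:

xdpyinfo | grep -i "MIT-SHM"

# If the list of extensions includes MIT-SHM, shared-memory transfers are available.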

And so there exists something called SDL, which stands for ‘Simple DirectMedia Layer’. And one valid question we may ask ourselves about this set of libraries is, how it achieves a speed improvement, if it’s only installed on Linux systems as a set of user-space libraries, not drivers.

(Updated 10/06/2017 : )
