Why I don’t just compile binaries for 32-bit Linux.

One fact which I’ve posted about before is that, from time to time, I post source code on this blog and additionally try to compile it into binaries, which will run on the computers of people who do not know how to compile it themselves. In this context, I apologize to Mac users. I do not have the tools at my disposal to offer ‘OS/X’ or ‘macOS’ binaries as well. But, I will sometimes offer 32-bit and 64-bit Windows executables (=binaries).

What some people might wonder is, ‘Why does Dirk not also compile 32-bit Linux executables, in addition to 64-bit Linux executables?’

And there is a reason, beyond sheer laziness.

The way it works with 64-bit Windows systems is that each of them has a 32-bit Windows subsystem, which allows it to run 32-bit applications for backwards compatibility. This 32-bit subsystem is also one reason why it’s generally possible to compile C++ programs for 32-bit Windows targets on any 64-bit Windows computer (that has the tools in place for 64-bit Windows targets in the first place).

Unfortunately, the Linux world is not as rosy.

Some Linux systems – actually, most 64-bit Linux systems, I think – are what’s called “Multi-Arch”, which means that a set of 32-bit libraries can be installed in addition to the full set of 64-bit libraries. The 32-bit libraries are usually installed as dependencies of specific 32-bit executables.

The way the world of compiling software works is that, after code has been compiled into Object Files, these Object Files – which are already binary in content – must be linked against Libraries, either static or shared, before an executable is built.

Hence, the compiler flag ‘-m32’ will tell a Linux computer to force compilation of object code to ‘the 32-bit, Intel, i386 architecture’, as it’s sometimes referred to, even if the CPU isn’t ultimately an Intel. But i386-architecture Object Files must then also be linked against 32-bit Libraries that are actually present.
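As a concrete sketch – assuming a Debian-family, Multi-Arch host, where the package names are my assumption and may differ elsewhere – enabling 32-bit compilation looks roughly like this:

```shell
# Enable the 32-bit (i386) architecture alongside the native one:
sudo dpkg --add-architecture i386
sudo apt-get update

# Install 32-bit compiler and library support:
sudo apt-get install gcc-multilib g++-multilib

# Compile a trivial program to the i386 architecture:
g++ -m32 hello.cpp -o hello32

# 'file' should then report: ELF 32-bit LSB executable, Intel 80386 ...
file hello32
```

This only gets the compiler that far; the linking problem described below remains.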

Here’s what some people may not know about the Linux world and its Multi-Arch (userland) members: The number of 32-bit libraries they will ultimately have installed will usually not even be one tenth as many as the native 64-bit Libraries, which most of their computers run on (if those are in fact Multi-Arch, 64-bit PCs). Hence, if a program simply consists of the “Hello World!” example, nothing will go wrong.

But, if the software project needs to be linked against 40 (+) libraries, then chances are that the host computer has, maybe, the 32-bit versions of 4 of those on hand…
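One way to see this for oneself – here, ‘hello32’ and the Qt library shown are hypothetical names of mine – is to run ‘ldd’ against a 32-bit executable; any library the host lacks in 32-bit form shows up as “not found”:

```shell
# ldd lists the shared libraries an executable wants at run-time:
ldd ./hello32

# Hypothetical output, on a host that has the 32-bit C library
# installed, but not a 32-bit Qt:
#   linux-gate.so.1 (0xf7f3d000)
#   libQt5Widgets.so.5 => not found
#   libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xf7ce0000)
```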

Further, I use certain automated tools, such as ‘linuxdeployqt’, which re-links executables that have already been linked against the 64-bit libraries on my own computer, so that instead, they will be linked as autonomously as possible against libraries in a generated ‘AppImage’. I cannot rely on this tool being Multi-Arch as well.

And so, in certain ways, when a Linux computer is serving as a build platform, it can be harder, not easier, than it is with Windows, just to target some other platform. More typically, that Linux computer will be installed with the same platform as the one it’s targeting.

Sorry again.

Now, an exception exists, where Debian Maintainers have cross-compiled many of their packages to run on novel architectures, such as the ARM CPUs that power most Android devices. This is a very tedious and complex process, by which those maintainers first have to cross-compile the libraries, resulting in library packages, and then link each executable against its compatible set of compiled libraries, resulting in ‘end-user packages’. (:1)

 

If my readers truly only have 32-bit Linux computers and want to run my executables, and if I provided a 32-bit Windows executable, then usually, that executable will run just fine ‘under Wine’. One could try that.

 

(Updated 8/04/2021, 18h15… )


How configuring VirtualBox to use Large Pages is greatly compromised under Linux.

One of the things which Linux users will often do is to set up a Virtual Machine such as VirtualBox, so that a legitimate, paid-for instance of Windows can run as a Guest System to our Linux Host System. And, because of the way VMs work, there is some possibility that getting them to use “Large Pages” – which under Linux have simply been named “Huge Pages” – could improve overall performance. This is mainly because, without Huge Page support, the VM needs to allocate Memory Maps which are subdivided into 512 standard pages, each of which has a standard size of 4KiB. What this means is that, in practice, 512 individual memory allocations usually take place wherever 2MiB of memory is to be cached and remapped. Such a line of memory can also end up getting saved to the .VDI File – in the case of VirtualBox – from 512 discontiguous pieces of RAM.

The available sizes of Huge Pages depend on the CPU and, in the case of the x86 / x86_64 CPUs, they tend to be either 2MiB or 1GiB in size, where 2MiB is already quite ambitious. One way to set this up is summarized in the following little snip of commands, which need to be given as user:

 


VBoxManage modifyvm "PocketComp_20H2" --nestedpaging on
VBoxManage modifyvm "PocketComp_20H2" --largepages on

 

In my example, I’ve given these commands for the Virtual Machine instance named ‘PocketComp_20H2’, and, if the CPU is actually an Intel with ‘VT-x’ (hardware support for virtualization), large-page or huge-page support should be turned on. Yet, like several other people, what I obtained next in the log file for the subsequent session was the following line of output:

 


00:00:31.962754 PGMR3PhysAllocateLargePage: allocating large pages takes too long (last attempt 2813 ms; nr of timeouts 1); DISABLE

 

There exist users who searched the Internet in vain for an explanation of why this feature would not work. I want to explain here what goes wrong with most simple attempts. This is not really an inability of the platform to support the feature, as much as it’s an artifact of how the practice of Huge Pages under Linux differs from the theoretical, hypothetical way in which some people might want to use them. What will happen, if Huge Pages are to be allocated after the computer has started fully, is that Linux will be excruciatingly slow in doing so at the request of the VM, because some RAM would need to be defragmented first.

This is partially due to the fact that VirtualBox will want to map all the virtual RAM of the Guest System using them, and not the .VDI File. (:1)  I.e., if a very modest Guest System has 4GiB of (virtual) RAM, it implies that 2048 Huge (2MiB) Pages will be needed, and those will take several minutes to allocate. If that Guest System is supposed to have larger amounts of RAM, the problem just gets worse. And, if the VM fails to allocate them within about 2 seconds of requesting them, it aborts the attempt and continues with standard pages.
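The arithmetic behind that count is simple enough to sketch in a line of shell – the numbers here just restate the 4GiB example above:

```shell
# Number of 2 MiB huge pages needed to back a guest's virtual RAM:
GUEST_RAM_MIB=4096    # a 4 GiB guest
HUGEPAGE_MIB=2        # typical x86_64 huge-page size

echo $(( GUEST_RAM_MIB / HUGEPAGE_MIB ))    # prints 2048
```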

What Linux will offer as an alternative behaviour is to allocate a fixed number of Huge Pages on boot-up – when the memory is not yet very fragmented – and then to allow any applications which ‘know how’, to help themselves to some of those Huge Pages. Thus, if 128 Huge Pages are to be preallocated, then the following snip shows roughly how to do so, assuming a Debian distro. (:2)  Lines that begin with hash-marks (‘#’) are commands that would need to be given as root. I estimate this number of Huge Pages to be appropriate for a system with 12GiB of RAM:

 


# groupadd hugetlbfs
# adduser dirk hugetlbfs
# getent group hugetlbfs

hugetlbfs:x:1002:dirk


# cd /etc
# edit text/*:sysctl.conf

vm.nr_hugepages = 128
vm.hugetlb_shm_group = 1002

# edit text/*:fstab

hugetlbfs       /hugepages      hugetlbfs mode=1770,gid=1002        0       0


# ulimit -H -l

(...)


# cd /etc/security
# edit text/*:limits.conf

@hugetlbfs      -       memlock         unlimited



 

The problem here is that, for a Guest System with 4GiB of virtual RAM to launch, 2048 Huge Pages would need to be preallocated, not 128. To make things worse, Huge Pages cannot be swapped out! They remain locked in RAM. This means that they also get subtracted from the maximum number of KiB that a user is allowed to lock in RAM. In effect, 4GiB of RAM would end up being tied up, not doing anything useful, until the user actually decides to start his VM (at which point, little additional RAM should be requested by VirtualBox).
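Whether the preallocation actually took hold, after a reboot, can be read out of ‘/proc/meminfo’ – the counter names are standard, though the numbers shown are only what the configuration above would be hoped to produce:

```shell
# The kernel's huge-page pool, on any kernel built with huge-page support:
grep '^HugePages' /proc/meminfo
# Hoped-for output, with the configuration above (abridged):
#   HugePages_Total:     128
#   HugePages_Free:      128

# And the per-session lock limit, which the 'memlock' line in
# limits.conf was meant to raise:
ulimit -l
```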

Now, there could even exist Linux computers which are set up on that set of assumptions. But those Linux boxes do not count as standard personal, desktop computers.

If the user wishes to know how slow Linux tends to be at actually allocating some number of Huge Pages, after it has started to run fully, then he or she can just enter the following commands, after configuring the above, but before rebooting. Normally, a reboot is required after what is shown has been configured, but instead, the following commands could be given in a hurry. My username ‘dirk’ will still not belong to the group ‘hugetlbfs’ until I log back in…

 


# sync ; echo 3 > /proc/sys/vm/drop_caches
# sysctl -p

 

I found that, on a computer which had run for days, and whose RAM had become very fragmented, the second command took roughly 30 seconds to execute. Imagine how long it might take, if 2048 Huge Pages are indeed to be allocated, instead of 128.


 

What some people have researched on the Web – again, to find that nobody seems to have the patience to provide a full answer – is whether, given that the mount-point for the HugeTLBFS is ‘/hugepages’ as indicated above – a mount-point which few applications today would still try to use – that mount-point could just be used as a generic Ramdisk. Modern Linux applications simply use “Transparent Huge Pages”, not access to this mount-point as a Ramdisk. And the real answer to this hypothetical question is No…
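As a small aside – assuming a kernel built with Transparent Huge Page support, which most desktop kernels are – the current THP policy can simply be read from sysfs:

```shell
# The bracketed word is the active policy (often 'madvise' or 'always'):
cat /sys/kernel/mm/transparent_hugepage/enabled
# e.g.:  always [madvise] never
```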

(Updated 5/20/2021, 8h20… )

 


Kernel Updates Today, Downtime – VirtualBox Maybe Affected

I take the unusual approach of hosting my Web-site, and this blog, on my personal computer at home. This implies that the visibility of this blog on the Web is only as good as the reliability of my PC, which I name ‘Phoenix’. Along with the computer I name ‘Plato’, Phoenix received a kernel-update today, which required a reboot.

The kernel-update took place uneventfully.

But, my site and blog would not have remained visible, from 20h20 until about 20h30 this evening.

I apologize for any inconvenience to my readers.

(Update 05/02/2018 … )

Contrary to first appearances, this kernel-update did seem to have a side-effect: On one of my computers, it prevented the VirtualBox kernel-modules from being built…
