Getting FreeFem++ to display impressive visuals under Linux.

One of my Computing habits is to acquire many frameworks for performing Scientific or Analytical Computing, even though, in all honesty, I have little practical use for them most of the time. They are usually of some academic curiosity to me.

Some of the examples familiar to me are ‘wxMaxima’ (which can be installed under Linux directly from the package manager), ‘Euler Math Toolbox’ (which, under Linux, is best run using Wine), and ‘SageMath’ (which IMHO is best installed under Linux as a lengthy collection of packages from the standard repositories, using the package manager, and which includes certain ‘Jupyter’ packages). In addition, I’d say that ‘Python’ can also be a bit of a numerical toolbox, beyond what most programming languages such as C++ can be, while remaining primarily a programming language; under Linux, it too is best installed as a lengthy collection of packages through the package manager. And a single important reason is the fact that a Python script can perform arbitrary-precision integer arithmetic natively and, with a small package named ‘python3-gmpy2’, can also perform arbitrary-precision floating-point arithmetic easily. If a Linux user wanted to do the same using C, he or she would need to learn the library ‘GMP’ first, and that’s not an easy library to use.

Also, there exists IPython, although I don’t know how to use that well. AFAICT, this consists mainly of an alternative shell for interacting with Python, which makes it available through the Web interface called “Jupyter”. Under Debian Linux, it is best installed as the packages ‘ipython3’, ‘python3-ipython-genutils’, ‘python3-ipykernel’, ‘python3-nbconvert’ and ‘python3-notebook’, although simply installing those packages does not provide a truly complete installation… Just as one would want a ‘regular’ Python installation to have many additional packages, one would want ‘IPython’ to benefit from many additional packages as well.
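To illustrate that point about Python (assuming a stock ‘python3’ is present; the second command is only meaningful if ‘python3-gmpy2’ has actually been installed, and is skipped otherwise):

```shell
# Arbitrary-precision integer arithmetic, native to Python:
python3 -c 'print(2**100)'
# Prints: 1267650600228229401496703205376

# Arbitrary-precision floating-point arithmetic, via gmpy2 (skipped if absent):
python3 -c 'import gmpy2; gmpy2.get_context().precision = 200; print(gmpy2.sqrt(2))' \
    2>/dev/null || true
```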

But then, that previous paragraph also touches on an important issue. Each Scientific Computing platform I learn represents yet another scripting language I’d need to learn, and if I had to learn 50 scripting languages, ultimately my brain capacity would become diluted, so that I’d master none of them. So, too much of a good thing can actually become a bad thing.

As a counter-balance to that, it can attract me to a given Scientific Computing platform if it can be made to output good graphics. And another Math platform which can is called “FreeFem”. What is it? It’s a platform for solving Partial Differential Equations. Those equations need to be distinguished from simple derivatives, in that they are generally equations in which a derivative of a variable is stated on one side (the “left, bilinear side”), but in which a non-derivative function of the same variable is stated on the other (the “right side”). What this does is make the equation a kind of recursive problem, the complexity of which really exceeds that of simple integrals. (:2)  Partial Differential Equations, or ‘PDE’s, are to multi-variable Calculus as Ordinary Differential Equations, or ‘ODE’s, are to single-variable Calculus. Their being “partial” derives from their derivatives being partial derivatives.

In truth, Calculus at any level should first be studied at a University, before computers are used as a simplified way of solving its equations.

FreeFem is a computing package that solves PDEs using the “Finite Element Method”. This is a fancy way of saying that the software foregoes finding an exact, analytical solution-set, instead providing an approximation; in return, it will guarantee some sort of solution in situations where an exact, analytical solution-set could not even be found. There are several good applications. (:1)

But I just found myself following a false idea tonight. In search of getting FreeFem to output its results graphically, instead of just running in text mode, I wasted a lot of my time custom-compiling FreeFem, with linkage to my many libraries. In truth, such custom-compilation is only useful under Linux if the results are also going to be installed to the root file-system, where the libraries of the custom-compile will also be linked to at run-time. Otherwise, a lot of similarly custom-compiled software simply won’t run.

What needs to be understood about FreeFem++ – the executable, not the libraries – is that it’s a compiler. It’s not an application with a GUI, from which features could be explored and invoked. And this means that a script which FreeFem can execute is written much like a C++ program, except that it has no ‘main()’ function, and isn’t entirely procedural in its semantics.

And all that a FreeFem++ script needs, to produce a good 2D plot, is the use of the ‘plot()’ function! The example below shows what I mean:
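What follows is a minimal sketch – essentially the classic Poisson-equation-on-a-disk example from the FreeFem documentation, not anything elaborate of my own – that ends in exactly such a ‘plot()’ call:

```
// Unit circle as the domain boundary:
border C(t=0, 2*pi){ x=cos(t); y=sin(t); }

// Triangulated mesh, with 50 points along the boundary:
mesh Th = buildmesh(C(50));

// Finite-Element space of piecewise-linear (P1) elements:
fespace Vh(Th, P1);
Vh u, v;

// Weak form of -Laplace(u) = 1, with u = 0 on the boundary:
solve Poisson(u, v)
    = int2d(Th)( dx(u)*dx(v) + dy(u)*dy(v) )
    - int2d(Th)( v )
    + on(C, u=0);

// And this one function-call produces the 2D visual:
plot(u, fill=true, wait=true);
```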




I was able to use the IDE which I’d normally use to write my C++ programs, and which is named “Geany”, to produce this – admittedly, plagiarized – visual. The only thing I needed to change in my GUI was the command used to execute the program, without compiling it first. I simply changed that command to ‘FreeFem++ "./%f"’.

Of course, if the reader wants in-depth documentation on how to use this additional scripting language, then This would be a good link to find it at, provided by the developers of FreeFem themselves. Such in-depth information will be needed before FreeFem will solve any PDEs which may come up within the course of the reader’s life.

But what is not really needed would be to compile FreeFem with support for many back-ends, or to have it display itself as a GUI-based application. In fact, the standard Debian version was compiled by its package maintainers to have as few dependencies as possible (‘X11’), and thus to offer only a minimal back-end.

(Updated 7/14/2021, 21h45… )


What’s what with the Aqsis Ray-Tracer, and Qt4 project development under Linux.

One of the subjects I once wrote about was that I had a GUI installed on a Debian 8 / Jessie computer, which was called ‘Ayam’, and that it was mainly a front-end for 3D / Perspective graphics development, using the Aqsis Ray-Tracer (v1.8). And a question which some of my readers might have by now could be, why this feature seems to have slipped out of the hands of the Linux software repositories, and therefore out of the hands of users of up-to-date Linux systems. Progress is supposed to go forwards, not backwards.

In order to answer that question, I feel that I must provide information which starts from completely the opposite end. There exists a ‘GUI Library’, i.e., a library of function-calls that can be used by application programmers to give their applications a GUI, but without having to do much of the coding that actually generates the GUI itself. The GUI Library in question is called “Qt”. It’s a nice library but, as with so much other software, there are by now the major Qt versions 4, 5 and 6. While in the world at large, Qt4 is considered to be deprecated, and Qt6 is the bleeding edge under development, Linux software development has largely still been based on Qt4 and Qt5. So what, you may ask?

Firstly, while it is still possible to develop applications using Qt4 on a Debian 8 or Debian 9 system, switching to it is not as easy under Linux as it is under Windows. When using the ‘Qt SDK’ under Windows, one installs whichever ‘Kit’ one wants, and then compiles one’s Qt project with that Kit. There could be a Qt4, a Qt5 and a Qt6 Kit, and they’d all work. In fact, there is more than one of each…

Under Linux, if one wants to switch to compiling code that was still written with Qt4, one actually needs to switch the configuration of one’s entire computer to Qt4 development, by installing the package ‘qt4-default’. This will replace the package ‘qt5-default’ and set everything up for (backwards-oriented) Qt4 development. Yet, Qt4 code can still be written and compiled that way.

And so, my reader might ask, what does this have to do with Aqsis, and with potentially not being able to use it anymore?

Well, there is another resource known as ‘Boost’, which is a collection of libraries that do many sophisticated things with data, which do not necessarily involve any sort of GUI. Aqsis happens to depend on Boost. But there was one component of Aqsis, the program ‘piqslr’, the sole purpose of which was to provide a quick preview of a small part of a 3D scene, so that an artist could make small adjustments to that scene, without having to re-render the whole scene every time. Such features might seem minor at first glance, but in fact they’re major, just as it’s major to have a GUI to gestalt the scene, which in turn controls a ray-tracing program in the background, which may not have a GUI.

Well, ‘piqslr’ was the one program that required both Boost and Qt4. And Qt4 is no longer receiving any sort of development time; Qt4’s present version is final. Also, all versions of Qt need to feed their C++ source code through what they call a “Meta-Object Compiler” (their ‘MOC’). And ‘piqslr’ needed to have the header files for Boost included in its source code, as well as being programmed in Qt4.

Qt4’s MOC was still able to parse the header files of Boost v1.55 (which was standard with Debian 8) without getting lost in them. But if source code includes the corresponding header files for Boost v1.62 (which became standard with Debian 9), the Qt4 MOC just spits out a tangled snarl of error messages. I suppose that that is what one gets, when one wants to modify the basic way C++ behaves, even in such a minor way.

And so, what modern versions of Aqsis, as included in the repositories, offer are the programs that do the actual ray-tracing, but no longer that previewer, which had the program-name ‘piqslr’. I suppose this development next invites the question of who is supposed to do something about it. And the embarrassing answer to that question is that in the world of Open-Source Software, it’s not the Debian – or any other Linux – developers who should be patching that problem, but rather the developers of Aqsis itself. They could, if they had the time and money, just rewrite ‘piqslr’ to use Qt5, which can handle up-to-date Boost headers just fine. But instead, the Aqsis developers have been showing the world a Web-page for the past 5 years, which simply makes the vague statement that a completely new version is around the corner – a version which will then also no longer be compatible with the old one.

(Updated 5/21/2021, 19h15… )


How configuring VirtualBox to use Large Pages is greatly compromised under Linux.

One of the things which Linux users will often do is to set up a Virtual Machine such as VirtualBox, so that a legitimate, paid-for instance of Windows can run as a Guest System to our Linux Host System. And, because of the way VMs work, there is some possibility that getting them to use “Large Pages” – which under Linux have simply been named “Huge Pages” – could improve overall performance. This is mainly because, without Huge Page support, the VM needs to allocate memory maps which are subdivided into standard pages, each with a standard size of 4KiB, so that in practice, 512 individual memory allocations usually take place where the caching and remapping cover 2MiB of memory. Such a line of memory can also end up getting saved to the .VDI File – in the case of VirtualBox – from 512 discontiguous pieces of RAM.

The available sizes of Huge Pages depend on the CPU and, in the case of the x86 / x86_64 CPUs, they tend to be either 2MiB or 1GiB in size, where 2MiB is already quite ambitious. One way to set this up is summarized in the following little snip of commands, which need to be given as user:


VBoxManage modifyvm "PocketComp_20H2" --nestedpaging on
VBoxManage modifyvm "PocketComp_20H2" --largepages on


In my example, I’ve given these commands for the Virtual Machine instance named ‘PocketComp_20H2’, and, if the CPU is actually an Intel with ‘VT-x’ (hardware support for virtualization), large-page or huge-page support should be turned on. Yet, like several other people, what I obtained next, in the log file for the subsequent session, was the following line of output:


00:00:31.962754 PGMR3PhysAllocateLargePage: allocating large pages takes too long (last attempt 2813 ms; nr of timeouts 1); DISABLE


There exist users who have searched the Internet in vain for an explanation of why this feature would not work. I want to explain here what goes wrong with most simple attempts. This is not really an inability of the platform to support the feature, as much as it’s an artifact of how the practice of Huge Pages under Linux differs from the theoretical, hypothetical way in which some people might want to use them. What will happen, if Huge Pages are to be allocated after the computer has started fully, is that Linux will be excruciatingly slow in doing so at the request of the VM, because some RAM would need to be defragmented first.

This is partially due to the fact that VirtualBox will want to map all the virtual RAM of the Guest System using them, and not the .VDI File. (:1)  I.e., if the very modest Guest System has 4GiB of (virtual) RAM, it implies that 2048 Huge (2MiB) Pages will be needed, and those will take several minutes to allocate. If that Guest System is supposed to have larger amounts of RAM, the problem just gets worse. And if the VM fails to allocate them within about 2 seconds of requesting them, it aborts the attempt and continues with standard pages.
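The arithmetic behind that page count is straightforward, and can be reproduced as plain shell arithmetic (the 4GiB and 2MiB figures being the ones from the example above):

```shell
# 4GiB of guest RAM, expressed in MiB, divided by the 2MiB Huge Page size:
echo $(( (4 * 1024) / 2 ))
# Prints: 2048
```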

What Linux will offer as an alternative behaviour is to allocate a fixed number of Huge Pages on boot-up – when the memory is not yet very fragmented – and then to allow any applications which ‘know how’ to help themselves to some of those Huge Pages. Thus, if 128 Huge Pages are to be preallocated, then the following snip shows roughly how to do so, assuming a Debian distro, and assuming that ‘getent’ reports the GID of the newly created group ‘hugetlbfs’ to be 1002. (:2)  Lines that begin with hash-marks (‘#’) are commands that would need to be given as root. I estimate this number of Huge Pages to be appropriate for a system with 12GiB of RAM:


# groupadd hugetlbfs
# adduser dirk hugetlbfs
# getent group hugetlbfs


# cd /etc
# edit text/*:sysctl.conf

vm.nr_hugepages = 128
vm.hugetlb_shm_group = 1002

# edit text/*:fstab

hugetlbfs       /hugepages      hugetlbfs mode=1770,gid=1002        0       0

# ulimit -H -l


# cd /etc/security
# edit text/*:limits.conf

@hugetlbfs      -       memlock         unlimited


The problem here is that, for a Guest System with 4GiB of virtual RAM to launch, 2048 Huge Pages would need to be preallocated, not 128. To make things worse, Huge Pages cannot be swapped out! They remain locked in RAM. This means that they also get subtracted from the maximum number of KiB that a user is allowed to lock in RAM. In effect, 4GiB of RAM would end up being tied up, not doing anything useful, until the user actually decides to start his VM (at which point, little additional RAM should be requested by VirtualBox).

Now, there could even exist Linux computers which are set up on that set of assumptions. But those Linux boxes do not count as standard, personal, desktop computers.

If the user wishes to know how slow Linuxes tend to be at actually allocating some number of Huge Pages, after they have started to run fully, then he or she can just enter the following commands, after configuring the above but before rebooting. Normally, a reboot is required after what is shown has been configured, but instead, the following commands could be given in a hurry. My username ‘dirk’ will still not belong to the group ‘hugetlbfs’…


# sync ; echo 3 > /proc/sys/vm/drop_caches
# sysctl -p


I found that, on a computer which had run for days, with RAM that had become very fragmented, the second command took roughly 30 seconds to execute. Imagine how long it might take, if 2048 Huge Pages are indeed to be allocated, instead of 128.
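Incidentally, how many Huge Pages the kernel has actually preallocated, and how many remain free, can be read back at any time out of ‘/proc/meminfo’ (these field names are standard on Linux kernels built with hugetlb support):

```shell
# HugePages_Total / HugePages_Free describe the preallocated pool;
# Hugepagesize gives the page size in use (typically 2048 kB on x86_64):
grep -i '^Huge' /proc/meminfo
```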


What some people have researched on the Web – again, only to find that nobody seems to have the patience to provide a full answer – is whether, given that the mount-point for the HugeTLBFS is ‘/hugepages’ as indicated above – a mount-point which few applications today would still try to use – that mount-point could just be used as a generic Ramdisk. Modern Linux applications simply use “Transparent Huge Pages”, not access to this mount-point as a Ramdisk. And the real answer to this hypothetical question is No…
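Whether Transparent Huge Pages are active can, by contrast, be checked without any of the configuration above; the path below is standard on recent kernels, and the bracketed word in the output names the active policy:

```shell
# Typically prints something like:  always [madvise] never
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
    || echo "Transparent Huge Pages not available on this kernel"
```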

(Updated 5/20/2021, 8h20… )



How to know, whether our Qt 5 C++ projects make use of dynamically-loaded plug-ins.

I have used Qt Creator, with Qt version 5.7.1, to create some simplistic GUI applications as exercises. And then I used the tool named ‘linuxdeployqt’, in order to bundle those (compiled) applications into AppImages, which should, in principle, run on other users’ Linux computers. (:4)  But, when using these tools, a question arose in my head, to which I was also able to deduce the answer quickly: ‘Why does linuxdeployqt exist separately from linuxdeploy? Why does the developer need a tool which bundles library files, but which exists separately, just for C++ Qt applications? Obviously, some AppImages are not even based on that GUI library.’

And the short answer to that question is, ‘Because unlike some other application frameworks, Qt is based heavily on plug-ins, which none of my simpler exercises required explicitly.’ And what’s so special about plug-ins? Aside from the fact that they extend the features of the application framework, plug-ins have the special property that the application can decide to load them at run-time, and not with static awareness at build-time. What this means is that a tool such as ‘linuxdeploy’ will scan the executable which has been built by whichever compiler and/or IDE the developer used, will find that this executable is linked to certain shared libraries (with static awareness), but will not recognize some of the libraries which that executable needs to run, just because those are plug-ins, which the application will only decide to load at run-time.

Hence, to get the full benefit of using ‘linuxdeployqt’, the developer ‘wants to’ end up in a situation similar to the one described Here. Granted, the person in question had run into a bug, but his usage of the feature was correct.

This usage differs from my earlier usage in that I never fed any ‘extra-plugins’ list to ‘linuxdeployqt’; yet, when I used the tool to create the AppImage, my (project’s) plug-ins folder was populated with certain libraries that were also plug-ins. And so, a legitimate question which an aspiring developer could ask would be, ‘How do I know whether my Qt application loads plug-ins dynamically at run-time, so that I’ll know whether I also need to specify those when bundling my AppImage?’ After all, it would seem that in certain cases, the plug-ins are in fact loaded with static awareness, which means that ‘linuxdeployqt’ can just recognize that the application is loading them, without the developer having had to make any special declaration of the fact.

One possible answer to that question might be, ‘Test the AppImage, and see if it runs.’ But one problem with that answer would be that, if the executable cannot find certain plug-ins as bundled with the AppImage, the danger exists that it may find those on the Host Machine, so that the application will end up running on some hosts but not on others, depending on what version of Qt the recipient has installed, and depending on which plug-ins that recipient also has installed. And so, a better way must exist for the developer to know the answer to this question.
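One partial clue can be had from the ‘ldd’ tool, which reports only the build-time (statically-aware) linkage of an executable; ‘/bin/ls’ is used below merely as a stand-in for whatever compiled Qt executable the reader wants to inspect:

```shell
# 'ldd' prints the shared libraries recorded in the ELF headers at build-time.
# Plug-ins that the application only loads at run-time - via dlopen() or
# QPluginLoader - will NOT appear in this list, which is exactly why a tool
# like 'linuxdeployqt' cannot discover them by scanning the executable.
ldd /bin/ls
```

Anything the application needs beyond what ‘ldd’ reports is, by elimination, being loaded dynamically.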

(Updated 4/05/2021, 8h55… )
