How to know, whether our Qt 5 C++ projects make use of dynamically-loaded plug-ins.

I have used Qt Creator, with Qt version 5.7.1, to create some simplistic GUI applications as exercises. And then, I used the tool named ‘linuxdeployqt’, in order to bundle those (compiled) applications into AppImages, which should, in principle, run on other users’ Linux computers. (:4)  But, when using these tools, a question arose in my head, which I was also able to answer quickly, in the form of, ‘Why does linuxdeployqt exist separately from linuxdeploy? Why does the developer need a tool which bundles library files, but which exists separately, just for C++ Qt applications? Obviously, some AppImages are not even based on that GUI library.’

And the short answer to that question is, ‘Because unlike some other application frameworks, Qt is based heavily on plug-ins, which none of my simpler exercises required explicitly.’ And, what’s so special about plug-ins? Aside from the fact that they extend the features of the application framework, plug-ins have the special property, that the application can decide to load them at run-time, and not with static awareness, at build-time. What this means is that a tool such as ‘linuxdeploy’ will scan the executable, which has been built by whichever compiler and/or IDE the developer used, will find that this executable is linked to certain shared libraries (with static awareness), but will not recognize some of the libraries which that executable needs to run, just because those are plug-ins, which the application will only decide to load at run-time.

Hence, to get the full benefit of using ‘linuxdeployqt’, the developer ‘wants to’ end up in a situation similar to the situation described here. Granted, the person in question had run into a bug, but his usage of the feature was correct.

This usage differs from my earlier usage, in that I never fed any ‘extra-plugins’ list to ‘linuxdeployqt’, yet, when I used the tool for creating the AppImage, my (project’s) plug-ins folder was populated with certain libraries, that were also plug-ins. And so, a legitimate question which an aspiring developer could ask would be, ‘How do I know, whether my Qt application loads plug-ins dynamically at run-time, so that I’ll know, whether I also need to specify those when bundling my AppImage?’ After all, it would seem that in certain cases, the plug-ins are in fact loaded with static awareness, which means that ‘linuxdeployqt’ can just recognize that the application is loading them, without the developer having had to make any special declaration of the fact.

One possible answer to that question might be ‘Test the AppImage, and see if it runs.’ But one problem with that answer would be, that if the executable cannot find certain plug-ins as bundled with the AppImage, the danger exists, that it may find those on the Host Machine, and that the application will end up running on some hosts, but not on other hosts, depending on what version of Qt the recipient has installed, and, depending on which plug-ins that recipient also has installed. And so, a better way must exist, for the developer to know the answer to this question.
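In fact, Qt itself can be asked to report every plug-in it probes and loads. The following is only a sketch, in which the executable name ‘./MyApp’ is made up:

```shell
# Qt prints each plug-in it probes, loads, or rejects at run-time,
# whenever this environment variable is set:
QT_DEBUG_PLUGINS=1 ./MyApp

# By contrast, 'ldd' only shows the shared libraries linked with
# static awareness; dynamically-loaded plug-ins will be absent here:
ldd ./MyApp
```

Any library which appears in the first listing, but not in the second, is a plug-in which ‘linuxdeployqt’ may need to be told about explicitly.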

(Updated 4/05/2021, 8h55… )


Managing Usershares properly, under SAMBA (meaning, under the SMB emulation given by Linux).

Just to encapsulate the subject of this posting… ‘SMB’ is a file-sharing protocol which is really Windows-owned. Granted, ‘SMB1’ cannot be fully Windows-owned, but, on the assumption that a ‘SAMBA’ server is being used to emulate ‘SMB2’ and ‘SMB3’ (under Linux), there are many ways in which the (root-owned) configuration file at ‘/etc/samba/smb.conf’ could be faulty, and could result in weird error messages. The fixes which are highlighted in this posting worked for me, but my reader could be suffering from mistakes of an entirely different nature. I am posting this, in case the reader happens to be suffering from the same configuration mistakes which I was.

Also, my configuration issues arose partially, because I’ve switched this configuration file from a configuration in which Home Directories are being shared out in their entirety, to a configuration in which individual users can decide to share out specific folders they own. This system is one of ‘Usershares’.

What I was eventually doing was, to give the command ‘net usershare add <Share-Name> "<Path-Name>" "<Comment>" Host\\user1:f,Host\\user2:f’. The comma, and the absence of spaces in the Access Control List, are important. I was getting an error message stating, that these user-names, part of the ACL, could not be converted into SIDs. What I found was, that I had the following error in my ‘smb.conf’ file:

 


   map to guest = bad user
   guest account = nobody

(...)

   server max protocol = SMB3
   server min protocol = SMB2


 

Amazingly, the error message went away, if I changed that last detail to:

 


   client max protocol = SMB3
   server min protocol = SMB2

 

But, there is more to be known about my configuration:

 


usershare allow guests = no
usershare max shares = 10
usershare owner only = false
usershare path = /var/lib/samba/usershares

 

What this last set of parameters actually requests is, that individual users should not be able to grant Guest Access – unauthenticated access – to any of their folders, but potentially, to grant access which is authenticated by another SAMBA username, with their enabled password, as existing on the server.
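As a concrete instance of the command given earlier (the share name, path, comment, and usernames below are all made up), the full invocation, and a way of inspecting the result, might look like this:

```shell
# Declare the usershare; note the comma in the ACL, and the absence of spaces:
net usershare add MyShare "/home/user1/Shared" "An example share" \
    Host\\user1:f,Host\\user2:f

# List the usershares which currently exist, together with their ACLs:
net usershare info
```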

Again to my amazement, I found that, if the server is massaged adequately, it will implement the settings exactly as requested. But it will do so with a crucial limitation. Locally on the server, the non-owner username must already have access to the exact folder named in the usershare. What level of access that username has (to the named folder itself), will cap whatever level of control is granted by way of a SAMBA client.

This observation could also be rephrased as follows: Even though (Linux) SAMBA has an impressive-looking feature in “Usershares”, which creates rule-files in the folder ‘/var/lib/samba/usershares’, and even though those rule-files possess Access-Control Lists, the ACLs finally disappoint, in NOT guaranteeing what the behaviour of the SAMBA server will be, once a usershare has finally granted access (to a folder, for the client). (:2)

Usually, in order to grant such access locally on the server, some strategy with group memberships and permission-bits gets used. Which exact arrangement of groups and permission-bits gets used, is not set in stone. Any arrangement will work that grants full access. But, because it’s usually unwieldy for users to set up such local sharing of their folders, they are also not likely to succeed – without compromising their security completely – unless they also have the help of someone with root access. Therefore, the ability to give this feature to users in user-space, is theoretical at best. (:1)  And, if the user wants to activate this extension, he or she must use the CLI, and cannot count on the GUI within ‘Dolphin’ to do so. But, the final command given via the CLI can be given as user.
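A minimal sketch of one such local arrangement follows. The group name ‘sharegrp’, the username ‘user2’, and the folder path are all made up, and the first two commands require root:

```shell
# As root: create a group for the share, and add the non-owner user to it.
groupadd sharegrp
usermod -aG sharegrp user2

# As the folder's owner: hand the folder to that group, grant the group
# full access, and set the setgid bit, so new files inherit the group.
chgrp sharegrp /home/user1/Shared
chmod 2770 /home/user1/Shared
```

Again, this is only one possible arrangement; any arrangement will work that grants full access.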

By default, declaring a usershare ‘the easy way, via the GUI’, remains owner-only.


 

There is one more caveat to mention. According to my recent experiences, if ‘Dolphin’ is being used as the SAMBA client, and if it’s to authenticate with the server using any username other than the current username on the local client-computer, then the credentials need to be specified under ‘System settings -> Network -> Connectivity’ (where the Plasma 5.8 Desktop manager puts them). If there is a set of credentials there, they will not be stored encrypted, but only with casual obfuscation, and Dolphin will use them. If the user wants the credentials to be stored in the ‘kwallet’ (and thus encrypted), he needs to make the username he logs in to the server with, identical to his username on the local client. Otherwise, illogical error messages will appear again, this time because the Dolphin file-browser is not consistent, in whether to try to authenticate at all. And, if an attempt is made to access the share without authenticating, then Dolphin generates an illogical error message, which can be paraphrased as a ‘File Exists’ error.

With enough effort, and, sacrificing some security, it can all be done.

(Updated 3/29/2021, 15h10… )


How an old problem with multiple (Debian / Linux) sessions seems to require an old fix.

One of the facts which I recently updated my postings with is, that I have the ability to run multiple sessions on one PC, and to switch back and forth between them, by using the key-combinations <Ctrl>+<Alt>+F7, <Ctrl>+<Alt>+F8, etc. According to that update, each session could theoretically use either ‘Plasma 5’ or ‘LXDE’ as its desktop manager, even though I’d choose an actual LXDE session for only one of my defined usernames.

When I was recently testing this ability, I found that a Plasma 5 session which had locked its screen (which I had switched away from), using the built-in Plasma 5 locker, would start to consume 1 CPU core at 100%, as was visible from within the other session. And, it appears that this is a bug which has been known for a long time, on computers that have the proprietary NVIDIA graphics drivers, as my computer named ‘Phosphene’ does. This computer is still based on Debian 9 / Stretch. Apparently, according to common lore, what one is supposed to do about this is, to create a file named such as ‘dirk.sh’ (in my case), in the directory ‘/etc/profile.d’, which is supposed to set two environment variables globally, like so:

 

# An attempt to prevent klocker from consuming 1 CPU core 100%
# when multiple desktop sessions are active...

export KWIN_TRIPLE_BUFFER=1
export __GL_YIELD="USLEEP"

 

Unfortunately, this backfired on me when I tried to implement it, in that regardless of which way I tried to do so, ‘kwin’ would just crash in the session that’s supposed to be using ‘Plasma 5’. An additional mystery I ran into was, that my attempts to set ‘__GL_YIELD’ would simply get reset somewhere, unless I had also set ‘KWIN_TRIPLE_BUFFER’. Only if I set both, would setting either show as successful, using an ‘echo $…’ command. (:1)  Therefore, what I really needed to do was, to turn off the Screen-Locking which is provided by Plasma 5 itself (for both my usernames), and to install and use ‘xscreensaver’ instead. However, doing that has two obvious caveats:

  • Under Debian 10 / Buster and later, ‘xscreensaver’ is no longer fully supported, unless one also reconfigures the new, Wayland-based display manager to act as an X-server proxy, And
  • Even when I apply this fix, which means that I’ve completely disabled ‘klocker’ in my settings, at the moment I tell Plasma 5 to launch a new, parallel session (without which, <Ctrl>+<Alt>+F8 just leads to a blank screen), Plasma 5 will lock the current session – using ‘klocker’ again, and causing 100% CPU usage to become visible, again, from the second session.

What I find is that, once I’ve used my Plasma 5 session to create a parallel session, I need to switch to the first session once, using <Ctrl>+<Alt>+F7, and unlock that one. After that, both my Plasma 5 sessions will only lock themselves, using ‘xscreensaver’. And aside from that short, crucial interval, I haven’t seen 100% CPU-core usage again.
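For reference, turning off Plasma 5’s own locking can also be done from the CLI. This is only a sketch: the configuration file and key names below are assumptions, which may differ between Plasma versions:

```shell
# Ask Plasma 5's locker daemon not to lock automatically, nor on resume
# (this writes to ~/.config/kscreenlockerrc):
kwriteconfig5 --file kscreenlockerrc --group Daemon --key Autolock false
kwriteconfig5 --file kscreenlockerrc --group Daemon --key LockOnResume false

# Then, have 'xscreensaver' take over (it must be running as a daemon):
xscreensaver -no-splash &
```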


 

I should add that, for certain purposes, I sometimes choose only to install the CPU-rendered ‘xscreensaver’ packages, and deliberately do not install the hardware-accelerated ones. And in this case, the hardware-accelerated screensavers were omitted, simply because they could start running the busy-wait loop again, only this time, when invoked by ‘xscreensaver’.

(Update 3/24/2021, 13h55… )


Why C++ compilers use name-mangling.

A concept which exists in C++ is, that the application programmer can simply define more than one function, which will seem to have the same name in his or her source code, but which will differ, either just because they have different parameter-types, or, because they are member functions of a class, i.e., ‘Methods’ of that class. This can be done again, for each declared class. In the first case, it’s a common technique called ‘function overloading’. And, if the methods of a derived class replace those of a base-class, then it’s called ‘function overriding’.

What people might forget when programming with object-oriented semantics is, that all the function definitions still result in subroutines when compiled, which in turn reside in address-ranges of RAM, dedicated for various types of code, either in ‘the code segment of a process’, or in ‘the addresses which shared libraries will be loaded to’. This differs from the actual member variables of each class-object, also known as its properties, as well as from any entries that the object might have, for virtual methods. Those will reside in the heap (within ‘the data-segment of the process’), if the object was allocated with ‘new’. Each method would be incapable of performing its task if, in addition to the declared parameters, it did not receive an invisible parameter, that will be its ‘this’ pointer, which allows it to access the properties of one object. And such a hidden ‘this’ pointer is also needed by any constructors.

Alternatively, properties of an object can reside on the stack, and therefore, in ‘the stack segment of the process’, if they were just declared to exist as local variables of a function-call. And, if an array of objects was declared, let’s say mistakenly, instead of an array of pointers to those objects, then each entry in the array will, again, need to have a size determined at compile-time, for which reason such objects will not be polymorphic. I.e., in these two cases, any ‘virtuality’ of the functions is discarded, and only the declared class of the object will be considered, for resolving function-calls. Such an object ends up ‘statically bound’, in an environment which really supports ‘dynamically bound’ method-invocation.

First of all, when programming in C, overloading functions by the same name like that is not allowed. According to C, a function by one name can only be defined once, as receiving the types in one parameter-list. And the only real exception to this is the existence of ‘variadic functions,’ which are beyond the scope of this one posting. (:1)

Further, C++ functions that have the same name, are not (typically) an example of variadic functions.

This limitation ‘makes sense’, because the compiler of either language still needs to generate one subroutine, which is the machine-language version of what the function in the source-code defined. It will have a fixed expectation, of what parameter list it was fed, even in the case of ‘variadic functions’. I think that what happens with variadic functions is, that the machine-language code will search its parameter list on the stack, for whatever it finds, at run-time. They tend to be declared with an ellipsis, in other words with ‘…’, for the additional parameters, after the entries for any fixed parameters.

So, the way in which C++ resolves this problem is, that it “mangles” the names of the functions in the source code, deterministically, but, with a system that takes into account, which parameter types they receive, and which class they may belong to, if any. The following is an example of C++ source code that demonstrates this. I have created 3 versions of the function ‘MyFunc()’, each of which only has as defined behaviour, to return the exact data which they received as input. Obviously, this would be useless in a real program.

But what I did next was to compile this code into a shared library, and then to use the (Linux) utility ‘nm’, to list the symbols which ended up being defined in the shared library…

Source Code:

 

/*  Sample_Source.cpp
 * 
 * This snippet is designed to illustrate a capability which C++ has,
 * but which requires name-mangling...
 * 
 */

#include <cmath>
#include <complex>

 /*  If this were a regular C program, then we'd include...
  *
#include <math.h>
#include <complex.h>
  *
  */

using std::complex;

typedef complex<double> CC;

class HasMethods {
public:
	HasMethods() { }
	~HasMethods() { }
	
	CC MyFunc(CC input);
};

//  According to the given headers, there are at least 3 functions
// that I could define below. First, two free functions, aka
// global functions...

double MyFunc(double input) {
	return input;
}

CC MyFunc(CC input) {
	return input;
}

//  Next, the member function of HasMethods can be defined, aka
// the supposed main 'Method' of a HasMethods object...

CC HasMethods::MyFunc(CC input) {
	return input;
}
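The compilation and inspection steps described above can be sketched as follows. The output library name ‘libsample.so’ is just an assumption:

```shell
# Compile the listing above into a shared library...
g++ -shared -fPIC -o libsample.so Sample_Source.cpp

# ...list the mangled symbols which it exports...
nm -D --defined-only libsample.so | grep MyFunc

# ...and demangle one of them, to recover the readable signature:
c++filt _Z6MyFuncd     # prints: MyFunc(double)
```

Here, ‘_Z6MyFuncd’ is the (Itanium C++ ABI) mangled name of the free function that receives a ‘double’; the versions that receive a ‘complex<double>’ mangle differently, which is the whole point.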

 

(Updated 4/12/2021, 21h30… )
