# Why some Linux devs still use spinlocks.

There is an interesting article, on another BB, which explains some of the effects of spinlocks under Linux. But, given that, at least under Linux, spinlocks are regarded as ‘bad code’ that amounts to ‘a busy-wait loop’, I don’t think that posting explains very well why, in certain cases, they are still used.

A similar approach to acquiring an exclusive lock – one that can exist as an abstract object, which does not really need to be linked to the resource it’s designed to protect – could be programmed by a novice programmer, if there were not already a standard implementation somewhere. That other approach could read:

from time import sleep
import inspect

# Pseudo-code: 'myID' is assumed to be the current
# thread's unique, non-zero ID.

def acquire( obj ):
    assert inspect.isclass(obj)
    if getattr(obj, 'owner', None) is None:
        # Replace .owner with a unique attribute name.
        # In C, .owner needs to be declared volatile.
        # Compiler: Don't optimize out any reads.
        obj.owner = myID
    while obj.owner != myID:
        if obj.owner == 0:
            obj.owner = myID
        sleep(0.02)

def release( obj ):
    assert inspect.isclass(obj)
    if getattr(obj, 'owner', None) is None:
        # In C, .owner needs to be declared volatile.
        obj.owner = 0
        return
    obj.owner = 0


(Code updated 2/26/2020, 8h50…

I should add that, in the example above, the initial test of whether the ‘.owner’ attribute is unset, followed by the assignment ‘obj.owner = myID’, will just happen to work under Python, because multi-threading under Python is in fact single-threaded, governed by a Global Interpreter Lock. Those semantics would break in the intended case, where this is just pseudo-code and several clock cycles elapse between the test and the assignment, so that the ‘Truth’ which the ‘while’ retest observes may not last, just because another thread could have added the corresponding attribute in the meantime. OTOH, adding another sleep() statement is unnecessary, as those semantics are not available outside Python.

In the same vein, if the above code is ported to C, then what matters is the fact that, in the current thread, several clock cycles elapse between the test ‘if obj.owner == 0:’ and the assignment that follows it. Within those clock cycles, other threads could also read that obj.owner == 0, and assign themselves. Therefore, the single sleep() instruction is dual-purpose. Its duration should exceed the amount of time by which the cache was slowed down, while executing multiple assignments to the same memory location. After that, one out of possibly several threads will have been the last to assign itself. And then, that thread will also be the one that breaks out of the loop.

However, there is more that could happen between that test and that assignment than cache-inefficiency. An O/S scheduler could cause a context switch, and the current thread could be out of action for some time. If that amount of time exceeds 20 milliseconds, then the current thread would assign itself afterwards, even though another thread has already passed the ‘while’ retest, and assumes that it owns the Mutex. Therefore, better suggested pseudo-code is offered at the end of this posting…

)

This pseudo-code has another weakness. It assumes that, every time the resource is not free, the program can afford to wait for more than 20 milliseconds before re-attempting to acquire it. The problem can crop up that the current process or thread must acquire the resource within microseconds, or even within nanoseconds, of it becoming free. And for such a short period of time, there is no way that the O/S can reschedule the current CPU core to perform any useful amount of work on another process or thread. Therefore, in such cases, a busy-wait loop becomes The Alternative.

I suppose that another reason for which some people have used spinlocks is simply bad code design.

Note: The subject of what the timer interrupt frequency should be has long been debated. Under kernel v2.4 and earlier, it was 100Hz; under kernel v2.6 and later, it has been 1000Hz. Therefore, in principle, an interval of 2 milliseconds could be used above (in case the resource had not become free). However, I don’t really think that doing so would change the nature of the problem.
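As a quick sanity check of those figures, the scheduling granularity implied by a timer frequency is just its reciprocal. The helper below is my own illustration, not something taken from the kernel:

```python
# Hypothetical helper: milliseconds of timer granularity,
# for a given timer interrupt frequency in Hz.
def tick_ms(hz):
    return 1000.0 / hz

print(tick_ms(100))   # kernel v2.4 and earlier: 10 ms per tick
print(tick_ms(1000))  # kernel v2.6 and later: 1 ms per tick
```

With 1 ms ticks, a 2 ms retry interval becomes representable, which is why the substitution suggested above is possible at all.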

Explanation: One of the (higher-priority, hardware) interrupt requests consists of nothing but a steady pulse-train from a programmable oscillator. Even though the kernel can set its frequency over a wide range, this frequency is known not to be terribly accurate. Therefore, assuming the machine has the relevant time-keeping software installed, every time this interrupt fires, the system time is advanced by a floating-point increment, which is itself adjusted over a period of hours and days, so that the system time stays in sync with an external time reference. Under Debian Linux, the package which installs that is named ‘ntp’.

There exist a few other tricks to understand, about how, in practice, to force an Operating System which is not an RTOS to behave like a Real-Time O/S. I’ve known a person who understood computers on a theoretical basis, and who had studied Real-Time Operating Systems. That’s a complicated subject. But, given the reality that his operating system was not a Real-Time O/S, he was somewhat stumped as to why, then, it was able to play a video clip at 24 FPS…

(Updated on 3/12/2020, 13h20 …)

(As of 2/23/2020 : )

from time import sleep
import datetime
import sys

frame = 41.6666666667
# Expected milliseconds per frame

# Pseudo-code: play_one_frame() and drop_one_frame()
# are assumed to be defined elsewhere.

def play_forever(frame):
    (v_1, v_2, v_3, v_4, v_5) = sys.version_info
    assert v_1 >= 2
    if v_1 == 2:
        assert v_2 >= 7
    a = datetime.datetime.now()
    feedback = 0.0
    while True:
        play_one_frame()
        b = datetime.datetime.now()
        delta = b - a
        a = b
        feedback += frame - (delta.total_seconds() * 1000.0)
        # Accurate to within 1 Microsecond, but cumulative
        if feedback > 0.0:
            sleep(feedback / 1000.0)
        else:
            while -feedback >= frame:
                drop_one_frame()
                feedback += frame


(Update 2/26/2020, 13h35 : )

The key difference between an RTOS and a conventional, multi-tasking, multi-processing O/S is that the RTOS guarantees that a thread which has executed a ‘sleep()’ instruction will in fact resume running within the amount of time specified. Often, this can only be achieved when a small number of threads execute that type of ‘sleep()’ instruction. Conventional operating systems make no such guarantee, so that the best way to think of their ‘sleep()’ instruction is: ‘Sleep for at least … seconds, but maybe longer. The amount of time I’m asleep can exceed the specified argument by an arbitrary, unknown amount.’ In the example above, this results in a degradation of the accuracy with which individual frames are timed and, eventually, also leads to dropped frames. Yet, if that is allowed, the long-term accuracy with which a sequence of frames is timed will be good.
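Those ‘at least, but maybe longer’ semantics are easy to observe from Python, too. The sketch below assumes a conventional (non-RT) O/S, where time.sleep() never returns early but may overshoot:

```python
import time

requested = 0.005            # ask to sleep for 5 ms
start = time.monotonic()
time.sleep(requested)
elapsed = time.monotonic() - start

# A conventional O/S only guarantees the lower bound; the overshoot
# is arbitrary, depending on the scheduler and timer granularity.
print("overshoot in ms:", (elapsed - requested) * 1000.0)
```

On a loaded machine the printed overshoot can grow considerably, which is exactly the degradation the feedback loop above compensates for.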

A compromise which a modern O/S will make is to be conventional by default, but to elevate perhaps one process to ‘RT’ priority. However, while I have Linux configurations installed which will do this, they will only do it for their ‘JACK’ daemon. They don’t do it, in practice, for an instance of a video-player application.

AFAIK, one important feature that ‘RT’-priority processes have, in addition to just being preferred over numbered-priority processes, is the permission to lock memory, such that the memory does not get swapped out. But I think that the process must explicitly make use of this permission, in order for the memory to get locked. (:1)

(Update 2/26/2020, 0h00 : )

This time, some pseudo-code for how to implement a Mutex That Waits, written in the syntax of C, is offered below. The reader will note that this snippet compiles but does not link. That is because it contains a function prototype for what would need to wrap an assembler instruction, named ‘Exchange()’ here. This instruction is also at the heart of a spinlock (:2):

/* The purpose of this C code, is to act as pseudo-code,
 * for 'the correct way' to achieve a waiting MUTEX.
 * As its basis, it will assume that a hardware instruction exists,
 * that can exchange two values, that are memory locations
 * in this snippet, but that, on i86 architecture, would take the form
 * of a register-value and a memory location.
 *
 * This instruction must be atomic.
 */

#include <iostream>

#ifdef _WIN32
#include <Windows.h>
#define msleep(ms) Sleep((DWORD)(ms))
#else
#include <unistd.h>
//  POSIX sleep() only accepts whole seconds, so a
//  millisecond wait needs usleep() instead.
#define msleep(ms) usleep((useconds_t)((ms) * 1000.0))
#endif

using std::cout;
using std::endl;

typedef int SpinLock;

//  Abstract, unimplemented, required, atomic
//  hardware instruction.
void Exchange(void *a, void *b);

//  Realistically achievable in i86 assembler
int LockAndTest(SpinLock *s) {
    int cpu_register = 1;
    Exchange(&cpu_register, s);
    //  Return register value by value
    return cpu_register;
}

void InitMutex(SpinLock *s) {
    *s = 0;
}

void WaitMutex(SpinLock *s, double laxity = 20.0) {
    if (laxity > 0.0) {
        while (LockAndTest(s)) { msleep(laxity); }
    } else {
        while (LockAndTest(s)) { }
    }
}

void UnlockMutex(SpinLock *s) {
    *s = 0;
}

int main() {
    SpinLock s;
    InitMutex(&s);

    cout << "Acquiring Mutex" << endl;

    WaitMutex(&s);

    cout << "Mutex Acquired" << endl;
    cout << "Unlocking Mutex" << endl;

    UnlockMutex(&s);

    cout << "Done" << endl;
}
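Although the snippet above deliberately does not link, the same two-mode loop can be sketched in runnable form under Python, where Lock.acquire(blocking=False) stands in for the atomic ‘Exchange()’ / ‘LockAndTest()’ pair. The class and method names below are my own, not a standard API:

```python
import threading
from time import sleep

class WaitingMutex:
    def __init__(self):
        #  acquire(False) plays the role of the atomic LockAndTest():
        #  it returns True exactly once, until release() is called.
        self._flag = threading.Lock()

    def lock(self, laxity_ms=20.0):
        if laxity_ms > 0.0:
            while not self._flag.acquire(False):
                sleep(laxity_ms / 1000.0)   # waiting Mutex
        else:
            while not self._flag.acquire(False):
                pass                        # busy-wait spinlock

    def unlock(self):
        self._flag.release()

# Usage: several threads incrementing a counter under the mutex.
m = WaitingMutex()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        m.lock(laxity_ms=0.0)   # the busy-wait variety
        counter += 1
        m.unlock()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

Passing a positive laxity turns the same object into the waiting variety, at the cost of up to one laxity interval of delay per retry.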


(Update 2/26/2020, 13h30 : )

I suppose that another reason for which some programmers might put a blocking spinlock could be the possibility that they are writing low-level code, such as assembler, where the blocking variety should be easy to code, but from where they do not have access to the sort of millisecond ‘sleep()’ which I used above, in ‘WaitMutex()’. For example, because a hardware interrupt is never supposed to be interrupted by anything other than a higher-priority hardware interrupt, an interrupt handler is one place where a regular ‘sleep()’ instruction cannot be put. Interrupt handlers also tend to be single-threaded.

(Update 2/28/2020, 13h45 : )

1:)

An elaboration is in order, about why a Linux process with ‘RT’ priority needs to declare explicitly that its memory should not be swapped out. By default, the data segment of a process is also referred to as its “Heap”, and is the main way in which a process and all its threads store data. I would just assume that, if the process has ‘RT’ priority, its heap will not get swapped out. But when memory-management becomes more advanced, processes may request additional memory segments, either with (1) the ‘shmget()’ or (2) the ‘mmap()’ / ‘shm_open()’ functions. ‘shmget()’ is the older of the two methods.

Asking for an additional memory segment may or may not be useful, if only one thread is going to use it by itself. If the newer ‘mmap()’ function call is used, its main point is that the newly allocated memory segment is a memory-map, so that bytes within it can just be accessed, being mapped to virtual memory. This can become a way to open files that reside on storage (on the HD or the SSD). For this purpose, the ‘mmap()’ function accepts a File Descriptor, that was obtained when the file was first opened.
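Under Python, the same facility is exposed by the ‘mmap’ module, which likewise accepts a File Descriptor. A minimal sketch, using a throw-away temporary file:

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello mapped world")

view = mmap.mmap(fd, 0)       # length 0 means: map the whole file
first_word = bytes(view[0:5])
view[0:5] = b"HELLO"          # bytes written go through to the file
view.flush()
view.close()

with open(path, "rb") as f:
    after = f.read(5)

os.close(fd)
os.remove(path)
print(first_word, after)
```

The point is that, once mapped, the file's bytes are just addressable memory; no read() or write() calls are needed to touch them.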

It is often more useful to allocate additional memory segments in order to share them, either between threads or, in some cases, even between processes. And this is what the ‘shm_open()’ function does, again resulting in a memory-map. However, this function does not accept a regular filename, only a symbolic name, which does not belong within the regular directory tree.

If the older ‘shmget()’ function was used, then the Shared Memory ID it returns needs to be passed to a second function named ‘shmat()’, so that a range of virtual addresses results, which either thread or process can access as such.
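For readers working from Python (3.8 or later), the ‘multiprocessing.shared_memory’ module wraps this style of named segment; attaching by name plays the role that ‘shmat()’ plays for System V segments. A minimal sketch:

```python
from multiprocessing import shared_memory

# Create a named segment; another process could attach to it by name.
segment = shared_memory.SharedMemory(create=True, size=16)
try:
    segment.buf[0:4] = b"spin"

    # Attaching, as a second process would, using only the symbolic name:
    attached = shared_memory.SharedMemory(name=segment.name)
    seen = bytes(attached.buf[0:4])
    attached.close()
finally:
    segment.close()
    segment.unlink()   # analogous to removing the segment for good
print(seen)
```

The name is symbolic in exactly the sense described above: it does not correspond to a path in the regular directory tree.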

In any case, as soon as threads or processes create additional memory segments, they must also specify flags, stating particular attributes those segments should have. However, the flags named ‘SHM_NORESERVE’ (for ‘shmget()’) and ‘MAP_NORESERVE’ (for ‘mmap()’) do not prevent swapping; they merely tell the kernel not to reserve swap space for the segment. To keep a segment resident in RAM, a program either passes the ‘MAP_LOCKED’ flag to ‘mmap()’, calls ‘mlock()’ on the address range afterwards, or, for a ‘shmget()’ segment, issues ‘shmctl()’ with the ‘SHM_LOCK’ command. This is where a program needs to do something explicitly.

If the program attempts such locking, but does not have the privileges needed to lock memory, then the function call will fail, and the program will likely produce an error message.

A process that has been set up to run with ‘RT’ priority will usually also have been granted the privileges required to lock memory.
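Under Linux, the amount of memory a process is permitted to lock can be inspected from Python through the ‘resource’ module; ‘RLIMIT_MEMLOCK’ is the limit that locking requests are charged against. A small sketch:

```python
import resource

# Soft and hard limits, in bytes; RLIM_INFINITY means unlimited locking.
soft, hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)
unlimited = resource.RLIM_INFINITY

print("may lock up to:",
      "unlimited" if soft == unlimited else "%d bytes" % soft)
```

On a typical desktop configuration this limit is modest, which is one reason an unprivileged process cannot simply pin all of its memory.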

In Computing, the reality is that the answer to certain questions may or may not be true, and whichever way the answer falls has consequences, while something else stays the same. For example, I just wrote that, if the creation of the memory-map failed, the program may or may not produce an error message (that is visible to the user). Regardless of whether it does, the O/S signals the program whether the creation of the memory-map failed. It’s obligatory for the programmer to check this, especially since the operation can fail. The program should not be written just to assume that the operation succeeded, and then, regardless of whether it has, to try accessing the range of virtual addresses. But whether the programmer then decides to inform the user of what exactly has gone wrong, if something has, is up to the programmer.

(Update 2/28/2020, 14h15 : )

2:)

One fact which I’m aware of is that, when writing practical programs in C or C++, declarations can be put to make certain function calls atomic, without requiring that the programmer who uses them implement them. The reason for which it would not be valid for me to put any such declarations into my pseudo-code is the fact that, somewhere, they make use of a Mutex that has already been defined, while my pseudo-code aims to define one for the first time.

What makes this worse is the fact that a new Mutex cannot just be created every time an atomic function call is made, because this would potentially result in many of them, none of which would detect whether the others were locked. The relevant function calls need to share the same Mutex.

What this means is that either:

• An ‘#include’ declaration has included the appropriate header file, by which an object with a static member variable has been declared, that will be shared between all instances of the object, and that must therefore also be initialized once in spite of being static, or
• The implementation could lie buried deeper in the standard library, which modern C or C++ compilers always link, so that they remain invisible to the programmers who make use of them.

Either way, the aim of my pseudo-code is not to make use of a Mutex, but rather to demonstrate how one is implemented.

(Update 2/29/2020 : )

According to various readings, C++11 allows for a regular class object to be declared atomic. But the way it’s implemented under Linux has serious limitations:

• The class must be trivially copyable,
• It must not have custom assignment operators or destructors,
• Constructors will not be called,
• Within this class, member function calls are not atomic by nature. Instead, such objects are really just simpler, C-like types, with a limited set of predefined operations (which will be atomic).

In other words, if the base-class that the atomic class is based on has the member ‘.counter1’, then the atomic class will not have this member. It will instead have members, such as ‘.load()’ and ‘.store()’, that cause an entire object of the base-class to be copied.

In order to try to make up for this, there is also the function ‘atomic_init()’.

And so, another approach which has been suggested is to write code more in the style of C, where member variables of a class can be atomic, so that atomic operations are performed on them, while avoiding making an entire class atomic. This article gives an example. Of course, the example I just linked to makes use of whatever Mutexes exist within ‘boost::atomic’ (which could be coded in assembler) in order to provide a spinlock, while mine assumes that a Mutex has not yet been defined, in order to provide one.

Also, this article gives a better example, if the reader positively needs a member function to be atomic.

I suppose that I could do the same thing which some of the linked articles suggest: rely on library functions being installed, and then pretend that making use of them completes an exercise whose original purpose was to explain how a Mutex is implemented in the first place. The Mutex is now implemented within the ‘std::atomic’ template class, but one can pretend that this corresponds to a possible, native register operation.

The following code will both compile and link, as long as C++11 has at least been satisfied.

/* Even though the platform is understood already to
 * provide a Mutex, as part of its threading mechanism,
 * this code snippet will undertake to define a special
 * Mutex, that has as feature the ability of the client
 * program to modulate, for how long the Mutex is to
 * wait between retries, to acquire.
 */

#include <atomic>
#include <iostream>
#include <cassert>

#ifdef _WIN32
#include <Windows.h>
#define TID_TYPE DWORD
#define THREAD_COMP(a, b) ((a) == (b))
#define CURRENT_TID() GetCurrentThreadId()
#define msleep(ms) Sleep((DWORD)(ms))
#else
#include <unistd.h>
#include <pthread.h>
#define TID_TYPE pthread_t
#define THREAD_COMP(a, b) pthread_equal((a), (b))
#define CURRENT_TID() pthread_self()
//  POSIX sleep() only accepts whole seconds, so a
//  millisecond wait needs usleep() instead.
#define msleep(ms) usleep((useconds_t)((ms) * 1000.0))
#endif

using std::cout;
using std::endl;

class Mutex {
private:
    std::atomic<int> state;
    //  One could pretend that this data-type exists
    //  natively.

public:
    Mutex() {
        state.store(0, std::memory_order_release);
    }

    virtual ~Mutex() { }

    virtual void Unlock() {
        state.store(0, std::memory_order_release);
    }

    virtual void Lock(double laxity = 20.0) {
        if (laxity > 0.0) {
            while (state.exchange(1, std::memory_order_acquire)) {
                msleep(laxity);
            }
        } else {
            while (state.exchange(1, std::memory_order_acquire)) { }
        }
    }
};

class RecursiveMutex : public Mutex {
private:
    int count;
    volatile int thread_isset;
    volatile TID_TYPE owner;

public:
    RecursiveMutex() : Mutex(), count(0), thread_isset(0) { }

    virtual void Lock(double laxity = 20.0) {
        //  Ownership test: executed before any 'real Mutex' is locked.
        if (!(thread_isset && THREAD_COMP(owner, CURRENT_TID()))) {
            Mutex::Lock(laxity);
            //  'owner' must be assigned before 'thread_isset' is set.
            owner = CURRENT_TID();
            thread_isset = 1;
        }
        count++;
    }

    virtual void Unlock() {
        assert(thread_isset);
        assert(THREAD_COMP(owner, CURRENT_TID()));
        if (--count == 0) {
            thread_isset = 0;
            Mutex::Unlock();
        }
    }
};

class ScopedMutex {
private:
    Mutex *theMutex;

public:
    ScopedMutex(Mutex *anyMutex, double laxity = 20.0) :
        theMutex(anyMutex) {
        theMutex->Lock(laxity);
    }

    ~ScopedMutex() {
        theMutex->Unlock();
    }
};

class ExampleClass {
private:
    RecursiveMutex objectMutex;

public:
    ExampleClass() : objectMutex() { }

    void AtomicFunction() {
        ScopedMutex m(&objectMutex);

        cout << "Mutex Acquired Within Atomic Function" << endl;
        cout << "Unlocking Mutex" << endl;

        cout << endl;
    }
};

int main() {
    std::atomic<int> test;
    if (std::atomic_is_lock_free(&test)) {
        cout << "The std::atomic<int> object used is lock free." << endl;
        cout << "This means, shortest acquisition times," << endl;
        cout << "all the way to busy-wait loop, are available." << endl;
    } else {
        cout << "The std::atomic<int> object used is NOT lock free." << endl;
        cout << "This means, acquisition times will be, as" << endl;
        cout << "defined by this underlying type." << endl;
        cout << "Busy-wait loops won't happen." << endl;
    }

    cout << endl;

    ExampleClass atomic_object;
    atomic_object.AtomicFunction();

    Mutex myMutex;

    cout << "Acquiring Mutex" << endl;

    myMutex.Lock();

    cout << "Mutex Acquired" << endl;
    cout << "Unlocking Mutex" << endl;

    myMutex.Unlock();

    cout << "Done" << endl;

    return 0;
}


Note:

The above source code, when not being compiled within an IDE but rather from the Linux command-line, should be copied and pasted into a file with a name such as ‘test.cpp’, and then compiled like this:

g++ -Wall -std=c++11 -pthread -o test test.cpp


The program needs to determine the current Thread ID, in order also to implement a recursive Mutex.

Because template classes frequently have specializations, it’s possible that ‘std::atomic<T>’ also has an optimized ‘.exchange()’ function whenever ‘T’ is a simple data-type, such as an ‘int’. If that’s the case, then the timing variable fed to my ‘.Lock()’ function will be accurate, and, if this timing variable is zero, it will in fact result in a busy-wait loop. It should also be noted that, in an example which I linked to above, ‘T’ was an ‘enum’, and within C, those are disguised integers.

If the above premise is not true, then the Mutex which I just defined will still work, but with timing no faster than what was used to define ‘std::atomic’.

(Update 3/12/2020, 13h20 : )

There is a critical detail to how that last piece of C++ works, which I should explain.

Within ‘RecursiveMutex::Lock()’, something important happens in the two assignments that follow the call to ‘Mutex::Lock()’. The reason why I defined two variables, ‘thread_isset’ and ‘owner’, is the fact that, under newer UNIX guidelines, there is no longer any guarantee that a variable of type ‘pthread_t’ will be a numerical value. In fact, there is officially no way to set it to zero, to denote that nobody’s thread ID owns the Mutex. Therefore, I used the separate flag ‘thread_isset’ to denote whether the ‘pthread_t’ value stored in ‘owner’ refers to the owner of the Mutex.

The way this needs to be visualized is that ‘thread_isset’ being zero protects the value in ‘owner’ from being tested. For that reason, it’s crucial that ‘owner’ be assigned before ‘thread_isset’ is set, in the order shown.

Let us pretend for the moment that those two assignments were reversed. Incorrectly written code could have it that ‘thread_isset’ is set first, and that, due to the same error, ‘owner’ is only set afterwards. The following sequence of events could take place:

• A first thread could just have unlocked the Mutex and set its count to zero. This would result in ‘owner’ still indicating the first thread, but ‘thread_isset’ being zero,
• A second thread could want to lock the Mutex. When it does, it would incorrectly set ‘thread_isset’ first, so that ‘owner’ still holds the thread ID of the first thread,
• The timer interrupt could fire, and the second thread could lose control of the CPU core, and control of the CPU core could be handed back to the first thread, while the second thread was between the two assignments,
• The first thread could try to acquire the Mutex for a second time. The ownership test at the top of ‘Lock()’, which is executed before any ‘real Mutex’ is locked, would read that ‘thread_isset’ is set and, additionally, that according to ‘owner’, the first thread still owns the Mutex,
• As if it still owned the Mutex, the first thread would skip locking any real Mutex, increment ‘count’, and exit from the function call, as if it did own the Mutex,
• Control would later be handed back to the second thread, which really did own the Mutex, and which would now set ‘owner’ to its own thread ID, on the incorrectly-ordered line,
• Two threads would have returned from the function call, each signalled that it owns the Mutex, but eventually, ‘owner’ would only indicate the thread which I happened to refer to as the second thread.

To prevent that from happening, the assignments need to be executed as shown, so that ‘thread_isset’ being zero protects the value stored in ‘owner’ from being read (by a different thread), until after the second thread has set ‘owner’ to its thread ID, at which point ‘thread_isset’ can in fact be set, presumably by the second thread.

In the same vein, an error would hypothetically take place if two threads were able to have the same thread ID. For the second of them, the ownership test would indicate prior ownership; both would exit the function call, but only one thread is supposed to own the Mutex. However, that situation will not take place, because any two threads with access to the RecursiveMutex from within the same process will always have non-matching thread IDs.
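That rule, that two concurrently-alive threads of one process never share an ID, is easy to demonstrate, using threading.get_ident() as the Python analogue of ‘pthread_self()’:

```python
import threading

ids = []
guard = threading.Lock()
barrier = threading.Barrier(4)   # 3 workers + the main thread

def worker():
    with guard:
        ids.append(threading.get_ident())
    # Keep every worker alive until all IDs are recorded; an ID may
    # only be reused after the thread that held it has exited.
    barrier.wait()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
barrier.wait()
for t in threads:
    t.join()
print(sorted(ids))
```

The barrier matters: without it, a finished thread's ID could legitimately be recycled, and the demonstration would prove nothing.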

It’s possible to imagine an inverted situation, where the tentative problem is that the first thread unsets ‘thread_isset’ while the second thread, which is attempting to acquire the Mutex, is between the first and the second part of the ownership test. Therefore,

• The second thread could have tested that ‘thread_isset’ was not equal to zero, which makes the first part of the condition False, but,
• The first thread could have set it to zero, because of just releasing the Mutex, but only after the second thread has tested it to be non-zero,
• The second thread could proceed to the second part of the condition, which compares the Mutex’s ‘owner’ attribute against the second thread’s thread ID,
• The thread IDs would not match, because two different threads belonging to the same process never have matching IDs, and the first thread, in releasing the Mutex, has left the ‘owner’ attribute set to the first thread’s ID,
• Therefore, for the second thread this time, the second part of the ownership test would test True,
• The second thread would execute the code-block inside the ‘if’ statement, thereby requiring that it actually acquire the ‘real Mutex’ before execution continues, and
• The second thread would only be able to acquire said Mutex after it has been released by whatever thread previously locked it. And the first thread only unlocks the real Mutex after it has set ‘thread_isset’ to zero,
• When multiple threads try to lock the same real Mutex – because that has already been implemented properly – only one will succeed, while the others continue to wait,
• One thread would return from the function call, correctly indicating that it has become the new Mutex owner. No problem.

In a parallel fashion, the question can be examined of whether the laxity that will take place between the two assertions at the top of ‘Unlock()’ poses any problem. The current thread could have found the first assertion to hold, because ‘thread_isset’ was not equal to zero, but some other thread could have set it to zero before the current thread gets to the second assertion, which compares ‘owner’ against the current thread’s ID:

• In order for that to have happened, the other thread would need to have been the owner of the RecursiveMutex object, whose ‘owner’ attribute would therefore still not match the current thread’s thread ID.
• The current thread’s second assertion would correctly fail.
• The only way in which this would malfunction would be if two ‘code execution instances’ could take place, both with the same thread ID. And the infrastructure prohibits that.
• While the current thread was between the two assertions, whatever was concurrently setting ‘thread_isset’ to zero could only have had a different thread ID, at least according to the rules of how thread IDs work.
• Before a thread with a different ID can make itself the Mutex owner, it must first obtain a lock on the ‘real Mutex’, because that different thread ID will test True in the second part of the ownership test in ‘Lock()’.

Just to be clear: There is no guarantee that threads belonging to different processes will have non-matching thread IDs. If an inter-process, recursive Mutex is desired, then process IDs will need to be used instead of thread IDs. In that case, the atomic object itself would need to reside in a shared memory segment; and why such a Mutex would need to be recursive is not clear to me.

Further, without the ‘volatile’ keyword, the compiler could keep values in registers and thereby optimize out individual attempts to read the attributes, so that the ownership test in ‘Lock()’, as well as the pair of assertions in ‘Unlock()’, could become fused. The fact that their encapsulation is ‘safe’, the way this pseudo-code was written, means that the ‘volatile’ keyword can be applied, as it should be for logical correctness.

And, in general, the attributes themselves are stored in RAM, as they are defined by offsets from the object’s starting address. Thus, between calls of different member functions, the compiler’s optimization would need to take into account object code two member functions long, before access to object attributes in RAM could also be optimized out. Whether omitting the ‘volatile’ keyword poses any real risk depends entirely on how compilers normally optimize.

#ifndef msleep
#ifdef _WIN32
#define msleep(ms) Sleep((DWORD)(ms))
#else
//  POSIX sleep() only accepts whole seconds.
#define msleep(ms) usleep((useconds_t)((ms) * 1000.0))
#endif
#endif

class Mutex {
private:
    std::atomic<int> state;
    //  One could pretend that this data-type exists
    //  natively.

    typedef std::atomic<int> *SharedAtomicIntPtr;
    //  Any void pointer returned from a shared memory segment
    //  will need to be cast to this pointer-type, then fed to
    //  'atomic_store_explicit()' and 'atomic_exchange_explicit()' below.

    SharedAtomicIntPtr sharedState;

public:
    Mutex() {
        sharedState = &state;
        std::atomic_store_explicit(sharedState, 0, std::memory_order_release);
    }

    Mutex(void *sharedPointer) {
        sharedState = (SharedAtomicIntPtr) sharedPointer;
        std::atomic_store_explicit(sharedState, 0, std::memory_order_release);
    }

    virtual ~Mutex() { }

    virtual void Unlock() {
        std::atomic_store_explicit(sharedState, 0, std::memory_order_release);
    }

    virtual void Lock(double laxity = 20.0) {
        if (laxity > 0.0) {
            while (std::atomic_exchange_explicit(sharedState, 1,
                                                 std::memory_order_acquire)) {
                msleep(laxity);
            }
        } else {
            while (std::atomic_exchange_explicit(sharedState, 1,
                                                 std::memory_order_acquire)) {
            }
        }
    }
};



When I recompile my test program with the last snippet of code above, which is meant as a hint of how to sync inter-process memory, the test program still compiles and runs without any problems. But of course, the main objective would then actually be to allocate shared memory segments, and to introduce their addresses, void pointers to which should be cast as suggested.

I should also mention that an atomic integer residing in a shared memory segment is only safe to share between processes if the atomic type is in fact ‘lock free’; if the implementation ever fell back on an internal lock, that lock would not be shared between the processes.

Dirk
