A Problem which can befall Dropbox under Linux (Unable to Access Folder).

This is a problem which has happened to some Dropbox customers who have the client installed under Linux:

The Dropbox icon changes to a grayed-out icon with a red cross, and when we click or right-click on the icon, it says it’s unable to access (its) Dropbox folder. It even asks us for our Linux password (apparently the Windows gurus don’t understand Linux), in a bid to correct the permissions of the folder in question. Don’t enter any password. At the same time, if we have a very complex desktop-management system running, we may find that the desktop and its management software become laggy to almost non-functional, especially with ‘Baloo’ running, etc.

In my case this was due to the combination of two factors:

  1. I had added many systematically-named files to my Dropbox folder from another synced computer, while backing up newly-installed software.
  2. Dropbox uses a Linux kernel feature called ‘inotify’, through which a program gets notified as soon as the contents of a file it has placed a watch on are changed. In this case, Dropbox has a watch on thousands of files (as in the sketch below).
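
To illustrate what such a watch is, here is a minimal sketch in C – my own illustration, not Dropbox’s actual code – of how a program registers an inotify watch, and of the error it receives once the per-user limit, fs.inotify.max_user_watches, has been exhausted:


#include <sys/inotify.h>
#include <stdio.h>
#include <errno.h>

int main(void)
{
    /* One inotify instance per program; this counts against
     * fs.inotify.max_user_instances. */
    int fd = inotify_init1(IN_CLOEXEC);
    if (fd < 0) {
        perror("inotify_init1");
        return 1;
    }

    /* Watch the current directory, the way a sync client watches every
     * directory under its sync folder.  Each such call counts against
     * fs.inotify.max_user_watches. */
    int wd = inotify_add_watch(fd, ".", IN_MODIFY | IN_CREATE | IN_DELETE);
    if (wd < 0) {
        if (errno == ENOSPC)
            fprintf(stderr, "Out of inotify watches - raise fs.inotify.max_user_watches\n");
        else
            perror("inotify_add_watch");
        return 1;
    }

    printf("Watch %d registered; the kernel will now queue events on descriptor %d.\n", wd, fd);
    return 0;
}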

In my case, the following helped. On a Debian-based system, in a terminal-window, type:

dirk@Phoenix:~$ su
Password: 
root@Phoenix:/home/dirk# cd /etc
root@Phoenix:/etc# editor sysctl.conf

Then, edit the file in question, so that it contains the following two lines:

fs.inotify.max_user_watches = 262144
fs.inotify.max_user_instances = 256

Then, to make the changes take effect, type:

root@Phoenix:/etc# sysctl -p

What this does is raise the kernel’s limits considerably, on how many inotify watches and instances it will support per user. For the moment, the Dropbox client on this machine is stable again.

(Updated 07/03/2018, 8h35 … )

(As of 07/02/2018, 11h25 : )

Actually, according to my own recent experience, if the limit had already run out, then even after applying the above fix, a reboot is nevertheless required.

And because of the needed reboot, my server was also down for about 10 minutes this morning…

Understanding ‘Meltdown’ and ‘Spectre’ in Layman’s Terms

One of the pieces of news which many people have heard recently, but which few people fully understand, is that a vulnerability has been discovered by researchers in Intel CPUs in particular, but also to a lesser degree in AMD CPUs, and even in some ARM (Android) CPUs, and that it comes in two flavors: ‘Meltdown’ and ‘Spectre’. What do these vulnerabilities do?

Well, modern CPUs have a feature which enables them to execute multiple CPU-instructions concurrently. I learned about how this works when I was taking a System Hardware course some time ago. What happens is meant to make up for the fact that executing one CISC-Chip instruction typically takes considerably more than 1 clock-cycle. So what a CISC-Chip CPU does, is to start execution on instruction 1, but during the very next clock-cycle, already to fetch the opcode belonging to instruction 2. Instruction 1 is at that point in the 2nd clock-cycle of its own execution. And one clock-cycle later, opcode 3 gets fetched by the CPU, while instruction 2 is in the 2nd clock-cycle of its execution, and instruction 1 is in the 3rd clock-cycle of its execution – if there is one.

This pushes the CISC-Chip CPUs closer to the ideal goal of executing 1 instruction per clock-cycle, even though that ideal is never fully reached. But programs contain branches, where a condition is tested first, and where, if the non-default outcome of this test happens to be true, the program ‘branches off’ to another part within the same program, according to the true logic of the CPU-instructions. The behavior of the CPU under those conditions has also been made more concurrent than a first-glance appraisal of the logic might suggest.

When a modern CISC-Chip CPU reaches a branching instruction in the program, it will continue to fetch opcodes, and to execute the instructions which immediately follow the conditional test, according to the default assumption of what the outcome of the conditional test is likely to be. But if the test brings about a non-default logical result, which will cause the program to continue in some completely different part within its code, the work which has already been done on the partially-executed instructions is to be discarded, in a way that is not supposed to affect the logical outcome, because program flow will continue at the new address within its code. At that moment, the execution of code no longer benefits from concurrency.

This concurrent execution, of the instructions that immediately follow a conditional test, is called “Speculative Execution”.
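
As a concrete, minimal sketch – my own illustration in C, with hypothetical names, not code from the published papers – this is the shape of such a branch, with comments marking which work the CPU may carry out speculatively:


#include <stddef.h>
#include <stdio.h>

int table[16];

int lookup(size_t index, size_t limit)
{
    if (index < limit) {    /* the conditional test */
        /* Default assumption: the branch predictor guesses that the test
         * passes, so the CPU may fetch and begin executing this load,
         * before the value of 'limit' has even arrived from memory. */
        return table[index];
    }
    /* If the guess was wrong, the speculative work above is discarded,
     * and execution restarts here - losing the benefit of concurrency. */
    return -1;
}

int main(void)
{
    printf("%d\n", lookup(3, 16));
    return 0;
}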

The problem is that Complex Instruction-Set CPUs are in fact extremely complex in their logic, that their logic has been burned as such into the transistors – into the hardware – of the CPU itself, and that even the highly-skilled Engineers who design CPUs are not perfect. So we’ve been astounded by how reliably and faithfully actual, physical CPUs execute their intricate logic, apparently without error. But now, for the first time in a long time, an error has been discovered, and it seems to affect a wide range of CPU-types.

This error has to do with the fact that modern CPUs are multi-featured, and that in addition to having concurrent execution, they also possess Protected Memory, as well as Virtual Memory. Apparently, cleverly-crafted code can exploit Speculative Execution, together with how Virtual Memory works, in order, in effect, to bypass the feature which is known as Protected Memory.

It is not the assumption of modern computers that, just because a program has been made to run on your computer – or your smart-phone – it would simply be allowed ‘to do anything’. Instead, Protected Memory is a feature that blocks user-space programs from accessing memory that does not belong to them. It’s part of the security framework actually built into the hardware that makes up the CPU.

More importantly, user-space programs are never supposed to be able to access kernel memory.
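
As a small, self-contained illustration of Protected Memory doing its job, the following C program – assuming a typical x86-64 Linux system, and using an address that is merely ‘somewhere in kernel space’, chosen for illustration – attempts such an access, and receives a SIGSEGV signal rather than any kernel data. The insight behind ‘Meltdown’ was that, even though this access fails architecturally, instructions which depend on the loaded value may still run speculatively, leaving a measurable trace in the CPU’s cache.


#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static void on_segv(int sig)
{
    (void)sig;
    static const char msg[] =
        "SIGSEGV: the hardware blocked the access, as Protected Memory requires.\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    _exit(0);
}

int main(void)
{
    signal(SIGSEGV, on_segv);

    /* An address in the kernel half of the 64-bit address space,
     * chosen purely for illustration. */
    volatile uint64_t *kernel_ptr = (volatile uint64_t *)0xffffffff81000000ULL;

    uint64_t value = *kernel_ptr;   /* faults; never completes normally */

    printf("Read %llu - this line should never be reached.\n",
           (unsigned long long)value);
    return 1;
}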

(Updated 01/11/2018 : )

The Advantages of using a Slab Allocator

When people take their first C programming courses, they are taught about the standard allocator named ‘malloc()‘, and when learning C++, they are first taught about its standard allocator, named ‘new‘.

These allocators work on the assumption that a program is running in user space, and they may not always be efficient at allocating smaller chunks of memory. They assume that a standard method of managing the heap is in place, where the heap of any one process is a part of that process’s memory-image, and is partially managed by the kernel.

Not only that, but when we tell either of these standard operators to allocate a chunk of memory, the allocator notes the size of that chunk, and prepends to the chunk of memory a small header containing a binary representation of its size. The pointer returned by either of these allocators points just past that header, directly to the memory which the programmer can use, even though the allocated chunk is slightly larger, and preceded by a binary representation of its own size. That way, when the command is given to deallocate, all the deallocation-function needs to receive in principle is a pointer to the allocated chunk, and the deallocation-function can then step back to the header from there, to derive how much memory to free.
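
A toy sketch of that bookkeeping, written in C, might look as follows. (It leans on the real malloc() for the raw memory, and uses a single size_t as the header; genuine allocators keep more metadata and align things more carefully, but the pointer arithmetic is the point.)


#include <stdio.h>
#include <stdlib.h>

static void *toy_malloc(size_t size)
{
    /* Allocate room for a size_t header, plus the requested bytes. */
    size_t *block = malloc(sizeof(size_t) + size);
    if (block == NULL)
        return NULL;
    *block = size;        /* record the requested size in the header */
    return block + 1;     /* hand back the memory just past the header */
}

static void toy_free(void *ptr)
{
    if (ptr == NULL)
        return;
    size_t *block = (size_t *)ptr - 1;   /* step back onto the header */
    printf("Freeing a chunk of %zu usable bytes.\n", *block);
    free(block);
}

int main(void)
{
    /* 4 bytes requested, but sizeof(size_t) more are actually used. */
    int *p = toy_malloc(sizeof(int));
    if (p == NULL)
        return 1;
    *p = 42;
    toy_free(p);
    return 0;
}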

I suppose that one conclusion to draw from this is that, even though it looks like a good exercise to teach programming students, the exercise of always allocating a 32-bit or a 64-bit object – i.e., a 4-byte or an 8-byte object – such as an integer, to obtain an 8-byte pointer to that integer, is actually not a good one, because in addition to the requested 4 or 8 bytes, a header is always being allocated as well, which adds another 4 bytes if the maximum size of one chunk is represented as a 32-bit number, or another 8 bytes if it is represented as a 64-bit number.

Additionally, these allocators assume the support of the kernel for a user-space process, the latter of which has a heap. On 64-bit systems with virtual memory, the first time the user-space application tries to access an address in a newly-allocated region of its heap, which is not yet backed by a physical page, this intentionally results in a page-fault, and stops the process. The kernel then needs to examine why the page-fault occurred, and since in this case there was a legitimate reason, needs to set up the physical page-frame behind that virtual address, before restarting the user-space process at the instruction which faulted.

And so, means also needed to exist by which a kernel can manage memory more efficiently, even under the assumption that the kernel does not have the sort of heap that a user-space process does. One main mechanism for doing so is to use a slab allocator. It will allocate large numbers of small chunks, without requiring as much overhead to do so as the standard user-space allocators do. In kernel-space, these slabs are the main replacement for a heap.
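
Here is a user-space sketch, in C, of the underlying idea. (The kernel’s actual slab allocator – kmem_cache_create() and friends – is far more elaborate; this only shows how carving one large slab into fixed-size objects, chained through a free-list, avoids any per-object header.)


#include <stdio.h>
#include <stdlib.h>

#define OBJ_SIZE   32        /* size of each object in this cache */
#define OBJ_COUNT  1024      /* number of objects per slab        */

struct slab_cache {
    unsigned char *slab;     /* one contiguous block of OBJ_COUNT objects */
    void *free_list;         /* head of the chain of free objects         */
};

static int slab_init(struct slab_cache *c)
{
    c->slab = malloc((size_t)OBJ_SIZE * OBJ_COUNT);
    if (c->slab == NULL)
        return -1;

    /* Thread every object onto the free-list: the first bytes of a free
     * object are reused to hold the pointer to the next free object. */
    c->free_list = NULL;
    for (int i = OBJ_COUNT - 1; i >= 0; i--) {
        void *obj = c->slab + (size_t)i * OBJ_SIZE;
        *(void **)obj = c->free_list;
        c->free_list = obj;
    }
    return 0;
}

static void *slab_alloc(struct slab_cache *c)
{
    void *obj = c->free_list;
    if (obj != NULL)
        c->free_list = *(void **)obj;    /* pop the head of the list */
    return obj;
}

static void slab_free(struct slab_cache *c, void *obj)
{
    *(void **)obj = c->free_list;        /* push back onto the list  */
    c->free_list = obj;
}

int main(void)
{
    struct slab_cache cache;
    if (slab_init(&cache) != 0)
        return 1;

    void *a = slab_alloc(&cache);
    void *b = slab_alloc(&cache);
    printf("Allocated two objects %d bytes apart, with no header between them.\n",
           (int)((unsigned char *)b - (unsigned char *)a));

    slab_free(&cache, a);
    slab_free(&cache, b);
    free(cache.slab);
    return 0;
}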

(Updated 06/20/2017 … )

Why a Hard-Boot is often Overkill.

In order to understand this posting, the reader first needs to understand something about how the (volatile) memory in a computer is organized. Through the use of virtual addresses, it gets organized into user-space and kernel-space. User-space includes programs that run with elevated privileges, but also the GUI-programs that display widgets and the like on the screen. Kernel-space is reserved for the kernel of the O/S, as well as for kernel-modules, which act as device drivers in practice.

But in addition to that, modern architectures also have I/O chips – which the kernel-modules would be responsible for – that run “firmware”. These more-complex I/O chips will often perform complex tasks, such as the encryption built into Bluetooth, without requiring that these tasks run on the CPU, or in RAM. In order to do that, I/O chips capable of doing so need a smaller program of instructions, which actually runs on the I/O chip. This program is typically no longer than 1-2 KB.

(Edit 06/09/2017 :

I suppose the fact should also be acknowledged, that in order for the firmware actually to do anything, the I/O chip should also have a somewhat larger region of memory – call it a Buffer – which stores data and not code. In the case of an intelligent Bluetooth chip, that buffer would logically store whatever encryption keys are currently being applied to data, which is being streamed to and from the chip… )

So in practice, a situation which the user can run into is that, by unpairing all his (external) Bluetooth devices, then turning Bluetooth Off from the settings GUI, and next turning BT back On, he can fail to reset the Bluetooth system to its original state. The user only gets to see what the GUI is showing him – which is being controlled by programs running in user-space.

What the user may not realize is that, the way kernel-space is often organized, turning Bluetooth Off in user-space will fail to unload the kernel-module itself – the module responsible for operating the Bluetooth chip which, for the sake of argument, has firmware running on it.

The actual kernel-modules first load when the computer boots up, and the first thing they do, if they are of the sort that use firmware, is to load that firmware onto the I/O chip – and it’s often patches to the firmware that get loaded, because the default firmware is burned into the I/O chip.
