One Important Task of the File System Is to Manage Unallocated Blocks.

When we visualize deleting large files, it is tempting to picture that as merely ‘unlinking’ the blocks of data which used to belong to that file from the File System.

But in reality, there is a major additional task which an FS must manage. Any File System maintains a pool of unallocated blocks, which I will refer to colloquially as ‘the Free Pool’. This pool needs to exist because, every time a new file is created, or an existing one is appended to such that a new block must be allocated to it, that new block needs to come from somewhere, with the guarantee that it does not already belong to some existing, allocated file.

Such a block is taken rapidly from the Free Pool.

Therefore, when we delete an existing file, all its allocated blocks additionally need to be added back to the Free Pool.

If FS corruption has taken place, then one of the earliest and most common practical signs of it will be a failure to count deallocated blocks as hard-drive capacity which is free again, for the creation and extension of files. This happens because the unlinking of the deallocated blocks always takes place before their addition back to the Free Pool. If a crash falls between the two steps, the worst outcome is that some blocks are leaked, neither linked to a file nor free, and a disk check can later reclaim them. A File System Driver would behave far more precariously if it were generally to add deallocated blocks to the Free Pool before unlinking them from existing data stores, since a crash in between could then leave a block belonging to a file and available for reallocation at the same time.
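To illustrate that ordering, here is a minimal sketch in C, under the simplifying assumptions of a bitmap-style Free Pool held in memory and a toy file structure; all names here are hypothetical, not those of any real FS Driver:

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_BLOCKS      16
#define MAX_FILE_BLOCKS  8

/* Hypothetical in-memory structures, for illustration only. */
static bool free_pool[NUM_BLOCKS];   /* true = block is in the Free Pool */

struct file {
    int blocks[MAX_FILE_BLOCKS];     /* block numbers linked to this file */
    int num_blocks;
};

/* Allocate one block: take it out of the Free Pool, then link it to the file. */
static int allocate_block(struct file *f)
{
    if (f->num_blocks >= MAX_FILE_BLOCKS)
        return -1;                           /* file is at its maximum size */
    for (int b = 0; b < NUM_BLOCKS; b++) {
        if (free_pool[b]) {
            free_pool[b] = false;            /* no longer free */
            f->blocks[f->num_blocks++] = b;  /* now belongs to the file */
            return b;
        }
    }
    return -1;                               /* the Free Pool is exhausted */
}

/* Delete a file: FIRST unlink its blocks, THEN return them to the Free Pool.
 * If a crash falls between the two steps, the blocks are merely leaked, and
 * a disk check can reclaim them; no block is ever both owned by a file and
 * marked free at the same time. */
static void delete_file(struct file *f)
{
    int pending[MAX_FILE_BLOCKS];
    int n = f->num_blocks;

    for (int i = 0; i < n; i++)
        pending[i] = f->blocks[i];
    f->num_blocks = 0;                 /* step 1: unlink from the file */

    for (int i = 0; i < n; i++)
        free_pool[pending[i]] = true;  /* step 2: add back to the Free Pool */
}

int main(void)
{
    for (int b = 0; b < NUM_BLOCKS; b++)
        free_pool[b] = true;           /* a freshly formatted 'disk' */

    struct file f = { .num_blocks = 0 };
    allocate_block(&f);
    allocate_block(&f);
    printf("file holds %d blocks\n", f.num_blocks);

    delete_file(&f);

    int free_count = 0;
    for (int b = 0; b < NUM_BLOCKS; b++)
        if (free_pool[b])
            free_count++;
    printf("free blocks after delete: %d\n", free_count);
    return 0;
}
```

The sign of corruption described above corresponds to step 1 having completed while step 2 never ran: the blocks are gone from the file, but the count of free blocks never rises.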

More specific explanations of how a File System works are hampered by the fact that many different File Systems exist, including ‘FAT32’, ‘ExFAT’, ‘NTFS’, ‘ext3’, ‘ext4’, ‘reiserfs’, etc.

When I studied System Software, the File System that was used as an example was ‘UNIX System V’, which predates ‘ext2’. My studies did not include other practical examples. But what I have read about ‘FAT32’ is that it has conceptual simplicity, which may reduce the consequences of FS corruption slightly. On the other hand, ‘FAT32’ is not a “Journaling File System”, and neither was ‘System V’.

By comparison, later versions of ‘NTFS’, ‘ext3’, and ‘ext4’ are all examples of Journaling File Systems. What happens therein is that data written to the HD by user-space programs is not written directly to the FS, but rather to an incremental Journal kept by the FS Driver (residing, of course, in kernel-space). At a non-specific point in time, the Journal is Committed to the FS: an operation in which a subset of Journal entries is read from the Journal and applied to the FS as Atomic operations, after which the actual File System is consistent again, even though not all the Journal Entries have yet been Committed. Those remaining Journal Entries are played back only when the next attempt is made to Commit the Journal as it stands.
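As a rough sketch of that commit cycle, again in C with the ‘disk’ simulated by an in-memory array and all names hypothetical, one could picture something like the following; a real FS Driver would of course keep the Journal itself on disk, so that it survives a crash:

```c
#include <stdio.h>
#include <string.h>

#define JOURNAL_CAP 8

/* One hypothetical journal entry: "write this value to this block". */
struct journal_entry {
    int  block;
    char data[16];
};

static struct journal_entry journal[JOURNAL_CAP];
static int journal_len = 0;       /* entries not yet committed */

static char disk[4][16];          /* the simulated on-disk blocks */

/* User-space writes land in the Journal first, not on the FS proper. */
static void journaled_write(int block, const char *data)
{
    journal[journal_len].block = block;
    snprintf(journal[journal_len].data, sizeof journal[journal_len].data,
             "%s", data);
    journal_len++;
}

/* Commit: apply a batch of entries to the FS as one atomic step.
 * In a real FS Driver, the batch is only marked complete once every
 * write in it has reached the disk; here we just simulate the replay. */
static void commit_journal(int batch)
{
    if (batch > journal_len)
        batch = journal_len;

    for (int i = 0; i < batch; i++)
        memcpy(disk[journal[i].block], journal[i].data, 16);

    /* Drop the committed entries; later entries wait for the next commit. */
    memmove(journal, journal + batch,
            (journal_len - batch) * sizeof journal[0]);
    journal_len -= batch;
}

int main(void)
{
    journaled_write(0, "inode update");
    journaled_write(1, "data block");
    journaled_write(2, "bitmap update");

    commit_journal(2);            /* FS consistent; one entry still pending */
    printf("pending entries after commit: %d\n", journal_len);

    commit_journal(JOURNAL_CAP);  /* the next commit plays back the rest */
    printf("pending entries after second commit: %d\n", journal_len);
    return 0;
}
```

Here, commit_journal() plays the role of one Atomic commit: after it returns, the simulated FS reflects a consistent batch of writes, while any remaining entries wait for the next commit.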

This journaling scheme can reduce the risk of FS Corruption, because an interruption of the kernel would need to take place exactly during an interval in which the Journal is being Committed, in order for real Corruption to occur.

But why, then, is ‘NTFS’ a Journaling File System, if the issue of FS Corruption did not exist under Windows?

Dirk

 

A Possible Oversight on My Part, Concerning FS Corruption

One of the facts which regularly concerns me about Linux is that we Mount a File System so that we can access its files, that we must Unmount it at the end of any Kernel-Session before we can power down or restart the machine, and that failure to do so risks FS Corruption.
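On the Linux side, in fact, the Mount and Unmount steps are explicit enough that a program can perform them directly, via the mount(2) and umount(2) system calls. Here is a minimal sketch; the device path, mount point, and FS type are assumptions for illustration, and it would need root privileges to run:

```c
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* Attach the FS on /dev/sdb1 (a hypothetical device) to /mnt/test,
     * interpreting it as ext4. */
    if (mount("/dev/sdb1", "/mnt/test", "ext4", 0, NULL) != 0) {
        perror("mount");
        return 1;
    }

    /* ... the files under /mnt/test are accessible here ... */

    /* Detach it again; this flushes pending writes and marks the FS clean. */
    if (umount("/mnt/test") != 0) {
        perror("umount");
        return 1;
    }
    return 0;
}
```

The umount() call is what marks the FS clean; pulling the power before it has run is the classic recipe for corruption.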

But a thought has only occurred to me recently, about how that might not translate correctly into Windows. It could be that under Windows, there is no hidden Mount or Unmount procedure when we boot the computer or shut it down. In the past, I had always assumed that this step does exist, but that Windows keeps it hidden in the background.

If Windows has no analog to this, then the problems I was describing in This Posting may simply be due to a weak, dodgy hard drive in the computer I name ‘Mithral’, purely in how it functions as hardware.

Also, I should next take a closer look at OS/X, which was originally derived from some form of UNIX (‘BSD’), but which may also have done away with any sort of Mount or Unmount taking place in the background.

Dirk

 

Mithral Seems To Be Stable Again.

In earlier postings, I had written that my Windows 7 computer ‘Mithral’ had become unstable, and that this was related to the fact that I had set up background defragmentation using the 3rd-party, commercially paid-for Application named “Diskeeper 2011”. The reason for this idea was the observation that crashes seemed to take place only during hours of the night when I had Diskeeper configured to do its BG-defrag, and while I was not making any active use of that computer.

In connection with this, I had observed that sometimes a Windows computer can get into a state where the session does not survive a defragmentation until a ‘Check-Disk’ is run. Yet in my case, I had run a Check-Disk, and the problem persisted.

This situation had led me to the possibility that I might have to install a fresh O/S onto Mithral.

What I have found instead is that simply updating to a new version of Diskeeper, which requires uninstalling the old version first, seems to have solved the problem. I can think of two reasons why it might have done so:

  1. There might have been some FS Corruption, but the type of background scan which version 2016 of the software performs may be sufficiently improved to correct that without causing a crash, or
  2. Diskeeper uses some version of the Visual Studio C++ Run-Time Library, and while I was installing “Visual Studio 2015 Express”, I also updated a certain ‘MSVCRT’ to the latest version. Since Diskeeper 2011 was not an up-to-date Application version, it might have gotten into trouble from depending on the latest MSVCRT version, whereas an up-to-date Application version may be able to make full use of an up-to-date library version.

Dirk