One Important Task of the File System Is to Manage Unallocated Blocks.

When we visualize deleting a large file, it is tempting to imagine that as merely having to ‘unlink’ the blocks of data which used to belong to that file from the File System.

But in reality, there is a major additional task which an FS must manage. Any File System possesses a pool of unallocated blocks, which I will refer to colloquially as ‘the Free Pool’. This pool needs to exist because, every time a new file is created, or an existing one is appended to such that a new block needs to be allocated to it, that new block needs to come from somewhere, with the guarantee that it does not already belong to some existing, allocated file.

Such a block is taken quickly from the Free Pool.

Therefore, when we delete an existing file, all of its allocated blocks additionally need to be returned to the Free Pool.
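As a minimal sketch of the idea, the following toy model (all names and block numbers here are hypothetical illustrations, not any real FS driver) tracks a Free Pool of block numbers, takes blocks from it on allocation, and returns them on deletion:

```python
class ToyFileSystem:
    """A toy model of a File System's block accounting."""

    def __init__(self, total_blocks):
        # Initially, every block on the "device" is unallocated.
        self.free_pool = set(range(total_blocks))
        self.files = {}  # file name -> list of allocated block numbers

    def allocate_block(self, name):
        """Take one block from the Free Pool and link it to a file."""
        if not self.free_pool:
            raise OSError("no space left on device")
        block = self.free_pool.pop()
        self.files.setdefault(name, []).append(block)
        return block

    def delete_file(self, name):
        """Unlink a file's blocks, then return them to the Free Pool."""
        blocks = self.files.pop(name)   # step 1: unlink from the file
        self.free_pool.update(blocks)   # step 2: add back to the pool
        return blocks


fs = ToyFileSystem(total_blocks=8)
for _ in range(3):
    fs.allocate_block("report.txt")
freed = fs.delete_file("report.txt")
# All 8 blocks are back in the Free Pool after the deletion.
```

Note that `delete_file()` performs the two steps in the order the text describes: unlink first, then free.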

If FS corruption has taken place, then one of the earliest and most common practical signs of it will be a failure to count deallocated blocks as free hard-drive capacity, available again for the creation and extension of files. This happens because the unlinking of deallocated blocks always takes place before their addition back to the Free Pool. A File System Driver's behaviour would be even more precarious if it generally added deallocated blocks to the Free Pool before unlinking them from existing data structures.
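The consequence of reversing that order can be sketched with a self-contained toy model (hypothetical file names and block numbers). If the blocks rejoin the Free Pool before the unlink, and the kernel is interrupted between the two steps, the same blocks end up both "free" and still linked to the old file, so a newly created file could be allocated a block that another file still references:

```python
free_pool = {5, 6, 7}
files = {"old.dat": [1, 2], "log.txt": [3]}

# Dangerous order: free first, unlink second.
doomed = files["old.dat"]
free_pool.update(doomed)      # step 1: blocks 1 and 2 rejoin the pool
crashed = True                # simulated kernel interruption here
if not crashed:
    del files["old.dat"]      # step 2: the unlink never happens

# After the crash, blocks 1 and 2 are in the Free Pool AND still
# linked to 'old.dat' -- so a new file allocated block 1 would
# silently share it with 'old.dat', and one would overwrite the other.
shared = free_pool & {b for blocks in files.values() for b in blocks}
print(sorted(shared))  # → [1, 2]  (cross-linked blocks: real corruption)
```

With the safe order (unlink first), a crash between the steps merely "leaks" the blocks: they belong to no file and are not in the pool, which is exactly the benign symptom described above, recoverable by an fsck-style check.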

More specific explanations of how a File System works are hampered by the fact that many different File Systems exist, including ‘FAT32’, ‘exFAT’, ‘NTFS’, ‘ext3’, ‘ext4’, ‘ReiserFS’, etc.

When I studied System Software, the File System that was used as an example was ‘UNIX System V’, which predates ‘ext2’. My studies did not include other practical examples. But what I have read about ‘FAT32’ is that it has conceptual simplicity, which may slightly reduce the consequences of FS corruption. OTOH, ‘FAT32’ is not a “Journaling File System”, and neither was ‘System V’.

By comparison, later versions of ‘ext3’, as well as ‘ext4’ and ‘NTFS’, are all examples of Journaling File Systems. What happens there is that data written to the HD by user-space programs is not written directly to the FS, but rather to an incremental Journal kept by the FS Driver (which resides, of course, in kernel-space). At some later point in time, the Journal is committed to the FS: an operation in which a subset of Journal entries is read from the Journal and applied to the FS atomically, after which the actual File System is consistent again, even though not all the Journal entries have yet been committed. The remaining Journal entries are played back only when the next attempt is made to commit the Journal.
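That commit cycle can be sketched as follows. This is a deliberately simplified toy model (hypothetical structures; real journals such as ext4's jbd2 operate at the block level and are far more involved): writes land in the Journal first, and a commit atomically applies a batch of entries while the rest wait for the next commit:

```python
journal = []        # entries appended in order by the FS "driver"
file_system = {}    # the on-disk state: path -> content


def write(path, content):
    """User-space writes land in the Journal first, not in the FS."""
    journal.append((path, content))


def commit(max_entries):
    """Apply a batch of Journal entries to the FS.

    In a real driver the batch is applied all-or-nothing; entries
    beyond the batch simply remain in the Journal, awaiting the
    next commit.
    """
    batch, remaining = journal[:max_entries], journal[max_entries:]
    for path, content in batch:   # the "atomic" application
        file_system[path] = content
    journal[:] = remaining


write("/a", "hello")
write("/b", "world")
write("/a", "hello again")
commit(max_entries=2)
# '/a' and '/b' are now in the FS; the third entry stays in the
# Journal until the next commit replays it.
```

The key property is that the on-disk FS only ever changes at commit boundaries, so it is either in its old consistent state or its new one.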

This can reduce the risk of FS corruption, because an interruption of the kernel would need to take place exactly during an interval in which the Journal is being committed, in order for real corruption to occur.

But why, then, is ‘NTFS’ a Journaling File System, if the issue of FS corruption supposedly did not exist under Windows?


