Pixel C Crash Last Night

Yesterday evening, my new Pixel C Tablet did something ominous for the first time. Its screen simply went dark, and then it started to display the logo which it displays during a restart. It followed through with a successful restart.

Some people mistakenly think of this behavior as a reboot. If we were to call it that, then this behavior would need to be called a Hard Boot, as opposed to a Soft Boot, which happens when the user shuts the tablet down from the software side, telling it to reboot. In fact, a Hard Boot is what happens when the user uses the power button to force one, and at least has an explanation in that.

In reality, what the tablet did was a spontaneous reset. This type of event is also a File System Event, because the File System was never cleanly unmounted. Hence, the tablet also needed to repair its file system when it booted anew.

But there are certain safety factors, such as journaling, built into how any serious O/S works, and into how any serious file system works. So in most cases, the repair to the file system succeeds.

The fact that this has happened to a brand-new tablet causes me to question how (un)stable it might really be. I’ve only had this tablet for a few short months now.

One feature of how this happens, which is even less reassuring, is that after the reset, nothing is displayed in the user interface that betrays the fact that the reset happened. What this means is that, in theory, this could be happening every night as I sleep, even while the tablet is charging, because by the next morning, there would be nothing displayed to betray the fact that it had happened.

It just happens to have taken place once now, while I was sitting in front of it.

Dirk

(Edit : )

I should add that this tablet is running the May 5 patch of Android 7.1.2.

Something that might count Against FAT32

One fact which I have written about is that each File System essentially has a different way of organizing the Blocks on an HD or an SSD into Files, and of managing free storage space.

One fact which they all seem to have in common, however, is that HD or SSD Blocks possess ‘Logical Block Addresses’, aka LBAs. What this means in simplified terms is that every Block on an HD has a unique Block number, which can be used to point to it, very much the way Pointers point to addresses in RAM. There can be some differences, though, in whether the Physical Blocks correspond directly to Logical Blocks, mainly because a given HD platter might have a Physical Block size of 512 Bytes, while the O/S might require that Logical Blocks have a size compatible with the size of Virtual Memory Pages. Since Virtual Memory Pages are usually 4KB, this would seem to imply that 1 Logical Block corresponds to 8 Physical Blocks, and that any File will take up space corresponding to at least 1 Logical Block, regardless of where the logical end of its data falls inside its last Block.
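As a minimal sketch of that arithmetic, in C, assuming those same common sizes of 512 Bytes and 4KB, which not every device actually guarantees:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sizes only: 512-Byte physical sectors, 4KB Logical Blocks. */
#define PHYS_BLOCK_SIZE     512u
#define LOGICAL_BLOCK_SIZE  4096u
#define SECTORS_PER_LOGICAL (LOGICAL_BLOCK_SIZE / PHYS_BLOCK_SIZE)  /* = 8 */

/* Map a Logical Block Address to the first of its 8 physical sectors. */
static uint64_t logical_to_physical(uint64_t lba)
{
    return lba * SECTORS_PER_LOGICAL;
}

/* Space a File actually occupies: its size rounded up to a whole number
   of Logical Blocks, regardless of where the data logically ends. */
static uint64_t allocated_bytes(uint64_t file_size)
{
    uint64_t blocks = (file_size + LOGICAL_BLOCK_SIZE - 1) / LOGICAL_BLOCK_SIZE;
    return blocks * LOGICAL_BLOCK_SIZE;
}

int main(void)
{
    printf("LBA 100 -> physical sector %llu\n",
           (unsigned long long)logical_to_physical(100));
    printf("A 5000-Byte file occupies %llu Bytes\n",
           (unsigned long long)allocated_bytes(5000));
    return 0;
}
```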

One way in which a File System can allocate Blocks to a File is just by chaining them, and this can make for an entertaining way to explain Computing. I.e., at the beginning of each allocated Block, there could be a Header which, among other things, states an LBA pointing to the ‘Next Block’ in a chain, or which is otherwise Null, to state that the current Block is also the last one belonging to the File.

The use of such header information, residing inside Blocks that are supposed to contain data and not meta-data, has a major drawback though. It can be accommodated by a modern O/S with some effort, but it again causes the Physical Blocks not to correspond directly to Virtual Memory Pages, since some odd number of Bytes must then be subtracted from their size, to arrive at the size of the actual units of data and not meta-data, and the result no longer corresponds to a clean power of two.
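Here is a minimal sketch of such a chained scheme, using a small in-memory array as a stand-in for the actual device. The header layout, the NULL_LBA sentinel, and the chain() helper are all hypothetical, for illustration only, and are not any real File System’s on-disk format:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE  4096u
#define NULL_LBA    0u     /* assumed sentinel: marks the last Block of a chain */
#define DISK_BLOCKS 16u

/* Hypothetical header at the start of every allocated Block. */
struct block_header {
    uint32_t next_lba;     /* LBA of the next Block, or NULL_LBA at the end */
};

/* The drawback: the payload per Block is 4096 - 4 = 4092 Bytes,
   which is not a power of two and not a whole Virtual Memory Page. */
#define PAYLOAD_SIZE (BLOCK_SIZE - sizeof(struct block_header))

/* A tiny in-memory "disk" standing in for the real device. */
static uint8_t disk[DISK_BLOCKS][BLOCK_SIZE];

/* Write a header linking Block 'lba' to Block 'next'. */
static void chain(uint32_t lba, uint32_t next)
{
    struct block_header hdr = { next };
    memcpy(disk[lba], &hdr, sizeof hdr);
}

/* Walk a File's chain from its first Block, counting its Blocks. */
static uint32_t count_blocks(uint32_t first_lba)
{
    uint32_t count = 0;
    for (uint32_t lba = first_lba; lba != NULL_LBA; ) {
        struct block_header hdr;
        memcpy(&hdr, disk[lba], sizeof hdr);
        count++;
        lba = hdr.next_lba;
    }
    return count;
}

int main(void)
{
    /* A 3-Block File occupying the chain of LBAs 5 -> 9 -> 2. */
    chain(5, 9);
    chain(9, 2);
    chain(2, NULL_LBA);

    printf("blocks in chain: %u, payload per block: %zu Bytes\n",
           count_blocks(5), (size_t)PAYLOAD_SIZE);
    return 0;
}
```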

I think that ‘FAT32’ uses such a chained approach, although strictly speaking, FAT32 keeps its chain of cluster numbers in the File Allocation Table near the start of the volume, rather than in headers inside the data clusters themselves.

Because a modern O/S has such features as Memory-Mapped Files and Virtual Memory, its performance would take a considerable hit if it were forced to use the old ‘FAT32’ system as its main store of system data, if only because finding the Nth Block of a File means following the chain one link at a time.
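As an illustration of why that matters, here is roughly how a program on a POSIX system memory-maps a File. mmap() works strictly in whole, Page-aligned units, which is easy to satisfy when each on-disk data Block holds exactly one Page of data, and awkward when some odd number of header Bytes has been carved out of each Block:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { fprintf(stderr, "empty file\n"); return 1; }

    /* mmap() deals strictly in whole, Page-aligned units.  A File System
       whose data Blocks hold exactly one Page of data can hand those
       Blocks to the Page cache directly; one whose payloads are, say,
       4092 Bytes forces the kernel to copy and re-pack the data. */
    unsigned char *data = mmap(NULL, (size_t)st.st_size,
                               PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    printf("page size: %ld, first byte of %s: 0x%02x\n",
           sysconf(_SC_PAGESIZE), argv[1], data[0]);

    munmap(data, (size_t)st.st_size);
    close(fd);
    return 0;
}
```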

What tends to happen with up-to-date File Systems is that the Logical Blocks which are mapped directly from HD Physical Blocks contain only data, and no meta-data. ‘Unix System V’ was an early example of that, and ‘ext2’, ‘ext3’, ‘reiserfs’, and ‘ext4’ are examples of that as well. The way in which these File Systems keep track of which Blocks belong to which Files, is through one ‘iNode’ per File, plus an arbitrary number of Index Tables, each of which occupies one Logical Block of meta-data, consisting entirely of the LBAs of the data-Blocks that belong, in order, to any one File.
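The following is a simplified sketch of that arrangement, loosely in the style of the classic Unix layout, with 12 direct LBAs plus one single-indirect Index Table. The field names and sizes here are illustrative, and not the actual ext2 structures:

```c
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 4096u
#define N_DIRECT   12u                                  /* ext2 also uses 12 */
#define PTRS_PER_BLOCK (BLOCK_SIZE / sizeof(uint32_t))  /* 1024 LBAs per Index Table */

/* Simplified iNode: the File's exact length, plus the LBAs of its Blocks. */
struct inode {
    uint64_t size_bytes;          /* exact length of the File, in Bytes */
    uint32_t direct[N_DIRECT];    /* LBAs of the first 12 data Blocks */
    uint32_t single_indirect;     /* LBA of one Index Table of further LBAs */
};

/* One Index Table: a whole Logical Block containing nothing but LBAs. */
struct index_table {
    uint32_t lba[PTRS_PER_BLOCK];
};

/* Map the Nth Block of a File to its on-disk LBA.  In a real File System,
   the Index Table would first be read in from ino->single_indirect; here
   it is simply passed in.  Returns 0 as an assumed "invalid" value. */
static uint32_t file_block_to_lba(const struct inode *ino,
                                  const struct index_table *indirect,
                                  uint32_t n)
{
    if (n < N_DIRECT)
        return ino->direct[n];
    n -= N_DIRECT;
    if (n < PTRS_PER_BLOCK)
        return indirect->lba[n];
    return 0;   /* double and triple indirection omitted from this sketch */
}

int main(void)
{
    static struct index_table tbl;
    struct inode ino = { .size_bytes = 100000u, .single_indirect = 500u };

    for (uint32_t i = 0; i < N_DIRECT; i++)
        ino.direct[i] = 100u + i;     /* Blocks 0..11 at LBAs 100..111 */
    tbl.lba[0] = 2000u;               /* Block 12 of the File at LBA 2000 */

    printf("block 0 -> LBA %u, block 12 -> LBA %u\n",
           file_block_to_lba(&ino, &tbl, 0),
           file_block_to_lba(&ino, &tbl, 12));
    return 0;
}
```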

So there are entire Pages of meta-data stored on these higher-performing File Systems, and many of them. If such a Page becomes corrupted, the effect is not the same as when regular content-data becomes corrupted.

‘NTFS’ also possesses Index Tables, but I think it replaces Linux’s use of ‘iNodes’ with a ‘Master File Table’, in which each File has a record. LBAs are used as pointers throughout. Whether other people will think that a Master File Table is more secure than the use of many iNodes, I do not know. I feel more secure with a system of iNodes.

The iNode or the MFT record will state, as part of its meta-data, the exact length of the File in Bytes, so that the FS does not read past its end, even if the end of the content-data falls midway through a Logical Block.
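A short sketch of that boundary arithmetic, again assuming 4KB Logical Blocks:

```c
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 4096u

/* How many Bytes of the Nth Logical Block actually belong to the File,
   given the exact length recorded in the iNode or MFT record. */
static uint32_t valid_bytes_in_block(uint64_t file_size, uint64_t n)
{
    uint64_t start = n * BLOCK_SIZE;
    if (start >= file_size)
        return 0;                        /* past the end of the File */
    uint64_t remaining = file_size - start;
    return remaining < BLOCK_SIZE ? (uint32_t)remaining : BLOCK_SIZE;
}

int main(void)
{
    /* A 10,000-Byte File spans 3 Blocks: 4096 + 4096 + 1808 Bytes. */
    for (uint64_t n = 0; n < 4; n++)
        printf("block %llu: %u valid bytes\n",
               (unsigned long long)n, valid_bytes_in_block(10000, n));
    return 0;
}
```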

Dirk