Recognizing that Methodologies Exist to Computerize Volumes of Text

One feat of Computer Programming which is not in itself forbidden is to store text in a database, formatted as HTML. But an aspect of this which needs to be recognized is that random text, such as ‘blog entries’, tends to be of variable length, while database records still tend to be of fixed length today.

But methodologies have existed in Computer Science for a long time to manage variable-length data. One such methodology involves ‘linked lists’, and another involves ‘doubly-linked lists’. These are commonplace with pointers and memory addresses, but can easily be translated into records with record numbers.
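
Just to illustrate — and this is only a sketch, with class and function names I am making up — a doubly-linked list in Python could look like this, with ordinary object references standing in for memory addresses:

```python
# A sketch only: a doubly-linked list, where object references stand in
# for the pointers / memory addresses of the textbook version.
# The names 'Node' and 'append' are mine, not from any library.
class Node:
    def __init__(self, text):
        self.text = text   # the payload, e.g. one block of text
        self.prev = None   # the previous node, or None at the head
        self.next = None   # the next node, or None at the tail

def append(tail, text):
    """Link a new node after 'tail' and return it, as the new tail."""
    node = Node(text)
    node.prev = tail
    if tail is not None:
        tail.next = node
    return node

# Build a tiny chain and walk it forwards.
head = append(None, "first ")
tail = append(append(head, "second "), "third")
node, parts = head, []
while node is not None:
    parts.append(node.text)
    node = node.next
print("".join(parts))   # first second third
```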

Hence, random text can be subdivided into smaller blocks of arbitrary length, and a table of database records can be defined such that each record has one numeric field, another numeric field, and then the text field corresponding to a block. Because each record in the DB table intrinsically has a record number, the first numeric field within the record can point to a logical ‘next record’, while the second numeric field points to a logical ‘previous record’, just as it was taught with pointers. An impossible record number such as (-1) could be used to signal an end to these chains…
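
To give an idea of what I mean — purely as a sketch, assuming SQLite and with table and column names chosen only for illustration — such a table and its traversal could look like this in Python:

```python
import sqlite3

# A sketch, assuming SQLite purely for illustration: each row holds two
# record numbers plus one block of text, and -1 signals the end of a chain.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE blocks (
        id      INTEGER PRIMARY KEY,  -- the intrinsic record number
        next_id INTEGER NOT NULL,     -- record number of the next block, or -1
        prev_id INTEGER NOT NULL,     -- record number of the previous block, or -1
        body    TEXT    NOT NULL      -- one block of the stored text
    )
""")

# Store a three-block chain: 1 <-> 2 <-> 3
conn.executemany(
    "INSERT INTO blocks (id, next_id, prev_id, body) VALUES (?, ?, ?, ?)",
    [
        (1,  2, -1, "<p>First block of the entry…"),
        (2,  3,  1, " middle block…"),
        (3, -1,  2, " last block.</p>"),
    ],
)

def read_entry(conn, first_id):
    """Follow the 'next_id' chain from the first record and rebuild the text."""
    parts, rec_id = [], first_id
    while rec_id != -1:
        next_id, body = conn.execute(
            "SELECT next_id, body FROM blocks WHERE id = ?", (rec_id,)
        ).fetchone()
        parts.append(body)
        rec_id = next_id
    return "".join(parts)

print(read_entry(conn, 1))
```

The integer primary key already acts as the intrinsic record number, so the two extra fields only ever need to hold other record numbers, never memory addresses.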

And while this sounds interesting in theory, one type of data processing which “WordPress” already seems to do well is to keep track of ‘revisions’ that have taken place to a blog entry, each of which could have replaced, inserted, or deleted a block of text…
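
I do not know how WordPress actually stores its revisions internally, but just to sketch the idea that each revision only needs to relink a few records, here is a toy version in Python, with a dict standing in for the database table:

```python
# A sketch only — NOT how WordPress stores revisions. A dict stands in for
# the table, keyed by record number; each value is [next_id, prev_id, body],
# and -1 ends a chain. A revision that inserts or deletes one block only
# touches the links of its neighbours.
table = {
    1: [2, -1, "Block A. "],
    2: [3,  1, "Block B. "],
    3: [-1, 2, "Block C."],
}

def insert_after(table, rec_id, new_id, body):
    """Insert a new block directly after record 'rec_id'."""
    next_id = table[rec_id][0]
    table[new_id] = [next_id, rec_id, body]
    table[rec_id][0] = new_id
    if next_id != -1:
        table[next_id][1] = new_id

def delete(table, rec_id):
    """Unlink and remove the block stored at record 'rec_id'."""
    next_id, prev_id, _ = table.pop(rec_id)
    if prev_id != -1:
        table[prev_id][0] = next_id
    if next_id != -1:
        table[next_id][1] = prev_id

insert_after(table, 1, 4, "Block A½. ")   # a 'revision' inserting one block
delete(table, 2)                           # a 'revision' deleting one block
```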

Oh, the marvels of Computing…

I think one would also need to keep track of the possibility that, if a block of text had 256 characters by default, any one block could be using fewer than its full 256 characters…
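
One simple way to handle that — again only a sketch, with a block size I am merely assuming — would be to cut the full text into blocks of at most 256 characters, so that normally only the last block uses fewer than its full 256; the used length could also be stored in yet another numeric field:

```python
BLOCK_SIZE = 256   # an assumed default block size

def split_into_blocks(text, size=BLOCK_SIZE):
    """Cut the full text into blocks of at most 'size' characters;
    normally only the last block will use fewer than 'size'."""
    return [text[i:i + size] for i in range(0, len(text), size)]

blocks = split_into_blocks("x" * 600)
print([len(b) for b in blocks])   # [256, 256, 88] — the last block is shorter
```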

Dirk