Programming is Glorified Bookkeeping

Preface: I planned this post to be comprehensive across the stack, but realizing that the stack is infinitely zoomable means the post would never be complete and never published. So I have decided to bite the bullet, accept the current level of detail for the time being, and update the post over time.

My first programming course was called Scientific Programming and used MATLAB. I remember my struggle back then wasn’t with the language itself or with the bugs. The biggest struggle was recognizing mental shortcuts, catching my brain at just the right moment, breaking apart the mental shortcuts into their greatest common divisors, and translating those components to code. This means unlearning fossilized mental processes that have been ingrained and automated over the years. It requires shifting your mental gears from System 1 to System 2 1. Once we find the greatest common divisors in the process, our task is to group and name them so that they become meaningful, trackable, and reusable. But this is not yet where the bookkeeping takes place.

Throughout this process, we not only group actions into functions, we also group closely related data into structures. Note that it is structures, plural: there are many types to choose from depending on the use case. The data structure determines how you keep track of the data and how it is organized. Data that is most sensibly organized as an array will be hectic to manage as some otherwise fragmented structure. Rob Pike reportedly said 2, “Data dominates. If you’ve chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.” Data is fluid and exists in its own dimension. The structures we package it in are for us programmers, to let us keep track of which data are related, how they are related, how much space they occupy, and how they pack together. This is the first level of bookkeeping.

Building the mental model is the highest level of bookkeeping. You will often find that the mental model of the problem’s data is not the best fit for solving the problem in code. At times it is a question of possibility, but I believe it is more a question of how complex the tracking becomes. To illustrate, consider a problem I once had to work on involving the detection of cycles in a directed graph. The mental model consists of vertices as balls and edges as strings connecting the balls, with a direction assigned to each string. If you try to code it merely as vertices (balls) and edges (directed strings) and then try to find the cycles, you are in for a World of Pain. It is not impossible, but there is too much bookkeeping to keep track of as you travel through the graph. Each vertex knows the next vertex it connects to. Thus, to detect cycles, we need a traveller who starts from each vertex (or only the ones we care about) and goes through the various possible paths one after another. This means we need to construct all the possible paths to take, keep track of the ones taken and the ones remaining, and keep track of the travellers along the current path. We also need a watchful eye staring at the graph from a higher dimension. Stare sharply enough and you will notice how naive this hard brute-force approach is. It is doable, but hard. The alternative is to separate the information of which vertex connects to which from the picture of edges as strings tying vertices together. In other words, the bookkeeping is broken into a unique list of the vertices (a set) and a table (a hash map) showing the mapping from one vertex to the next. Now the problem of detecting cycles is just going through the mapping table until you hit a vertex with no mapping to a next vertex or you find yourself at a vertex seen before, where the former means no cycle and the latter indicates the presence of a cycle.
Note how the very exercise of bookkeeping was instrumental to solving the problem without going through the hair-pulling phase.
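The walk over the mapping table can be sketched in a few lines. This is a minimal sketch, not the original solution; the vertex names and the assumption that each vertex maps to at most one successor are illustrative:

```python
def has_cycle(next_vertex: dict) -> bool:
    """Walk the successor table from every vertex; a revisit means a cycle."""
    for start in next_vertex:
        seen = set()                     # the bookkeeping: vertices on this walk
        current = start
        while current in next_vertex:
            if current in seen:          # found a vertex seen before: a cycle
                return True
            seen.add(current)
            current = next_vertex[current]
        # walked off the table: this path ends without looping back
    return False

print(has_cycle({"a": "b", "b": "c"}))            # a -> b -> c, no cycle
print(has_cycle({"a": "b", "b": "c", "c": "a"}))  # a -> b -> c -> a, cycle
```

The set of vertices and the hash map carry the entire state; no traveller, path list, or higher-dimensional observer is needed.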

We have our first level covered, so let’s proceed. Once you have your data managed in proper data structures, you can start keeping track of them as single units. Moreover, you can mix and match those structures and still handle them as single units. In my earlier example, I synthesized the data structure of the graph out of a set and a hash map, but in the end the operations were talking to the graph and not to the vertices or the edges themselves. The end goal is to make those data sets easier to reason about and track as they shift around. Form data full of details, for instance, is meant to be processed as a single package at a certain level of operations. This takes into consideration proper abstraction to avoid processes stepping on each other’s toes as they handle the data. Algorithms and data structures are just means to help the programmer keep track of data arrangement and movement in an expected manner. This is another level of bookkeeping.
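The mix-and-match idea can be sketched as a hypothetical Graph type that bundles the set and the mapping table, so callers talk to the graph as one unit and never to the raw containers:

```python
class Graph:
    """A single unit composed of two structures: a set and a hash map."""

    def __init__(self):
        self.vertices = set()   # the unique list of vertices
        self.next_of = {}       # mapping from one vertex to the next

    def add_edge(self, src, dst):
        # Callers only ever "talk to the graph"; the bookkeeping of
        # which container holds what stays hidden inside.
        self.vertices.update((src, dst))
        self.next_of[src] = dst

g = Graph()
g.add_edge("a", "b")
g.add_edge("b", "c")
print(sorted(g.vertices))   # ['a', 'b', 'c']
```

The names Graph, add_edge, and next_of are made up for illustration; the point is only that the composite is handled as a single unit.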

Just as you, the programmer, have managed to track the data packages, their movement, and how they stack, their underlying data bits need to be managed and tracked too. The bits of your data are managed at multiple levels of abstraction. The first level depends on your programming language of choice. The runtime, or the interpreter, will claim memory from the operating system to store the data structures created and used throughout your program. The simplest case is when all the data structures have the same size, so the runtime of your language will have multiple blocks of memory of the same size containing the bits of your data. Each block stands for a single data structure in the code. The runtime can start with two blocks of memory: one to store the data structures created in your program, broken into segments, and another to maintain a table mapping each segment to the variable referring to it. We have built another level of bookkeeping.
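A toy model of those two blocks might look as follows; the names and the bit patterns are illustrative, and a real runtime would of course work with raw memory rather than Python lists:

```python
segments = []    # block one: equal-size segments holding the data bits
variables = {}   # block two: table mapping variable names to segments

def store(name, bits):
    """Record which segment holds the variable, then store its bits."""
    variables[name] = len(segments)   # bookkeeping entry: name -> segment index
    segments.append(bits)

store("x", 0b00101010)
store("y", 0b00000111)
print(segments[variables["x"]])   # 42
```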

The keen-eyed among you might notice how wasteful of memory this could be. We thus have to add one more layer of bookkeeping. The runtime of the programming language shall maintain a table of the types of data it supports and the size an instance of each of those types occupies. We will call this table type_size. We can thus have a type named int8 which is 8 bits big, i.e. a byte, another type int16 which is 2 bytes in size, another type char which is also a single byte in size, and so on. For this layer to function, every segment of memory standing for a data structure must now report its type along with its content. So we take a step back, build a table mapping a series of bits to the type they identify, and call it type_identity. The type names shared between type_identity and type_size then relate the two tables together. The runtime thus divides every segment into two parts: the first contains the series of bits identifying the type of the data in the second part, using the bit series from the type_identity table, and the second part contains the actual data. We are still just at the beginning of the bookkeeping journey of computer programs.
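The two tables and a tagged segment can be modelled like this; the identity bits and type names are assumptions chosen for the sketch:

```python
# Table one: identity bits -> type name they identify.
type_identity = {0b00: "int8", 0b01: "int16", 0b10: "char"}
# Table two: type name -> size in bytes of one instance.
type_size = {"int8": 1, "int16": 2, "char": 1}

# A segment now has two parts: the type tag, then the actual data bits.
segment = (0b01, 0b0000001100111000)

tag, data = segment
type_name = type_identity[tag]            # look up the tag in table one
print(type_name, type_size[type_name])    # int16 2
```

The shared type names are what tie the two tables together: the tag leads to a name in type_identity, and the name leads to a size in type_size.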

Custom data types are not impossible to handle using the same scheme. Custom data types are built by composing existing data types, which we shall call primitive data types. We already know the size of each primitive data type. The size of the new data type is the sum of the sizes of the primitive data types composing it, plus a tiny bit for metadata, such as the name of the new type. The runtime, or the compiler, runs through the definitions of the new types, inserts them into the type_identity and type_size tables, then proceeds to treat them the same way it handles every other primitive type. The bookkeeping for custom types is no different from the bookkeeping followed for primitive types. We only had to recognize them and record them, which is mere bookkeeping.
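Registering a custom type then amounts to one addition and two table inserts. In this sketch the one-byte metadata overhead and the way fresh identity bits are picked are both assumptions:

```python
type_size = {"int8": 1, "int16": 2, "char": 1}
type_identity = {0b00: "int8", 0b01: "int16", 0b10: "char"}

def register_type(name, fields, metadata_bytes=1):
    """Sum the field sizes, add metadata, record the type in both tables."""
    size = sum(type_size[f] for f in fields) + metadata_bytes
    type_size[name] = size                          # insert into type_size
    type_identity[max(type_identity) + 1] = name    # assign the next free bits
    return size

# A custom "point" of two int16 fields: 2 + 2 + 1 byte of metadata.
print(register_type("point", ["int16", "int16"]))   # 5
```

From here on, "point" is looked up through the same tables as any primitive type.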

To run the program, the operating system provides the runtime with a region of memory for its use. The runtime divides this region into segments for the data created during the run of the program, where the size of each segment corresponds exactly to the size of the data it contains plus a small amount of metadata to identify it. We have already noted how the bookkeeping is handled for the data within the program. But how does the operating system manage the assignment and reclamation of those memory regions? You guessed right: by virtue of bookkeeping.

The operating system knows how much memory is available. When the runtime of a program asks the operating system for a certain size of memory, the operating system marks the address where a contiguous range of memory of at least the requested size starts, marks the end address, and tells the runtime of the program where its assigned memory block starts. Again, the operating system marks the addresses and sizes of the ranges. It also marks who those ranges are assigned to. The operating system keeps books of that information.
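Those books can be modelled as a toy first-fit allocator over a flat address space. The total size, the owner names, and the first-fit policy are all assumptions for the sketch; real operating systems are far more elaborate:

```python
TOTAL = 1024
assigned = []   # the books: (start address, end address, owner), sorted by start

def allocate(size, owner):
    """Find a contiguous free range, record it, and report where it starts."""
    cursor = 0
    for start, end, _ in assigned:
        if start - cursor >= size:        # a gap before this range fits
            break
        cursor = end                       # otherwise skip past it
    if TOTAL - cursor < size:
        raise MemoryError("no contiguous range large enough")
    assigned.append((cursor, cursor + size, owner))
    assigned.sort()
    return cursor                          # tell the runtime where its block starts

def free(owner):
    """Reclaiming is just erasing the owner's entries from the books."""
    assigned[:] = [r for r in assigned if r[2] != owner]

a = allocate(100, "prog_a")   # starts at 0
b = allocate(200, "prog_b")   # starts at 100
free("prog_a")
c = allocate(50, "prog_c")    # reuses the reclaimed gap, starts at 0
print(a, b, c)                # 0 100 0
```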

TODO: more complex RAM region assignment, virtual memory, swap to disk – they are all forms of bookkeeping upon bookkeeping upon bookkeeping

TODO: Mnemonics of CPU instructions are mapped to bits which are electric signals to be sent in order to arrive at the correct set of NOR gates whose combination will perform the intended behavior, which is also a form of bookkeeping

This is not profoundly new. Richard Feynman described a more abstract form of this in a lecture titled Computer Heuristics. Feynman used the analogy of filing index cards in place of the tables and markings we used in this post.


  1. https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow ↩︎

  2. https://users.ece.utexas.edu/~adnan/pike.html ↩︎