As long-term readers of this blog will know, I think that we would benefit from better names for a number of computing concepts. I previously criticized the obfuscatory name ‘dynamic programming’ for the technique of solving an instance of a problem by making a table of solutions to smaller instances of the same problem. In this post, I’m going to criticize the names ‘top’ and ‘bottom’ for the two ends of a stack.
The trouble with these names is that they are ambiguous. If a stack is regarded as growing upwards, like the stack of plates in a plate dispenser (pictured), then the ‘top’ is the end with the youngest items, where items are pushed and popped, and the ‘bottom’ is the other end, with the oldest items. But if a stack is regarded as growing downwards, as for example the control stack does on essentially every computer architecture, then the ‘bottom’ of the stack would seem to be the lowest-numbered address, and that is the end with the youngest items.
In the Memory Pool System, we coped for years with confusion arising from these names. The MPS normally considers a block of memory as being organized in address order, so when the control stack is considered as a block of memory, the lowest-numbered address is the ‘bottom’ of the block even though it is the logical ‘top’ of the stack. We found that the stack-handling code became much easier to follow when we adopted the names ‘hot end’ for the end of the stack with the youngest items, which changes quickly, and ‘cold end’ for the end with the oldest items, which changes slowly.
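A minimal sketch (in C; this is illustrative, not actual MPS code) may make the convention concrete. It assumes a downward-growing stack stored in an array, with the pointers named for the hot/cold ends rather than ‘top’ and ‘bottom’:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of a downward-growing stack, like the control stack on most
 * architectures.  The cold end is the highest-numbered address, where
 * the oldest items live; the hot end is the lowest-numbered used
 * address, where items are pushed and popped. */

#define STACK_WORDS 16

typedef struct {
    unsigned long words[STACK_WORDS];
    unsigned long *hot;   /* points at the youngest item; pushes go at hot - 1 */
    unsigned long *cold;  /* one past the oldest item */
} stack_t;

static void stack_init(stack_t *s) {
    s->cold = s->words + STACK_WORDS;
    s->hot = s->cold;            /* empty stack: hot == cold */
}

static void stack_push(stack_t *s, unsigned long v) {
    assert(s->hot > s->words);   /* not full */
    *--s->hot = v;               /* the stack grows towards lower addresses */
}

static unsigned long stack_pop(stack_t *s) {
    assert(s->hot < s->cold);    /* not empty */
    return *s->hot++;
}

/* A scan in address order runs from the hot end to the cold end --
 * unambiguous, where "from the bottom to the top" would not be. */
static size_t stack_count(const stack_t *s) {
    size_t n = 0;
    for (const unsigned long *p = s->hot; p < s->cold; p++)
        n++;
    return n;
}
```

In this layout the youngest item sits at the lowest-numbered address, so whether it is at the ‘top’ or the ‘bottom’ depends entirely on which metaphor you have in mind; ‘hot end’ names it without appeal to either picture.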
I don’t insist on these particular terms—‘young’ and ‘old’, or ‘accessible’ and ‘inaccessible’ would likely do as well, if you don’t like ‘hot’ and ‘cold’. The point is that in order to be unambiguous the names must be based on something that’s essential to the nature of the data structure rather than an arbitrary or accidental aspect of a pedagogical metaphor.
Where does stack nomenclature come from? The concepts involved in a control stack were invented by Alan Turing and set out in a 1945 report, which is described in Carpenter and Doran (1977). I’ve glossed Turing’s descriptions with the modern names for the concepts.1
When we wish to start on a subsidiary operation [subroutine] we need only make a note of where we left off the major operation [the return address] and then apply the first instruction of the subsidiary. When the subsidiary is over we look up the note and continue with the major operation. Each subsidiary operation can end with instructions for this recovery of the note. How is the burying and disinterring of the note to be done? There are of course many ways. One is to keep a list of these notes in one or more standard size delay lines, with the most recent last. The position of the most recent of these will be kept in a fixed TS [temporary store, that is, register], and this reference [the stack pointer] will be modified every time a subsidiary is started or finished. The burying and disinterring processes are fairly elaborate, but there is fortunately no need to repeat the instructions involved, each time, the burying being done through a standard instruction table BURY [jump to subroutine], and the disinterring by the table UNBURY [return].
Knuth (1998) describes the history of the term ‘stack’:
the name “stack” stems from IPL [Information Processing Language] terminology (although “pushdown list” was the more official IPL wording), and it was also independently introduced by E. W. Dijkstra
I can’t find any of the IPL reports online, but here’s Dijkstra (1960):
The basic concept of the method is the so-called stack. One uses a stack for storing a sequence of information units that increases and decreases at one end only, i.e. when a unit of information that is no longer of interest is removed from the stack, then this is always the most recently added unit still present in the stack.
Dijkstra used the name “top” for the hot end of the stack:
there was no need to store the stack in a random access memory, for our interest was at all times restricted to the youngest element in the stack. In principle we could have used a small magnetic tape that would have to move one place forward in writing and one place backward in reading. […] Inside the subroutine we store the most anonymous intermediate results in the "top" of the stack in just the same way.
1. The story I was taught about the development of the subroutine is that the first implementation was by Maurice Wilkes and David Wheeler during the development of the EDSAC in the late 1940s. But Turing’s report suggests that the key ideas were well known some time before they were implemented: Turing clearly describes subroutines, the call stack, the stack pointer, and dedicated jump-to-subroutine and return instructions. He also goes on to describe the process of link editing.