The 0x programmer: the curse of a perfect memory

by Adam Tornhill, March 2022

On my first post-pandemic flight I took the opportunity to re-read the ever fascinating The Mind of a Mnemonist. In this classic case study, A. R. Luria tells the tale of S., a synesthetic man with a seemingly limitless memory. The average person can keep ~7 items in their head and recall them. Without active rehearsal, those memory traces will soon vanish. This is in remarkable contrast to S., who effortlessly memorized 70 items, whether that information was numbers, symbols, or just random nonsense syllables. Even more impressive: when re-tested on the same information weeks, months, or even years later, S. could still recall it in any given order. The information seemed to persist forever, and Luria's team couldn't find any apparent limits to S.'s memory capacity.

Luria met S. in the 1920s, way before the computing age. That got me thinking: how would a person with S.'s capabilities perform as a programmer today? I mean, picking up the latest Perl syntax or the intricacies of Kubernetes would be effortless. And passing any AWS certification of your choice would merely require a lazy pass through the relevant documentation. While the benefits are obvious, a perfect memory won't necessarily turn us into coding wizards. In fact, I suspect that the contrary is true: S. would struggle as a developer. Let's explore how our cognitive constraints guide software design.

Working memory: a programmer's primary tool

From a programmer's perspective, one of our most interesting cognitive functions is something called working memory. Working memory is the mind's workbench: it lets you perceive, interpret, and manipulate information in your head. For example, you engage working memory when doing crosswords, solving a sudoku, or trying to understand a piece of code. Working memory is vital to us programmers.

Unfortunately, working memory is also a strictly limited cognitive resource. There's only so much information that we can hold in our head at any given time and still reason effectively about it. Approaching the boundaries of your working memory requires effort, and it's easy to make mistakes when operating at the edge of your cognitive capabilities. To try it out, you could do the N-Back task, frequently used in cognitive research to exhaust working memory, or, next level, try to understand the rules for C++ template argument deduction. Painful, isn't it?
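For a concrete taste, here's a small hypothetical C++ snippet (my example, not from the research) where predicting the deduced type at each call site means juggling const stripping, reference collapsing, and array decay all at once:

    // Three innocent-looking templates, each with its own deduction rules.
    template <typename T> void by_value(T) {}      // strips top-level const
    template <typename T> void by_ref(T&) {}       // keeps const, binds to lvalues
    template <typename T> void by_fwd_ref(T&&) {}  // forwarding reference

    int main() {
        const int x = 0;
        int arr[3] = {1, 2, 3};

        by_value(x);    // T = int         (const stripped)
        by_ref(x);      // T = const int   (const kept)
        by_fwd_ref(x);  // T = const int&  (T&& collapses to const int&)
        by_value(arr);  // T = int*        (array decays to a pointer)
        by_ref(arr);    // T = int[3]      (no decay through a reference)
    }

Five calls, five different deduction outcomes: exactly the kind of load that saturates a normal working memory.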

S. didn't have this limitation. Further, S. seemed to be able to commit all information to his long-term memory. One of the most fascinating accounts in Luria's book is when he re-tests S. 16 years after the initial test. Unbeknownst to S., Luria had kept all the original testing material (series of random numbers/words/symbols). To Luria's surprise, S. could recall the complete series that he had imprinted into memory almost two decades earlier. Even more spectacular is the way S. performed the recall: he started by mentally re-creating the whole setting, the room with its furniture arrangement, the tone of Luria's voice, and so on. S. would then proceed to recall the information just as if the event were happening right now. To me, this is the closest we humans have ever gotten to time travel.

The legacy code hero

I don't have the memory capacity of S., although I often wish I did. In particular when dealing with legacy code. The main challenges with legacy code are not necessarily technical but rather social, contextual, and historical. With legacy code, we might no longer know why the code does what it does or even how it works. Further, we are rarely aware of the original trade-offs, nor of the business context at the time the code was written. Imagine having someone like S. on the team. S. would be invaluable and outperform any piece of documentation. His ability to re-create the original setting would also help us understand the context, trade-offs, and discussions of the original developers. With S. on board, we could tame any legacy code beast.

But even if S. would be invaluable as our collective memory bank, would he write great code himself? Or, in more general terms, would our coding style change if we could remember every detail of our code? It's a liberating thought, so let's consider all the design principles we could skip.

Software design is pointless

Given a perfect memory, the only one who would have to interpret our code is the machine. This implies that we could simplify programming a lot, as we'd be rid of the demanding audience of forgetful humans. Let's start with the simple stuff: why on earth would we complicate our lives with version control? Just code, and roll back mentally to any earlier state if needed. Tempting, but the big win is in the software design itself: care about giving your functions and variables proper names? Stop that. There wouldn't be any need -- remember that S. could recall nonsense syllables effortlessly. Suddenly typing would be the bottleneck of coding. Wonderful.
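A hedged sketch of that liberation, using an invented pricing function: to a perfect memory, the cryptic version reads just as well as the descriptive one, only with less typing.

    #include <cstdio>

    // To someone who recalls nonsense syllables effortlessly, this is fine:
    double f(double a, double b, double c) { return a * b * (1.0 - c); }

    // The version the rest of us need, paying the typing "bottleneck" tax:
    double order_total(double unit_price, double quantity, double discount_rate) {
        return unit_price * quantity * (1.0 - discount_rate);
    }

    int main() {
        std::printf("%f\n", f(9.99, 3, 0.1));            // quick: what does this compute?
        std::printf("%f\n", order_total(9.99, 3, 0.1));  // no recall required
    }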

Encapsulation then, the pinnacle of good design? Well, if we can remember every single business rule and where it is located, then there's no point in encapsulating data or business rules. Should the requirements change, then we could live with the occasional sweeping modification, as it carries low risk: we'd know what code is there, how it interacts with other parts, and why we built it in the first place. We could even get away with poor cohesion and enjoy functions that stretch over thousands of lines of code. With a perfect memory, we'd know where to find any behavior as well as the impact of any change. Simple.
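For contrast, here's a minimal sketch, with a made-up discount rule, of what encapsulation buys those of us who forget:

    #include <cstdio>
    #include <stdexcept>

    // For fallible memories, the business rule lives in exactly one place.
    // Should the requirements change, we touch one function instead of
    // recalling every call site that might have duplicated the logic.
    class Order {
    public:
        explicit Order(double subtotal) : subtotal_(subtotal) {
            if (subtotal < 0.0) throw std::invalid_argument("negative subtotal");
        }

        double total() const {
            const double discount = subtotal_ > 1000.0 ? 0.10 : 0.0;  // the rule
            return subtotal_ * (1.0 - discount);
        }

    private:
        double subtotal_;  // invariant: never negative, guarded by the constructor
    };

    int main() {
        const Order order(1500.0);
        std::printf("%f\n", order.total());  // 1350.0; the rule applied in one place
    }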

...except when it's not!

If the kind of code I just described sounds like the stuff programming nightmares are made of, then you're in good company. There's a reason we care about software design: most of us don't have S.'s capabilities. We need to find a way of tuning our code to a form that fits our cognitive bottlenecks. A form that plays to the brain's strengths rather than fighting it.

And this is where S. would run into challenges: even if we -- as a team -- agree upon certain principles and practices, S. would have a hard time adapting. There's no such thing as a free lunch in the world of cognition either.

As you probably suspect, the memory of S. wasn't just better; it operated differently. S. had synesthesia, a condition where a stimulus in one sensory modality activates another. For example, hearing a specific word triggers "seeing" a picture that represents that very word. Or a specific number might be associated with a shape, a color, or even a taste.

S.'s synesthesia provided him with straightforward memory cues for organizing and retrieving information. However, such visual strategies quickly break down when it comes to abstract concepts. Consider the concepts of "nothing" and "eternity". How would you visualize them? S. couldn't, and consequently struggled both to remember abstract terms and to understand them.

The typical human memory isn't anything like what S. experienced. For starters, most humans don't recall precise information. Instead, we focus on the key elements of a message. That is what allows us to identify patterns. Since S. never forgot, he never had to evolve or train his ability to abstract. Let's consider for a moment what that type of programmer would be like.

First of all, without being able to deduce the overall pattern in a solution, it becomes impossible to generalize and hence to learn. We wouldn't be able to translate hard-earned lessons from one part of the code to another. Given the smallest variation in context, every problem would be perceived as unique and would have to be solved by brute force.

Further, if we're unable to separate the specific from the general, then there would certainly be no design pattern catalogs. Writing a reusable library would be impossible. And finally, without an ability to deduce commonalities between problems and solutions, we wouldn't be capable of translating knowledge acquired in one codebase into learnings for another project. We'd be doomed to reinvent wheels on a regular basis.
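A small illustration, with invented examples, of the generalization step that would be out of reach: two seemingly unique problems collapsing into one reusable abstraction.

    #include <cstdio>
    #include <vector>

    // Without abstraction, each sum is its own brute-force problem:
    int sum_ages(const std::vector<int>& ages) {
        int total = 0;
        for (int age : ages) total += age;
        return total;
    }

    double sum_prices(const std::vector<double>& prices) {
        double total = 0.0;
        for (double price : prices) total += price;
        return total;
    }

    // Spotting the shared pattern collapses both into one reusable function:
    template <typename T>
    T sum(const std::vector<T>& values) {
        T total{};
        for (const T& value : values) total += value;
        return total;
    }

    int main() {
        std::printf("%d\n", sum(std::vector<int>{30, 41, 27}));   // 98
        std::printf("%f\n", sum(std::vector<double>{9.5, 0.5}));  // 10.0
    }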

Abstraction as a tool of thought

Luria's compassionate portrait of S., and of how memory shapes our cognitive strengths and weaknesses, is a wonderful work. Later psychology studies have shed light on the consequences of amnesia -- the loss of memory -- and those results tend to be more intuitive; most of us forget, and it's something we can relate to. That the opposite also comes with negative consequences is at first more surprising.

As attractive as a limitless memory might sound, it likely comes at the cost of impaired learning and reasoning. The software design principles that we take for granted stem from human imperfection; design compensates for our cognitive bottlenecks, including but not limited to our flawed memory.

Still, design does more than that. A good design allows us to reason about both problem and solution so that we can refine them, generalize to patterns, and learn for future challenges. That learning can be both explicit and implicit, but it won't happen without an enabling abstraction. Abstraction is key.

About Adam Tornhill

Adam Tornhill is a programmer who combines degrees in engineering and psychology. He's the founder of CodeScene where he designs code analysis tools that empower teams to build great software.

Adam is also the author of Software Design X-Rays: Fix Technical Debt with Behavioral Code Analysis, the best-selling Your Code as a Crime Scene, Lisp for the Web, and Patterns in C, as well as a public speaker. Adam's other interests include modern history, music, retro computing, and martial arts.