I had returned to a practice interview problem because I wanted to experiment with it, when it struck me that I’d left myself a comment I never deleted:

  1. Insert x in your data structure.
  2. Delete one occurrence of y from your data structure, if present.
  3. Check if any integer is present whose frequency is exactly z. If yes, print 1; otherwise print 0.

The comment described an imaginary instruction set for which I had written a switch statement. In production code, many would immediately flag this as a code smell. After all, if the code is that difficult to remember and understand, that should be an instant trigger that it needs more descriptive variables, better-factored methods with meaningful names, and appropriate tests. With that in mind, I went through and started thinking about what the proper names would be, and whether I could make the code descriptive enough that I could come back six months later and know what was going on without having to look elsewhere.
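
For context, the code in question looked roughly like this. This is a reconstruction in Ruby, not the original; the point is that without the comment, the cases are opaque:

```ruby
# A reconstruction of the kind of code the comment was annotating:
# a switch over opcodes, each query a line like "1 5" or "3 6".
counts = Hash.new(0)

STDIN.each_line do |line|
  op, arg = line.split.map(&:to_i)
  case op
  when 1 then counts[arg] += 1                      # insert arg
  when 2 then counts[arg] -= 1 if counts[arg] > 0   # delete one occurrence of arg
  when 3 then puts(counts.value?(arg) ? 1 : 0)      # any integer occurring exactly arg times? (linear scan)
  end
end
```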

We could keep it simple and suggest INSERT, DELETE_ONE_INSTANCE, and QUERY, but QUERY is a poor name here. We are checking whether any integer is present whose frequency matches the input; that is, if there are six 8s, the query `3 6` prints 1. I sat there for ten minutes and could not think of a good, concise name for this behavior. The example is completely contrived, but we have all reached places where we racked our brains for just the right name.
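
Here is a minimal sketch of what the better-factored version might look like in Ruby. The class and method names are my candidates, not anything blessed, and `any_value_with_frequency?` was the best I managed for that third operation:

```ruby
# Hypothetical names for the three operations. A second hash tracks how
# many integers occur at each frequency, so the third operation is O(1).
class FrequencyTracker
  def initialize
    @counts = Hash.new(0)       # integer => number of occurrences
    @frequencies = Hash.new(0)  # occurrence count => how many integers have it
  end

  def insert(x)
    c = @counts[x]
    @frequencies[c] -= 1 if c.positive?
    @counts[x] = c + 1
    @frequencies[c + 1] += 1
  end

  def delete_one_instance(y)
    c = @counts[y]
    return if c.zero?
    @frequencies[c] -= 1
    @counts[y] = c - 1
    @frequencies[c - 1] += 1 if c > 1
  end

  # Is there any integer whose frequency is exactly z?
  # e.g. after inserting six 8s, any_value_with_frequency?(6) is true.
  def any_value_with_frequency?(z)
    @frequencies[z].positive?
  end
end
```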

Test-driven development advocates would laugh: “Why don’t you just check your executable spec, the well-written test?” That’s a great idea! In something like RSpec, I can write a well-worded specification like `it "prints 1 if any integer is present z times in the data structure"` and its converse, `it "prints 0 if no integer is present z times in the data structure"`. That’s as helpful as the original comment. However, it doubles the surface area of code we are keeping in our heads. If the codebase is properly factored, that’s not normally a problem, but I have seen good practitioners produce sub-optimal solutions under pressure, or because the right abstractions were not obvious.
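
Here is a minimal sketch of those two specs, reusing the hypothetical FrequencyTracker from above; for simplicity the assertions check the boolean the printing would be driven by, rather than captured output:

```ruby
RSpec.describe FrequencyTracker do
  describe "#any_value_with_frequency?" do
    it "prints 1 if any integer is present z times in the data structure" do
      tracker = FrequencyTracker.new
      6.times { tracker.insert(8) }
      expect(tracker.any_value_with_frequency?(6)).to be true
    end

    it "prints 0 if no integer is present z times in the data structure" do
      tracker = FrequencyTracker.new
      2.times { tracker.insert(8) }
      expect(tracker.any_value_with_frequency?(6)).to be false
    end
  end
end
```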

Great practitioners in disciplined organizations might take care to follow domain-driven design, which gives a team shared meaning and a vocabulary of predefined, more articulate terms for naming things in the codebase. Even so, we will often run into cases where the meaning of the code is hard to express in short bursts, and times when we can’t yet find concise abstractions. Our commenting practices cannot be premised on perfect practice, because we are not perfect programmers.

There is also the contrarian view of John Ousterhout that “in-code documentation plays a crucial role in software design” and that “inadequate documentation creates a huge and unnecessary drag on software development.” I am more sympathetic to this argument than most current programmers, and it shows: this whole observation started with a comment. At the same time, it makes me consider that the test might be the more logical place to keep most comments! The test provides a direct example of why we care about the behavior of the source code, and seeing the expected behavior demonstrated lets us contextualize what would otherwise be confusing. For example, I have seen code doing an obviously incorrect thing accompanied by the comment: // THIS IS WRONG but downstream provider depends on this behavior. If we place the comment alongside the test, the warning about the correct-incorrect behavior sits next to one or more concrete examples of that behavior.
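
As a sketch of relocating such a comment, the spec might look like this; `order_total` is an invented stand-in for whatever the wrong-but-depended-upon code happens to be:

```ruby
# Hypothetical spec hosting the warning next to a concrete example.
RSpec.describe "order totals" do
  it "truncates fractional amounts instead of rounding" do
    # THIS IS WRONG but downstream provider depends on this behavior.
    # Correct rounding would give 4; coordinate with the provider before fixing.
    expect(order_total(3.999)).to eq(3)
  end
end
```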

However, I’d like to do better. What if we had a tool that let us map our tests onto our codebase? We already have code coverage tools; could we take the same concept further and make our tests an explicit documentation source? I’m not ready to commit to writing such a tool, but conceptually I think it would enable a test-driven documentation approach, and it might also encourage some of the practices that formed around behavior-driven development.
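
I can imagine a crude starting point built on machinery RSpec already has: arbitrary metadata on examples, collected by hooks into a tests-to-code map. Everything here, including the `documents:` key and the convention it implies, is invented for illustration:

```ruby
# Tag each example with the code it documents (a hypothetical convention).
it "prints 1 if any integer is present z times in the data structure",
   documents: "FrequencyTracker#any_value_with_frequency?" do
  # ...
end

# Invert the tags into a per-method index of the examples documenting it.
RSpec.configure do |config|
  index = Hash.new { |h, k| h[k] = [] }

  config.after(:each) do |example|
    target = example.metadata[:documents]
    index[target] << example.full_description if target
  end

  config.after(:suite) do
    index.each do |target, descriptions|
      puts target
      descriptions.each { |d| puts "  - #{d}" }
    end
  end
end
```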

It would also encourage writing tests for confusing code, which might give programmers the courage to refactor it.

It’s worth an experiment.