I guess the “No True Agile” crowd is as alive and well as they were 10 years ago. When it’s a failure, they always come out of the woodwork to let you know you weren’t doing it according to the orthodoxy. When you point out that it isn’t faster in practice, they gaslight you and claim that was never the point in the first place.
In a way it reminds me of a cult. You do a series of arcane rituals that have no scientific evidence behind them; none of the process is data-driven or statistically proven. Frankly, it can’t die soon enough, but as we all know, some new hype will simply take its place to give busybody managers a reason to exist.
I have toned down my pure agile advocacy in recent years because of what I will call the agile conundrum: agile, as the agile thought leaders prescribe it, is not often followed. True Agile (TM) is rarely practiced, and most places that call themselves agile follow a grab bag of practices that conflict with the thought leaders’ prescriptions. Yet I am often frustrated by the anti-agile arguments because they do not argue for anything. In an effort to raise the discourse, I offer this essay to help them pick an alternative to argue for.
To get to an agile alternative, I think we first need to define agile; otherwise, we are likely to propose an alternative that someone, somewhere would call agile. Part of the trouble with nailing down “True Agile” is that defining agile at all is difficult, but I think I can narrow it down to two camps and three definitions.
As such, I am going to take the following position: there are two sets of practices that are both called “agile.” For the purposes of this essay, I will classify them as “doing agile” and “being agile,” and I use neither term disparagingly. To argue for an alternative to agile, then, is to argue for an alternative to both of these practices, rather than to distinguish between the two. (Otherwise, the simple argument is that if you have experienced one form of agile, the alternative is to experience the other.) While a non-agile approach may also adopt practices that have been adopted by the agile community, that common set of practices is insufficient to call it agile if the mindset is rooted in another origin.
So, what are these alternatives?
Waterfall is given as the only alternative to scrum and the boogeyman that agile replaces. However, as is widely noted, the original paper from which “waterfall” was coined described the approach as “risky and invites failure” if done without iterations.1 That said, I have worked at one company that used gates to proceed to the next phase of development; though that project had integrated testers, there was a separate user acceptance testing phase that certainly looked like waterfall’s typical testing phase. As we have already established this as a strawman, I will speak to a few variations on the theme instead, but suggest that the simple waterfall method is not an alternative. Still, we have a fair number of methodologies that are whole design first. The key acknowledgment from these systems, however, is that the initial plan is not set in stone; it is a living document, and adjustments can and should happen as organizations learn. As most people arguing against agile are not suggesting we move back to three-year projects with no iterative development, I won’t dive too deep here.
If we were to look for a system in practice that looked remarkably similar to the strawman, we can find it here. SSADM, most commonly used in software projects for the United Kingdom government, is a thorough example of whole design first.
There’s no sense in recapping the method when you can go here to read a recap, but the short version is this: if a project is feasible, someone does a thorough investigation of the current system, builds a set of options from which the constraints of the new system are determined, and writes a set of detailed requirements “free from error, ambiguity and inconsistency.” At this point, the architects are let into the room, where they discuss the technical system options and build an extensive logical design. Only after this logical design are the developers let into the room. The developers turn the logical design into code in a stage called “physical design.”
As this was in practice in the UK government, we can look to see how the system works. This report from the UK National Audit Office suggests that even with these guards, objectives of the software are often not clear, schedules are overly optimistic, projects aren’t transparent, and senior leadership doesn’t know about the project being late until it is far too late.
So far, we do not have a ringing endorsement of waterfall.
We can similarly disregard PRINCE2, another UK government effort. It might be worth getting a retrospective out of the Project Management Institute to say what was great about these, but in general they have multi-thousand-dollar certification courses and are complex enough to justify them. PRINCE2 is more iterative in that there are iterations (see below), but it is so heavyweight in its documentation requirements that it is difficult to classify among the iterative environments.
However, if your argument is that we deliver software too frequently and don’t write enough technical design documentation in a multi-month effort to have it reviewed by a gate of architects and middle management, I guess this is a good place to advocate. (And if you’re outsourcing your software to the cheapest bidder, perhaps you deserve this.)
V-shaped software development looks similar in its first progression, but instead of delaying deployment and testing, deployment is not specified and testing is baked into each layer. Mohamed Sani has written an article about it, and I have seen a fair bit written about it, but I have not personally seen any evidence of whether it works or not. It seems to be more common in German government processes and underlies some later implementations of the PRINCE2 model. However, most people who speak about the issues in agile don’t seem to want to move into multi-level extensive testing and documentation environments. If you believe that the problem with agile development is that we don’t design up front enough and don’t write extensive tests at every level, I think this is a fair path forward to argue for. That said, I see more people talking about building testing into the earlier phases of agile than I see this as a common critique.
I think it is safe to state that nearly every new project today is an iterative process in some form. Even when it is possible to gather all requirements up front, people group those requirements into multiple milestones. With software as a service and product engineering, there is often no end to any given project. With that, we are far past the notion of a maintenance phase for any given software in the web world. As such, perhaps we should look to classical iterative development processes?
As mentioned above, iterative systems often look like a pipeline. If you have ever built a product milestone by milestone, you have engaged in iterative development. (Milestones are considered “water-scrum-fall” by many in the “true agile” camp as it means that you have predefined the set of features and are dividing them up without releasing more frequently and learning.)
If you understand RUP, please explain it to me. I’ve tried to understand it multiple times and bailed out. It’s built by the same people who built UML, and the two share a complexity disguised as an attempt to simplify project methodologies. Many involved with RUP bailed to jump on the SAFe train. If you think the problem with development is that we need to build UML for all of our components and that we don’t reuse enough code because it isn’t written in proper object orientation, you might want to argue for RUP. (No, really. Those are two of its six best practices.) 2
I’m cheating, because SAFe considers itself agile. I would most charitably call it an iterative development methodology that uses some agile practices. Many complaints about agile from developers are exactly the issues that SAFe exacerbates with its standardization of practice, heavy meeting system, and coordination around a release train. If you are arguing that the problem with agile is that there isn’t enough process or meetings, SAFe is a defensible position.
The Spiral model is another complex system that isn’t used much today, and is largely a predecessor to many of the agile practices. The idea is that a prototype is built based on the requirements and a preliminary design. The stakeholders are gathered again and everyone looks to see if the risks have been properly addressed. If they have, the system is built. If they have not, the project is terminated or a larger prototype is built to identify successive risks. It also suggests partitioning separate features and building them in their own spirals, which may be built successively or in parallel.
Conceptually, it is interesting as a place to start if your argument is that we need a better system for developing large software but that we should continue to think of software as products: a root from which we can grow an alternative solution. I’d also look at this report, which has some notes on spiral in the field, if you are looking to start your argument here. If your argument is that we need to throw more software away, the spiral model is a good place to start. 3
As agile has sucked up the project process oxygen, we haven’t seen many new development methodologies that don’t start with agile as an assumption. That said, there are a few worth talking about.
The PMBOK today has a lot of nods to agile, but it does speak to predictive methodologies. The predictive methodologies look a lot like waterfall practices to me. If you are looking to build your own replacement for agile, working from previous best practices, it may be valuable to start with the PMBOK to determine your points of agreement and disagreement. If you’re just looking to complain, or are advocating for another model, this may be a waste of time.
37signals, the team behind Hey.com, Rails, and Basecamp, built their own product methodology, Shape Up. The basic idea is that there are bets every few weeks (Basecamp uses six-week cycles) and all planning goes back to the drawing board each cycle. A product can be discarded, or re-invested in, if there is a failure to release within those six weeks. There is a chance to pitch ideas every six weeks, so there are no backlogs. This is deadline-driven development, but with weeks of cooldown in between, which is a great way to coax deadlines past sustainability yet still claim to be sustainable. However, if your argument is that this is better because developers are left alone for six weeks at a time, this may be a good place to argue from.
On the other end of the spectrum, we have Zed Shaw’s rejection of all methodologies. This “methodology” suggests that the only thing that matters is programming itself. Product, design, and management don’t matter. Customers don’t matter. Any methodology that gets in the way of the programmer must be “destroyed,” as the manifesto states.
There is no importance placed on programming the right thing in this methodology. A charitable reading would say that this is satire and not a proper methodology, but nothing about Zed Shaw’s history online shows any evidence that it is anything less than earnest. If you think that the only thing stopping great software from being developed is that your users are too stupid to appreciate your great program, this may be a good place to start.
And we get back to the original comment.
The alternative is to just do what makes sense, without calling it anything. Junk shit like what scrum has become, for example not discussing technical stuff during standup (why the fuck not? that’s what our job is about), artificially splitting stories into 2 week boxes (why the fuck should I break down something that isn’t naturally breakable?), why create a card for every thing I’m working on (fucking control freak heaven), etc. It’s completely juvenile, gaslighting, cargo cult scientology.
Continuous integration => of course, for me, never worked at a place where it didn’t happen and that was before agile was even a thing. If I hopped on a project where it didn’t make sense though, I should be free to ditch it.
There is no one way to write software that works for every project and there is no one process that works for every project. Just let the experienced folks decide what to use instead of leaning on some bullshit fixed crap process.
This is an appeal to “common sense” that looks similar to Zed Shaw’s. Every project is its own snowflake, and each company is beholden to its most senior members and their intuition. We make no effort to coalesce best practices and instead intuit from first principles and previous experience.
This is a good way for a profession to be steamrolled into death marches. A set of agreed best practices, informed by collective experience and including things like “sustainable pace,” has a chance of building case studies and showing that great software can be developed under humane conditions.
That said, ad-hoc methodologies may be valuable in some cases: where the system being built is entirely novel, or when teams are very small and the project can be held in the heads of all participants. Further, I want to clarify that this does not mean there is one best methodology, or that every project needs to be standardized on the same methodology, agile or otherwise. However, most of the places where agile is criticized are systems that are not free from these constraints.
But when I see people complaining about agile, this is the most common alternative suggested. I can accept the argument that we need a post-agile methodology, and I can accept that neither form of agile is as good as some other methodology out there, but now I can at least give you, dear reader, a list of alternatives to argue from when I ask, “if not agile, then what?”
Thank you to James Carr, Mark Bastian, Curtis Warren, Hans Gerwitz, Eitan Adler, and Jim Hardison for initial eyes and comments.
This is a vast simplification. As Eitan Adler pointed out to me after the first version was released, Bell and Thayer 1976 referenced the Royce paper, and they were the ones who mentioned waterfall. Bell and Thayer attribute waterfall to Royce even though the term wasn’t used in his paper, and they only briefly touch on iteration through the development cycle. Further, according to Wikipedia, Barry Boehm mentions two previous papers by Benington and Hosier that had “good approximations to the waterfall model.” Nevertheless, the key players all pointed to the necessity of iteration in software development, which was the original point. As Adler points out, “Royce was an employee of TRW, a government contracting agency working on aerospace amongst other things. At the time this was written Royce was simultaneously discussing what was actually being done and also suggesting better ways to run projects.” Thanks again to Eitan Adler for this discussion! ↩
As an interesting aside, Dr. Winston Royce, of the 1970 paper referenced above, had a son named Walker Royce. Walker Royce was employed by IBM’s Rational division and was employed by TRW at one point. IBM’s page called him “a principal contributor to the management philosophy inherent in the IBM Rational Unified Process.” ↩
As another interesting aside to the aside: Winston Royce, Bell, Thayer, and Boehm were all employed at TRW at some point in the 1970s. It’s also possible that waterfall as a term came out of informal conversations. But Bell and Thayer cited a Boehm study, and Boehm himself is responsible for the Spiral Model paper. I would posit these three papers were checkpoints along the way of TRW building its internal processes. Software process geeks and agile alternativists would do well to study this evolutionary line. ↩
As the cost of compilation decreased and we moved to more iterative forms of software development, our standards of documentation became more lax as well. Some practitioners in the agile space went past “working software over comprehensive documentation” to “an agile document is just barely good enough” as Scott Ambler writes.3 When I encountered my first truly monstrous documentation project, I fully agreed. When I was at Visa International in 2011 as a contractor, the lead for the project had written over 100 pages as a design document and there were weeks of meetings by business analysts to document every requirement that would be referenced in a UAT system. We had a parallel process that was more agile to get those projects out, but it still took significant work to create such comprehensive documentation as was standard for the process.
At Zipcar, by contrast, knowledge was shared from person to person for shared code context. If Visa had too much, Zipcar had far too little. There were entire swaths of code that fewer than three people understood, and they weren’t touched unless absolutely necessary. Even on new development, there were times where the only answer was to ask the person who developed it or to dig in and figure it out. 4 This was efficient when developing a feature in a familiar code base, but very inefficient when changing the system down the road. We tried to build a few sets of documentation along the way, and by the time I left we had some unevenly dispersed documentation.
I then went to OneStudyTeam (then known as Reify Health), where I saw a fully fledged, functional documentation culture. I remember asking in my first month whether some behavior would be documented, and a developer came to me in a 1:1 and told me, “Relax. We document everything appropriately and you don’t have to remind us on every ticket.” This wasn’t fluff: it was true, and it had been for years. They had successfully defined a documentation culture. Conversations in chat asking for clarification on how things worked were often redirected to existing documentation. The harder problem wasn’t that things weren’t documented; it was that documentation wasn’t always updated, and there was sometimes duplicate documentation because it wasn’t easy to search. Even so, I personally saw it save more time than it took to write.
The keys to OST’s documentation culture, in my view, were that there was a cultural expectation to document what you changed, there was existing prior art showing what good documentation should look like, and there was immediate value: people could see the value in their first 30 days as they read the existing documentation. This work, alongside pair programming sessions and a culture of helping others, allowed developers to get up to speed quickly on any codebase in the company.
There is a basic assumption that most developers hate writing documentation. In my experience, this isn’t true. They don’t mind it, but they often don’t feel like they do it well, don’t feel like they are given the time to document effectively, and don’t see the value when they try. Assuming you have organizational buy-in, the best way to make it feel effective is threefold. First, build a proper framework on which you can attach the documentation for discoverability. Next, build examples of a good documentation style. Finally, build documentation into your definition of done at the feature level.
Building a documentation culture takes time and effort, and you need to ensure that you are going to have an organization that believes this is an important initiative. Senior leaders (such as your CTO or VP of Engineering in a smaller organization, or a department director in a larger org) have continually competing priorities, and before building an effort to create change, you need to make sure that this is change that aligns with senior leaders’ understanding of current pain points.
Documentation has a return on investment, in my experience, but there are no wide-scale studies of which I am aware. However, senior leaders can also point you to the pain points they have, which they may see as higher priority. If your vision does not align with theirs, your work will not be appreciated and your effort won’t bear fruit. I would start with this article on how to get buy-in to effect change, as it’s a large topic and outside the scope of this work.
Assuming you have organizational buy-in, the first piece of work is to build the framework on which your team’s documentation will rest. I feel like this is a hard thing to do perfectly, but having a bad documentation infrastructure is still better than having none at all. People want to feel like there is a good place for their documentation to live. I see two major options: the first is wiki-based software such as Notion or Confluence (which seems to have its hands everywhere, even if nobody likes it), and the second is source control and markdown.
The advantage of source control is that you have less flexibility in how to construct your documentation structure, as you are primarily in a tree format. This sounds like a disadvantage, but half the battle of documentation is knowing where to put it in a way that is searchable. The actual disadvantage is that you will occasionally provide documentation for multiple audiences: if your audience doesn’t use GitHub or your source control system of choice, they will not feel confident searching your documentation. This is one place where Backstage and TechDocs can help. If I were starting a documentation effort from scratch today, I would use TechDocs. That said, the key is to make a decision and start rather than get stuck in analysis paralysis. If Confluence is the only tool available, use it.
From there, I think it is important to speak to three types of internal documentation: people, process, and technology.5 The key thing to consider here is that your people documentation, and some of your process documentation, will likely come from people other than engineers, and process documentation may be split between systems. People documentation is largely HR-focused and often created in Microsoft Word, and you don’t need to have “how to request vacation” in the same place as other internal documentation. However, you should consider a single source of truth so that documentation doesn’t become bifurcated. Similarly, process documentation within engineering will mostly be built by managers, with some exceptions, but it will often interact with technical documentation. It is also easy to bifurcate, but you need to be more careful here, because any choice forced on a documentation writer is an opportunity for analysis paralysis and a stop to writing the documentation.
It is worth splitting the process into two types: there is process at an engineering department level, and there is process at a project level. Process at an engineering level should be grouped in one area: it will need its own space. However, projects and teams often have agreements that are separate from the engineering department: this is particularly common in agile teams with team agreements.6 The important part is that you need to have a space for this information, even if a team doesn’t use it. If you don’t have a space, it can be conflated with the technical documentation, making it harder to find that documentation.
Technical documentation, similarly, has department, vertical, and team implications, and may not match your project repository structure. Where it makes sense, split these into their own spaces within the larger tool. In a source control repo, it makes sense to have a separate repo for cross-functional concerns. This may hold holistic documentation like Architecture Decision Records, and also higher-level architectural designs. (As an aside, C4 is probably the best option here for larger organizations, but I still find it slightly complex for many use cases. I like Event Modeling, for as long as you can get away with it, along with a high-level architectural diagram. Event Modeling does not need to correlate with event sourcing in all cases.) The important piece here is to make a decision that matches your team topology.7
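To make the Architecture Decision Record idea concrete, here is a minimal sketch in the common Nygard style. The repo path, file numbering, and the decision recorded here are invented for illustration; adapt them to your own cross-functional repo.

```shell
# Sketch of a minimal Architecture Decision Record (Nygard style).
# The path, numbering, and decision content are illustrative only.
mkdir -p docs/adr
cat > docs/adr/0001-use-event-modeling.md <<'EOF'
# 1. Use Event Modeling for high-level designs

## Status
Accepted

## Context
We need a lightweight, shared notation for cross-team designs that
non-architects can read and contribute to.

## Decision
Describe cross-team workflows with Event Modeling diagrams, paired
with one high-level architecture diagram per system.

## Consequences
Designs stay approachable; we accept less rigor than a full C4 model.
EOF
```

The value of the format is less in the template than in the habit: each record captures the context and rejected alternatives that code and unit tests cannot.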
For this documentation, consider creating a minimal documentation portfolio that specifies the documentation needed for project stakeholders and team communication. This documentation should serve the team rather than serve the project or organization and may be as simple as team onboarding materials, decision records made by the team, and copies of chat threads minimally cleaned up.
These are decisions that should start with a very small group and be presented later, or you will end up with analysis paralysis. It is better to start with something wrong and adjust than to get caught up in details. So long as you have a folder structure that roughly matches the topology, you can treat this documentation like a large sandbox to start. This sounds complicated, but there is no effective prescription here because team topologies are not standardized in this industry.
The final piece of documentation is long-term project documentation. I am a strong proponent of the Diátaxis framework. Diátaxis identifies four kinds of documentation: tutorials, how-to guides, technical reference, and explanation. Rather than dive too deep here, I’ll point to their great advice on how to use it. This documentation can often live in your current repositories, but that depends on your team topology.
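A docs-as-code layout following those four categories can be bootstrapped in a few lines. The folder names mirror the Diátaxis categories; the README index is my own convention for discoverability, not part of the framework.

```shell
# Sketch of a Diátaxis-aligned docs/ tree inside a source repo.
# Folder names follow the four Diátaxis categories; the README
# index is an illustrative convention.
mkdir -p docs/tutorials docs/how-to docs/reference docs/explanation
cat > docs/README.md <<'EOF'
# Documentation
- tutorials/    learning-oriented, step-by-step lessons
- how-to/       task-oriented recipes for specific goals
- reference/    information-oriented facts: APIs, schemas, configs
- explanation/  understanding-oriented background and rationale
EOF
```

Having the skeleton in place before anyone writes a page answers the “where does this go?” question that so often stops documentation before it starts.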
The second step of building a strong documentation culture is to build a pattern. Just as senior engineers build patterns that other engineers are responsible for following, it is important to build examples of the documentation you expect others to follow.
I suggest taking these from external projects. I point to Backstage as a great place to start. While it is external documentation, it is meant for developers and is quite copyable. Their getting-started documentation is a great example of a tutorial. Their build system is a good example of explanation. They have a good example of bespoke API documentation, though you may be better off producing that with a specialized tool like Postman. For how-to guides, I will point to Indeed’s how-to guide on how to build how-to guides.
I have seen larger organizations provide style guides, but there is no good generic style guide I know of other than the suggestions at the end of each Diátaxis page. People will follow the guidelines you show as an example long before they internalize a style guide. What will be profitable, though, is to have templates for people to start with.
Good patterns and templates will not by themselves produce good documentation; developers learn to write better documentation over time. The key is to be supportive as developers write their first documentation, knowing that successful documentation will be revised and refined as the project progresses. Unless a draft is entirely incomprehensible, enthusiasm will trigger better feedback loops than critique.
Finally, once you have some good examples of documentation, the key is to build it into your process. This will require support from your champion more than the previous steps: this is where change actually starts. While in the long run, the return on investment in documentation will be high, it takes a lot of work to build those organizational muscles and to build the documentation corpus.
A common strategy in organizational change is to start with new initiatives, going back to previous work only as it changes or on a risk basis. (For example, if the critical billing code is only understood by two people in the company, that’s a key risk to consider remediating with more than documentation!) It helps to start with a single team that feels the same pain point and can serve as a pilot for gathering progressive buy-in. The most important contributor to success is to build documentation into the process. If documentation is an expected output for every piece of work, it will become habit. While I would never suggest that every feature or ticket needs substantive new documentation, the default should start as “on” until the right balance is found. (If you have documentation debt, every feature is an opportunity to address the areas surrounding what is being touched.)
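One lightweight way to make documentation an expected output of every change is a pull request template with an explicit docs checkbox. The path below is GitHub’s convention; the wording is only a starting point, and other forges have similar mechanisms.

```shell
# Bake documentation into the definition of done via a PR template.
# The .github/ path follows GitHub's convention; the checklist
# wording is an illustrative starting point.
mkdir -p .github
cat > .github/pull_request_template.md <<'EOF'
## Summary

## Documentation
- [ ] Docs added or updated for this change
- [ ] No docs needed (explain why below)
EOF
```

The second checkbox matters as much as the first: it keeps the default “on” while giving teams an explicit, reviewable way to opt out when documentation genuinely isn’t needed.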
Further, use new team members to drive documentation needs. When people are approached to explain the system, those are key inflection points for remediating documentation debt. At the same time, be brave enough to say that documentation has a cost, both in writing and in upkeep: do not write what is unnecessary, and archive or delete documentation that has fallen out of date.
There’s a cumulative wisdom in the industry on how to build technical documentation that hasn’t fully coalesced, but I am hoping that this gets people on the right track as they build their own documentation cultures. I didn’t touch on greater themes of change management and team buy-in, as these are better covered by this HBR article or this one. Once you have reached the collective buy-in, these are tools that I have seen help documentation efforts be successful.
Thank you to James Carr and Geoff Gallaway for initial eyes and comments.
By “our industry,” I speak of software development in general, and by “documentation,” I speak in particular of internal documentation, the work we generate to explain our work to other practitioners. While I think this may also be valuable to consider in open source software, I am speaking in particular about software designed in a corporate context. ↩
Richard Waychoff’s descriptions about those around the Burroughs B5000 mention an automatic programmer named Toni Schumann who facilitated arranging for computer time and getting programs keypunched. The gender issues there are for another blog. ↩
The argument many agile practitioners make is that your unit tests should be your executable specification and that your time is better spent writing code that is self-documenting. While I agree that code, at its best, is self-documenting and that a working unit test suite should be an executable specification of your system, unit tests explain the code at the smallest possible increment and do not explain why. They do not explain rejected alternatives. They do not explain why some code uses pattern A and some uses pattern B. Is that because of an incomplete refactor, or because of a limitation in an upstream library? If I start to refactor this code, will I get three days in and realize why the last three people have tried and failed? That said, Scott Ambler is a good representative of the counterargument, and the balance is worth understanding. ↩
Incidentally, this led to my most epic debugging session of my career. We had a new product for which I came on to the team. Nobody could figure out this billing issue that people would randomly be charged hundreds of dollars more than expected. This led me down the rabbit hole to five different code bases in four different programming languages and pairing with seven separate developers from five teams and eventually my manager, when we realized that one set of code was using “0” to encode when a state was changed, and the billing system was trained to ignore “0” because it encoded a separate state change that was normal in operation. There was no documentation in any of the scenarios that would alert to the situation. ↩
Apparently, PPT comes from a 1964 paper called “Applied Organization Change in Industry” by Harold Leavitt. At the time, Structure and Tasks were split out into separate categories and have since been consolidated to process, and this work by Becky Simon for Smartsheet has some interesting context in the case you’ve heard this before. ↩
This is not a particularly great example, but most team agreements are minimal anyway. ↩
https://teamtopologies.com/ is where I found the term and it is described by a good book worth reading if you are in management. However, I mean to use this term more loosely. ↩
I’ve gone through a fair number of productivity frameworks in my life, and I’ve read a fair number of the books at this point. So often it feels like the people writing the books are solving their problems and their clients’ problems, but their clients often have commonalities. As such, productivity books have an overlapping set of problems but trying to figure out what works for you comes down to trying to find which book comes closest to your use case.
I think we can do better with a Lego-block approach to productivity management. If you can dissect the problems of productivity management to a set of classifications, you can combine a set of approaches that make sense for your use case rather than wholesale attempting to adopt a program that may or may not fit your use case.
For example, Getting Things Done is great for people with many projects, but it is truly overkill for people who have two or three work projects at any given time. For someone with low focus, a tool like Autofocus is fantastic. I have recommended it to many midlevel engineers.
However, my focus is largely on programmers, because those are the people I have coached most often. As such, I take for granted that the people I work with can break a larger problem into smaller steps, something that people in other disciplines may not have formal training in. Some people may therefore need a system that coaches those types of behaviors. This is but one example of the need for a building-block approach.
However, to get to these building blocks, we need to first lay down a set of assumptions on which we rest. What are the set of values that all productivity management systems seem to share?
With that, we can break our building blocks into these. To use this “system,” identify the problems you have today and look at the most pressing problem for you to attack first. Then, start to build a habit around that portion of your productivity management system. You can use many of the tricks behind Atomic Habits to consider integrating them into your system.
The building blocks I have identified are:
Future work will start breaking down these building blocks and help identify sample patterns that others have written about.
- Insert x in your data structure.
- Delete one occurrence of y from your data structure, if present.
- Check if any integer is present whose frequency is exactly z. If yes, print 1 else 0.
This was in regard to an imaginary instruction set for which I had written a switch statement. In production code, I would have immediately seen this as a code smell. After all, if it was that difficult to remember, that should be an instant trigger to use descriptive variables and refactor to method names with appropriate tests. With that in mind, I went through and started thinking about what the proper names would be, and whether I could make them descriptive enough that I could come back six months later and know what was going on without having to look elsewhere. In this example, we can point to the HackerRank specification. However, there is a train of thinking that argues documentation is an anti-pattern.1 So, with that in mind, what would code without a textual specification look like?
Let’s look at the names first. We could keep it simple and suggest insert, deleteOneOccurrence, and queryFrequency, but queryFrequency is a poor name here. We are checking whether any integer is present whose frequency matches the input; that is, if there are six 8s, the query 3 6 returns yes. isAnyIntegerPresentWhoseFrequencyExactlyX, though more descriptive, might be difficult to parse without context. While completely contrived as an example, we have all reached places where we racked our brains to think of just the right name.2 Imagine trying to explain a trie if it had not already been named!
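To make this concrete, here is a minimal sketch of the structure under discussion. The class and method names are my own suggestions, not from any published solution; the second hash tracks how many values have each frequency, which keeps the membership check cheap.

```ruby
# A sketch, not a reference solution: the names here are my own suggestions.
class FrequencyQueries
  def initialize
    @counts = Hash.new(0)  # value => how many times it has been inserted
    @freqs  = Hash.new(0)  # frequency => how many distinct values have it
  end

  def insert(x)
    @freqs[@counts[x]] -= 1
    @counts[x] += 1
    @freqs[@counts[x]] += 1
  end

  def delete_one_occurrence(y)
    return if @counts[y].zero?  # nothing to delete
    @freqs[@counts[y]] -= 1
    @counts[y] -= 1
    @freqs[@counts[y]] += 1
  end

  # Is any integer present exactly z times?
  def any_value_with_frequency?(z)
    @freqs[z].positive?
  end
end
```

Even with these names, the code answers “what” but not “why”: nothing here explains why we care about frequencies of integers in the first place.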
Test-driven design advocates would laugh: “Why don’t you just check your executable spec, the well-written test?” That’s a great idea! In something like RSpec, I have the option of a well-written specification like it "prints 1 if any integer is present z times in the data structure" and the converse it "prints 0 if no integer is present z times in the data structure".
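As a runnable sketch of that executable specification, here I use Minitest’s spec DSL (in Ruby’s standard library) as a stand-in for RSpec, with a deliberately minimal data structure of my own invention so the example is self-contained:

```ruby
require "minitest/autorun"

# Minimal stand-in for the structure discussed above; names are my own.
class FrequencyQueries
  def initialize
    @counts = Hash.new(0)
  end

  def insert(x)
    @counts[x] += 1
  end

  def any_value_with_frequency?(z)
    @counts.value?(z)
  end
end

describe FrequencyQueries do
  before do
    @structure = FrequencyQueries.new
    3.times { @structure.insert(8) }
  end

  it "prints 1 if any integer is present z times in the data structure" do
    _(@structure.any_value_with_frequency?(3)).must_equal true
  end

  it "prints 0 if no integer is present z times in the data structure" do
    _(@structure.any_value_with_frequency?(4)).must_equal false
  end
end
```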
That’s… more helpful. However, this is a novel data structure that is inherently confusing at first glance. What we might need here is “What is the functional reason we care about a set of frequencies in integers?” Our specification is documentation, but it may be incomplete documentation.
Great practitioners in great organizations might take care to follow domain-driven design, which gives us shared, predefined, and more articulate terms for naming in our codebase. Even so, we will often run into cases where the meaning of the code is hard to express in short bursts, and times when we cannot yet find concise abstractions. Our commenting practices cannot be determined by perfect practices, because we are not perfect programmers.
There is also the contrarian view by John Ousterhout that “in-code documentation plays a crucial role in software design” and that “inadequate documentation creates a huge and unnecessary drag on software development.” I am more sympathetic than most to this argument; after all, I started this piece with a comment. However, it simultaneously makes me consider that the test might be the more logical place to keep most comments, as that is where the specification itself lives! The test provides a direct example of why we care about the behavior of the source code, and seeing the intentional byproduct allows us to contextualize what may be confusing behavior. For example, I have seen code doing the obviously incorrect thing accompanied by the comment: // THIS IS WRONG but the downstream provider depends on this behavior.
If we place the comment alongside the test, we will have an explanation of the intentionally incorrect behavior next to one or more examples of that behavior.
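As a sketch of what that might look like: the billing function and its truncation rule below are invented for illustration, and the test is written with Ruby’s standard-library Minitest.

```ruby
require "minitest/autorun"

# Hypothetical production code: truncates partial minutes.
def billable_minutes(seconds)
  seconds / 60 # THIS IS WRONG, but the downstream provider depends on it.
end

class BillableMinutesTest < Minitest::Test
  # The "why" lives next to the example: the (hypothetical) downstream
  # provider expects truncation, so 119 seconds must bill as 1 minute,
  # not 2. If the provider is ever fixed, change both sides together.
  def test_truncates_partial_minutes_for_downstream_provider
    assert_equal 1, billable_minutes(119)
  end
end
```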
However, I’d like to do better. We already have code coverage tools. Could we map the same concept to allow our tests to become a more explicit documentation source for our source code? I’m not ready to commit to writing such a tool, but conceptually, I think it would enable a documentation approach like this, and it might also encourage some of the practices that formed around behavior-driven development. This tool could also augment big-picture documentation if done properly.
It would also encourage writing tests for confusing code, which might give programmers the courage to refactor it.
It’s worth an experiment.
Addendum 2023-01-05: It turns out that the D language has something that would enable this very well. Its built-in unit test framework turns unit tests into examples, includes their comments in the generated documentation, and keeps the unit tests connected to the code. This looks like a wonderful addition. Also, doctest in Elixir’s ExUnit looks like it is built with a similar mindset.
Let’s be fair: we cannot treat a tweet as a full argument, and I am sure there is nuance that wouldn’t be captured. However, the extended argument seems to be that teams should be using mob programming and practices such as the naming suggestions in Clean Code to ensure a full and shared understanding, such that further documentation is unnecessary. Although I am a proponent of mob programming, I do not think it absolves the need to write proper documentation, because we are humans who have blind spots. Code seems obvious at the time it is written; it will not necessarily be obvious years later when the original writers have left. Even though documentation can age poorly, a record of original intent is often as valuable as what the code is doing in the present. That’s for another blog. ↩
If you are interested in better advice about naming variables, Arlo Belshee wrote the best work I’ve seen on the subject. Belshee might argue that an extremely long name here would be appropriate until the team can find a better abstraction in general. ↩
However, many small and medium-sized shops are still struggling with documentation, and there are some key insights from the book that we can take away.
Although he proposes the term, I suggest that this may need another name, as the term is also used in education for documenting the development and improvement of student learning, which makes searching difficult. Still, this concept was the most interesting idea I hadn’t encountered before. In the age of wiki documentation (something suggested by this book), we run into the problem that we create documentation ad hoc. That is, we find something we need to document and tie it to the project page, and we rely on search or institutional memory to find it later. Certainly, ops departments are further along than development departments, because there are established patterns of runbooks, support knowledge bases, and operations manuals. Unfortunately, the design documentation patterns of early software development turned out to have a low value-to-effort ratio. This leads to the key point of the first section: projects should determine what documentation belongs in their documentation portfolio, and the documentation should focus on long-term relevance. Rüping argues that each project should define its documentation requirements individually, broken down by
He argues that the documentation, and its organization, should be re-evaluated on request. This follows from the unstated agile principle of preferring local control at the team level over departmental or company control of standards. While I think there may be value there, I also think there is room for overall consistency: many projects or product teams can start from a baseline and evaluate it for their project, taking established departmental patterns into account. As such, I think there is value in building a default set of document artifacts and patterns to consider, which can then be adjusted as makes sense.
The book does not have a real set of documentation patterns at the “what are the types of documentation” level, contrary to what the subtitle would imply. (The book is subtitled “A Pattern Guide to Producing Lightweight Documents for Software Projects.” Its organization follows a Problem/Forces/Solution/Discussion pattern similar to the Coplien Form, and it is written to be used as a pattern language.) However, it does give some categories of documentation alongside examples of each type:
The author then says that these are well known from software engineering in general. While I appreciate that not everyone needs to reinvent the wheel, I think there would have been higher value in devoting an entire section to where and how to use these. The frustrating piece of how our tools interact today is that these are captured in systems that don’t communicate in a way that tells a coherent story and are hard to cross-reference, but that’s for another blog.
The author has a great pattern about having a “big picture” document that I found valuable. Too often, we get mired in the details when looking at documentation, but the first thing we need as someone new on a project is an understanding of the big picture. That often falls to face-to-face interaction, and while there is value in face-to-face interaction, it will not help when someone needs to modify the project in five years. That interaction would be better spent as a set of questions after an asynchronous introduction. Further, many of the technical debt issues we face in older code bases come from trying to reconcile current usage with original intent, often called software archaeology. The big picture is the key detail missing from many of these efforts, as it is either lost in institutional knowledge or filtered through later efforts, and those filters can make understanding original intent difficult.
The author has much to say about the rule of 7 (that you should limit your number of sections and sub-sections to 7 +/- 2 in order to enhance understanding) and about having sequential yet well-structured text, but I think these points are both obvious and vague. Instead, I am drawn to something from an experience report. Too often, our documentation is a wall of text. Instead, we can have a short summary, a diagram, and details broken up by enumeration and whitespace. While I cannot link to it (as I don’t think it would quite be fair use), I think DVC has a good example of end-user documentation in its agenda section. It has a simple summary, a diagram that explains the steps concisely, and the rest of the usage document is well spaced and easy to read. This is a great example of what many of our onboarding documents could look like. (I have never used DVC, but I admire its documentation at first glance.)
However, I have seen many examples of entirely unstructured wikis. If we can think through a portfolio and align our wikis to read more as full-fledged documents than as semi-organized snippets, I think we can improve the efficacy of our documentation with only a small bit of effort. What, then, do we do with all of those pieces of valuable information that defy organization? I think there is value in separating those out into a “junk drawer” of the project so that the valuable portions of the wiki can be more pronounced.
I wouldn’t recommend this book to most people in 2020 unless they are in a deep dive of trying to reform documentation. However, I have yet to find the right book as a guide, and there is value in looking through older efforts to think through this difficult problem. There were some valuable takeaways, and its list of documentation types is still relevant in determining how we should define our documentation portfolio through the phases of a project.
While I think there is great overlap in learning techniques, we aren’t being tested on what we know on a daily basis. Getting better as a programmer looks like it has little overlap with a recital, game, or test, so we need to account for that in the way we approach learning or refining our craft. However, there’s a giant exception to this: the technical interview.
The irony of the technical interview is that it is performative in a way that programming rarely is. (Pair and mob programming, along with whiteboarding sessions, are major exceptions.) This tells us that we should treat getting better at interviews as a different skill than programming itself. For someone preparing for technical interviews, it means breaking down the performance components of technical interviews and training them separately. For someone evaluating technical interviews, it means facing a dilemma: are we evaluating a candidate’s ability to program, or their ability to train in the skills of technical interviewing?
Others have encapsulated the problems of coding interviews better than I have, but I have seen that most people have given little thought to practicing technical interviews using the research. Cracking the Coding Interview, for example, gives an algorithm for running through a technical problem. However, it gives little other than drilling the problems themselves. Elements of Programming Interviews in Java gives a nice set of strategies, but again, relies on drilling as the preparation method.
Here are a few observations from both sides of the interviewing table:
When people trip up, it is often because they are not used to verbalizing the problem and walking the interviewer down their path of reasoning. This is obvious: it’s hard to present a line of reasoning when we don’t know it ourselves! It tells us that we should practice verbalizing our thinking while we try problems, and that we should treat it as a separate skill. To prepare, take FizzBuzz-level problems to start, and practice verbalizing your algorithm to solve the problem, smiling, and engaging with an imaginary interviewer.
Listen to interviewing.io exercises and think about how you would discuss the same problem. Then, watch the interview and actively engage with how you would revise the same explanation as you watch someone else.
In piano, there is a practice concept that you can learn each hand separately and then put the hands together. There is also a practice concept that you should not practice the easy parts, but repeat the hard parts instead. As such, break down all of the individual components of an interview, and drill the parts you have difficulty with. For example, I can take the first ten minutes of an interview problem (the panic of reading a problem, coming up with test cases, and running through questions to make sure I understand the problem) and go through that repeatedly until it doesn’t make me nervous.
Your programming language has some basic classes that are commonly used in interviews. Strings, hashes, and arrays are very common. Take time to memorize their methods in particular.
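In Ruby, for instance, a handful of String, Hash, and Array methods cover a surprising number of interview problems. The selection below is my own, not a canonical list:

```ruby
s = "racecar"
s == s.reverse            # => true: the classic palindrome check
s.chars.tally             # => {"r"=>2, "a"=>2, "c"=>2, "e"=>1}: character counts

nums = [3, 1, 4, 1, 5]
nums.sort                 # => [1, 1, 3, 4, 5]
nums.uniq                 # => [3, 1, 4, 5]: drop duplicates, keep order
nums.each_cons(2).to_a    # => [[3, 1], [1, 4], [4, 1], [1, 5]]: sliding window
nums.group_by(&:odd?)     # => {true=>[3, 1, 1, 5], false=>[4]}: bucketing

counts = Hash.new(0)      # a default value avoids nil checks when counting
nums.each { |n| counts[n] += 1 }
counts                    # => {3=>1, 1=>2, 4=>1, 5=>1}
```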
These problems can be thought of as Lego blocks, as Zach Evans talks about within the piano sphere. If you can deconstruct the problems such that you don’t need to think about the mechanics of constructing the actual program, you can spend more time dissecting the problem, confident that you will have time at the end to finish it.
The three most effective strategies I have found in learning are interleaved practice, distributed practice, and the power of consistency. (I would like a link for the power of consistency, but it seems to have become such common sense that nobody questions it.) Spaced repetition is a learning superpower, but I haven’t found a way to make it valuable in this context.
As I deconstruct this myself, I will post the different skills we can practice while interviewing, breaking them up into chunks we can learn, then put it all together. My goal is to place them in the same structure as a piano practice.
In the end, I am far more interested in helping others become better programmers than better interviewers, but understanding the differences between the two helps us to focus on what makes us good at each.
Agile Documentation: A Pattern Guide to Producing Lightweight Documents for Software Projects - Rüping, Andreas
I will get to a full review later, but like many of the other early agile books, that which is valuable has been accepted as common sense - most of the common commercial wikis turn sections 3 and 4 into “use a commercial wiki, set up common templates, and observe some basic organization principles.” I liked a lot about what the book said about how to create a documentation portfolio, but I think that the real missing link is a set of documentation patterns from which to draw. There is room for a new book here.
Applying UML And Patterns - Craig Larman
This is primarily a book about object-oriented design, even though it takes a UML-centric focus. UML was given as one of the primary ways of documenting in the 90s, and that persists today, but between the tools we have to build it and the practicality of the documents it produces, it seems to have limited use. I think it’s worth thinking about why.
Software Architecture in Practice - Len Bass, Paul Clements, and Rick Kazman
Chapter 18 is dedicated to documenting software architectures, and is worth reading about within a larger context of documentation and architecture.
RESTful Web APIs - Leonard Richardson and Mike Amundsen
This is about the API layer, and is mostly a book on hypermedia, but it has a documentation component and as such is useful as a comparison to OpenAPI / Swagger approaches.
Domain Driven Design - Eric Evans
Again, this is not a book about documentation, but the idea of domain-driven design dovetails into documentation, and it spends some time thinking about documents. I think the concept of the Domain Vision Statement will factor into more general patterns.
Thinking in Systems - Donella Meadows
Part of the problem of documentation is that our systems don’t encourage good documentation. Understanding systems and building feedback loops helps us think about how to build a system that builds good documentation.
Diataxis Framework - Daniele Procida. This is an incredible content framework for developer-facing documentation, whether internal or external-facing. At OneStudyTeam, we used this to provide clarity in the types of documentation we developed, and we were moving to Techdocs to codify it further.
Write The Docs guide - Write The Docs. This is an incredible resource to dive into what is necessary.
Technical Documentation in Software Development: Types, Best Practices, and Tools - Altexsoft. This is a long set of information about the types of documentation that larger systems require. It’s valuable as a set of targets to have.
18 Software Documentation Tools that Do The Hard Work For You - Process Street. Some tools that have already been developed to help software documentation, for a broad idea of “developed.”
Building better documentation - Atlassian. This is Atlassian’s guide to good documentation. This barely made the cut, as it’s rather sparse, but I liked its six step process as an example of the problem of how we describe documentation.
Why agile teams should care about documentation - Tom Thompson. The value is in the “Making agile and documentation work together” section below - it starts to consider what it looks like to build a system that values documentation.
Core Practices for Agile/Lean Documentation - this is in the “write as little documentation as possible” camp, and it’s both rich with good and bad ideas. It feels like they are fighting against “documentation from above” and low-value documentation, but don’t really have an idea of how to build good ones. The idea that “documentation is a business decision” that he pulls from XP is thought provoking, but while the
Agile/Lean Documentation: Strategies for Agile Software Development is another from Scott Ambler (see above) with a similar perspective. Under Table 1, it shows potential documents to be created by the development team. The documents that he shows as more valuable are precisely the ones that have the least guidance. How can we build better internal documentation if we don’t have patterns for that?
10 things you can do to create better documentation - Alan Norton. This is more of a set of tips to keep in mind, but can inform a larger effort.
Internal Documentation 101: A Simple Guide to Get You Started - Josh Brown. This is a fantastic article that gives the steps to kickstart internal documentation efforts and informed the article on how to build a documentation culture. They are trying to sell you software, but it is translatable to other options.
API Documentation articles / blogs
Documenting Your Existing APIs: API Documentation Made Easy with OpenAPI & Swagger - Smartbear. This gets into how to use Swagger, the most common form of API documentation.
How to Build an Effective Support Knowledge Base: Everything You Need to Know about Documentation - Jess Byrne. It’s a good introduction to user-facing support documentation. This doesn’t quite match what is needed for an internal support knowledge base, but there is overlap.
How to Write a Killer Operations Manual [5 Easy Steps] - Tallyfy. Built from a Business Process Management perspective, and partially to support buying Tallyfy, but it’s a good start for thinking about how to structure operations manuals. The links below it seem valuable as well.
How to Write an Operations Manual - Edward Lowe Foundation. This is for non-technical audiences, and I think that’s what creates the value. It helps expand our thinking on what needs to be in a tech company’s operations manual.
How To Document Your Current Processes In 10 Easy Steps - Quickbase. Can we build a similar 10 steps to how to document your current system?
The Ultimate Guide to Onboarding New Developers: Industry Best Practices and How to Plan the First 90 Days - Nicole Kow. Onboarding documentation is a common pattern for the first 90 days. I think the self-actualization piece she points to is interesting conceptually, but it feels a little shoehorned into this. However, it has a good checklist of things to consider in the onboarding documentation.
What People Really Want from Onboarding - Tori Fica. Ironically, the infographic seems to be broken, but it has
C4 Model - this seems to be the most used at this point.
I plan to expand on this, but as I have tried to figure out what makes a good programmer, I’ve slowly become more intrigued with what others think a good programmer is. This is a list I’ve collected of articles, blog posts, and books tangentially connected to this question.
These are of varying quality. I am trying to drive a larger understanding of what people at all levels think makes a good programmer, rather than just what experts think. We may be missing something if we focus on what those predisposed to authorship write. However, it’s interesting to see what different people at different experience levels and perspectives see. By Grounded Theory, I should try to incorporate these multiple views.
(Anti-disclaimer: I have removed any affiliate links I have found and derive no income from any links.)
Clean Code: A Handbook of Agile Software Craftsmanship - Bob Martin
This indirectly touches on what makes good code rather than a good coder, but from the pieces of what makes good code, we can infer a subset of what makes a good programmer, if you assume a good programmer produces good code.
The Clean Coder: A Code of Conduct for Professional Programmers - Bob Martin
This speaks to what he believes a professional programmer should be. For him, in this book, that means how a programmer should interact with the world around him, and we can infer that these are qualities he would expect in a good programmer.
The Pragmatic Programmer: your journey to mastery - David Thomas and Andrew Hunt
This is a seminal book in the field, and rightfully so, as the book is all about becoming a more productive programmer. It is a great influence on the entire field, but I see all sorts of advice conflicting with it. This question drove me past “what makes a good programmer to me?” to “what is the larger consensus of a good programmer?”
Level Up! How to Become a Great Professional Software Developer - Steven Talcott Smith
I have not yet read this, but it is on my list. Looking at the table of contents, I don’t see anything that doesn’t touch on what has already been said by others, but going by Grounded Theory, I don’t want to miss important data.
Who is a Good Programmer? - Ed Weissman. This short comment was key to starting my journey. The whole concept is value laden and it made me want to further grapple with the question.
Programming achievements: How to level up as a developer - Jason Rudolph
Levels of Seniority: How to Step Up as a Junior, Mid Level or a Senior Developer? - Kamran Ahmed Associated Hacker News comments
What Really Makes a 10x Engineer - Charles Max Wood
Mastering Programming - Kent Beck
What Makes a Good Programmer Good? - Josh Symonds Associated Reddit comments
Rich Hickey on becoming a better developer - this is an unsourced copy of a comment by Rich Hickey (of Clojure fame) responding to Jason Rudolph’s blog.
Great Programmers - Bram Cohen
How to be a great software developer - Peter Nixey Associated Hacker News comments
What makes a great developer? A story of an extraordinary blacksmith Associated Hacker News comments
What makes a good programmer? - Eran Galperin
Making the Good Programmer … Better - James Sugrue
Done, and Gets Things Smart - Steve Yegge
What Makes Great Programmers Different? - Andrew Binstock - Dr. Dobb’s
Becoming a Programming Rock Star: 5 Traits that Make a Great Programmer - Matt Weisfeld
What Makes A Great Programmer? - Treehouse
7 ways to be a better programmer - Amy Jollymore
Top 10 traits of a good programmer - “Jayson”
The key to being a good programmer - Andy Gibson
How To Become A Good Programmer - Simeon Visser
Laziness Impatience Hubris - This wiki article summarizes a Larry Wall idea about the three great virtues of a programmer, along with some comments on the insight.
He expounded on this later on Big Think, describing it as a joke. There he uses Persistence, Smart, Social, Literate, and Slightly Insane, using Lord of the Rings as a metaphor. (This looks like another attempt at wit.)
Why Good Programmers Are Lazy and Dumb - This is an old article that takes “Lazy” and “Dumb” as virtues, as Larry Wall does. “Lazy” is operationalized as writing code that is easy to modify later, and “dumb” as realizing your own limitations, continuing to learn, being critical of your own work, and keeping a childlike curiosity and creativity in solving problems.
I’ve kept these because they capture a wider variety of opinions from people going from their gut, and while they are low fidelity, they contain some interesting insights.
How Can I Know Whether I Am a Good Programmer? - Software Engineering Stack Exchange
What makes a great programmer? - An old Fog Creek Software discussion board post
What makes a good programmer good? - Reddit thread.
I had latched on to a particular piece of the question:
Also I waste considerable amount of time trying to do things in the most readable, maintainable and simple way possible. This means weighing merits of different solutions and choosing one. I am a really hesitant decision maker, resulting in more wasted hours.
Others latched on to what they saw as burnout, but I remember this particular dread, which took me years to overcome. The problem wasn’t the workload; the problem was that I hadn’t yet figured out how to conquer it, to the extent I have today. It still slows me down, relatively speaking, but not to the point that it holds me back like it did then.
With what I now understand about developer skills, this is not one problem but three: it is a problem of wisdom, speed, and discipline, and it can be attacked from multiple angles.
First, let me operationalize these terms within this context. By wisdom, I am referring to the accumulated best practices and patterns that lead to easy-to-maintain code. By speed, I am referring to the techniques we acquire to write code more quickly, from better mastery of our IDE to not reinventing the wheel with each new problem. By discipline, I am referring to the ability to focus on the problem, avoiding distraction and the yak shaving that interferes with our productivity.
If we want to attack this from the wisdom perspective, it is this: we are afraid of making the wrong decision because we are afraid to refactor. We are afraid to refactor because we don’t have sufficient test coverage. What if I break something? Kent Beck, in his RIP TDD post, mentions this feeling of overwhelm, which I believe is the root of this sort of analysis paralysis.
The good news is that, for developers like us, test-driven development is very helpful as a technique for getting over these problems. If our team is not test-friendly (by which I don’t necessarily mean TDD, just good old-fashioned automated tests that cover the system), however, it will be difficult to make the jump, because the code will not be written in ways that make it easy to test.
Once we have sufficient test coverage, we can internalize refactoring techniques that put the tools at our fingertips to make the next move.
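As a minimal sketch of what that safety net looks like (the discount rule below is invented for illustration), a characterization test pins down current behavior so the nested conditional can be restructured with confidence, using Ruby’s standard-library Minitest:

```ruby
require "minitest/autorun"

# Before: the hard-to-read version we are afraid to touch.
def discount(total, loyal)
  if total > 100
    loyal ? total * 0.8 : total * 0.9
  else
    loyal ? total * 0.95 : total
  end
end

class DiscountTest < Minitest::Test
  # Pin down every branch of the existing behavior. As long as these stay
  # green, we are free to rewrite the conditional however we like.
  def test_pins_down_existing_behavior
    assert_in_delta 80.0, discount(100.01, true), 0.1
    assert_in_delta 90.0, discount(100.01, false), 0.1
    assert_in_delta 95.0, discount(100, true), 0.01
    assert_equal 100, discount(100, false)
  end
end
```

With the test in place, hemming and hawing over the “right” structure becomes cheap experimentation: try a rewrite, run the suite, keep or revert.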
There are a few books I can suggest to help us jump the chasm:
Clean Code by Bob Martin. This book helped me think in more testable code, and it helped me make better decisions the first time around by showing me patterns I didn’t know before.
Refactoring by Martin Fowler. This one is old, but knowing the patterns of changing code gives us more confidence in knowing what is right, rather than hemming and hawing over what is readable and maintainable.
Working Effectively with Legacy Code by Michael Feathers can help us get from here to there. All three books help in the same three ways: they help us develop a set of tests so we are less afraid of breaking existing things, they give us the freedom to experiment, and they help us break problems down into smaller, more manageable pieces by letting us ask “what is the next thing I can test?”
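One concrete move from Feathers’ book is the characterization test: before changing legacy code, run it, record what it actually does, and pin that behavior down. A minimal Python sketch, where `legacy_discount` is a hypothetical stand-in for inherited code:

```python
# A characterization test, in Feathers' sense: we assert what the legacy
# code *does* do today, not what we think it *should* do.
# legacy_discount is a hypothetical stand-in for code we inherited.
def legacy_discount(total, is_member):
    if is_member and total > 100:
        return total * 0.9
    return total

def test_characterize_legacy_discount():
    # Observed behavior, captured as-is.
    assert legacy_discount(200, True) == 180.0
    # A surprising edge case: exactly 100 gets no discount. Maybe a bug,
    # maybe intentional -- either way it's now pinned down, so a
    # refactoring that changes it will fail loudly.
    assert legacy_discount(100, True) == 100

test_characterize_legacy_discount()
```

Each test like this answers “what is the next thing I can test?” and buys a little more freedom to refactor safely.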
If we want to attack this from the speed perspective, these are the techniques I used to speed myself up, especially in new code, where the problem wasn’t a lack of tests. As this wasn’t something I had internalized, I had to look systematically at how to solve it.
First, I looked for patterns within the application. Most features have a fairly similar flow, and the same patterns popped up. So I made a checklist of things that would go into each feature and looked for existing patterns in the application to follow, rather than trying to figure out the best approach for each feature from scratch. The fewer choices I make at any one point, the faster I can go. (Also, I’m not tied down to any particular implementation if I write tests. I don’t worry about regressions — the tests will cover that for me. Of course, I always do a manual double check before I check in, but I don’t have to worry about debugging something I broke a half hour ago.)
Second, I made a dedicated effort to move beyond learning just enough of my particular stack to get the next feature done, and dove deep into the documentation, reference material, books, and the source code. This gave me a better idea of the intended patterns, and I also recognized a few places where my coworkers and I were reinventing the wheel unnecessarily, and found plugins that had already solved our problems with only minor modifications.
Third, in the worst-case scenario, I have sometimes even written multiple solutions. It ended up faster to write all of them and choose one than to stay stuck in analysis paralysis. (That isn’t to say you shouldn’t think before you write code!)
Finally, I attacked this from the discipline angle. When I started out, I wasn’t a particularly disciplined programmer, and I have found these techniques helped me immensely.
First, I started meditating. By doing so, I became more self-aware of analysis paralysis. When I recognize it now, I take a few centering breaths to calm my mind, and mindfully choose a path.
Second, while I’m not as consistent about it as I have been at times, I try to exercise. Like meditation, exercise helps us learn to clear our minds and focus on command, and it sharpens our discipline chops.
With these, we can develop an awareness of how our body feels, and then an awareness of how analysis paralysis feels. If we catch ourselves in the act and cannot find a way out, we can fall back on a last-ditch solution: when caught in the trap, I set a 30-minute timer and bring out a pad of paper. If you feel you have the freedom, turn off the monitor.
Take a few deep breaths, and sketch out the solutions in the first ten minutes on the first page. Use UML or your own system.
In the next ten minutes, write a pro/con analysis on each path.
In the final ten minutes, make the decision. After this, my analysis time is up and I must code.
I don’t pretend to have conquered analysis paralysis, but then again those who have often don’t think before starting a solution, and I have to come in and clean up afterwards. :) Think before coding, but don’t get overwhelmed. Good luck!