The CURE framework: a tool to break down tasks and improve story pointing.
I am not a fan of story points. (I am also not a fan of estimates, but neither of those is a battle I will win anytime soon.) My biggest frustration with them, though, is that the conversations they generate are poor, because the idea of what makes a story point is nebulous for most developers. The CURE framework is a tool that helps channel those discussions by organizing them into the categories of Complexity, Uncertainty, Risk, and Effort. I started using it to improve conversations about story points, but I find its greatest value is for team leads and senior developers as they break tickets down into manageable chunks for their teams, and in the shared vocabulary it provides for what makes a task easy or hard. In this essay, I will start by showing how I use it in the story point scenario, then discuss how I use it to subdivide tickets independent of points.
The elements of CURE are as follows:
- Complexity: How many places in the system does this change touch? How many pathways do we need to consider? How large does our mental model need to be to fully understand the change?
- Uncertainty: How confident are we that we understand the extent of the change? How well do we understand the underlying technologies? How evenly is that understanding spread across the people likely to pick up this work? Is this a codebase or area we work in frequently?
- Risk: What is the worst that can happen if this goes wrong? If this fails, could we suffer irreparable loss? If this fails, how hard is it to revert? Does the code being changed have good tests and specifications, so that we are confident there will be no regressions?
- Effort: This can be measured in lines of code or estimated hours.
When I am working with a new team, I start by splitting these conversations out. I explicitly ask them to rate complexity, uncertainty, risk, and effort in separate conversations using small, medium, and large. I then make a mechanical transformation from those ratings to story points using a Fibonacci ladder, advancing up the ladder for every deviation from small (a sketch of this mapping follows the list below). For example:
- If all elements are small, it’s a 1 point ticket.
- Each element rated medium moves the ticket one step up the Fibonacci sequence. For example, all smalls except a medium complexity is a 2 point ticket.
- Each element rated large moves the ticket two steps up the Fibonacci sequence. For example, S-S-S-L (large effort) would be a 3 point ticket.
- All mediums would be an 8 point ticket. (Medium complexity: 1 -> 2. Medium uncertainty: 2 -> 3. Medium risk: 3 -> 5. Medium effort: 5 -> 8.)
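A minimal sketch of this mechanical transformation, assuming the ratings are passed as plain "S"/"M"/"L" strings; the ladder values and the function name are my own illustration, not part of any standard tooling:

```python
# Hypothetical illustration of the Fibonacci ladder described above:
# every medium advances one step, every large advances two.

FIB_LADDER = [1, 2, 3, 5, 8, 13, 21]

def cure_to_points(complexity: str, uncertainty: str, risk: str, effort: str) -> int:
    """Map S/M/L ratings for each CURE dimension to a story point value."""
    steps = 0
    for rating in (complexity, uncertainty, risk, effort):
        if rating == "M":
            steps += 1
        elif rating == "L":
            steps += 2
    # Clamp at the top of the ladder rather than run off the end.
    return FIB_LADDER[min(steps, len(FIB_LADDER) - 1)]

# The examples from the list above:
assert cure_to_points("S", "S", "S", "S") == 1  # all small
assert cure_to_points("M", "S", "S", "S") == 2  # medium complexity
assert cure_to_points("S", "S", "S", "L") == 3  # large effort
assert cure_to_points("M", "M", "M", "M") == 8  # all medium
```

The point of writing it down this way is that the arithmetic is boring on purpose; the interesting work stays in the four conversations that produce the ratings.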
I tend to cap all tickets at 3 because of my domain and context, but you should set your own tolerances. If something lands above a 3, the team looks for ways to break the ticket down into smaller pieces (a small sketch after this list shows one way to surface which dimensions are driving the size):
- If something is complex, can we break it down into a set of simpler tickets? Is there a way we can develop integration tests that cover the interactions between those pieces? Can we create a refactoring ticket up front that simplifies the change without becoming too complex itself?
- If something is uncertain, can we split it into a spike ticket and an implementation ticket? Are we using a technology that is too unfamiliar? Can we ask an expert for time to pair?
- If something is risky, how can we reduce the risk? This is so dependent on the system that it is hard to enumerate all the ways a change may be risky, but it should open up a conversation about work that would mitigate the risk. Further, it can show leadership where technical debt has accumulated and build pressure for projects to reduce it.
- If effort is the bottleneck, can the work be split into a set of smaller tickets? Most teams already know how to do this.
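As a hypothetical illustration of that breakdown conversation (the function name and structure are my own, not part of any tool), a tiny helper can surface which dimensions are rated above small, so the team knows which of the strategies above to reach for:

```python
# Hypothetical helper: list the CURE dimensions rated above small,
# so the team can pick a matching breakdown strategy from the list above.

def oversized_dimensions(ratings: dict[str, str]) -> list[str]:
    """Return the dimensions rated "M" or "L" to focus the breakdown discussion."""
    return [name for name, rating in ratings.items() if rating in ("M", "L")]

# Example: a ticket with large uncertainty and medium risk might prompt
# a spike ticket plus risk-mitigation work before implementation.
print(oversized_dimensions(
    {"complexity": "S", "uncertainty": "L", "risk": "M", "effort": "S"}
))  # ['uncertainty', 'risk']
```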
As the team gets more familiar with the framework, they start doing these calculations in their heads and we can move back to a quicker story point conversation. If a ticket is not a “1,” I ask them to say where they see deviations and whether there is merit in finding a way to simplify. The conversations become more specific and targeted, and they are more valuable. It also lets QEs see exactly where they need to prepare for a ticket that is more complex or risky to test, and it gives them the confidence to speak about risk and complexity from their own domain instead of the more nebulous conversations they had before.
As a senior engineer or tech lead, you can also take this framework and use it to break down larger tasks before presenting them to the team, or even for your own work. Consider making a mental estimate for your median developer: where can you find the seams that reduce complexity and risk in particular? In my experience, developers tend to underestimate the amount of effort that risk introduces in testing until they are faced with the implications of the change, or they overestimate the risk in parts of the code that cause pain or are unfamiliar. Separating the conversations about risk and uncertainty helps identify the nature of their concerns. This framework also gives junior developers the vocabulary to break down how they think about a ticket when approaching it, and it gives you the tools to ask questions and offer strategies to mitigate their concerns.
Even if you abandon story points, keeping CURE in your back pocket gives you a practical way to split, reduce, and communicate work. Consider experimenting with it.
LLM Disclaimer: All components of this are human written, but an LLM was used for initial feedback to identify weak spots.