What is Capacity in software development? – The #NoEstimates journey

I hear this a lot in the #NoEstimates discussion: you must estimate to know what you can deliver for a certain price, time or effort.

Actually, you don’t. There’s a different way to look at your organization and your project. Organizations and projects have an inherent capacity, that capacity is a result of many different variables – not all can be predicted. Although you can add more people to a team, you don’t actually know what the impact of that addition will be until you have some data. Estimating the impact is not going to help you, if we are to believe the track record of the software industry.

So, for me the recipe to avoid estimates is very simple: Just do it, measure it and react. Inspect and adapt – not a very new idea, but still not applied enough.

Let’s make it practical. How many of these stories or features is my team or project going to deliver in the next month? Before you can answer that question, you must find out how many stories or features your team or project has delivered in the past.

Look at this example.

How many stories is this team going to deliver in the next 10 sprints? The answer to this question is the concept of capacity (aka Process Capability). Every team, project or organization has an inherent capacity. Your job is to learn what that capacity is and limit the work to capacity! (Credit to Mary Poppendieck (PDF, slide 15) for this quote).

Why is limiting work to capacity important? That’s a topic for another post, but suffice it to say that taking on more work than the available capacity causes many stressful moments and sleepless nights, while having less work than capacity might get you, and a few more people, fired.

My advice is this: learn what the capacity of your project or team is. Only then will you be able to deliver, reliably and with quality, the software you are expected to deliver.

How to determine capacity?

Determining the capacity (or capability) of a team, organization or project is relatively simple. Here’s how:

  1. Collect the data you already have:
    • If using timeboxes, collect the stories or features delivered (*) in each timebox.
    • If using Kanban/flow, collect the stories or features delivered (*) in each week or two-week period, depending on the length of the release/project.
  2. Plot a graph of the number of stories delivered in each of the past N iterations, to determine whether your System of Development (slideshare) is stable.
  3. Determine the process capability by calculating the upper (average + 1*sigma) and lower (average - 1*sigma) limits of variability.
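The calculation in steps 2 and 3 can be sketched in a few lines. The delivered-story counts below are made-up illustration data, not from a real project; the band is the average ± 1 sigma approximation described in step 3:

```python
# Process capability from past delivery data (illustrative numbers).
delivered = [7, 9, 6, 8, 10, 7, 9, 8, 6, 9]  # stories completed per sprint

n = len(delivered)
mean = sum(delivered) / n
# population standard deviation (sigma) of the delivered counts
sigma = (sum((x - mean) ** 2 for x in delivered) / n) ** 0.5

lower = mean - sigma  # lower capability limit
upper = mean + sigma  # upper capability limit

print(f"average: {mean:.1f} stories/sprint")
print(f"capacity band: {lower:.1f} to {upper:.1f} stories/sprint")
```

With these numbers the band comes out to roughly 6.6 to 9.2 stories per sprint: the range of output you can reasonably commit to, as long as the system of development stays stable.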

At this point you know what your team, organization or process is likely to deliver in the future. However, the capacity can change over time. This means you should regularly review the data you have and determine (see slideshare above) if you should update the capacity limits as in step 3 above.

(*): by “delivered” I mean something similar to what Scrum calls “Done”. Something that is ready to go into production, even if the actual production release is done later. In my language delivered means: it has been tested and accepted in a production-like environment.

Note for the statisticians in the audience: Yes, I know that I am assuming a normal distribution of delivered items per unit of time. And yes, I know that the Weibull distribution is a more likely candidate. That’s OK: this is an approximation that has value, i.e. it gives us enough information to make decisions.

You can receive exclusive content (not available on the blog) on the topic of #NoEstimates by subscribing to the #NoEstimates mailing list below. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates.

Picture credit: John Hammink, follow him on twitter

Humans suck at statistics – how agile velocity leads managers astray

Humans are highly optimized for quick decision making, relying on what Kahneman, in his book “Thinking, Fast and Slow“, calls System 1. One specific area of weakness for the average human is understanding statistics. A very simple exercise to explore this is the coin-toss simulation.

Get two people to run this experiment (or one computer and one person if you are low on humans :). One person throws a coin in the air and notes down the results. For each “heads” the person adds one to the total; for each “tails” the person subtracts one from the total. Then she graphs the total as it evolves with each throw.

The second person simulates the coin-toss by writing down “heads” or “tails” and adding/subtracting to the totals. Leave the room while the two players run their exercise and then come back after they have completed 100 throws.
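If you are low on humans, the real-coin half of the experiment can be simulated directly. A minimal sketch using Python’s `random` module (the seed is arbitrary, only there to make the run reproducible):

```python
import random

def random_walk(throws=100, seed=None):
    """Simulate the coin-toss game: +1 for each heads, -1 for each tails.
    Returns the running total after every throw."""
    rng = random.Random(seed)
    total = 0
    totals = []
    for _ in range(throws):
        total += 1 if rng.random() < 0.5 else -1
        totals.append(total)
    return totals

walk = random_walk(100, seed=42)
print(walk[:10])  # the first few points of the running total
```

Plotting `walk` against the throw number produces exactly the kind of graph the two players draw by hand.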

Look at the graph that each person produced. Can you detect which one was created by the real coin, and which was “imagined”? Test your knowledge by looking at the graph below (don’t peek at the solution at the end of the post). Which of these lines was generated by a human, and which by a pseudo-random process (computer simulation)?

One common characteristic in this exercise is that the real random walk, produced by actually throwing a coin in the air, is often “streakier” than the one the player invents. For example, the coin may generate several consecutive heads or tails. No human (except you, after reading this) would write that down, because it would not “feel” random. We humans are bad at creating randomness and at understanding its consequences, because we are trained to see meaning and a theory behind everything.
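One simple way to expose this streakiness is to measure the longest run of identical outcomes in each sequence. The two sequences below are illustrative, not recorded data: one alternates the way human-invented sequences tend to, the other contains the longer streaks real coins routinely produce.

```python
def longest_run(outcomes):
    """Length of the longest run of identical consecutive outcomes."""
    best = current = 1
    for prev, nxt in zip(outcomes, outcomes[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

# Illustrative sequences (H = heads, T = tails), not real experiment data.
human_like = "HTHTHHTHTTHTHHTHTHTT"  # alternates a lot, "feels" random
coin_like  = "HHHHHTTHHTTTTHTHHHTT"  # has the streaks real coins produce
print(longest_run(human_like))
print(longest_run(coin_like))
```

In 100 genuine throws, a run of five or more identical outcomes is very likely; a run that long almost never appears in a sequence a person writes down.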

Take the velocity of the team. Did it go up in the latest sprint? Surely they are getting better! Or, it’s the new person that joined the team, they are already having an effect! In the worst case, if the velocity goes down in one sprint, we are running around like crazy trying to solve a “problem” that prevented the team from delivering more.

The fact is that a team’s velocity is affected by many variables, and its variation is not predictable. However, and this is the most important point, the one thing you can reliably predict is that velocity will vary up and down over time.

The velocity of a team will vary over time, but around a set of values that are the actual “throughput capability” of that team or project. For us as managers it is more important to understand what that throughput capability is, rather than to guess frantically at what might have caused a “dip” or a “peak” in the project’s delivery rate.

When you look at a graph of a team’s velocity don’t ask “what made the velocity dip/peak?”, ask rather: “based on this data, what is the capability of the team?”. This second question will help you understand what your team is capable of delivering over a long period of time and will help you manage the scope and release date for your project.
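The second question can be turned into a simple check: compute the capability band from the velocity history and only investigate sprints that fall outside it. The velocity numbers below are made up for illustration:

```python
# Classify each sprint's velocity against the capability band
# (average +/- 1 sigma); variation inside the band is noise, not a problem.
velocities = [21, 22, 20, 21, 26, 20, 22, 21]  # illustrative sprint data

mean = sum(velocities) / len(velocities)
sigma = (sum((v - mean) ** 2 for v in velocities) / len(velocities)) ** 0.5
lower, upper = mean - sigma, mean + sigma

outside = [s for s, v in enumerate(velocities, start=1)
           if not (lower <= v <= upper)]

for sprint, v in enumerate(velocities, start=1):
    note = "within the band (noise)" if lower <= v <= upper else "outside the band (worth a look)"
    print(f"sprint {sprint}: velocity {v} -> {note}")
print(f"capability band: {lower:.1f} to {upper:.1f}")
```

With this data only one sprint lands outside the band; the other ups and downs are exactly the routine variation the post describes, and chasing them would be chasing noise.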

The important question for your project is not, “how can we improve velocity?” The important question is: “is the velocity of the team reliable?”

Picture credit: John Hammink, follow him on twitter

Solution to the question above: The black line is the one generated by a pseudo-random simulation in a computer. The human-generated line is more “regular”, because humans expect random processes to “average out”. Indeed, that’s the theory, but not the reality. Humans are notoriously bad at distinguishing real randomness from what we believe is random, but isn’t.

As you know, I’ve been writing about #NoEstimates regularly on this blog, but I also send more information about #NoEstimates, and how I use it in practice, to my list. If you want to know more about how I use #NoEstimates, sign up to my #NoEstimates list. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates.