
Humans suck at statistics – how agile velocity leads managers astray

Humans are highly optimized for quick decision making: the so-called System 1 that Kahneman describes in his book “Thinking, Fast and Slow”. One specific area of weakness for the average human is understanding statistics. A very simple exercise to review this is the coin-toss simulation.

Humans are highly optimized for quick decision making.

Get two people to run this experiment (or one computer and one person if you are low on humans :). One person throws a coin in the air and notes down the results. For each “heads” the person adds one to the total; for each “tails” the person subtracts one from the total. Then she graphs the total as it evolves with each throw.

The second person simulates the coin toss by writing down “heads” or “tails” and updating the total in the same way. Leave the room while the two players run the exercise, and come back after they have completed 100 throws.

Look at the graphs that each person produced: can you detect which one was created by the real coin, and which was “imagined”? Test your knowledge by looking at the graph below (don’t peek at the solution at the end of the post). Which of these lines was generated by a human, and which by a pseudo-random process (computer simulation)?

One common characteristic in this exercise is that the real random walk, produced by actually throwing a coin in the air, often shows longer streaks than the one simulated by the player. For example, the coin may generate several consecutive heads or tails. No human (except you, after reading this) would write that down, because it would not “feel” random. We humans are bad at creating randomness and at understanding its consequences, because we are trained to see meaning and a theory behind everything.
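If you want to run the “computer” side of this exercise, the sketch below is one way to do it in Python. It is my illustration, not part of the original exercise, and the function name coin_toss_walk is made up. It simulates the throws, keeps the running total, and also reports the longest run of identical results – the feature a real coin produces and a human imitator rarely writes down.

```python
import random

def coin_toss_walk(throws=100, seed=None):
    """Simulate one player's side of the exercise with a pseudo-random coin.

    Adds 1 for each "heads" and subtracts 1 for each "tails", and also
    tracks the longest run of identical results, which is the feature
    humans rarely reproduce when they fake the sequence by hand.
    """
    rng = random.Random(seed)
    totals, total = [], 0
    longest_run, current_run, last = 0, 0, None
    for _ in range(throws):
        toss = rng.choice(["heads", "tails"])
        total += 1 if toss == "heads" else -1
        totals.append(total)
        current_run = current_run + 1 if toss == last else 1
        longest_run = max(longest_run, current_run)
        last = toss
    return totals, longest_run

if __name__ == "__main__":
    totals, longest_run = coin_toss_walk()
    print("final total after 100 throws:", totals[-1])
    print("longest run of identical throws:", longest_run)
```

Run it a few times: with 100 throws you will routinely see runs of five or more identical results, even though such streaks do not “feel” random.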

Take, for example, the velocity of a team. Did it go up in the latest sprint? Surely the team is getting better! Or it’s the new person who joined the team: they are already having an effect! In the worst case, if the velocity goes down in one sprint, we run around like crazy trying to solve the “problem” that prevented the team from delivering more.

The fact is that a team’s velocity is affected by many variables, and its variation is not predictable. However, and this is the most important point, velocity will reliably vary over time. In other words, the one thing you can predict is that velocity will move up and down with time.

The velocity of a team will vary over time, but around a set of values that are the actual “throughput capability” of that team or project. For us as managers it is more important to understand what that throughput capability is, rather than to guess frantically at what might have caused a “dip” or a “peak” in the project’s delivery rate.

The velocity of a team will vary over time, but around a set of values that are the actual “throughput capability” of that team or project.

When you look at a graph of a team’s velocity, don’t ask “what made the velocity dip/peak?”. Ask rather: “based on this data, what is the capability of the team?”. This second question will help you understand what your team is capable of delivering over a long period of time, and will help you manage the scope and release date for your project.
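To make that second question concrete, here is a small Python sketch. It is my illustration rather than anything from this post: the sprint numbers are invented, the name capability_band is made up, and summarizing velocity as the mean plus or minus one standard deviation is just one simple way to describe a capability band.

```python
import statistics

def capability_band(velocities, spread=1.0):
    """Describe past sprint velocities as a band, not a single number.

    Returns the range the team typically delivers in: the mean of the
    past velocities plus/minus `spread` sample standard deviations.
    """
    mean = statistics.mean(velocities)
    stdev = statistics.stdev(velocities)
    return mean - spread * stdev, mean + spread * stdev

# Invented sprint history, in story points per sprint.
history = [21, 34, 25, 18, 30, 27, 22, 29]
low, high = capability_band(history)
print(f"Typical sprint delivery: {low:.0f} to {high:.0f} points")

# A rough release forecast follows from the band, not from any single sprint.
remaining = 200  # points left in the backlog (also invented)
print(f"Remaining work of {remaining} points: "
      f"{remaining / high:.1f} to {remaining / low:.1f} sprints")
```

Framed this way, the question is no longer why one sprint dipped or peaked, but whether the band itself is stable enough to plan a release against.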

The important question for your project is not, “how can we improve velocity?” The important question is: “is the velocity of the team reliable?”

Picture credit: John Hammink, follow him on twitter

Solution to the question above: the black line is the one generated by a pseudo-random simulation in a computer. The human-generated line is more “regular”, because humans expect random processes to “average out”. Indeed, that is the theory, but not the reality. Humans are notoriously bad at distinguishing real randomness from what we believe is random, but isn’t.

As you know, I’ve been writing about #NoEstimates regularly on this blog. But I also send more information about #NoEstimates, and how I use it in practice, to my list. If you want to know more about how I use #NoEstimates, sign up for my #NoEstimates list. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates.

Why projects fail is why (we think) they succeed!

When I started my career as a Project Manager, I too was convinced that following a plan was a mandatory requirement for project success. As I tried to manage my first projects, my emphasis was on making sure that the plan was known, understood and then followed by everyone involved.

When I started my career as a Project Manager, I too was convinced that following a plan was a mandatory requirement for project success

I wrote down all the work packages needed, and discussed with the teams involved when those work packages could be worked on and completed. I checked that all the dependencies were clear, and that we had no delays in the critical path (the chain of dependent tasks in the plan that has no buffer, so any delay there delays the whole project).

I made sure that everyone knew what to do, to the point that I even started using daily meetings before I had heard of Scrum, which would later institutionalize that practice in many software organizations.

As my projects succeeded, I was more and more convinced that the Great Plan was the cause for their success.

As my projects succeeded, I was more and more convinced that the Great Plan was the cause for their success. The better the plan, the more likely the project would succeed – or so I thought. And I was good at planning!

Boy, was I wrong!

It was only later – after several successful, and some failed, projects – that I realized that The Plan had little effect on the success of those projects. I could only reach this conclusion through experience. Some of the projects I ran were “rushed”: it was impossible to create a Great Plan, and they had to be managed “by the seat of the pants”. Many of them were successful nonetheless.

In other cases, I did create a plan that I was happy with. Then I had to change it. And then change it again, and again, and again – to the point that I did little else but change The Plan.

Confusing the chicken with the egg: which came first?

The example above is one where I had confused the final cause (chicken) with the original cause (egg).

When something works well, we will often retrospectively analyze the events that led to success, and create a story/narrative about why that particular approach succeeded. We will assign a “final cause” to the success: in my example I assigned the “final cause” of project success to having a Great Plan, and the events that created the Great Plan.

This is normal, and it is so prevalent in humans that there is a name for it: retrospective coherence. Retrospective coherence is what we create when we evaluate events after the fact and construct a logical path that leads from the initial state to the final state via logical causality, one that can easily be explained to others. These causality relationships are what lead us to create lists of “Best Practices”.

Best Practice lists are the result of Retrospective Coherence, and because of that many are useless

When the solution becomes the problem

However, this phenomenon of Retrospective Coherence is not necessarily a good thing. In my initial example about Project Management, I was convinced that the Great Plan and the related activities were the reason for success, because that is what I could “make sense” of when I looked back in time. But as I gained experience I was forced to recognize that my “Best Practice” did not, in fact, help me in other projects. This realization, in turn, led me to question the real reasons for success in my previous projects.

After many years of research and reflection I came to realize that many projects succeed for purely random reasons. For example: someone made a heroic effort to come to work over the weekend and recover the Visual SourceSafe database that had been corrupted once again, for the 1000th time!

But there are many other reasons why projects succeed by pure random chance. Here are some:

  • In one project we had a few great testers who were not willing to wait until the end of the project to test the product. What they found changed the requirements and made the project a success.
  • Some projects were started so that we could deliver the “perfect feature set” to our customers. But as time went by and the deadlines closed in, some managers – sometimes even me – understood that delivering on time was more important, and therefore changed the project significantly by reducing scope.
  • Some developers were single-handedly able to increase product functionality while reducing the code base by 30%. This feat increased quality massively and made delivering on time possible at all.
  • In one project we tried to use Agile. As a result, we started practicing timeboxed iterations and eventually ended up releasing so often that we could never be late.

These are only a few of the reasons why projects succeed despite having a Great Plan, rather than because of it.

The Original Cause

The reasons for project success that I listed above are only a few of those that can be called the “original cause” of success. Original causes are the ones that actually start the chain of events that leads to success (or failure), but they are too detailed, or too far in the past, to be remembered during a retrospectively coherent, after-the-fact analysis of project success.

The kicker

But the kicker is this: when we get caught in “Final Cause” assignment through the retrospective coherence lens of our logical mind, we lose a massive opportunity to actually learn something. By removing the role of “luck” or “randomness” from our success scorecard, we miss the opportunity to study the system that we are part of (the teams, the organization, the market). We miss the opportunity to understand how we can influence this system and therefore increase our chances of success in the future.

Many people in the project management community still think – today – that having a Great Plan is a “Best Practice”, and that you cannot succeed without one. I would be the first to agree that having a plan will increase your chances of success. But I will also claim that the Great Plan alone (including following that Great Plan) can never deliver success without the random events you will never recognize, because you are blind to the effects of chance in your own success.

In our lives we must always strive to separate the Original Cause (what actually caused success) from the Final Cause (why we think success happened when we analyse it after the fact).

In a later post I will discuss how to increase the chances of project success by – on purpose – inserting randomness and chance into the project. Stay tuned…

Image credit: John Hammink, follow him on twitter