#NoEstimates #YesThroughput #AgileAlienAbduction

Agile Sacred Cow.

When I recently posted a tweet that said I’ve become a believer in #noestimates, I received more than a few confused and surprised responses privately from friends and colleagues. Rest assured, my fellow PMPs, that I have not recently suffered from a project management lobotomy. I was not abducted by agile aliens (though not gonna lie, I wouldn’t mind if David Duchovny showed up at my door). I do still understand that this is business and we cannot just build in a vacuum. People with the money need answers to their questions.

My buy-in to the concept of #noestimates is based on the premise that if we are successful in improving how we work, estimates on user stories quite naturally fall away. If we understand our work and what the business goals are, and we are able to normalize that work over time, we can start to use probability rather than relying upon human-based subjectivity, i.e., story points.

So what does this really look like? In a nutshell:

Let’s say my team has the start date and end date for each of their historical user stories (cycle time). We can use those cycle times to generate throughput for each sprint (or other time increment), showing us how many stories the team delivers within that increment. If we continue to gather this data over time, it becomes possible to understand what is realistic for the team to deliver within a larger given time increment. The user stories do not need to be identical in size for this exercise, because we are examining story throughput over time. Average story throughput can then be used to provide a forecast: to illustrate, based upon real data, why a feature (that has been broken down into user stories) will or will not fit into a given time period.
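To make that concrete, here is a minimal Python sketch of the idea. The completion dates, sprint start date, sprint length, and remaining-story count below are all made up for illustration; the mechanics (bucket finished stories into sprints, average the counts, divide remaining work by the average) are the point:

```python
from datetime import date

# Hypothetical data: completion dates of finished stories (made up for illustration).
completed = [
    date(2024, 1, 3), date(2024, 1, 9), date(2024, 1, 10),
    date(2024, 1, 17), date(2024, 1, 22), date(2024, 1, 24),
    date(2024, 2, 1), date(2024, 2, 6),
]

sprint_start = date(2024, 1, 1)  # assumed first sprint start
sprint_days = 14                 # assumed two-week sprints

# Bucket each completed story into its sprint to get throughput per sprint.
throughput = {}
for d in completed:
    sprint = (d - sprint_start).days // sprint_days
    throughput[sprint] = throughput.get(sprint, 0) + 1

# Average throughput becomes the forecasting basis -- no story points involved.
avg = sum(throughput.values()) / len(throughput)
remaining_stories = 20  # hypothetical backlog for the feature
sprints_needed = remaining_stories / avg
print(f"Avg throughput: {avg:.1f} stories/sprint")
print(f"Forecast: ~{sprints_needed:.1f} sprints for {remaining_stories} remaining stories")
```

Note that nothing here asked anyone to size a story; the only inputs are dates the team already produces just by working.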

Using the throughput of stories, rather than story points (velocity), is logically a more accurate method of forecasting because it does not rely on the subjective human element. Story points are based upon opinion. The start and end times of a story are not.

Once I had that revelation, it all clicked: I am a believer in #noestimates – not because I don’t think we should be responsible for providing some kind of forecast to our stakeholders, not because I don’t think we should commit and hold ourselves accountable to our commitments, but because the need to estimate stories is naturally eliminated when we rely upon throughput, and throughput is non-subjective.

In closing: #noestimates is not synonymous with no forecasting, and we Agilists should always be open to looking at new ways to improve how we work.

2 thoughts on “#NoEstimates #YesThroughput #AgileAlienAbduction”

  1. Interesting! Have you tried this on a real project? I would be interested in seeing a chart or graphical presentation of this, something like a burndown. Do you have something like that? Thanks


    • Absolutely! For the charts/example data you would like to see, check out focusedobjective.com and http://Bit.ly/SimResources.

      My first experience with this was in September 2012. I took over a project that was considered by management to be “off the rails”: dates kept slipping and code debt was piling up. After I got to know the team and observed for a while, I saw that there were several issues we needed to solve…and one of them was the frustration surrounding story points being used by management to forecast.

      The team didn’t have an issue with using story points to help them align on the work itself during planning (which I still think can have value to a team) – the problem was with management trying to adapt story points to hours, predict the future, and then force the team to work nights and weekends when the team inevitably did not deliver when management thought they should.

      I didn’t change anything about how the team was using estimating to help them size their own work – at first. What I did do was talk with the leadership about using cycle time and throughput for forecasting instead.

      To do this, we had to have a few key ingredients:

      – Cycle time data, to figure out how long it took to finish a story
      – Throughput data, to figure out how many stories were getting finished in a sprint
      – Predicted work data (the number of stories the team created for the remaining work necessary to release a given feature)
      – “Unknown work” data (the number of new stories getting created during a sprint because the team learned something as they worked)

      With those key ingredients, we could factor in the probability of the unknown work, combining it with the remaining stories, and then divide that by the team’s typical throughput (there are even cooler ways of doing this that show probability levels with dates in that link I posted above). The result was a forecast that was based upon what the team’s real capabilities had been.
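One simple way to get those probability levels is a Monte Carlo simulation over the historical data. The sketch below is my own illustration of that approach, not the exact method used on the project; the throughput history, unknown-work history, and remaining-story count are all hypothetical. Each trial replays the remaining work using randomly sampled past sprints, and the percentiles of the trials give "X% likely done within N sprints" answers:

```python
import random

# Hypothetical historical data (assumed, for illustration):
throughput_per_sprint = [5, 7, 4, 6, 5, 8]    # stories finished in each past sprint
new_stories_per_sprint = [1, 0, 2, 1, 0, 1]   # "unknown work" discovered each sprint

remaining = 30  # stories currently known to remain for the feature

random.seed(42)  # fixed seed so the sketch is repeatable
results = []
for _ in range(10_000):
    left, sprints = remaining, 0
    while left > 0 and sprints < 100:          # cap guards against runaway trials
        left += random.choice(new_stories_per_sprint)   # unknown work appears
        left -= random.choice(throughput_per_sprint)    # team's sampled delivery
        sprints += 1
    results.append(sprints)

results.sort()
for pct in (50, 85, 95):
    n = results[int(len(results) * pct / 100) - 1]
    print(f"{pct}% of trials finished within {n} sprints")
```

Because the trials sample from what the team has actually done, the forecast inherits the team's real variability instead of a single point guess, which is exactly the appeal over story-point arithmetic.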

      Now comes the amazing improvement part for the team:

      This exercise encouraged the team to re-examine the work to finish a feature, because we had to have user stories for the forecast. The team had not been blessed with great user stories and had just been trudging along with what they had, thinking they ultimately just “knew” what to build. When they stopped scrambling to finish half-baked, nonsensical user stories and took the time to groom and plan, their user stories became more standardized. The team organically stopped stressing about estimating the individual user stories, and started to rely on commitment-based planning entirely.

      The product beta-launched in November 2013, later than was originally targeted when the project began but absolutely on track for what our forecast had indicated once we stopped using story points to forecast!

