How good are you at predicting the future? Chances are that, like most of us, you’re not as good at making predictions as you would like to believe you are.
Each episode of the PM Lift podcast provides a brief, focused summary on a key topic that affects our performance as effective project managers.
Today we’re going to be talking about Project Estimating and how to improve our estimates so that we set our project up for success by better managing our stakeholders’ and our team’s expectations.
We must always ensure we know the purpose of the estimate and the amount of detail available, so that we can use an approach that matches the need, the time available and the level of effort that is justified. This podcast will show you how.
One of the first things we need to address is what an estimate actually is. Let’s begin by making it clear that it’s different from a goal. A goal represents a desired outcome… we’d like to complete a task in 2 days, but the reality could be quite different.
Why do we need estimates? Well, we need to know whether the project or piece of work we’re about to embark on will be completed by the required date or within the required budget. If not, then the project may not be viable and should either be redesigned or perhaps not progressed any further.
An estimate should reflect how long we believe the work will truly take, which will normally be longer than our goal. So if we treat estimates as goals, we’ll find that we consistently miss them in terms of cost, time or both.
An estimate is also not a prediction; it’s a forecast. Our brains are extremely poor at predictive estimates, which is why all too often we see a task estimated to take 1 month becoming 2 months, then 3 months and so on. You’ll have noticed that we normally underestimate the amount of time something will take, which is why we are surprised on those all too rare occasions when something takes less time than predicted. This is partly because we have a natural bias towards optimistic predictions.
These types of predictions, also called deterministic estimates, fail to take into account the probability of successfully meeting the estimate.
In order to estimate a piece of work, whether a simple task or a complex project, we need information. We need to know enough about what it is that we’re being asked to deliver. The more ambiguity and uncertainty there is around the requirements or approach, the less accurate the estimate will be. Even when working with an Agile approach we need to have completed enough design up front to be able to understand the scope and size of the undertaking at the start of the project. The depth and detail will come later.
Estimates early in a project are beset with a number of difficulties. Firstly, requirements may still be being defined and there could be a high degree of variability between what is being asked for and what is needed, so the information needed to inform good estimates may not be available. Add to this that the team may be new to working together and therefore has not had the opportunity to build a culture of trust and openness.
Even in established teams culture can play a big part in estimate accuracy. If the team members don’t feel accountable then they are likely to ask fewer questions, and generally put less effort into the estimating activities. If the culture doesn’t support open sharing of issues and concerns then important factors won’t be considered, leading to overly optimistic estimates. If the team feel pressured by a domineering manager’s agenda then their ability to provide realistic estimates is completely undermined and the project will suffer as a result.
The earlier in the project’s life we try to estimate, the greater the level of uncertainty and therefore the greater the risk of an inaccurate estimate. This is represented by Barry Boehm’s “Cone of Uncertainty”, a very visual representation of estimate variance at different stages of a project. The diagram is described below:
The Cone of Uncertainty is a line chart with time on the x-axis and negative to positive variability on the y-axis. Two lines extend from left to right: one starts at a high level of positive variability and falls towards the zero line the further along the x-axis it travels, while the other starts at a high level of negative variability and rises towards the zero line. This gives the image of a cone lying on its side, with the widest part at the left and the tip of the point at the right-hand side.
The cone represents the level of variability, both positive and negative, that estimates will have earlier in a project’s life. At the inception of a project, there is such a high degree of uncertainty, due to a lack of understanding of requirements, technologies and potentially team members, that the level of variability between estimated and actual performance is high enough to render the estimates next to worthless.
As more work is done to elaborate on the requirements and needs of the project, and as more knowledge and information is gained from the act of undertaking the project and completing work, the degree of variability decreases. As the project progresses into the construction or development stages, the level of variance decreases and the estimates become more accurate.
Closely linked to this is the fact that the further off in time a piece of work is going to take place, the greater the variability or uncertainty the estimate will contain. Consequently, it is best to estimate at differing levels of detail depending on where we are on the time (x) axis of the cone of uncertainty and how far into the future we are trying to estimate. Estimates produced earlier in the life of the project should therefore be presented as ranges that take into account the high level of uncertainty present at that point in the project’s lifecycle.
However, there is traditionally a lot of resistance to this idea and therefore a lot of guesses and assumptions are made in order to produce a single value estimate. Each of these assumptions translates into one or more risks that could adversely affect the project.
So how can we overcome these challenges at the start of the project to help determine if the undertaking is viable?
First off, in the early stages of a project, it is all too easy and common to rely on a HIPPO estimate. HIPPO stands for Highest Paid Person’s Opinion. This form of Expert Judgement based estimating relies on the “been there, done that” view of experience: the more senior the person, the more accurate their estimate will supposedly be. However, this is far from the case. It is a form of best guess and, whilst sometimes needed for a quick ballpark when dealing with high levels of ambiguity and uncertainty, it is far from ideal (or accurate). It sits right at the left-hand side of the cone of uncertainty, so estimates arrived at by this method should always be used with care, as they at best give only a very rough guess of the time, resource or budget needed.
Another form of estimating is Bandwidth estimating, which provides three different estimates: the most likely outcome, the best potential outcome and the worst potential outcome. This 3-point range can be better at showing the level of risk in the estimate, as the further apart these values are from each other, the greater the level of risk built into the estimate.
The Program Evaluation and Review Technique (PERT) is an example of this approach. It produces a triangular distribution estimate consisting of Optimistic, Pessimistic and Most Likely outcomes.
PERT uses the formula (O + 4M + P) / 6, where P is the Pessimistic estimate, M the Most Likely and O the Optimistic outcome. This produces a single-number estimate that is weighted towards the most likely outcome and circumvents some of the issues that come from overly optimistic (and the less likely pessimistic) estimating.
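As a minimal illustration (not something prescribed in the podcast), here is a short Python sketch of the PERT calculation, using invented values for a hypothetical task:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Weighted PERT estimate: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_spread(optimistic, pessimistic):
    """Rough measure of spread, often taken as (P - O) / 6."""
    return (pessimistic - optimistic) / 6

# Hypothetical task: 4 days optimistic, 6 days most likely, 14 days pessimistic.
print(pert_estimate(4, 6, 14))  # 7.0 days, weighted towards the most likely value
print(pert_spread(4, 14))       # ~1.67 days of spread
```

Notice how the wide pessimistic value pulls the answer above the most likely 6 days without dominating it, which is exactly the weighting the formula is designed to give.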
Probabilistic estimating aims to improve on the simple distribution curve provided by PERT by using mathematical models such as Monte Carlo simulations to compute a risk-based distribution curve. This, however, requires more time, more understanding of the technique, a lot more thinking and specialist software. This requires a greater investment in time and effort than many are prepared to make, especially with the recent trend towards more agile estimating approaches.
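To give a feel for what a probabilistic estimate involves, here is a hedged Python sketch that uses the standard library’s random.triangular to run a Monte Carlo simulation over a hypothetical three-task schedule; real models and specialist tooling would be considerably richer:

```python
import random
import statistics

# Hypothetical tasks: (optimistic, most_likely, pessimistic) durations in days.
tasks = [(4, 6, 14), (2, 3, 5), (8, 10, 20)]

def simulate_once():
    """One simulated project outcome: sample each task from a triangular distribution."""
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

runs = sorted(simulate_once() for _ in range(10_000))

print(f"Median (50% confidence): {statistics.median(runs):.1f} days")
print(f"80% confidence:          {runs[int(0.8 * len(runs))]:.1f} days")
print(f"95% confidence:          {runs[int(0.95 * len(runs))]:.1f} days")
```

The value of the output is the confidence curve rather than a single number: you can tell a stakeholder the duration you are 80% or 95% confident of hitting, instead of quoting one figure.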
Other forms of estimating such as Parametric or Ratio Estimating also use mathematical formulas to determine an estimate based upon the size and complexity of the problem to be solved, albeit on a less complex scale than probabilistic estimating.
Parametric estimating relies on past events and data to drive future estimates. This works well where the unit cost or duration is known and the size of the task or number of units required is known: for example, at a simplistic level, calculating the number of tiles required to tile a bathroom multiplied by the cost of each tile and the time it takes to install each tile. For industries where activities are less standardised and repeatable, such as software development, this becomes more complex. An extension of this approach is Function Point estimation, which uses function point analysis to determine the number of different types of operation the application must perform, combined with historical data about the team’s past performance against each type of function point.
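To make the tile example concrete, here is a minimal Python sketch of parametric estimating; the unit rates are invented purely for illustration:

```python
# Hypothetical unit rates, e.g. drawn from past jobs or supplier data.
tiles_needed = 240       # size of the job in units
cost_per_tile = 3.50     # currency per tile
minutes_per_tile = 4     # installation time per tile

material_cost = tiles_needed * cost_per_tile
labour_hours = tiles_needed * minutes_per_tile / 60

print(f"Material cost: {material_cost:.2f}")  # 840.00
print(f"Labour: {labour_hours:.1f} hours")    # 16.0 hours
```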
A popular way to reduce the imperfections in our estimates is to take advantage of what our brains are actually very good at: comparing two or more things. For example, is the task we are estimating more complex or less complex than a similar task we’ve previously performed? If more complex, is it twice as complex or ten times as complex? This type of estimating is called analogy-based estimating.
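A hedged Python sketch of the idea, with an invented reference task and complexity multiplier:

```python
# Analogy-based estimate: scale a known, completed piece of work.
reference_actual_days = 5    # a similar task we finished previously (hypothetical)
relative_complexity = 2.0    # team judgement: "about twice as complex"

estimate_days = reference_actual_days * relative_complexity
print(f"Estimated effort: {estimate_days:.0f} days")  # 10 days
```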
Even probabilistic or analogy-based estimating approaches will still result in an imperfect estimate. That’s potentially OK, so long as we understand that they are imperfect and why. It’s equally important that the person being provided with the estimate understands the imperfections and what the ranges provided really mean. We are all good at latching onto the element of the estimate that best fits our needs and plans… all too often this is the optimistic estimate, and the other information provided is simply subconsciously ignored.
So how does Agile approach estimating? Agile views early, high uncertainty estimates as something of a fool’s errand and instead focuses on the near term, typically Sprint level estimates of user stories, where the cone of uncertainty is greatly reduced.
Agile methods also advocate having multiple people estimate the work rather than relying on a small group of so-called experts. This does not mean that the majority view of the size of the piece of work wins; the focus instead is on building understanding and developing a consensus view. This helps improve the quality of the estimates, as multiple viewpoints are considered and the past experiences of a wider group of people bring a greater level of knowledge and expertise to bear. The aim is consensus, not majority.
A popular form of Affinity Estimating is a technique called “Planning Poker”, which is also a form of Wideband Delphi estimating. Planning Poker involves the whole team and not just experts or more senior team members. Estimates by individuals can be anonymous or presented simultaneously. This helps avoid bias, groupthink and everyone following a perceived expert or senior person’s opinion.
Planning Poker is also an analogy-based estimating technique. Pieces of work are broken down into descriptions written as user stories, each describing a distinct piece of the overall requirements. Only those pieces of work that are to be worked on during the next sprint (typically a 2-3 week period) are estimated, thereby avoiding the issues highlighted by the cone of uncertainty described earlier. Each user story is then estimated individually by each team member, who assigns either a t-shirt size (such as extra small, small, medium, large, extra-large, etc.) or a Fibonacci number from the sequence 1, 2, 3, 5, 8, 13, 21, though 20 is often used instead of 21. These sizes are then shared, variances discussed and a consensus arrived at within the group by discussing and addressing any uncertainty or ambiguity that different team members may expose during the planning session.
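Purely as an illustration (the podcast does not prescribe any tooling), here is a small Python sketch of one Planning Poker round, showing a simultaneous reveal and a check for spread that should trigger discussion rather than a simple majority vote:

```python
# Modified Fibonacci scale commonly used in Planning Poker.
SCALE = [1, 2, 3, 5, 8, 13, 20]

# Hypothetical simultaneous reveal for one user story.
votes = {"Asha": 5, "Ben": 8, "Chen": 5, "Dee": 13}

low, high = min(votes.values()), max(votes.values())
if low != high:
    outliers = [name for name, size in votes.items() if size in (low, high)]
    print(f"Spread of {low}-{high} points; ask {', '.join(outliers)} to explain their reasoning,")
    print("then discuss and re-vote until the team converges on a consensus size.")
else:
    print(f"Consensus reached: {low} story points.")
```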
Agile estimating approaches such as Planning Poker are, however, still not perfect and can be implemented in ways that make them less effective than they should be. For example, if only the developers are participating in the Planning Poker then valuable insights and opinions may be missed, resulting in less effective estimates. It is therefore important to include all of the people involved in delivery of the project: the designers, developers, testers, systems architects, etc.
It’s also important to ensure biases are not introduced that could influence the outcome of the estimating process. One example is the anchoring effect, whereby a piece of information or a suggestion influences the outcome. This could be as subtle as saying something along the lines of “now we’ll discuss a simpler user story” or “last time we did something like this it took 4 weeks”. Statements like these, whether true or not, will introduce bias into the estimating process no matter how collaborative the discussion is.
It is important to remember, when tracking progress, that there is no such thing as a company-wide or historical average velocity. Each team produces its own cadence and cannot be compared directly to another team. In short, a story point is not a unit of time; it doesn’t map to hours or days in a way that can be compared across teams.
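A hedged Python sketch of forecasting from a single team’s own velocity history (the figures are invented) helps show why story points only make sense within that team:

```python
import statistics

# One team's completed story points per sprint (hypothetical history).
velocity_history = [21, 18, 24, 20, 19]
remaining_points = 130

avg_velocity = statistics.mean(velocity_history)
sprints_remaining = remaining_points / avg_velocity

print(f"Average velocity: {avg_velocity:.1f} points per sprint")
print(f"Forecast: roughly {sprints_remaining:.1f} more sprints")
# The same 130 points would forecast differently for another team,
# because a story point is not a transferable unit of time.
```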
We now know that the best estimates are those produced by a consensus view from a group of people actively working on the project, looking at a relatively close time horizon and drawing on significant knowledge and experience. This isn’t always possible when there are significant levels of uncertainty, yet an estimate is often still required. In those situations, all we can do is our best to set the project up for success whilst managing the expectations of our stakeholders. Single-value estimates should be avoided and bandwidth-based estimates provided instead. A summary of the assumptions and risks the estimate was based upon should also be communicated along with the estimate itself.
Account for uncertainty where possible by identifying the known unknowns and assessing the cost of the risk if it were to occur multiplied by the likelihood of it occurring.
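One simple way to express this is an expected-value contingency calculation, sketched here in Python with invented risks:

```python
# Known unknowns: (description, probability of occurring, cost impact if it occurs).
risks = [
    ("Key supplier delivers late", 0.30, 20_000),
    ("Integration rework needed", 0.20, 15_000),
    ("Specialist resource unavailable", 0.10, 40_000),
]

# Expected value of each risk = likelihood x impact; the sum gives a contingency figure.
contingency = sum(probability * impact for _, probability, impact in risks)
print(f"Risk-based contingency: {contingency:,.0f}")  # 13,000
```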
Refine the estimates using approaches such as Planning Poker as the project progresses, assess progress against the initial high-level estimates to identify deviations, and use what you learn to improve initial estimates on future projects.
Most importantly, ensure you foster a culture of honesty, openness and collaboration with your team and your stakeholders, so that honest conversations can be had about how estimates were derived, how and when they need to be refined, and the level of uncertainty or risk contained in them.
In the next episode of PM Lift we will be looking at lessons learned: how they can help us reflect on and share valuable knowledge for the benefit of our own and our colleagues’ future projects.