There is an intriguing question that pops up frequently in organizations that develop software in projects: “When is a project successful?” For sure, one of the most (mis)used resources on the subject is the Standish Group. In their regularly updated CHAOS Report they define a project as successful if it delivers on time, on budget, and with all planned features. For a number of reasons this is, in my opinion, a rather vague definition.
First of all, how do you measure if a project has finished on budget? You would need to compare the actual budget to an originally planned budget. This originally planned budget is, of course, based on some estimate, at project start. An estimate at project start is not called an estimate for nothing. It is not a calculation. At best, it is an educated guess of the size of the project, and of the speed at which the project will travel.
As we know from experience, estimates at project start are often incorrect, or at least highly biased. For instance, what if the people who create the estimate will not do the actual work on the project? Is that fair? Also, we often think we know the technical complexity of the technology and application landscape. In reality, both frequently turn out to be much more complex during the execution of the project than we originally thought.
Second, when is a project on time? Again, this part of the definition depends on a correct estimate of how long it will take to realize the software, at project start. Even when a project has a fixed deadline, you could well debate the value of such an on-time delivery comparison. How do we know that the time available for the project presents us with a realistic schedule? Once again, it boils down to how much software we need to produce, and how fast we can do this.
All Planned Features
But the biggest issue I have with the Standish Group definition is the all planned features part of it. The big assumption here is that we know all the planned features up-front. Otherwise, there is nothing with which to compare the actual delivered features. Much research has been done on changes to requirements during projects. Most research shows that, on average, requirements change between 20 and 25 percent during a project, regardless of whether a project is traditional or agile. Leaving aside the accuracy of this research, these are percentages to take into account. And we do. In agile projects we allow for requirements to change, basically because changes in requirements are based on new and improved insights, and will contribute to enhancing the usefulness of the software. In short, we consider that changes to requirements will increase the value of the delivered software.
So much for software development projects’ success rates. Now back to reality. In 2003, the company I worked for engaged in an interesting project with our client. The project set out to unify a large number of small systems, written in now-exotic environments such as Microsoft Access, traditional ASP, Microsoft Excel, SQL Windows, and PowerBuilder, into one stable back-end system. This new back-end system was then going to support the front-end software used in each of the client’s 5,000 shops.
As usual, we started our agile project with a preliminary preparation sprint, during which we worked on backlog items such as investigating the goals of the project, and the business processes it should support. Using these business processes, we modeled the requirements for the project in (smart) use cases. We outlined a baseline software architecture and came up with a plan for the project. The scope and estimates for the project were based on the modeled smart use cases.
Because of the high relevance of this project to the organization, all 22 departments had high expectations of it and were considered stakeholders on the project. Given the very mixed interests of the different departments, we decided to talk to the stakeholders directly instead of appointing a single product owner; we figured the organization would never be able to agree on a single representative anyway. During the preparation sprint, we modeled 85 smart use cases in a short series of workshops with all stakeholders present.
The outcome of the workshops looked very promising and, adhering to the customer collaboration statement in the Agile Manifesto, the client’s project manager, their software architect, and I jointly wrote the plan for the project. We included the smart use case model, created an estimate based on the model, listed the team, and planned a series of sixteen two-week sprints to implement the smart use cases on the backlog. Of course we did not fix the scope, as we welcomed changes to the requirements and even the addition of new smart use cases to the backlog by the stakeholders.
However, we did add an unusual clause to the project plan. We included a go/no-go decision for further continuation of the project, to be made at the end of each sprint, during the retrospective. We allowed the project sponsor to stop the project for two reasons:
- When the backlog items for the next sprint no longer added enough value compared to the costs of developing them.
- In the rare case that the requirements grew exceptionally in size between sprints – we set this to 20 percent of the total scope.
Not that we expected the latter to happen, but, given the diversity of interest of the stakeholders, we just wanted to make sure that the project would not tilt over into a totally new direction or was hugely more expensive than originally expected.
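The two stop conditions above amount to a simple check, which can be sketched as follows. Everything here is illustrative: the function and parameter names are hypothetical, the value-versus-cost comparison is reduced to two numbers on the same scale, and nothing like this was actually automated on the project.

```python
# Hypothetical sketch of the go/no-go decision described above.
# Names, units, and numbers are illustrative, not from a real project tool.

SCOPE_GROWTH_LIMIT = 0.20  # stop if the backlog grows more than 20 percent


def should_stop(next_sprint_value, next_sprint_cost,
                baseline_use_cases, current_use_cases):
    """Return True if either stop condition from the project plan holds."""
    # 1. The next sprint's backlog items no longer add enough value
    #    compared to the cost of developing them.
    if next_sprint_value <= next_sprint_cost:
        return True
    # 2. The requirements grew exceptionally in size between sprints.
    growth = (current_use_cases - baseline_use_cases) / baseline_use_cases
    return growth > SCOPE_GROWTH_LIMIT


# The situation later in the story: 85 baseline use cases plus roughly
# 40 new ones is growth of about 47 percent, well past the 20 percent limit.
print(should_stop(100, 50, 85, 125))  # → True
```

With normal backlog churn (say, 5 new use cases on a baseline of 85, and value still exceeding cost) the same check returns False and the project simply continues.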
And, as you might expect, it did tilt over. After we had successfully implemented a dozen or so smart use cases during the first two sprints, somewhere during the third sprint the software architect and I sat down with the representative of the accounting department. Much to our surprise, our accountant came up with functionality that had so far gone unforeseen: he required an extensive set of printable reports from the application. Based on this single one-hour conversation, we had to add forty-something new smart use cases to the model and the backlog for this new functionality, a scope change of almost 50 percent.
At the next retrospective, I can tell you it sure was quiet. All eyes were on the project sponsor. There and then she weighed up all the pros and cons of continuing or stopping the project, and the necessity of the newly discovered functionality. In the end, after about thirty minutes of intense discussion, she made a brave decision. She cancelled the project and said: “It’s better to fail early with low costs than spend a lot of money and fail later.”
“We Can’t Stop Now!”
The question you could try to answer here is this: “Was this project successful, or was it a total failure?” For sure it did not implement all planned features, so to the Standish Group this is a failed project. But, on the other hand, at least it failed really early. This, to me, is one of the big advantages of agile projects: it is not that you will avoid all problems; they just reveal themselves much earlier. Just think of all the painful projects that should have been euthanized a long time ago, but continue to struggle onwards, just because management says: “We already invested so much money in this project, we can’t stop now!” And in that sense, given the client’s long history of failing software development projects, the project sponsor actually considered this project, once stopped, to be successful.
What is there to learn from this story? I would say it is always a good thing to have one or two preliminary preparation sprints that allow you to reason about the project at hand. I also consider it a good thing to develop an overall model of the requirements of your project at this stage – knowing that the model is neither fixed nor complete, and without going into a big up-front design. And, last but not least, if you fail, fail fast.