When I was a software consultant I got to see a wide variety of software projects in a wide variety of organizations. My subjective experience is that most software projects in most organizations get it quite badly wrong most of the time.
I am aware that this is a contentious statement, and that, particularly in the context of this article, I need to be able to back it up with facts. Such facts are hard to acquire because, as I hope to show over a series of posts, our industry is riddled with subjective measures and lacks hard data.
Some of the reason for this lack of quantitative measurement is the difficulty, if not impossibility, of measuring software projects. How do we decide if my project is the same as your project? Without establishing such a baseline, how can we ever objectively measure the effect of my methodology/technology/team structure/etc. compared with yours?
The cost of a genuinely scientific approach to such measurement would be extreme, to say the least, and even then designing experiments that levelled out differences such as team strength, experience and so on would be extremely difficult.
There have been efforts in the past to gather statistical evidence of the success, or otherwise, of projects. The longest-running effort to collect such data that I have encountered has been carried out by the “Standish Group”, who have been researching software project failures for many years. Unfortunately they don’t publish their data, so there is no way of knowing whether their analysis stacks up. However, I recommend you type something like “research software project failure” into your search engine of choice and take a look at the somewhat dismal statistics. A high-level summary is that a significant percentage of software projects fail to deliver what the users wanted on time and on budget. This aligns with my own, albeit subjective, observations.
Superstition in software development
My contention is that while we talk about “Software Engineering” and “Computer Science”, much day-to-day practice is anything but scientific, or even based on sound engineering principles. One might even say that much of what is practiced in the name of software development is based on irrational decision-making. We fool ourselves into believing that we are being diligent and rigorous, when in fact we are simply, and somewhat blindly, applying what are effectively superstitious practices that have grown up in our industry over time about what a software project should look like. Worse than that, superstitious practices that clearly don’t work very well, if at all.
In part this is a cultural thing related to the paucity of scientific education, but I also think that we have some micro-cultural influences of our own at work within our industry.
I intend to write a series of posts on this topic, each outlining a particular superstition and some more rational approaches to tackling the problem, so let’s start with some low-hanging fruit – planning.
The start of a software project is the time when we know least about it. Everything that we do at this time is going to be based on speculation and guess-work. As an experienced developer having recently worked on a project similar to the one proposed, your guesses may be better than mine, but they remain guesses.
The requirements gathering process has an inevitable speculative element to it. During the process users and analysts will be guessing about the best way for the system to behave in certain circumstances, and they will be guessing at the value that this will bring when the system is live.
The commonest superstition at this stage is that all requirements must be clearly defined before the project can start. Yet this is the worst time to define all of the requirements, because it is the point furthest away from the time of use.
By the time the project is finished the business climate may have changed, and, even more likely, the understanding of the problem will have changed as the analysis team learns more.
A more measurable approach
The only way to know if a requirement is correct is to implement it and get users interacting with the behaviour in question. This suggests that rather than getting all requirements identified before a project starts, the best way to achieve a measurable outcome is to convert any given requirement as quickly as possible into a software solution that can be tried by users and accepted or rejected quickly. This feedback cycle is fundamental to a more scientific approach.
- Propose a theory:
Define a business need (a requirement).
- Design an experiment to prove the theory:
Implement a solution that can be placed in front of users.
- Test the theory by experiment:
Let the users try the new behaviour.
- Evaluate the results:
Did the users get what they needed from the new behaviour of the system?
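The cycle above can be sketched as a simple loop. This is only an illustration of the idea, not a real process; the requirement names, the `implement` stand-in, and the `users_accept` check are all hypothetical placeholders for real development work and real user feedback.

```python
# A minimal sketch of the requirements feedback cycle described above.
# Each requirement is treated as a theory; implementing it and putting it
# in front of users is the experiment; their acceptance is the result.

def run_feedback_cycle(requirements, implement, users_accept):
    """Work through requirements one at a time, keeping only those
    that users actually accept once they can try the behaviour."""
    accepted, rejected = [], []
    for requirement in requirements:       # 1. propose a theory
        solution = implement(requirement)  # 2. design the experiment
        if users_accept(solution):         # 3. test by experiment
            accepted.append(requirement)   # 4. evaluate: the theory held
        else:
            rejected.append(requirement)   #    evaluate: back to the drawing board
    return accepted, rejected

# Hypothetical example: pretend users only accept the search feature.
accepted, rejected = run_feedback_cycle(
    ["search", "bulk export"],
    implement=lambda r: f"prototype of {r}",
    users_accept=lambda s: "search" in s,
)
```

The point of the shape is that rejection is a normal, cheap outcome of the loop, discovered quickly, rather than an expensive surprise at the end of the project.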
Planning is another inherently speculative activity. For planning activities to have any bearing on reality they must be closely and iteratively allied to actual outcomes.
Without such an active feedback loop to keep the process on track, divergence of the plan from reality is inevitable. This outcome is so commonly understood, and so widely experienced, that it is sometimes hard to see how the all-too-common superstition about project planning arose: the superstition that the only route to success is to define a fully detailed plan at project inception, which must then be stuck to religiously.
At the outset of a project we know very little. We don’t really know which requirements will earn the highest business value, we don’t know how easy the technologies will be to use in the context of this project, we don’t know how well the developers understand the problem, and we don’t know how much time people will spend doing other things. Most of these things we can’t know with anything but the woolliest of guesswork.
Therefore any plan we make must be flexible enough to cope with the extremes of these ranges of probability; if the technology choice turns out to be a big problem, the plan must be able to show the impact quickly and clearly; it can never prove that everything will be all-right, it can only show us as quickly as possible that we are in trouble!
On the other hand if our technology choice proves to have been inspired, the plan must be a useful enough tool to allow us to capitalize on the fact, and maybe bring dates forward, or perhaps increase the scope of our planned delivery; unlikely as this may sound it does happen.
Note: If our industry had a more normal distribution of project success such statements wouldn’t be surprising or unusual, because half our projects would deliver ahead of time or under budget or with more functionality.
The superstition runs: the best plan is a complete plan; the more detail the better; the more accurate we make our forecasts (a fancy word for guesses), the more realistic our plan will be.
This is one of the biggest problems in software development as well as one of the most pervasive superstitions. I have wasted months of my life planning, months in which I could have been producing business value in the form of software and achieving a more realistic, lighter-weight plan at the same time.
When a plan is very detailed, prepared ahead of time, and diverges from reality, it can cope with neither greater success nor the sadly more frequent failure. In my experience one of three things happens.
1. The project manager shuts themselves away and, with enormous effort and enormous pain, attempts to realign the plan with reality. Despite the hard work this has, in my experience, always been a complete failure, mostly because during the days and weeks they spend working on the plan the project keeps moving.
2. The project becomes schizophrenic: it has two independent realities. One is the “reality” of the plan, where effort is expended force-fitting what really happened into a shape we can pretend matches what we thought would happen. Then there is the reality of the software development, which carries on essentially un-tracked and un-reported in any meaningful sense. This tends not to last very long, because it usually becomes harder and harder to fool ourselves that the plan bears any relationship to the project. I have observed projects where the illusion was maintained, and it came as a very big shock to senior management when they eventually found that the project they had thought was on track turned out to be far from ready for production.
3. The PM realises the futility of options 1 and 2 and gives up tracking against the plan. Status reporting instead becomes more tactical, reporting what was done; but while this gives a sense of movement it provides no real sense of progress towards the goal of delivery.
A more measurable approach
The value in a plan is as a guide to achieving something concrete, in our case some working software. All plans will, and must, evolve; the more complex the plan, the less able it is to evolve. Minimizing detail is not just desirable, it is an essential attribute of a successful plan. A successful plan is one that can be maintained, one which bears more than a passing relationship to reality.
Plans must be large-scale with minimal detail when describing things that are more than a month ahead, and detailed, concrete and most importantly, measurable, when describing the immediate future.
Establishing a high-level scope:
- Propose a theory:
Establish a broad statement of scope and direction; avoid detail, but try to capture the real business benefits expected from the system. Establish a budget: if you expect to accrue business value X, what percentage of X is worth investing to achieve it? Establish a time limit if this is sensible. Projects should be constrained by budget, or by budget and time.
- Define an experiment:
Given your best view of the scope of the project, divide it up into requirements aligned only with the business value they will bring. Prioritize these stories by which will deliver the most value. Create a broad-brush estimate for each story (T-shirt sizing is accurate enough). Establish the development capacity that you expect the team to achieve, in terms of how many points you expect the team to deliver within the life of the project. Initially this can only be based on observed development capacity in other projects; the best approach is to pick the project closest to the one being planned and take actual figures from it. Do not theorize that the new technology will make the team go faster, or that these developers are smarter than the last lot. Plan with real, verifiable data! There is subjectivity implicit at this stage, so we must minimize its impact by collecting real data – measurements of the team’s rate of delivery on this project – as soon as we are able to, and re-aligning the plan on the basis of those measurements.
- Test the theory by experiment:
Start work as quickly as possible in order to evaluate the theory that is the plan. Measure REAL progress. Choose something that is genuinely measurable. Too many plans track progress as a percentage of task completion; this is a wholly subjective measure, and as a result wholly inaccurate. The only real measure of progress is finished software. Track and measure the rate of ‘feature completion’ as agreed with the ultimate users of the software.
- Evaluate results:
The measured results of feature completion will inevitably differ from the plan. The plan was, after all, only a theory. As the rate at which the team actually delivers features – in this project, with these technologies, with this mix of people – becomes known, the results of this experiment should be fed into the definition of the next. Create a new plan incorporating the hard data that has now been gathered.
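The plan–measure–re-plan loop above can be sketched end to end. Every figure here – the story names, T-shirt point values, borrowed capacity, and measured delivery rates – is invented purely for illustration, not taken from any real project:

```python
import math

# -- Define the experiment: broad-brush estimates plus borrowed capacity --

# Hypothetical T-shirt sizes mapped to rough story points.
POINTS = {"S": 1, "M": 3, "L": 8}

# Stories prioritized by the business value they will deliver.
backlog = [("take payments", "L"), ("customer search", "M"),
           ("audit log", "M"), ("export to CSV", "S")]

total_points = sum(POINTS[size] for _, size in backlog)        # 15

# Capacity taken from actual figures on the closest comparable project,
# not from optimism about new technology or smarter developers.
assumed_rate = 5  # points per iteration
planned_iterations = math.ceil(total_points / assumed_rate)    # 3

# -- Test by experiment: measure REAL progress, i.e. finished features --

# Points actually delivered, as accepted by users, in the first iterations
# of *this* project, with these technologies and this mix of people.
measured = [3, 4, 2]
measured_rate = sum(measured) / len(measured)                  # 3.0

# -- Evaluate: re-plan using the hard data now gathered --

remaining_points = total_points - sum(measured)                # 15 - 9 = 6
revised_iterations = len(measured) + math.ceil(remaining_points / measured_rate)
# 3 iterations done + 2 forecast = 5 in total, against the 3 first planned.
```

The original forecast was only a theory; once three iterations of real data exist, the plan is rebuilt on the measured rate rather than the borrowed one, and the divergence shows up quickly and clearly instead of at the end.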
I will post some more thoughts on common software development superstitions in future.