There is one way to define output for a programming team that actually works, and that's to look at the impact of the team's software on the business.
Nothing like common sense to clear the air 🙂 The fuzziness sets in, however, when we try to gauge what that impact will look like before it is an impact. So we [everyone involved in software production] have tried a number of ways to refine the one metric [or group of metrics] that can give us a reliable clue ahead of time.
Mr Shore has posted his take, and refers to the Poppendiecks and Fowler for their perspectives. In addition, there are a dozen or so prescribed metrics that have been suggested, with varying degrees of intensity, by the handful of established agile methodologies. Amongst all of them, my favourite and most accurate has got to be: “commits per week” [CPW].
A commit, in the way my team understands it, is the smallest change you can make to the codebase without breaking it. That’s it. It can be one story, two stories or only half a story. It can be a bug fix or a refactoring. Either way, it’s a productive change, because whatever gets committed is, as a matter of principle, designed to improve the codebase.
What makes this such a good metric is that it is hard to jimmy and it is wonderfully self-regulating. In a team environment, the metric is also straightforward and honest in its interpretation. Most productivity metrics fail because there’s always *another* way to interpret them; they’re loaded with ambiguity which can be used negatively, especially when the going gets rough or political.
Off the bat, if that’s a motive [ammunition] for even using a metric, or if that kind of temptation is too great, then no metric is ever going to work fairly. That being said…
Why you can’t jimmy a CPW
Whether you bloat your code, starve your code, design badly, over-engineer or don’t design at all, there’s only *so* long you can work with something before you need to “offload” it. By offloading, I mean adding it to the codebase so you can work on the next part of it. In a team environment [more than one developer], the longer you wait, the more painful your commit is going to be [merges and out-of-date logic]. The more painful your commit, the longer it takes to commit, and the more trouble you start running into. Now when everyone else in your team is committing n times a day and you’re only contributing n/2, your siren should start wailing. The team as a whole is not productive. If you try to compensate for a bad CPW number by making multiple commits of very little consequence, you’ve got to sit in front of a test harness for most of your day watching for green, or risk breaking the build, which disqualifies the commit anyway. As a result, you end up getting less work done, which impacts your estimates and delivery time.
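Just to make the idea concrete, here’s a rough sketch [not from the post, and assuming the history lives in git] of how you might pull per-person weekly commit counts out of the log and spot the n/2 case. The “half the team median” threshold is purely illustrative, not a rule.

```python
# Sketch: count commits per author per ISO week from `git log`, and flag anyone
# committing at under half the team's median rate for a given week.
# Assumes a local git checkout and a strftime that understands %G/%V.
import subprocess
from collections import defaultdict
from statistics import median

def commits_per_week(repo_path="."):
    """Return {author: {iso_week: commit_count}} parsed from git history."""
    out = subprocess.run(
        ["git", "log", "--pretty=format:%an|%ad", "--date=format:%G-W%V"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    counts = defaultdict(lambda: defaultdict(int))
    for line in out.splitlines():
        author, week = line.rsplit("|", 1)
        counts[author][week] += 1
    return counts

def flag_low_cpw(counts, week):
    """Return (authors under half the team median for `week`, team median)."""
    weekly = {author: weeks.get(week, 0) for author, weeks in counts.items()}
    team_median = median(weekly.values()) if weekly else 0
    return [a for a, n in weekly.items() if n < team_median / 2], team_median
```

Treat the output as a conversation starter, not a scorecard; the whole point is that the team notices the gap itself.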
For each team, the average CPW number will vary depending on the type of work being done. For example, spike work will cut CPW down, but it should be down consistently across everyone on the spike team. It is also important to realise that CPW will fluctuate, peak and fall, and that you cannot aim for an “average”. That’s not to say you cannot maintain an “average” for a length of time if you’re into a long, predictable season of development.
As with most numbers, the actual value of CPW is of more academic interest; it’s the values compared as trends that are highly indicative of production. For example, over a period of more than 220,000 changes to the codebase, our average CPW per resource [be that a pair or an individual] is 20. That’s 4 commits per day, at roughly one commit every 100 minutes. Interesting. But to turn that into a rule for each iteration, a performance indicator the team must hit to “make the target”, is just ludicrous.
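If you do want to watch the trend rather than chase the number, a rolling average over the last few weeks is enough. The sketch below builds on the hypothetical commits_per_week() helper above; the four-week window is an arbitrary choice.

```python
# Sketch: team-wide CPW as a trend. Returns (iso_week, team_total, rolling_avg)
# tuples so you can watch the direction of the line rather than a single target.
def team_cpw_trend(counts, window=4):
    weeks = sorted({w for per_week in counts.values() for w in per_week})
    totals = [sum(per_week.get(w, 0) for per_week in counts.values()) for w in weeks]
    trend = []
    for i, week in enumerate(weeks):
        recent = totals[max(0, i - window + 1): i + 1]
        trend.append((week, totals[i], sum(recent) / len(recent)))
    return trend
```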
I’m all for metrics being used to measure and estimate business return on investment [it’s part of being accurately successful], but tempered against the temptation to be blind. The knee-jerk response is not to use any metrics at all for fear of being misrepresented, and there are some rather “extreme programmers” 😉 supporting this position. Don’t measure because you can’t measure. That kind of thinking can also seriously limit your potential.
So either extreme position on the use of metrics is snake magic and has no place in a team committed to beautiful programming. Metrics are useful, everybody uses them, and there are plenty to choose from. Keeping track of them, CPW in particular, tempered with some common sense, can give you some very early indicators of what kind of impact your team is going to have before it gets called an impact.