The Programmer

In trying to describe what a programmer does, it’s easy to get sidetracked into discussions about compilers, delivery methodologies, design principles and frameworks. And then we try to define the attributes of a “good” programmer when we’re really trying to define a “valuable” programmer [or team]. And even then, we can get sidelined into discussions about how a valuable programmer communicates, approaches design, deals with team conflict, translates and abstracts appropriately.

And that’s all good. But there is a less-spoken-of attribute which is very valuable:
is the programmer a servant?

As a programmer, i believe my existence is [should be] one of servanthood. All my decisions regarding design, implementation, methodologies and conflict should be underpinned by an attitude of “to serve”.
I have heard programmers [myself included] negate a user request because:
“…the system is not designed to do it that way…”
“…it’s too difficult to implement…”
“…it’s too much effort…”
“…it’ll break the class design…”
“…too much refactoring…”
i do concede that these may be valid reasons in the right context. As i’ve paraphrased them here, though, imagine these responses expressed devoid of any attitude to serve.

So… rather, if the class design won’t handle it, change the design.
If it’s a mission… make a plan.
Bottom line, a programmer exists to serve the needs of the user [note: i say “user” and not necessarily the whims of the product manager/owner; that’s a different discussion altogether 🙂 ]

And if you approach your choice of compiler, methodology, framework, team members and principles, and the way you deal with conflict resolution, translation and communication, with an attitude to serve, and serve your user well… the rest is mostly detail.

Productivity Metrics

There is one way to define output for a programming team that does work: look at the impact of the team’s software on the business.

Nothing like common sense to clear the air 🙂 The fuzziness sets in, however, when we try to gauge what that impact will look like before it is an impact. So we [everyone involved in software production] have tried a number of ways to refine the one metric [or group of metrics] that we can rely on to give us a reliable clue ahead of time.

Mr Shore has posted his take, and refers to the Poppendiecks and Fowler for their perspectives. In addition, there are a dozen or so metrics that have been prescribed, with varying degrees of intensity, by the handful of established agile schools. Amongst all of them, my favourite and most accurate has got to be: “commits per week” [CPW].

A commit, in the way my team understands it, is the smallest change you can make to the codebase without breaking it. That’s it. It can be one story, two stories or only half a story. It can be a bug fix or a refactoring. Either way, it’s a productive change because whatever gets committed is, by principle, designed to improve the codebase.

This is such a good metric because it is hard to jimmy and it is wonderfully self-regulating. In a team environment, the metric is also straightforward and honest in its interpretation. Most productivity metrics fail because there’s always *another* way to interpret them; they are loaded with ambiguity which can be used negatively, especially when the going gets rough or political.

Off the bat, if that’s a motive [ammunition] for even using a metric, or if that kind of temptation is too great, then no metric is ever going to work fairly. That being said…

Why you can’t jimmy a CPW
Whether you bloat your code, starve your code, design badly, over-engineer or don’t design at all, there’s only *so* long you can work with something before you need to “offload” it. By offloading, i mean, add it to the codebase so you can work on the next part of it. In a team environment [more than one developer], the longer you wait, the more painful your commit is going to be [merges and out-of-date logic]. The more painful your commit, the longer it takes to commit, and the more trouble you start running into. Now when everyone else in your team is committing n times a day, and you’re only contributing n/2, your siren should start wailing: the team as a whole is not productive. If you try to compensate for a bad CPW number by making multiple commits of very little consequence, you’ve got to sit in front of a test harness for most of your day watching for green, or risk breaking the build, which disqualifies the commit anyway. As a result, you end up getting less work done, which impacts on your estimates and delivery time anyway.

For each team, the average CPW number will vary depending on the type of work being done. For example, spike work will cut down CPW, but it should cut it down consistently across the spike team. It is also important to realise that CPW will fluctuate, peak and fall, and that you cannot aim for an “average”. That’s not to say you cannot maintain an “average” for a length of time if you’re into a long, predictable season of development.

As with most numbers, the actual value of a CPW holds more of an academic interest, but the values compared as trends are highly indicative of production. For example, over a period of 220000+ changes to the codebase, our average CPW per resource [be that a pair or an individual] is 20. That’s 4 per day, at roughly one commit every 100 minutes. Interesting. But to make that a rule for each iteration, and to make the number a performance indicator over every iteration to “make the target”, is just ludicrous.
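To make the trend idea concrete, here is a minimal sketch of how CPW per resource might be tallied. The authors and dates are made up, and extracting the (author, date) pairs from your version control system is left out; this only shows the counting:

```python
from collections import Counter
from datetime import date

# hypothetical commit log: (author, commit date) pairs pulled from version control
commits = [
    ("alice", date(2007, 3, 5)), ("alice", date(2007, 3, 6)),
    ("bob",   date(2007, 3, 5)), ("alice", date(2007, 3, 13)),
    ("bob",   date(2007, 3, 14)), ("bob",  date(2007, 3, 15)),
]

def cpw(commits):
    # tally commits per author per ISO week: the trend matters, not the target
    counts = Counter()
    for author, day in commits:
        year, week, _ = day.isocalendar()
        counts[(author, year, week)] += 1
    return counts

for (author, year, week), n in sorted(cpw(commits).items()):
    print(f"{author}: {n} commits in week {week} of {year}")
```

Plotted over a few months, those weekly counts are the early-warning siren described above; any single week’s number on its own tells you very little.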

I’m all for metrics being utilised to measure and estimate business return on investment; it’s part of being accurately successful, tempered against the temptation to be blind. The knee-jerk response is not to use any metrics at all for fear of being misrepresented, and there are some rather “extreme programmers” 😉 supporting this position: don’t measure because you can’t measure. That kind of thinking can seriously limit your potential.

So either extreme position on the use of metrics is snake magic and has no place in a team committed to beautiful programming. They’re useful, everybody uses them and there are plenty to choose from. Keeping track of them, and in particular CPW, tempered with some common sense, can give you some very early indicators of what kind of impact your team is going to have before it gets called an impact.

Agile Relationships

The XP Coach label often gets thrown in with other descriptive titles like facilitator, mentor, team lead and trainer, in order to paint a picture of what the role entails. A big picture indeed, but not all coaches can or will fit all the attributed descriptions, since each is quite different, particularly mentor.

A mentorship … is a dynamic shared relationship in which values, attitudes, passions, and traditions are passed from one person to another and internalized. Its purpose is to transform lives (Berger, 1990).

If we combine the above definition with this list of attributes of a good mentor [inspired by Lewin, 1993 and Gordon, 2002], a mentor is:
enthusiastic about the mentee’s progress
willing to help the mentee whenever needed
willing to recede when credit needs to be distributed
willing to protect the reputation of the mentee through, for example, co-authorship
leading the way through support and encouragement [not through dictation]
unconditional in accepting the mentee along with his/her ideas

… it stands to reason that most relationships within programming, and in particular within the emerging programming culture [accelerated through Agile practices like pair-programming], have an element of mentorship: coach or not. Like it or not 🙂

I have benefitted more from mentors in my career than i have from learning resources. Make no mistake, those resources are invaluable, but my mentors shaped my values, challenged my thinking and encouraged my passions. These determine who i am, and not just what i can [or can’t] do.

Mostly, though, my menteeship has been ad-hoc [as is most mentoring, i’m assuming]. I wish it had been more explicit, since the value it offers is obvious. Be that as it may for me, i do believe that now, and in the generations to come, mentorship in software should be more explicit.

We all learn tricks from the gurus. This will never change. But where do we learn to think, and how do we sharpen that axe constructively? Open source, forums, blogs: these are part of it, but relationship is key. After all, computers should be about people, right? And mentorship holds some answers. It is something we can all take on more explicitly as we advance: always being mentored, forever being a mentee.

References
Berger, S. (1990). Mentor Relationships and Gifted Learners. [http://ericec.org/digests/e486.html] In Boston, B. (1979). The mentor and the education of the gifted and talented. In J. H. Orloff (Ed.), Beyond Awareness: Providing for the Gifted Child.
Gordon, J. (2002). Qualities of an Inspiring Relationship. [http://www.qualitycoaching.com/Articles/mentor.html]
Lewin, R. (1993). Complexity: Life at the Edge of Chaos.

Continuing Education Gaps: Part Deux

This is a bit of a carry-over from a previous post where i discussed gaps in continuing education. Although the focus of that discussion was form and function, Raph Koster’s book on game design [A Theory of Fun] made me realise something else about continuing education: it’s just boring.

A large part of it, at least. There’s no shortage of content, and no shortage of media and presentation variety. But it’s still mostly boring and, hence, difficult to engage with. From the learner’s perspective, that is.

As a continual learner, most of my time is spent gathering information from blogs, stories and special cases. I will use MSDN, and the like, as a point of reference on technical detail, but if i really want to “learn” something, to grok it, i read a story about it.

I read about the human being behind the problem: how and when they discovered the challenge. What did they try, what *almost* worked, and why not? What frustrations did they experience and, finally, what solution do they suggest? In the story, i pick up on the mood, get to feel with the writer and become part of the adventure. In doing so, i am having fun because i’m engaging with all sorts of patterns threaded into the story [implicit and explicit]. I also get to use my imagination: what does the writer look like? what kind of cubicle do they work in? what kind of boss is breathing down their neck? what time of day is it? what is the look on their face when they make their discovery? how do they feel about it? And i know that this is real.

It’s not another hypothetical Bob and Alice story but a real life event. Real blood and sweat and tears are involved in finding the solution and so, in turn i integrate more than one sense into the story. It’s interesting and above all, it’s fun and that’s how i learn.

Then i stare blankly at the table of contents in front of me for a .NET Fusion course. As much as i would love to learn all about it, where do i start? How do i apply all this to my real life, the here and now? As good as it might be as a reference, it’s definitely not something i can learn from…


An Assumption Makes…

Assumptions are wonderful. They allow you to fly ahead without needing to fuss over any time-consuming details… until, at least, the assumption fails you. Like the “ref” keyword, for example.

A data type is a value type if it holds the data within its own memory allocation [e.g. numeric types, bool, any struct type]. A reference type contains a pointer to another memory location that holds the data [e.g. String, any class type].

So, when it comes to argument passing in .Net, by default, all value types are passed by, well, value 🙂 and reference types, too, are passed by value. But the value, in the case of a reference type, is the value of the reference. This little nuance [nuisance, at first] popped up quite late in my project, simply because the implemented design didn’t call for any dynamic re-allocating, until one particular test started failing during what was supposed to be routine “refactoring”. I say “supposed to be” because the refactor ended up changing the behaviour, hence it was no longer a refactor… anyhoooo…
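The nuance isn’t unique to .Net. Here is the same behaviour sketched in Python, whose object references are likewise passed by value [the function names and data are, of course, made up]:

```python
def reassign(items):
    # rebinds the *local* copy of the reference; the caller never sees this
    items = ["replaced"]

def mutate(items):
    # follows the caller's reference and changes the shared object in place
    items.append("added")

data = ["original"]
reassign(data)
print(data)  # still ["original"]: only the copied reference was re-pointed
mutate(data)
print(data)  # ["original", "added"]: the shared object itself was changed
```

C#’s `ref` keyword is precisely what you reach for when you need the `reassign` case to be visible to the caller; without it, you get the behaviour above.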

My assumption [reference types = raw pointer, ala C++] allowed me to gloss over an entire section of the C# specification. Had i paid any real thought to this little abstraction way back then, i would probably still have forgotten about it when i had to call upon that knowledge for the first time, months later. Or not. At least now i know i won’t *ever* forget about it 🙂

Any which way, i still think assumptions are good. Even better, though, if you can change them. And best if you can constantly challenge them, or have them challenged. And yes, an assumption can lead you off in the wrong direction, but hey! At least it’s a direction. I could have indulged the specification in depth beforehand and told my employer to wait n months while i read all about C#… i don’t think that would have worked too well either.

As always, some tempering between the extremes is a refreshing welcome…


Methodology Wars

It is interesting to see the increasingly varied responses to Agile as it waxes and wanes in popularity. From within the ranks of the new order rises a breed of zealots determined to see their ways overthrow the old. The stoics glare down at this revolution with some contempt and evidence of its untrustworthiness. The evidence itself is backed up by their reputation, and that should be enough for victory. Yet the new order marches on, and in between, there’s another minority quietly getting on with bridging the gap…

I guess my view of government organisations is influenced by the incompetence of many. As grossly unjust as my opinion is, i am encouraged by many others, and most recently by: Army Simulation Program Balances Agile and Traditional Methods With Success. All this while i’ve been quietly researching combining the two [motto: to be Agile enough to do Waterfall] and wondering just how this fusion would play out in a larger project.

Rather sweetly actually: OneSAF is a success!

This does pose a bit of a problem for the stoics and zealots, though. Who gets to claim this victory? Or will they both ignore this one 🙂 Or maybe we can expect a new range of books, both for and against OneSAF?

Refactoring Agile: How OneSAF Could Have Been Better.
Traditional DeadWeight: Agile Carries OneSAF.

All i can say is: “Kudos to Mr Parsons, LTC Surdu and the team on some clear thinking!”


The Right Tool

It’s a theme which comes up quite regularly: “the right tool for the right job”. From DIY to software, this mantra, manifested literally, saves you money, to say nothing of the bundle of emotional energy which gets exhausted trying to fit round blocks into square holes. I can use a knife as a screwdriver, my cellphone as a delicate hammer and chewing gum as glue. As much as they work, they are not the right tools for the job.

In software, we tend to think of tools as, inter alia: compilers, languages, debuggers, IDEs, SDKs and drivers. Very rarely do we regard our processes, resources and skills as tools. We define these instead as attributes of a project, team or individual. But attributes are abstract things, such as value, determination and enjoyment. Of course, we don’t want to offend anyone by referring to them as a tool. But let’s rise above ourselves for a moment and ignore the connotations of “tool”. In doing so, we can take advantage of the definition to pursue success with greater enjoyment.

The challenge: need to respond to aggressive shifts in the market and stay ahead of competition by releasing features rapidly. The tool: Agile.

The challenge: need to carefully plan out the next n months of development in a mature vertical market. The tool: Waterfall.

The challenge: need to research algorithms and implementations of those algorithms on different platforms. The tool: established computer scientists.

The challenge: need to mentor a team of young programmers within a business application product. The tool: pair-programming.

Instead of dogmatically insisting that the “attributes” of your project [processes, resources and skill set] determine its course, let the real attributes [success, enjoyment and value] be a product of your project, and manage those just as proactively as any other aspect of it. And while you course through your project, employ the right tools for the right job at the right time.

And just because a hammer worked really well driving that nail into the wall, it doesn’t mean it’ll do an equally fantastic job at attaching the mirror to the wall.

The irony for me, though, is that we all instinctively know this and, by some subconscious decision-making process, we apply this principle quite well, up to a point. It’s when we don’t apply it that we end up in “Houston, we have a problem” territory.

I think the art of getting this right is, like anything else, to be conscious of what you do and decidedly know what makes you successful. Like your code: when it unexpectedly stops working, it’s probably because you were never really sure why it was working to begin with…


I Love My TestHarness

now wouldn’t that make a great bumper sticker? 😀

again, yesterday, i experienced the fullness of my beloved test harness. as always happens, business requirements change; dynamic market pressures or product discovery over time dictate that change is required. now, whether you’ve spent 6 months designing before coding or 6 months designing through coding [implemented code IS the design], how do you evaluate the impact of the change accurately? how do you go back to the hand that feeds you and estimate the cost of change with confidence [which impacts on a marketing strategy and promise] and maintain that near-perfect delivery track record? and then, how do you know, for sure, that your implemented change doesn’t inadvertently break some other part of the system, now that the system is so huge [increasing feature set over time] that it’s getting near impossible to hold it all together in your head at once?

welcome the test harness!

it was so quick to implement the change [fully tested on its own, of course] and then integrate the new module into the existing system… the next step was to figure out: where to begin with handling all the other intelligence that relies on the old structures and is impacted by the new one?

ran the tests. red bar, with breaking intelligence over one primary area. there were one or two ad-hoc modules that were affected. no sweat. the beauty was that i didn’t need to comb through the system to find them. i let my system tell me, and in doing so saved myself a load of cognitive energy. now that the buggy areas are recorded, i can fix one test at a time until the bar is green and voilà! 😀
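for the curious, here’s a toy sketch of that red/green loop using Python’s unittest [the order structure, prices and names are all made up; any xUnit-style harness works the same way]:

```python
import unittest

# hypothetical "intelligence" that depends on the restructured data
def total_price(order):
    # after the structural change, each line item became a dict
    return sum(item["price"] * item["qty"] for item in order["lines"])

class TotalPriceTests(unittest.TestCase):
    # this test goes red the moment a structure change breaks this caller,
    # pointing straight at the affected module: no combing through the system
    def test_totals_line_items(self):
        order = {"lines": [{"price": 10, "qty": 2}, {"price": 5, "qty": 1}]}
        self.assertEqual(total_price(order), 25)

# run the suite and report red/green
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TotalPriceTests)
result = unittest.TextTestRunner().run(suite)
print("green bar!" if result.wasSuccessful() else "red bar...")
```

change the structure of `order` and the bar goes red exactly where the old assumptions live; fix one test at a time until it’s green again.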

there was *life*, allegedly, before a test harness, and then there’s life with a test harness. i cannot remember what software development was like before. i just know that, now more than ever, i truly, passionately enjoy my craft! [which also happens to pay the rent ;)]


It’s all in the estimate

Estimates form the basis for all software projects. In fact, estimates are part and parcel of our daily lives. We live, plan and act by them. Our expectations are met or shattered based on the estimates we feed into our lives.

Buying food, you might estimate before you set out how much money you need and how long you think you might be, and you plan supper, a night out, a telephone call based on the estimates you give yourself. When life goes according to our estimates, we’re happy; everything is running smoothly. When estimates are wrong, we adjust. But sometimes the ability of a bad estimate to flap its wings and spiral out of control can be deadly. Especially for a software project.

Based on estimates, business programmes are set in motion, budgets are approved and marketing plans are established. The length of the estimate is largely irrelevant; what’s critical is its accuracy. When projects, and hence people’s careers, lives, finances, lifestyles and families, are on the line [ok, maybe a tad dramatic :)], an estimate has the misfortune of either not being taken seriously enough on the one hand, or being taken far too seriously on the other.

Taken too seriously and the time to estimate can be as long as the time to do. Not taken seriously and the time to do is incalculably longer than the time to estimate.

And because estimates can be so critical, it does make sense that the right amount of energy be invested into getting them as accurate as they need to be, weighed against the cost of getting them too accurate. No solution strategy, technology or skill set is going to set you up professionally if you don’t know how long it’s going to take you to do something, to do anything. If you don’t really know how long it will take, it does imply that you don’t really know what you’re doing. And if you establish a trend over time of not delivering when you say you will, it shouldn’t come as a surprise if your services become undervalued.

Of course, you can always “buffer” your estimates and play it safe. But in a dog-eat-dog world where the ubiquitous “5-minute solution” marketing threatens your chances of being awarded the contract [or the glory- however inaccurate those 5 minutes are], buffering can be expensive.

Considering the risks, together with the cost of not being accurate, it pays dividends to be more boldly accurate. And to be more boldly accurate, it takes time invested into getting to know your weaknesses. It takes being honest with your progress using feedback mechanisms that might hurt your feelings. It requires that you get better, not just at what you do, but at how you do what you do.
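One such feedback mechanism, sketched here with made-up numbers: log your estimate against the actual for every completed task, and watch the ratio. The trend is the part that hurts [and helps]:

```python
# hypothetical (estimate, actual) pairs in hours, logged per completed task
history = [(4, 6), (8, 9), (2, 5), (16, 18)]

def accuracy_ratios(history):
    # actual/estimate per task: 1.0 is spot on, above 1.0 means you under-estimated
    return [actual / estimate for estimate, actual in history]

ratios = accuracy_ratios(history)
bias = sum(ratios) / len(ratios)
print(f"average over-run factor: {bias:.2f}")  # the honest, feelings-hurting number
```

A persistent factor above 1.0 is exactly the weakness worth getting to know; multiplying new estimates by your own historical bias is a crude but honest first correction.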


Beautiful Programming

With a mission statement which includes phrases such as creating beautiful software and a focus on beautiful people, it can sometimes be hard to follow such a mission if the very objective, beauty, is hard to materialize.

It has been said that beauty is in proportions. The moment you start emphasizing or leaning too heavily on one concept, you start to caricature the concept and fade on beauty. Beauty is all about holding the right proportion at all times.

Beautiful software does not waste features, but has everything you need. The interface is not busy, but practical. It’s efficient and natural, even while cruising through complex problems. The code is not bloated, nor cryptic due to code-line famines. It has the right proportions. Beautiful people are controlled but not retentive. They are passionate but not sentimental. Expression is a skill, not a habit. Focused, yet continually thinking out of the box. Pushing boundaries and pioneering but never falling off the edge, beautiful people contain the right proportions. Bringing the two together is indeed a mission statement, but it requires more mission and less statement. And to achieve this, we use the inaptly named and irony-burdened Extreme Programming.

In its essence, it aims to contain proportion, never to focus on the extremes. The right design, but simple enough and agile enough to tackle complex systems. Estimating to cater for business forecasting and survival, but not allowing dogmatic routines and methodologies to dictate. Indeed, it is a minefield of contradictions when you lean too heavily on one concept to the neglect of the other supporting principles; but achieve the right proportions and you attain beauty. Perhaps we could reconsider the term Extreme Programming [after all, it’s laden with clichés and prejudices] and look at the ways we want to work, what we want to achieve and, most importantly, how we want to achieve it. Perhaps the way we implement our mission is better suited to the title Beautiful Programming, because beautiful software, created by beautiful people, can only be created in a beautiful manner.