Doom?

The List itself should probably be rated somewhere 🙂
But there are certainly some valid reasons to point out the obvious: we cannot continue as we currently are [as a species] with our pervading [and degraded] morals and [de-]values and still expect to survive in some half-functional world.

who be bluffing who?

but that’s not to be a nay-sayer… o no.. on the contrary:

life is for living and living to the full.. indeed:
the Good Shepherd came so that we could have life and life more abundantly.
and we all know that life is not just the breathing and eating part.. it's that which refers to vitality- and that is what He came to give us… abundant vitality 🙂

so look past the doom and nay-saying- these are more comments stating the obvious… life is good, because God is good.

Obituary Of Common Sense

circulated and received by email… author: unknown.

Today we mourn the passing of a beloved old friend, Mr. Common Sense.
Mr. Sense had been with us for many years. No one knows for sure how old he was since his birth records were long ago lost in bureaucratic red tape.

He will be remembered as having cultivated such valuable lessons as knowing when to come in out of the rain, why the early bird gets the worm and that life isn't always fair.

Common Sense lived by simple, sound financial policies (don’t spend more than you earn) and reliable parenting strategies (adults, not kids, are in charge).
His health began to deteriorate rapidly when well-intentioned but overbearing regulations were set in place.

Reports of a 6-year-old boy charged with sexual harassment for kissing a classmate; teens suspended from school for using mouthwash after lunch; and a teacher fired for reprimanding an unruly student, only worsened his condition.

Mr. Common Sense declined even further when schools were required to get parental consent to administer aspirin to a student; but, could not inform the parents when a student became pregnant and wanted to have an abortion.

Finally, Common Sense lost the will to live as the Ten Commandments became contraband; churches became businesses; and criminals received better treatment than their victims.

Common Sense finally gave up the ghost after a woman failed to realize that a steaming cup of coffee was hot, spilled a bit in her lap, and was awarded a huge financial settlement.

Common Sense was preceded in death by his parents, Truth and Trust; his wife, Discretion; his daughter, Responsibility; and his son, Reason.

He is survived by three stepbrothers: My Rights, Not Me, and Lazy.

Productivity Metrics

There is one way of defining output for a programming team that actually works, and that's to look at the impact of the team's software on the business.

Nothing like common sense to clear the air 🙂 The fuzziness sets in, however, when we try to gauge what that impact will look like before it is an impact. So we [everyone involved in software production] have tried a number of ways to refine the one metric [or group of metrics] that can be relied on to give us a clue ahead of time.

Mr Shore has posted his take, and refers to the Poppendiecks and Fowler for their perspectives. In addition, there are a dozen or so prescribed metrics that have been suggested, with varying degrees of intensity, by the handful of established agilities. Amongst all of them, my favourite and most accurate has got to be: “commits per week” [CPW].

A commit, in the way my team understands it, is the smallest change you can make to the codebase without breaking it. That’s it. It can be one story, two stories or only half a story. It can be a bug fix or a refactoring. Either way, it’s a productive change because whatever gets committed is, by principle, designed to improve the codebase.

This is such a good metric because it is hard to jimmy and wonderfully self-regulating. In a team environment, the metric is also straightforward and honest in its interpretation. Most productivity metrics fail because there's always *another* way to interpret them; they are loaded with ambiguity which can be used negatively, especially when the going gets rough or political.

Off the bat, if that’s a motive [ammunition] for even using a metric, or if that kind of temptation is too great- then no metric is ever going to work fairly. That being said…

Why you can’t jimmy a CPW
Whether you bloat your code, starve your code, design badly, over-engineer, or don't design at all, there's only *so* long you can work with something before you need to "offload" it. By offloading, i mean, add it to the codebase so you can work on the next part of it. In a team environment [more than one developer], the longer you wait, the more painful your commit is going to be [merges and out-of-date logic]. The more painful your commit, the longer it takes to commit, and the more trouble you start running into. Now when everyone else in your team is committing n times a day, and you're only contributing n/2, your siren should start wailing. The team as a whole is not productive. If you try to compensate for a bad CPW number and make multiple commits of very little consequence, you've got to sit in front of a test harness for most of your day watching for green, or risk breaking the build, which disqualifies the commit anyway. As a result, you end up getting less work done, which impacts on your estimates and delivery time anyway.

For each team, the average CPW number will vary depending on the type of work being done. For example, spike mode will cut CPW down, but the number across the spike team should still be much the same. It is also important to realise that CPW will fluctuate, peak and fall, and that you cannot aim for an "average". That's not to say you cannot maintain an "average" for a length of time if you're into a long, predictable season of development.

As with most numbers, the actual value of a CPW holds more of an academic interest, but the values compared as trends are highly indicative of production. For example, over a period of 220000+ changes to the code base, our average CPW per resource [be that a pair or individual] is 20. That's 4 commits per day, at roughly one commit every 100 minutes. Interesting. But to make that a rule for each iteration, and to make the number a performance indicator over every iteration in order to "make the target", is just ludicrous.
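As an aside, here is a minimal sketch of how a CPW trend could be pulled out of a commit log. Nothing in it comes from our actual tooling; the class name, the sample dates and the week-bucketing rule are illustrative assumptions only:

using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

public static class CpwReport
{
   // Buckets commit timestamps [parsed from your VCS log] into year/week keys
   // and counts them: one number per week, i.e. commits per week.
   public static Dictionary<string, int> CommitsPerWeek(IEnumerable<DateTime> commits)
   {
      Calendar cal = CultureInfo.InvariantCulture.Calendar;
      return commits
         .GroupBy(c => c.Year + "-W" + cal.GetWeekOfYear(c, CalendarWeekRule.FirstFourDayWeek, DayOfWeek.Monday))
         .ToDictionary(g => g.Key, g => g.Count());
   }

   public static void Main()
   {
      // Illustrative data only: three commits in one week, one in the next.
      DateTime[] commits =
      {
         new DateTime(2007, 3, 5, 10, 0, 0),
         new DateTime(2007, 3, 5, 14, 0, 0),
         new DateTime(2007, 3, 7, 9, 30, 0),
         new DateTime(2007, 3, 12, 11, 0, 0)
      };

      foreach (KeyValuePair<string, int> week in CommitsPerWeek(commits).OrderBy(w => w.Key))
         Console.WriteLine(week.Key + ": " + week.Value + " commits");
   }
}

The point is not any single number it prints, but how the weekly counts move against each other over time.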

I'm all for metrics being utilised to measure and estimate business return on investment- it's part of being accurately successful- but tempered against the temptation to be blind. The knee-jerk response is not to use any metrics for fear of being misrepresented. And there are some rather "extreme programmers" 😉 supporting this position. Don't measure because you can't measure. That kind of thinking can also seriously limit your potential.

So either extreme position on the use of metrics is snake magic and has no place in a team committed to beautiful programming. They’re useful, everybody uses them and there are plenty to choose from. Keeping track of them, and in particular CPW, tempered with some common sense, can give you some very early indicators of what kind of impact your team is going to have before it gets called an impact.

Multicast Delegate Gotcha

There are enough multicast delegate samples for .Net available on coding websites, including MSDN, to get you started on how to make use of them. The 'why' is tackled in some of them, but not many include a section on "things to look out for". This post details one such gotcha: using them in the Observer Pattern.

Let’s start with a commonly published multicast delegate sample, flavoured with Observer Pattern language:

public class Subject
{
   public delegate void EventHandler(object from, EventArgs e);
   public event EventHandler Notify;

   public void OnNotify(EventArgs e)
   {
      if (null != Notify)
         Notify(this, e);
   }
}

public class ObserverA
{
   public void Notify(object from, EventArgs e) {...}
}

This design has several advantages and is endorsed by popularity of use in ASP.Net, so why not use the model outside that environment? For instance, an observer simply needs to subscribe by:

ObserverA obs = new ObserverA();
subjectInstance.Notify += new Subject.EventHandler(obs.Notify);

The burden of managing subscriptions is relegated 🙂 to the multicast delegate. No more foreach loops and keeping references on the observers. So, on the surface, all seems well in paradise. Further, the loose coupling between subject and observer via an interface [the delegate] promotes a warm and fuzzy feeling. The only snag is scope.

None of the samples you find deal with scope effectively, since most of them treat observers as either static methods [probably the most famous] or as methods inside an ASP.Net page cycle. And things can get fuzzy there. My next challenge is: what happens when the Observer goes out of scope?
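To make the scope question concrete, here is a minimal sketch of what actually happens. It reuses the Subject class above; DisposableObserver and the Demo harness are my own illustrative inventions, not part of the published samples:

using System;

public class DisposableObserver : IDisposable
{
   private bool disposed;

   public void Notify(object from, EventArgs e)
   {
      // Still invoked via the subject's invocation list, whether or not
      // this instance considers itself "gone".
      Console.WriteLine(disposed ? "notified after Dispose!" : "notified");
   }

   public void Dispose() { disposed = true; }
}

public class Demo
{
   public static void Main()
   {
      Subject subject = new Subject();
      DisposableObserver observer = new DisposableObserver();
      subject.Notify += new Subject.EventHandler(observer.Notify);

      observer.Dispose();
      observer = null;                   // our reference to the observer is gone...
      subject.OnNotify(EventArgs.Empty); // ...but the event's is not: prints "notified after Dispose!"
   }
}

The subject's event still holds a reference to the observer, so the observer is neither collected nor left in peace.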

In order to clean up properly, the observer has the responsibility of unsubscribing. If it doesn't, the subject will just "resurrect" it each time it fires an event, even if you dispose of your observer. Ideally then, in order to overcome this problem, the "destructor" on the observer needs to unsubscribe, if subscribed. To do this, the observer must keep a handle on the subject [still part of the pattern rules] in order to:

subjectInstance.Notify -= myEventHandlerInstance;

Which means the observer starts to take on more form:

public class ObserverA
{
   private Subject source;
   private Subject.EventHandler myEventHandlerInstance;
   public ObserverA(Subject subjectInstance)
   {
      source = subjectInstance;
      myEventHandlerInstance = new Subject.EventHandler(Notify);
      source.Notify += myEventHandlerInstance;
   }
   ~ObserverA() { source.Notify -= myEventHandlerInstance; }
   public void Notify(object from, EventArgs e) {...}
}

Now we have each observer maintaining a reference to the subject instance so that if the observers fall out of scope, they can clean up their subscription. But now what happens if the subject instance goes out of scope first? How does it notify observers of its unavailability? The first jump is to create a "DyingEvent" which becomes a mandatory subscription for all observers: Yuk! But thinking some more on it, does it even matter? If the subject dies, the observers won't receive any more notifications, but then when they die, they try to "unsubscribe". Oops. Gotcha!

Any ideas?

Other references:
Softsteel Solutions: C# Tutorial Lesson 16: Delegates and Events
A Beginner’s Guide to Delegates

Simplest Thing

An *agile-biased* blog would just not be complete without at least *one* article on TheSimplestThing… so here goes:

It has occurred to me [and no doubt, countless others] that after all that has been said about TheSimplestThing, there is no simple definition. The irony does not escape my sometimes blindingly slow wit but, simply put, TheSimplestThing has not [yet] been defined with the simplest definition that can be agreed on. If even defining the concept is not clear-cut, how can it be implemented?

Fortunately it seems that TheSimplestThing technicality is mostly a debate of semantic correctness but when pursued it becomes a philosophical fisticuff. You can start by asserting TheSimplestThing is X and find yourself arguing, moments later, that Such-And-Such is not TheSimplestThing because it isn’t Y.

In an attempt to capture the essence of TheSimplestThing, we use:
a) dictionary definitions
b) authoritative quotes
c) classic one-liners
d) Occam’s Razor

The dictionary definitions are the most controversial because they mix complex and complicated [and in fact use them as direct synonyms] in order to define what is simple: ie. that which is not complex/complicated. In software, however, complicated and complex have very distinct meanings.

In fact, the two can never be used synonymously because their differences are big enough to create more potential chaos than a power-crazy nuclear arms dealer on crack. So we should choose our words carefully with these two; unless we agree to use them synonymously, and then be specific about which one we are actually talking about. Or just keep the definitions distinct: surely that's the simplest thing? 😉

Personally, i maintain that TheSimplestThing is a bit of a misnomer- a red-herring- a goose chase- a magic mushroom. I prefer TheLeastComplicatedThing. That way, the solution can still be intuitively *complex* [if need be becos, well, the requirement is complex] but at least, and most importantly, simply understood.

Further Reading:
Simple Ain’t Easy
A Field Guide To Simplicity
complex vs complicated, + xaos
No ! Your software is complicated, not complex.

Seduction of Reduction

There are many definitions of reductionism but the most fitting here are:
…complex systems can be completely understood in terms of their components
…the analysis of complex things into simpler constituents

Aaahh… this is the stuff of programmers. Give us any complex problem and we can quickly break it down into simpler constituents and provide a solution. Unfortunately, this habitual mode of thought also stops us from *really* seeing things the way they are.

The problems we deal with are indeed complex, but by no means linear. By this, i mean that the impact on the global system of a proposed solution in one area cannot always be accurately predicted. This becomes particularly true as the scope of the system increases. We can look to the codebase to see how this principle manifests and plays itself out.

As the codebase grows, small changes may ripple across the system and produce undesired consequences somewhere else. This is not always a design flaw, but can be attributed to the natural evolution of the system. Either way, you end up with the same emergent non-linear complexity. Whether it falls into complete chaos or not is another discussion 🙂

Our linear reductionism works effectively in small systems where the number of interacting components is easy to snapshot at any one time. As the system grows, we need to adapt our thinking to facilitate this shift in complexity. This pattern of thinking i call Density Dependence, adapted from a similar concept in AI research.[1]

When we start finding that our habitual reductionism starts letting us down [manifested most notably by statements like: "Aaah.. yes, i forgot about that"], we should flag ourselves to start actively looking at the global properties of the system and start thinking about solutions "holistically" [ although i'm quite hostile towards that word itself 🙂 ]. How we think directly depends on the density of the solution, and conversely, the density of the solution depends on how we think.

Ironically, loose coupling contributes towards complexity by perpetuating the fallacy of composition, yet it [LC] is considered good practice.
If all my components are reusable, i should be able to plug 'n play different components- thereby making TheSystem itself a reusable component.
Integration is easy? Not that loose coupling is a bad idea. It's just funny, is all 🙂

Even funnier is when business employs reductionism to negotiate deliverables, and dev reacts badly:
If it takes one developer 4 days to do the job, then it should take 4 developers one day. So why can't we ship on Tuesday?
Again, we're never really engaging with any kind of linearity, so that kind of maths won't fit. Software delivery is complex [on many fronts] but governed by a few deeply simple rules in order to manage the complexity. Get your simple rules right, and it's B-E-A-Utiful 🙂

And so the theme keeps coming up time and again- even the way we think about our project should just be one of many tools we have available to successfully deliver what we, as programmers, agree to deliver.

References:
[1] Brooks, R, Cambrian Intelligence, MIT Press, 1999

Crime Expo

[Amended: 9:15pm]

Crime Expo South Africa is attracting a fair amount of attention of late. With good reason. South Africa relies on tourism and foreign investment for growth, not to mention 2010, when we have the honour of hosting the greatest game on earth 🙂 But how long can we pretend that all's well when there's serious trouble [not without good evidence] in paradise?

What’s more troubling though are the many responses to the site from people of all walks of life.

find more “civilised” :p debate at:

The Civic Platform
or
The Unbroken Barometer

I find it deeply troubling [irony aside] that many “peaceful” saffers would advocate crime [hacking] as a solution to combat something they don’t like. Oops. Although not meant to be serious and more said out of frustration or bursts of emotion, i do hope the site remains up and gets tested against time. further, this site is a showcase for our tolerance and commitment to democracy and freedom of speech. so let it be…
personally, i'm already bored with that site 'cos it's just so negative- it ruins my day completely! but that doesn't mean i'm ignoring crime- it's still a big issue for me. i need something to be done about it… but i'm pretty sure there are better ways of doing it…

Agile Relationships

The XP Coach label often gets thrown in with some other descriptive titles like facilitator, mentor, team lead, trainer in order to paint a picture about what the role entails. A big picture indeed, but not all coaches can/will fit all attributed descriptions since each is quite different, particularly mentor.

A mentorship … is a dynamic shared relationship in which values, attitudes, passions, and traditions are passed from one person to another and internalized. Its purpose is to transform lives (Berger, 1990).

If we combine the above definition with this list of attributes of a good mentor [inspired by Lewin, 1993 and Gordon, 2002], a mentor is:
enthusiastic about the mentee's progress
willing to help the mentee whenever needed
willing to recede when credit needs to be distributed
willing to protect the reputation of the mentee through, for example, co-authorship
leading the way through support and encouragement [not through dictation]
unconditional in accepting the mentee along with his/her ideas

… it stands to reason that most relationships within programming, and in particular within the emerging programming culture [accelerated through Agile practices like pair-programming], have an element of mentorship: coach or not. Like it or not 🙂

I have benefitted more from mentors in my career than i have from learning resources. Make no mistake, these resources are invaluable, but my mentors shaped my values, challenged my thinking and encouraged my passions. These determine who i am and not just what i can [or can't] do.

Mostly though, my menteeship has been ad-hoc [as is most mentoring, i’m assuming]. I wish it had been more explicit since the value it offers is obvious. Be that as it may for me, i do believe that in the now and in the generations to come, mentorship in software should be more explicit.

We all learn tricks from the gurus. This will never change. But where do we learn to think, and how do we sharpen that axe constructively? OpenSource/ forums/ blogs; this is part of it, but relationship is key. After all, computers should be about people, right? And mentorship holds some of the answers. Something we can all take on more explicitly as we advance, always being mentored, forever being a mentee.

References
Berger, S (1990). Mentor Relationships and Gifted Learners. [http://ericec.org/digests/e486.html] In Boston, B. (1979). The mentor and the education of the gifted and talented. In J. H. Orloff (Ed.), BEYOND AWARENESS: PROVIDING FOR THE GIFTED CHILD
Gordon, J. (2002). Qualities of an Inspiring Relationship [http://www.qualitycoaching.com/Articles/mentor.html]
Lewin, R. (1993). Complexity: Life at the Edge of Chaos

Continuing Education Gaps: Part Deux

This is a bit of a carry over from a previous post where i discussed gaps in continuing education. Although the focus of that discussion was about form and function, Raph Koster’s book on game design [A Theory Of Fun] made me realise something else about continuing education: it’s just boring.

A large part of it, at least. There's no shortage of content and no shortage of media and presentation variety. But it's still mostly boring and hence difficult to engage with. From the learner's perspective, that is.

As a continual learner, most of my time is spent gathering information from blogs, stories and special cases. I will use MSDN, and the like, as a point of reference on technical detail, but if i really want to "learn" something- to grok it- i read a story about it.

I read about the human being behind the problem: how and when they discovered the challenge. What did they try, what *almost* worked, and why not. What frustrations did they experience and, finally, what solution do they suggest. In the story, i pick up on the mood and get to feel with the writer and become part of the adventure. In doing so, i am having fun because i'm engaging with all sorts of patterns threaded into the story [implicit and explicit]. I also get to use my imagination: what does the writer look like? what kind of cubicle do they work in? what kind of boss is breathing down their neck? what time of day is it? what is the look on their face when they make their discovery? how do they feel about it? And i know that this is real.

It’s not another hypothetical Bob and Alice story but a real life event. Real blood and sweat and tears are involved in finding the solution and so, in turn i integrate more than one sense into the story. It’s interesting and above all, it’s fun and that’s how i learn.

Then i stare blankly at the table of contents in front of me for a .NET Fusion course. As much as i would love to learn all about it, where do i start? How do i apply all this to my real life; the here and now? As good as it might be for a reference, it’s definitely not something i can learn from…

Design Rules


The obscurely labelled The Zen of Python (by Tim Peters) offers some lighthearted pragmatism and welcome relief in the face of TheCheeseMovement.

  • Beautiful is better than ugly
  • Explicit is better than implicit
  • Simple is better than complex
  • Complex is better than complicated
  • Flat is better than nested
  • Sparse is better than dense
  • Readability counts
  • Special cases aren’t special enough to break the rules
  • Although practicality beats purity
  • Errors should never pass silently
  • Unless explicitly silenced
  • In the face of ambiguity, refuse the temptation to guess
  • There should be one– and preferably only one –obvious way to do it
  • Although that way may not be obvious at first unless you're Dutch [personally don't get the Dutch connection but it does apply to some i have in mind 🙂 ]
  • Now is better than never
  • Although never is often better than *right* now
  • If the implementation is hard to explain, it’s a bad idea
  • If the implementation is easy to explain, it may be a good idea
  • Namespaces are one honking great idea — let’s do more of those[erm… well.. mmm…]