Myth Busting

Atheists.org are a fascinating bunch.
For a start, an atheist is supposed to believe that there is no God; to have no theistic beliefs; to disbelieve in the existence of deities; to find the very idea of god senseless. Yet theirs is as active a religious system as most others?

So despite the fact that they don’t believe there to be a God, they go one step further out of their way to propagate what they call “myths” about Christianity. Now, it’s one thing not to have a belief in God, but a completely different story if you’re going out of your way to try and prove that He doesn’t exist.

Why do you need to prove that He doesn’t exist if you already believe that He doesn’t?

My guess is because the evidence is so overwhelming that He does exist that if you want to hold to the belief that He doesn’t, with any form of reasonable grip and integrity, you need to counterweigh the evidence. And the weighing, in most cases, is based on quantity rather than quality of argument.

For example, one of their arguments for one of the many alleged myths around Christianity goes something like…
<argument>
Jesus said..
If I alone bear witness about myself, my testimony is not deemed true- John 5:31

And then later says…

I am the one who bears witness about myself.. – John 8:18

So therefore he’s a false witness.
</argument>

It’s funny how we will sometimes scrutinize fragments of the scripture to try and make a point. In cases like this, it seems the search for “truth” already began with a foregone conclusion and then evidence was gathered in support. Honestly, if we were to be real here, we would set out to find the evidence first and then come to a conclusion….

1. In John 8:18, the FULL quote is actually…
I am the one who bears witness about myself, and the Father who sent me bears witness about me.
And just before that He says [v14]
Even if I do bear witness about myself, my testimony is true, for I know where I came from and where I am going, but you do not know where I come from or where I am going

2. John 5:31 leads into the next verse [v32]…
There is another who bears witness about me, and I know that the testimony that he bears about me is true.
And then He continues and talks about John bearing witness too.

Both, fully quoted in context, make very powerful and merciful claims.. but i guess there are none so deaf as those who do not want to hear.

Owning Your Continuous Integration

The idea here is to “own” your continuous integration process and make it work for you. After all, you spent some time setting up CruiseControl.Net and it’s ticking along nicely. At some stage, you need to make it “yours” and let it reflect your needs in terms of your process, and record the build data that you use [need] to make decisions.

<soapbox>
In fact, i’d encourage you to really push into making it your own. Not just the look ‘n feel- but the real nitty gritty. Define and extend and experiment with your continuous integration process to make it add value…
</soapbox>

the Scenario
Our automated build process runs a series of tests against a database. Before we start the tests, we need to record some information about the database, gathered by means of a stored procedure. We also need to persist the results as part of the build so we can monitor trends.

the Tools
* Nant build script
* Nant custom user task
* stored procedure
* xslt template
* cc.net config files

The Nant build script is included as part of the build process kickstarted by the bootstrap.build configured in cc. For more information on setting up cc.net, review the documentation that comes as part of the distribution. The ccnet.config file should include a <nant> node under <tasks>, similar to this…


<nant>
  <executable>C:\nant-0.85\bin\nant.exe</executable>
  <buildArgs>-D:debug=false</buildArgs>
  <buildFile>bootstrap.build</buildFile>
  <targetList>
    <target>go</target>
  </targetList>
</nant>

The Nant build script then calls a Nant custom user task which has the responsibility of executing the stored procedure and formatting the results into an xml file. To execute the custom task, somewhere along your build path, you will include something like…


<project name="ccnetLaunch" default="go">
<target name="go">
<loadtasks assembly="${nant::get-base-directory()}/mytask.dll" />
<myTask ConnectionString="MY_CONNECTION_STRING" SqlStatement="sp_myTask"
LogFile="myTask.xml" />
</target>
</project>

<loadtasks/> registers <myTask/>, which is built into an assembly and, in this case, placed into the nant/bin directory. The nant custom user task itself is straightforward; the nant distribution includes a basic template sample which you can follow easily enough.
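To give you an idea of the shape of the thing, here’s a minimal sketch [mine- the MyTask class and the myTaskResults/row element names are illustrative, not from our actual build] against the Nant 0.85 task API, matching the attribute names used in the build file above. It executes the stored procedure and spools each row of the result set to the xml log file:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Xml;
using NAnt.Core;
using NAnt.Core.Attributes;

[TaskName("myTask")]
public class MyTask : Task
{
   private string connectionString;
   private string sqlStatement;
   private string logFile;

   [TaskAttribute("ConnectionString", Required=true)]
   public string ConnectionString
   {
      get { return connectionString; }
      set { connectionString = value; }
   }

   [TaskAttribute("SqlStatement", Required=true)]
   public string SqlStatement
   {
      get { return sqlStatement; }
      set { sqlStatement = value; }
   }

   [TaskAttribute("LogFile", Required=true)]
   public string LogFile
   {
      get { return logFile; }
      set { logFile = value; }
   }

   protected override void ExecuteTask()
   {
      // execute the stored procedure and write each result row to the xml log
      using (SqlConnection connection = new SqlConnection(connectionString))
      using (SqlCommand command = new SqlCommand(sqlStatement, connection))
      {
         command.CommandType = CommandType.StoredProcedure;
         connection.Open();

         using (SqlDataReader reader = command.ExecuteReader())
         {
            XmlTextWriter writer = new XmlTextWriter(logFile, System.Text.Encoding.UTF8);
            try
            {
               writer.Formatting = Formatting.Indented;
               writer.WriteStartElement("myTaskResults");
               while (reader.Read())
               {
                  // assumes the column names returned are valid xml element names
                  writer.WriteStartElement("row");
                  for (int i = 0; i < reader.FieldCount; i++)
                     writer.WriteElementString(reader.GetName(i), Convert.ToString(reader[i]));
                  writer.WriteEndElement();
               }
               writer.WriteEndElement();
            }
            finally
            {
               writer.Close();
            }
         }
      }
   }
}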

The xml output file itself is then merged into the cc build report. Again, your ccnet.config file handles this easily with…

<publishers>
  <merge>
    <files>
      <file>myTask.xml</file>
    </files>
  </merge>
  <xmllogger />
</publishers>

Finally, you need to add the xslt which transforms the xml output for display on the cc.net dashboard. Add the xslt file to the webdashboard xsl folder and append to the dashboard.config file something similar to this…


<xslReportBuildPlugin description="My Results" actionName="MyDataReport" xslFileName="xsl\mytask.xsl" />
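As for the xslt itself, its shape depends entirely on what your stored procedure returns. As a rough sketch [assuming the <myTaskResults>/<row> layout from the task sketch above- adjust to taste], something like this renders each merged row as a table row on the dashboard:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <!-- pull the merged rows out of the build log and render them as a table -->
  <xsl:template match="/">
    <table>
      <xsl:for-each select="//myTaskResults/row">
        <tr>
          <xsl:for-each select="*">
            <td><xsl:value-of select="."/></td>
          </xsl:for-each>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>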

Now each build calls your custom user task and saves the result of the stored procedure to an xml file, which is merged into the build output and accessible from the dashboard for as long as you keep build logs 🙂

Following this concept should give you an idea of how to start owning your process and not just going with the stock standard install. Set it up, take some risks and call it your own.

Doom?

The List itself should probably be rated somewhere 🙂
But certainly some valid reasons to point out the obvious: we cannot continue as we currently are [as a species] with our pervading [and degraded] morals and [de-]values and still expect to survive in some half-functional world.

who be bluffing who?

but that’s not to be a nay-sayer… o no.. on the contrary:

life is for living and living to the full.. indeed:
the Good Shepherd came so that we could have life and life more abundantly.
and we all know that life is not just the breathing and eating part.. it’s that which refers to vitality- and that is what He came to give us… abundant vitality 🙂

so look past the doom and nay-saying- these are more comments stating the obvious… life is good, because God is good.

Obituary Of Common Sense

circulated and received by email… author: unknown.

Today we mourn the passing of a beloved old friend, Mr. Common Sense.
Mr. Sense had been with us for many years. No one knows for sure how old he was since his birth records were long ago lost in bureaucratic red tape.

He will be remembered as having cultivated such valuable lessons as knowing when to come in out of the rain, why the early bird gets the worm, and that life isn’t always fair.

Common Sense lived by simple, sound financial policies (don’t spend more than you earn) and reliable parenting strategies (adults, not kids, are in charge).
His health began to deteriorate rapidly when well-intentioned but overbearing regulations were set in place.

Reports of a 6-year-old boy charged with sexual harassment for kissing a classmate; teens suspended from school for using mouthwash after lunch; and a teacher fired for reprimanding an unruly student, only worsened his condition.

Mr. Common Sense declined even further when schools were required to get parental consent to administer aspirin to a student, but could not inform the parents when a student became pregnant and wanted to have an abortion.

Finally, Common Sense lost the will to live as the Ten Commandments became contraband; churches became businesses; and criminals received better treatment than their victims.

Common Sense finally gave up the ghost after a woman failed to realize that a steaming cup of coffee was hot, spilled a bit in her lap, and was awarded a huge financial settlement.

Common Sense was preceded in death by his parents, Truth and Trust; his wife, Discretion; his daughter, Responsibility; and his son, Reason.

He is survived by three stepbrothers: My Rights, Not Me, and Lazy.

Productivity Metrics

There is one way to define output for a programming team that does work. And that’s to look at the impact of the team’s software on the business.

Nothing like common sense to clear the air 🙂 The fuzziness sets in, however, when we try to gauge what that impact will look like before it is an impact. So we [everyone involved in software production] have tried a number of ways to refine the one metric [or group of metrics] that we can rely on to give us a reliable clue ahead of time.

Mr Shore has posted his take, and refers to the Poppendiecks and Fowler for their perspectives. In addition, there are a dozen or so prescribed metrics that have been suggested, with varying degrees of intensity, by the handful of established agilities. Amongst all of them, my favourite and most accurate has got to be: “commits per week” [CPW].

A commit, in the way my team understands it, is the smallest change you can make to the codebase without breaking it. That’s it. It can be one story, two stories or only half a story. It can be a bug fix or a refactoring. Either way, it’s a productive change because whatever gets committed is, by principle, designed to improve the codebase.

Why is this such a good metric? Because it is hard to jimmy, and it is wonderfully self-regulating. In a team environment, the metric is also straightforward and honest in its interpretation. Most productivity metrics fail because there’s always *another* way to interpret them; they’re loaded with ambiguity which can be used negatively, especially when the going gets rough or political.

Off the bat, if that’s a motive [ammunition] for even using a metric, or if that kind of temptation is too great- then no metric is ever going to work fairly. That being said…

Why you can’t jimmy a CPW
Whether you bloat your code, starve your code, design badly, over-engineer or don’t design at all, there’s only *so* long you can work with something before you need to “offload” it. By offloading, i mean, add it to the codebase so you can work on the next part of it. In a team environment [more than one developer], the longer you wait, the more painful your commit is going to be [merges and out-of-date logic]. The more painful your commit, the longer it takes to commit, and the more trouble you start running into. Now when everyone else in your team is committing n times a day, and you’re only contributing n/2, your siren should start wailing. The team as a whole is not productive. If you try to compensate for a bad CPW number by making multiple commits of very little consequence, you’ve got to sit in front of a test harness for most of your day watching for green, or risk breaking the build, which disqualifies the commit anyway. As a result, you end up getting less work done, which impacts on your estimates and delivery time anyway.

For each team, the average CPW number will vary depending on the type of work being done. For example, a spike will cut down CPW, but it should cut it down consistently across the spike team. It is also important to realise that CPW will fluctuate, peak and fall, and that you cannot aim for an “average”. Not to say that you cannot maintain an “average” for a length of time if you’re into a long, predictable season of development.

As with most numbers, the actual value of a CPW holds more of an academic interest, but the values compared as trends are highly indicative of production. For example, over a period of 220000+ changes to the code base, our average CPW per resource [be that a pair or individual] is 20. Over a five-day week, that’s 4 commits per day, or roughly one every 100 minutes of coding time. Interesting. But to make that a rule for each iteration, and make the number a performance indicator over every iteration to “make the target”, is just ludicrous.
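If you’d rather watch the trend than eyeball it, a toy sketch like this [entirely illustrative- feed it real timestamps pulled from your source control log] buckets commits into calendar weeks:

using System;
using System.Collections.Generic;
using System.Globalization;

public class CpwTrend
{
   public static void Main()
   {
      // made-up commit timestamps standing in for your source control log
      DateTime[] commits = new DateTime[] {
         new DateTime(2007, 3, 5, 9, 10, 0),
         new DateTime(2007, 3, 5, 11, 5, 0),
         new DateTime(2007, 3, 6, 10, 30, 0),
         new DateTime(2007, 3, 13, 14, 0, 0)
      };

      // bucket commits by calendar week and count them
      Calendar calendar = CultureInfo.InvariantCulture.Calendar;
      Dictionary<int, int> perWeek = new Dictionary<int, int>();
      foreach (DateTime commit in commits)
      {
         int week = calendar.GetWeekOfYear(commit, CalendarWeekRule.FirstDay, DayOfWeek.Monday);
         perWeek[week] = perWeek.ContainsKey(week) ? perWeek[week] + 1 : 1;
      }

      // the absolute numbers matter less than how they move week on week
      foreach (KeyValuePair<int, int> entry in perWeek)
         Console.WriteLine("week {0}: {1} commits", entry.Key, entry.Value);
   }
}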

I’m all for metrics being utilised to measure and estimate business return on investment- it’s part of being accurately successful- but tempered against the temptation to be blind. The knee-jerk response is not to use any metrics for fear of being misrepresented. And there are some rather “extreme programmers” 😉 supporting this position. Don’t measure because you can’t measure. That kind of thinking can also seriously limit your potential.

So either extreme position on the use of metrics is snake magic and has no place in a team committed to beautiful programming. They’re useful, everybody uses them and there are plenty to choose from. Keeping track of them, and in particular CPW, tempered with some common sense, can give you some very early indicators of what kind of impact your team is going to have before it gets called an impact.

Multicast Delegate Gotcha

There are enough multicast delegate samples in .Net available on coding websites, including MSDN, to get you started on how to make use of them. The ‘why’ is tackled in some of them, but few include a section on “things to look out for”. This post details one such gotcha: using them in the Observer Pattern.

Let’s start with a commonly published multicast delegate sample, flavoured with Observer Pattern language:

public class Subject
{
   public delegate void EventHandler(object from, EventArgs e);
   public event EventHandler Notify;

   public void OnNotify(EventArgs e) {
      if (null != Notify)
         Notify(this, e);
   }
}
public class ObserverA
{
   public void Notify(object from, EventArgs e) {...}
}

This design has several advantages and is endorsed by popularity of use in ASP.Net, so why not use the model outside that environment? For instance, an observer simply needs to subscribe by:

ObserverA obs = new ObserverA();
subjectInstance.Notify += new Subject.EventHandler(obs.Notify);

The burden of managing subscriptions is relegated 🙂 to the multicast delegate. No more foreach loops and keeping references on the observers. So, on the surface, all seems well in paradise. Further, the loose coupling between subject and observer via an interface [the delegate] promotes a warm and fuzzy feeling. The only snag is scope.

None of the samples you find deal with scope effectively, since most of them deal with observers as either static methods [probably the most famous] or as methods inside an ASP.Net page life cycle. And things can get fuzzy there. My next challenge is: what happens when the observer goes out of scope?

In order to clean up properly, the observer has the responsibility of unsubscribing. If it doesn’t, the subject will just “resurrect” it each time it fires an event, even if you dispose of your observer. Ideally then, in order to overcome this problem, the “destructor” on the observer needs to unsubscribe, if subscribed. To do this, the observer must keep a handle on the subject [still part of the pattern rules] in order to:

subjectInstance.Notify -= myEventHandlerInstance;

Which means the observer starts to take on more form:

public class ObserverA
{
   private Subject source;
   private Subject.EventHandler myEventHandlerInstance;
   public ObserverA(Subject subjectInstance)
   {
      source = subjectInstance;
      myEventHandlerInstance = new Subject.EventHandler(Notify);
      source.Notify += myEventHandlerInstance;   // subscribe up front
   }
   ~ObserverA() { source.Notify -= myEventHandlerInstance; }   // unsubscribe on the way out
   public void Notify(object from, EventArgs e) {...}
}

Now we have each observer maintaining a reference to the subject instance so that if the observers fall out of scope, they can clean up their subscription. But now, what happens if the subject instance goes out of scope first? How does it notify observers of its unavailability? First jump is to create a “DyingEvent” which becomes a mandatory subscription for all observers: Yuk! But thinking some more on it, does it even matter? If the subject dies, the observers won’t receive any more notifications, but then when they die, they try to “unsubscribe”. Oops. Gotcha!

Any ideas?
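For what it’s worth, here’s one well-worn direction [my suggestion, not part of the original sample]: make the cleanup deterministic with IDisposable instead of leaning on the “destructor”. A finalizer can never run while the subject’s delegate still holds a reference to the observer anyway- which is exactly what keeps “resurrecting” it:

public class ObserverB : IDisposable
{
   private Subject source;
   private Subject.EventHandler handler;

   public ObserverB(Subject subjectInstance)
   {
      source = subjectInstance;
      handler = new Subject.EventHandler(Notify);
      source.Notify += handler;
   }

   public void Notify(object from, EventArgs e) { /* react to the subject */ }

   // deterministic unsubscribe; since we hold a reference to the subject,
   // it cannot be collected before us, so this is always safe to call
   public void Dispose()
   {
      if (source != null)
      {
         source.Notify -= handler;
         source = null;
      }
   }
}

The caller then owns the lifetime explicitly [a using block or an explicit Dispose], and neither side has to rely on the garbage collector getting the order right.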

Other references:
Softsteel Solutions: C# Tutorial Lesson 16: Delegates and Events
A Beginner’s Guide to Delegates