Categories
programming Technology

Long Live the Desktop

No doubt about it, hardware is getting plain ridiculous in terms of throughput. Gone are the days of old when 64K of RAM was *plenty*. That said, we still have this unyielding and biased fascination with RAM and processor speed when it comes to comparing computing devices on the market. Truth is, computing has changed sufficiently for us to largely ignore those specs and look elsewhere for performance gains, namely the hard drive.

“Hahaha”, I hear some tech enthusiasts saying. How does a hard drive speed up your computing? A bigger or even faster hard drive won’t make any difference to the speed of your processing: your processor does that, right? And the ability for you to multitask and have lots of applications open at once is directly dependent on how much RAM you have, right? Right? Computing 101. Well, no.

A regular hard drive can only spin so fast, about 7200rpm; on a laptop, usually 5400rpm. In days of olde (when everybody learnt all there was to know about computers) the biggest bottlenecks were in fact processors and memory, and out-of-memory errors were commonplace. To get past that, operating systems started using a technique called swapping, which persisted some memory contents to the hard drive to make room for more. That way, every time you tabbed between applications, if the working memory for that application wasn’t immediately available, it would “restore” the working memory from the hard drive and swap out what was currently in memory. And so RAM became a more pressing requirement: the more RAM you had, the less often you had to swap out.

But when you did need to swap out, you’d need more space on your hard drive to persist the swap because there was, simply put, more RAM content to persist. Of course, the speed at which your device could handle that (directly related to how often your own workload triggered it) also impacted perceived performance. Most of the time, adding more RAM and a better processor did increase performance. To a point. Memory and processor speeds are now so fast that in most devices, the RAM and processor end up waiting for the hard drive to spin fast enough and catch up with writing to and reading from disk.

You want to give your computer real performance? Get an SSD. 3 years ago, I got an SSD for my MBP (Core 2 processor with 4GB RAM). At the time, it was on the edge of requiring an upgrade to a Core-i processor, but the cost of a faster processor hardly justified the little-to-no gains. I would still be stuck with 5400rpm of spinning hard drive. Not to mention that laptops are limited by design in their speed because they are less efficient at cooling themselves down, no matter what you have inside. Three years down the line, I’m still running that same SSD and my MBP still outperforms its more recent versions.

As a software developer though, working off a laptop has limitations, especially when you start running multiple virtual machines, each with their own development environments. Sure, you get the benefit of being able to work from anywhere but that’s very different to being hugely productive and efficient while you are actually coding.

Waiting 40 minutes for a build? Long enough to walk away, but what if it stops unexpectedly while you’re away? You could get on with something else, but then you risk the interruption of a context switch. And what about the performance of the machine while the build is taking place? (And yes, automated builds and a dedicated build machine are all good- this is not that kind of situation.)

Waiting 40 seconds for an environment to load before debugging? Enough time to almost forget what you’re doing. Code and test for 2 minutes, then make another change which triggers another reload. Wait another 40 seconds watching progress spinners. Add up the number of 40-second waits during the course of one day and you get a free lunch.

Now what if that same build went from 40 minutes to 8 minutes? And what if that massive productivity increase came at ZERO cost to the perceived performance of other applications during the build?

Enter an All-In-One, aka the AIO: Core i5-4570 2.9GHz with 16GB RAM. On its own, a pretty decent spec, but dished out with SSDs (yes, plural, something you can’t do with a laptop) and you have, quite simply, a beast of a workhorse. One SSD to keep up with the RAM and processor and service the primary operating system, and another SSD to run VMs off. And the sweet part: it costs the same as a laptop but with very clearly N-times more horsepower, where N is an integer > 5.

The cost difference between the SSD and a traditional 7200rpm HDD has paid for itself inside a month in billable hours alone not spent waiting. And that’s without factoring in the productivity gains and general mood enhancement of being able to spend more time actually coding and not waiting for anything- ever.

I don’t wait for a system to boot up. I don’t wait for an IDE to load. I don’t wait for a compile. I don’t wait for a VM to get up and running. I don’t wait for a system to shutdown. I just simply don’t wait anymore. The only time I stop while coding now is when I stop to think and plan. I am back in control of all the subtle interrupts and can direct my attention more intentionally. Liberating.

Some say the desktop is dead- I disagree. Not for the kind of software development I do. Yes, I once tried to run everything off an iPad and the Cloud, and what a mistake (read: learning experience) that was. And I really don’t like desktops, but the AIO is compelling. And it comes with a capacitive multi-touch screen, so testing apps with gestures is dead easy. The laptop is still useful for some things, I guess, but less and less so.

The desktop computer is dead. Long live the desktop computer.

Categories
Apps Business programming

BabyGroup Tech Journey

So here’s a little behind-the-scenes on one of SA’s newest e-commerce shops, BabyGroup. This has been a really great project to be involved with so it’s nice to be able to share a little on the journey. It’s also one of the few that are not “top-secret” so I can actually share some detail on it. Ha!

The journey:

BabyGroup started with the usual preliminary engagements and project proposals during August/September 2012. In that time, over a series of high-level discussions, the vision was explained with a view to understanding the technical requirements, all leading to the appointment of yours truly to head up the technical side of the project, commencing October 2012.

We opened doors on 1 May 2013, 8 months later (we did some soft launching before), with full e-commerce and online credit-card payments with 3D Secure (yes, there were some small delays, both real-world and technical, with regards to 3D Secure).

So what happened in those 8 months:

* We designed the aesthetics of the site and put together the 60+ page template designs required for launch
* We iterated through the site’s workflow, with a focus on the basket checkout process and making sales (i.e., UX)
* We picked an ecommerce/CMS platform to serve as the base for our needs
* We architected integration points and highlighted custom code requirements to meet some of BabyGroup’s unique requirements
* We implemented all of the above

Outside of that, in the real world, offices needed to be set up, with warehousing, and stock needed to be bought. Photos needed to happen, marketing needed to kick in- basically, a LOT needed to go down. I, however, focused on the site architecture, implementation and roll-out.

And to do that, I had 1 resource: me. But this is an area I thrive in: full-stack development, with a handle on the full life-cycle.

I chose Kentico CMS as the base platform because it’s an ASP.NET solution, and I prefer the predictability of strongly-typed systems where the complexity is expected to explode. There’s a lot that can be said about that, and it would stir up a debate, no doubt, but in the end, I needed a platform I could do gymnastics on. And the trusty Windows Server, SQL Server, C#, ASP.NET stack is hard to beat in that regard.

But I didn’t have any Kentico experience at all. In fact, I had far more Magento experience at this point. Well, it was probably my experience with Magento (and PHP specifically) that motivated my decision to go Kentico/C#. Read what you will between the lines *wink*.

If you would like more information on Kentico CMS as a platform option, I would strongly encourage you to visit their site.

I also chose Azure as the hosting provider. Not because I had experience with it (in fact, my experience with Azure at the start was very limited), but because I could see (and not from reading the headlines of pseudo-tech-journals) the promise it held, along with some of the successes others were having.

Both were very steep learning curves. Again, something I love tackling.

Looking back, this was a perfect project for me really: I had 2 brand new technologies to master, I needed to code fluently in C#, integrate with several unknown APIs, do some HTML/CSS wizardry based on a PSD file, add in some funky JavaScript in places (I even sliced some images in Photoshop.. oooo *hahaha*) and do all that ASAP, based on a Word document (with a lot of Visio diagrams). Love it!

We changed strategies en route (several times) and stuck to the plan of planning to change when we needed to. Again, something I absolutely love about programming and software. The environment is very fluid, the ideas flow thick and fast, but budgets need to be honoured. A lot of this really was due to the sheer hard work and brilliance of Henri and James, the Jedi masters behind it all, who have taught me heaps about real-world business.

This is also something I love about programming in startup environments. The mind-share and interaction with the business side adds a very real dynamic and pressure to the software side and keeps it real. Yes, you can get all fluffy about a lot of things inside coding, with design patterns and performance and unit testing and strict iterations and planned releases… but there’s a point where flesh meets bone and the only thing that matters is what’s happening on the site, live, right now.

And this takes discipline. It doesn’t mean you throw out good practices and take shortcuts. If anything, you specifically avoid those oh-so-tempting shortcuts and don’t make decisions that’ll bite you in the ass later. It just means that at times, you put your foot down, drink a double espresso and make it happen. Even at 3am if you have to.

Yes, we have full unit testing on all custom code (and still running green). Yes, we have a development environment, a fully hosted quality-assurance environment, a pre-production environment and a live production environment running on the cloud. Oh yes, we do. Excuse me while I high-5 myself here quickly.

Since launch, we’ve had some great sales, the site has been behaving nicely, the custom integration points are ticking over neatly and paying rent, and everything, for the most part, is sweet. Yes, we’ve hit some hurdles and had one or two oopsies, but nothing that couldn’t be fixed quickly and nothing that’s derailed us.

There are still plenty of new things happening behind the scenes and lots more coding underway. And the journey is, in true geekdom, très cool. Here’s to making it happen even more and, oh, if you’re looking for some neat (but not just your everyday type stuff, mind you) baby-related products, BabyGroup is where you should be looking.

P.S. And if I sound a little like I’m gloating, well, maybe I am. I’m just proud of the work I have done and try to do. I love what I do and hardly work a single day. I’m not ashamed of that. Why would I be? Yes, I’m not perfect and I make mistakes from time to time- just part and parcel of being human. But I do try my best to make things right. And this project, like a lot of others (even those I can’t talk about), has been such a blessing to be a part of, it’s hard not to sing and dance about it. So, here I am, singing and dancing….

Categories
perspective programming

Getting In The Way

2013, and almost a decade has passed since I embraced agile development practices. Along the way, some of the fluff got dropped, some of it became lifestyle, some of it evolved into something better. “We” went through a lot. And those who made the decision back then to abandon established practices and try something new did it because “there had to be a better way” of releasing software. That was 10 years ago. Software release was messy. Bugs were expensive. Fast forward to today and production bugs are still expensive; if not more so.

What surprises most, though, is that after all this time, there still remain a number of projects (a collective term which includes startups, corporate teams, freelancers) where the methodology and practices are *still* 1990. If anything, members on some of those teams are even more resistant to change. And disturbingly, not all of them have even been coding for that long. Oh, and the positions in the debates haven’t changed in 10 years. Time to move on…

I guess what has happened is that the experienced members of the team tried to adopt agile in some form or another at the wrong time in the wrong place with the wrong stakeholders. They got burnt. Juniors joined the team, maybe some of them eager to apply some agile practice, but the prevailing ethos was not going to let that evolve. If anything, the sentiment against any kind of agile practice was antagonistic (yes, not all decisions made by engineers are rational).

The net effect: incumbent teams with a deep mistrust of change, still struggling to release software like it was 1999. Beyond that, unhappy programmers who complain about how dull software development is. And worse, a generation of developers unwilling to take ownership. Attitudes get in the way of releasing beautiful software.

Thankfully though, on the flip side of that are a bunch of shiny, happy people. They don’t need to be agile or follow an agile methodology to be great- they just are. They get on with the job and release beautiful software. Projects (which is to say, teams of developers from 1-n) need, more than ever these days, devs happy to embrace change and own the tech domain. Introducing agile (whether actually adopted or not) is just one way of gauging how resistant your team’s mindset is.

If you’re struggling with a team, or are part of a team that’s struggling, the best thing you can do is make an effort to make it better- and the thing you’re looking for is not a messiah, a methodology, a silver bullet or a grail: it’s a thing which starts with a change inside you.

Categories
programming

Engineering CSS

With sites becoming increasingly “full” with the focus on design, regardless of the simplicity, the explosion of CSS and its management is burdensome. There are some tools and some processes, but working with CSS on a “code level” still has a long way to go. Yes, there’s SASS, which can help, but you’re still dealing with a myriad of artifacts. Let’s look at a simple block, for example:

[Example block: a “Header” above “some text”, inside a containing box]

There are 3 basic elements involved: a container block, a header and a body. This is just one block with 3 different CSS declarations. On a simple site, that’s only 3 artifacts you need to:
* name
* code
* maintain
* version control
* test

Scale that effort across a decently-sized site/project and you quickly end up with an insurmountable number of artifacts. And one of the biggest challenges facing just about every programmer is coming up with names!

Name for a library. Name for a class. Name for a method. Name for a variable. Name for a stored procedure. Name for a table. Name for a view. Name for a controller. Name for a document. Name for a style. In code, it can be relatively easy since it’s all function-based. CSS is a little different.

Sometimes you name it according to function, but you can’t always recognise it that way later on as a reference point. For example: the function of the block above might be something quite general, “information-block”. But later, on the same site, you have a similar functional “information-block” that looks different.

[Example block: the same “Header” and “some text”, styled slightly differently]

It’s subtle. But different. So you try “light-header-information-block” and “dark-header-information-block”. Ugh. It’s getting bad, so you go with abbreviations: “lthdr-info”, “dkhdr-info”. Ugh. Worse. Maintenance nightmare. So you try a different strategy and name things according to area: “news-info-block”, “forums-info-block”. That gets unwieldy, so you break up the CSS files into news.css, forums.css, assessments.css. Inside each, you have an “info-block” definition and then just make sure you only pull in the relevant stylesheet. Meh.

Then you get those general, across-the-site-but-not-always styles. It’s not easy. Then you throw IE or vendor-specifics into the mix and.. well.. your 3 artifacts just got trickier. And then let 2 or more front-end guys loose on the same site, each with their own thinking and logic…

Ok. So while I don’t have a silver bullet here, I do have some strategies for dealing with the pain as efficiently as might be. Some I’ve already mentioned. Another is in the naming of sub-elements. As a guideline, I only focus on naming the bigger containing elements.

So, .info-block-wrapper { } for example. The nested elements we can always derive:
.info-block-wrapper .header { } or
.info-block-wrapper .body { } are easy. Even:
.info-block-wrapper div.header { } or just:
.info-block-wrapper div { } where applicable.

The problem is that using class="header" and class="body" is going to cause issues simply because they are such general names. So I prefix general names with the beloved underscore. Meh. _header and _body. As a corollary to that guideline, I *never* define a _class style on its own. It’ll always be a nested definition:

.info-block-wrapper ._header { }
never
._header { }

This helps maintain some level of sanity while navigating through thousands of ephemeral definitions (yes, designs iterate way more than functional code) over the lifetime of a project with potentially as many front-end workers.
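
Putting the guideline together, a minimal sketch of the markup and styles it assumes (all names here are illustrative):

<div class="info-block-wrapper">
  <div class="_header">Header</div>
  <div class="_body">some text</div>
</div>

.info-block-wrapper { border: 1px solid #999; }
.info-block-wrapper ._header { font-weight: bold; }
.info-block-wrapper ._body { padding: 4px; }

The underscore classes only ever appear nested under a named wrapper, so the general names stay safe to reuse across blocks.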

And then, of course, there’s the performance of CSS to consider. In terms of engineering, yes, CSS still has a way to mature, while it also carries the responsibility of being the most noticeable deliverable: a double-edged sword if there ever was one.

Categories
programming Technology

Building Kentico CMS for Azure

When you’re putting a Kentico CMS site together, and you’re deploying to Azure, the one thing that doesn’t work too well is the emulator. It works, it’s just really intensive and slow-going. Especially if you want to run through small features or logic and need to do relatively frequent (and normal) build updates.

Enter build configurations and preprocessors. Under the hood, a KCMS web app for Azure is not that much different to a regular web application, with the exception of the AzureInit class. For local testing, running my app as a regular web application, I simply created a new build configuration (copied from the default ‘Debug’), calling it ‘Local’. In the other configurations, I added an extra preprocessor definition, “FOR_AZURE”, and then in the AzureInit class (in the web app project), I added:


public void ApplicationStartInit()
{
#if FOR_AZURE
    // Wire up the Azure-specific handlers only when FOR_AZURE is defined;
    // the 'Local' configuration omits the symbol, so this all compiles away.
    AzureHelper.OnGetApplicationSettings += AzureHelper_GetApplicationSettings;
    ...
#endif
}

That way, when I’m building for local testing, I skip all the Azure goodness, and then if I need to package and deploy, I let the project settings take over. Sure, there are some specific things that you won’t be able to work through- but you can still get a near-perfect resemblance of a straightforward deploy.
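
For reference, the conditional compilation symbol itself lives in the web app’s .csproj, one PropertyGroup per build configuration. A minimal sketch, assuming an old-style project file and the configuration names above:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <DefineConstants>DEBUG;TRACE;FOR_AZURE</DefineConstants>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Local|AnyCPU' ">
  <DefineConstants>DEBUG;TRACE</DefineConstants>
</PropertyGroup>

Since ‘Local’ never defines FOR_AZURE, the #if block above simply disappears from local builds.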

Categories
Business programming Technology

Opensource is not Free

Opensource, love it or hate it, has a definite place in our software ecosystem. I use opensource all the time, and I put a lot of projects up as opensource too. I don’t have any issue with it at all; on the contrary, opensource, by and large, has helped shape me as a developer. So, for the record, I love opensource. But then there’s also free software…

Free software is software you can download and use, for free, to do *something*. There is a lot of it. In fact, there is so much free software, some folk don’t even like paying a single penny for *any* software. I’m not going to touch on piracy- that’s a completely different issue. Chances are, if you look hard enough, you’ll more than likely find a free piece of software out there to do whatever you need to do. Even if you end up using two or three different free software apps to accomplish one task…

Now the proliferation of free and opensource has had an interesting side-effect. I hear a lot of “wtf?!” in response to actually paying for software. For example, I mention a licence fee of USD$300 to do a relatively complex task and I often hear: “why pay $300 when you can download and use some opensource component to do the same thing?” Or even better: “but you’re a developer, can’t you just build the same thing for me?”

*sigh*

The same thing.

It’s never the same thing. And often, to build the “same thing” would probably cost 3 times as much, if not more. And it has nothing to do with how good or bad a developer I am. Sure, I could use an opensource component. But I still need to vet the code, integrate the libraries, write a test suite for it (if there’s nothing attached) and make sure it does everything and that the “same thing” is exactly what is needed.

Sometimes that works. Sometimes it doesn’t. You can spend hours working with a “free” or “opensource” project only to discover some weird edge case that you need to cover, or you end up trying to extend it (or trim it down) and find out the architecture is a real mess. Either way, your free component is now costing you and, more, putting *your* product at risk.

By contrast, a paid-for product has (usually) been built, tested in a standard QA environment, released with support options, and comes with a warranty of sorts. I say usually because, as in life, you get software lemons too. But mostly, there is a system of connected people behind a paid-for product. An opensource project, on the other hand, is (again) usually a smaller team, more focused on *other things* (or the next delivery milestone), who don’t, can’t or won’t fix *your* problems with the codebase -for free-.

Again, there are always exceptions.

Bottom line, there’s a real business, with real people who need real money, behind the product. And yes, there are some opensource projects that are also a business- those actually fall more into a product offering than “free as in free beer” code.

Free beer code works when the dev team integrating it groks the codebase. It’s no different from having a regular team member’s contribution. Everyone needs to know what that code is doing: they can read it, debug it, modify it. If your team can’t (for whatever reason), then leave that code alone, or be happy to run the risk of bleeding later- which is usually when you really don’t want to.

And don’t be such a scrooge about paying for software. Often, the licences are well-priced because the spend has already been done- and the model is in place to make a business of selling. Again, due diligence is required, as with any other purchase. But you cannot run on the default assumption that opensource = free = cheaper.

Categories
programming

OAuth, Magento, Cookies, Ubuntu and Time

After upgrading my VMWare host (and applying some Ubuntu updates at the same time) I started noticing some strange behaviour on my Magento platform in QA. I just couldn’t log in with my regular test user account. After clearing cookies in the browser, Magento started redirecting me to its “Please enable cookies” page. Boom! Nothing else had changed between stopping work on Monday and resuming work on Tuesday. Frustrated, I simply created a new account and carried on using the new test account. That soon bombed too, mid-workflow…

Now, as if that wasn’t mysterious enough, my Twitter OAuth integrations started falling over. I really thought it was Monday- but it was actually Thursday. Not less than 45 minutes earlier, a work item which had passed QA and been committed to the repo was now suddenly failing. At the same time, my Magento session expired and I couldn’t log in again. Wait a minute…

Both the OAuth error message and the Magento error were pointing to clock-synch issues. Could it be? Short answer: yes.

Now, NTP is set to run automatically on the Ubuntu machine, but that wasn’t cutting it somehow. So I stopped the daemon, ran ntpdate manually, and voilà! Zee problem vas gone. Turns out my server was 5 minutes behind schedule.
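
For reference, the manual fix amounted to something like this (the NTP server here is illustrative):

sudo service ntp stop
sudo ntpdate pool.ntp.org
sudo service ntp start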

So somewhere between the host clock, the VMWare time-synch mechanism, the Ubuntu (11.04) updates and the NTP daemon, I’m still losing time… somehow. I haven’t tracked down the culprit specifically, but at least the cause correlates with the symptoms, and the fix both relates to and resolves the issue.

The hunt will continue once this current story is completed….

Categories
Business programming

Tech And The Art of Ninja

There’s a lot of folk that like to call themselves ninjas out there- and it’s just plain wrong. Now unless you’re familiar with the Togakure-ryū, you probably need to rethink your strategy a little…

Ok, so you don’t mean that you’re actually a -real- ninja, like, a real ass-kicking ninjutsu warrior with highly developed skills in espionage, combat and a multitude of other arts and skills. Fair enough. So what does ninja really mean then- that you’re supposed to be highly skilled? Well, here’s a funny twist: you cannot call yourself a ninja- not even a non ass-kicking-warrior one. Any title of excellence has to be bestowed upon you by one greater and more skilled than yourself. Formally. And usually, there’s a ritual of sorts involved. Sometimes, live animals are hurt.

Stop for a minute and think about every other title out there which conveys a sense of mastery: take the most common ones like professor or doctor even. These are titles earned after years of training and bestowed upon the learner by their guardians and masters- who themselves have acquired great learning over YEARS. A ninja, a real ninja, follows a similar path. The same goes for the word guru. By maintaining the traditions that these titles have been founded on, they ensure the longevity of a well-respected standard.

So calling yourself a ninja or a guru doesn’t make any sense. You’re just buying into your own hype and, unfortunately, people who know even less than you fall for it emotionally and romantically. Basically, you’re just conning everyone (most of all yourself). An even greater irony is that the most esteemed masters seem to understand just how little they know and shy away from the all-powerful titles in humility (enlightenment seems to have that effect).

So you’re a wizard (oops, there’s another title) with HTML or C++, maybe some Python- or… wait for it.. Twitter of all things. People who know less than you, will look up to you and respect your talent and skill. They might even go: “Wow. You’re such a jedi with INSERT_TECHNOLOGY_HERE!”

Awesome. Now if you believe that and start calling yourself a jedi, guru, ninja or prof because a bunch of people who know less than you actually think you are one, then you have a serious dilemma. If you haven’t spotted the irony yet, let me repeat: a worthy title is bestowed upon you by one greater than yourself- not by one lesser (and greater and lesser are purely relative terms with respect to skill levels- not judgments of human worth). That’s just plain backwards.

And no, not even if your peers think you’re a guru can you call yourself a guru. It simply doesn’t work that way. Imagine you had to see a doctor about a terminal illness, enquired about her credentials, and the reply you got was: “Well, all her friends think she knows quite a lot about health and medicine so they just call her The Doctor”. You can see where this is going, and where it came from. In early civilisations this is exactly how it worked. But part of that entitlement included a wealth of supporting evidence.

So, now you might say: well, I have a collection of really good websites (or Tweets and followers) and everyone thinks I’m the bee’s knees, so why not call myself a ninja if everyone else thinks I am? *sigh*
Well, for starters, you probably cannot ride a horse…

Elevate your own standards and have a little respect for yourself and your own hard-won skills. If you want to be called a ninja- look for someone who is WAY better than yourself- someone proven to be years ahead of you in skill- and then go and try to impress the bajutsu out of them. So much so that they say: “Well done, padawan!”. Then one day they will turn around and say: “The master has finally become the student”, at which point you have arrived. Funny thing is, it doesn’t matter anymore: now that your skills exceed those of your master, he/she can probably no longer bestow the title on you, since you are now “greater”. Checkmate.

And yes, I get that people use the terms as metaphors: “I’m looking for a ninja front-end web developer” => “I’m looking for a highly skilled front-end web developer”. Ok, so what’s wrong with the plain and obvious in the second statement? Why on earth would you choose “ninja” over “highly skilled”? Does it make you sound cool? Does it make you trend or get more search results? Are you bored with “highly skilled”?

And probably the biggest issue of all is that real ninjas and gurus are finding it harder and harder every day to find decent work and support their families since they need to spend all day trawling through completely unrelated spam.

Job Ad #3209 of 40,600,000 related to search for ‘ninja jobs’:
“Wanted: Ninja.”

Ninja:
“Yes, please!”

Job Ad #3209:
“Duties: Resolve CSS issues in IE6”

Ninja:
“WTF? Seriously?”

Ninja’s wife:
“Hey, honey! How’s the job-hunting going?”

Ninja:
“Well, I found someone I need to kill but it doesn’t pay…”

Categories
perspective programming

Team Balance

There’s a lot to be said for flexible work hours. They’re all the rage, but they can be tricky when you need to collaborate on something meaningful. Maturing teams understand this and introduce “core” hours. That is, everyone -must- be in the same space for a set number of hours during the day; you get some flexibility to decide where to put the rest: on the head, or the tail, of that day. And then you have overtime….

An experienced team will place little value on overtime. The productivity gains are superficial and short-lived. The latent bugs and burnout issues which crop up down the line are way harder to solve. And that realisation, unfortunately, takes experience. Sometimes, more than you’d like. And sometimes, even when you have that experience, you doom yourself to repeat the same mistake under the illusion of end-of-the-world pressure. But it’s not just the team you need to look out for; it’s also the individual within the greater team…

The output of any team is not merely the sum of individual contributions. It’s a collective output, the combination of a number of identifiable but difficult-to-measure actions: hallway conversations, peer pressure dynamics, inter-personal relationships, email:work ratios, extra-office activities… and on and on. For the most part though, a team result requires that the team move together at the same pace. Which is why teams can be either really funky or really frustrating, depending on your own personality.

So when you have one individual burning faster and more than the average pace of the team, you need to be cautious. Yes, leaders will put in more than normal: that’s what gives the team acceleration; impetus; momentum; drive. You have to start somewhere, sometime. But at some point, everybody should work roughly at the same pace. I.e. given a particular skillset and competence, the task should be completed by two different individuals on the team within the same business delivery time frame.

What happens when one cog spins faster than the rest? For a start, that project plan (which you mostly ignore) gets even more muddled. The expectation for delivery timeframes changes (not just for the current workload, but for future reference too). The inter-personal dynamics change (competition). The path of least resistance shifts and the workload tips in favour of the “Doer of More”. The rest slack off and you have this horrible elastic stretch in your execution plan. It will snap. 75% of the time. The other 25% of the time, it also snaps.

Now, you have to stretch that elastic from time to time. A balanced and measured test of that is a good thing. Shake things up and stir the pot. It’s healthy if controlled properly. Sometimes you get an unexpected organic boost of productivity which shifts everything into a neater gear: run with it. Recognise that and facilitate it. But don’t let it run out of steam “naturally”. The human spirit is strong: just watch the finish to any massive endurance test. People will push themselves beyond what’s probable and into a state of “cannot-do-anything-for-3-weeks-until-I-am-recovered”. So unless that’s what you want to achieve, nip it in the bud. And that takes a skilled project leader.

But as a peer, you are more intimately aware of when the pace changes. You have a responsibility to highlight that and bring transparency to the fore. That’s what standups are for, right? Talking about the technical hurdles and objective progress of a project is EASY. You can go through 100 standups without challenging yourself or anybody else. Another path of least resistance. So how about using standups to bring attention to more important matters- issues that bug you in the soul but are hard to talk about?

Again, a good leader will cut short any ad-hominem or diatribe and schedule a time and space for it to be dealt with properly. The nice thing about using the standup for that is that you have the opportunity every day to voice your concerns. You even get to sleep on it for one night, to shift your ego away from the team’s greater good, before raising it.

Balance is not something we can always achieve on our own; despite our ingrained philosophies. That’s where East and West are remarkably similar. Both focus on the individual achieving his/her own internal balance and striving for that. But there’s nothing like pitching in and helping each other achieve balance.

You’re not walking on that tightrope alone. And yes, you work hard at making sure you’re not the one that causes the fall. But help others at the same time. In the wise words of Oogway: “One often meets his destiny on the road he takes to avoid it”.

Categories
programming

XSD minOccurs Specified And The XmlSerializer

In your XSD schema definitions, minOccurs has a subtle nuance, courtesy of a leaky abstraction. Getting right to the point:

Let’s take an element definition such as:

<xs:element name="OptString" type="xs:string" minOccurs="0"/>

Now when you create your default classes using the xsd.exe tool, you will end up with a class having a property OptString. Neat, since in code, you can just set that property (or not) and it will appear in the XML (or not). It’s optional.

Now what about:

<xs:element name="OptBool" type="xs:boolean" minOccurs="0"/>

Again, there will be a property named OptBool for you to set (or not). Only this time, it won’t appear in the serialized XML. There’s an additional property named OptBoolSpecified which you need to set. If false (which it is by default) then it won’t serialize.
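
To see the difference in code, here’s a minimal sketch (Envelope is a hypothetical class standing in for whatever xsd.exe generates from the elements above):

using System;
using System.Xml.Serialization;

class Program
{
    static void Main()
    {
        var envelope = new Envelope();     // hypothetical xsd.exe-generated class
        envelope.OptString = "hello";      // a string serializes whenever it is non-null
        envelope.OptBool = true;           // setting the value alone is not enough...
        envelope.OptBoolSpecified = true;  // ...without this flag, OptBool is omitted entirely

        new XmlSerializer(typeof(Envelope)).Serialize(Console.Out, envelope);
    }
}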

Now, let’s look at:

<xs:element name="OptValueString" type="ValueString" minOccurs="1"/>

where ValueString is defined as:

<xs:simpleType name="ValueString">
    <xs:restriction base="xs:string">
      <xs:minLength value="3" />
      <xs:maxLength value="10" />
    </xs:restriction>
  </xs:simpleType>

This has the same effect as a regular string. If it’s set (or not), it will serialize (or not).

Then we have a custom type (an enumeration) such as:

<xs:element name="OptStrongString" type="TypedString" minOccurs="0"/>

where TypedString is defined as:

<xs:simpleType name="TypedString">
    <xs:restriction base="xs:string">
      <xs:enumeration id="ValueA" value="A" />
      <xs:enumeration id="ValueB" value="B" />
    </xs:restriction>
  </xs:simpleType>

It has its base in type string, but it’s an enumeration, so how will it behave?

Well, it needs its related Specified property to be set. Without that, it’s not going to serialize. And the same goes for type int. The pattern here: value types (booleans, ints, enumerations) have no null to signal “not set”, so the serializer leans on the extra Specified flag, whereas reference types like string simply use null.

In fact, experiment with the different types to understand the subtleties of minOccurs better. You could question the existential “why” of it all, or you could just accept the way it is and move swiftly along 🙂

I’ve attached some code here to jumpstart your experimentation.

Download project XSDMinOccursSandbox