Project Post Mortem

May 3, 2005

Since July last year I have been working on the design, development and testing of a project to provide wireless data access to field workers for a resources client. My role was primarily focussed on the design, development and performance/resilience testing of the EAI solution in Microsoft BizTalk 2004 and our own custom-made Resequencer software. This part of the project was initially planned as a migration from BizTalk 2002 to BizTalk 2004; however, as almost anyone tasked with this type of work has learned, there is no point in migrating from one to the other without redesigning your messaging system to take advantage of all the new bits that BizTalk 2004 gives you.

We went live 6 weeks ago, replacing the existing architecture over the course of a weekend, and handling circa 1000 users on the Monday morning without a hitch (bar a few minor things that cropped up in the following weeks). I know this is really classed as a relatively small enterprise application, but it is the biggest single thing I have been involved with to this point in terms of concurrent users on a single system. All my previous work was centered on smaller numbers of users per site, distributed among hundreds of individual clients.

I have now moved on to new things and have had time to reflect on the entire development cycle. I think now is a good time to share some of what I believe we did right. Of course there are things we should have done better too; I’ll throw them in as well. I’m not much of a writer and I find it hard to squeeze these things out, so I will do this in a couple of parts.

Virtualized Development

This is the single best thing we did. For those who do not know what I mean by virtualized development, the simplest explanation I have found so far is in the first section of the VMware manual. VMware is the software we used to run our virtual machines.

“VMware Workstation is desktop software for developers and IT professionals that allows you to run multiple x86-based desktop and server operating systems simultaneously on a single PC, in fully networked, portable virtual machines—with no rebooting or hard drive partitioning required.”

As an extension of this, I would describe Virtualized Development as the process of developing, testing and building software within a virtual machine.

Our virtual machine ran Windows 2000 Advanced Server (as this was the main platform we were developing for) and contained the standard raft of development tools, SQL Server and BizTalk Server: a single, self-contained environment that could run the entire system.

The main benefits of running in this way were as follows:

I believe this really did help us quite a lot, as our developers (myself included) were constantly up to date with the latest changes everyone else was working on. The process we generally adopted for moving from one build to the next was:

  1. Fire up new Virtual Machine.
  2. Spend X number of days working on specific features / SIRs.
  3. Check in all changes to source control and kick off a build (builds generally ran overnight).
  4. Get the new virtual machine generated by the build (with your changes as well as everyone else’s).
  5. Start working on new tasks.

Our build process relied on overnight builds, so there was a lag between checking in changes and being able to get a new virtual machine with those changes built in. Although we didn’t use it, I would definitely recommend a continuous build process along the lines of CruiseControl to reduce this latency to something more acceptable.
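For anyone who hasn’t seen it, a CruiseControl project definition is just an XML file telling the build server where to watch for check-ins and what to run when one arrives. The sketch below is purely illustrative rather than anything we ran (I’m assuming CruiseControl.NET here, given the Microsoft stack): the project name, Subversion URL, paths and NAnt build file are all made up, and the element names are from memory, so check them against the CruiseControl.NET documentation before relying on them.

    <cruisecontrol>
      <project name="FieldWorkerMessaging">
        <!-- Watch a (hypothetical) Subversion repository for new check-ins -->
        <sourcecontrol type="svn">
          <trunkUrl>http://buildserver/svn/messaging/trunk</trunkUrl>
          <workingDirectory>C:\builds\messaging</workingDirectory>
        </sourcecontrol>
        <!-- Poll every couple of minutes instead of waiting for the nightly build -->
        <triggers>
          <intervalTrigger seconds="120" />
        </triggers>
        <!-- Run the same NAnt build file the overnight build would have used -->
        <tasks>
          <nant>
            <baseDirectory>C:\builds\messaging</baseDirectory>
            <buildFile>messaging.build</buildFile>
          </nant>
        </tasks>
        <!-- Log the result so the web dashboard (and CCTray) can report the outcome -->
        <publishers>
          <xmllogger />
        </publishers>
      </project>
    </cruisecontrol>

The point is simply that once the build itself is fully automated, the jump from a nightly build to a build on every check-in is a small configuration change rather than a big piece of work.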

Fully Automated Build

Using the virtualized development approach above would have been completely impossible without a fully automated build process. I’ll post more on how our automated build worked in the next post.