8 Steps to improve quality and reduce delivery time: Part 2

In Part 1 we looked at some conventional approaches to quality and exploded a few myths. Getting better at testing isn't the only thing you can do. Remember, your customer doesn't care how good your code is if he doesn't have a usable, working solution.

5: Use static analysis from day one

Recently I went on a rescue mission for a small project (don't tell them I said it was small!).
Six months into the project the development team had ramped up by 400%, and there was concern that coding standards and quality were slipping. The company started to enforce its formal manual code review policy and was surprised when this produced no improvement. I surveyed the developers and discovered that about half of them were referring to coding standards. Correction - they were referring to 8 different sets of coding standards.

As a first step we installed Sonar and turned on the default Checkstyle and Findbugs rules. The results were truly shocking. I learnt that Checkstyle was installed in their default IDE, but most developers had turned it off because of the size of the reports. Findbugs had initially been part of the build process, but this too had been turned off because:

  1. It made the builds take too long
  2. There were too many failures; fixing them all just to make the build pass would take too long and reduce productivity!

The issues reported by Findbugs were real and did need fixing. A percentage of those highlighted by Checkstyle were just noise, but the rest did need fixing. Remember that a developer spends far more time reading code than writing it. We were unable to enforce the rules immediately because fixing the existing issues would have taken several man-years of effort. The first thing we did was synchronise the formatting rules between the IDE and Checkstyle and set up the IDE to format code on save. We also turned off the rules we weren't interested in (if you don't need to fix it, don't report on it). This significantly reduced the noise and allowed us to insist that code had to pass all of the remaining rules before it could be checked in. It took several months before we were able to turn on strict enforcement, but this approach let us significantly improve quality without stopping all development.
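
To make the point that these were real defects concrete, here is a minimal sketch (the class and method names are invented for illustration) of the kind of bug Findbugs catches: comparing strings with `==` checks object identity rather than contents, so it can pass in toy tests and silently fail in production.

```java
public class StringCompare {
    // Findbugs flags this: == compares object references, not contents.
    static boolean brokenMatch(String a, String b) {
        return a == b;
    }

    // The fix: use equals(), guarding against null.
    static boolean fixedMatch(String a, String b) {
        return a != null && a.equals(b);
    }

    public static void main(String[] args) {
        // Two distinct String objects with identical contents.
        String s1 = new String("order-42");
        String s2 = new String("order-42");
        System.out.println(brokenMatch(s1, s2)); // false - a latent bug
        System.out.println(fixedMatch(s1, s2));  // true
    }
}
```

A human reviewer can easily skim past a bug like this; the tool never does, which is a large part of why the manual reviews were adding so little.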

Along the way we installed and activated additional sensors for Sonar (it's a great tool if you're interested). Oh, and we stopped wasting time on manual code reviews!

6: Get real early

Traditional project plans defer performance, security and failover testing until late in the project, often because deferring spending on infrastructure until it can no longer be avoided improves the cashflow. In the rush to adopt agile delivery and test-driven development, how many organisations still keep deployment and infrastructure separate from software development? If you are building a system that needs to process thousands of transactions a minute in a clustered environment, you cannot test effectively against a single server that is capable of processing a few hundred transactions per hour.

The reality is that you have no idea about the real performance of the system, or how much effort will be required to make it acceptable, until shortly before go-live. Wouldn't you rather learn that you had made an incorrect design choice after five days than try to fix it after 200? Real means real data too: all the variants, all the volumes and the right quality. You cannot test against dummy data that was cobbled together in SQL Developer or Excel, and there is no point in acting surprised when your assumptions about the data turn out to be invalid. This means joint responsibility - the customer has to join the party too. It is not fair to hold a vendor to a delivery deadline while hoping they will just get it right without real data to test against.
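
As a minimal illustration (the names and volumes are invented), here is the sort of design choice that looks fine against a handful of dummy rows but collapses at realistic volumes - a de-duplication step that does a linear scan per record, making it quadratic overall, against the constant-time-lookup alternative:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

public class VolumeCheck {
    // Finishes instantly against a toy data set, but is O(n^2) overall.
    static int dedupWithList(List<String> ids) {
        List<String> seen = new ArrayList<>();
        for (String id : ids) {
            if (!seen.contains(id)) { // linear scan on every record
                seen.add(id);
            }
        }
        return seen.size();
    }

    // Same result with a HashSet: O(n) overall.
    static int dedupWithSet(List<String> ids) {
        return new HashSet<>(ids).size();
    }

    public static void main(String[] args) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            ids.add("txn-" + (i % 8_000)); // 8,000 distinct ids
        }
        long t0 = System.nanoTime();
        int a = dedupWithList(ids);
        long t1 = System.nanoTime();
        int b = dedupWithSet(ids);
        long t2 = System.nanoTime();
        System.out.println("list: " + a + " ids in " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("set:  " + b + " ids in " + (t2 - t1) / 1_000_000 + " ms");
    }
}
```

Both versions produce the same answer, and on a toy data set both look instant. The gap only shows up when the volumes are real - which is exactly the point.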

7: Walk the unhappy path

The happy path is a popular term in modern software development and testing. It provides a way to prove quickly that you are doing the right thing and to deliver value to the customer. It certainly is useful to follow the happy path, but it's all too easy to forget:

  1. We are trying to deliver completed features
  2. We are trying to deliver potentially shippable software every sprint so that we can realise the value early
  3. We deliver work that is done - not nearly done
  4. The happy path is only a small percentage of the entire feature - delivering the happy path does not mean you are nearly done
  5. Your customers (and their customers) will always find the unhappy path

If you don't start down the unhappy path early you are likely to spend a long time on it after your product has been delivered.
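
As a sketch (the parser and its validation rules are invented for illustration), notice how the unhappy paths outnumber the happy one - null, blank, junk and out-of-range input all need an explicit answer, and each is a piece of the feature that "happy path done" quietly leaves undone:

```java
public class QuantityParser {
    // Happy path: parse a positive integer quantity.
    // Unhappy paths: null, blank, non-numeric and non-positive input.
    static int parseQuantity(String raw) {
        if (raw == null || raw.trim().isEmpty()) {
            throw new IllegalArgumentException("quantity is required");
        }
        int value;
        try {
            value = Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("not a number: " + raw);
        }
        if (value <= 0) {
            throw new IllegalArgumentException("must be positive: " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(parseQuantity(" 12 ")); // the happy path: 12
        // The unhappy paths your customers will find for you:
        for (String bad : new String[] { null, "  ", "abc", "-3", "0" }) {
            try {
                parseQuantity(bad);
                System.out.println("MISSED: " + bad);
            } catch (IllegalArgumentException e) {
                System.out.println("rejected: " + e.getMessage());
            }
        }
    }
}
```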

8: Automate your deployment

If it's too difficult to automate, it's probably too difficult to repeat. There is good value in the adage "Integrate early - integrate often".

Continuous integration does not end with the build server or automated testing. The next step is to automatically deploy to a separate, production-like environment and, of course, run the tests again. This isn't to make the life of your infrastructure team easier - it's to verify your deployment process and its repeatability. It also gives you the opportunity to iron out the wrinkles long before you come to do it for real. There is another old adage that is appropriate here: practice makes perfect. With the best will in the world, someone will miss a step in your meticulously constructed deployment guide. Or the person who repeatedly does your manual deployment will not notice the missing step in the instructions because it's part of his routine. Unfortunately he is not being delivered with the software.

The customer does not care if your software is perfect, meets all of his functional requirements and has zero bugs. If he can't use it, it has no value. And if he can't deploy it, he can't use it.
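
One cheap piece of this to automate is a post-deployment smoke test: deploy, then let the pipeline prove the application actually answers before anyone calls the deployment good. A minimal sketch, assuming the deployed application exposes a /health endpoint (the endpoint name is an assumption, and a local stub server stands in for the real application here):

```java
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class SmokeTest {
    // Returns the HTTP status of the health endpoint, so the
    // pipeline can fail the deployment on anything but 200.
    static int healthStatus(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        return conn.getResponseCode();
    }

    public static void main(String[] args) throws IOException {
        // Stub standing in for the freshly deployed application.
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/health", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        stub.start();

        int status = healthStatus("http://localhost:" + stub.getAddress().getPort() + "/health");
        System.out.println(status == 200 ? "deployment verified" : "roll back: " + status);
        stub.stop(0);
    }
}
```

The point is not the check itself but where it lives: in the pipeline, run on every deployment, instead of in someone's routine.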

So you want to improve quality and reduce your delivery time. There are no silver bullets, and no radical changes are required - but a little common sense goes a long way.