Monday, December 8, 2008

Continuous integration step-by-step

Let's start with the basics: Martin Fowler's original article lays out the mechanics of how to set up a CI server and the essential rules to follow while doing it. In this post I want to talk about the nuts and bolts of how to integrate continuous integration into your team, and how to use it to create two important feedback loops.

First, a word about why continuous integration is so important. Integration risk is the term I use to describe the costs of having code sitting on some, but not all, developers' machines. It happens whenever you're writing code on your own machine, or you have a team working on a branch. It also happens whenever you have code that is checked in, but not yet deployed anywhere. The reason it's a risk is that, until you integrate, you don't know if the code is going to work. Maybe two different developers made changes to the same underlying subsystem, but in incompatible ways. Maybe operations has changed the OS configuration in production in a way that is incompatible with some developer's change.

In many traditional software organizations, branches can be extremely long-lived, and integrations can take weeks or months. Here's how Fowler describes it:
I vividly remember one of my first sightings of a large software project. I was taking a summer internship at a large English electronics company. My manager, part of the QA group, gave me a tour of a site and we entered a huge depressing warehouse stacked full with cubes. I was told that this project had been in development for a couple of years and was currently integrating, and had been integrating for several months. My guide told me that nobody really knew how long it would take to finish integrating.
Those of you with some background in lean manufacturing may notice that integration risk sounds a lot like work-in-progress inventory. I think they are the same thing. Whenever you have code that is un-deployed or un-integrated, it's helpful to think of it as a huge stack of not-yet-installed parts in a widget factory. The more code, the bigger the pile. Continuous integration is a technique for reducing those piles of code.

Step 1: get a continuous integration server.
If you've never practiced CI before, let me describe briefly what it looks like. Whenever you check in code to your source control repository, an automated server notices and kicks off a complete "build and test" cycle. It runs all the automated tests you've written and keeps track of the results. Generally, if all tests pass, it's happy (a green build), and if any tests fail, it will notify you by email. Most CI servers also maintain a waterfall display that shows a timeline of every past build. (To see what this looks like, take a look at the CI server BuildBot's own waterfall.)
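
To make the mechanics concrete, here's a minimal sketch in Python of the loop a CI server runs under the hood. Everything specific in it is an assumption for illustration: the repository URL, the email addresses, and the choice of Subversion and unittest. Real servers like BuildBot add scheduling, build slaves, and the waterfall display on top of this basic cycle.

    import smtplib
    import subprocess
    import time
    from email.message import EmailMessage

    REPO_URL = "http://svn.example.com/repo"   # hypothetical repository
    TEAM_EMAIL = "dev-team@example.com"        # hypothetical mailing list
    POLL_SECONDS = 60

    def latest_revision():
        # Ask the source control server for its newest revision number.
        out = subprocess.check_output(
            ["svn", "info", "--show-item", "revision", REPO_URL], text=True)
        return out.strip()

    def build_and_test():
        # Check out a fresh copy and run the whole automated test suite.
        if subprocess.run(["svn", "checkout", REPO_URL, "work"]).returncode != 0:
            return False
        tests = subprocess.run(["python", "-m", "unittest", "discover"], cwd="work")
        return tests.returncode == 0

    def notify_team(revision):
        # Mail the team when a build goes red.
        msg = EmailMessage()
        msg["Subject"] = "Build FAILED at r%s" % revision
        msg["From"] = "ci@example.com"
        msg["To"] = TEAM_EMAIL
        msg.set_content("See the waterfall display for details.")
        smtplib.SMTP("localhost").send_message(msg)

    last_seen = None
    while True:
        rev = latest_revision()
        if rev != last_seen:                   # a new check-in arrived
            last_seen = rev
            green = build_and_test()
            print("r%s: %s" % (rev, "green" if green else "RED"))
            if not green:
                notify_team(rev)
        time.sleep(POLL_SECONDS)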

Continuous integration works to reduce integration risk by encouraging all developers to check in early and often. Ideally, they'll do it every day, or even multiple times per day. That's the first key feedback loop of continuous integration: each developer gets rapid feedback about the quality of their code. When they introduce more bugs, their integrations slow down, which signals to them (and to others) that they need help. As they get better, they can go faster. In order for that to work, the CI process has to be seamless, fast, and reliable. As with many lean startup practices, it's getting started that's the hard part.

Step 2: start with just one test.
You may already have some unit or acceptance tests that get run occasionally. Don't use those, at least not right away. The reason is that if your tests are only being run by some people or in some situations, they probably are not very reliable. Starting with crappy tests will undermine the team's confidence in CI right from the start. Instead, I recommend you set up a CI server like BuildBot, and then have it run just a single test. Pick something extremely simple that you are convinced could never fail (unless there's a real problem). As you gain confidence, you can start to add in additional tests, and eventually make it part of your team-wide TDD practice.
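
If it helps to see what I mean by a single trivial test, here's a sketch using Python's unittest module. The package name myapp is a placeholder, not a real library; the point is that this test can only fail if something is genuinely broken, like the code not even importing.

    import unittest

    class SmokeTest(unittest.TestCase):
        # One deliberately trivial test: if this fails, the problem is real.

        def test_application_imports(self):
            # 'myapp' is hypothetical; substitute your own top-level package.
            # If the code can't even be imported, nothing else can pass either.
            import myapp
            self.assertTrue(hasattr(myapp, "__name__"))

    if __name__ == "__main__":
        unittest.main()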

Step 3: integrate with your source control system.
Most times I've tried to introduce TDD, I've run into this problem: some people write and run tests religiously, while others tend to ignore them. That means that when a test fails, it's one of the testing evangelists who inevitably winds up investigating and fixing it - even if the problem was caused by a testing skeptic. That's counter-productive: the whole point of CI is to give each developer rapid feedback about the quality of their own work.

So, to solve that problem, add a commit hook to your source control system, with this simple rule: nobody can check in code while the build is red. This forces everyone to learn to pay attention to the waterfall display, and makes a failed test automatically a big deal for the whole team. At first, it can be frustrating, especially if there are any intermittent or unreliable tests in the system. But you already started with just one test, right?

The astute among you may have noticed that, since you can't check in when the build is red, you can't actually fix a failing test. There are two ways to modify the commit hook to solve that problem. The first, which we adopted at IMVU, was to allow any developer to add a structured phrase to their check-in comment that would override the commit hook (we used the very creative "fixing buildbot"). Because commits are mailed out to the whole team, anyone who was using this for nefarious purposes would be embarrassed. The alternative is to insist that the build be fixed on the CI server itself. In that case, you'd allow only the CI account to check in during a red build.
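
Here's a sketch of how such a hook might look, written as a hypothetical Subversion pre-commit script in Python. It assumes the CI server publishes its status as plain text at a URL, that the override phrase is "fixing buildbot", and that the CI account is named buildbot; all three are illustrations, not a real BuildBot API.

    #!/usr/bin/env python
    # Hypothetical pre-commit hook: block check-ins while the build is red,
    # unless the committer is the CI account or the commit message contains
    # the agreed-upon override phrase.
    import subprocess
    import sys
    import urllib.request

    STATUS_URL = "http://buildbot.example.com/status.txt"  # returns "green" or "red"
    OVERRIDE_PHRASE = "fixing buildbot"
    CI_ACCOUNT = "buildbot"

    def build_is_green():
        status = urllib.request.urlopen(STATUS_URL).read().decode().strip()
        return status == "green"

    def svnlook(subcommand, repo, txn):
        # svnlook is Subversion's standard tool for inspecting a pending commit.
        return subprocess.check_output(
            ["svnlook", subcommand, "-t", txn, repo], text=True).strip()

    def main():
        repo, txn = sys.argv[1], sys.argv[2]
        if build_is_green():
            return 0
        if svnlook("author", repo, txn) == CI_ACCOUNT:
            return 0  # the fix is coming from the CI server itself
        if OVERRIDE_PHRASE in svnlook("log", repo, txn).lower():
            return 0  # visible to everyone in the commit mail
        sys.stderr.write(
            "The build is red. Fix it first, or include '%s' in your "
            "commit message if this check-in is the fix.\n" % OVERRIDE_PHRASE)
        return 1

    if __name__ == "__main__":
        sys.exit(main())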

Either way, attaching consequences to the status of the build makes it easier to get everyone on the team to adopt it at once. Naturally, you should not just impose this rule from on high; you have to get the team to buy in to trying it. Once it's in place, it provides an important natural feedback loop, slowing the team down when there are problems caused by integration risk. This provides the space necessary to get to the root cause of the problem. It becomes literally impossible for someone to ignore the failures and just keep on working as normal.

As you get more comfortable with continuous integration, you can take on more advanced tactics. For example, when tests fail, I encourage you to get into the habit of running a five whys root-cause analysis to take corrective action. And as the team grows, the clear-cut "no check-ins allowed" rule becomes too heavy-handed. At IMVU, we eventually built out a system that preserved the speed feedback, but had finer-grained effects on each person's productivity. Still, my experience working with startups has been that too much time spent talking about advanced topics can lead to inaction. So don't sweat the details - jump in and start experimenting.




3 comments:

  1. Great article. I'd also suggest looking at CruiseControl and Bamboo (by Atlassian).

    One challenge I've come across with TDD is encouraging developers to add new (and good) tests. Starting with 1 test is easy, but getting developers to write more with each new feature (let alone for existing features) is a challenge. (The only successful way I've seen this work is by having the developer experience the awesomeness of unit testing on their own.)

    The other experience I've had is ensuring that test coverage doesn't drop off. When a test fails, it is just as easy to comment it out and do the "will fix this later" routine.

    (Note: I haven't begun to ask about your experiences with "what is a good test?" and test coverage)

    Do you have any suggestions on these fronts?

  2. My vote's for using TeamCity as a CI server.

  3. The link given in this blog for "CI" is very impressive. Theoretically, I can visualize continuous integration as RAD (rapid application development) combined with the iterative method, which we used to study in software engineering's process models. Studying those processes only gave theoretical knowledge. The time for making a change to the project is short; here "short" can mean a day or an hour, depending on the situation. We software engineers have to rethink the solution and the code that needs to change so that it reflects the right change, while also taking care of side effects. This "short" time really doesn't create tension in the developer's mind. I was not tense while deploying a small change to the code online for the first time. Now it will be a routine job for me.


    My second concern was the testing part. For me, self-testing is not a big deal. We cannot evaluate ourselves in any field; that is why we have examinations after every semester or year. I would like the users of my software to test it and give their feedback to me, to rate me on some scale. I should be worried and think twice while coding; I should feel the embarrassing feedback that an unsatisfied user can mail me. This will help me improve my work skills in terms of coding and designing. That is what actual CI is: getting something out of nothing.

    I have heard about TDD from a friend at one of these companies, but I was not impressed. I would rather have others test my code than test it myself. I would rather create problems for others by writing a single test case than think a thousand times, banging my head over my own code. In short: "let others judge your code (not your DESTINY)."
