Pipelines: What are they good for?

Continuous Integration is great and Continuous Delivery is better. CI gives you feedback as you make changes, so you always know the state of your system and whether you can safely deploy your application. The problem appears once your codebase reaches a certain size and your tests (particularly functional/acceptance tests) or your deployment starts taking too much time, and that 'immediate' feedback you cared so much about isn't so immediate any more.

Hold on. What do you mean Continuous Delivery?
Continuous Delivery is one of the core principles behind the Agile Manifesto (the 'highest priority', even). Code that is not live and not generating value for the business stakeholders is basically worthless. A team practising Continuous Delivery writes enough automated test coverage and automates the deployment itself so that they can confidently deploy code every single day (or even multiple times a day). The effervescent Brian Guthrie & I spoke at RubyConfIndia about Continuous Delivery in Ruby.

Enter pipelines.
The trick is to break up your build into multiple steps or stages, putting the tasks that give you the most valuable feedback per second spent first and the less efficient steps later, where each step only runs if all of the preceding steps have passed. Naturally, the deployment itself, which requires all other tasks to pass, goes right at the end. It should be pointed out that the team needs to keep every stage of the pipeline passing, and that discipline must not slip.
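In Ruby, you can sketch such a pipeline as nothing more than an ordered list of stages that aborts on the first failure. The stage names and the rake/cap commands below are hypothetical stand-ins for whatever your project actually runs:

```ruby
# A minimal sketch of a staged build: each stage only runs if every
# stage before it has passed. Commands and stage names are made up.
STAGES = [
  ['units',      'rake spec:units'],       # fastest, most feedback per second
  ['functional', 'rake spec:functional'],  # slower acceptance tests
  ['deploy',     'cap deploy']             # only reached when everything is green
]

def run_pipeline(stages)
  stages.each do |name, command|
    puts "Running stage: #{name}"
    unless system(command)                 # abort the pipeline on first failure
      puts "Stage '#{name}' failed -- stopping here"
      return false
    end
  end
  true
end
```

Ordering the stages by feedback-per-second means a broken unit test fails the build in seconds, long before the slow acceptance suite or the deploy ever gets a chance to run.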

Distributed builds for the win.
This is all well and good, but maybe your tests still take too long to run: the whole process takes just as long, only broken up into steps. Perhaps you've identified your functional tests as a particularly slow step in your pipeline.

What you could do is break that one step into multiple smaller steps, and then distribute those individual steps across many machines where they can run in parallel. Of course, this requires you to use a CI server that supports distributed builds (or agents). This gives you a branch in the flow of your pipeline, and the build waits until all of the parallel steps are done before proceeding to the next step. You will need more machines to support this, but as we all know, a developer's time is always worth more than hardware.
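To make the fan-out/fan-in shape concrete, here's a hedged Ruby illustration in which threads stand in for build agents; on a real CI server each job would run on its own machine, and the job commands are again hypothetical:

```ruby
# Fan one slow stage out into parallel jobs, then fan back in: the
# pipeline only proceeds once every job has finished, and only if all
# of them passed. Threads stand in for remote agents in this sketch.
def run_in_parallel(jobs)
  threads = jobs.map do |name, command|
    Thread.new { [name, system(command)] }
  end
  results = threads.map(&:value)          # join: wait for every job to finish
  failures = results.reject { |_, passed| passed }.map(&:first)
  puts "Failed parallel jobs: #{failures.join(', ')}" unless failures.empty?
  failures.empty?
end
```

The next stage in the pipeline (the deploy, say) would simply be guarded by this returning true, exactly as in the sequential case.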

Distributing your builds also helps if you want to do any sort of cross-platform testing. For example, you might want to make sure your Ruby gem runs on all popular Ruby implementations, or that your Java application works on Linux, OS X & Windows, with agents running on the different operating systems.

Okay, you've sold me, what can I use to do this?
I'll admit the only CI server with pipeline support that I've used on live projects is ThoughtWorks Studios' Go. They've done a really good job of it, and from everything I've seen they're still the leaders in the corporate/enterprise space. On a personal note, the first time I ever heard the term 'build pipelines' was from Jez Humble, the product manager of Go, on a ThoughtWorks project we were both on about four years ago. On the open source front, Jenkins has a less fully featured implementation via a plugin, and of course our own Goldberg 1.1, which should come out some time in August, will introduce pipelines (and finally make the name of the project appropriate).