First rule of builds: Make them consistent

First goal of builds: Make them fast

Developers and QAs focus on their most expensive commodity - time. There's pressure to reduce the time spent on any activity in order to raise throughput, but decisions about how to reduce it can be irrational. Builds take time, so making them faster is key. Modern IDEs are good at incrementally compiling source that changes minute by minute, so we no longer consider that a time sink. It is the formal build - codified by a script of some sort and extending into testing phases - that is the major cost when we talk about slowness.

In any quest for faster builds, there's a constant battle over what is in the build (prior to commit/push), what is in a longer build (that Continuous Integration servers execute against code pulled from source control), and what is in even longer builds that run less frequently than that. Not only less frequently, but perhaps manually executed (test plans) as well.

So it should go without saying that we should not take slow steps out of developers' builds and push them to later nightly / weekly stages if we have alternatives. And we do: we make them faster using any trick we can.

New Constraint: developers' builds should be shared-nothing

Maybe not for the first build, where we can pull down dependencies, but for the second and subsequent builds (given a set of sources) no TCP/IP should leave the developer's workstation (relating to the running build/test). Continuous Integration (CI) servers should follow the same idea - all the parallelized nodes in CI-land should be microcosm environments by some definition. A rough sketch of enforcing that in a Java test JVM follows.
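
As an illustration only (the class name and the mechanism are not prescribed by this post, and SecurityManager is deprecated in recent JDKs - on 18+ it needs -Djava.security.manager=allow), a guard installed at the start of a test run could veto any connection that is not loopback, so a stray dependency on a remote service fails fast instead of quietly slowing the build:

```java
import java.security.Permission;

// Illustrative "shared-nothing" guard for a test JVM: outbound connections to
// anything other than loopback are rejected. A sketch of the constraint, not a
// recommendation of SecurityManager as the mechanism.
public class SharedNothingGuard extends SecurityManager {

    @Override
    public void checkConnect(String host, int port) {
        if (port == -1) {
            return; // name resolution only; let it through
        }
        boolean loopback = host.equals("localhost")
                || host.startsWith("127.")
                || host.equals("::1")
                || host.equals("0:0:0:0:0:0:0:1");
        if (!loopback) {
            throw new SecurityException(
                    "Build tried to leave this workstation: " + host + ":" + port);
        }
    }

    @Override
    public void checkPermission(Permission perm) {
        // Only outbound connections are policed here; everything else is allowed.
    }

    @Override
    public void checkPermission(Permission perm, Object context) {
        // Same as above: permissive by default.
    }

    public static void install() {
        System.setSecurityManager(new SharedNothingGuard());
    }
}
```

Calling SharedNothingGuard.install() from a test-suite bootstrap (or a CI node's JVM options) makes the constraint self-policing rather than a matter of discipline.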

If you have to have an install of Oracle on your developer workstation in order to hit that, so be it.

If that is too expensive (in many ways), then have a leaner RDBMS and rely on your ORM technology to keep things consistent. Dan Worthington-Bodart talks of choosing HSQL instead of the real RDBMS in an excellent InfoQ presentation.
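
A minimal sketch of that switch, assuming plain JDBC with the HSQLDB driver on the test classpath (the property names and table are invented for illustration): pre-commit builds default to an in-memory database, while a later stage can point the very same code at the real, locally installed RDBMS.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LeanRdbmsDemo {
    public static void main(String[] args) throws Exception {
        // Fast builds default to in-memory HSQLDB; a later stage can pass the real
        // thing, e.g. -Djdbc.url=jdbc:oracle:thin:@localhost:1521:XE
        String url = System.getProperty("jdbc.url", "jdbc:hsqldb:mem:fastbuild");
        String user = System.getProperty("jdbc.user", "SA");
        String pass = System.getProperty("jdbc.pass", "");

        try (Connection conn = DriverManager.getConnection(url, user, pass);
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE accounts (id INT PRIMARY KEY, name VARCHAR(50))");
            stmt.execute("INSERT INTO accounts VALUES (1, 'Test Account')");
            try (ResultSet rs = stmt.executeQuery("SELECT name FROM accounts WHERE id = 1")) {
                rs.next();
                System.out.println("Round-tripped: " + rs.getString("name"));
            }
        }
    }
}
```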

If your major codebase is a JavaScript web app (in a three-tier architecture), then service virtualization can be your friend - either of the web server or of the tier below it.
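
For example, a service-virtualization tool such as WireMock can stand in for the tier below during the fast builds. A minimal sketch (the endpoint, payload and port are invented for illustration):

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

public class VirtualizedAccountService {

    public static void main(String[] args) {
        // Stand-in for the tier below the web server; port 8089 is an arbitrary choice.
        WireMockServer server = new WireMockServer(options().port(8089));
        server.start();

        // Canned response for an endpoint the web app would normally get from the real tier.
        server.stubFor(get(urlEqualTo("/accounts/123"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\":123,\"name\":\"Test Account\"}")));

        System.out.println("Virtualized service on http://localhost:" + server.port());
        // ... point the fast, shared-nothing tests at this instead of the real service ...
    }
}
```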

Detractors will say that speeding up builds should not leverage non-production technologies or techniques, but what they really mean is that we should not go live without having tested all the production pieces fully integrated together.

That does not have to be manual; it can be automated too, and it can share code with the earlier stages.

Thus for every in-memory RDBMS trick in the early-stage builds, and every RESTful layer we use wire mocking to represent, we need an honest step in the later build/test to do the right thing. So in the end there is just a cut - things that run before the commit and push, and things that run sometime after.
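
One way to keep that honest later step cheap is to share the same test code across the cut. A sketch, assuming JUnit 5 and Java 11's HttpClient (the system property name and default port here are invented): the target base URL defaults to the virtualized service for pre-commit builds, and the later-stage build overrides it to hit the fully integrated pieces.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class AccountEndpointTest {

    // Pre-commit builds point this at the virtualized service; a later-stage build
    // overrides it, e.g. -Daccounts.base.url=https://staging.example.com
    private final String baseUrl =
            System.getProperty("accounts.base.url", "http://localhost:8089");

    @Test
    void accountCanBeFetched() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create(baseUrl + "/accounts/123")).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode());
    }
}
```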



Published

February 12th, 2017