A development team that is living the high-throughput dream will be able to deploy a microcosm of the production stack on each developer workstation, as well as on CI infrastructure.

What?

If a formal non-live environment requires a number of machines (VMs?) and processes, a development team should spend effort making it possible to instantiate the application on a developer workstation with a minimal footprint.

Aim for the fewest processes and the fewest TCP/IP connections. REST/SOAP replaced with in-VM Dependency Injection is just fine for a dev-workstation deployment. Everything should be “most local” and “least involved” in order to make that deployment just right. Ignore the eight fallacies of distributed computing for this. It does not matter that the script (and app/package configuration) for deploying to your own workstation differs from that of QA1 and QA2. The Selenium tests should be able to adapt to both the microcosm deployment and the formal QA1/QA2 deployments.
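As a minimal sketch of that in-VM substitution (all names here are hypothetical, not from any real codebase): a composition root picks an in-process implementation for the microcosm, and an HTTP-calling one for the formal environments.

```java
// Hypothetical names, for illustration only.
public interface InventoryService {
    int stockLevel(String sku);
}

// Formal QA/prod wiring: crosses the network to a real remote service.
class RestInventoryService implements InventoryService {
    private final String baseUrl;
    RestInventoryService(String baseUrl) { this.baseUrl = baseUrl; }
    public int stockLevel(String sku) {
        // GET baseUrl + "/inventory/" + sku - HTTP plumbing omitted for brevity
        throw new UnsupportedOperationException("REST call omitted");
    }
}

// Microcosm wiring: same contract, satisfied in-VM with no sockets at all.
class InVmInventoryService implements InventoryService {
    public int stockLevel(String sku) {
        return 42; // or delegate straight to the real business-logic class
    }
}

// Composition root: one flag decides the deployment flavor.
class Wiring {
    static InventoryService inventoryService() {
        return Boolean.getBoolean("microcosm")
                ? new InVmInventoryService()
                : new RestInventoryService(System.getProperty("inventory.url"));
    }
}
```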

The same know-how used to set up that dev-workstation microcosm can do the same in CI infrastructure, for the same gain. This enables a shift-left agenda whereby defects are caught at the earliest opportunity - specifically, the stages of the build that test the most functionality can happen pre-commit, without allocating extra VMs or leasing resources that other developers may be waiting for.

ThoughtWorks has gotten a ton of mileage out of doing just this over the years.

Functionally correct, non-functionally incorrect.

The Microcosm environment should be functionally equivalent to the production environment; non-functional differences are OK. Say the database is H2 even though Oracle is the prod choice, as long as something like Hibernate makes good on its promise to abstract away the differences for the purposes of launching the app/service and testing.
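A sketch of what that swap can look like with plain Hibernate configuration (the H2-for-microcosm choice and property values are assumptions; the property keys are standard Hibernate ones):

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Only the JDBC settings and dialect vary between microcosm and prod;
// entity mappings come from hibernate.cfg.xml as normal.
class SessionFactories {
    static SessionFactory build(boolean microcosm) {
        Configuration cfg = new Configuration().configure();
        if (microcosm) {
            cfg.setProperty("hibernate.connection.driver_class", "org.h2.Driver");
            cfg.setProperty("hibernate.connection.url", "jdbc:h2:mem:app;DB_CLOSE_DELAY=-1");
            cfg.setProperty("hibernate.dialect", "org.hibernate.dialect.H2Dialect");
            cfg.setProperty("hibernate.hbm2ddl.auto", "create-drop"); // schema from mappings
        } else {
            cfg.setProperty("hibernate.connection.driver_class", "oracle.jdbc.OracleDriver");
            cfg.setProperty("hibernate.connection.url", System.getProperty("jdbc.url"));
            cfg.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle10gDialect");
        }
        return cfg.buildSessionFactory();
    }
}
```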

Why?

Defects. They are going to creep in, and cost more to fix the later they are found, per the cost of change curve. Each impediment to a self-deployable stack lessens the chance of exploratory or automated testing before a developer feels inclined to say “done”. Impediments include remoteness as well as resources shared with other developers - specifically, shared infrastructure and data, like a “shared_dev” environment. Such an environment might be nice for transient deployments that others can see (Jenkins dropping binaries into it), but it is no good if the data moves underneath you, or you have to wait in line to use it for your own deployment/testing.

Also, if you’re concurrently running a browser and an IDE, your resources might be tight. If you’ve made that microcosm stack, you could well keep the IDE open (and do debugging) at the same time as launching it. Better still, launch it from the IDE, maybe even treating the web-container as a library rather than a framework that boots from a shell script.
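Embedded Jetty is one way to get that library-not-framework launch. A minimal sketch, assuming an exploded webapp layout (the path and port are illustrative):

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;

// Launch the web-container as a library, straight from the IDE's run/debug button.
class DevMain {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        WebAppContext webapp = new WebAppContext();
        webapp.setContextPath("/");
        webapp.setWar("src/main/webapp"); // exploded webapp, no packaging step
        server.setHandler(webapp);
        server.start(); // breakpoints in app code now work - same JVM as the IDE debugger
        server.join();
    }
}
```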

Jenkins

You could also use the Microcosm deployment on Jenkins-slave nodes. The ideal would be one slave node deploying to itself, and spinning up Selenium-controlled browsers (via Xvfb) to test itself, before tearing everything down at the end of a Jenkins job. You could claim that this makes Continuous Integration configuration easier, but as VM setup (including very short leases) becomes more ordinary with Chef, Ansible, Docker (etc.), it is perhaps not that important to focus on the simple symmetry of Jenkins slaves being self-contained.
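Tying this back to the “adapt to both deployments” point from earlier, a sketch of a Selenium test that defaults to the microcosm but can be pointed at QA1/QA2 (the property name is an assumption; under a Jenkins slave the browser would render on the Xvfb display):

```java
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertTrue;

// Same test, both deployments: defaults to the self-deployed microcosm,
// or override with -Dapp.base.url=https://qa1.example.com for the formal envs.
public class HomePageTest {
    private final String baseUrl =
            System.getProperty("app.base.url", "http://localhost:8080");

    @Test
    public void homePageLoads() {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get(baseUrl + "/");
            assertTrue(driver.getTitle().length() > 0);
        } finally {
            driver.quit(); // tear down the browser with the job
        }
    }
}
```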

Updates

Update - Nov 7, 2014 - Jenkins heading/section added.

Update - Mar 24, 2017 - expanded to describe usefulness for CI, too.