PwC on Microservices versus Monoliths.

Recently some PwC tech supremos wrote an article: “Agile coding in enterprise IT: Code small and local”.

Subsections:

  1. Moving away from the monolith
  2. Why microservices?
  3. MSA: A think-small approach for rapid development
  4. Thinking the MSA way: Minimalism is a must
  5. Where MSA makes sense
  6. In MSA, integration is the problem, not the solution
  7. Conclusion

In the above article, MSA is short for Microservices Architecture(s).

The article posits that microservices are the antidote to monoliths. It doesn’t mention cookie-cutter scaling at all, which is another antidote to monoliths given the right build infrastructure and DevOps. Here’s a view of a hypothetical architecture a company could deploy if they were doing microservices:

W is a web server. P and Q don’t stand for anything in particular.

Here’s the same solution as cookie-cutter scaling, with the alternative (historical) choice of a monolith to the right of it:

The cookie-cutter approach will often leverage components that are Dependency Injected into each other, and though monoliths might be built the same way today, pre-2004 they were probably hairballs of singletons (the design pattern, not the Spring Framework idiom).
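Here’s a minimal Java sketch of that contrast. The class names are hypothetical, and the second half shows plain constructor injection rather than any particular container’s API:

    // Pre-2004 hairball: components reach sideways into global singletons.
    class AuditLog {
        private static final AuditLog INSTANCE = new AuditLog();
        static AuditLog getInstance() { return INSTANCE; }
        void record(String msg) { System.out.println(msg); }
    }

    class OrderServiceOldStyle {
        void place(String orderId) {
            // Hidden, hard-wired dependency; untestable without the real thing.
            AuditLog.getInstance().record("placed " + orderId);
        }
    }

    // DI style: the dependency is handed in, so a container (or a test) can
    // wire in any implementation, and each cookie-cutter node is wired uniformly.
    interface Audit { void record(String msg); }

    class OrderService {
        private final Audit audit;
        OrderService(Audit audit) { this.audit = audit; } // constructor injection
        void place(String orderId) { audit.record("placed " + orderId); }
    }

The second form is what makes the uniform, clone-anywhere deployable cheap to assemble and test.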

Continuous Delivery, Agile?

Here’s one excerpt that confuses me:

“… makes no sense to design and develop software over an 18-month process to accommodate all possible use cases when those use cases can change unexpectedly and the life span of code modules might be less than 18 months …”

As I recall, the 18-month-delay problem was solved some time ago: by Agile methodologies principally, and by Continuous Delivery/Deployment in more recent times. It does not matter whether you’re compiling a monolith, a cookie-cutter solution, old-style SOA services, or microservices: the 18-month fear isn’t real if you’re doing Agile and/or CD. Agile and CD were already increasing release cadence, and allowing organizations to pivot faster, before microservices came along; whatever the deployment shape (monolith, cookie-cutter scaled, SOA, micro or not), you benefit from Agile practices and a DevOps setup that facilitates CD. In something like 30 ThoughtWorks client engagements since 2002, I have not seen the 18-month process at all. In fact, I last encountered it in 1997 on an AS/400 project, which was also the last time I saw a waterfall process being championed.

Build(s) and Trunk

Elsewhere there is a suggestion: “Each microservice [has] its own build, to avoid trunk conflict”. That isn’t unique to microservices, of course. Component-based systems today also have a multiple-build-file (module) structure in a source tree. Hopefully the “trunk” mentioned is alluding to Trunk-Based Development, which I would recommend.

Build technologies

This is an expansion on the above, and you can skip this paragraph if you want.

Hierarchical build systems like Maven allow you to have one build file per module (whether that’s a service or a simple jar destined for the classpath of a bigger thing). Buck has a build grammar that allows a build to grow/shrink/change based on what is being built (from implicitly shared source). Maven is for the Java ecosystem, while Buck promises to be multi-language. Both do multi-module builds for the sake of a composed or servicified deployment, and both presently compete to derive the smallest set of compile/test/deploy operations for the changes since the last build of a hierarchy of modules.

Anyway, what is it we are striving for?

What we want is to develop cheaply, and to deploy smoothly and often, without defects. We want the ability to deploy without a large permanent or temporary headcount overseeing or participating in deployment. Aside from development costs and support/operations, deployment costs are a potentially big factor in total cost of ownership.

What I like about cookie-cutter is the uniformity of the deployable things. The team size for deployment of such a thing doesn’t grow with the number of nodes the binary is being deployed to. At least, not if you’re able to automate the deployment to those nodes, and have a strategy for handling the users connected to the stack at redeployment time (sessions or statelessness). The uniformity of the deployment is a cheapener, I think.
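One such strategy is to keep session state out of the web-server process entirely, so any clone can serve any request and nodes can be bounced freely mid-deployment. A minimal Java sketch, with hypothetical names, and an in-memory map standing in for what would really be Redis, memcached, or a database:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Session state lives behind an interface, not in the web server's memory.
    interface SessionStore {
        Map<String, String> load(String sessionId);
        void save(String sessionId, Map<String, String> state);
    }

    // Stand-in implementation; a real deployment would back this with an
    // external store so that bouncing a node loses nothing.
    class InMemorySessionStore implements SessionStore {
        private final Map<String, Map<String, String>> data = new ConcurrentHashMap<>();
        public Map<String, String> load(String id) {
            return data.computeIfAbsent(id, k -> new ConcurrentHashMap<>());
        }
        public void save(String id, Map<String, String> state) { data.put(id, state); }
    }

    class CheckoutHandler {
        private final SessionStore sessions;
        CheckoutHandler(SessionStore sessions) { this.sessions = sessions; }

        String handle(String sessionId, String item) {
            Map<String, String> state = sessions.load(sessionId); // no node affinity
            state.put("lastItem", item);
            sessions.save(sessionId, state);
            return "added " + item;
        }
    }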

When you have a number of dissimilar services, you might be able to minimize release personnel if you’re only redeploying one service. If more than one service is being updated in a particular deployment, you’re going to have to concentrate to make sure you don’t experience a multiplier effect for the participants. It is possible, of course, to keep the headcount small, but more practice is needed beforehand, which in turn allows for some calmness around the actual deployment. If we’ve stepped away from the Project Management Office thinking that suggests three buggy releases a year (which is more usual than the 18-month schedules of old), then we can employ Continuous Deployment to further eliminate personnel costs around going live. This is something microservices do well at, but only because the most adept proponents design forwards and backwards compatibility into the version permutations most likely to co-exist in production. It is at least much quicker to redeploy and bounce one small service, N times over, than to push out the cookie-cutter uniform deployment.
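That compatibility point deserves a concrete illustration. One common tactic is the tolerant reader: consumers ignore fields they don’t recognize and default fields that are absent, so adjacent versions of collaborating services can co-exist during a rolling bounce. A minimal Java sketch, with a hypothetical payload and field names:

    import java.util.Map;

    // Tolerant reader: version N of a consumer coping with payloads from an
    // older producer (no "currency" yet) and a newer one (extra fields).
    class PriceQuote {
        final long amountMinor;
        final String currency;

        PriceQuote(long amountMinor, String currency) {
            this.amountMinor = amountMinor;
            this.currency = currency;
        }

        static PriceQuote fromWire(Map<String, Object> fields) {
            long amount = ((Number) fields.getOrDefault("amountMinor", 0L)).longValue();
            // Absent in older producers: default rather than reject.
            String currency = (String) fields.getOrDefault("currency", "USD");
            // Unknown fields from newer producers are simply never looked at.
            return new PriceQuote(amount, currency);
        }

        public static void main(String[] args) {
            PriceQuote fromOld = fromWire(Map.of("amountMinor", 1299L));
            PriceQuote fromNew = fromWire(
                    Map.of("amountMinor", 1299L, "currency", "GBP", "taxRegime", "UK"));
            System.out.println(fromOld.currency + " / " + fromNew.currency); // USD / GBP
        }
    }

Schema-aware serializers give you the same property with less hand-rolling: Jackson can be told to ignore unknown properties, and protobuf tolerates unknown fields by design.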

Related articles

  1. I’m against separate repos for separate services, at least where there’s a possibility of interoperation in a single solution in which each moves forward in time. My Source Code Laundering article speaks to that.

  2. Careful now, Conway’s sword is sharp: refer to my To SOA or Not To SOA article.

  3. Cookie-cutter scaling, as mentioned.