Current Reality Trees (CRTs) from the Theory of Constraints tool bag again. This time for software development teams' general lack of flow, as exemplified by a slow release cadence. No two teams' CRTs are the same, so take the one in this blog entry with a pinch of salt. If you are willing to start with someone else's and then spend an hour or more fixing it up, you might end up with one that is valuable for your team's decision-making.

This is a typical CRT for software development teams that do anywhere from two planned production releases a year all the way through to weekly planned production releases. Dev teams releasing more often than weekly will not see much relevant in this diagram, but might still find a CRT useful were it drawn up for themselves from scratch. For weekly-or-slower teams, the one below is close enough; starting with what I have and then tweaking it might be a good way to get going. It is an amalgam of the last ten I've done for different clients, so hopefully no client sees themselves in too many boxes/lines.

[The CRT itself appears here as an inline SVG diagram in the original post. Its boxes run from top-level effects such as "Our planned releases are buggy", "We struggle to complete work for the planned release and/or iteration", "The biz wants us to dial up release cadence but we feel we can't without something breaking", and "Too much post release/iteration rework", down through causes like "Our build* times are too long", "Our integration tests are too slow", "Over-reliance on manual testing", and "We're not using Service Virtualization", all linked by "because" and "and/or because" arrows. Diagram footnotes: * "build" (or "the build", or "builds") means all compile and test invocation steps, on build servers (e.g. CI) and on development workstations/laptops (including those used by QA automators); ** environment examples: QA, UAT, PROD (and more, and synonyms).]

That’s just inline SVG, slurp it from the DOM to your drawing package. Or use a purposeful tool like Flying Logic Pro to make CRTs from scratch without having to move boxes by hand to detangle lines. Yes, my arrows go in the opposite direction to conventional CRTs. If you’re going to use this for any purpose (as is, or derived), prominently credit me and link back to this blog entry.

Updates

  1. March 2nd: Draft 2: Worked on “isn’t modular” hierarchy (bottom right). See video discussing that - 2 mins
  2. March 3rd: Draft 3: Worked on “integration tests are slow” (bottom right). See video discussing that - 2 mins
  3. March 4th: Draft 4: There are reasons why teams can’t/don’t do service virtualization (bottom right). See video showing that - 1 min (no sound)
  4. March 7th: Draft 5: An insight about Test Impact Analysis. See video showing that - 3 mins
  5. March 8th: Draft 6: More top level boxes, human factors, and rework added
  6. March 9th: Draft 7: Add a Selenium node as an example
  7. March 10th: Draft 8: Tech Debt and pressure to cut corners or go faster added
  8. June 4th: Draft 9: Add NIH and move it rootwise
  9. Jan 7th 2022: Add “too busy” and pairing.
  10. Dec 3rd 2022: Add common code ownership, and re-layout left to right
  11. Dec 12th 2022: Call out bad branching models

Notes for this mythical average team CRT

  1. Like in my reduction of cycle times blog entry, there is an idea that your average story size has to be a multiple of the build duration. Also, your iteration length has to be at least a 3x multiple of the average story size. Also, your release cadence is an integer multiple of iteration length (2x or 3x or 4x). Some teams have release cadence and iteration length exactly the same (1:1). All Agile dev teams with decent capacity planning understand this. Many teams that understand the cycles aim to get them smaller and smaller. In this diagram three of those four things are in blue boxes - I’ve left out iterations as an explicit thing, but teams could add boxes pertinent to that and move lines around. (A worked sketch of these nested multiples follows this list.)
  2. My blog entry on Value Stream Mapping (VSM) might introduce the other tool from the Theory of Constraints that goes hand in hand with CRTs for peering into a team’s flow to look for improvement opportunities.
  3. I think small stories are a good thing. I drill into the causes of that here - Call to Arms: Average Story Sizes of One Day (and more). They’re represented in this CRT in a dotted-line box.
  4. You-should-do-DevOps (without it being named as such) is spread across a number of boxes.
  5. Continuous Integration - always a problem for teams with less frequent planned releases. Lots of teams say they do CI when they don’t (it is not continuous), or have a CI feedback loop that takes so long to execute it makes talent want to quit.
  6. Specific to that last point, integration tests (those that are not pure unit tests and perhaps involve TCP/IP, file I/O and multiple processes/threads) often take too long to execute as a suite. Teams chop them or relegate them to a non-continuous build. There are other ways to make that class of tests faster and less flaky - test impact analysis is one, sketched after this list.
  7. Team formation issues include nuances around Mavericks, Loose Cannons, Dunning-Kruger transgressors, those not leading with strong arguments, and quorums against alignment - all as detailed in the team psychology category of my blog. Also ego, politics, envy, unsafe behaviors, and -ism/-ist ways of wielding power in a team.
  8. Planned production releases are your intention. Unplanned releases are the bug fixes that follow planned releases. If you’re zipping along you have none. Thus, the ratio of planned releases to unplanned releases is important, at least as an average - your team might have 0.75 unplanned releases per planned release at 12 a year (a monthly planned release cadence), which works out to nine unplanned releases a year.
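Here is the worked sketch promised in note 1 - a minimal check of those nested-cycle multiples. Every number is invented for illustration; none of them are from any real team:

```python
# Minimal sketch of the nested-cycle arithmetic in note 1.
# All durations below are invented for illustration, not from any real team.

build_minutes = 20        # one full compile-and-test cycle ("build*")
story_days = 2            # average story size, in working days
iteration_days = 10       # a two-week iteration, in working days
release_days = 30         # planned release cadence, in working days

WORK_MINUTES_PER_DAY = 8 * 60

# Average story size should be a comfortable multiple of the build duration.
builds_per_story = (story_days * WORK_MINUTES_PER_DAY) / build_minutes
print(f"{builds_per_story:.0f} builds fit inside one average story")  # -> 48

# Iteration length should be at least a ~3x multiple of average story size.
assert iteration_days >= 3 * story_days, "stories are too big for the iteration"

# Release cadence should be an integer multiple of iteration length
# (2x, 3x, 4x ... or 1:1 for teams that release every iteration).
assert release_days % iteration_days == 0, "cadence isn't a multiple of iteration length"
```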
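And, per note 6, a hand-rolled sketch of test impact analysis: run only the integration tests plausibly affected by a change, and leave the full suite for a less frequent build. Real tools derive the test-to-module map from coverage data; the map, file names and commands below are hypothetical.

```python
# A hand-rolled sketch of test impact analysis - one way to shrink a slow
# integration suite. The module-to-test map and paths are hypothetical;
# real tools generate the map from coverage data rather than keeping it by hand.
import subprocess

TESTS_BY_MODULE = {
    "app/billing.py": ["tests/integration/test_billing.py"],
    "app/accounts.py": ["tests/integration/test_accounts.py",
                        "tests/integration/test_billing.py"],
}

def changed_files(base: str = "origin/main") -> list[str]:
    """Files touched since the base branch, per git."""
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

def impacted_tests(files: list[str]) -> set[str]:
    """Only the integration tests that could be affected by these files."""
    return {t for f in files for t in TESTS_BY_MODULE.get(f, [])}

if __name__ == "__main__":
    tests = impacted_tests(changed_files())
    # Run just the impacted subset per commit; the full suite runs less often.
    if tests:
        subprocess.run(["python", "-m", "pytest", *sorted(tests)], check=False)
```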

Lean Kata

After getting to a stable CRT for your team, and obeying the lean kata, you’d change one thing at a time. Each change comes with a moment to measure how well it went - a week or so later - then a team decision to lock it in or roll it back. After that you’d plan what to try next, lean-kata style, after adjusting the CRT for your team.
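For illustration, here is a minimal sketch of that loop as a record-keeping structure. The class and field names are my own invention, not from the kata literature, and the example assumes a metric where higher is better:

```python
# Minimal sketch of the one-change-at-a-time lean-kata loop described above.
# Names are invented for illustration; the metric here assumes higher is better.
from dataclasses import dataclass
from typing import Optional

@dataclass
class KataExperiment:
    change: str                       # the single CRT-derived change being tried
    metric: str                       # what gets measured a week or so later
    baseline: float                   # metric value before the change
    observed: Optional[float] = None  # metric value after the change

    def decide(self) -> str:
        """Lock the change in if the metric improved, otherwise roll it back."""
        if self.observed is None:
            return "keep measuring"
        return "lock in" if self.observed > self.baseline else "roll back"

# Example: trying one change suggested by the CRT.
exp = KataExperiment(change="scripted setup of dev workstations",
                     metric="builds per dev per day", baseline=6.0)
exp.observed = 9.0
print(exp.decide())  # -> lock in
```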

Background

The Theory of Constraints gives us the Current Reality Tree as an industry tool. Eliyahu Goldratt devised this “focusing procedure” decades back. Chris Hohmann goes into the creation of a CRT from scratch for a biz problem in this 17 min video.

Future Reality Trees

Some teams use a Future Reality Tree concept that allows them to imagine where they’d end up after a lean-kata experiment pays off.

Thoughts & real world CRTs online

Tim Cochran goes into “Maximizing Developer Effectiveness” on Martin Fowler’s site. Lots of wisdom there. There’s a pic of “Micro-feedback loops compound to affect larger feedback loops” (his figure 3, 60% of the way down the article) that’s a better visual for the reduction-of-cycle-times thing I tried to get across in #1 above. I’m going to steal it going forward. Or maybe move to a cogs/gears metaphor from a circle-of-arrows one.

Using Theory of Constraints To Teach Introduction to MIS - Danilo Sirias, 1997. See page 4. I feel seen!

What Should Be Changed? A comparison of cause and effect diagrams and current reality trees shows which will bring optimum results when making improvements - Fredendall, Patterson, Lenhartz & Mitchell (2002)

The thinking process of the theory of constraints applied to public healthcare - Bauer, Vargas, Sellitto & de Souza (2019). See page 15.

CRTs: Collaborative Tool for Continuous Improvement - Paul Hodgetts presentation (2013). Zoom into page 4.

Thinking for a Change - course handout by Marc Evers and Pascal Van Cauwenberghe (2009). Various pages.

Jeremy Lightsmith, at “Agile Vancouver” in 2011, talks through a CRT for a case he was involved with (45 minutes in). His presentation: Agile Isn’t Enough

Thinking Tools - For Root Cause Analysis. Kelsey van Haaster and Tavis Ashton-Bell @ LAST 2015 - I can’t find the home page for the conference :( .. will update when I get more info

Addressing ill-structured problems using Goldratt’s thinking processes: A white collar example - Edward D. Walker and James F. Cox (2006)

Final comments

I’ve used “because” and “and/or because” instead of the shapes the aficionados would use (see the Wikipedia page). This is better for the uninitiated, I think. In the one I’ve drawn up here, there is no single root cause represented - no one thing that you could fix to alleviate “the problem”. Of course, the bottleneck only ever moves somewhere else in these things.

For biz/software problems you’d also use Value Stream Mapping (VSM). I learned both CRTs and VSMs at the elbows of Kevin Behr, Jabe Bloom and Jim Kimball (HedgeServ CTO) some years back for business-centric thinking exercises. I have been teaching and using them ever since. VSM is way faster to do - maybe only a couple of hours for a business/software process. The two are complementary. It is true to say that ThoughtWorkers were talking about CRTs as a useful tool as far back as 2010, if I recall. I’m ashamed to say that I put “learn this stuff from x and y” on my TODO list, but never circled back.



Published

February 19th, 2021