I’m inspired by the title of a blog entry that mulls over Angular. The title in question, for those who have not clicked through, is The model is the single source of truth, and it refers to the Single Source of Truth concept, which is occasionally bandied around in enterprise computing. Anyway, it came up at work today, so I thought I’d blog.

Models in the normalized database era.

Codd and Date gave us relational database theory in general and SQL in particular. From the ’80s through to the turn of the millennium, database design was the foremost consideration in application development. The initial work was done the old-fashioned way, but later tools like Erwin allowed us to super-achieve in the realm of database design, with “physical” and “logical” models mapped and workflowed to completion. The output was DDL that teams could use to deploy databases and fill them with starting records (most likely reference data). Applications could be built on top of that, we were assured.

The Agile software movement (2000 onwards) decided that source control was the reference place for SQL tables/indexes/views, and so was happier not to use such modeling tools up front. If teams wanted pretty-printed models, they were happier to generate them from actual deployed schemas. Having everything in one SCM tool for a single-click build was the highly desirable thing, and table shapes changed as needed according to the ‘stories’ being developed in an iteration. Later the tool-chains improved for each language to allow table-shape delta scripts to be generated automatically, for day-to-day use or, more importantly, between production releases. These would ‘upgrade’ a database shape, or ‘downgrade’ it if a production push were regretted. Really, though, for Agileists the conflict was between object modeling and relational-schema modeling. In their eyes the object model was much more important than the table design. But it was not the Agile community per se that drove the next change: to document stores.
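To make the delta-script idea concrete, here is a toy sketch (not any real tool’s API; real migration tools like Flyway or Rails migrations run against a live database): each step knows how to upgrade and downgrade a schema, here modeled as a plain object, and the runner walks the version numbers in either direction.

```javascript
// Toy delta scripts: each entry can 'up' (apply) or 'down' (revert) a change.
// The "schema" is a plain object mapping table name -> list of column names.
const migrations = [
  {
    up:   s => ({ ...s, users: ["id", "name"] }),                 // create table
    down: s => { const { users, ...rest } = s; return rest; }     // drop table
  },
  {
    up:   s => ({ ...s, users: [...s.users, "email"] }),          // add column
    down: s => ({ ...s, users: s.users.filter(c => c !== "email") })
  }
];

// Walk from version `from` to version `to`; downgrades run in reverse order.
function migrate(schema, from, to) {
  let s = schema;
  if (to >= from) {
    for (let v = from; v < to; v++) s = migrations[v].up(s);
  } else {
    for (let v = from - 1; v >= to; v--) s = migrations[v].down(s);
  }
  return s;
}

const v2 = migrate({}, 0, 2);   // upgrade an empty schema to version 2
console.log(v2.users);          // ["id", "name", "email"]
const v0 = migrate(v2, 2, 0);   // regretted push: downgrade back to nothing
console.log(Object.keys(v0));   // []
```

The point is the symmetry: every production release carries a script pair, so a database shape can be rolled forward or back without hand-editing.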

Models in the Client-side MVC era.

So everyone gets by now that I love the client-side MVC frameworks, and think they are the future. When it comes to data storage, the obvious conclusion is that the backend should save something pretty close to the document that the client presents, mutates, and sends back to the server for posterity. I don’t though, as yet, think in “models” for the data. Instead I think in JSON documents. Indeed, it is the representation on the wire, the GET that populates a page and the PUT that hypothetically returns a changed document, that my mind’s eye considers. Ideally, the document’s structure in those two scenarios should be pretty much the same. As with a normalized database in a previous era, there are one-to-many relationships and so on, but everything for a canonical key is represented in one textual document. Now a backend could split up such a JSON document in a PUT operation, and write/overwrite dozens of records in many tables (in one transaction), but why bother? Use a document store instead. When would you use a normalized DB design today? The answer: only when you have other processes reading and writing to your database. For the most part these days, we’re interfacing with other systems via service calls (Web Services, REST, etc.), and using yet more documents on Enterprise Service Buses (Tibco, JMS, etc.). There is also the possibility of writing a reportable database later, as a compromise.
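Here is a hypothetical example of the kind of document I mean; the field names (`orderId`, `lines`, and so on) are invented for illustration. The one-to-many relationship that would span a parent and child table in a normalized schema is simply nested under the one canonical key, and the PUT body is the same shape as the GET response:

```javascript
// A hypothetical order document: what a GET returns and a PUT sends back.
// In a normalized schema, `lines` would live in a child table keyed by
// orderId; in a document store the whole aggregate is one record.
const order = {
  orderId: "ORD-1001",                       // canonical key for the document
  customer: { id: "CUST-42", name: "Acme Ltd" },
  lines: [                                   // one-to-many, embedded not joined
    { sku: "WIDGET",   qty: 3, unitPrice: 9.99 },
    { sku: "SPROCKET", qty: 1, unitPrice: 24.5 }
  ]
};

// The client mutates the document in place...
order.lines.push({ sku: "GROMMET", qty: 2, unitPrice: 1.25 });

// ...and the wire representation for the PUT is just the serialized document.
const putBody = JSON.stringify(order);

// The server can store it as-is; no fan-out into many tables is needed.
const stored = JSON.parse(putBody);
console.log(stored.lines.length); // 3
```

No object-relational mapping, no transaction spanning a dozen tables: the document on the wire and the document at rest are the same thing.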

The development-time economies of avoiding a formal normalized database are huge; teams should be starting with document stores and justifying “why not” if need be. In short, folks: get up to speed with the NoSQL movement.


Jun 27, 2012: This article was syndicated by DZone


February 8th, 2012