A common development pattern is to adopt the Release-Reuse Equivalency model: if two applications depend on some common code, they should reuse only versioned, “released” packages of that code. They should avoid having their own copies, or using source control magic to share the latest uncompiled code. Here’s a decent write-up on release-reuse.
Using versioned releases allows us to isolate ourselves from unwanted changes, discourages project-specific hacks to the common components that would be detrimental to others, and helps us build a manifest of everything that’s in the application.
But what if two applications share not just some code libraries, but also run on the same kind of application server, operating system, and back-end database? Should we apply the same rules? I think so.
The DevOps mantra of treating “infrastructure as code” provides encouragement. If we have a versioned “library” that is a base virtual machine, a versioned application server install, and versioned rules for applying patches and security settings, we have the core of what we need. The runtime application can depend on a version of the application, and a version of the software stack it runs on.
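As a sketch of what that dependency declaration might look like, here’s a minimal release manifest that pins both the application version and each versioned piece of the stack. All the names and version strings are hypothetical; the point is only that the full stack is captured as a single, versioned unit.

```python
# Hypothetical sketch: a release manifest pinning the application version
# together with the versioned infrastructure "library" it runs on.
from dataclasses import dataclass


@dataclass(frozen=True)
class ReleaseManifest:
    app_version: str      # versioned application package
    base_vm_image: str    # versioned base virtual machine image
    app_server: str       # versioned application server install
    patch_ruleset: str    # versioned patch and security-settings rules

    def stack_id(self) -> str:
        # One identifier for the full stack this release was built against.
        return f"{self.base_vm_image}+{self.app_server}+{self.patch_ruleset}"


release = ReleaseManifest(
    app_version="orders-2.4.1",
    base_vm_image="base-vm-1.9.0",
    app_server="appserver-install-7.0.3",
    patch_ruleset="sec-patches-2013.06",
)
print(release.stack_id())
```

Because the manifest is immutable and versioned, two applications that print the same `stack_id` are provably running on the same released infrastructure, not on two hand-tweaked copies of it.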
The implications for deployment are also interesting. An update to the infrastructure should be propagated through the environments in conjunction with code updates. Functional testing should be considered to apply to the full stack, from VM image to application configuration.
I’m going to continue to play with this concept, but I’d love your feedback. Should only versioned infrastructure be used and should changes to the infrastructure follow a release lifecycle that looks just like application releases?