During our induction into the IBM family, one of our new colleagues told an anecdote about a firm that outsourced its mobile application development. Managing the relationship between outsourced work and what is developed in house is a challenge much like the one manufacturers face with their supply chains. While this is a topic the folks at IBM have been talking about for a while, it’s new to me. The implications, however, are both clear and profound.
Lessons from Supply Chain Masters
Consider the lessons Toyota learned through its Lean efforts. Being Lean and working in small batch sizes does not work unless suppliers can also react quickly, producing small batches of exactly the parts needed for the vehicles in demand. Liker’s book The Toyota Way recounts Toyota’s work with Trim Masters, where seats are ordered the moment a car begins its four-hour trip down the assembly line. With Toyota’s help, Trim Masters was able to produce the seats and deliver them just in time to meet the rest of the car a few hours later.
Or consider Walmart’s famous supply chain agility. While maintaining very low costs, Walmart also reacts quickly enough to changing market dynamics that it rarely needs to put items on clearance.
Applying Supply Chain Lessons to Software
Software companies that can decide to make a change and release it hours later are few and far between. However, the businesses we serve increasingly expect to change plans often to exploit immediate or transient market opportunities. Speed of innovation is key.
We include a built-in package repository, CodeStation, with AnthillPro, uBuild, and uDeploy. These products also integrate with third-party repositories. We place so much emphasis on this capability because it is critical for safety, governance, and audit.
To explain this, let’s first look at a simplified build and release lifecycle:
Developers submit their work to a source control system. A build is generated from that source and any dependency libraries retrieved by a tool like Maven or CodeStation. That build is submitted to test environments and certified. Finally, it is sent to production.
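A minimal sketch of that lifecycle, in Python, may make the stages concrete. The class, stage names, and `certify`/`promote_to_production` methods here are illustrative assumptions, not part of CodeStation or any UrbanCode API:

```python
import hashlib

class Build:
    """An illustrative record of one build moving through the lifecycle."""

    def __init__(self, version, artifact_bytes):
        self.version = version
        # Fingerprint the artifact at build time so later stages can
        # confirm they received the exact same bits.
        self.sha256 = hashlib.sha256(artifact_bytes).hexdigest()
        self.stage = "build"
        self.certified_by = None

    def certify(self, tester):
        # A build must come out of the build stage before certification.
        assert self.stage == "build"
        self.stage = "test"
        self.certified_by = tester

    def promote_to_production(self):
        # Only builds certified in a test environment may go live.
        assert self.certified_by is not None
        self.stage = "production"

b = Build("1.4.2", b"compiled artifact")
b.certify("qa-team")
b.promote_to_production()
print(b.stage, b.certified_by)  # production qa-team
```

The point of the sketch is that the fingerprint and the certifier travel with the build record, which is what makes the questions below answerable.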
The following questions are important to be able to answer:

1. What is in production?
2. Was it certified in test environments (and by whom)?
3. How do you know your answers to #1 and #2 are true?
To answer questions 1 and 2, you need an inventory of which version of each component is in each environment, or at least logging that records the version number. But to truly know that what is in production is what was tested, you must ensure not just that the file names are the same, but that the exact same files were moved into each environment.
In order to know and prove that the files are the same, one must validate that they are bit-for-bit identical by comparing digital signatures or hashes at deployment time. It also helps to keep the original file in a tamper-resistant location. A good package repository, like the one in uDeploy, provides that location, automatic signature generation at build time, and automatic verification, so you know that what is in production is exactly what was built from known source and tested in the prior environments.
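The bit-for-bit check itself is simple to illustrate. The helper names below are hypothetical (a repository like CodeStation performs this automatically); the sketch just shows a SHA-256 digest recorded at build time and re-verified at deploy time:

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_before_deploy(artifact_path, recorded_digest):
    # A matching file name is not enough; only a matching digest
    # proves the bytes are identical to what was built and tested.
    return sha256_of(artifact_path) == recorded_digest

with tempfile.TemporaryDirectory() as d:
    artifact = os.path.join(d, "app.war")
    with open(artifact, "wb") as f:
        f.write(b"release 1.4.2 bytes")

    # Build time: record the digest alongside the artifact.
    recorded = sha256_of(artifact)

    # Deploy time: the untouched artifact verifies...
    print(verify_before_deploy(artifact, recorded))  # True

    # ...but any tampering, however small, is caught.
    with open(artifact, "ab") as f:
        f.write(b" tampered")
    print(verify_before_deploy(artifact, recorded))  # False
```

In practice the recorded digest would live in the package repository, not next to the artifact, so that a compromised environment cannot alter both the file and its fingerprint.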
The Guardian reports that Apple cannot comply with an order to update a notice on its web site regarding its patent dispute with Samsung. The judge wants changes in 48 hours. Apple claims that it will need 14 days.
While I suspect this is mostly due to PR and Legal needing to work something out, and perhaps a little stalling, I know more than a few companies that struggle to get even the smallest change through their release cycle in less than a month. Non-techies, like the judge here, are always incredulous when they hear this.
IT groups can expect the same reaction from their customers within the business. “This is a small change, why does it take so long?” The question is more than fair. We really should be able to get a tiny change out of development, through regression tests, and deployed into production very quickly. Some companies are capable of deploying small changes to production within a few hours of the change being made. While that’s faster than appropriate for many applications, particularly those in regulated environments, we can learn from them. Good automated test suites are important. And dependable, self-service deployments are critical to a smooth flow of code out the door.
I recently attended devopsdays Rome. This was my first time at a devopsdays conference, and I enjoyed the mix of talks and open spaces sessions. It was obvious that the presenters and the attendees had felt the pain of trying to introduce continuous delivery in their organizations and were there to share stories and seek advice.
The theme for this conference was “Culture”. If DevOps is a “culture thing”, then surely we need to involve both Dev and Ops. The surprising part was that there did not seem to be anyone from the Dev side of things. In fact, there were topics like “getting Dev to think in terms of DevOps”, the goal being to entice developers to think about the infrastructure of their applications, including operational concerns such as scalability, redundancy, performance, monitoring, and feedback.
Monitoring was a very big topic, covered by some 70% of the talks. For this crowd, Puppet, Chef, and CFEngine define configuration management, but it stopped there. Adding together all of the discussions about stacks, tools, and cloud yields something that looks like PaaS: provisioning and configuring machines or virtual machines to provide well-managed, monitored services on which applications can sit. At that point, however, Dev has no skin in the game.
Application delivery is what gets Dev thinking in terms of DevOps. While developing application components, developers need to concern themselves with how those components will be deployed to an environment and brought to a state where they can deliver value.
uDeploy deploys complex, multi-component applications. For example, a simple web site consists of a web component (a WAR file) and a database component (DDL).
In our Enterprise Continuous Delivery Maturity Model we looked at the idea of continuous deployments to production and flagged the process as “Extreme”. It’s just too much for too many teams. A decade ago, I might have said the same thing about continuous integration.
As DevOps and continuous everything have taken hold, we see more and more teams start to contemplate deploying each change to production. These are the rare teams with really great tests and little hassle from auditors.
Stepping stones image courtesy of Chris Heaton
While such teams are still in the minority, their growth is definitely noticeable. Most teams don’t get all the way to production, though, even if they want to; they automate only up to the last pre-production environment. It’s hard to convince the business, or their colleagues, that fully automating the release cycle is safe.
Paul Kipp recently tackled this fear on his blog. I love his take on the topic and encourage you to read his article in full. Paul highlights the discrepancy between how meaningful consistent success in pre-prod is (pretty meaningful) and the amount of confidence we usually gain from that success (not so much). He suggests a big sign indicating the number of days since we were really glad we didn’t deploy live. When that number gets large, you can have a talk about why you are leaving value on the shelf rather than getting it to customers more quickly.
Great idea. But I think that many on the business side will see the sign as analogous to a circus posting “160 days since the last tight-rope walker fall.” While fewer falls are better than more, broken necks are always bad. Complement this broadcast of good MTBF (mean time between failures) with better recovery metrics (MTTR, mean time to recovery).
Prove that you can roll back quickly. Or better yet, build for resiliency with a cluster immune system. When you get to the point where you can’t do much damage to production even when you try, you’ll have an easy sell.