The last new feature from the v6.0.1 releases is IBM UrbanCode Release’s new impact analysis capability. Release Managers constantly struggle with interconnected applications and the threat that when some applications miss the window, others will have to be held back. The idea of the new view is to visualize the relationship between:
- Changes (bug fixes, new features, etc.)
- Development Initiatives (projects, epics, etc.)
- Applications in a Release
In this view, the Applications in this small Release appear as the rows, and the development Initiatives as the columns. The number of changes associated with each combination is shown, along with a color-coded indication of how complete those changes are.
At a glance, we can see that no Initiatives span multiple Applications. So if any Applications have to drop out of the release, the others should be relatively safe. If other Applications were participating in Initiative 1 or 4, the amount of in-progress work would be worrisome.
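The aggregation behind a view like this is simple to sketch. Here is a minimal illustration (the application, initiative, and status names are invented; this is not the product's actual data model or API): count changes per application/initiative cell, and flag any initiative that spans more than one application.

```python
from collections import Counter

# Hypothetical change records: (application, initiative, status)
changes = [
    ("Billing", "Initiative 1", "done"),
    ("Billing", "Initiative 1", "in progress"),
    ("Portal",  "Initiative 2", "done"),
    ("Portal",  "Initiative 3", "in progress"),
]

# Count the changes in each (application, initiative) cell of the grid
counts = Counter((app, init) for app, init, _status in changes)

# Find initiatives that span multiple applications -- the risky case,
# where one application slipping would hold back the others
apps_per_initiative = {}
for app, init, _status in changes:
    apps_per_initiative.setdefault(init, set()).add(app)
cross_cutting = [i for i, apps in apps_per_initiative.items() if len(apps) > 1]
```

With this sample data, no initiative is cross-cutting, which is the "relatively safe" situation described above.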
This new impact analysis view is available from the Release overview screen.
Properties are a big deal. They help us parameterize things so that a process works on various targets, environments, or versions. Ever forget the capitalization used in a property name? Or its scope? Or whether words were separated by a dash or a period?
Of course you haven’t had that problem, but people like me are forgetful. After I lobbied the developers intensely, they helped me out and added some auto-complete / suggestion magic when you start referencing a property. So nice.
Life just got a little easier.
We include a built-in package repository, CodeStation, with AnthillPro, uBuild and uDeploy, and those products also integrate with third-party repositories. We place so much emphasis on this capability because it is critical for safety, governance and audit.
To explain this, let’s first look at a simplified build and release lifecycle:
Developers submit their work to a source control system. A build is generated from that source and any dependency libraries retrieved by a tool like Maven or CodeStation. That build is submitted to test environments and certified. Finally, it is sent to production.
It is important to be able to answer the following questions:
1. What is in production?
2. Was it certified in test environments (and by whom)?
3. How do you know your answers to #1 and #2 are true?
To answer questions 1 and 2, you need an inventory of which version of each component is in each environment, or at least logging that records the version number. But to truly know that what is in production is what was tested, you have to ensure not just that the file names are the same, but that the exact same files were moved into each environment.
In order to know and prove that the files are the same, one must validate that they are bit-for-bit identical by comparing digital signatures or hashes at deployment time. It also helps to have the original file around in a tamper-resistant location. A good package repository, like the one in uDeploy, will provide that location, automatic signature generation at build time, and automatic verification at deployment, so that you know what is in production is exactly what was built from known source and tested in the prior environments.
For more information on package repositories, view our on-demand best practices lesson: 'The Role of Binary Repositories in SCM'
We’re often asked where to start when organizations want to standardize their deployment processes across environments. Starting with the deployment to development environments is common. Developers extend their continuous integration platforms toward continuous delivery organically, deploying to dev test environments for simple functional tests, and later to QA environments using similar approaches.
Many of these organic efforts stumble when they get closer to production. The production deployment often differs so much from the development deployments that the automation can’t be tweaked to meet the challenge. Common tripping points include clusters, load balancers, backups and databases (where dropping all the tables isn’t an option in production). This is so common that most continuous delivery efforts stop at a test environment and aren’t used for staging, production or disaster recovery environments.
Having seen teams reach this stumbling block over and over again, I’m increasingly convinced that for a common process to be created, you have to start with the production deployment process and work backwards from there. For databases, that means development deployments should use incremental updates rather than a drop-and-recreate approach. Having a load balancer in QA would be nice, but if it isn’t present, the automation should have a switch in it: “If there’s a load balancer, do this part; otherwise skip this part of the deployment.”
Essentially, the goal is to move from a situation where dev and production deployments are “different” to one where the dev deployment is treated as a simpler version (a subset) of the production deployment. Once a common process is agreed upon (even if it’s just on paper at this point), standardization and automation efforts can begin in earnest with a good chance of success.
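One way to express “dev is a subset of production” is a single process with guarded steps. Here is a minimal sketch; the environment names, flags, and step descriptions are invented for illustration, not taken from any particular tool:

```python
# Hypothetical environment descriptors: production has every component,
# dev runs the same process with some components absent.
environments = {
    "dev":        {"has_load_balancer": False, "db_strategy": "incremental"},
    "production": {"has_load_balancer": True,  "db_strategy": "incremental"},
}

def deploy(env_name):
    """Run one common process; guarded steps are skipped where a
    component is absent, so the dev run is a subset of production."""
    env = environments[env_name]
    steps = []
    if env["has_load_balancer"]:
        steps.append("drain load balancer")  # skipped in dev
    # Incremental updates everywhere: no drop-and-recreate, even in dev
    steps.append(f"apply {env['db_strategy']} database updates")
    steps.append("deploy application binaries")
    if env["has_load_balancer"]:
        steps.append("re-enable load balancer")
    return steps
```

Because every environment runs the same function, the dev run exercises exactly the steps production will use, minus the guarded ones, rather than a separately maintained script.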
While I cannot take credit for coming up with Continuous Integration (it can be traced back to timeless practices), or even for coining the term, I can take credit for creating one of the first Open Source (later turned commercial) CI servers: Anthill, released in July of 2001. That has given me an almost unique vantage point for watching the evolution of CI and seeing it take the industry by storm. One of the most interesting phenomena I have come across was seeing our customers use AnthillPro (back when it had build-only functionality) to automate deployments back in 2004. Keep in mind that this was in the days before the term Continuous Delivery was coined. At first it seemed to me that the wrong tool was being used for the job. But then I realized that the features present in any implementation of CI really advanced automation from the invention stage to the innovation stage.