Slides from our most recent webcast are available on Slideshare.
I recently presented 'Managing Build Time Dependencies.' If you want to learn more about this topic, you can watch it now.
Earlier in the week, I presented a webinar on managing build time dependencies, which is now available as a recording. In it, I showed a fancy dependency diagram and suggested that any of my AnthillPro customers could generate one by loading a custom report.
Naturally, you all emailed me asking for it. It is on our wiki: there are two reports at http://wiki.urbancode.com/AnthillPro/AnthillPro_Velocity_Reports, both with names starting with “Interactive Dependency Report”. The VML version is specific to Internet Explorer; the SVG version works for everyone else.
To apply the report, go to System > Reports and create a new Velocity report. Be sure to check the Build Life integration box and paste the code into the main report area.
In the example below, in order to build the very small program “unique”, main.o and strset.o need to be built first. Those in turn rely on the input files main.c, strset.h and strset.c.
Understanding this graph is important for performing an incremental build.
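As a rough illustration (not something AnthillPro generates for you), here is a small Java sketch of how an incremental build can walk that dependency graph and rebuild only what has changed. The file names mirror the “unique” example above, and the rebuild action is just a placeholder for invoking a compiler or linker.

import java.io.File;
import java.util.*;

// Minimal sketch of an incremental build decision over the "unique" example.
// The graph and the rebuild step are illustrative placeholders, not a real build tool.
public class IncrementalBuild {

    // Target -> the files it is built from (the dependency graph from the diagram).
    private static final Map<String, List<String>> GRAPH = Map.of(
        "unique",   List.of("main.o", "strset.o"),
        "main.o",   List.of("main.c", "strset.h"),
        "strset.o", List.of("strset.c", "strset.h")
    );

    public static void main(String[] args) {
        build("unique");
    }

    // Recursively bring dependencies up to date, then rebuild the target if any input is newer.
    static void build(String target) {
        List<String> inputs = GRAPH.getOrDefault(target, List.of());
        for (String input : inputs) {
            build(input);                       // make sure inputs are up to date first
        }
        if (inputs.isEmpty()) {
            return;                             // a source file; nothing to build
        }
        File out = new File(target);
        boolean stale = !out.exists();
        for (String input : inputs) {
            if (new File(input).lastModified() > out.lastModified()) {
                stale = true;                   // an input changed after the last build
            }
        }
        if (stale) {
            System.out.println("rebuilding " + target + " from " + inputs);
            // ... invoke the compiler or linker here ...
        } else {
            System.out.println(target + " is up to date");
        }
    }
}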
We include a built-in package repository, CodeStation, with AnthillPro, uBuild and uDeploy. They also integrate with third-party repositories. We place a lot of emphasis on this capability because it is critical for safety, governance and auditing.
To explain this, let’s first look at a simplified build and release lifecycle:
Developers submit their work to a source control system. A build is generated from that source, along with any dependency libraries retrieved by a tool like Maven or CodeStation. That build is submitted to test environments and certified. Finally, it is sent to production.
The following questions are important to be able to answer:
1. What version of each application is in each environment?
2. Is what is running in production exactly what was tested?
To be able to answer 1 and 2, you need an inventory of what version of each thing is in each environment, or at least logging that indicates the version number. But to truly know that what is in production is what was tested, you have to ensure not just that the file names are the same, but that the exact same files were moved into each environment.
In order to know and prove that the files are the same, one must validate that they are bit-for-bit identical by comparing digital signatures or hashes at deployment time. It also helps to have the original file around in a tamper-resistant location. A good package repository, like the one in uDeploy, provides that location, automatic signature generation at build time, and automatic verification at deployment, so that you know that what is in production is exactly what was built from known source and tested in the prior environments.
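As a rough sketch of what that verification amounts to, the Java snippet below computes the SHA-256 hash of an artifact at deployment time and compares it to the hash recorded at build time. The artifact name and expected hash value are made up for illustration; this is not uDeploy's actual implementation.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Illustrative sketch: verify that the artifact being deployed is bit-for-bit
// identical to the one produced at build time by comparing SHA-256 hashes.
public class ArtifactVerifier {

    public static void main(String[] args) throws Exception {
        // Hash recorded by the build (hypothetical value; normally stored in the package repository).
        String expectedHash = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b";
        Path artifact = Path.of("myapp-1.4.2.war");   // hypothetical artifact name

        String actualHash = sha256(artifact);
        if (!actualHash.equals(expectedHash)) {
            throw new IllegalStateException("Artifact has been altered since it was built: expected "
                + expectedHash + " but found " + actualHash);
        }
        System.out.println("Artifact verified; safe to deploy.");
    }

    // Compute the SHA-256 digest of a file as a lowercase hex string.
    static String sha256(Path file) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                digest.update(buffer, 0, read);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}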
For more information on package repositories, view our on-demand best practices lesson: 'The Role of Binary Repositories in SCM'
I’ve spent most of the last decade working on problems in build, deployment and release management. While automation has been a focus of mine, the hard part in these domains has always been dependency management.
Release day for many enterprise IT groups sees a number of application systems get updated. A great deal of coordination is required to make sure each phase of the release is executed by the right people, with dependencies between the applications accounted for.
These applications, in turn, are increasingly made of multiple runtime components. This leads to dependency management challenges when service-oriented architectures and the like fail to deliver on the promise of upgrading just a small piece of the system at a time. Instead, a change to one web service often cascades into updates to those that call it. Tests no longer validate that a single version of something works; rather, they validate that a web of dependent services delivers the desired functionality. Deciding what to promote across environments requires being very dependency-aware. Likewise, at deployment time, the various pieces of the system must be released in a coordinated fashion along with infrastructure changes.
As we look closer and closer, our attention turns to the makeup of the runtime component of the application. Breaking it apart, we are unlikely to see just a simple standalone chunk of compiled code. Rather, each runtime component is made of a combination of source code and dependencies on libraries, assemblies or headers. These dependencies can be on versioned system libraries, open source components, commercial libraries, or internally built components designed for reuse. The source code itself has interdependencies that are handled by the compiler/linker or a build script.
Despite what our friends and family tell us, we techies are good at hiding complexity. The truth is, the systems we release today are extremely complex, but everywhere I look I see the same pattern repeating itself: we mask that complexity by creating composite projects. The hard part of many of our build and release activities is keeping track of the components that make up the larger system. What version of this works with what version of that? What do we actually have in some environment? How do we get things delivered by different teams to work together? Over the next few blog entries, I’m going to look at build, deployment and release dependencies in turn.
An interesting question came across the AnthillPro mailing list a few days ago: how do you put in place a quota that limits how many builds and tests a single developer can run concurrently in our build farm? When builds and tests are somewhat costly, it’s reasonable to want to keep individuals from monopolizing a shared pool of machines.
In AnthillPro, the ideal way of restricting access to some resource is with a lock. Usually, a lock represents a shared resource, like a database or network deployer, that can only be used by one (or a handful of) processes at a time. In this case, what was needed was a restriction on “my concurrent processes”.
What ended up working was creating an AnthillPro “Lockable Resource” per user. Each user could then be assigned an individual maximum number of concurrent workflows to execute. The build and test workflows are then assigned a scripted resource lookup:
The script would be something like:
// Name the lock after the requesting user, so each user's workflows draw from that user's own quota.
return BuildRequestLookup.getCurrent().getRequesterName() + "-quota";