Enterprise organizations looking to adopt or scale their application of DevOps methodologies often have questions about how automated approvals could work in their environment. Fears around compliance, accountability, and visibility arise from years of managing these problems directly within the context of fully manual processes. But those fears can be addressed directly with a data-driven approach to change evaluation and approval.
A healthy CMDB, including an understanding of your business capabilities and service offerings aligned to ServiceNow’s Common Service Data Model, lays the foundation for fully auditable, compliant automated approvals. With that foundation in place, the integrations between ServiceNow DevOps Change Acceleration and your toolchain become the structure, and the data they produce becomes the keys.
The traditional process of approving a change leans heavily on manual involvement: time is spent reviewing what is changing, assessing risks, organizing dependencies, and scheduling around maintenance windows and change freezes, as well as sending communications, writing justifications, and providing the other context required for a change to proceed.
However, in an agile, CI/CD pipeline-driven world, this is not only incompatible with the pace of development that becomes possible; it is simply unnecessary, because the activities themselves generate all the information required to make these decisions. With consistent policies, that data can drive those decisions automatically.
First, the application in question is onboarded using an automated walkthrough to associate the project planning boards, code repositories, and pipelines used for its development. Next, this organizational layer, called the DevOps App, is tied to a Business Application configuration item. This one-time step establishes the integration between the application’s toolchain and ServiceNow.
In Jira, the defined work items and their links to broader Epics form the basis of a change’s justification: the need for the change is captured in the project planning tool, along with its place in the business’s strategy. Webhooks send these activities into ServiceNow, creating a direct link to the project’s plan.
Next, a developer begins their work in a Git repository. When committing code changes, the developer adds a reference to the Jira work item in the commit message. The commits are sent to ServiceNow, and via this reference, each commit is linked directly to the Jira work item it supports.
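The commit-to-story link hinges on a recognizable work item key appearing in the commit message. As a rough illustration only (this is not ServiceNow’s actual parsing logic, and the `PROJECT-123` key format is an assumption about a typical Jira project), extracting such a key might look like:

```python
import re

# Jira issue keys typically follow the PROJECT-123 pattern (an assumed
# convention for illustration, not ServiceNow's exact matching rules).
JIRA_KEY = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

def extract_work_items(commit_message: str) -> list[str]:
    """Return every Jira-style issue key found in a commit message."""
    return JIRA_KEY.findall(commit_message)

print(extract_work_items("PAY-142: add retry logic to payment gateway"))
# → ['PAY-142']
```

Because the key travels inside the commit itself, the link survives every downstream step, from the build to the final change request, without any manual bookkeeping.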
When the developer updates the codebase, a build runs in a Jenkins pipeline. With the ServiceNow DevOps Jenkins plugin, each step is tracked within ServiceNow, and the commits included in the build (as reported by Jenkins) are validated to ensure records of the work exist in ServiceNow. Artifacts or artifact packages are also sent from Jenkins as the unit of work to be deployed.
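Conceptually, that validation step is a set comparison: the commits Jenkins reports for the build are checked against the commits ServiceNow has already ingested from the Git integration. A minimal sketch of the idea, with hypothetical function names and data shapes (not the plugin’s API):

```python
def find_untracked_commits(build_commits: list[str],
                           servicenow_commits: set[str]) -> list[str]:
    """Return commits present in the build that ServiceNow has no record of.

    build_commits: commit SHAs Jenkins reports for this build.
    servicenow_commits: SHAs already ingested via the Git integration.
    """
    return [sha for sha in build_commits if sha not in servicenow_commits]

# A build is fully traceable only when every commit is accounted for.
missing = find_untracked_commits(["a1b2c3", "d4e5f6"], {"a1b2c3"})
print(missing)  # → ['d4e5f6']
```

Any commit that surfaces in the build without a matching record is a traceability gap worth resolving before the change reaches approval.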
Now, we’re ready to deploy our new code. However, instead of requiring someone to log into ServiceNow, manually create a change request, and provide complete and accurate information, the developer can run their deployment whenever the code is ready. The deployment pipeline communicates with ServiceNow to create a complete and accurate change request from the data generated throughout the development process, pauses the pipeline until approval, and then promotes the change to production.
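To make the idea concrete, here is a hedged sketch of how pipeline-generated data might be assembled into a change request payload. The field names and helper are purely illustrative; the actual records created by DevOps Change Acceleration follow ServiceNow’s change schema:

```python
def build_change_request(app, work_items, commits, artifact):
    """Assemble an illustrative change request payload from pipeline data.

    Field names here are hypothetical; they stand in for the change
    record DevOps Change Acceleration populates automatically.
    """
    return {
        "short_description": f"Deploy {artifact} for {app}",
        "justification": ", ".join(work_items),  # traceability back to Jira
        "description": (f"{len(commits)} commit(s) linked to "
                        f"{len(work_items)} work item(s)"),
        "type": "normal",
    }

payload = build_change_request(
    app="payments-service",
    work_items=["PAY-142"],
    commits=["a1b2c3", "d4e5f6"],
    artifact="payments-service-2.4.1.jar",
)
print(payload["short_description"])
# → Deploy payments-service-2.4.1.jar for payments-service
```

The point is that every field a human would otherwise type by hand is already present in the toolchain data, so the change request is both complete and consistent by construction.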
Because we have all the data on exactly what’s changing and why, we can understand the risk of the change and evaluate whether it should proceed with an automated or manual approval based on defined, consistent policies.
These policies generally include basics, such as requiring all commits to be linked to work items and requiring unit tests and code quality scans to pass with appropriate grades. They can also encode more nuanced determinations: the business criticality of the application service can be weighed against that of the business service it supports; configuration item relationships, along with incidents and vulnerabilities in that stream, can be considered; and with DevOps Config, key/value application and infrastructure configuration data can be assessed directly, from enforcing accurate hostnames and HTTPS to complex custom scripted evaluations against proprietary business data.
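The decision logic described above can be sketched as a small, readable policy function. Everything here, the field names, the grade thresholds, and the routing labels, is an assumption for illustration; it is not ServiceNow’s policy engine:

```python
from dataclasses import dataclass

@dataclass
class ChangeContext:
    """Data gathered by the toolchain integrations (illustrative fields)."""
    all_commits_linked: bool
    tests_passed: bool
    quality_grade: str       # e.g. "A".."F" from a code quality scan
    app_criticality: int     # 1 = most critical (assumed scale)
    open_vulnerabilities: int

def evaluate(change: ChangeContext) -> str:
    """Route a change based on declared, consistent policies (a sketch)."""
    # Hard gates: unlinked work or failing tests always require a human.
    if not change.all_commits_linked or not change.tests_passed:
        return "manual-approval"
    # Nuanced checks: quality and risk context narrow the automated path.
    if change.quality_grade in ("A", "B") and change.open_vulnerabilities == 0:
        # Highly critical apps still wait for a maintenance window.
        return "maintenance-window" if change.app_criticality == 1 else "auto-approve"
    return "manual-approval"

print(evaluate(ChangeContext(True, True, "A", 3, 0)))  # → auto-approve
```

Because the policy is declared once and applied uniformly, every change is judged by the same criteria, which is exactly the property auditors ask manual review boards to demonstrate.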
Depending on the outcome, a change may be able to go to production immediately, holding the pipeline for only as long as it takes to process policies and generate approvals, or it may require manual approval at one or more levels or be held until a maintenance window; best practices and the needs of your organization determine these paths.
Even if starting with complete change automation isn’t feasible in a specific environment, building the connections and data sets that allow it is an essential step toward gaining a holistic understanding of your changes and a solid foundation for robust automation. Using Change Registration, you can collect the full suite of DevOps data and create a record of the deployment without impacting the legacy change process, which allows an organization to see how policies and approval rules can be shaped strategically around the trends and requirements that arise.
At RapDev, we have years of experience automating change in tightly controlled environments and have seen what leads to success and the benefits these increases in velocity can bring. More data, more traceability, and faster, more agile change workflows result in lower MTTR, less reliance on tribal knowledge, and a better end-customer experience, as your applications can evolve rapidly to meet customer needs.
Building your foundation with a knowledgeable implementation partner like RapDev is the first step down the path of effective and agile automated change management, and we hope to help solve this for your organization as well.