Want to build quality software in a rapid, continuous and repeatable manner, all while saving time and money? Then it’s time to adopt DevOps.
While the amalgamation of software development and operations in itself sounds simple, merging the two parts of the business alone is not enough.
If you want to reap the full benefits of switching to a DevOps way of working, including reducing testing and development times by as much as 80%, cutting overall costs by as much as a third for one part of a project alone, and improving both software quality and time-to-market, then you need to combine more than just the Dev and the Ops.
A successful DevOps programme is about changing the mindset of the whole business. It brings together formerly siloed teams, such as developers, quality assurance (QA) teams, operations and other members of the business, to work as part of an integrated team to build quality software in a rapid, continuous, and repeatable manner.
This new mode of delivery requires all parties to shift their model of work to keep up with this fast-paced, innovative, and automated environment and to learn to take combined responsibility for both the process and the outcome.
This is obviously a big undertaking, and the key to success is understanding not only the roles that everyone has to play in the process, but also ensuring that the end goal is clear to all.
In reality, a successful DevOps programme widens the roles that all participants play in the software delivery lifecycle (SDLC), when compared to traditional waterfall or agile environments, and makes them all enablers of the DevOps delivery programme.
It removes the notion that QA, Dev or Ops are separate functions in the overall development lifecycle with their own set of roles and responsibilities and necessitates that the integrated teams sync-up and work together to achieve common goals.
Role of QA
With DevOps, QA is integrated within the cross-functional team and is involved with every aspect of software delivery, from requirements-gathering and system design, to the packaging and releasing of the software.
QA is no longer a discrete component of the SDLC chain solely focused on finding defects. Instead, its work is spread throughout the SDLC, with quality practices embedded within each phase of software delivery.
This is a major shift which makes the role of QA more critical than ever before.
In changing the traditional role of QA, DevOps expands the remit to include operational behaviours such as examining each requirement for completeness.
QA must start addressing the questions ‘are we building the right feature?’ and ‘are we building it correctly?’
For example, in addition to the functional aspect of a requirement, QA will also examine how that requirement could affect system performance or be monitored in production. It will then design the appropriate cases to test that behaviour.
This means QA will play an integral part in ensuring system reliability, stability and maintainability, and will incorporate the necessary test coverage from the outset of the project.
In addition, QA will be involved in defining and executing the non-functional tests alongside Dev and Operations teams. This is a departure from traditional QA practices where non-functional testing would typically be out of QA’s scope.
With DevOps, QA will need to push the automation boundaries from just functional regression testing to automating everything that will facilitate code passing more quickly through the pipeline.
The QA team must look to automate as many processes as is feasible, including test case generation, test data set creation, test case execution, test environment verification, and software releases.
All pre-testing tasks, clean-ups, and post-testing tasks should be automated and aligned with the continuous integration cycle.
Any changes related to software or system configuration should flow through a single pipeline that ends with QA testing.
Automation should also apply to all non-functional tests such as performance, stress, and failover and recovery. Additionally, the automation framework should be flexible, easy to maintain, and allow for integration directly into the development workflow.
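As a minimal sketch of what automating the pre- and post-testing tasks described above might look like, the Python snippet below wraps a test run in a context manager that verifies the environment, seeds test data beforehand, and cleans up afterwards. All names here (`services_up`, `data_loaded`, `run_smoke_test`) are illustrative assumptions, not a real framework API.

```python
from contextlib import contextmanager

@contextmanager
def managed_test_run(environment):
    """Automate pre-testing setup and post-testing clean-up around a test run."""
    # Pre-testing task: verify the environment before any test executes.
    if not environment.get("services_up"):
        raise RuntimeError("environment verification failed")
    # Pre-testing task: seed the test data set.
    environment["data_loaded"] = True
    try:
        yield environment
    finally:
        # Post-testing task: clean up the seeded data, even if the test fails.
        environment["data_loaded"] = False

def run_smoke_test(environment):
    # The test itself runs inside the managed setup/clean-up cycle.
    with managed_test_run(environment) as env:
        return env["data_loaded"]
```

Hooking a wrapper like this into the continuous integration cycle means every test run gets the same setup and clean-up, rather than relying on manual steps.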
Role of the business
The same goes for members of the business. Traditionally the business unit was responsible for making strategic investment decisions and managing the financial aspects of projects and programmes.
While implementing DevOps doesn’t change this, it does require the business to change the way it operates.
To implement DevOps successfully, the business needs to change its approach towards production, budgeting and funding projects, and even adapt its team structure.
This is yet another example that DevOps is about a change in mindset, not just the adoption of a new methodology. In the traditional software delivery framework, the business works to produce the project scope, develop the project management plan, and define the business and functional requirements.
This work is done at the start of the project and follows a strict and sequential process across three distinct phases: project concept, planning, requirements gathering.
The output of the first phase is used as the input into the following phase, and each phase needs to be complete before the project can progress further through the SDLC.
The challenge with working within the traditional framework is that the business presumes that everything it defines at each phase will still be applicable throughout the entire lifecycle of the project.
It leaves little flexibility for the delivery team to adjust to any unexpected changes of the project’s parameters and locks release cycles and deliverables, which if missed, could have financial consequences.
By contrast, within the DevOps framework, the business becomes the product owner and moves from working independently to working collaboratively within an integrated, multi-skilled DevOps team.
As product owner, the business takes ownership of the end-to-end software delivery process and sets the direction of the team in such a way that ensures workflow runs smoothly.
It also retains the responsibility of ensuring that every member of the delivery team has a common and clear understanding of the features being built and how each feature delivers value to the end-product.
Spreading the responsibility
As already discussed, one of the biggest changes with DevOps is that each party is involved in the entire release planning process and takes equal ownership of developing a single release plan that aligns the teams’ workstreams.
The dynamics of release planning change from ‘Dev complete’ and ‘QA complete’ to ‘sprint complete’, where the sprint is only considered complete once all of the tasks within it have been successfully tested.
One of the key things is to ensure all parties are involved in defining the project strategy and implementation roadmap.
In the past, each group would typically develop strategies catered specifically to their own responsibilities (Dev strategy, QA strategy, Ops strategy), while with DevOps, the product owner is accountable for producing a common strategy and implementation roadmap.
The team collaboratively sets the project’s goals, determines actions to achieve the goals, and mobilises the resources to execute the actions.
A good strategy creates a common (unbiased) view of what the team is trying to implement, how they are going to execute, how each member of the delivery team will be involved in the implementation, and the associated timelines.
It pushes each member of the team to think critically about their role in implementing the strategy and instils accountability for each in meeting their goals.
The reality is that things will change throughout the project and as such, the product owner must build flexibility into the strategy to seamlessly adjust the implementation roadmap along the way.
Rather than writing complete detailed specifications and then reviewing them with the delivery team, the business produces a backlog of feature stories and works collaboratively with the entire DevOps team to define the desired and non-desired behaviour of each feature.
Each story must include examples, pre- and post-conditions, acceptance criteria, and non-functional requirements that guide coding and testing. Having all this information captured as part of the feature leads to fewer coding defects and improved-quality software, while also shortening software development cycles.
When defining feature stories, it is essential that every member of the delivery team has all the information they need in order to do their jobs. This means broadening the scope of the features to be defined to include non-functional requirements that address the needs of operations, compliance, and information security.
Requirements should cover system reliability, performance, scalability and supportability.
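To make this concrete, a feature story’s acceptance criteria and pre- and post-conditions can be captured as executable checks the whole delivery team agrees on. The sketch below uses a hypothetical ‘transfer funds between accounts’ story; the function name, account model, and rules are illustrative assumptions, not a real system.

```python
def transfer(accounts, src, dst, amount):
    """Move funds between accounts; refuse the transfer if it breaks a rule."""
    # Pre-condition from the story: the source account must cover the amount.
    if amount <= 0 or accounts[src] < amount:
        return False
    accounts[src] -= amount
    accounts[dst] += amount
    return True

# Acceptance criteria from the feature story, written as runnable checks:
accounts = {"A": 100, "B": 0}
assert transfer(accounts, "A", "B", 40) is True     # happy path succeeds
assert accounts == {"A": 60, "B": 40}               # post-condition: balances updated
assert transfer(accounts, "A", "B", 1000) is False  # insufficient funds rejected
assert accounts == {"A": 60, "B": 40}               # balances untouched on failure
```

Capturing criteria in this form gives developers and testers the same unambiguous definition of ‘done’ for the feature.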
The analysis process goes from defining all the requirements up-front, to the ‘just in time’ concept, where requirements are only defined as they are needed; just in time is an analysis approach drawn from lean methodology and helps ensure there is no waste in the analysis process.
With DevOps, work is managed through a Kanban board which provides full transparency of the project work, including work in process, project status, team progress, and team capacity.
This level of visibility allows delivery teams to rapidly adjust to any changing requirements. All change requests are filtered through the Product Owner, which allows for a single, central, and controlled change process that prevents any change in priorities from affecting the team’s objectives.
This differs from the traditional method where requests would usually come in from multiple sources, cause conflicts between deliverables, and contribute to resource constraints.
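A Kanban board of the kind described above can be sketched in a few lines: columns with work-in-process (WIP) limits, where overload is refused and made visible rather than silently queued. This is an illustrative model only; real teams would use a dedicated tool, and the column names and limits here are assumptions.

```python
class KanbanBoard:
    """Minimal Kanban model: named columns, each with a WIP limit."""

    def __init__(self, wip_limits):
        self.wip_limits = wip_limits
        self.columns = {name: [] for name in wip_limits}

    def add(self, column, item):
        # Refuse work that would exceed the column's WIP limit, so an
        # overloaded team is visible instead of quietly over-committed.
        if len(self.columns[column]) >= self.wip_limits[column]:
            return False
        self.columns[column].append(item)
        return True

    def status(self):
        # Full transparency: current load versus capacity per column.
        return {name: f"{len(items)}/{self.wip_limits[name]}"
                for name, items in self.columns.items()}
```

A product owner filtering all change requests through a single board like this is what keeps the change process central and controlled.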
Automation & continuous testing
DevOps relies heavily on automation and continuous testing, and the level of automation directly corresponds to reducing the overall time-to-market cycles. Continuous testing is a means of identifying and reporting business risks associated with a software release candidate as quickly as possible.
This means that with DevOps, testing starts early and is executed continuously throughout every stage of software delivery in order to prevent issues from propagating to the next stage.
It forces developers to test every piece of code within the development phase, quickly identifying and remediating issues which would otherwise be undetected until later in the SDLC.
It also means that automation tasks must be embedded within each aspect of the software delivery pipeline, including building, packaging, configuring, releasing, and monitoring code.
The scope of automation testing should be expanded to include deployment scripts, backout scripts, configuration management, non-functional tests, checkouts, and monitoring components.
While having a comprehensive automation testing suite reduces the testing cycles, continuous testing is equally important because it provides early feedback on the overall quality of the software deliverable.
While both automation and continuous testing are critical to the DevOps practice, it’s the manner in which the two frameworks are integrated that generates the best results.
In particular, automation tools can be applied to remove dependencies that prevent unrelated tasks being carried out alongside each other.
For example, provisioning different testing environments to support parallel testing efforts, so that different groups can conduct their testing without having to wait on environment availability, or developing test harnesses (test simulators) that mimic components to allow system integration testing to be carried out much earlier in the SDLC.
Although these might not be considered automation in the traditional sense, they each provide a means of aligning software delivery teams to a shift-left way of working, which means carrying out tasks as early as possible in the development process and therefore reducing the risk for costly and time-consuming fixes late in the cycle.
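A test harness of the kind mentioned above can be as simple as a stub that mimics a downstream component’s contract, so that integration testing can begin before the real component is available. The ‘settlement service’ below is a hypothetical example; its name, methods, and response format are assumptions for illustration only.

```python
class SettlementServiceStub:
    """Test harness standing in for a real downstream settlement service."""

    def __init__(self):
        self.received = []  # record calls so tests can verify interactions

    def settle(self, trade_id, amount):
        # Mimic the real service's contract: accept the request, return an ack.
        self.received.append((trade_id, amount))
        return {"trade_id": trade_id, "status": "SETTLED"}

def book_and_settle(trade_id, amount, settlement_service):
    """System under test: books a trade, then settles it via whichever
    service implementation it is given (real or simulated)."""
    ack = settlement_service.settle(trade_id, amount)
    return ack["status"]
```

Because the system under test receives the service as a parameter, the stub can be swapped for the real component later without changing the test logic, which is what lets integration testing shift left in the SDLC.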
It is unrealistic to automate everything all at once, so the overall automation strategy should factor in an approach to prioritise automation projects by considering the return on investment (ROI) and the metrics that will be used to evaluate the actual returns on each automation investment.
The advantage to this approach is that automation improvements can be gradually phased into the backlog, evaluated and prioritised based on needs.
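One way to sketch this prioritisation is a simple ROI score: hours saved per release, scaled by release frequency, divided by the hours needed to build the automation. The formula, the candidate names, and all the figures below are illustrative assumptions, not a standard metric.

```python
def prioritise(candidates, releases_per_year):
    """Rank automation candidates by a simple, assumed ROI metric:
    (hours saved per release * releases per year) / hours to build."""
    def roi(candidate):
        yearly_saving = candidate["hours_saved_per_release"] * releases_per_year
        return yearly_saving / candidate["build_cost_hours"]
    return [c["name"] for c in sorted(candidates, key=roi, reverse=True)]

# A hypothetical automation backlog with assumed effort/saving figures:
backlog = [
    {"name": "regression suite",  "hours_saved_per_release": 40, "build_cost_hours": 200},
    {"name": "env provisioning",  "hours_saved_per_release": 10, "build_cost_hours": 20},
    {"name": "release checkouts", "hours_saved_per_release": 5,  "build_cost_hours": 50},
]
```

Re-running a ranking like this as figures change is what allows automation improvements to be phased into the backlog and re-prioritised based on need.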
The DevOps value proposition lies in its ability to improve the overall service delivery of a product to the end user by integrating cross-functional teams into a collaborative delivery team with common goals.
As part of the process, the whole business transitions from working in a rigid framework, where the entire scope of the project is defined up front, to an approach where the scope is defined in small targeted sections.
This enables work to be prioritised as appropriate, reduces the time taken to complete tasks and allows the earlier discovery of problems, making them simpler, cheaper and less time-consuming to fix.
Put simply, DevOps removes walls, gates and transitions to increase accountability for the full end-to-end software development process. It requires cooperation and collaboration from across the whole business and in turn brings a raft of benefits, including:
• Removing the potential for human error by increasing automation
• Producing results quickly and clearly
• Saving significant amounts of time and money
• Avoiding potential reputational damage from delays and errors.
DevOps originally developed as a backlash against the extreme segregation which stemmed from fears of cross-contamination between the different phases and expertise levels of the software development life cycle, and in particular, concerns over regulatory restrictions and issues with some individuals having access to systems they should not.
In the past, it had been known for software to be released into production unchecked and content to be updated by individuals without the necessary expertise, which, when combined, caused serious errors and led to trading losses.
These isolated working methods meant that no one was accountable for the full end-to-end process, and it eventually became apparent this was lengthening the time taken to get products to market, increasing costs and raising the likelihood that issues and defects would be found once the programme was in production.
By contrast, the ethos of ownership and accountability instilled by the DevOps methodology helps to create a need to develop the best software system as quickly and efficiently as possible.
By adopting the DevOps approach, banks and other institutions can save themselves considerable amounts of time and money, and rest assured that the software is of superior quality, because the whole team is fully aware of what is happening at each stage of the process.
Other advantages of implementing the DevOps methodology, include:
• An increase in confidence in delivery as fewer defects are introduced into production
• Integrated teams requiring less manpower
• Lower costs to rectify bugs as they are identified earlier in the lifecycle
• More time for project resources to focus on delivery
• Faster releases thanks to standardised and reusable automated tests
• Ability to divide releases into modules, giving less opportunity for error.
To conclude, while the act of combining development and operations sounds relatively straightforward, the DevOps methodology cannot be properly integrated within a business, and therefore reap the available benefits, without a change in mindset and the promotion of accountability across the whole organisation.
To put it simply: you build it; you break it; you fix it. With DevOps there is no place for passing the buck along the software development life cycle.
The successful implementation of DevOps means the successful integration of business users, developers, test engineers, security and operations into a single delivery unit focused on meeting customer requirements.
Iya Datikashvili, director, Brickendon