Testing and quality assurance are rarely seen as an investment, which makes a conversation about return on investment (ROI) seem somewhat inappropriate. At best, testing is managed as an expense; at worst, it is seen as a grudge purchase, much like insurance. However, according to the World Quality Report, this ‘expense’ accounted for 26% of total IT budget expenditure in 2017. Testing is finding its rightful place in the sun. The business challenge now is how to demonstrate the value that can be derived from a testing budget.
One important question to be considered is: ‘what can we, as a testing community, do to not only show the ROI of QA but also to demonstrate improvements in the effectiveness of the QA process?’
I believe a key driver of value creation in QA is test automation. Correctly applied, test automation has the range and reach to demonstrate ROI in the QA space.
Despite these benefits, test automation is currently under-exploited in QA and testing, with the World Quality Report citing the average level of automation for test activities at around 16%.
Testing budgets to show ‘return’
One possible reason for the lack of automation take-up is that there is no clear and articulate way of demonstrating the ROI against the spend for automation. How do we define return on investment if we cannot quantify any revenue directly related to this activity? To put it slightly differently, in a cost centre where there is no profit realised, how can there be a clear return? Or are we simply referring to savings that can be made in the testing budget to show ‘return’?
In my opinion, there are three primary ways that clients perceive value in the context of testing. The first is to pay less for the same volume of output. The second is to get more work done for the same budget. And the third is to pay more and get more.
The first step in defining ROI is to look at the elements that make up the base cost of testing. These are the costs incurred during environment set-up and licensing for test technology. However, the bulk of the testing cost still lies in the resources: the test analysts and the test leads.
At this point, it should be noted that the true cost of a tester does not only include their direct salary or contracted rate but also indirect items: desk cost, leadership cost, capacity management and downtime, upskilling and continued training. This is where we can make a saving through carefully planned and executed automation. Enter automated testing.
So, is this where we lose our jobs?
To be clear, automation does not replace testers. It needs to be seen as a tool to help make testers more effective, much like the tractor did not replace the farmer. It allowed the farmer to become more effective and farm over a much wider area.
Automation aims to reduce the manual or repetitive work associated with testing. This repetitiveness is found mainly in the regression component of the test process. So, in the current test life cycle, the tester in a sprint needs to not only test new functionality but also has to work backwards through the entire system each time (regression testing) to ensure no bugs have been inadvertently introduced. Through automation, however, we can mechanise the regression testing component, freeing up capacity to focus on newer functionality in each sprint.
This aligns perfectly with agile concepts such as DevOps and Behaviour Driven Development, where the tester is now seen as a fully functioning part of the Three Amigos – business analyst, quality assurance and developer.
Consider the ROI at this point:
- Through automation, the tester can now focus more on new functionality
- The reduced focus by functional resources on regression enables the overall development cycle to move forward, bringing fresh energy and maximising the strengths of your team
- Automation of regression testing improves the quality of testing, as computers have no issue with monotony. Testers repeating the same test cases are bound to make a mistake or two through boredom or rushing to complete the regression testing to focus on new functionality
- Now that the testers have been freed up from most of the regression testing through the automation process, they have an opportunity to extend their coverage and examine a broader range of functionality through items such as exploratory testing and static testing.
Manual testing vs. automated testing
Replacing manual testing with automated testing reduces the pain of regression testing. When we automate the process, we provide consistency. Plus, it becomes feasible to run the regression more often, at a lower cost.
Automation improves overall delivery by reducing test cycle times, leading to shorter sprints in the DevOps world, which ultimately improves the time-to-market for software. It also overcomes two challenges of the regression cycle: its repetitiveness, and tests that are generally harder to execute manually.
By automating our testing, it is entirely possible to provide an increase in test coverage without increasing test cycle times, scaling testing across large numbers of browsers, devices and platforms without increasing associated costs.
If the benefits of automation – improved delivery, improved quality and improved coverage – are to be realised, this will be achieved at an initial cost. The cost of automation is the writing of the test scenarios in a scripting language. This is usually carried out in the context of a development framework, which compiles and runs these scripts on command for regression purposes.
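To make the scripting cost concrete, here is a minimal sketch of what an automated regression check might look like, using Python's standard `unittest` framework. The `apply_discount` function and its expected values are hypothetical stand-ins for any existing behaviour covered by a regression pack; the point is that, once written, these checks re-run on every build at no extra manual effort.

```python
import unittest

# Hypothetical application function under test -- a stand-in for
# any existing behaviour the regression pack must protect.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    """Scripted regression checks the framework runs on command."""

    def test_standard_discount(self):
        self.assertEqual(apply_discount(100.0, 15), 85.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

if __name__ == "__main__":
    unittest.main()
```

In practice, a development framework (for example, one built on pytest, Selenium or similar tooling) compiles and schedules hundreds of such scripts, which is where the initial scripting investment is incurred.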
ROI = the benefits
The cost of the automation scripting needs to be offset against the benefits quoted above. This cost can be seen as the investment component of an ROI calculation. The overall ROI = the benefits of the automated regression testing minus the cost of investment to script the tests in the automated language.
Let us also pause to remember that not all ROI can be quantified. Qualitative ROI describes the hard-to-quantify benefits of technology for an organisation – and for that organisation’s customers – and should always be taken into consideration. In the case of test automation, aspects such as customer satisfaction, improved efficiency, time-saving and internal perceptions of the automated testing – ‘will this be good for our business?’ – are as important as the quantifiable aspects.
There is a clear recognition that not all testing can be automated. Automation’s primary value driver is the reduction of manual regression testing. In addition, and critically, the tester’s performance and throughput will improve substantially when they utilise automation as one of their primary tools.
With automation correctly implemented, its quantitative and qualitative ROI cannot be ignored. With regression testing, processes and systems become more testable over time, delivering a better quality, faster-to-market product. If we offset delivery improvement, quality of testing and extension of the coverage against the time and cost to implement the automation, there is a strong case for this component of quality assurance.
DVT is a software development and testing company that focuses on digital transformation technology solutions for clients globally. Its services include custom software development for mobile, web and traditional platforms, software quality assurance, automated regression testing, UX/UI design, cloud application services, DevOps consulting and data & analytics solutions, as well as Agile training and consulting. Founded in 1999, DVT has grown to over 700 staff with offices in the UK (London) and South Africa (Johannesburg, Centurion, Cape Town and Durban). Visit DVT at www.dvt.co.uk
Written by Bruce Zaayman, Director & Client Engagement at DVT