90% of nothing valuable is still nothing, yet there is a misguided notion that 100% coverage is some marvellous ideal to achieve, especially with automation coverage, while forgetting actual hands-on testing.
Everything is rarely relevant all the time. That would only be true if 100% of the application were undergoing change every time, but in reality, unless it’s some structural or fundamental change to the way an application behaves, the affected area is typically much closer to 5% (with perhaps another 5% for other areas it crosses over with).
So, achieving that remaining 90% in these situations in a speedier fashion means you’ve potentially achieved nothing, just a whole lot faster. Is that really efficiency, especially whilst something is still going through the iterative process of development?
My version of efficiency puts risk management at the forefront of deciding what to cover, contextually analysing what is appropriate and acting on that. In practice, that usually translates to covering, first and foremost, the area under test, and then any related areas that may have been touched by the changes. That may mean only needing to cover about 10% of what your application does.
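To make the idea concrete, here is a minimal sketch of that selection step. The area names, the `RELATED` map, and the `areas_to_test` helper are all hypothetical illustrations, not a real tool: the point is simply that a change is expanded to the areas it touches, plus their known neighbours, rather than to everything.

```python
# Hypothetical map of application areas to the related areas a change
# in them may ripple into (illustrative names only).
RELATED = {
    "checkout": {"basket", "payments"},
    "basket": {"checkout"},
    "search": set(),
}

def areas_to_test(changed_areas):
    """Return the changed areas plus any directly related areas."""
    selected = set(changed_areas)
    for area in changed_areas:
        selected |= RELATED.get(area, set())
    return selected

# A change to checkout pulls in basket and payments, but not search.
print(sorted(areas_to_test({"checkout"})))
```

Even in a sketch this simple, most changes select a small fraction of the map, which is exactly the 10%-ish figure described above.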
The result of this approach? The areas with the most notable risks are covered, issues are identified, and regressions are minimised. Does it eliminate risk? No, but in reality, even those theoretical ‘100% coverage’ suites will still have gaps; after all, they are just as fallible as the code they are run against.
Contextually, where it is a major feature (with many touch points), this may expand out to 30-40% of the application (typically), and the coverage area selected should reflect this. Only where it’s something more at an ‘engine’ level of an application is it likely to require a greater level (and even then, covering all its touch points may only require 90% coverage, not 100% of the application).
The less time the automation suite spends executing, the faster the feedback loop for developers. Likewise, the more maintenance and development is focused on just what is related, the less time is consumed.
When it comes to release time, sure, you can run your whole suite if you want that added reassurance, and it may pick up some satellite regression in a completely unexpected area of the application. But it means you haven’t otherwise cost yourself that time on an ongoing basis, where the probability of that added value is low.
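One way to express that "focused per change, full at release" policy is shown below. The `CI_RELEASE_BUILD` environment variable and the suite names are assumptions for the sake of illustration; real pipelines would hook this into their own triggers.

```python
import os

def suites_to_run(changed_areas, is_release):
    """Run everything at release time; otherwise only the risk-focused suites."""
    if is_release:
        return ["full-regression"]
    return sorted(f"{area}-suite" for area in changed_areas)

# Hypothetical flag a CI system might set on release builds.
is_release = os.environ.get("CI_RELEASE_BUILD") == "true"
print(suites_to_run({"checkout", "basket"}, is_release))
```

Day to day, only the per-area suites run; the full regression pass is reserved for the one build where its reassurance is actually worth the wait.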
If you have any reasonably sized application and a typical CI/CD pipeline, then consider the time spent waiting for each build to complete, multiply it by the number of times something is built, and you start to see the amount of time you might be wasting through this lack of a focused approach.
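The back-of-the-envelope maths is worth doing with your own numbers. The figures below are purely illustrative assumptions, not measurements:

```python
# Illustrative figures only: substitute your own pipeline's numbers.
full_suite_minutes = 45      # full regression suite per build
focused_suite_minutes = 8    # risk-focused subset per build
builds_per_day = 20
working_days_per_month = 20

def monthly_wait_hours(minutes_per_build):
    """Total hours spent waiting on builds per month."""
    return minutes_per_build * builds_per_day * working_days_per_month / 60

saved = monthly_wait_hours(full_suite_minutes) - monthly_wait_hours(focused_suite_minutes)
print(f"Roughly {saved:.0f} hours of pipeline wait saved per month")
```

With these assumed numbers the difference runs into hundreds of hours a month, which is the scale of waste the paragraph above is pointing at.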
I am aware that in certain regulated industries, having discretion over what to focus on may not be compliant with certain regulatory requirements. At that point, the reliability and execution speed of the code, infrastructure-related improvements, and better education may be the only options at your disposal.
I am not suggesting that the above measures aren’t valuable in their own right; they are. But process-related improvements can trump code- and infrastructure-related improvements, too.
Likewise, process improvements don’t just sit within our grasp, but with the team in general, whether the issue is one of communication, understanding, or something else. This is where direct collaboration between devs and testers can help expedite how quickly an issue is turned around, and provides a direct route for clarifying any misunderstandings.
I also believe that in making quality something owned by the whole team, the returns from that alone can be substantial. A developer who takes greater ownership of the quality of what they are writing, and who has a better, more in-depth understanding of the system they are working with, can end up producing code that is more reliable and less bug-prone, and which then requires less time to test, and ultimately release, too.