When your test suite takes longer to run than the actual application: 'Something's not right here...'

Since our forum users' experience in testing varies, test efficiency can be a great topic for you to support others with, or vice versa.

How have you optimized your testing strategy to improve efficiency without sacrificing quality? What challenges have you faced, and what tactics have you found most effective in balancing these priorities? Can you share any specific examples of tools, processes, or approaches that have helped you achieve this balance?

1 Like

Sorry, I don’t see how greater speed leads to greater efficiency.

The best ways to achieve efficiency gains:

  1. better educate test engineers (and programmers)
  2. reduce TCO ("total cost of ownership"), i.e. reduce infrastructure and the number of pipeline "moving parts"
1 Like

Ahhaaa … ok.
So, when I started as a QA engineer, I was part of a team of 7 members, paired with another team of 7 members.
A mix of QA and devs.

But as the project evolved … less money and more responsibilities, thanks to the project management.
Of course, my (personal) target was DevOps work, since QA was boring, but I was hired as QA because that was the position available.

So after one year we had a team of 3 in QA, handling everything mentioned above, mostly me doing the pipeline work and whatever else was CI/CD related.

But for the same money as in my initial contract.
So I moved to another company.

The solution?
Valuable members of your team should be properly motivated.
Period!

4 Likes

I am not a tester myself so I don’t think I am the right person to comment on topics such as these, but many of our members have expressed their own take on the concept of Testing Efficiency in our recent giveaway.

For example, @guy.mason’s take on Testing Efficiency is rather insightful…


@guy.mason Feel free to share more with us about how you have implemented your own take on Testing Efficiency within your testing projects and strategy… Or maybe just efficiency in other aspects of your work life :sunglasses:

1 Like

Is that a problem of team management that mixes the roles of QA and devs? I am also studying for a new role (my professor suggested it): Software Testing Engineer. Will that be more interesting?

1 Like

Nope, that is fine.
Having a close relationship between the QA and dev teams is highly recommended.
(It also depends on the team size: in Agile, particularly Scrum, a team larger than 7 people can become hard to manage, so in that case it's better to split it across two or three team leads, as needed.) The teams still need a close relationship, so skilled team leads are a must, to act as the glue across teams.
Sitting together, they can solve issues faster than raising a ticket, waiting for feedback, etc.
Since we were ~14 people at the beginning, we split into two teams, one for backend (APIs, DBs, etc.) and one for 'product' (frontend), both a mix of QA and devs.

The actual problem came from the BA level and above.
Mostly organizational and management issues (e.g. lack of a Product Owner, cost reduction and therefore team reduction while the workload increased or at best stayed the same, and so on).
This put additional pressure on both the dev and QA teams.
Nobody can be efficient in such an environment.
So … I just quit (some other devs and QA engineers did too).
A few months later the project died … which was predictable.

The only good part of this story is that the team reduction motivated me to implement automated tests (when I was hired we did only manual testing) without asking for approval or waiting for a particular framework to be agreed on by managers. Katalon was one of the tools that helped me achieve this. And being familiar with AQA opened the door for me at the new company.

1 Like

90% of nothing valuable is still nothing, yet there is some misguided notion that 100% coverage is some marvellous ideal to achieve, especially with automation coverage, forgetting actual hands-on testing.

Everything is rarely relevant all the time. Unless 100% of the application is undergoing change every time, or it's some structural or fundamental change to the way the application behaves, the reality is that the figure is typically much closer to 5% (with another 5% for other areas it may cross over with).

So, covering that remaining 90% in these situations in a speedier fashion means you've potentially achieved nothing, just a whole lot faster. Is that really efficiency (especially whilst something is still going through the iterative process of development)?

My version of efficiency puts risk management at the forefront of what to cover, and contextually analyses what is appropriate, and then utilises this. What that usually translates to is covering first and foremost the area under test, and then any related areas that may have been touched by the changes performed. However, this may then translate to only needing to cover about 10% of what your application does.
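
To make that concrete, here is a minimal sketch (in Python, assuming a git repository and a directory-per-area test layout) of what "run the area under test plus whatever the change touched" can look like in a pipeline. The directory names and the area-to-tests mapping are purely hypothetical illustrations, not anyone's actual setup:

```python
# A minimal sketch of change-based test selection. The area-to-tests mapping
# below is hypothetical, purely for illustration; a real project would
# maintain its own mapping or lean on a coverage-based selection tool.
import subprocess

AREA_TO_TESTS = {
    "src/checkout/": "tests/checkout",
    "src/payments/": "tests/payments",
    "src/search/": "tests/search",
}

def changed_files(base="origin/main"):
    # Files touched by the current change set, relative to the base branch.
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def select_test_paths():
    selected = set()
    for path in changed_files():
        for area, tests in AREA_TO_TESTS.items():
            if path.startswith(area):
                selected.add(tests)
    # Nothing matched? Treat it as a structural change and fall back to everything.
    return sorted(selected) or ["tests"]

if __name__ == "__main__":
    # e.g. feed the result to your runner: pytest $(python select_tests.py)
    print(" ".join(select_test_paths()))
```

The fallback to the full suite acts as the safety valve for structural changes that don't map cleanly to a single area.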

The result from this approach? The areas with the most notable risks are covered, issues are identified, and regressions are minimised. Does it eliminate risk? No, but in reality even those theoretical '100% coverage' suites will still have gaps; after all, they are still just as fallible as the code they run against.

Contextually, where it is a major feature (with many touch points), this may then expand out to 30-40% (typically) of the application, and the coverage area selected should reflect this. Only where it's something more at an 'engine' level of the application is it likely to require a greater level of coverage (…and even then, it may only require 90% coverage, not 100% of the application, to cover all its touch points).

The less time the automation suite spends executing, the faster the feedback loop for developers. Likewise, the more focused the maintenance or development is on just what is related, the less time is consumed.

When it comes to release time, sure, you can run your whole suite if you want that added reassurance, and it may pick up some satellite regression in a completely unexpected area of the application, but it means you haven't cost yourself that time on an ongoing basis, for a low probability of there even being that added value.

If you have any reasonably sized application and a typical CI/CD pipeline, then when you consider the time spent waiting for each build to complete and multiply it by the number of times something is built, you start to see the amount of time you might be wasting due to the lack of a focused approach.
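
To put rough numbers on that waiting time (all figures made up purely for illustration, not from any real project):

```python
# Back-of-the-envelope illustration with made-up numbers: a 25-minute full
# suite versus a 5-minute focused suite, on a branch built 8 times a day.
full_suite_min = 25
focused_suite_min = 5
builds_per_day = 8
working_days_per_month = 20

wait_full = full_suite_min * builds_per_day * working_days_per_month        # 4000 min/month
wait_focused = focused_suite_min * builds_per_day * working_days_per_month  # 800 min/month

print(f"Waiting on the full suite:  {wait_full / 60:.0f} hours/month")
print(f"Waiting on a focused suite: {wait_focused / 60:.0f} hours/month")
print(f"Feedback time reclaimed:    {(wait_full - wait_focused) / 60:.0f} hours/month")
```

Even with modest numbers like these, the difference adds up to several working days of faster feedback every month.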

I am aware that in certain regulated industries, having the discretion on what to focus on may not be compliant with certain regulatory requirements, at which point, the reliability and execution speeds of the code, infrastructure related improvements, and better education may be the only options at your disposal.

I am not suggesting that the above measures aren't valuable in their own right; they are. But process-related improvements can trump code- and infrastructure-related improvements, too.

Likewise, process improvements don't just sit within our own grasp, but with the team in general, whether the issue is communication, understanding, or something else. This is where direct collaboration between devs and testers can expedite how quickly an issue is turned around, and provides a direct route for clarifying any misunderstandings.

I also believe that when quality is something owned by the whole team, the returns from that alone can be substantial. A developer who takes greater ownership of the quality of what they are writing, and who has a better and more in-depth understanding of the system they are working with, can end up producing code that is more reliable and less bug-prone, and which then requires less time to test and, ultimately, release.

2 Likes