Best practices for retesting strategies

Are there any best practices for retesting strategies to deal with test flakiness that can be easily implemented in Katalon Studio? Please share!

For example, I have a test suite consisting of circa 15 test cases. What usually happens when I run the TS is that two or three TCs fail because of network load issues or timeouts, but when I run them a second time they pass. I saw that console mode has the option of repeating only the failed tests. So, my idea would be something like this (see the command sketch after the list):
- run the TS; let's say 3/15 TCs fail
- re-run the three failed TCs; say 2/3 fail again
- re-run the two remaining TCs; those that fail again are really failed
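
Something like this in console mode, I guess (just a sketch: the project path and suite name are placeholders, and the retry flag names have changed across Katalon versions, with newer releases using `-retryStrategy`, so the safest route is to generate the exact command from Katalon Studio's own Build CMD dialog):

```
katalonc -noSplash -runMode=console ^
  -projectPath="C:\Projects\MyProject.prj" ^
  -testSuitePath="Test Suites/MyTS" -browserType="Chrome" ^
  -retry=2 -retryFailedTestCases=true
```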

What do you guys think?


Hi mate,

Great question – stuff I’ve been thinking about, too.

I did try the repeat-failed setting for a short while… but it seemed odd to me. It created entirely separate reports, which was weird. I didn't really need it working, so I moved on without finding any resolution (it may have been something I did wrong; it was my early days :wink:).

But yeah… I'd be interested in any and all responses to the same question. Anyone out there care to speak up?

Russ

Even I am facing the same issue.

Not much happening here…

Here’s one idea to discuss:

Would it be possible to read the final log message after a TC failure and then, depending on the message content, decide what to do with the test?

E.g. the test fails and the log message reads something like "Unable to click the element because of error code xyz123…". My script somehow reads the log, finds a familiar failure reason, and then somehow restarts the test.

Any ideas on the _somehow_s? :smiley:
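
One _somehow_ that avoids reading the log file at all: catch the failure in a wrapper test case and inspect the exception message instead. A rough sketch, assuming a flaky test case named `Test Cases/FlakyTC` and the matched message fragments as placeholders you'd tune to your own failures:

```groovy
import static com.kms.katalon.core.testcase.TestCaseFactory.findTestCase

import com.kms.katalon.core.exception.StepFailedException
import com.kms.katalon.core.model.FailureHandling
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

// Wrapper test case: re-runs the flaky TC up to maxAttempts times,
// but only when the failure message looks transient.
int maxAttempts = 3
boolean passed = false

for (int attempt = 1; attempt <= maxAttempts && !passed; attempt++) {
    try {
        WebUI.callTestCase(findTestCase('Test Cases/FlakyTC'), [:], FailureHandling.STOP_ON_FAILURE)
        passed = true
    } catch (StepFailedException e) {
        // Inspect the failure reason instead of parsing the log file.
        String msg = e.message ?: ''
        boolean looksTransient = msg.contains('timeout') || msg.contains('Unable to click')
        if (!looksTransient || attempt == maxAttempts) {
            throw e  // a "real" failure, or out of retries: fail the wrapper
        }
    }
}
```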

You mean parse the logs?

Could be done. But for me right now, that's too time-consuming. Certainly doable, though…

Maybe the test listeners could come into play? I don't know how much complexity they can handle.
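
They can at least handle the bookkeeping part. A minimal sketch of a test listener (placed in the Test Listeners folder) that records failed test case IDs; the file name `failed-testcases.txt` is just my own convention. As far as I know, a listener can't restart the test it just observed, but it could feed a second, failures-only run:

```groovy
import com.kms.katalon.core.annotation.AfterTestCase
import com.kms.katalon.core.context.TestCaseContext

class FailureCollector {

    // Runs after every test case in the suite; appends the IDs of failed
    // test cases to a text file so a follow-up run can target only those.
    @AfterTestCase
    def collectFailure(TestCaseContext testCaseContext) {
        if (testCaseContext.getTestCaseStatus() == 'FAILED') {
            new File('failed-testcases.txt') << testCaseContext.getTestCaseId() + '\n'
        }
    }
}
```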

IMHO, the decision whether or not to re-execute the tests should be implemented at the executor level…
E.g. Jenkins can repeat a failed run before deciding it has finally failed (see the sketch below).
Of course, this will re-run the entire suite.
But I think it will keep the logs for each attempt, so you can inspect which tests failed during each run, and in the end you will get a full report with all test cases.
The drawback is that it increases the execution time, so this approach may be OK for smaller suites (10-15 test cases).
For larger test suites, re-running only the failed tests may be better.
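
In a Jenkins Pipeline, the built-in `retry` step does exactly this at suite level, since the step is re-run whenever the katalonc command exits non-zero. A rough sketch; the project path and suite name are placeholders:

```groovy
// Jenkinsfile (scripted pipeline): re-run the whole Katalon suite up to
// 3 times before Jenkins marks the build as failed.
node {
    stage('Katalon tests') {
        retry(3) {
            sh 'katalonc -noSplash -runMode=console ' +
               '-projectPath="$WORKSPACE/MyProject.prj" ' +
               '-testSuitePath="Test Suites/MyTS" -browserType="Chrome"'
        }
    }
}
```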

Other approaches can be taken too… like parsing the logs to see which tests failed and building a custom test suite on the fly containing only those… that depends only on the scripting skills and the patience of the tester. A rough sketch of the parsing part follows.
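
As a starting point, this reads the JUnit-style XML report that a Katalon run produces and collects the failed test case names; the report path and file name are assumptions and depend on your Katalon version and report settings:

```groovy
// Collect the names of failed test cases from a JUnit-style report.
def report = new XmlSlurper().parse(new File('Reports/MyTS/JUnit_Report.xml'))
def failed = report.'**'
        .findAll { it.name() == 'testcase' && it.failure.size() > 0 }
        .collect { it.@name.text() }

println "Failed test cases: ${failed}"
// From here you could generate a suite file or pass the names to a
// second katalonc invocation, depending on your setup.
```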