Currently, Katalon retries failed test cases by running them several times in a new test suite.
I hope Katalon adds another rerun pattern for failed test cases: if a test case fails, it is retried immediately, within the same test suite.
There are several advantages to this pattern:
1. The test report will be easier to read. Even though there are failed test cases, they stay in the same test suite, so there will be far fewer report folders.
2. Using an after-test-suite hook to monitor whether a test suite failed will be easier. If the failed test cases run in another test suite, the after-test-suite hook runs several times, making it hard to tell whether the test suite failed.
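To make the proposed behavior concrete, here is a minimal sketch of the "retry in place" pattern, in plain Python rather than Katalon's actual Groovy API (the function names `run_suite` and `run_case` are hypothetical, purely for illustration): each failed case is retried inside the same suite run, so one result set and one report folder are produced, and a single after-suite check can decide pass/fail.

```python
def run_suite(test_cases, run_case, max_retries=2):
    """Run every case; on failure, retry up to max_retries times
    within the same suite run before recording the final result."""
    results = {}
    for name in test_cases:
        attempts = 0
        passed = run_case(name)
        while not passed and attempts < max_retries:
            attempts += 1
            passed = run_case(name)  # retry in place, same suite
        results[name] = "PASSED" if passed else "FAILED"
    # One after-suite hook can now inspect a single result set.
    suite_failed = any(r == "FAILED" for r in results.values())
    return results, suite_failed
```

With this shape, a flaky case that passes on a retry ends up marked PASSED in the one and only report, and `suite_failed` answers the after-suite question directly.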
Hi Andy,
Thanks for this suggestion. I will forward it to the team for further discussion.
Definitely voting for Andy’s proposal. It would be great to have this feature in Katalon Studio - it would be easy to get test statistics in one place and upload them to a CI server like Jenkins.
Even better - it would be nice to have an option where, if a test case passes at least once within the test suite, it is marked as “passed”.
I am currently testing a website where I get a lot of false positives, mostly due to page lag. If you rerun a test it will most likely pass; rerun it 3 times and only 1-2% of the failures remain, which are genuine.
As it stands, I have to rerun a test suite containing 97 test cases three times and then cross-reference the pass/fail results manually to weed out the false positives and find any real errors (of which, incidentally, there are none!)
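The manual cross-referencing could also be scripted. A minimal sketch (not a Katalon feature; `merge_runs` is a hypothetical helper assuming each run's results are a dict of case name to pass/fail): a case counts as a genuine failure only if it failed in every run.

```python
def merge_runs(runs):
    """Cross-reference pass/fail results from repeated suite runs:
    a case is a genuine failure only if it failed in all runs."""
    merged = {}
    for run in runs:
        for case, passed in run.items():
            # A single pass anywhere marks the case as passed overall.
            merged[case] = merged.get(case, False) or passed
    genuine_failures = [c for c, ok in merged.items() if not ok]
    return merged, genuine_failures
```

This reproduces the "passed at least once means passed" semantics suggested above, and leaves only the genuine failures to investigate by hand.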