I have made a GitHub repository:
Problem to solve
Let me assume that I have a Katalon Studio project with a Test Suite TS0 which consists of 5 Test Cases: TC1, TC2, TC3, TC4, TC5. These Test Cases are unfortunately flaky: they usually pass but occasionally fail. I know, however, that the failed test cases are likely to pass if I run TS0 a few more times. Therefore, next time, I want to run the failed tests only. In other words, I want to skip the test cases that have already passed once.
Running the failed tests only — this scenario is beneficial when the component test cases take a long time to run (10 minutes, 30 minutes, even hours …). I do not want to wait for the test cases that have already passed. I want to skip them.
How can I achieve this time-efficient scenario in Katalon Studio?
Solution
There is a page in the Katalon Studio documentation:
This page introduces the method skipThisTestCase() of the class com.kms.katalon.core.context.TestCaseContext. The skipThisTestCase() method is exactly what my scenario needs. Unfortunately, the sample code in the official document is not very helpful; it does not show an inspiring use case. So I will present a runnable demo project here.
Description
I will describe a sample project where I can run only the failed test cases while skipping the already-passed test cases.
Test Cases
I made 5 test cases: TC1, TC2, TC3, TC4, TC5. All of them contain similar Groovy code.
The script randomly passes or fails: if the current UNIX epoch second is an even integer, it passes; if it is odd, it fails. These test cases are intentionally kept simple in order to make the demonstration easy to understand.
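The original scripts are not reproduced here, but a minimal sketch of the pass/fail logic each test case uses could look like the following (the helper name passesAt is my own, and the repo's actual code may differ in detail):

```groovy
// Decide pass/fail from the current UNIX epoch second:
// even -> pass, odd -> fail (simulates a flaky test case).
boolean passesAt(long epochSecond) {
    return epochSecond % 2 == 0L
}

long now = System.currentTimeMillis().intdiv(1000)
if (passesAt(now)) {
    println("epoch ${now} is even -> this run PASSES")
} else {
    // In a Katalon Test Case you would raise a failure here, e.g. with
    // KeywordUtil.markFailed(...) or a plain Groovy assert.
    println("epoch ${now} is odd -> this run FAILS")
}
```

Because the outcome flips every second, repeated runs of the suite produce exactly the flaky behavior the demo needs.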
Test Suite
I made a test suite TS0 which just binds the 5 test cases.
Test Listener
I made a Test Listener named Proctor which does the black magic.
import java.nio.file.Path
import java.nio.file.Paths
import java.nio.file.Files
import com.kms.katalon.core.annotation.AfterTestCase
import com.kms.katalon.core.annotation.AfterTestSuite
import com.kms.katalon.core.annotation.BeforeTestCase
import com.kms.katalon.core.annotation.BeforeTestSuite
import com.kms.katalon.core.configuration.RunConfiguration
import com.kms.katalon.core.context.TestCaseContext
import com.kms.katalon.core.context.TestSuiteContext
import groovy.json.JsonOutput
import groovy.json.JsonSlurper
class Proctor {
    // The "note" file where test case statuses are persisted between runs
    static Path note = Paths.get(RunConfiguration.getProjectDir()).resolve('result.json')
    def jsonObj

    @BeforeTestSuite
    void beforeTestSuite(TestSuiteContext context) {
        // Load the statuses recorded by the previous run, if any
        def slurper = new JsonSlurper()
        if (Files.exists(note)) {
            jsonObj = slurper.parse(note)
        } else {
            jsonObj = slurper.parseText('{}')
        }
    }

    @BeforeTestCase
    void beforeTestCase(TestCaseContext context) {
        // Skip a test case that passed (or was skipped) in an earlier run
        if (jsonObj[context.getTestCaseId()]) {
            if (jsonObj[context.getTestCaseId()] == "PASSED" ||
                jsonObj[context.getTestCaseId()] == "SKIPPED") {
                context.skipThisTestCase()
            }
        }
    }

    @AfterTestCase
    void afterTestCase(TestCaseContext context) {
        // Record this test case's status: PASSED, FAILED or SKIPPED
        jsonObj[context.getTestCaseId()] = context.getTestCaseStatus()
    }

    @AfterTestSuite
    void afterTestSuite() {
        // Persist the statuses for the next run
        String jsonStr = JsonOutput.toJson(jsonObj)
        note.text = JsonOutput.prettyPrint(jsonStr)
    }
}
Please read the source code of Proctor for the details. In short: it loads result.json before the suite starts, skips any test case recorded there as PASSED or SKIPPED, records each test case's status after it finishes, and writes the statuses back to result.json after the suite ends.
How it runs
1st run
I ran the TS0.

3 test cases passed; 2 failed. I got the result.json file as follows:
{
    "Test Cases/TC1": "PASSED",
    "Test Cases/TC2": "FAILED",
    "Test Cases/TC3": "PASSED",
    "Test Cases/TC4": "PASSED",
    "Test Cases/TC5": "FAILED"
}
2nd run
I ran the TS0 for the second time.

The 3 test cases TC1, TC3 and TC4 that passed in the 1st run were skipped in the 2nd run. The remaining 2 test cases TC2 and TC5 failed again.
I got the result.json file as follows:
{
    "Test Cases/TC1": "SKIPPED",
    "Test Cases/TC2": "FAILED",
    "Test Cases/TC3": "SKIPPED",
    "Test Cases/TC4": "SKIPPED",
    "Test Cases/TC5": "FAILED"
}
3rd run
I ran the TS0 again.

The TC5 passed this time.
I got the result.json file as follows:
{
    "Test Cases/TC1": "SKIPPED",
    "Test Cases/TC2": "FAILED",
    "Test Cases/TC3": "SKIPPED",
    "Test Cases/TC4": "SKIPPED",
    "Test Cases/TC5": "PASSED"
}
4th run
I ran the TS0 again.

The TC2 failed. Others were skipped.
I got the result.json file as follows:
{
    "Test Cases/TC1": "SKIPPED",
    "Test Cases/TC2": "FAILED",
    "Test Cases/TC3": "SKIPPED",
    "Test Cases/TC4": "SKIPPED",
    "Test Cases/TC5": "SKIPPED"
}
5th run
I ran the TS0 again.

The TC2 failed. Others were skipped.
I got the result.json file as follows:
{
    "Test Cases/TC1": "SKIPPED",
    "Test Cases/TC2": "FAILED",
    "Test Cases/TC3": "SKIPPED",
    "Test Cases/TC4": "SKIPPED",
    "Test Cases/TC5": "SKIPPED"
}
6th run
I ran the TS0 again.

The TC2 failed. Others were skipped.
I got the result.json file as follows:
{
    "Test Cases/TC1": "SKIPPED",
    "Test Cases/TC2": "FAILED",
    "Test Cases/TC3": "SKIPPED",
    "Test Cases/TC4": "SKIPPED",
    "Test Cases/TC5": "SKIPPED"
}
7th run
I ran the TS0 again.

TC2 passed this time! Now all 5 test cases have passed at least once.
I got the result.json file as follows:
{
    "Test Cases/TC1": "SKIPPED",
    "Test Cases/TC2": "PASSED",
    "Test Cases/TC3": "SKIPPED",
    "Test Cases/TC4": "SKIPPED",
    "Test Cases/TC5": "SKIPPED"
}
8th run
I ran the TS0 again.

All 5 test cases were skipped, so TS0 finished very quickly (in a few seconds).
I got the result.json file as follows:
{
    "Test Cases/TC1": "SKIPPED",
    "Test Cases/TC2": "SKIPPED",
    "Test Cases/TC3": "SKIPPED",
    "Test Cases/TC4": "SKIPPED",
    "Test Cases/TC5": "SKIPPED"
}
How to reset
As long as result.json exists, the Proctor will skip the once-passed test cases forever. However, you can remove the result.json file, either manually or with a shell script, to bring the project back to its initial state. Without the file, the Proctor lets every test case run.
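For example, a tiny Groovy script, run from the project directory, can do the reset (the helper name resetNote is my own):

```groovy
import java.nio.file.Files
import java.nio.file.Path
import java.nio.file.Paths

// Delete the note file so that Proctor runs every test case again next time.
boolean resetNote(Path note) {
    return Files.deleteIfExists(note)
}

println(resetNote(Paths.get('result.json'))
        ? 'result.json removed; the next run starts fresh'
        : 'result.json not found; nothing to do')
```

Deleting the file is safe because Proctor recreates it at the end of the next suite run.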
Conclusion
I presented a demo project that utilizes the skipThisTestCase() method of the class com.kms.katalon.core.context.TestCaseContext. The demo shows a runnable case of “run failed tests only”. This technique may be useful if your test suite is flaky and takes a long time to run.