Skipping Test Case using @SetupTestCase

Hello everyone,

I’m looking for a solution to the following situation: I’ve got a Test Suite with many test cases, and all of them have the precondition that the displayed table includes at least one data row.

Of course, I could add something like the following at the beginning of each test case:

if (CustomKeywords.'MyKeyword.checkNrOfRows'() > 0) {
    // fancy code
} else {
    println 'Test case skipped'
}

I’d rather do it within the @SetupTestCase method of the Test Suite - so it would look like this:

@SetupTestCase(skipped = false)
def setupTestCase() {
    if (!(CustomKeywords.'MyKeyword.checkNrOfRows'() > 0)) {
        // Skip test case execution
        println 'Test case skipped'
    }
}

Does anyone know what to replace the comment with to skip the execution of this test case? Preferably leaving it in a Warning state rather than Failed.

No, unfortunately. The hook methods are not set up in the way you would like. It would be nice if there were something like this – perhaps a hook that allowed a boolean “allowed”/“disallowed” return.

@TestCaseAllowed
boolean tcAllowed(TestCaseContext testCaseContext) {
  if (some_condition) {
    KeywordUtil.markWarning(testCaseContext.testCaseId + " skipped")
    return false
  }
  return true
}

The markWarning could even be handled by the framework.

Thanks for your answer, @Russ Thomas. I’m going to create a method like this and add it to each test case. Not really nice, but okay. The annotation you used does not exist yet, right?

Maybe @Vinh Nguyen could add this to the feature improvement list? :slight_smile:

The annotation you used does not exist yet, right?

That’s right - just my idea/proposal for how it might be achieved.

Perhaps you could post it (or something like it) in the Suggestions forum.

I just wanted to avoid creating a second ticket for the same issue.

Can you just tell me how to get the TestCaseContext in the test case itself? I saw that listeners receive it as an argument by default, but how can I get it from outside of a listener?

I trap it at a listener and store it in a static variable (you could also store it in GlobalVariable.TC_ID or similar).
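
For illustration, a minimal sketch of that approach, assuming a helper class under Keywords plus a test listener (the names helpers.TestContextHolder and ContextListener are mine, not something Katalon provides):

// Keywords/helpers/TestContextHolder.groovy
package helpers

import com.kms.katalon.core.context.TestCaseContext

class TestContextHolder {
    // Static slot the listener fills in before each test case runs.
    public static TestCaseContext currentContext
}

// Test Listeners/ContextListener.groovy
import com.kms.katalon.core.annotation.BeforeTestCase
import com.kms.katalon.core.context.TestCaseContext
import helpers.TestContextHolder

class ContextListener {

    @BeforeTestCase
    def beforeTestCase(TestCaseContext testCaseContext) {
        // Trap the context here; test case code can then import helpers.TestContextHolder.
        TestContextHolder.currentContext = testCaseContext
        // Or store just the id in a profile variable, e.g. GlobalVariable.TC_ID = testCaseContext.getTestCaseId()
    }
}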

Sorry, Paul, I missed this:

I’m going to create a method like this and add it to each test case. Not really nice, but okay.

No, that’s not very “nice”, as you put it.

How would you feel about using the Keywords mechanism to define your own classes? That way you can gain back some control. Imagine that all your tests inherit from a base class that contains a method that does that validation/condition (in your case a row count). Then your test would only continue if the base class allowed it.
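
Purely as an illustration of that idea (the package helpers and the class TestGuard are names I made up; the row count still comes from the custom keyword in the original post):

// Keywords/helpers/TestGuard.groovy
package helpers

import com.kms.katalon.core.util.KeywordUtil

class TestGuard {

    // Shared precondition check: returns true when the table has at least one data row,
    // otherwise marks a warning (not a failure) so the caller can bail out quietly.
    static boolean tableHasRows(int rowCount) {
        if (rowCount <= 0) {
            KeywordUtil.markWarning('Precondition not met: table has no data rows - skipping')
            return false
        }
        return true
    }
}

Each test case script would then start with something like:

import helpers.TestGuard

if (!TestGuard.tableHasRows(CustomKeywords.'MyKeyword.checkNrOfRows'())) {
    return   // leave this test case early; the warning has already been logged
}
// ... rest of the test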

Once again, you’re not quite getting the decision at the appropriate time (like our proposed @TestCaseAllowed would provide) but it’s better than adding the same method to every TC script.

Understand that the Custom Keyword mechanism is your only way (natively in Katalon) to add your own classes/methods. Further, you don’t need to create actual custom keywords using @Keyword - you can just add a package of classes and write all manner of goodies that you can import into your TCs. (And yes, I do it all the time.)

This missing feature means that you really can’t have data-driven tests. The Test Suite code should allow for a test case to be dynamically added or removed.

@User_123 this has nothing to do with data-driven testing, but with dynamic test suites - do not confuse them. But agreed, it would be nice to have such a feature, although workarounds do exist. Feel free to search the forum for solutions.

@Paul_Schmidt did you find a way to skip the test case?

From what I found, you can call testCaseContext.skipThisTestCase(), but my problem is that I want to skip from setupTestCase() in the test suite script, and testCaseContext can’t be accessed from there. It has to be used in a test listener.
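
In case it helps others, here is a minimal sketch of that listener variant, assuming skipThisTestCase() is available in your Katalon version and that the skip condition comes from a GlobalVariable (the variable name is just an example and would need to exist in your profile):

// Test Listeners/SkipOnFailureListener.groovy
import com.kms.katalon.core.annotation.BeforeTestCase
import com.kms.katalon.core.context.TestCaseContext
import internal.GlobalVariable

class SkipOnFailureListener {

    @BeforeTestCase
    def beforeTestCase(TestCaseContext testCaseContext) {
        // Example condition: a flag set by an earlier test or an @AfterTestCase hook.
        if (GlobalVariable.TestCaseStatus == "FAILED") {
            // Marks this test case as skipped instead of executing it.
            testCaseContext.skipThisTestCase()
        }
    }
}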

I made a proposal about this being supported in beforeTestCase but unfortunately it didn’t get official support.

boolean beforeTestCase(TestCaseContext testCaseContext) {
  if(some_condition) {
    return false // stop/skip execution of this test case
  }
  return true // continue execution of this test case
}

It could also be used in a notional setupTestCase().

@Russ_Thomas Yeah, that would be great, but I think the skip condition should run in the setup of the following tests, because if my test fails I would be skipping that failure.

The best and simplest way would be to have some keyword we can call, or a checkbox option (like the “Retry failed test case only” setting in the Execution Information section) that we can set in a test suite so that execution stops once one test fails.

Hi guys, I think I have a solution to this.

It’s not my favourite because I have to call this in every setupTestCase() when I would prefer to stop the whole suite immediately, but at least it doesn’t run the entire test.

First of all, save the status at the end of each test case run by creating a string global variable:

Test Listener

import com.kms.katalon.core.annotation.AfterTestCase
import com.kms.katalon.core.context.TestCaseContext
import internal.GlobalVariable

@AfterTestCase
def afterTestCase(TestCaseContext testCaseContext) {
    // GlobalVariable.TestCaseStatus must be defined as a string in the execution profile.
    if (testCaseContext.getTestCaseStatus() != "PASSED") {
        GlobalVariable.TestCaseStatus = testCaseContext.getTestCaseStatus()
    }
}
Test Suite Script

import com.kms.katalon.core.util.KeywordUtil
import internal.GlobalVariable

@SetupTestCase(skipped = false)
def setupTestCase() {
    if (GlobalVariable.TestCaseStatus == "FAILED" || GlobalVariable.TestCaseStatus == "ERROR") {
        KeywordUtil.markFailedAndStop('ERROR: The previous test failed')
    }
}

I have to use that condition in the listener because when I mark the previous test as Failed or as Error, that test fails, right, but its status is still PASSED when I call testCaseContext.getTestCaseStatus(), so that’s the reason (I think this might be a Katalon bug) :confused:

@Russ_Thomas let me know if you have any improvement in mind, that would be great.

I did something similar without using setupTestCase. It worked fine.

The AUT has a build number. I wanted each suite run to examine the build number and store it somewhere. If the suite found that it was re-testing a suite it had already passed, it would bail.

The problem came in ratifying “what is a passed suite?”. That’s not easy. Flakes and other “ifs and buts” generally meant the last-known-good was effectively (if not literally) unreliable.
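
Purely as a rough sketch of that idea (everything here is an assumption - where the build number comes from, the properties file, the GlobalVariable name - and it deliberately sidesteps the “what is a passed suite?” question):

// Early in the suite run (e.g. a listener or the suite script).
import com.kms.katalon.core.util.KeywordUtil
import internal.GlobalVariable

Properties history = new Properties()
File store = new File('last-passed-build.properties')   // hypothetical local store
if (store.exists()) {
    store.withInputStream { history.load(it) }
}

// Assumption: the AUT build number was read earlier into this GlobalVariable.
if (history.getProperty('lastPassedBuild') == GlobalVariable.AUT_BuildNumber) {
    KeywordUtil.markWarning("Build ${GlobalVariable.AUT_BuildNumber} already passed - bailing out")
    // Stopping the rest of the run, and writing 'lastPassedBuild' after a green run, are omitted here.
}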

@Brandon_Hein I’d be interested in your feedback here (I know the above is weakly defined, but the gist is pretty much clear, I think).

Honestly, I’ve never implemented logic like this, nor do I use test listeners. Sorry :expressionless:

I do agree that the test listeners need some improvement. Currently, they are quite inflexible and have a very narrow scope.

True that.
I think the root cause is that Katalon, as it is implemented at this moment, is sort of a wrapper on top of JUnit with a bit of TestNG technology.
To actually improve this with the current approach won’t be easy.
Someone may have to re-define the concept of the ‘hooks’ implementation.
A good source of inspiration would be a sneak peek into pytest’s ‘pluggable’ modules.
I don’t expect sudden improvements to come to light soon on this matter, unless the base of Katalon is reconsidered (disconnecting from already prepared solutions).
It will be a lot of work, but I am confident it can be done, with small steps.
