How to forward Root Cause Message for Test Step Failure to Test Case Level?


I have structured my Test Suite so that the names of all main Test Cases to be called one after the other are passed to a Custom Keyword as a list. This Custom Keyword method calls each passed Test Case (via a for loop, using WebUI.callTestCase()) within a try-catch block (see below). I assumed this would let me determine in real time the “Root Causes” that Katalon Studio reports in the logs when a single test step fails, and then decide whether I, as test manager, or the developers or product managers responsible for the tested app should be notified:

import com.kms.katalon.core.exception.StepErrorException
import com.kms.katalon.core.exception.StepFailedException

for (def testCaseName : TestCasesFlow) {
    try {
        WebUI.callTestCase(findTestCase(testCaseName), [:])
    } catch (StepFailedException e) {
        println "Test Case \"${testCaseName}\" failed with following message: ${e.message}"
        notifyAlert(['ProductManager'], testCaseName, e.message)
    } catch (StepErrorException e) {
        println "Error in Test Case \"${testCaseName}\" with following message: ${e.message}"
        notifyAlert(['ProductManager', 'AppDevelopers'], testCaseName, e.message)
    } catch (Exception e) {
        println "General Exception in Test Case \"${testCaseName}\" with following message: ${e.message}"
        notifyAlert(['ProductManager', 'TestManager'], testCaseName, e.message)
    }
}

My problem is that with this construction I only get the message that the called Test Case failed, but not which test step was specifically responsible for it. The Katalon Studio log does output the test step's failure message further down, but how can I get at this actual Root Cause message without putting every single test step into its own try-catch block?

Thanks + regards

Firstly, I love this idea… excellent approach. Unfortunately, I don’t think it will work.

Your try-catch is outside the invocation (invokeMethod) construct where the control you’re trying to seize is happening. Not even a Test Listener will give you what you need. I do something like this, but I ended up with my try-catch inside the test case – it’s the only place I could find that allowed me to seize control and make my own decisions as to what to do with the TC result (pass or fail).

In brief, my Test Cases look like this:

class Test extends my_page {
  Test() { // constructor!
    // test steps here
  }
}

try {
  new Test()
} catch (Exception e) {
  failThisTest(e.message)  // now you have the root cause!
  throw e // pass error back to Katalon
}
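Note that failThisTest above is a custom helper from the poster's own framework, not a built-in Katalon keyword. A minimal sketch, assuming it just records the root-cause message and marks the current step failed via Katalon's KeywordUtil, might look like:

    import com.kms.katalon.core.util.KeywordUtil

    // Hypothetical implementation of the failThisTest helper used above.
    // KeywordUtil.markFailed logs the message and fails the current step.
    static void failThisTest(String message) {
      KeywordUtil.markFailed("Root cause: " + message)
    }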

Perhaps one of the @“Katalon Team” will see this and realize there’s a need for an additional hook point so that this construct can be abstracted out into a single file somewhere.


Hi Russ,

thank you for your contribution. Meanwhile, I'm afraid you're actually right. I tried a few things yesterday because I couldn't come to terms with that fact.
In particular, I believed that changing the failure-handling settings would help me: the failed test step should still terminate the test case, but the test case call itself should not throw any error, so that only the message of the actually triggering test step failure would be caught. So I tried this:

import com.kms.katalon.core.model.FailureHandling

try {
    WebUI.callTestCase(findTestCase(testCaseName), [:], FailureHandling.CONTINUE_ON_FAILURE)
} catch (Exception e) {
    // handle the failure here
}

And for each test step within that Test Case I used STOP_ON_FAILURE, of course.
But unfortunately it doesn't seem to make any difference whether I use CONTINUE_ON_FAILURE or OPTIONAL in the try block: the catch block is not triggered at all in either case. Which, I have to admit, does make a little sense to me at the end of the day… :confused:

As a workaround, I think I will now write the name of each individual test step into a global variable before it is executed, which I can then read out again in the catch block of the test case call if necessary.
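A minimal sketch of that workaround, assuming a GlobalVariable named currentStep has been defined in the Execution Profile (the variable name and the test object path here are hypothetical):

    // Inside the Test Case: record the step name before each step runs.
    GlobalVariable.currentStep = 'Click login button'
    WebUI.click(findTestObject('Page_Login/btn_Login'))

    // In the caller's catch block: the last recorded name identifies
    // the step that actually failed.
    println "Failing step was: ${GlobalVariable.currentStep}"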

But @Katalon-Team:

That is anything but optimal; please find a solution for such an obvious requirement! It makes no sense to me to overload the test cases with the logic for the necessary failure handling, especially considering that Test Cases should also be able to be created manually by non-programmers.

So it should be possible to solve this some other way, as Russ already mentioned: either by extending the options of the Test Listeners, or by a globally available object that provides information about the test step that originally failed.

Thank you + regards!

I left a feature request at

Just now I wanted to ask if and how I could solve my follow-up problem: I am bothered by the verbose, superfluous entries in the console log that are created by my test step variable assignments. I wondered if there might be an annotation switch like “not_log:” which (similar to “not_run:”) could prevent actions from creating log entries.

So I just tried out my idea and, lo and behold: the annotation switch “not_log:” actually seems to exist, and it works!

Is that already documented somewhere? I didn’t find anything at all.

I must apologize: Apparently, I made a mistake. It’s not working. But it would have been nice, wouldn’t it?
Should I create a new feature request? :slight_smile:

I think you might be interested in upvoting and adding your voice to this:

Thanks, Russ, I left my upvote there.

But to be honest, it is still not quite what I would wish for (additionally). Unlike you, apparently, I often work with the detailed console log during debugging and am basically very happy with it. Only in cases like my approach above, which inflates my Test Cases with tons of actions that are not relevant for log evaluation at all, do I want to prevent those actions from ever creating log entries that nobody needs.

I left a new feature request here:


My proposal was to leave those noisy messages in place (who knows for sure when they might become useful?) but have additional levels under our control ABOVE them.

I have achieved something similar by using the log viewer and co-opting the KeywordLogger APIs to suit my purposes.

Level 1 - These are the noisiest messages that come from Katalon itself and the WebUI.comment methods (controlled via the “Info” button in the log viewer)

Level 2 - These are slightly less noisy messages which I send to log.logNotRun (controlled via the “Not Run” button in the log viewer)

Level 3 - These are much less noisy messages which reflect actual Test Case Steps and which I send to log.logWarning (controlled via the “Warning” button in the log viewer)

For both Level 2 and 3, I prefix six spaces to the messages so that they are suitably indented from any genuine not-run or warning messages that might be issued by Katalon. For example, here is my Level 2 method:

import com.kms.katalon.core.logging.KeywordLogger
import internal.GlobalVariable

  /**
   * Prints a message to the KS log. The message is also returned to aid
   * chaining, if needed.
   * @param msg (String) The content of the message.
   * @return msg (String) The message
   */
  static String comment(String msg) {
    if (GlobalVariable.COMMENTS) {
      KeywordLogger log = new KeywordLogger()
      log.logNotRun('      ' + msg)
    }
    return msg
  }
With these two new levels in place, now I can control the log viewer (note - NOT in tree view mode!) using the buttons to the left of the log viewer panel.

Hope this helps while the Katalon devs figure out a logging API that really serves the purpose better.



Ah, I understand; in any case a very interesting workaround for using the Log Viewer, upvoted! But I have to admit, the main reason I prefer the console log is that I copy its output into an editor, where I can search and analyze it better. That means: if I could search the logs directly in Katalon Studio, and especially within the Log Viewer, I would probably use it too. So, material for another feature request? :slight_smile: I think that's enough for this week.

In another thread kazurayam presented an ingenious way to manipulate every executed test step at runtime. This finally allows me to catch all relevant root cause error information at the right moment without overloading every single test case with a try-catch block. See my new workaround here: How to Highlight Test object in each and every step