FailureHandling Change breaks legacy code (was STOP_ON_FAILURE does not stop a test when verifyElementPresent returns false)

Typically, “warnings” are not treated as aborts or assertion failures.

You can try this:


The other possibility is to trap the boolean from one of the verifyElement* APIs and use KeywordUtil.
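For example (a minimal sketch; `to` is assumed to be an existing TestObject):

    import com.kms.katalon.core.util.KeywordUtil

    // Capture the boolean result instead of relying on FailureHandling,
    // then explicitly fail and stop the test when the element is absent.
    boolean present = WebUI.verifyElementPresent(to, 5)
    if (!present) {
        KeywordUtil.markFailedAndStop("Element not present")
    }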

I’ll ask this question another way. What must happen in order to stop a test case containing the following code?

@devalex88 @ThanhTo I think this may be a bug. As you’re aware, I don’t run tests like this, so treat this report with due distrust :wink:

Tested in Katalon 7.0.8

@anbanbanb Okay, here’s my Project Setting:


Here is my script (wam is just my way of writing comments - safe to ignore).

    wam 'Nav to Katalon "About us" page'
    wam 'Check some links are accessible'
    WebUI.verifyLinksAccessible(["", ""])
    wam 'Create a failing test object using makeTO'
    TestObject to = makeTO("input.russ")
    wam 'Try to verify the fake test object - THIS SHOULD FAIL!'
    WebUI.verifyElementPresent(to, 5)
    wam 'Test should NOT reach this line!'

The test reaches the last line but should not.

This also does not stop the test:

    wam 'Try to verify the fake test object - THIS SHOULD FAIL!'
    WebUI.verifyElementPresent(to, 5, FailureHandling.STOP_ON_FAILURE)

This works but should not be necessary:

    wam 'Try to verify the fake test object - THIS SHOULD FAIL!'
    if(!WebUI.verifyElementPresent(to, 5, FailureHandling.STOP_ON_FAILURE)) {
      KeywordUtil.markFailedAndStop("Element not present")
    }

This works. But I have many lines with checking objects, and it is inconvenient to use three lines of code for verification instead of one.
Are you sure that the Katalon developers initially considered this as a warning and not as a failure?

That’s the point of the bug report. Sit back and wait for the devs to check it out.

Hi @anbanbanb

So, as documented, verifyElementPresent should return true or false, indicating whether an element is present or not. Throwing an exception and failing the test was an explicit behavior, but it made things harder for people who simply want to proceed with their own logic, at least without try/catch (which is bulky when used multiple times). Therefore we adjusted the behavior to match the expectations set by the documentation.

If you want to fail your test, then there are two ways:

  • Implement a custom keyword that checks for a false value returned from verifyElementPresent and throws an exception.
  • Use an assertion:

    assert present == true // this will fail the test if verifyElementPresent returns false
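A sketch of the first option as a custom keyword (the class and method names here are illustrative, not part of Katalon's built-in API):

    import com.kms.katalon.core.annotation.Keyword
    import com.kms.katalon.core.testobject.TestObject
    import com.kms.katalon.core.util.KeywordUtil
    import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

    class VerifyHelpers {
        // Hypothetical wrapper: returns the boolean, but fails (and stops)
        // the test when the element is absent.
        @Keyword
        static boolean verifyElementPresentOrFail(TestObject to, int timeoutSeconds) {
            boolean present = WebUI.verifyElementPresent(to, timeoutSeconds)
            if (!present) {
                KeywordUtil.markFailedAndStop("Element not present: " + to)
            }
            return present
        }
    }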

This adjustment is meant to balance the frequently observed use cases.

@ThanhTo @devalex88 @duyluong

Guys, keep in mind when you read this, I don’t use these APIs, so it doesn’t really matter to me, but, as someone trying to help others, now I have a problem:

Thanh - the expectations are going to vary based on two use-cases:

  • The boolean aspect
  • The FailureHandling aspect

What we have is one “thing” (verifyElementPresent) documented as offering two things[1] but failing to do one of them:

  1. Return true/false based on presence/absence. (PASS)
  2. Support FailureHandling as per the docs. (FAIL at least when STOP_ON_FAILURE is used)

FailureHandling, as I understand it, is meant as an interface onto Katalon’s TestCase state management (they end up as throws in many cases, right?) But…

If verifyElementPresent now ignores STOP_ON_FAILURE then it seems fair to say the FailureHandling argument has no purpose, or worse, only partial purpose. If so, then it doesn’t “match the expectations against the documentation”. You have a TestCase step that cannot fail - have I got that right?

I took a look at VerifyElementPresentKeyword.groovy and WebUIKeywordMain.groovy but my mental closure parser threw a StackOverflowException :nerd_face::dizzy_face: To totally grok it I’d need to step it through properly.

Therefore this is still buggy, if only because the documentation is now wrong. If I’ve set STOP_ON_FAILURE as both…

  • a parameter to the method
  • AND as my default handling setting

and NEITHER can stop a test case, that’s a 100%, bona fide, de facto bug.

Repeat. I don’t use this stuff so it doesn’t really matter to me, personally. But now I can’t support/help users understand what STOP_ON_FAILURE even means.

My advice, for what it’s worth: deprecate WebUI, begin work on WebUI2 and fix all these irksome pain points. :sunglasses:

Sorry if this was a ramble (been here 19 hours!)

[1] It’s rarely a good idea to have one thing do two jobs. Yeah, I know, we can all end up with tangles like this, but that’s what the root cause is here – you have tension between a boolean and a throwable. It’s an arms race no one can win.

1 Like

The purpose of Failure Handling is to handle all types of failures, not just the “considered-as-failure” outcome of verifying that an element is present or exists. There are many exceptions which the implementation of verifyElementPresent may throw that have nothing to do with the intention of the keyword (e.g. NullPointerException). Specifying a Failure Handling strategy, then, is for obtaining more/less control over these scenarios.

To quote the documentation:

Failure handling settings allow users to decide whether Katalon Studio will continue running or not in case of errors occurs during execution.

While I agree that, in the context of this topic, the expectation regarding Failure Handling would usually be about handling the case where verifyElementPresent determines that the element is not present, that is not a “failure” per se, but rather a valid piece of information on which to base further test execution (deciding what to do if the element is not present).

Deciding to fail the test given this piece of information is something that would need to be done explicitly by using assert, or implicitly (if you have reasons to do so) by implementing a wrapper custom keyword. Either way, this decision is entirely up to the users.

Of course, the behavior switch from the old version to the new version may cause friction, because with more power to make decisions comes more responsibility (to add asserts/custom keywords), but I think returning a boolean indicating the state of presence/existence for that element is a good balance between the use cases.

1 Like

imho … i would split this behaviour between two different keywords, e.g.:

  • verifyElementPresent - as it was before: check and assert -> throw an exception on failure. Failure handling will make sense for it; the logic should be that any expected exception throws stepFailed, and any other (NPE or whatever) is re-raised.
  • a new keyword, isElementPresent or checkElementPresent, which should catch any exception and return a boolean. Failure handling is not needed for it; it will be up to the user how to handle the logic.

for me, verify always implies an assertion (which should always throw an exception, whether a wrapped one or the original)
just saying …
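A rough sketch of the second keyword proposed above (isElementPresent is hypothetical, not part of the current API):

    // Hypothetical boolean-only check: swallows any exception and
    // never interacts with the test's failure state.
    static boolean isElementPresent(TestObject to, int timeoutSeconds) {
        try {
            return WebUI.verifyElementPresent(to, timeoutSeconds, FailureHandling.OPTIONAL)
        } catch (Exception ignored) {
            return false
        }
    }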


+1 … To my mind, “Verification” is stronger than simply “checking”, and semantically implies the possibility of an Exception, should the verification fail.
Also, please don’t break backward compatibility without a very good reason to do so, especially not when it would be simpler and clearer to introduce a new keyword (like the suggested “isElementPresent”).

1 Like


I 100% agree. Let me be more clear (hey, I got some sleep! :wink: ). The problem was there before and still exists now. Here it is, stated less academically:

FailureHandling is a Mickey Mouse way to plug in users’ guidance as to what Katalon should do with errors (and/or exceptions). But…

It should never have been there.

I don’t see any real change. You switched one old buggy behavior for another new buggy behavior.

And always should have been and always should be. My guess: trying to support the Manual view presentation of code somehow invited approaches like FailureHandling.

Ignoring the fact that the method was considered buggy before and now is considered fixed (or, at least better) that is how the method always worked. I don’t see how that can be used as an argument for “why this happened”. Seriously. It was broken before (it didn’t honor a false return), and now it doesn’t support STOP_ON_FAILURE. Both are bad. Both are broken. -> WebUI2 !!!

Here’s my take, based on what I said before: you removed the tension between the boolean result and FH by effectively ignoring* FH and honoring the boolean result regardless of value. It could be argued that you are choosing the better bad over a worse bad. I think I could be persuaded of that.

* Certainly in terms of STOP_ON_FAILURE; I did not research/evaluate the other possibilities.

Now, hopefully to get us back on the same page:

  1. The documentation is wrong.

It now seems to be offering a poor man’s entry into the try-catch nature of Katalon’s exception handling, unrelated to the purpose of the method - which is a bad way to implement error handling. Not only that, it doesn’t really work. The assert(s) you (Thanh) mentioned are way, way better than this.

  2. The project setting, Default FH for Test Step, is broken (or has no meaning/purpose, at least in the context of this API)

  3. Users would be better advised to treat this API as an IF-construct and ignore FH.

I’d be remiss for not mentioning the impact on Web recorder/Manual view, but I don’t use them so I really can’t comment. But yes, someone needs to ensure the output of the tool reflects well the intentions of the API(s) and the best advice given here on the forum. :confused:

@Ibus, WebUI2, and see Fix Poorly Named/Broken APIs

I don’t suppose @anbanbanb had any idea the can of worms he’d be opening up when he reported his problem :rofl: but hey, when the subject matter matters, so be it :nerd_face: :sunglasses:


@Russ_Thomas hey … if developers will be perfect, we (QA engineers) will be jobless :smiley:

1 Like

as for this … no, it’s even worse. most probably, under the hood, the new implementation uses a ‘catch any’ mechanism,
which is also known as a ‘bug hiding’ technique.
hence my previous suggestion.

Well, there’s a finally, but seriously, dragging out the implementation details of the innards here is unnecessary. The API, as documented, doesn’t quite work, depending on your approach (what I meant by “aspect” above). Both boolean values are now supported; FH has been cast adrift into a black hole.

1 Like

that’s why i jumped to python. finally is an abomination in that world :slight_smile:

I jumped back to version 6.3.4.

That’s not helpful and a bad idea.

Off-topic discussion below

Seriously, the thread may sound like a gripe-fest but it’s essentially about improving Katalon. Feelings can run strong but casting a down-vote by doing something like that and telling the world about it doesn’t help anyone - including you.

What will you do when the activation subsystem for v6 is turned off?

(Please don’t answer that, it’s a rhetorical question)

my ‘jump’ was a comparison between some coding paradigms (some people may read it as a way to move forward in the sense of how to approach the code, without being blocked by specific keyword/language implementations)
your ‘jump’ was just a give up.
just sayin …


I’ve just noticed my statements, like this one…

make it sound like I was missing one of your points above. I didn’t miss it. I’m saying that the documentation does not make it clear. I’m saying the FH argument does not handle all errors produced during the execution of the step.

I offer the following as evidence - if I’ve misunderstood, please show me where I’m wrong

Firstly, the documentation:

My code:

    wam 'Nav to Katalon "About us" page'
    wam 'Check some links are accessible'
    WebUI.verifyLinksAccessible(["", ""])
    wam 'Create a null test object'
    TestObject to = null
    wam 'Try to verify a null test object (1)'
    if(!WebUI.verifyElementPresent(to, 5, FailureHandling.OPTIONAL)) {
      wam "null object, OPTIONAL"
    }
    wam 'Try to verify a null test object (2)'
    if(!WebUI.verifyElementPresent(null, 5, FailureHandling.CONTINUE_ON_FAILURE)) {
      wam "null object, CONTINUE"
    }
    wam 'Try to verify a null test object - THIS SHOULD STOP THE TEST! (3)'
    if(!WebUI.verifyElementPresent(null, 5, FailureHandling.STOP_ON_FAILURE)) {
      wam "null object, STOP (shouldn't see this)"
    }
    wam 'Test should NOT reach this line!'

Result -> I see ALL lines executed. I see warnings but I don’t see errors or failures.

(Now you see how my comments are using KeywordUtil warnings)

Somebody needs to document clearly the definitions of “failure” and “error”. I tried forcing errors, as you can see. I went further, passing in strings for the timeout argument. They all caused exceptions but not errors, proving the FH arg is pretty much meaningless.

Am I wrong?

If it’s the way I conduct (and manage) testing and reporting, please let me know. And if so, I’d be interested how the essence of the code above works for others.

1 Like

I jumped back to version 6.3.4 waiting for the results of this discussion.
The problem with version 7.0 is wider, and not only in the STOP_ON_FAILURE failure handling. In my code I verify the existence of web elements on pages using WebUI.verifyElementPresent with CONTINUE_ON_FAILURE failure handling. In v6.3.4 the program reported if an element was absent and continued to run. Now (in version 7.0) the program returns a boolean value but doesn’t report the failure. So now I need to use artificially created CustomKeywords to catch the failure, like:
    def verifyElementPresent(String objectName, Integer n) {
        def present = WebUI.verifyElementPresent(findTestObject(objectName), n)
        if (present == false) {
            KeywordUtil.markFailed("Web element with id '" + objectName + "' not present")
        }
    }

So now I need to fix a thousand lines of code to use version 7.0. Is that a good approach?