This is a companion discussion topic for the original entry at https://docs.katalon.com/katalon-studio/docs/test-listeners-test-hooks.html
How to use method call statements in Test Listeners.
Hi Katalon Studio team!
In a test listener, I connected to the Mantis system to upload an issue. Although the issue was uploaded to Mantis, the Log Viewer shows an error:
"org.codehaus.groovy.runtime.InvokerInvocationException: Error Type: SYSTEM NOTICE,
Error Description: Undefined variable: t_message"
I couldn't find the variable t_message anywhere in my code.
Please tell me how I can fix it.
Many thanks!
Can you show us the details of the function mc_issue_add? Are you certain that the variable t_message does not exist in your code?
Also, does this exception indicate a failure?
If it does not mean anything and is irrelevant to your project, consider wrapping this particular exception in a try/catch and ignoring it.
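A minimal sketch of that suggestion, assuming the Mantis upload happens in an @AfterTestCase listener; mc_issue_add and its argument are placeholders standing in for the poster's actual Mantis call:

```groovy
import com.kms.katalon.core.annotation.AfterTestCase
import com.kms.katalon.core.context.TestCaseContext

class MantisReportListener {
    @AfterTestCase
    def reportToMantis(TestCaseContext testCaseContext) {
        try {
            // Placeholder for the poster's Mantis upload call
            mc_issue_add(testCaseContext.getTestCaseId())
        } catch (Exception e) {
            // The issue was uploaded anyway, so log the error and move on
            println("Ignoring Mantis client error: " + e.getMessage())
        }
    }
}
```

This only swallows the exception inside the listener, so the test case result itself is unaffected.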
Cheers!
@ThanhTo @Russ_Thomas @devalex88
Based on the documentation (and observation), test listeners are executed, and if there is a failure within a test listener (e.g. BeforeTestCase()), tests continue with the execution regardless of the pass/fail status of the listener. In our case (and I assume in most), the test case fails when BeforeTestCase fails… but ideally, I would assume that in the case of a test listener failure the test case is actually skipped and does not get executed.
The existing behavior can extend execution time unnecessarily (it can take a couple of minutes per test case before it fails due to a BeforeTestCase() listener failure) and also increases debugging time.
Is there a particular reason why test cases are executed even if there is a failure of the listener?
Any other opinions about this?
Thanks,
Rasko
There's no solid basis to say that a test case should be skipped if a test listener fails, because there's a chance that the test listeners aren't crucial to the test execution. It really depends on how you construct your tests and your business-specific requirements / priorities.
In principle, listeners should only listen, and the process (in this case, test execution) should be executed independently, regardless of the status (failure/pass) of whatever is listening to it. I hope that answers your question.
Cheers!
Apparently not, because you then go on to say:
So it doesn't depend on how users construct tests: the Katalon framework simply does not "listen" to listeners, period.
In principle, any hook could be implemented like so:
if (!hookCall(something))
    throw ...
Which would handle this scenario:
boolean beforeTestCase(...) {
    if (isProblem())
        return false
    ...
    return true
}
What I mean is that test listeners should only listen to events such as "before test case is executed" and "after test case is executed", and therefore it doesn't make sense for the test to fail automatically because something that is listening to it fails for whatever reason.
When I say "it depends",
I mean that the basis for reasoning about how test listeners should behave in conjunction with the test varies, not how the Katalon framework is implemented. I get that it can't satisfy everyone's needs, but then hardly anything can. If you have any idea of how it should work, then please suggest it and we'd definitely think about it.
Yeah. This.
I'm having difficulty understanding the technical reasoning behind "should". Who says? What is the spec? Is there one? Who designed it? Can it be changed?
If it's just a matter of supporting legacy code, then support two flavors: void
and boolean.
Trust me Thanh, there are many times I've wanted to abort at the listener. Ideally, I'd have a hook point much earlier in the executor thread and abort there.
Better than a boolean, let me return a string. If empty (""), continue. If not empty…
return "Aborted because ..."
…which Katalon would guarantee to throw, getting me out of the executor thread.
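As a sketch, that proposal could look like the following (a hypothetical API: Katalon does not currently support String-returning listeners, and isEnvironmentReady() is an invented placeholder):

```groovy
// Hypothetical: an empty return String means "continue"; a non-empty
// String is an abort message that Katalon would throw, stopping the
// executor thread before the test case runs.
String beforeTestCase(TestCaseContext testCaseContext) {
    if (!isEnvironmentReady()) {
        return "Aborted because the environment is not ready"
    }
    return "" // continue with the test case
}
```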
I agree with @Russ_Thomas, and that was my expectation too, but I think I understand the design now. This is not covered clearly in the documentation, and I think most of us use it wrongly (and it is also implemented only partially?), so I would recommend some updates to clarify this. (The chart looks good, but the documentation does not describe what's happening there.)
Test listeners are supposed to listen for specific events (like OnTestStart, OnTestSuccess, OnTestFailure, OnTestFinish, etc.), and their primary use should not be setting up test cases or test suites, but rather extra logging and reporting. Those actions are extra and should not impact the status of the test case execution.
Under each test suite (script view) we can use setUp() and tearDown() for the given test suite, and setupTestCase() and tearDownTestCase() for each test case within the suite. When we use these methods instead of test listeners, then in the case of a setUp failure the test case execution is skipped (test steps are not executed) and the test is reported as Error in the report.
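For reference, a test suite's script view carries these hooks via annotations; this is a sketch with empty bodies, assuming the annotation names and skipped flag from Katalon's generated suite stubs:

```groovy
import com.kms.katalon.core.annotation.SetUp
import com.kms.katalon.core.annotation.SetupTestCase
import com.kms.katalon.core.annotation.TearDown
import com.kms.katalon.core.annotation.TearDownTestCase

@SetUp(skipped = false)
def setUp() {
    // Runs once before the suite; if this fails, the suite's test cases
    // are skipped and reported as Error.
}

@SetupTestCase(skipped = false)
def setupTestCase() {
    // Runs before each test case in the suite.
}

@TearDownTestCase(skipped = false)
def tearDownTestCase() {
    // Runs after each test case.
}

@TearDown(skipped = false)
def tearDown() {
    // Runs once after the whole suite.
}
```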
Within Katalon Studio it looks like we can create listeners only for OnTestStart and OnTestFinish events so that is what is making it confusing and why I had my expectations. Can we create listeners for any other event? Is this maybe planned in the future?
Here are screenshots of the Code and the Report:
[Code screenshot]
[Report screenshot]
I think this needs to be explained better in the official documentation.
Thanks @Russ_Thomas and @ThanhTo
Rasko
@ThanhTo
Also, I believe the logic in the example on this page does not belong in the listener but in setupTestCase, since it does impact test case execution and can introduce a failure.
Thank you. I used try/catch to ignore it.
It seems execution profile > test suite would be the best practice; then I would agree that listeners would be mostly for extra logging of events etc.
Is this the best practice if you plan on running tests on multiple environments?
A test listener? For environments such as dev, staging, qa?
A global setup / teardown in a single suite file?
Or should we just make multiple suite files, such as
DevSmokeTest,
StagingRegression, etc.?
Even if it doesn't belong in this discussion: I would not make a new suite for every environment. You would repeat yourself too much.
Test listeners are only there to listen for events, e.g. "in case of failure, don't start the test."
I was lazy and put WebUI.openBrowser("") and WebUI.setViewportSize(1300, 1600) in BeforeTestSuite().
I am glad I chose global variables Dev, Stage… and so on.
There you can always define the same variables, like Campaign1URL, Campaign1Password, etc., and every test stays the same if you just load a different global setting.
If there is different behavior in a certain environment, you just give each environment a different identification variable, e.g. environment = "Stage", environment = "QA", and call, for example:
if (GlobalVariable.environment.equals("Stage") || GlobalVariable.environment.equals("QA") || GlobalVariable.environment.equals("Prod")) {
    WebUI.authenticate(GlobalVariable.Campaign1URL, "username", GlobalVariable.Campaign1Password, 5, FailureHandling.OPTIONAL)
} else {
    WebUI.navigateToUrl(GlobalVariable.Campaign1URL)
}
This is working code from my project. It's for navigating to different environments, either with basic auth or without it. I think everybody needs this, as there will be exactly this behavior on every project where the production URL isn't protected. I recommend writing a custom class for encryption, though; otherwise you put readable passwords in your project. Hope this helps someone.
Is it possible to call a Web Service test object / send a Web Service request in the AfterTestCase test listener?
Hi @MaKatalon
Yes, it is possible.
Hi Thanh To,
I have created a variable in a test listener script as below:
testname = testCaseContext.getTestCaseId()
I wanted to use this in my test case during runtime, but I couldn't do it.
I tried declaring it as GlobalVariable.testname = testCaseContext.getTestCaseId() in the test listener, but it didn't work.
Could you please help?
Also, is there a way I can import the methods I created in test listeners and use them somewhere else in my test cases?
How does it not work? Please provide the console log and your relevant scripts.