We are excited to announce that Ask Katalon Anything (AKA) June 2025 is now LIVE! Whether you’re exploring the Katalon platform, running into challenges, or simply looking for tips to enhance your testing workflows – we’re here to answer all of your questions.
We’d like to introduce Ms. Trân Lê, Product Manager at Katalon, who leads the development of end-to-end AI-driven workflows across our platform. Her expertise spans TestOps, automation orchestration, and QA data intelligence. Trân is passionate about helping teams work smarter with data and automation, and is always looking for ways to bring greater insights and efficiency to the testing process. If you’re interested in optimizing your TestOps workflows or exploring AI-driven testing, @tranleez is the perfect person to engage with during this AKA!
Guidelines
If you have any questions about Katalon TestOps, please raise them directly under this thread. Ask one question at a time by replying to this topic, using the Reply button AT THE BOTTOM of this topic.
We will provide a weekly Q&A summary of the Katalon TestOps discussion in a separate thread. Before posting your own question, please read through the existing questions in the summary to see if it has already been asked by another member.
Please don’t reply to anyone else’s post. If you’d like to discuss a related topic in more detail, you can Create a new topic.
After posting your question, please allow 1–3 days for an answer, as I may also be busy with other projects.
And lastly, don’t forget to have fun!
Posts/replies that do not follow these guidelines may be removed to keep the AKA session flowing smoothly. Thanks!
The report captured the defect test cases. How is this related to an application issue, and why does it label a test case as unreliable?
So does “most unreliable test case” mean the test case or test script needs to be improved, or that it is the test case that fails most often and whose defect needs to be fixed on priority?
In Katalon TestOps, a flaky test case is a test that sometimes passes and sometimes fails with no code modifications.
When Katalon TestOps identifies flaky test cases as “most unreliable,” it refers to test cases that exhibit inconsistent behavior across multiple executions. This unreliability can stem from:
Test script issues (as you mentioned)
Application issues (actual bugs in the application under test, performance inconsistencies, etc.)
Infrastructure issues
Recommended Actions when looking at this Flaky Test Case Report:
Investigate each flaky test to determine if it’s a test issue or application bug
Fix test scripts for timing/environmental issues
Prioritize application bugs revealed by consistent failure patterns
Use flakiness % to prioritize which tests need immediate attention
Bottom line: Treat flaky tests as diagnostic tools that need investigation rather than reliable indicators of application health. Focus on stabilizing your most flaky tests first to improve overall test suite confidence.
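To make the flakiness % idea concrete, here is a minimal sketch of how an unreliability score could be derived from a test case’s run history. The flip-counting heuristic and the 30-run window are assumptions for illustration, not the exact formula TestOps uses:

```python
# Illustrative only: compute a simple flakiness percentage from recent run
# results. This is NOT the exact formula Katalon TestOps uses.

def flakiness_percent(runs, window=30):
    """runs: list of 'PASSED'/'FAILED' strings, oldest first."""
    recent = runs[-window:]
    if len(recent) < 2:
        return 0.0
    # Count status flips between consecutive runs: a test that alternates
    # between pass and fail flips often; a consistently failing test does not.
    flips = sum(1 for prev, cur in zip(recent, recent[1:]) if prev != cur)
    return 100.0 * flips / (len(recent) - 1)

history = ["PASSED", "FAILED", "PASSED", "PASSED", "FAILED", "PASSED"]
print(f"Flakiness: {flakiness_percent(history):.0f}%")  # 80% for this history
```

Under this heuristic, a test with a high flip rate is unstable rather than consistently broken, which is exactly the kind of test the investigation steps above should target first.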
In Katalon TestOps, under Organization management, the platform shows a quota of 3,500 report uploads for the specified billing cycle. After uploading 2,000 reports, can I reset this so I can upload again? Can this quota be reset, and if so, how can we do that?
In the attached screenshot it shows 112 reports uploaded. Can I reset this? Previously it was 2,000+ reports uploaded out of 3,500, but now it is 112. Does that mean it got reset?
This quota shows the limit of test results that you can run. In short, you cannot reset it yourself, because it is automatically reset to 0 every month.
I’ve heard TestOps has some new manual testing features. Can you walk me through how the AI-powered test case generation works when I have user stories or requirements? Does it actually understand the context and create meaningful test cases? Thank you
Is there any way to showcase or present Katalon automation testing performance by presenting test execution metrics at a high level across all projects, or at the individual project level, via Power BI? Is there any integration available to connect the TestOps dashboard to Power BI so that all progress can be tracked automatically (for example, through an API integration), rather than by manually feeding test result reports into Power BI? I am thinking of details like how many test cases were executed, how many passed, how many failed, how many bugs/defects were found, etc.
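For illustration of the kind of automated pull being asked about, here is a minimal sketch that fetches execution summaries over HTTP and writes them to a CSV that Power BI can use as a data source. The base URL, endpoint path, query parameters, authentication scheme, and response field names below are assumptions, not the documented Katalon TestOps API; check the official API reference before relying on them.

```python
# Hypothetical sketch: pull execution summaries from a TestOps-style REST API
# and write them to a CSV that Power BI can consume as a data source.
# The endpoint, query parameters, and response fields below are assumptions.
import csv
import requests

API_TOKEN = "<your-api-key>"                     # assumed: personal API key
BASE_URL = "https://testops.katalon.io/api/v1"   # assumed base URL
PROJECT_ID = 1234                                # hypothetical project id

resp = requests.get(
    f"{BASE_URL}/executions",                    # assumed endpoint name
    params={"projectId": PROJECT_ID},
    auth=("", API_TOKEN),                        # assumed auth scheme
    timeout=30,
)
resp.raise_for_status()
executions = resp.json().get("content", [])      # assumed response shape

with open("testops_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["execution_id", "total", "passed", "failed", "status"])
    for e in executions:
        writer.writerow([
            e.get("id"),
            e.get("totalTests"),
            e.get("totalPassedTests"),
            e.get("totalFailedTests"),
            e.get("status"),
        ])
```

If a script like this is scheduled (for example, with a CI job or Windows Task Scheduler), Power BI can refresh from the resulting CSV so the dashboard stays current without manually uploading report files.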
When I run a test suite collection with a large number of test cases on a local agent, it sometimes stops with exit code 1 or 2, and the execution in TestOps is terminated with the status “incomplete”. Then I cannot see how many test cases passed, failed, or were skipped. In particular, I cannot generate any useful exports, which wastes time.
So why does TestOps reset all test results instead of keeping the data up to the point at which the execution was terminated?
I have also thought of a workaround. I realized that I can back up the reports in the folder tmp///Reports. But when the execution suddenly terminated, it deleted all data in the tmp folder, so I did not have enough time to back up the reports. Can I configure or change any setting to prevent the data in the tmp folder from being deleted after the execution (I will delete it manually)?
Or do you have any other advice or solution?
Thank you for your help.
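As an illustration of one possible interim workaround for the question above (a sketch only, assuming a helper script can run alongside the local agent, and with placeholder paths rather than the real agent directories): periodically mirror the Reports folder to a location outside tmp, so a partial copy survives even if the agent cleans up the temp directory on termination.

```python
# Sketch of a workaround: periodically mirror the agent's Reports folder to a
# safe location outside tmp, so partial results survive an abrupt termination.
# REPORTS_DIR and BACKUP_DIR are placeholders; adjust them to your agent setup.
import shutil
import time
from pathlib import Path

REPORTS_DIR = Path("/path/to/agent/tmp/.../Reports")   # placeholder path
BACKUP_DIR = Path.home() / "testops-report-backups"
INTERVAL_SECONDS = 60

BACKUP_DIR.mkdir(parents=True, exist_ok=True)

while True:
    if REPORTS_DIR.exists():
        # Overwrite the previous snapshot with the latest report contents.
        snapshot = BACKUP_DIR / "latest"
        shutil.copytree(REPORTS_DIR, snapshot, dirs_exist_ok=True)
    time.sleep(INTERVAL_SECONDS)
```

Running such a script in a separate terminal before starting the suite collection, and stopping it once the run finishes, would leave a last snapshot of the partial reports even if the tmp folder is wiped.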