We are excited to announce that Ask Katalon Anything (AKA) June 2025 is now LIVE! Whether you're exploring the Katalon platform, running into challenges, or simply looking for tips to enhance your testing workflows, we are here to answer all of your questions.
We'd like to introduce Ms. Trân Lê, Product Manager at Katalon, who leads the development of end-to-end AI-driven workflows across our platform. Her expertise spans TestOps, automation orchestration, and QA data intelligence. Trân is passionate about helping teams work smarter with data and automation, and is always looking for ways to bring greater insight and efficiency to the testing process. If you're interested in optimizing your TestOps workflows or exploring AI-driven testing, @tranleez is the perfect person to engage with during this AKA!
Guidelines
If you have any questions about Katalon TestOps, please raise them directly in this thread. Ask one question at a time by replying to this topic, using the Reply button AT THE BOTTOM of the topic.
We will provide a weekly Q&A summary of this Katalon TestOps discussion in a separate thread. Please read through the existing questions in the summary to see if your question has already been asked by another member before posting your own.
Please don't reply to anyone else's post. If you'd like to discuss a related topic in more detail, you can Create a new topic.
After posting your question, please allow me 1-3 days to answer, as I may also be busy with other projects.
And lastly, don't forget to have fun!
Posts/replies that do not follow these guidelines may be removed to keep the AKA session flowing smoothly. Thanks!
The report captured the defect test cases. How is that related to an application issue, and why does it flag them as unreliable test cases?
So here, does "most unreliable test case" mean a test case or test script that needs to be improved, or the test case that fails most often, whose defect needs to be fixed on priority?
In Katalon TestOps, a flaky test case is a test that both passes and fails from time to time with no code modifications.
When Katalon TestOps identifies flaky test cases as "most unreliable," it refers to test cases that exhibit inconsistent behavior across multiple executions. This unreliability can stem from: Test Script Issues (as you mentioned), Application Issues (actual bugs in the application under test, performance inconsistencies, etc.), or Infrastructure Issues.
Recommended Actions when looking at this Flaky Test Case Report:
Investigate each flaky test to determine if it's a test issue or an application bug
Fix test scripts for timing/environmental issues
Prioritize application bugs revealed by consistent failure patterns
Use flakiness % to prioritize which tests need immediate attention (a sketch of one way to compute such a score follows below)
Bottom line: Treat flaky tests as diagnostic tools that need investigation rather than reliable indicators of application health. Focus on stabilizing your most flaky tests first to improve overall test suite confidence.
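To make the "flakiness %" idea concrete, here is a minimal Python sketch of one way such a score could be derived from a test case's recent run history. The flip-counting formula, the `flakiness_pct` helper, and the sample data are illustrative assumptions only; TestOps computes its own metric internally.

```python
# Minimal sketch: deriving a flakiness percentage from recent run history.
# The formula and sample data are assumptions for illustration -- TestOps
# computes its own metric internally.

def flakiness_pct(results: list[str]) -> float:
    """Percentage of consecutive runs whose outcome flipped (pass <-> fail)."""
    if len(results) < 2:
        return 0.0
    flips = sum(1 for prev, cur in zip(results, results[1:]) if prev != cur)
    return 100.0 * flips / (len(results) - 1)

# Hypothetical run history, oldest to newest.
history = {
    "TC_Login":    ["PASSED", "FAILED", "PASSED", "FAILED", "PASSED"],
    "TC_Checkout": ["PASSED", "PASSED", "PASSED", "FAILED", "FAILED"],
}

# Triage the flakiest tests first.
for name, runs in sorted(history.items(), key=lambda kv: -flakiness_pct(kv[1])):
    print(f"{name}: {flakiness_pct(runs):.0f}% flaky")
```

Sorting descending by such a score gives a simple triage order: the tests that flip most often get attention first, while a test that fails consistently scores low here and points at a real defect instead.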
In Katalon TestOps, under Organization Management, the platform shows a quota of 3,500 report uploads for the specified billing cycle. After uploading 2,000 reports, can I reset this so I can upload again? Can this quota be reset, and if so, how?
In the attached screenshot, it shows 112 reports uploaded. Can I reset this? Previously it was 2,000+ reports uploaded out of 3,500, but now it shows 112. Does that mean it got reset?
This quota shows the limit of test results that you can run. In short, you cannot reset it yourself; it is automatically reset to 0 every month.
I've heard TestOps has some new manual testing features. Can you walk me through how the AI-powered test case generation works when I have user stories or requirements? Does it actually understand the context and create meaningful test cases? Thank you
Is there any way to showcase or present Katalon automation testing performance by surfacing test execution metrics at a high level across all projects, or at the individual project level, via Power BI? Is there any integration available to connect the TestOps dashboard to Power BI so that all progress can be tracked automatically, for example through an API integration, rather than by manually feeding test result reports into Power BI? I am interested in details like how many test cases executed, how many passed, how many failed, how many bugs/defects were found, etc.
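While this question awaits an official answer, one direction to explore: TestOps exposes a REST API, so in principle execution metrics can be pulled programmatically and fed into Power BI on a schedule instead of being exported by hand. The sketch below is a hypothetical illustration only; the base URL, endpoint path, parameter names, auth scheme, and response fields are all assumptions, so verify them against the official Katalon TestOps API documentation before relying on any of them.

```python
# Hypothetical sketch of pulling execution metrics from the TestOps REST API
# as rows a Power BI data source could ingest. Endpoint path, parameters,
# auth scheme, and JSON fields are assumptions -- check the official
# Katalon TestOps API documentation for the real contract.
import requests

BASE_URL = "https://testops.katalon.io/api/v1"  # assumed base URL
API_KEY = "YOUR_API_KEY"                        # API key generated in TestOps

def fetch_execution_summary(project_id: int) -> list[dict]:
    """Return per-execution pass/fail counts for one project (assumed shape)."""
    resp = requests.get(
        f"{BASE_URL}/executions",             # assumed endpoint path
        params={"projectId": project_id},     # assumed parameter name
        auth=(API_KEY, ""),                   # assumed: API key via basic auth
        timeout=30,
    )
    resp.raise_for_status()
    executions = resp.json().get("content", [])  # assumed pagination wrapper
    return [
        {
            "execution_id": e.get("id"),
            "total": e.get("totalTests"),
            "passed": e.get("totalPassedTests"),
            "failed": e.get("totalFailedTests"),
        }
        for e in executions
    ]

# Example: dump metrics for one (hypothetical) project ID.
for row in fetch_execution_summary(project_id=12345):
    print(row)
```

Power BI could then consume the output via a scheduled refresh or a Web data source pointed at the same endpoint, which would remove the manual report-feeding step.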
When I run a test suite collection with a large number of test cases on a local agent, it sometimes stops with exit code 1 or 2, and the execution in TestOps is terminated with status "incomplete". Then I cannot tell how many test cases passed, failed, or were skipped. In particular, I cannot generate any useful exports, which wastes time.
So why does TestOps reset all test results instead of keeping the data as of the time the execution was terminated?
I also thought of a workaround. I realized that I can back up the reports in the folder tmp///Reports. But when the execution suddenly terminated, it deleted all data in the tmp folder, so I did not have enough time to back up the reports. Can I configure or change any settings to prevent data in the tmp folder from being deleted after the execution (I will delete it manually)?
Or do you have any other advice or solution?
Thank you for your help.
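Until there is a supported setting, one possible workaround is to mirror the Reports folder to a safe location on a short interval while the suite is running, so a snapshot survives even if the agent wipes the tmp folder on termination. A minimal Python sketch, where both paths are placeholders for your actual agent tmp and backup locations:

```python
# Minimal sketch: periodically mirror the agent's Reports folder to a backup
# location so a snapshot survives if the tmp folder is wiped on termination.
# REPORTS_DIR and BACKUP_DIR are placeholders -- point them at your real paths.
import shutil
import time
from pathlib import Path

REPORTS_DIR = Path("/path/to/agent/tmp/session/Reports")  # placeholder path
BACKUP_DIR = Path("/path/to/safe/backup/Reports")         # placeholder path

while True:
    if REPORTS_DIR.exists():
        # Overwrite the backup with the latest snapshot of the reports.
        shutil.copytree(REPORTS_DIR, BACKUP_DIR, dirs_exist_ok=True)
    time.sleep(30)  # snapshot every 30 seconds while the suite runs
```

Run it alongside the suite and stop it (Ctrl+C) once the execution finishes; the last snapshot taken before termination remains in the backup folder.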
You're right that the new version of TestOps supports manual testing workflows, especially together with automation testing capabilities, all in one place.
About the test case generation: this is one of the AI capabilities we introduced in the new TestOps. Given your user stories, tickets, or requirements, our AI-assisted feature analyzes the acceptance criteria and business logic to automatically generate comprehensive test cases that cover various scenarios, including positive flows, negative cases, edge cases, and boundary conditions. The feature also provides a place for you to enter custom requests for the AI. This dramatically reduces the manual effort of test case creation while ensuring thorough coverage of your requirements.
For the first issue: when I run a test suite in Katalon Studio Enterprise, the execution (new test run) automatically appears in TestOps > Execution > History, right? But if I terminate it before the execution completes, the corresponding test run is set to terminated and incomplete, and the total, passed, and failed test case counts all show 0.
So I have the same issue as above when I:
Run local agent
Run test suite collection with more than 200 test cases
Get exit codes 1, 2, or 3
If the test suite collection was being executed but was then terminated (by some errors?), the error appeared in the local agent's terminal and was not synced to TestOps. If the test suite collection executed completely and then had some errors, the session log was as below