Could I generate a report where I can track the total executions and flakiness of a test case (from the selected repository and uploaded data)? I want to track flakiness from the regression run of each sprint, so that I can judge the effectiveness of my maintenance plan for test cases with high flakiness.
@cgrandin That's a great point.
To elaborate: in the failure assertion (of an individual test case), we embed verification metadata (key=value pairs) within the error content (the stacktrace) and the test case itself. This metadata then becomes part of a larger matrix structure. Analyzing the test execution results within that structure gives a much clearer picture and leads to more effective remediation, for instance by letting us assign more accurate prioritization and severity to the failure.
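To make the idea concrete, here is a minimal, framework-agnostic sketch (plain Java, not Katalon's API; the keys `component` and `severityHint` are just illustrative) of how key=value metadata could ride along in an assertion failure message and later be parsed out of the stacktrace:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch: embed key=value verification metadata in the assertion
// failure message so it can be extracted from the error content later.
// The keys used here are illustrative, not a Katalon convention.
public class MetadataAssert {

    public static void assertEqualsWithMetadata(Object expected, Object actual,
                                                Map<String, String> metadata) {
        if (expected == null ? actual == null : expected.equals(actual)) {
            return;
        }
        StringBuilder msg = new StringBuilder("Verification failed");
        msg.append(" | expected=").append(expected)
           .append(" | actual=").append(actual);
        for (Map.Entry<String, String> e : metadata.entrySet()) {
            msg.append(" | ").append(e.getKey()).append("=").append(e.getValue());
        }
        throw new AssertionError(msg.toString());
    }

    public static void main(String[] args) {
        Map<String, String> meta = new LinkedHashMap<>();
        meta.put("component", "checkout");
        meta.put("severityHint", "high");
        // Produces: "Verification failed | expected=Order placed
        //   | actual=Payment declined | component=checkout | severityHint=high"
        assertEqualsWithMetadata("Order placed", "Payment declined", meta);
    }
}
```

A downstream report can then split the message on `|` and group failures by those keys instead of by raw stacktrace text.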
What are your thoughts?
@linhmphan Thanks for your observation.
I found that there are two places where I can observe flakiness (and the total number of executions of one test case). The first is in Katalon TestOps > Tests > Test Case > select a folder > view the Flakiness column. The second is in Katalon Studio > Test Suite > view the Flakiness column.
But the numbers in the two places are not the same! So I wonder:
- How does Katalon define these numbers?
The reason the numbers in Katalon Studio and Katalon TestOps differ comes down to the scope of the item each platform tracks. The flakiness number is an aggregated result of multiple executions (the number of unstable/failed runs divided by the total number of runs). However, the definition of the "test" this metric applies to is different: TestOps focuses on the health of the individual test case, while Studio reports the stability of the test suite as a whole. Since they aggregate data for two different entities, the final results will naturally vary.
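As a tiny worked example of that aggregation (just the formula above, nothing Katalon-specific):

```java
// Flakiness as unstable/failed runs divided by total runs for a given entity.
// Whether the entity is a test case (TestOps) or a whole test suite (Studio)
// changes which runs are counted, so the two numbers can legitimately differ.
public class FlakinessExample {
    static double flakiness(int unstableRuns, int totalRuns) {
        return totalRuns == 0 ? 0.0 : (double) unstableRuns / totalRuns;
    }

    public static void main(String[] args) {
        // Test case TC_Login: 3 unstable results out of 20 runs -> 0.15
        System.out.println(flakiness(3, 20));
        // Suite containing TC_Login plus stable cases: 3 unstable out of 60 runs -> 0.05
        System.out.println(flakiness(3, 60));
    }
}
```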
Let me know if you have any other questions!
- What is the correct and efficient way to observe flakiness? For example, what is an acceptable percentage for flakiness?
It's a great question. I think the answer depends on the operational quality metrics the team wants to use to measure its test automation. For instance, to compare across app versions, the raw count (as Katalon uses) seems like a suitable approach. However, the percentage view (Test Case X's flakiness number / total flakiness number) is a good way to measure overall automation stability.
What do you think?
When executing test cases, I can sometimes tell from the error message that the failed test scripts will pass if I rerun them.
Not sure I fully understand the question. What if I go with a very naive approach: on a particular test step statement (that I know may fail on the first run and succeed on subsequent runs), I just add a simple retry mechanism (like my own custom keyword). Sorry if it sounds silly, but I think I may have missed something about your problem.
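For illustration, a minimal retry sketch along those lines (plain Java, framework-agnostic; in Katalon you would typically wrap this in a custom keyword, and the names here are made up):

```java
import java.util.concurrent.Callable;

// Wrap a flaky step in a retry loop with a short pause between attempts.
public class RetryHelper {

    public static <T> T retry(int maxAttempts, long pauseMillis, Callable<T> step) throws Exception {
        Exception lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return step.call();
            } catch (Exception e) {
                lastFailure = e;
                Thread.sleep(pauseMillis);   // back off briefly before retrying
            }
        }
        throw lastFailure;                   // give up after maxAttempts
    }

    public static void main(String[] args) throws Exception {
        // Example: retry a step that may time out on the first run.
        String result = retry(3, 2000, () -> "clicked");  // replace with the real step
        System.out.println(result);
    }
}
```

The trade-off is that a blanket retry can mask genuine regressions, so it is best scoped to steps whose failures you already know are environmental.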
Duy
Thank you for the solution you provided.
The errors come from an unstable network and timeouts, so I think adding a retry to particular keywords will save me effort.
I want to track flakiness from the regression run of each sprint, so that I can judge the effectiveness of my maintenance plan for test cases with high flakiness.
I may be missing something here. The TestOps report supports a date range that may fit your needs. Is that what you're looking for?
In case you only have test execution data in Studio (no TestOps license), I think it only gives flakiness at the test suite level.
Duy
@linhmphan There are two methods for executing tests: in Studio (i.e. the IDE) or via the command line (i.e. KRE). I assume the test execution method in both scenarios is Studio.
Within that context, I think the "auto upload" config is the right option. Now it's just a matter of which upload behavior is the default (always uploading or manual uploading). Here is the manual upload.
P.S. In case you have KRE in your CI, you can consider explicitly uploading through the KRE command line (here) and turning off the auto upload (in project settings).
Hope it helps
I want to download a Flaky Test Cases report (CSV) like this: View test case reports in Katalon TestOps (Legacy) | Katalon Docs. That report should include flakiness.
Moreover, to see the Test Case Execution Status report, my role must be Tester or Tester Lead. But I cannot find any page to update those roles; I only see Owner, Admin, and Member as project-level roles.
And even though my role is owner/admin of the project, I cannot see the section or navigation link for the Test Case Execution Status report.
@Shin I have noticed that Katalon Studio's built-in Git Staging tab feels quite limited. For example, I often have to switch to Git Bash to perform common Git operations like resolving conflicts, cleaning up tracked files, or viewing detailed error messages.
This breaks the flow of my work and forces me to leave the Katalon environment, which tends to be quite distracting for me.
Are there any plans to revamp or enhance the Git integration inside Katalon Studio to make it more comprehensive and user-friendly? For example, adding full-featured Git operations, a more intuitive UI, better conflict resolution tools, or more descriptive error messages?
It would help our team immensely if Katalon could offer a smoother and more intuitive Git experience right within the tool. Is that something the product team is considering for future releases?
I understand Katalon's Git functionality is built on top of Eclipse's EGit. Is this dependency a limiting factor, or is there room for Katalon to innovate beyond it?
I've noticed that when executing a test case in Katalon Studio, the browser launch time is inconsistent. Sometimes the selected browser opens almost instantly, but other times it takes several seconds before the WebDriver session initializes and the browser appears.
This variability becomes noticeable (and time-consuming) when debugging or re-running tests multiple times in succession.
Could you please explain what factors contribute to this inconsistent browser startup time, for example whether it's related to driver initialization, memory usage, or internal resource loading? And are there any best practices or upcoming optimizations planned to make browser launch times for test runs more consistent and predictable?
Another Question
When viewing test execution results in the Log Viewer, we can see numbered steps (Step 1, Step 2, ...) that correspond to the actions in Manual Mode.
However, when a step fails, say Step 20, it's not easy to locate the exact line of code in Script Mode that corresponds to Step 20 and caused the failure. The numbering doesn't map to code line numbers, so we have to manually search for the relevant keyword or statement, which is cumbersome and time-consuming when scouring through hundreds of lines of code.
Would it be possible to enhance Katalon Studio so that a user could, say, double-click on a log step to automatically open the associated Script Mode line or region?
This would greatly improve debugging efficiency and make it much easier to switch between Manual and Script views when diagnosing failures.
Hi @athifk,
Thanks for raising this. You're right that the existing integration has several limitations when it comes to conflict resolution, detailed error messages, and advanced Git operations. We've acknowledged these issues, and improving the Git workflow to make it more intuitive and reliable is part of our upcoming plan.
Thanks again for sharing your feedback. It helps us stay focused on the right priorities.
Glad to know. It would make our lives incredibly easy if you were able to iron out the Git workflow and make it smooth and effortless so that we don't have to rely on an external Git client or tool for Git operations.
Also, how about the inconsistent browser launch times issue for test runs that I highlighted earlier in this thread? It would be great to know your thoughts on that, @Shin.
Good spot! I've noticed the same behavior while performing UAT testing as well.
The inconsistent browser launch time can come from several factors, including OS resources (CPU, memory, file I/O, and network), driver initialization and management, or the way the OS handles process startup and waiting. These variations can make the launch feel unpredictable, especially during frequent test reruns or debugging sessions.
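If you want to quantify the variation on your own machine, a quick sketch with plain Selenium (assuming chromedriver is on the PATH; this is not how Katalon measures it internally) is enough to see the spread across consecutive launches:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Time a few consecutive driver start-ups to see the launch-time spread.
public class BrowserStartupTiming {
    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            long start = System.nanoTime();
            WebDriver driver = new ChromeDriver();          // session creation + browser launch
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Run " + (i + 1) + ": " + elapsedMs + " ms");
            driver.quit();                                  // close the browser before the next run
        }
    }
}
```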
We're planning to revamp the execution management in Katalon Studio 11, where we'll perform detailed profiling and benchmarking to identify the root causes and address them more comprehensively.
Thanks again for sharing your feedback. We really appreciate it!