also raise Xms to at least half of Xmx, so the memory becomes available to the JVM sooner
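As a sketch of what that could look like: Katalon Studio is Eclipse-based, so the JVM arguments usually live in an Eclipse-style launcher `.ini` file next to the executable (the exact file name and location depend on your OS and Katalon version — treat the snippet below as an assumption to adapt, not the definitive path):

```ini
-vmargs
-Xms4g
-Xmx8g
```

Here `-Xmx8g` caps the heap at 8 GB and `-Xms4g` pre-allocates half of it at startup, per the advice above.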
well, let’s not forget that as long as the test is still running, it is collecting data for the final report (which, apparently, happens in memory).
we cannot say this is a pure memory leak; it is more like unfortunate design (if that is the case).
to confirm this, some voodoo debugging may be needed.
perhaps the dev team may want to review that part and find better solutions for such cases (e.g. flush the data to disk)
the main issue is caused by the large dataset
but yeah, unreleased network resources can also create strange issues
which is free, a proven profiling tool for any JVM application.
A sample of how VisualVM looks:
This screenshot shows a case from my project, not yours of course. Have a look at how the “Heap size” and “Used heap” figures move. Please note that “Heap size” goes up and stays at 7GB (as I specified -Xmx7g), while “Used heap” hovers around 2GB; there is 5GB of heap space that was once used and left un-freed. In this situation, Garbage Collection becomes too busy trying to free heap space, the application processes do less and less, and eventually everything stops. I call this phenomenon “Memory-leak stuff”. In my case it was caused by a fault in my test script: it left a lot of I/O buffers unclosed.
I don’t think this request is practical. When a test script is faulty and causes this sort of “Memory-leak stuff” in the Java VM layer, I believe all application processing in that Java VM process will stop working. The Katalon Studio code running in the same process will also stop functioning.
Exactly!
So, up to now you have only assumptions about the actual root cause (as do we).
Therefore I propose:
@david.casiano should take an in-depth look into the code and debug it, in the hope of identifying the actual “leak”.
the hints from @kazurayam will be of great help, provided the tools are used properly.
@duyluong the dev team should consider the proposal from @Russ_Thomas and make the reporting part better.
Until then, we are just running in circles.
Good enough?
Then just wait for a (configurable) auto-refresh or hit F5.
I don’t see any engagement though, and that’s a shame. Probably not sexy enough – and doesn’t involve TestOps, either, which is where most of the development effort seems to be going these days. But really, devs, it just needs to work.
The cognoscenti will recall how my reporting system works, documented in bits around this forum. I could implement this in my stuff in an afternoon (or two).
Updating this thread. The “Test Collection” approach seems to work better. Our observation is that reducing the number of entries in the “Report” folder decreases the memory usage. Moving the previous reports (created by Katalon after a test finishes) out to another directory also seems to reduce memory (still large, at over 1.2 GB at startup with the project loaded, before any tests have been run).
Also, the “Test Collection” approach of distributing the data across multiple test suites increased performance, as I can process through the data files more quickly. However, this time it got stuck at the end (a ‘frozen’ UI after 3 days). But the good news was that I got the reports for the test suites that finished.
you can view any file in the Katalon project. Not only the built-in reports: you can view any custom-made report in a browser through this local HTTP server.
You do not have to transfer report files to remote servers (like TestOps), which takes a long time. As soon as the custom reports are generated locally, you can view them at the URL http://localhost. Quite handy.
to stop the local HTTP server, type CTRL+C in the command line
you can change the listening port from the default 80 to any you want via a command-line parameter. You can also change the base directory with a parameter.
I find this local HTTP server useful for quick research. For example, I used this server to witness a bug in Katalon Studio’s RequestObject class.
In this Katalon User Forum, many people report problems with WebService tests. However, it is often difficult to reproduce a problem without a RESTful/SOAP server counterpart up and running. This local HTTP server makes it easy to mimic the problem with test fixtures (JSON, XML files) prepared manually in the project directory.
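To make the fixture idea concrete, here is a minimal self-contained sketch (the file name `fixture.json` and the payload are just examples, and port 0 is used so the OS picks a free port instead of 80):

```python
# Serve a hand-made JSON fixture over HTTP and fetch it back,
# mimicking a RESTful counterpart for a WebService test.
import http.server
import json
import os
import tempfile
import threading
import urllib.request

os.chdir(tempfile.mkdtemp())                      # pretend project folder
with open("fixture.json", "w") as f:
    json.dump({"status": "ok"}, f)                # manually prepared fixture

# Port 0 lets the OS pick a free port; a real setup might use 80.
server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/fixture.json") as resp:
    data = json.load(resp)
server.shutdown()
print(data)  # {'status': 'ok'}
```

Any HTTP client (including a Katalon WebService test object pointed at localhost) could play the client role instead of `urllib`.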
Then you would want to decrease the size of the files in the “Report” folder. That may result in less memory being occupied.
If you have a paid license of Katalon Studio Enterprise, then you can disable “log executed steps”, which will significantly reduce the size of the report files. The “step execution logs” occupy approximately 90% of the bytes of the report files, but that information is rarely useful (I suppose you do not need the step execution logs). See the following post also:
well, of course, one may use Python too, provided it is already installed.
just do:
$ python3 -m http.server <port_number>
(on legacy Python 2 the equivalent was: $ python -m SimpleHTTPServer <port_number>)
in whatever folder you like.
no need to grab any third-party script; this module ships with a standard installation.
but the idea of the current topic was that the reporting part has to be improved with regard to memory efficiency.
note that the final HTML report is not generated until the test suite has ended gracefully.
Precisely. (He gets it!) We want it as it is being constructed, not after it’s flushed to disk - that’s the point. This requires new code in Katalon to flush the steps to disk step by step (not TC by TC, and certainly not per suite, which is what we have right now). I’d even settle for simple “outcomes”: pass/fail/error.
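What step-by-step flushing could look like, as a generic sketch (this is not Katalon code; the `outcomes.jsonl` file name and the JSON-Lines shape are my own illustrative choices):

```python
# Flush each step outcome to disk as it happens, so a crash mid-suite
# still leaves a readable partial log that a viewer can tail.
import json
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "outcomes.jsonl")

def record_step(path, step, outcome):
    # One complete JSON object per line; open/append/close per step so
    # the line reaches the disk immediately.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"step": step, "outcome": outcome}) + "\n")

record_step(log_path, "Open browser", "pass")
record_step(log_path, "Click login", "fail")

# Even if the process died here, the two lines above are already on disk.
with open(log_path, encoding="utf-8") as f:
    lines = [json.loads(line) for line in f]
print(lines[-1]["outcome"])  # fail
```

Because every line is a complete record, a report viewer never has to wait for the suite to end gracefully.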
Great post though, @kazurayam. I’m sure it will help somebody. But like @anon46315158 said, browsing local folders is easy. I use file:/// stuff all the time – I have seven tiddlyservers (node) open right now.
The content of the execution0.log file is exactly the same as what we see in the Log Viewer in the Katalon Studio GUI. This means the execution0.log file contains the on-going results while a test suite is running.
Katalon Studio transforms the execution0.log file to what we call “Basic Report as HTML” and “JUnitReport as XML”.
Therefore, if you want a means to access on-going results while a test/suite is running, then you need two things:
You need to trigger Katalon Studio to perform the execution0.log → HTML report transformation on demand, even when a Test Suite has crashed for some reason.
When you forcefully stopped Katalon Studio because the Test Suite got frozen, the execution0.log file might end up a mal-formed XML document. For example,
the file may miss the closing </log> tag at the end of the file
a <record> tag may be broken at the end of the file.
Possibly, the execution0.log file is finalized only when a Test Suite has finished (regardless of failure or not). I am not sure what state the log file is left in when the OS process is stopped irregularly.
Do you want Katalon Studio to be capable of transforming a mal-formed execution0.log into a well-formed Basic HTML report? I do not think that is possible to implement.
A super-skillful Java programmer may be able to write a purifier tool that accepts a mal-formed execution0.log file and makes it well-formed XML. Then the magic may become possible. Sorry, I am not capable.
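For what it's worth, the core of such a purifier may be simpler than it sounds. Here is a hypothetical sketch (in Python for brevity, though the post imagines a Java tool), assuming the log's root element is `<log>` and entries are `<record>` elements, which is the shape described above:

```python
# Repair a truncated execution0.log-style XML log so it parses again.
import xml.etree.ElementTree as ET

def purify(text):
    """Drop a trailing half-written <record> and re-close the <log> root."""
    # Keep everything up to the last complete </record>.
    end = text.rfind("</record>")
    if end != -1:
        text = text[:end + len("</record>")]
    # Re-append the root closing tag if it is missing.
    if "</log>" not in text:
        text += "\n</log>\n"
    return text

broken = "<log>\n<record><message>step 1</message></record>\n<record><mess"
fixed = purify(broken)
ET.fromstring(fixed)  # would raise if the result were still mal-formed
```

A real tool would need to handle the XML declaration, DOCTYPE, and encoding of the actual file, but the "truncate to the last complete record, then close the root" idea is the whole trick.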
In addition to what has been said above, perhaps I have to be a bit more precise.
When I am saying ‘reporting’ I refer also to the log viewer.
And that, I know for sure, is memory hungry, since it populates the GUI ‘on the fly’ but seems not to release the memory until the test finishes (or sometimes not at all).
So the log viewer definitely needs some love.
A Katalon Studio Enterprise license holder can disable the “step execution logs”, but this option is not available to the Katalon Studio Free license holder. I am uncomfortable about this design decision. In Dec 2020, I wrote that Katalon should change this option to be free.
But the team has not taken this seriously.
I am afraid I should advise those who want to drive a large-scale test (like you, @david.casiano) not to use the Katalon Studio Free version. They need to pay for the Enterprise license. Otherwise, they should choose other UI-testing software.
And this is where we are starting to converge.
This option is available only for Enterprise users, as you correctly pointed out.
And even if this is made available in the Free version, the user may still want an option to watch the reports ‘on the fly’, particularly in cases with large datasets / long-running suites.
So, no matter whether I am an Enterprise user or a Free user, I should have the option to turn off the log viewer and refresh the browser from time to time instead (I won’t mind doing this manually).
This is why both requests should be considered.
Of course, running the tests with KRE may be a solution (since the console runner shouldn’t bother with the log viewer), but KRE … guess what, it is yet another licensed-only feature …
I am already on the ‘dark side’ (Python + Robot Framework).
And actually … funny, the main reason for this was not the development inconsistencies and the licensing scheme … but something more serious for me … the Linux support.
Yet again, starting from 7.9.0, the Linux version is broken, at least on Fedora, most probably due to the Eclipse upgrade.
But I won’t bother to submit another bug report; it looks like Linux usage is simply ignored … some people think that Linux runs only in the cloud …