Performance of logging execution steps

Hi,

I’m aware that the ability to disable the logging of execution steps is an Enterprise feature; however, could the performance of logging to the Console be reviewed? I have a test suite whose execution completes after 3 minutes, yet it takes a further 15 minutes before the test finally shows as complete in the Console.

My test does involve a large number of variables and calls other test cases from within various API calls and if statements; however, there must surely be some room for improvement in the performance of logging test steps to the Console.

Is this something that can be reviewed for a future release?

That’s just weird.

Are you using any kind of integration?
Are you running on bare metal or through a VM?
Headless?

Lastly, what version of Katalon?

Sorry, should have included the Katalon version - 7.8.0.

Katalon is running on my laptop, no VM. The test suite runs as a web service test.

The only ‘integration’ of sorts, is with Katalon Test Ops, nothing else.

For reference, the ‘execution0’ log file contains 94,546 lines of data (3.2 MB), so it is fairly large, but the gap between the test completing and the logging completing shouldn’t be that large.

My logs are terminating at ~10 MB, so I think we can discount that possibility.

Are you running the suite in Studio? What does it say it’s doing while those 15 minutes are ticking by…?

Any clue in the log?

Yes, it’s running in the Studio application.

Whilst running, I can see the test steps being added to the Console log as it progresses; however, after 3 minutes or so, the ‘job progress’ indicator will show as ‘passed’ (albeit ‘1/6’, for example) and the Console log will continue for another 15 minutes or so until it completes.

I’ve attempted to click the ‘delete all terminated launches’ button in the ‘job progress’ pane; however, Katalon just becomes unresponsive and crashes.

A minor amendment to the above, it’s the ‘Log Viewer’ that I should have been referring to, rather than the ‘Console’.

@duyluong Any ideas?

Quoting from Log Viewer results significantly delayed - #2 by kazurayam

the “Log Viewer” gets significantly delayed in relation to the actions being performed on the website

This may occur when you have too many messages from the executed Test Case, e.g. from WebUI.comment(msg) or from the “Step Execution Log”. …

There is an alternative way of decreasing the Test Cases’ “Step Execution Log” messages.

Katalon Studio does not emit verbose “Step Execution Log” messages for code inside Keywords. I recommend moving the code in your Test Cases (Groovy scripts) into Keywords (Groovy classes), especially the code that is executed repeatedly. Once you have implemented it as Groovy classes, replace the original code in the Test Cases with calls to the methods of the new classes. The volume of “Step Execution Log” messages will then shrink, and fewer messages will make the Log Viewer more responsive.
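For example, here is a minimal sketch of such a custom keyword class, assuming a web service project; the package, class, and method names are only illustrative:

```groovy
// Keywords/com/example/ResponseChecks.groovy (illustrative package and names)
package com.example

import com.kms.katalon.core.annotation.Keyword
import com.kms.katalon.core.testobject.ResponseObject
import com.kms.katalon.core.webservice.keyword.WSBuiltInKeywords as WS

class ResponseChecks {

    /**
     * Bundle several verifications into a single custom keyword call.
     * Because the verifications run inside a Keyword (Groovy class) rather
     * than directly in the Test Case script, they should not each appear as
     * separate steps in the Step Execution Log.
     */
    @Keyword
    def verifyStatusAndField(ResponseObject response, int expectedStatus,
                             String fieldLocator, String expectedValue) {
        WS.verifyResponseStatusCode(response, expectedStatus)
        WS.verifyElementPropertyValue(response, fieldLocator, expectedValue)
    }
}
```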

Thanks for your feedback regarding Keywords. They’re not something that I’m familiar with, but I’ll aim to move some of my code from the test cases to keywords.

Ideally I’d like to send an API request within a test case and then verify values within the keywords. Do you happen to know whether this is feasible, i.e. do they work in this way?
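Something like this is roughly what I have in mind (a rough sketch only, reusing the example class above; the test object path and field locator are just placeholders):

```groovy
// Test Case script (sketch): send the request in the test case,
// then hand the response to a custom keyword for the detailed checks.
import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject

import com.kms.katalon.core.testobject.ResponseObject
import com.kms.katalon.core.webservice.keyword.WSBuiltInKeywords as WS

// 'API/GetOrder' is a placeholder request object in the Object Repository
ResponseObject response = WS.sendRequest(findTestObject('API/GetOrder'))

// The detailed verifications happen inside the custom keyword,
// so they show up as one step rather than many.
CustomKeywords.'com.example.ResponseChecks.verifyStatusAndField'(response, 200, 'data.status', 'COMPLETED')
```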

Back to my original point: it would be helpful if Katalon could review the logging performance (if it is something that they can control). Although this keyword approach may be a workaround (and perhaps make my code more efficient), it still seems an area that could be optimised to improve the product.

Prior to version 7.0, an option was available to disable the Step Execution Log and thereby decrease the log verbosity. However, that option was later made available only in Katalon Studio Enterprise.

@Daniel_Wilkinson

If you are a Katalon Studio Enterprise user, you can try the option “disable logging executed steps”

I agree with you.

@ThanhTo
@devalex88
@duyluong

Over the past couple of weeks I have found a number of posts in this forum raising performance issues like this one, where the verbose Step Execution Log affects users badly. Let me remind you that as your product develops, so do its users. They will attempt larger-scale projects, and they are likely to conclude that “Katalon Studio is too slow”. That reputation would harm your business.

I would suggest that the Katalon team reconsider: the “Enable/Disable logging executed steps” option should be available to all users free of charge.


@Dung_Ngo5 @Dung_Ngo6 @testleonhart1234 @testleonhart324

Dung Ngo,

Where are you?


@kazurayam - unfortunately I’m not a Katalon Studio Enterprise user; hence the request to see whether the logging performance could be addressed as a future change.

I’ve started to look at keywords, I just need to understand how I can make use of local variables!

Dear Kazu,

Sorry for my late reply. We’ll look into this and get back to you later.

Best regards,
Young Ngo

any update on that?

No, I haven’t heard anything about this issue.

We are still working on that and will update soon.


It’s crazy when you use Data Driven Testing… I’m looking forward to this issue being resolved.

@gfortier

I understand that your “Test Cases/Gestion/ProcessExcelLine” took a long time (129,110s), and you seem to expect that your Test Case would run faster if you could disable the logging of execution steps. I don’t think it would. I suspect there is some other factor making your Test Case take so long, and I would suggest looking at your case from a different point of view.

As @Daniel_Wilkinson reported :

I have a test suite that completes after 3 minutes; however 15 minutes later, the test finally completes within the Console.

His Test Case ran in 3 minutes, and the log records that it took 3 minutes. But Katalon Studio took another 15 minutes to flush the bulky buffered message text to the Log Viewer GUI. Those additional 15 minutes would not appear as figures recorded in the log at all.

@kazurayam
I don’t think so, because you can clearly see a pattern in the execution log, and the time grows nearly exponentially.
Also, when I look at the run itself, I see the execution of the script doing something like a ‘thread sleep’ while the Log Viewer continues to write… and once the Log Viewer catches up with the test case execution, the execution continues perfectly. They all run nearly the same test; the only differences are the data and some verifications, but the core of the test is the same. I guess I’ll have to file another bug?

Please have a look at this topic: