Docker - Test duration much slower when using collection

Hi,

I have a question about collection behavior.
We are using the Katalon Docker image with a collection of test suites. Recently, after we moved to a new VM, we noticed that test cases take much longer when running within a collection.

We took a few measurements with test_suite_3, which contains 29 test cases. Running it in a collection that contains only test_suite_3 takes ~1h40m.

Then we split test_suite_3 into 2 parts:

  • part1: test case 1 to 19
  • part2: test case 20 to 29

Then we reran the 2 parts independently within a collection:

  • within collection with only part1: it takes ~26min
  • within collection with only part2: it takes ~12min
  • within collection with part1 + part2: it takes ~1h40min

Why do we get ~1 extra hour when we put part1 and part2 together in a collection? Run separately, the two parts add up to ~38min, yet together they take ~1h40m. I can’t understand it. It looks like the more test cases/test suites we run in a collection, the longer everything takes.

Maybe there are a few settings I missed when starting Docker. Below are the main parameters we use when running the container:
-m 4G --memory-swap=4G --shm-size=256m

Do you think it could be related to a memory issue somehow?
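One way we thought of to check the memory theory is to sample the container while the collection runs (just a sketch; `docker stats` reports all running containers, or you can name yours explicitly):

```shell
# Sample CPU and memory usage of running containers once; if MEM USAGE sits
# at the 4 GiB limit for long stretches, the JVM inside is likely starved.
# Guarded so the snippet degrades gracefully on hosts without the docker CLI.
if command -v docker >/dev/null 2>&1; then
  docker stats --no-stream
else
  echo "docker CLI not available on this host"
fi
```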

Thanks for your help.
Phung

Thank you for your post and all of the great details. This is very intriguing and I am escalating internally to see if we can get more eyes on this issue.

Best, Sara

@bionel any ideas?

mhm … does KRE have a katalon.ini file like KSE?
If yes, what are the heap size settings?
It may be related to the JVM settings rather than to the resources allocated to the container (or to a combination of both).
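For reference, in KSE the katalon.ini is an Eclipse-style launcher file, so the heap section would look roughly like the sketch below. The values here are placeholders for illustration, not Katalon defaults; the point is that `-Xmx` should stay well below the 4 GiB container limit, otherwise the JVM can hit the cgroup ceiling and be OOM-killed or spend its time in GC:

```
-vmargs
-Xms512m
-Xmx2048m
```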

Are there any noticeable differences between the previous VM and the current one?
e.g. hypervisor/cloud used (ESXi, AWS), mem/CPU allocation, disk allocation, filesystem type, OS installed, etc.
It may also be related to some IO issues.
If the previous VM is still available and reasonably similar to the new one, a good way to get a hint about host performance is to monitor the system load during the test run.
e.g. use something like `watch -n 1 uptime` and look for noticeable differences in the load values.
This metric reflects various characteristics of the host, including IO requests, which can be further inspected with other tools like top, iotop, etc.
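The check above can be sketched as a one-shot snapshot (run it on both VMs during a test run and compare; a 1-minute load persistently above the CPU count points at CPU or IO saturation):

```shell
# One-shot load snapshot: compare the 1/5/15-minute load averages against the
# number of CPUs. Run it periodically (or under `watch -n 1`) during the tests.
cpus=$(nproc)
read load1 load5 load15 _ < /proc/loadavg
echo "cpus=$cpus load1=$load1 load5=$load5 load15=$load15"
```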

ref: https://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html

Can you find out at which step of processing your test suite run takes so long?

If you could spot the step that takes a long time, then you would be able to look into the system in depth for the reason.

You should be able to read the log through in detail. Please analyse the log.
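As a starting point, a small script can turn log timestamps into per-step durations. The line format below is invented for illustration, not the real Katalon console-log format, so adapt the match patterns to whatever your log actually prints:

```shell
# Hypothetical example log -- replace with your real run log and adjust
# the START/END patterns to match Katalon's actual log lines.
log=/tmp/sample_run.log
cat > "$log" <<'EOF'
2024-01-01 10:00:00.000 START TC1
2024-01-01 10:00:30.000 END TC1
2024-01-01 10:00:31.000 START TC2
2024-01-01 10:02:31.000 END TC2
EOF

# Print the wall-clock duration of each step by pairing START/END timestamps.
awk '
  function secs(s,  t) { split(s, t, /[:.]/); return t[1]*3600 + t[2]*60 + t[3] }
  /START/ { start[$NF] = secs($2) }
  /END/   { printf "%s %ds\n", $NF, secs($2) - start[$NF] }
' "$log"
```

Whichever step dominates the extra hour is where to dig deeper.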

The other people in the forum cannot see the log of your test suite run. :sunglasses: