KRE crashes saying "There is not enough space on the disk", but the disk is not full


We run KRE on a Windows VM.
We periodically make sure that this VM has enough space.
We did it a couple of days ago: we have 14+ GB available on the (only) C:\ drive.

Our test base includes test scripts that explicitly create temporary files in the temporary directory:

// Copy the Chrome profile into a freshly created temporary directory
Path userDataDirectory = ChromeDriverUtils.getChromeUserDataDirectory()
Path tempUDataDirectory = Files.createTempDirectory("User Data")
Path tempProfileDirectory = tempUDataDirectory.resolve(profileName)
FileUtils.copyDirectory(profileDirectory.toFile(), tempProfileDirectory.toFile())

(Many thanks to @kazurayam !!)

Last night, it failed at this very place… with a nice stack trace explaining:

org.codehaus.groovy.runtime.InvokerInvocationException: There is not enough space on the disk
at (…)
Caused by: There is not enough space on the disk
at$ Source)
at kazurayam_inspired_code.ChromeDriverFactoryImpl.newChromeDriverWithProfile(ChromeDriverFactoryImpl.groovy:166)
at (…)

Again, this machine had 14+ GB disk space available.

Our hypothesis: we have hit a limit, not on the overall disk space, but on the space available to the Windows temporary directory (aka %TEMP%, aka C:\Users\userLogin\AppData\Local\Temp), which is where Files.createTempDirectory() operates: the JVM resolves it against the java.io.tmpdir system property, which on Windows normally points at %TEMP%.
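To investigate this kind of failure, it can help to print where the JVM's temp directory actually points and how much usable space its backing file store reports. A minimal Java sketch (the class name is ours):

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TempSpaceCheck {
    public static void main(String[] args) throws IOException {
        // Files.createTempDirectory() resolves against java.io.tmpdir,
        // which on Windows normally points at %TEMP%.
        Path tempRoot = Paths.get(System.getProperty("java.io.tmpdir"));
        System.out.println("Temp root: " + tempRoot);

        // Usable space as reported by the file store backing the temp directory.
        FileStore store = Files.getFileStore(tempRoot);
        System.out.println("Usable: " + store.getUsableSpace() / (1024 * 1024) + " MB");
    }
}
```

If this reports plenty of usable space while createTempDirectory() still fails, the bottleneck is presumably something other than raw bytes on the volume.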

Hence, the message should have been “There is not enough space in the temporary directory”.

Has anybody already hit such a specific limit? Where is it set? Is it related to the JVM, or is it Windows?

The solution is trivial: as part of our VM monitoring/hygiene, periodically clean out the %TEMP% directory.
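For the record, that cleanup step can itself be scripted. Here is a hedged Java sketch (class and method names are ours, not part of Katalon) that deletes top-level %TEMP% entries older than a given age, silently skipping anything still locked by a running process:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Comparator;
import java.util.stream.Stream;

public class TempCleaner {
    // Deletes top-level entries under tempRoot whose last-modified time is
    // more than maxAgeDays ago. Entries still in use fail to delete silently.
    static void cleanOldEntries(Path tempRoot, int maxAgeDays) throws IOException {
        Instant cutoff = Instant.now().minus(maxAgeDays, ChronoUnit.DAYS);
        try (Stream<Path> entries = Files.list(tempRoot)) {
            entries.filter(p -> isOlderThan(p, cutoff))
                   .forEach(TempCleaner::deleteRecursively);
        }
    }

    static boolean isOlderThan(Path p, Instant cutoff) {
        try {
            return Files.getLastModifiedTime(p).toInstant().isBefore(cutoff);
        } catch (IOException e) {
            return false; // unreadable entry: leave it alone
        }
    }

    static void deleteRecursively(Path root) {
        // Walk depth-first (children before parents) so directories are
        // empty by the time we try to delete them.
        try (Stream<Path> walk = Files.walk(root)) {
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try { Files.delete(p); } catch (IOException ignored) { /* in use */ }
            });
        } catch (IOException ignored) { }
    }
}
```

Run on a schedule (e.g. a weekly task) against the path from the java.io.tmpdir system property, this would keep stale test leftovers from accumulating between runs.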

– Michel

Windows has a limit on the number of files in a folder:

How many files do you have under C:\Users\userLogin\AppData\Local\Temp ?
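For reference, a quick way to get that count is a recursive walk. A Java sketch (the class name is ours; note that Files.walk can throw on locked or access-denied entries, so a pass over the real %TEMP% may need a more tolerant visitor):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class TempFileCount {
    // Counts regular files under root, recursively.
    static long countFiles(Path root) throws IOException {
        try (Stream<Path> walk = Files.walk(root)) {
            return walk.filter(Files::isRegularFile).count();
        }
    }

    public static void main(String[] args) throws IOException {
        Path temp = Paths.get(System.getProperty("java.io.tmpdir"));
        System.out.println(countFiles(temp) + " files under " + temp);
    }
}
```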


Thanks for the feedback.
Unfortunately, we had not noted the number of files at the moment of the incident.

A typical run of our entire test base leaves behind ~130 K files. So assuming that is multiplied by the number of runs since the last cleanup, we may have had up to a few hundred thousand files, but no more.

The FS is NTFS, so per the link you provided, “Maximum number of files in a single folder: 4,294,967,295”

It’s an AWS EC2 VM, maybe this adds some extra constraint?

Hopefully, we will never reproduce this weird incident.

Many cheers,

I hope so as well.