Sometimes some test steps are failing

Sometimes some test steps that were working fine earlier start failing, but when I record the script again it works fine. I am not able to understand what the issue is.

Hi @garimasingh431

It’s hard to diagnose from too little information. Please provide us with a concrete example and the log file (available under Help > Error Log) when you reproduce this issue.

Cheers!


When I have experienced this issue it is usually related to the object properties; possibly the XPath changed, etc. I would save the object you want to test, compare it to your current object, and check that the properties/attributes are the same.

It may be useful to search for the object via its text only, or some other attribute, if that is more reliable.
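As a rough illustration of a text-based lookup (using Python's standard-library `xml.etree` on a simplified, made-up HTML snippet rather than a live browser DOM), an element can be anchored on its visible text instead of its position:

```python
import xml.etree.ElementTree as ET

# A simplified stand-in for a page; the links here are illustrative.
page = ET.fromstring("""
<body>
  <div class="nav">
    <a href="/home">Home</a>
    <a href="/reports">Reports</a>
  </div>
</body>
""")

# Locate the link by its visible text instead of its position in the tree.
# (The [.='text'] predicate needs Python 3.7+.)
link = page.find(".//a[.='Reports']")
print(link.get("href"))  # /reports
```

The same idea applies in real XPath against a browser DOM, e.g. `//a[text()='Reports']`, which keeps working even if the link moves elsewhere on the page.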

It's what we call a flaky test. If I'm brutally honest, if you're only using the recorder, you are more than likely to run into this issue more than once. Now I'm not sure what kind of AUT (application under test) you have, but if it's anything like mine, it's constantly being updated and things are being changed.

Try learning a bit of XPath so you can find a solid way to locate each element on the page. Just using the recorder will do the job, but it will produce a brittle test that can, and most probably will, break after one change to the page. If you take the time to learn XPath or another way of locating an object, like CSS, it will massively pay off in the future.
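To make the brittleness concrete, here is a small sketch (Python's stdlib `xml.etree` again standing in for a real DOM; the markup and ids are made up) comparing a recorder-style positional path with an XPath anchored on a stable attribute:

```python
import xml.etree.ElementTree as ET

# Simplified "page" before a layout change.
before = ET.fromstring("""
<body>
  <div>
    <button id="save">Save</button>
  </div>
</body>
""")

# The same page after the devs wrap the button in an extra container.
after = ET.fromstring("""
<body>
  <div>
    <div class="toolbar">
      <button id="save">Save</button>
    </div>
  </div>
</body>
""")

positional = "./div/button"             # position-based, recorder-style path
by_attribute = ".//button[@id='save']"  # anchored on a stable attribute

print(before.find(positional) is not None)    # True
print(after.find(positional) is not None)     # False - layout change broke it
print(before.find(by_attribute) is not None)  # True
print(after.find(by_attribute) is not None)   # True - still matches
```

The positional path breaks the moment an extra wrapper appears, while the attribute-anchored query survives the restructuring, which is exactly the resilience a hand-written locator buys you.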

Good idea @hpulsford. What’s your experience with using the recorder and then choosing to search for that object via the saved XPath? Is the XPath likely to change if an object is updated?

So when I first started with Katalon I was solely using the recorder, and for the first build I had, the tests ran fine. As soon as there was a slight change to the page, the tests would break, especially if it was a location/format change. The majority of the time, you can get away with modifying the way it searches for the object by changing the attributes it looks for, such as text, etc.

Or you can leave the recorder alone and create your own objects, writing your own XPath to locate each element. For me, I much prefer writing my own XPath, as I am sure of how the element is going to be located and can adapt it to the changes likely to be made to my specific application.

There is also the option of using CSS to locate elements. I have not, as of yet, dabbled in the world of CSS as much, but I know that @Russ_Thomas is an avid user.

Now it totally depends on whether it is using an absolute or a relative path. If it is using the absolute path, then any change to the page structure will cause it to fail, as it won’t be able to locate the element.

Have a read of this - XPath in Selenium: How to Find & Write Text, Contains, OR, AND

Thank you, some really good info there that’ll help us for sure after the devs have made some changes. I’ll look into the links you’ve sent over. Appreciate the help.



Not a problem :slight_smile:

Actually, I am saying that, for example:

  • I record a test case and it works fine. Next, I run the same test case again and one step fails, say a click.
  • Then I run the same test case once more, and it works fine.

What happened that last time that caused the test case to fail?

Even at the same screen resolution, etc.? I am not sure why this would occur, sorry.

It may be down to a loading or timing issue. Please share your script and the entire exception log.
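The usual cure for load-timing flakiness is an explicit wait that polls for a condition instead of assuming the element is already there. Katalon and Selenium ship keywords for this (e.g. `WebUI.waitForElementClickable`, Selenium's `WebDriverWait`); the sketch below shows the underlying pattern in plain Python, with made-up names, rather than any real framework API:

```python
import time

def wait_for(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This mimics the explicit-wait pattern that fixes most load-timing
    flakiness: keep checking, succeed as soon as the condition holds,
    and fail loudly only after the deadline.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Toy usage: a "page" whose element only becomes ready after a short delay.
ready_at = time.monotonic() + 0.2
element = wait_for(lambda: "button" if time.monotonic() >= ready_at else None,
                   timeout=2.0, poll=0.05)
print(element)  # button
```

A fixed `sleep` before each step also "works" but wastes time on fast loads and still fails on slow ones; polling up to a deadline handles both.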

As others have stated, we need a concrete example of this, with your test code and errors. There are many common reasons for a test being flaky, but without more information it will be impossible to isolate one.

I have the same problem and I understand what you have to deal with. In my case, out of 10 executions of one test case, 6 are successful and 4 fail (all using the same browser). So I am always retouching the objects, and sometimes I need to change from XPath to CSS and vice versa, but basically the tests are really flaky, and this is a huge problem for me.

One day I left work having managed to build and run 20 test cases in all browsers. The next morning, without a new build of the feature and with no new code involved, 60% of the cases failed. It was really embarrassing, as the previous day I had announced that all tests ran fine, and the next day I had to present them.
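One stopgap for this kind of intermittent failure is an automatic retry around each flaky case. It is a band-aid, not a fix (the locators and waits still need repairing), but it keeps a run from going red on a one-off blip. A minimal sketch in Python; the decorator and names are illustrative, not a Katalon feature:

```python
import functools

def retry(times=3, allowed=(AssertionError,)):
    """Re-run a flaky test up to `times` attempts before giving up.

    Only the exception types in `allowed` trigger a retry; anything
    else propagates immediately.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last = None
            for _attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except allowed as exc:
                    last = exc
            raise last
        return wrapper
    return decorator

attempts = {"n": 0}

@retry(times=3)
def flaky_step():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise AssertionError("click failed")  # fails twice, then passes
    return "clicked"

print(flaky_step())   # clicked
print(attempts["n"])  # 3
```

If a case needs retries to pass consistently, that is itself a signal worth logging, so the underlying instability does not get silently hidden.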