[Extended] Ask Katalon Anything - Sep 6 to Sep 22, 2023 šŸ’¬

Hello @xuan.tran: Why is it taking so long to get the issue below resolved? This issue has now been open for more than 180 days.

Hi @xuan.tran,
I am using KRE 7.2.1 to execute test cases from one of the test suites in my Katalon Studio project on the same machine. Since each test case needs some data set up before execution, we spend a few minutes on data setup for each TC, and if a test case fails for any reason, the data setup has to be redone from scratch.
So when a TC in a suite fails, all subsequent TCs should be skipped, both so we can investigate the failure and to avoid corrupting the test data by running the remaining TCs unnecessarily. I also tried the maxFailedTests argument in KRE v8.5, but it does not help in my case.

On failure, I tried killing katalonc.exe, but even after doing so, it keeps launching the subsequent TCs in the suite in sequence. I want to manually terminate katalonc.exe when I observe that some TCs have failed.

How can I do this, or is there another way? Please help.

Question: Unexpected behavior I noticed in Katalon Studio. When using the methods WebUI.verifyElementNotPresent or WebUI.verifyElementPresent with the following syntax:

TestObject export_button = findTestObject('imging/action_export_button')
button_state = WebUI.verifyElementVisible(export_button)

I expect a returned boolean. If the element is not present, the test fails at this step, even though Katalon indicates that it will return a boolean based on whether the element is present or not.

The only way I was able to get this to work is by using Failure Handling, and it has to be the OPTIONAL option, not CONTINUE. Even when using OPTIONAL, I still see a warning-level event when the element is not visible: WebUI.verifyElementVisible(export_button, FailureHandling.OPTIONAL)

What I expected from these methods is a cleaner exit: return a boolean indicating whether the element is present or not, and let the step continue. I didn’t expect verifyElementPresent to be a hard assertion failure that stops the test. Is this the correct usage of these methods? I’ve seen other Selenium implementations with methods that check element presence and return a boolean, allowing the test steps to continue, versus an assertion method that hard-fails the test step.
Best,
Mike W.

Many TestRail customers are switching to alternative tools such as Testmo or Qase, because the product quality there is apparently much better and the tools are also much more modern. In particular, the migration from TestRail to Testmo is said to be relatively easy, because Testmo’s roots are in TestRail. Are further plugins planned to support integrating Katalon with these or other test management tools?

1 Like

Google has not put chromedriver version 116 on the chromedriver download endpoint, and the mismatch with the Chrome browser causes tests not to run, as the browser fails to open. What can we do to mitigate this issue?

Which Katalon Studio version are you on? Maybe an update to Katalon 8.6.6 helps?

Hi @albert.vu, I meant GitHub Copilot.

Hi @guy.mason

Thank you for reaching out. We appreciate your inquiry and value your interest in Katalon Cloud Studio. Please find below the information you requested:

  1. Feature parity between Katalon Cloud Studio and Katalon Recorder:
  • Katalon Cloud Studio offers a user-friendly test authoring capability with test steps presented in plain English, comprehensive keyword suggestions, and descriptions. This makes test creation and review easier for users.
  • Additionally, as a web-based tool located on TestOps, Cloud Studio offers convenient access from any web browser, allowing you to work on your tests anytime and anywhere.
  • You can freely execute Cloud Studio test cases with Katalon TestCloud and conveniently view the test reports on TestOps, providing an all-in-one platform experience.
  • The product is still in the development stage and requires further improvements to match the functionality of KR-KS4. We highly value the sincere feedback from our valued users, as it helps us make Cloud Studio significantly more useful and efficient for you.
  • While we have planned release timelines, they are subject to change based on new findings and user feedback. We assure you that we will keep our users informed of every new update.
  2. Compatibility of existing KR test scripts/suites with Cloud Studio: This is not available for now; we have it in our backlog to consider. We will inform KR users when we have a specific plan for it.

  3. Scripting support:
    At the moment, our Cloud Studio doesn’t support scripting, which means you won’t be able to apply your scripting knowledge.
    While we understand your interest in scripting, we cannot provide a definite confirmation regarding its introduction in the future. If scripting is considered at a later stage, it is important to be aware that there may be slight differences in the way you write tests due to the use of different frameworks.
    We appreciate your understanding in this matter, and we will make sure to keep you updated on any further developments regarding scripting capabilities in Cloud Studio.

  4. Pricing: The product is currently in the Beta phase, and during this period, it is completely free to use. We will provide further information about pricing as we near the general availability launch.

We appreciate your loyalty as a KR-KS4 user. Your feedback is important to us as we strive to make Cloud Studio even better. If you have any further questions, please feel free to ask. Thank you for choosing Katalon.

2 Likes

Hi Albert,

Thank you for the links. I’m going to check each one and will contact you again if I need any further assistance.

Cheers,

Chi Hung Le


1 Like

How do you get Visual Testing up and running? It seems that no matter how many times I walk through the documentation, I cannot get a baseline image set up for comparison, even though it exists in the baseline collection. One image just constantly says ā€œmissingā€. Missing what? When the first image is opened, it is on the baseline side with nothing on the other side to compare it to. Then for the second image, the baseline is empty even though there is a file in the baseline.

I am completely lost on this. I have gone through the documentation about 10 times now, uploading an image and then running a suite that captures an image as a checkpoint, but no change. Then I tried to just have it generate one for me, clicked pass and save to baseline, re-ran the suite and tried to compare, again with no luck. Is anyone using this successfully?

1 Like

Hi @atul.rai,
Apologies for the delayed response, and thank you for your questions. I will address them below.

Could you provide more specifics about the issues you’re experiencing with the Safari browser? Understanding your needs better will help us make the necessary improvements.

In relation to the issues with Windows 11, following a period of investigation, we have determined that the problem originates from our system. We are currently addressing the issue and I will get back to you once it has been resolved.

1 Like

Hi @Testinator-X,

Thank you for your valuable insights. Katalon plans to support integration with other test management tools; for example, the Xray integration was released by Katalon in 2023. However, the implementation timeline depends on the priority of each integration, and Katalon will make an official announcement when an integration is to be released in the near future.

2 Likes

Hi @Monty_Bagati,
Sorry for the late reply. I’m a bit confused. Regarding drag and drop, we have two WebUI keywords (see the sketch at the end of this reply).

I’ve had our dev check, and these two are still working fine. Can you provide more details on the scenario you are testing? If possible, can you also share the web page that you’re testing?
I’m looking forward to hearing more to see if I can help.
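
For reference, the two keywords in question are presumably WebUI.dragAndDropToObject and WebUI.dragAndDropByOffset; a minimal sketch with placeholder Object Repository paths:

    import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
    import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

    // Drag one element and drop it onto another element
    WebUI.dragAndDropToObject(findTestObject('Page/source_item'), findTestObject('Page/target_area'))

    // Drag an element by a pixel offset from its current position
    WebUI.dragAndDropByOffset(findTestObject('Page/source_item'), 100, 50)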

For WebServices (API), Katalon Studio hangs

2 Likes

Hi @digvijak,

We don’t have any option to manually terminate KRE in the middle of an execution. I guess the most suitable way for your scenario is to use the -maxFailedTests command-line option, which requires Studio version 8.1.0 or later. For more details on how to use this option, you can see this doc: Terminate execution conditionally | Katalon Docs.

For example, if you want to stop the test suite after 4 test failures, set -maxFailedTests=4. Once the number of failures reaches 4, the test suite is terminated: the execution ends and the remaining test cases do not run. So I was wondering why this command does not help in your case. Can you elaborate a bit more?
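
For reference, a minimal sketch of a KRE console invocation using this option (the project path, suite path, and API key are placeholders; the remaining flags are the usual console-mode options):

    katalonc -noSplash -runMode=console -projectPath="C:\Projects\YourProject\YourProject.prj" -testSuitePath="Test Suites/TS_Regression" -browserType="Chrome" -apiKey="<your-api-key>" -maxFailedTests=4

With this, once the fourth failure is recorded, KRE stops scheduling the rest of the suite.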

1 Like

Hi @mike.wardrop,

The reason is that when you use the WebUI.verifyElementVisible keyword, Studio first needs to make sure the element is present in the DOM before checking whether it is visible in the UI. If the element is not present in the DOM, the step is marked as failed. The keyword returns false only when the element is present but NOT visible. I guess we have to update our docs :two_hearts:
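
For anyone following along, here is a minimal sketch of the pattern Mike described, reusing the test object path from his post, so the keyword returns a boolean instead of failing the step:

    import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
    import com.kms.katalon.core.model.FailureHandling
    import com.kms.katalon.core.testobject.TestObject
    import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

    TestObject export_button = findTestObject('imging/action_export_button')

    // With FailureHandling.OPTIONAL, a non-visible (or absent) element logs a
    // warning instead of failing the step, and the boolean result drives the flow
    boolean buttonVisible = WebUI.verifyElementVisible(export_button, FailureHandling.OPTIONAL)

    if (buttonVisible) {
        WebUI.click(export_button)
    } else {
        WebUI.comment('Export button not visible; skipping the export step')
    }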

Hope this is clear to you :joy:

Hi folks, :wave:

Thank you very much for participating in our Ask Katalon Anything session! We hope you have been able to get the answers you need and also get to know our Product team members more.

We will be creating a new topic and organizing all the questions asked into their respective categories soon. In the meantime, you can tell us what you would like to see in our next AKA by replying to this thread.


If you are still waiting for answers to your questions, we suggest you create a new topic on our forum and tag the respective Product team member in your topic for better support.


Have a better one, :sunglasses:
Albert from Katalon Community team

2 Likes

Hi all, :wave:

Please find a recap of our Ask Katalon Anything session below, in which all of the questions have been grouped into their respective Katalon modules, i.e. TestOps, Studio, etc., for easier navigation :point_down:


Hi @josh.meeks, I noticed that your question has not been answered yet, perhaps it would be better if you could create a new topic with said question so that it can get more visibility from other members on our forum?

Thanks!

1 Like

Hi Josh,

Visual Testing compares a checkpoint image with a baseline image, both captured by the automated script, based on the following criteria:

  1. Image name
  2. Image resolution

After the comparison, one of the following results is returned:

  1. Match: the checkpoint and the baseline match
  2. Mismatch: the checkpoint and the baseline have differences
  3. New: no baseline with the same name and resolution as the checkpoint was found to compare against
  4. Missing: no checkpoint with the same name and resolution as the baseline was found to compare against

Here is an example of how Visual Testing basically operates:

  1. Run an automated test to capture an image
  2. Within TestOps, the image is auto-set as a baseline image in a baseline collection
  3. Run the same automated test (from step 1), assuming the test script’s content has not changed
  4. On this second attempt, the captured image is treated as a checkpoint and is compared with the baseline image (from step 2)
  5. Assume that the comparison result is ā€œMismatchā€, meaning the checkpoint differs from the baseline
  6. After reviewing, if you accept the differences in the checkpoint, click ā€œMark as Passedā€ and then ā€œSave to baselineā€. This updates the baseline with the new checkpoint, and the version of the baseline collection is incremented

Regarding your concern about the ā€œMissingā€ result, it is likely due to a difference in resolution between the checkpoint and the baseline, even though their names are the same.
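
If it helps, here is a minimal sketch of a capture step that pins the viewport size so every run produces a checkpoint with the same resolution as the baseline (the URL and checkpoint name are placeholders, and the checkpoint keyword's availability depends on your Studio version):

    import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

    WebUI.openBrowser('https://example.com')

    // Pin the viewport: a resolution mismatch between checkpoint and baseline
    // produces the ā€œMissingā€/ā€œNewā€ results instead of a comparison
    WebUI.setViewPortSize(1366, 768)

    // Capture the screenshot as a Visual Testing checkpoint (placeholder name)
    WebUI.takeScreenshotAsCheckpoint('export_page_checkpoint')

    WebUI.closeBrowser()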

1 Like