Thank you for the feature requests; they are very helpful. We will add them to our backlog.
Regarding your question, you can pass a list of objects to be redacted, e.g. Mobile.takeScreenshotAsCheckpoint('checkpoint_name', [findTestObject('Application/android.widget.TextView - App')]). Note that this feature currently does not work as expected on some Android models.
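For example, a minimal sketch of what that call could look like in a mobile test script (the repository paths are placeholders, and the second object is purely hypothetical):

```groovy
import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
import com.kms.katalon.core.mobile.keyword.MobileBuiltInKeywords as Mobile

// Take a checkpoint screenshot while redacting the listed test objects.
// The repository paths below are placeholders; point them at elements
// that contain sensitive or volatile content in your own project.
Mobile.takeScreenshotAsCheckpoint('checkpoint_name', [
    findTestObject('Application/android.widget.TextView - App'),
    findTestObject('Application/android.widget.TextView - Username') // hypothetical second object
])
```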
In the future, we plan to allow specifying redacted regions in the Katalon TestOps UI (perhaps by drawing a rectangular region on the baseline images).
The reason I use the same checkpointId twice is that there are two pathways of user actions that lead to this screen. I want to make sure the user sees the PDF view (blob recap page) through path A, and that the same PDF view opens when going through path B. I could create a blobVisualRecapPage-1 and a blobVisualRecapPage-2, but both would be exactly the same. For maintenance purposes I'd prefer to be able to use the same checkpoint twice; otherwise, when the screen changes, I'll have to update two baselines instead of one.
If you feel my use case is low priority on your backlog, I'll go with creating two checkpoint IDs (luckily I don't have 47 pathways).
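Roughly what I mean, as a sketch (the URLs and navigation steps are made up; the point is the single checkpoint name shared by both pathways):

```groovy
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

WebUI.openBrowser('')

// Pathway A: reach the PDF recap page via route A (placeholder steps).
WebUI.navigateToUrl('https://example.com/path-a')
// ... clicks that follow pathway A ...
WebUI.takeFullPageScreenshotAsCheckpoint('blobVisualRecapPage')

// Pathway B: reach the same PDF recap page via route B (placeholder steps).
WebUI.navigateToUrl('https://example.com/path-b')
// ... clicks that follow pathway B ...
WebUI.takeFullPageScreenshotAsCheckpoint('blobVisualRecapPage') // same name, so only one baseline to maintain

WebUI.closeBrowser()
```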
Great news on the intention to integrate visual validation results into the executions!
Is there a limit to how many screenshots can be processed for an execution?
I have a test case that generates full-page screenshots for 80 URLs. I established the baseline images in stages. When I ran the test with all 80 URLs against the baselines, TestOps looked like it was still processing the results for over 24 hours before it seemed to simply give up.
Is that just too many images to process at once or should it work?
The images are full-page screenshots of long scrolling pages, and there are 80 of them. The test simply goes to each page and takes the full-page screenshot as a checkpoint.
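For reference, the test is essentially a loop like this (a sketch; the real test iterates over all 80 URLs, and the URLs below are placeholders):

```groovy
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

// Placeholder list; the real test covers 80 URLs.
List<String> urls = [
    'https://example.com/page-1',
    'https://example.com/page-2'
    // ... 78 more ...
]

WebUI.openBrowser('')
urls.each { String url ->
    WebUI.navigateToUrl(url)
    // Derive the checkpoint name from the URL so each page keeps its own baseline.
    String checkpointName = url.tokenize('/').last()
    WebUI.takeFullPageScreenshotAsCheckpoint(checkpointName)
}
WebUI.closeBrowser()
```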
Yes. It seems to simply take a very long time to process. My last run was 8 days ago and it's still showing no results.
But I also note that although the test run interface, as shown in the screenshots above, listed the expected number of runs (80), the visual test results show 142 unmatched images.
Rather than matching the images from this run against the baseline, the results are saying that all of the baselines are missing and all of the screenshots from the run are new.
If you look at the results here: Katalon TestOps
The first image should match against one on the 6th page of results (about #66 I think), but instead the baseline says there was no checkpoint for it and the new test's checkpoint image says there's no baseline for it:
A baseline's uniqueness is determined by the combination of filename and image resolution. In your case, the two checkpoints have the same filename but different resolutions (2400x5036 vs 992x2550), so the checkpoint and baseline do not match.
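If it helps, one way to keep the resolution stable between runs is to pin the viewport size at the start of the test. A sketch below, assuming your Katalon version includes WebUI.setViewPortSize; the URL, checkpoint name, and 1200x800 size are arbitrary examples, and note that the height of a full-page screenshot still depends on the page content:

```groovy
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

WebUI.openBrowser('')
// Pin the viewport so every run captures at the same width as the baseline.
WebUI.setViewPortSize(1200, 800)
WebUI.navigateToUrl('https://example.com/page-1') // placeholder URL
WebUI.takeFullPageScreenshotAsCheckpoint('page-1') // placeholder checkpoint name
WebUI.closeBrowser()
```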
Thank you. I hadn't realized that. Of course it makes sense that the resolution matters.
Is there any issue with leaving that test run as is, without going through and marking them all as failed? Does that affect subsequent processing at all?
I don't know what monitor I used for the February 2nd run, which is still not processed. Is it worth running the test again on the baseline monitor to see how it does? Or should I wait at least until the Feb 2 run is complete?
I checked in the database, and this execution doesn't have any report file. There may have been an issue when running this execution and submitting the report files to TestOps, so the TestOps processing doesn't return anything.
Excuse me, I have a problem with the WebUI API 'takeAreaScreenshotAsCheckpoint()'.
I have already looked at the official documentation, but it still fails… (takeFullPageScreenshotAsCheckpoint() succeeds.)
Hi. I was wondering: would it be possible to allow some kind of tolerance level by which the screenshot may deviate from the baseline?
Use case 1: I do a visual check on a page which has an external map component (e.g. a Google Maps variant). This map slightly changes over time; nothing big, it's almost undetectable to the naked eye, but the Katalon eye spots the difference and fails the visual test.
Use case 2: In some cases I noticed there is a slight difference between browsers in how they render an image.
So it would be nice to be able to pass a 'tolerance' argument to the visual check commands, which then defines the 'sameness'. Example: 1.0 = perfect match and 0.0 = totally different, with the comparison offset by this tolerance parameter.
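To illustrate the semantics I have in mind (purely hypothetical, not an existing Katalon API): the comparison would compute a similarity score between 0.0 and 1.0 and pass whenever it is at or above the given tolerance, for example:

```groovy
import java.awt.image.BufferedImage
import javax.imageio.ImageIO

// Hypothetical illustration of the proposed semantics: similarity is the
// fraction of pixels that match exactly (1.0 = identical, 0.0 = nothing matches).
double similarity(File baselineFile, File checkpointFile) {
    BufferedImage baseline = ImageIO.read(baselineFile)
    BufferedImage checkpoint = ImageIO.read(checkpointFile)
    if (baseline.width != checkpoint.width || baseline.height != checkpoint.height) {
        return 0.0d // different resolutions count as totally different
    }
    long matching = 0
    long total = (long) baseline.width * baseline.height
    for (int x = 0; x < baseline.width; x++) {
        for (int y = 0; y < baseline.height; y++) {
            if (baseline.getRGB(x, y) == checkpoint.getRGB(x, y)) {
                matching++
            }
        }
    }
    return matching / (double) total
}

// With a tolerance of 0.99, a drifting map tile that changes up to 1% of the
// pixels would still pass, while larger layout changes would fail.
boolean passes = similarity(new File('baseline.png'), new File('checkpoint.png')) >= 0.99d
```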
Some more feature requests for Visual Testing with TestOps (if this post is not the right place, let me know):
A parameter that allows us to say whether the image is mandatory or not.
Use Case: Perhaps there is a condition to check against image A or B. Currently, if no image is taken for, e.g., A, it will show up in TestOps as 'Missing', failing that test run.
A versioning parameter.
Use Case: To allow test runs against different environments. Example: an image might be there in release 1.0.0 on TST. All our tests on TST turn green, so we deploy it to our ACC environment and make sure we run our tests on TST and ACC daily in the CI pipeline. Now, with a new iteration, a new development/feature has been introduced on TST (v1.1.0) which makes our image (slightly) different. We update our baseline to match the current version on TST, but as a result our stable tests that run against ACC suddenly have a visual mismatch, because our baseline now matches v1.1.0 instead of v1.0.0.
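To make the versioning idea concrete, a purely hypothetical sketch of what we have to do today versus what the parameter could enable (the environment variable, checkpoint names, and the commented-out call are all made up, not existing API):

```groovy
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

// Today the checkpoint name is the only key, so keeping TST (v1.1.0) and
// ACC (v1.0.0) baselines apart means baking the environment into the name,
// which doubles the number of baselines to maintain.
String environment = System.getenv('TEST_ENV') ?: 'TST' // assumed environment variable
WebUI.takeFullPageScreenshotAsCheckpoint('homePage_' + environment)

// The requested versioning parameter would instead let one checkpoint name
// map to a per-version baseline, e.g. (not an existing API):
// WebUI.takeFullPageScreenshotAsCheckpoint('homePage', [version: '1.1.0'])
```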
I think I understand the base-lining workflow now - thank you.
Do you have any news on when this solution will work for mobile applications? I tried the 'Take Screenshot' mobile keyword, but just got errors (sorry, I can't recall the specifics, but I presumed it's because it's still under development)?
Let me know if you want me to provide more error details if this isn't something you're expecting.