Looking for tools to measure code coverage

Can anyone recommend a tool that will measure the code coverage of my Katalon tests? Can a tool like Clover be used with Katalon?

Appreciate any suggestions.
Thanks!
D


Code coverage makes sense with unit testing.
Katalon is designed for functional testing, i.e. 'black-box testing', and tries to stay abstracted from the AUT code.

I'm looking at it from the QA side of things. We use Katalon extensively to test our web application - all user-scenario-based test cases. The executive team always questions the code coverage of our tests… As a manager, I would love to be able to provide that for them, let alone have a way to truly evaluate our automated test suite(s), not so much the application.


Well… the executive team is just that, executive; they use concepts that make no sense from a functional-testing point of view :slight_smile:

Coverage, with functional testing, is given by how many test scenarios you have for a certain feature/module: positive workflows (happy path), negative scenarios, boundary tests, data-driven tests and so on.
Also by how many suite types: new feature/module tests, smoke, regression, integration tests, end-to-end, etc.

Code coverage is the measurement of how many lines of code the unit tests cover (e.g. how many methods are exercised by the unit test code). The tests that Katalon executes are E2E tests that exercise the system itself (the black box), not the code. So there is no way to calculate the coverage percentage in the traditional way.
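For illustration, here is a minimal, hypothetical sketch of what that traditional measurement means: a plain JUnit test against a made-up `PriceCalculator` class. A coverage tool such as JaCoCo or Clover instruments the class and reports which of its lines the test actually executes; none of these names come from Katalon itself.

```groovy
import org.junit.Test
import static org.junit.Assert.assertEquals

// Hypothetical application class, shown only to illustrate line/branch coverage.
class PriceCalculator {
    int applyDiscount(int price, int percent) {
        if (percent <= 0) {
            return price                                 // only covered by a "no discount" test
        }
        return price - (price * percent).intdiv(100)     // covered by the test below
    }
}

class PriceCalculatorTest {
    @Test
    void discountIsApplied() {
        assertEquals(90, new PriceCalculator().applyDiscount(100, 10))
    }
    // With only this test, a coverage tool would flag the "percent <= 0" branch
    // as uncovered -- that missing percentage is what "code coverage" measures.
}
```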

But in the worst case, if the executive team insists on having a coverage number, you can define a new way to calculate coverage: by how many modules have been tested by your automation suites (a rough sketch follows below).
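Something like this, for example (a rough Groovy sketch; the module names and the suite-to-module mapping are made up, you would maintain the real ones yourself):

```groovy
// Rough sketch: "coverage" measured as modules exercised by your automation suites.
def allModules = ['Login', 'Search', 'Checkout', 'Reporting', 'Admin']
def modulesBySuite = [
    'Smoke'     : ['Login', 'Search'],
    'Regression': ['Login', 'Search', 'Checkout'],
]

def covered = modulesBySuite.values().flatten().unique()
def percent = covered.size() * 100 / allModules.size()

println "Modules covered: ${covered.join(', ')}"
println "Module coverage: ${percent}%"
```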


How about coverage from Katalon itself, to make sure that when we run the full regression we don't have dormant test code that is not being executed… This would help in cleaning up the test project.

@nhussein

I think the test report generated after one execution shows the total number of executed test cases; compare this with the number of existing ones (a rough sketch of that comparison follows below).
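A minimal Groovy sketch of the comparison, assuming you export the names of the executed test cases into a text file (one name per line; the file name is an assumption) and that test cases live under the project's standard `Test Cases` folder:

```groovy
import groovy.io.FileType

// Names of test cases executed in the last run, exported one per line (assumed file).
def executed = new File('executed-testcases.txt').readLines()*.trim().findAll { it } as Set

// Every test case defined in the project (Katalon stores them as .tc files).
def existing = [] as Set
new File('Test Cases').eachFileRecurse(FileType.FILES) { f ->
    if (f.name.endsWith('.tc')) {
        existing << (f.name - '.tc')
    }
}

def dormant = existing - executed
println "Existing: ${existing.size()}, executed: ${executed.size()}"
println "Never executed in this run: ${dormant.join(', ')}"
```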

On the other hand, even with unit testing, there is no way to tell whether some code is dormant or not.
The coverage report will only tell you how many lines of code are covered by the written tests, not how many lines of code are actually used in the running app (e.g. you can have classes in certain packages whose unit tests give 100% coverage… but those classes are never imported in the main code).

Detecting dormant code is usually a job for linting, but that depends on many factors: the app language, the linter in use, whether it detects unused classes or only imported-but-unused ones, and so on.
I don't think Katalon has such a feature.

Later edit: in fact I faced a similar problem (for Python, but it doesn't matter, the approach should work for Groovy/Java too): how can I tell whether a certain class is in use by some test scripts?
I simply used grep to search for the class name in all scripts; if it shows nothing, then it is safe to mark the class as deprecated or remove it. But if the class is in use, this won't tell you how many methods of the given class are actually used in the scripts (see the sketch after this paragraph).
You could expand the approach to collect all defined method names and search for them too, but that may not be accurate (you can have more than one class with methods sharing the same name, so you also have to detect how the class is imported).
It also won't tell you whether the package containing the class is used in other projects… so take care with the workflow of your entire team (e.g. some people may use shared packages for custom keywords).
And again, the class may be imported but no method belonging to it used… here it is just a matter of good practice when designing test cases. Regular use of Ctrl+Shift+O will get rid of unused imports (for Python we run flake8 in a Bamboo job; if an import is unused, the linter will "scream" and mark the verification build as failed when the code is pushed to the repository, so the PR cannot be merged until the issue is fixed).
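For reference, the same grep-style check written as a Groovy sketch, so it can run straight from a Katalon/Groovy project; the class name and the `Scripts` folder are just examples:

```groovy
import groovy.io.FileType

// Search every test script for references to a class we suspect is unused.
def className  = 'MyCustomKeywords'   // example class name, replace with your own
def scriptsDir = new File('Scripts')  // Katalon keeps generated test scripts under "Scripts"
def hits = []

scriptsDir.eachFileRecurse(FileType.FILES) { file ->
    if (file.name.endsWith('.groovy') && file.text.contains(className)) {
        hits << file.path
    }
}

if (hits) {
    println "${className} is referenced in:"
    hits.each { println "  $it" }
} else {
    println "No script references ${className}; candidate for deprecation/removal."
}
```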

So, read the topic again from the beginning and find another way to define coverage that better suits your needs… or brainstorm with your bosses and the dev/devops teams to look for alternative ways to ensure the code is clean.

@anon46315158 what I meant is having something like this:
https://www.jetbrains.com/help/idea/running-test-with-coverage.html

@nhussein

As already explained, this is not applicable to Katalon, only to unit tests (tests written in JUnit, TestNG, etc., which are 'glued' to the application code).
See: https://www.jetbrains.com/help/idea/testing.html

And here is an intro to the terms: https://dev.to/chrisvasqm/introduction-to-unit-testing-with-java-2544

Katalon is a tool for black-box testing (end-to-end).

If you wish, you can write unit tests for your Katalon project if you have a lot of custom keyword classes… I think there are a few discussions on the forum about 'how to use JUnit with Katalon'… but what would be the relevance of such a report? It would only double the work of the QA team with no added value for the AUT.

Functional tests should be kept as simple as possible. The term 'coverage' as in 'code coverage' is not applicable here; the main indicator for this type of testing is how many test scenarios you have for a single functionality (e.g. a 'Login' page, or an API endpoint): positive tests, negative tests, boundary tests if applicable, and so on…

If you are looking for API coverage, here is one tool that could suffice:
https://support.smartbear.com/readyapi/docs/testing/coverage/index.html

Is there a Katalon equivalent?

I'm looking for a similar solution. The main problem is that I need to test simple things like displaying a menu item with a unit test, but the end-to-end test in the catalogue can use the menu item if it is already displayed (if it hasn't appeared, the end-to-end test will break by default). So the unit test is redundant in this case, if the UI or end-to-end test alone guarantees that that part of the code (the happy path) is executed correctly. And that's exactly what it would be good for: being able to determine how much of the code base the end-to-end tests used. A coverage report would tell you exactly that.

I'm actually looking for a solution to the above problem, not an argument about what to call what.

I have no idea what you mean by this. Is it a feature of Katalon Studio?

An 'end-to-end' test (I assume you mean via the UI) won't tell you the percentage of code covered. You could look at 'feature coverage' or 'requirements coverage' as a metric, but unit tests are different - they are able to measure coverage because they use white-box techniques. Katalon is a black-box solution (at least on the UI side, I guess). Those are my thoughts anyway.

Pretty much mine too.

Not true.
The purpose of unit tests is to detect failures at an early stage of development and to ensure the developer does not break the code later when implementing new features, classes, etc.
Running unit tests is usually done at code deployment (e.g. when a PR is created) and should be fast (an entire run should take less than 15 minutes) so as not to delay the pipeline.
End-to-end tests come later in the pipeline and are meant to verify different things: that the deployment was successful, that the features work fine in various browsers, integration tests, etc.
So pairing end-to-end tests with the code usually makes no sense.
You can create, let's say, a 'feature catalogue' based on the code or the development stories; however, the scope of the end-to-end tests may be wider than the scope of unit testing (you can have the happy flow, but also negative scenarios, boundary tests and so on) to reach more portions of the code under the hood.
More appropriate for this is to use various test-tracking tools (Katalon TestOps, Jira plugins, TestRail) and create a draft test suite with all the features subject to test.
When you start a session, you create a test run based on the draft suite.
Mark the tests not yet implemented as 'skipped' or 'not executed' or whatever the tool offers, and you will have an overall picture of what is tested and what is not.