Is there a way to label executed test suites?

We run a collection of test suites every day and we want to tie them back to a sprint version. Is there any way to change the ID or add a label?

Thank you!

First thing that pops into my head…

There is a description field, but modifying/reading it at runtime… I'm not sure the APIs exist to do that. You would probably have to use a listener before and after to do it. It is possible; I do something similar in my test cases (I store JSON data in the TC description field). Non-trivial though.

Thank you! We will take a look at this and see if it works for us.

Wait and see if others have better ideas.

why do you want to do that from the application running the test?
in my understanding, this is a matter of collecting history.
i suppose you don't run the tests manually, being a repetitive task, but from a certain CI tool.
the way i did it: in the jenkins job, after the task is executed, i collect the status/exit code, and based on it send emails with a certain message, grab the reports and upload them to an internal apache server etc.

We currently execute our test suites manually. We tried using Jenkins but we couldn't get it to work consistently, and the person working on that was assigned to a different project.

proposal:
let's focus on doing it properly.
get your chosen CI alive (jenkins, travis, whatever), run the tests, grab the reports with the name you like.

who/what is blocking you?

don't get me wrong, but from my experience, it will take less effort to learn a new tool and use it to glue together some pieces in the pipeline than to modify a tool to do everything your boss was dreaming of last night.

so … let's sit together and have a small brainstorming session:
what are the requests, resources, goals … and limitations?

Moving to Integrations.

Well, there is a longer story. Basically we have been writing manual test cases for a long time across all of our products. We switched to automating with Katalon a little over a year ago and started by automating test cases that were already created.

My team is now evaluating the process of why we write manual and automated tests for the same test case. Test cases in TFS have areas that you can query and create KPI reports (for managers). I understand that Katalon Analytics has reporting but they would like to have something connected to one location (I actually created a different thread for this). They also were interested in seeing if there was a way to display an iteration number in Katalon Analytics, which is why I am asking about a label.

well … first you will need a certain machine to play with, where to collect the reports.
preferably a linux one with docker installed; it can be any distribution of your choice (centos, ubuntu, debian, whatever you like)

one simple setup is to just run an apache server.
it is not very complicated to set up; because i am lazy, i just used a docker image

see: https://hub.docker.com/_/httpd

i used the alpine one because of its small size

having this, you can run your tests via a certain script, at the end grab the generated report, rename it the way you like and upload it to the apache folder. it will be a bit tricky to keep an 'index' of them, but a simple solution is to add the current date to the report name.
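a minimal sketch of that script flow, assuming placeholder paths and a fake "test run" step (substitute your real Katalon invocation and your apache document root):

```shell
#!/bin/sh
# sketch of: run tests -> grab report -> rename with date -> publish.
# REPORT_DIR and PUBLISH_DIR are made-up paths for illustration.
REPORT_DIR="./Reports"          # where the runner drops the generated report
PUBLISH_DIR="./htdocs/reports"  # folder served by the apache container
mkdir -p "$REPORT_DIR" "$PUBLISH_DIR"

# stand-in for the real test run that would generate the report
echo "<html>report</html>" > "$REPORT_DIR/report.html"

# keep a crude 'index' by stamping the run date into the file name
STAMP=$(date +%Y-%m-%d_%H%M%S)
cp "$REPORT_DIR/report.html" "$PUBLISH_DIR/report_${STAMP}.html"
ls "$PUBLISH_DIR"
```

listing the publish folder then gives one date-stamped file per run, which is the whole "index" this approach buys you.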

a better option would be to use jenkins. running jenkins is not very complicated; again, docker to the rescue.
The doc is pretty self-explanatory on how to set it up:

for a simple setup you don’t need the

-p 50000:50000

option; that is required only when using remote executors, so the command to start jenkins will be something like:

docker run -p 8080:8080 --name jenkins --rm -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts

having jenkins set up, you now have two options: publish the generated reports to your apache server (it can be hosted on the same machine as jenkins). the benefit compared with running the tests via a plain script is that you will also have the BUILD_NUMBER available, in addition to the date of the run, through the jenkins environment variables.
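a small sketch of using that in an "Execute shell" build step; BUILD_NUMBER is injected by Jenkins itself, and the fallback value here is only so the snippet runs standalone (the name format is made up):

```shell
#!/bin/sh
# inside a Jenkins build step BUILD_NUMBER is already set;
# the :-42 fallback is just so this sketch works outside Jenkins
BUILD_NUMBER="${BUILD_NUMBER:-42}"
STAMP=$(date +%Y-%m-%d)
REPORT_NAME="report_build${BUILD_NUMBER}_${STAMP}.html"
echo "$REPORT_NAME"
```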

Also, instead of using an apache server, you can publish the generated reports using the HTML Publisher plugin, see:

https://wiki.jenkins.io/display/JENKINS/HTML+Publisher+Plugin

That's one way to do it. I was using jenkins just because it was already available for our team, but you can use any other CI tool; there are plenty available.

Other dirty options:
I was looking a bit into the html code of the generated report.
Looks like the title (and some other properties) of the report are generated by the following snippet:

<body>
------
<div id="header"></div>
<div id="statistics-container"></div>

<script type="text/javascript">
$(document).ready(function() {
    try {
        var topsuite = window.testdata.suite();
    } catch (error) {
        addJavaScriptDisabledWarning(error);
        return;
    }
    initLayout(topsuite.name, 'Log');
    //addStatistics();
    addErrors();
    addExecutionEnvironmentInfo(window.environment_info);
    addTestExecutionLog(topsuite);
    addLogLevelSelector(window.settings['minLevel'], window.settings['defaultLevel']);
    if (window.location.hash) {
        makeElementVisible(window.location.hash.substring(1));
    } else {
        expandSuite(topsuite);
    }

    if ($("#s1").attr('class') == 'element-header closed') {
        toggleSuite('s1');
    }
    $("#s1").find("div[id*='s1-t']").each(function() {
        toggleTest(this.id);
    });
});
</script>
the topsuite var is built using the

window.testdata.suite()

function which returns something like:

and later some other strings are appended to it.

the window object seems to be populated by the following script:

<script type="text/javascript">
window.output["strings"] = window.output["strings"].concat([
"*","*New Test Suite","*","*","*Test Cases/SampleTestCase FAILED because (of) Unable to verify response status code (Root cause: Expected status code is '200' but actual status code is '301')","*Test Cases/SampleTestCase","*Test Cases/SampleTestCase FAILED because (of) Unable to verify response status code (Root cause: Expected status code is '200' but actual status code is '301')","*Statement - requestObj = \&quot;New Request\&quot;","*define the test object","*Statement - response = com.kms.katalon.core.webservice.keyword.WSBuiltInKeywords.sendRequest(com.kms.katalon.core.testobject.ObjectRepository.findTestObject($requestObj))","*","*Send request successfully","*verifyResponseStatusCode","*","*Expected status code is '200' but actual status code is '301'","*Unable to verify response status code (Root cause: Expected status code is '200' but actual status code is '301')"]);
</script>

So, I changed the script's code a bit to look like this:

<script type="text/javascript">
window.output["strings"] = window.output["strings"].concat([
"*","*My Custom Title for Suite:","*","*","*Test Cases/SampleTestCase FAILED because (of) Unable to verify response status code (Root cause: Expected status code is '200' but actual status code is '301')","*Test Cases/SampleTestCase","*Test Cases/SampleTestCase FAILED because (of) Unable to verify response status code (Root cause: Expected status code is '200' but actual status code is '301')","*Statement - requestObj = \&quot;New Request\&quot;","*define the test object","*Statement - response = com.kms.katalon.core.webservice.keyword.WSBuiltInKeywords.sendRequest(com.kms.katalon.core.testobject.ObjectRepository.findTestObject($requestObj))","*","*Send request successfully","*verifyResponseStatusCode","*","*Expected status code is '200' but actual status code is '301'","*Unable to verify response status code (Root cause: Expected status code is '200' but actual status code is '301')"]);
</script>

The rendered report now looks like:

So … after the test is executed and the report generated, you can modify it using a certain parsing method and get a custom one.
How to do that … that's another story; feel free to use your imagination.
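one possible shape for that post-processing step, as a hedged sketch: the file name, the "Sprint 12" label and the heredoc stand-in for a real report are all made up here; only the `"*New Test Suite"` entry comes from the snippet above.

```shell
#!/bin/sh
# create a tiny stand-in for the generated report (a real one would be
# the full html file containing the window.output["strings"] array)
cat > report.html <<'EOF'
window.output["strings"] = window.output["strings"].concat(["*","*New Test Suite"]);
EOF

# swap the "*<title>" entry for a sprint-labelled one, in place
sed -i 's/"\*New Test Suite"/"\*Sprint 12 - New Test Suite"/' report.html
cat report.html
```

running this against the real report file (instead of the heredoc stand-in) is all the "certain parsing method" needs to be for a simple title swap.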


Nice @Ibus. However, you should be aware that the reporting system (and probably therefore the templates used) are being rewritten.

In my view, if you need an alternative reporting system, you would do better to write and rely on your own – or wait for the devs to implement something like mine:

You can read about it here (and @devalex88's response immediately after)…

@Russ_Thomas yeah … i know some stuff will change, so i just did a quick dirty hack in the final html … because i had nothing else to do (or was not in the mood to do the tasks assigned to me by my wife)
mostly, my intention was to show a possible workflow, using developer tools to hack into the result.
once the new reporting system sees the light, if time affords it, i will hack deeper, straight into the templates :slight_smile: unless the team surprises us and enables some feature to customize them

personally, i am not using katalon anymore at work … because i changed my job, so now i am on the python side.
but i still play with katalon in my free time, just for fun