How to create a data-driven Test Suite

I’ve created a data-driven test case, but it iterates through every person in my data file before moving on to the next test case. I need the entire test suite to cycle instead. Here is my scenario.

I need to log in to the site.
I need to “impersonate” a user so that I have all their data and information.
While impersonating that user, I want to execute multiple test cases (these are separate files).

I have a test case to Login. We always log in as the same user.
I have a test case to Impersonate a user (this would be analogous to a login).
I have additional test cases I want to run such as validating links, reading sales data, populating forms.

My Impersonate User test is set up with a CSV, but that test runs dozens of times, once for each user in my CSV file, before moving on to the next test.

So, it runs Test 1, then Test 2 (15 times), then runs Test 3.

The way I want it to run is:
Test 1, Test 2 (with a specific user), Test 3
Test 1, Test 2 (next user in CSV file), Test 3
Test 1, Test 2 (next user), Test 3.
I need the entire test suite to loop, once for every user in the CSV file.

How do I create a test suite that runs multiple times, once for each person I have in a CSV file, rather than running that single test for each user in the CSV file, before moving on to the next one?



Your testing flow is nice and I personally agree with it. Unfortunately, data-driven execution is not implemented at the TestSuite scope, which would be needed to support your scenario. I will move this request to New Feature/Suggestion so it can be voted on.

For your approach, there is currently a workaround. Just in case you haven’t got the idea: you could add Test 1, Test 2, and Test 3 into a SINGLE TestCase and then bind the CSV file to that TestCase. It might help in the meantime.
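To illustrate that workaround, the combined TestCase could be a thin wrapper that calls the three existing test cases in order, once per data row. This is only a sketch — the test case paths are placeholders, and `impersonateUser` stands for whatever TestCase variable you bind to the CSV column at the TestSuite level:

```groovy
import static com.kms.katalon.core.testcase.TestCaseFactory.findTestCase
import com.kms.katalon.core.model.FailureHandling as FailureHandling
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

// The TestSuite re-runs this whole script once per CSV row,
// so each row produces one full Test 1 -> Test 2 -> Test 3 pass.
WebUI.callTestCase(findTestCase('Test Cases/Login'), [:], FailureHandling.STOP_ON_FAILURE)
WebUI.callTestCase(findTestCase('Test Cases/Impersonate User'),
    ['userName' : impersonateUser], FailureHandling.STOP_ON_FAILURE)
WebUI.callTestCase(findTestCase('Test Cases/Validate Links'), [:], FailureHandling.CONTINUE_ON_FAILURE)
```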


There are some other things to consider:
- A TestSuite is a collection of TestCases which, as a good practice, should be independent of each other.
- If you execute a TestSuite with a data file, what would you expect in the report at the management level? If a TestCase fails at a specific row of test data, should the whole TestSuite be considered failed?


Hi Trong, thanks for the information!
I was thinking along the same lines as you mentioned: a Test Case that calls other test cases rather than a Test Suite.

It was easy to implement and works quite well.
Within my “Impersonation” test, after I get the user from the file, I call a dozen Test Cases using:
WebUI.callTestCase(findTestCase('Test Cases/Test Case Location/Test Case Name'), [:], FailureHandling.CONTINUE_ON_FAILURE)

It now iterates through the entire Test Suite per user, then circles back and does the test suite over again using the next user.

It works quite well for now.


Hi Peter, how do you handle the data sheet for each individual user? You cannot differentiate the data for each user, right?

Hi Peter,

I guess I have a similar problem to yours, but with a small difference:

I have 2 TestCases:
1. Log in to a webpage
2. Select a record

I have an xls sheet with three columns:
username | password | record_id

I wanted to run both TestCases in a TestSuite for every iteration.

So in the first iteration the username, password and record_id are taken from the first row.
In the second iteration the username, password and record_id are taken from the second row.

After reading your post I created a third TestCase in which T1 and T2 are executed:

WebUI.callTestCase(findTestCase('Generell/Login'), [:], FailureHandling.CONTINUE_ON_FAILURE)
WebUI.callTestCase(findTestCase('Record specific/Open Existing Message'), [:], FailureHandling.CONTINUE_ON_FAILURE)

T3 is then executed in a TestSuite where the mapping to the test data is set up.

But at execution time, no values from the test data sheet get mapped to the test cases.

What did I do wrong? How did your mapping work?

In the meantime I solved this issue by doing all the work in one TestCase:

data = findTestData('xlsx/valid data')

for (def index : (0..data.getRowNumbers() - 1)) {

// Login Test Case Section
'Open Browser'
WebUI.openBrowser('')

'Maximize current browser window'
WebUI.maximizeWindow()

'Navigate to URL "http://localhost:8080/myapp/"'
WebUI.navigateToUrl('http://localhost:8080/myapp/')

'Enter Username'
WebUI.setText(findTestObject('Page_Login/input_j_username'), data.internallyGetValue('username', index))

'Enter Password'
WebUI.setText(findTestObject('Page_Login/input_j_password'), data.internallyGetValue('password', index))

'Click on Login button'
WebUI.click(findTestObject('Page_Login/button_Anmelden'))

'Check if Filter Button is present to verify GUI is fully loaded.'
WebUI.verifyElementPresent(findTestObject('Page_Login/Filter_Button_GUI'), 5)

// Open Record TestCase Section

String dynamicId = data.internallyGetValue('record_id', index)

String xpath = MySelectors.dynamicIdPath.replace('<>', dynamicId)

TestObject myRecord = MySelectors.getMyTestObject('xpath', xpath)

WebUI.waitForElementClickable(myRecord, 10)
WebUI.click(myRecord)

WebUI.waitForElementClickable(findTestObject('Object Repository/Edit Existing Message/button_Abort_Edit'), 10)

WebUI.delay(5)
WebUI.click(findTestObject('Object Repository/Edit Existing Message/button_Abort_Edit'))

'Close Browser'
WebUI.closeBrowser()
}

It works, but I think it’s harder to maintain than using different TestCases.

For my setup, I have a main Test Suite with 2 tests inside: Login and my “Data Driven” test case, which is the test case that calls other test cases.

At that Test Suite level, I have my “Data Binding”. This is a CSV file that was imported into the “Data Files” container. On each iteration of the Test Suite, the next name from the file is used to drive the test cases.

In essence, the Test Suite is used so it can attach to the data file and load the users through data binding. All the work is done within the data driven test case which is merely a list of 30 calls to other test cases. It acts as a FOR loop to keep iterating through the list of user names I’ve supplied.

So the setup would be:
WebUI.setText(findTestObject('userName'), impersonateUser)
WebUI.callTestCase(findTestCase('TC1'), [:], FailureHandling.CONTINUE_ON_FAILURE)
WebUI.callTestCase(findTestCase('TC2'), [:], FailureHandling.CONTINUE_ON_FAILURE)
WebUI.callTestCase(findTestCase('TC3'), [:], FailureHandling.CONTINUE_ON_FAILURE)

This looks quite similar to what you have set up for your scenario.
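One detail worth noting: the second argument of `WebUI.callTestCase` is a map of variable bindings, so instead of typing the name into a shared field first, you could pass the current row’s value straight into each called TestCase — assuming each of them declares a matching TestCase variable. The variable and test case names below are hypothetical:

```groovy
// 'userName' must be declared as a TestCase variable in TC1 and TC2;
// 'impersonateUser' is the value bound from the data file at suite level.
WebUI.callTestCase(findTestCase('TC1'), ['userName' : impersonateUser], FailureHandling.CONTINUE_ON_FAILURE)
WebUI.callTestCase(findTestCase('TC2'), ['userName' : impersonateUser], FailureHandling.CONTINUE_ON_FAILURE)
```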