How to use already opened browser for script execution

Hi team,

I want to use the same session (Chrome browser) for my script executions instead of launching a new session for each script. Could you please let me know the possibilities, and share any code if you have it?


Just an FYI, this is generally discouraged from a test methodology perspective. Reusing an existing browser instance adds extra unknowns to your tests, including browser cache data, cookies, etc. You can reuse an existing browser instance for debugging your scripts by clicking on a line in your script and choosing “Run from here”. But again, I would not do this for actual test executions.


As noted, you can keep the initial browser session open and then attach to that. For example, I can run the Login script and then keep using that browser session. If you click the down arrow next to the Run button, there will be a submenu for your browser; that is the open session.

You can also set a breakpoint for your script, then launch it using the Debug button. You will then be able to step through the code or run a different script for the same browser.

The option to continue running a sequential test case using the currently open browser has been removed. Will this option be available again? I break my test cases down into small iterations and like to run them individually before moving them to a Test Suite. Without this option, I am having to create one huge test case.
Thanks

To start, let’s be clear: discouraged doesn’t mean forbidden. If it’s something that makes more sense for your case, then by all means, reuse a browser instance.

That being said, my main problems with reusing a browser instance would be:

  1. You may be introducing data caching/cookie reuse/other unknowns into your test scenario, which may or may not be desirable. In my opinion, one of the goals of any functional testing should be to reduce the number of variables you are considering to as few as possible, i.e. the exact functionality you are trying to target.
  2. The biggest one for me is, from a purely methodological standpoint, you should have as few dependencies between your tests as possible (ideally 0, which would be a hermetically sealed test case). Let’s say that you have a suite of 50 test cases, and all of them pass except 2 or 3. If you’ve written completely self-sufficient test cases (i.e. each one opens a browser, logs in, etc.), you can just run those 2 or 3 scripts alone and be sure that the results are valid. If your test cases depend on each other, then you have to run most or all of the suite just to test a couple things and to be confident in the results.

To me at least, this sounds less like a normal user scenario than logging in/out between cases, but then I don’t know your application(s).

The simple act of logging in isn’t really “testing that it works”. You would specifically write a login test for that. In this case, logging in would just be a means to an end (i.e. testing functionality further within the app).

If a single user logging in/out of your application between tests is causing a load issue, you have bigger problems on your hands… :expressionless:

The suggestion was to simply disable the Login and Logout modules during a test suite execution. Then if a test case did fail, I would just re-enable the disabled modules and then I could run/validate that single test case.

That’s a good point that I could bring up. My manager would bring up the idea of how a normal user is using the system and yeah, performing a set of different actions/functions in a single login and browser instance doesn’t sound like a normal user scenario. They proposed a manual QA member scenario, not a normal user scenario.

But in either case, they seem to be leaning towards reusing a single browser instance and while I expressed my concerns with that, they still want to take this route. Could you point me in the direction of some resources for utilizing “setUp” and “tearDown” methods for test suites and collections and how to reuse a single browser instance in that way?

Sounds like a nightmare :upside_down_face:

For the setUp and tearDown stuff (they are called “Test Hooks” or “Test Listeners” in Katalon):

https://docs.katalon.com/katalon-studio/docs/fixtures-listeners.html#setup-and-teardown-for-test-suite-and-test-case

In terms of how to reuse a browser instance, you just need to make sure that WebUI.openBrowser() is called once at the start of the suite, and nowhere else. Any further WebUI calls will then reuse this instance. Also, make sure that Project > Settings > Execution > Terminate drivers after each Test Case is unchecked (it should be by default).
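If you want the suite itself to manage that single session, a test listener can do it. A minimal sketch (the class name is a placeholder; the @BeforeTestSuite/@AfterTestSuite annotations and WebUI keywords are standard Katalon):

```groovy
import com.kms.katalon.core.annotation.AfterTestSuite
import com.kms.katalon.core.annotation.BeforeTestSuite
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

class SingleBrowserListener {

    @BeforeTestSuite
    def openSharedBrowser() {
        // runs once before the suite; every test case reuses this session
        WebUI.openBrowser('')
    }

    @AfterTestSuite
    def closeSharedBrowser() {
        // runs once after the whole suite has finished
        WebUI.closeBrowser()
    }
}
```

This only works as long as "Terminate drivers after each Test Case" stays unchecked, as noted above.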


Hi jpotrawski,
The conversation was interesting but there is another strategy to speed up your testing but it isn’t perfect.
My 2 cents.

  1. I like your idea of reusing an open session. Login doesn’t have to be disabled; instead, enhance it to look for an existing web browser instance and connect the web driver to the running browser. Only if one doesn’t exist does Login launch a browser and log in. If the browser launch and login can be avoided, that saves about 20 seconds a test. 1,000 tests using an existing browser can save over 5 hours of execution time for tests that can’t be executed in parallel.
  2. Katalon supports parallel execution on the same PC. If the application allows, then run 8 tests in parallel and don’t worry about reusing an open browser. If each test is 3 minutes long then a test suite that takes 3,000 minutes (1,000x3) is trimmed to 375 minutes by running 8 tests in parallel. That saves 43 hours in total execution time. Note: The test flow and application has to allow multiple logins from one PC for the same user.
    2a) Note that video recording of failed tests does not work correctly for parallel execution: the video captures the desktop when it needs to capture the browser. Not sure whether screenshots are affected.
  3. Depending on your needs, enhancing Login or reusing a browser have benefits to your test strategy.
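For point 1, the "connect to an existing session if there is one" check might look something like this. This is only a sketch: the URL and login steps are placeholders, but DriverFactory.getWebDriver() is the standard Katalon way to get the currently attached driver:

```groovy
import com.kms.katalon.core.webui.driver.DriverFactory
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

def ensureLoggedIn() {
    try {
        // if a driver is already attached, this call succeeds and we reuse the session
        DriverFactory.getWebDriver().getCurrentUrl()
    } catch (Exception ignored) {
        // no running browser: fall back to a full launch + login
        WebUI.openBrowser('')
        WebUI.navigateToUrl('https://your-app.example/login') // placeholder URL
        // ... perform the actual login steps here ...
    }
}
```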

Apologies for responding to this old post, but I think I am experiencing some of those weird behaviors or unknowns you mentioned that may result from reusing the same browser instance.

When running a test suite that reuses the same browser instance, some tests will fail with a StaleElementReferenceException. The tests that fail seem to do so because the same web element / object (a link in a navigation panel) cannot be clicked. But when a failed test is rerun individually, it executes successfully. For example, if I have 3 tests (as part of a larger test suite) that each click the same web element, I found that the 1st test may click the web element successfully while the other 2 will throw the exception.

I did some reading on it in the Selenium docs (/documentation/webdriver/troubleshooting/errors/), and my guess is that by reusing the same browser instance, subsequent calls to click that same web element throw the exception because of some odd behavior in the DOM.

What are your thoughts?

However, I did just run a test suite where it opens a new browser and logs in before each test case as well as logging out and closing the browser after each test case. Some exceptions still occur though (StaleElementReference and ElementNotInteractable) so now I don’t think it’s purely the fault of reusing the browser. I think I have to improve the wait mechanics? I use lots of “waitFor-XXX” steps, but sometimes it still fails (timing-wise).

This generally has nothing to do with the browser instance you are using (or cookies, etc.).

A StaleElementReference occurs when you are using a reference to a web element after the page has changed in some way (full page reload, AJAX, etc.). As a very simple example case:

WebElement link = driver.findElement(By.xpath("//a"))
link.click()

// user is redirected, page is reloaded, etc.

link.click() // throws StaleElementReferenceException: the reference is no longer valid

WebDriver does this because it cannot guarantee the integrity of that reference if the page has changed in any way. What if that link disappears? What if it changes position in the DOM? Then the existing reference is (probably) not valid anymore.

As a general rule of thumb, you should ALWAYS relocate any elements you plan on using after some action causes the page to change. Using our above example:

WebElement link = driver.findElement(By.xpath("//a"))
link.click()

// user is redirected, page is reloaded, etc.

link = driver.findElement(By.xpath("//a")) // re-locate to get a fresh reference
link.click()
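If a page updates asynchronously (AJAX) and you can't always predict when a reference will go stale, a small relocate-and-retry wrapper is a common pattern. A sketch using plain Selenium types (the attempt count is arbitrary):

```groovy
import org.openqa.selenium.By
import org.openqa.selenium.StaleElementReferenceException
import org.openqa.selenium.WebDriver

// click an element, re-locating it from scratch if the reference goes stale
def clickWithRetry(WebDriver driver, By locator, int attempts = 3) {
    for (int i = 0; i < attempts; i++) {
        try {
            driver.findElement(locator).click()
            return
        } catch (StaleElementReferenceException ignored) {
            // the DOM changed between findElement() and click(); loop and relocate
        }
    }
    throw new StaleElementReferenceException("still stale after " + attempts + " attempts: " + locator)
}
```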

Interesting. Appreciate the explanation.

I mostly use the Object Repository and use the objects I have captured beforehand. So using your explanation, the Object holds the same reference to a web element. After the page changes in some way, that same Object used again in some test step may lose the integrity of that reference? Am I understanding this correctly?

If so, what do you recommend for incorporating the relocation of the same element along with the Test Objects found in the Object Repo?

Depends on how you are using it. If you are using the Test Object within a WebUI method (WebUI.click(), for example), then you are safe, because the WebUI methods all re-locate the element using the Test Object. If you are using the Test Object to retrieve a WebElement instance, something like this:

TestObject testObject = ...
WebElement element = WebUiCommonHelper.findWebElement(testObject, 30)

… then you will still potentially get an exception if you reuse the WebElement reference.

At the end of the day, Test Objects are nothing more than containers for locators (XPath, CSS, etc.), so references to Test Objects can always be reused. You can only really get into trouble reusing references to WebElements.
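In other words (the object repository path here is just an example):

```groovy
import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
import com.kms.katalon.core.webui.common.WebUiCommonHelper
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI
import org.openqa.selenium.WebElement

// safe: WebUI.click() re-locates the element from the Test Object's locator every time
WebUI.click(findTestObject('Page_Home/a_NavLink')) // example repository path

// risky: this caches a WebElement; if the page changes afterwards,
// later uses of `element` can throw StaleElementReferenceException
WebElement element = WebUiCommonHelper.findWebElement(findTestObject('Page_Home/a_NavLink'), 30)
element.click()
```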

Thanks again. I understand now.

But that is interesting because the steps that are failing are simply steps of WebUI.click() on a previously captured Test Object. I have 3 test cases in a row, where each test will be clicking the same navigation panel link each time. But I noticed that 1-2 of those tests may fail on that “click” step while one of them may work just fine.

But on the other hand, the percentage of tests failing due to StaleElement is very low. It’s just a little annoying to see that flakiness because of the false failures.