[AKA 8 Sept 2025] Web Testing

Hey Web Testing enthusiasts! :wave:

We are excited to announce that Ask Katalon Anything (AKA) is now LIVE! :tada: Whether you’re exploring the Katalon platform, running into challenges, or simply looking for tips to enhance your testing workflows – we’re here to answer all of your questions.

Thuy Ngo is our Product Manager for Katalon Studio, with a mission to continuously enhance Studio to deliver more power and efficiency with every release. She is passionate about engaging with the Katalon Community and values the opportunity to hear your questions and feedback.

Duy Lam is a Software Engineer on the Studio team, dedicated to developing impactful solutions that empower software builders worldwide.

During this AKA, feel free to reach out to Thuy and Duy with any ideas or questions about Web Testing - they’re here to help!

:new: Guidelines :spiral_notepad:

  • If you have any questions about Web Testing, please raise them directly under this thread. Ask one question at a time by replying to this topic, using the Reply button AT THE BOTTOM of the topic.
  • We will provide a weekly Q&A summary of the Katalon Studio discussion in a separate thread. Before posting your own question, please read through the existing questions in the summary to check whether it has already been asked by another member.
  • Please don’t reply to anyone else’s post. If you’d like to discuss a related topic in more detail, you can Create a new topic.
  • After posting your question, please allow 1-3 days for an answer, as I may also be busy with other projects.
  • And lastly, don’t forget to have fun! :wink:

Posts/replies that do not follow these guidelines may be removed to keep the AKA session flowing smoothly. Thanks! :+1:

3 Likes

@duy.lam As a Software Engineer specializing in automation testing, what are some of the key best practices you recommend for building a maintainable and scalable test automation framework, especially for larger projects? Much appreciated

Colin

1 Like

@ColinL

what are some of the key best practices you recommend for building a maintainable and scalable test automation framework, especially for larger projects?

It’s a topic that’s often discussed in the Katalon community. To make sure I give you the best advice, could you please clarify your question a bit?

Are you asking for:

  • Best practices for designing and building a new test automation framework from scratch (e.g., deciding on the core architecture, tools, and design patterns)?

  • Best practices for working within an existing framework, like Katalon Studio, to ensure your test projects are scalable and easy to maintain?

A specific example of what you’re trying to achieve would be super helpful. Once I understand exactly what you need, I can provide you with a clear and concise answer.

2 Likes

Thanks for clarifying! What I’m really interested in is best practices when working with an existing framework like Katalon Studio. At the same time, I’d also love to hear if there are a few universal principles you’d recommend when starting from scratch.

Some of my questions are:

  • How do you usually approach organizing test cases so they don’t become messy over time?

  • What are some good strategies for reusability and maintainability as the project scales?

  • And how do you deal with flaky tests in larger suites?

I think these would help me (and others here) build something that’s scalable and easier to maintain in the long run.

Colin

1 Like

Thank you for your question; @duy.lam will answer it shortly :grinning_face:

Hello @Shin, I also have some questions which I believe many community members here are wondering about: What’s the future direction or roadmap for StudioAssist? Will it evolve beyond code suggestions to become more of a true QA assistant, one that can help plan, create, and maintain tests throughout the lifecycle?

@ColinL

Thank you for asking. I’m happy to share some common practices with you.

And how do you deal with flaky tests in larger suites?
What are some good strategies for reusability and maintainability as the project scales?

For these topics, the Katalon Academy is a great resource. We have two courses that you can check out and practise with, and there are more courses on Studio functionality in case you want to explore other aspects.

How do you usually approach organizing test cases so they don’t become messy over time?

When it comes to organizing test cases, you should approach it just like organizing your code: a clean and consistent structure is key. Here are a few common practices (with a small illustrative layout after the list):

  • Follow a logical structure: Group test cases based on features, modules, or the application’s functionality. For example, all tests related to user authentication should be in one folder. This makes it easy for anyone on the team to find what they’re looking for.
  • Use clear naming conventions: Use descriptive names for your test cases. Instead of test1, use something like TC_Login_ValidCredentials or TC_Verify_CheckoutProcess. This provides immediate context and makes your test suite easier to navigate.
  • Leverage tags or categories: Most frameworks, including Katalon Studio, allow you to tag or categorize tests. You can use tags like @smoke, @regression, or @critical to easily run a specific subset of tests.
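
For illustration, here is a hypothetical layout that combines these ideas (the folder and test case names are made up; adapt them to your own features):

    Test Cases/
    ├── Authentication/
    │   ├── TC_Login_ValidCredentials
    │   ├── TC_Login_InvalidPassword
    │   └── TC_Logout_FromDashboard
    └── Checkout/
        ├── TC_Verify_CheckoutProcess
        └── TC_Checkout_EmptyCart

Cross-cutting concerns such as smoke or regression coverage are usually easier to manage with tags (e.g. @smoke, @regression) than with extra folders, so a single test case can be pulled into several runs without being duplicated.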

I hope this helps

1 Like

Love them or hate them, they will always make their presence felt.

1 Like

What will be the role of AI in Katalon Studio in the future?
Will it primarily support test script creation, or can we expect a near “live” assistant in the coming years?

For example, when a large number of automated tests are scheduled to run overnight across multiple projects, could we envision receiving a multi-project execution report that highlights critical issues—with prioritization?
Such a feature would significantly improve efficiency in terms of fixing bugs, updating/maintaining existing tests, and adding new ones.

1 Like

I would like to try out StudioAssist for generating web tests, but I get the following error after I submit a prompt. How can I fix this?

com.kms.katalon.ai.core.model.exception.StudioAssistLlmApiServerException: {"code":13,"message":"com.fasterxml.jackson.databind.exc.ValueInstantiationException: Cannot construct instance of com.katalon.testops.restclient.GetSubscriptionExpirationResponse, problem: Invalid expirationDate format. Expected ISO 8601 format.\n at [Source: REDACTED (StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION disabled); line: 1, column: 1]"}

1 Like

@dbrownlee Please send me (via direct message) the username and the account name you used to sign in to Katalon Studio.

1 Like

@cgrandin

For example, when a large number of automated tests are scheduled to run overnight across multiple projects, could we envision receiving a multi-project execution report that highlights critical issues—with prioritization?

Thanks for the suggestion. From an automation engineer’s perspective, we can think about this in terms of how we determine the severity of a single test case failure.

Imagine a simple scenario: A test case runs successfully to completion, but a single verification step fails. For example, a page title doesn’t match the expected value. In your experience and product domain, what criteria would you use to determine the severity of that specific failure?

This will help us understand what criteria are most suitable for prioritizing critical issues in a multi-project execution report.

Duy

1 Like

Hello everyone, this is a great time to answer this question as we’ve just introduced StudioAssist Agent Mode in version 10.3.2.

Agent Mode interprets natural-language prompts and can execute multi-step actions such as creating test cases, troubleshooting errors, and automatically accessing official documentation. It is powered by two MCP (Model Context Protocol) servers:

  • Katalon MCP Server – connects to Katalon Docs to provide accurate, up-to-date answers backed by official guides.

  • Katalon Studio MCP Server – connects directly to your project to read, create, and edit test cases automatically.

Learn more here: StudioAssist Agent mode
—
And we’re not stopping there! StudioAssist Chat Mode will keep getting better, with more tools coming soon to the Studio MCP Server.

I’m thinking… how about StudioAssist helping you troubleshoot issues, analyze failures, or even migrate your project from 9.x to 10.x - all with ease? :laughing:

1 Like

To determine the severity, my suggestion would be to place the error within a sort of multi-project risk management matrix. Indeed, an error in text may not have the same impact depending on the type of project where it was detected, the type of user it’s exposed to, whether it contains sensitive data, etc. It also depends on the feature involved in the test where the error appears—whether it’s a critical or secondary function. Perhaps such a matrix could actually be a weighted model powered by AI, considering various test reports, requirements, and specifications. There seem to be a lot of prerequisites, but with large volumes, this might be a helpful tool to move toward more efficient fixes. I wonder if such a solution could be implemented in the medium term.
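
To make the weighted-matrix idea a bit more concrete, here is a minimal sketch (in Groovy, since that is what Katalon scripts use) of how such a score could be computed. The factor names, weights, and example values are purely illustrative assumptions, not an existing Katalon feature:

    // Illustrative only: a hypothetical weighted severity score for one failure.
    // Factor names and weights are assumptions, not part of any Katalon API.
    def weights = [
        projectCriticality : 0.4,  // how business-critical the project is (0..1)
        featureCriticality : 0.3,  // core vs. secondary feature (0..1)
        userExposure       : 0.2,  // share of users who hit this flow (0..1)
        sensitiveData      : 0.1   // 1 if the flow touches sensitive data, else 0
    ]

    def severityScore(Map factors, Map weights) {
        weights.collect { name, w -> w * (factors[name] ?: 0) }.sum()
    }

    // Example: a page-title mismatch on a core feature of a business-critical project.
    def score = severityScore(
        [projectCriticality: 0.9, featureCriticality: 0.8, userExposure: 0.5, sensitiveData: 0],
        weights)
    println "Severity score: ${score}"  // 0.70, which the matrix could map to 'High'

The multi-project report discussed above could then sort failures by such a score.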

1 Like

Since we’re already talking about StudioAssist Agent Mode, what if Studio brought you a Recording Agent? :thinking:

Imagine this: an Agent that not only captures your manual recording steps, but also lets you type a natural language prompt (like ‘Verify user can update profile picture and save changes’) and instantly turns it into test steps. You could even pick which MCP server handles the prompt for test generation.

Some ideas we’re tossing around:

  1. Mix manual + prompt → record part of the flow, then just type the rest.
  2. Generate test cases faster → use prompts instead of recording every single step.
  3. Domain-ready → teams can plug in their own MCP server (banking, ERP, etc.) for domain-specific accuracy.

:backhand_index_pointing_right: Feed us your ideas: would this make your testing life easier?

2 Likes

I would like to invite our top contributors to share their opinions :grinning_face: @trust_level_3, your input is so valuable to us and we would like to hear your voices :rocket:

Recently, I have been trying to track the flakiness of all test cases for my maintenance plan.

I found out that there are two places where I can observe flakiness (and the total number of executions of a test case). The first place is Katalon TestOps > Tests > Test Case > Select one folder > View the column Flakiness. The second place is Katalon Studio > Test Suite > View the column Flakiness.

But the numbers in these two places are not the same! So I wonder:

  • How does Katalon define these numbers?
  • What is the correct and efficient way to observe flakiness? For example, what is an acceptable flakiness percentage?

Note: When I click the test case in the first place, the test case path is (repository name) / Test Cases / …. But when I click the flakiness hyperlink in the second place, its path is Uploaded Data / Test Cases / ….
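
For reference, one common way a flakiness percentage is computed is the share of consecutive runs in which a test case changes status; whether Studio or TestOps uses exactly this formula is part of my question. A rough sketch of that general idea:

    // One common flakiness definition (illustrative; not confirmed to be Katalon's formula):
    // the percentage of consecutive runs in which the test case changed status.
    double flakiness(List<String> statuses) {
        if (statuses.size() < 2) return 0.0d
        int flips = (1..<statuses.size()).count { statuses[it] != statuses[it - 1] }
        return 100.0d * flips / (statuses.size() - 1)
    }

    // Example: 5 runs with 2 status changes -> prints 50.0 under this definition.
    println flakiness(['PASSED', 'FAILED', 'PASSED', 'PASSED', 'PASSED'])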

1 Like

Dear @shin and @duy.lam, linhmphan might need your help here :star_struck:

My context: when executing test cases, I sometimes know from the error message that the failed test scripts will pass if I rerun them.

So I want to add a conditional statement (using TestCaseContext getTestCaseStatus() and getMessage()) in a built-in TestListener with @AfterTestCase. However, I cannot figure out how to rerun the test case dynamically (I only found the method skipTestCase()).
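
For illustration, here is a rough sketch of the listener I have in mind (the TestCaseContext methods are the ones mentioned above; the transient-error check is just a made-up example):

    // Sketch of the listener described above (illustrative only; verify the exact
    // TestCaseContext API against the Katalon Studio version you are using).
    import com.kms.katalon.core.annotation.AfterTestCase
    import com.kms.katalon.core.context.TestCaseContext

    class RerunCandidateListener {

        @AfterTestCase
        def flagRerunCandidate(TestCaseContext testCaseContext) {
            def status  = testCaseContext.getTestCaseStatus()  // e.g. "PASSED" / "FAILED"
            def message = testCaseContext.getMessage()         // failure message, if any

            // Hypothetical check: treat a known transient error as a rerun candidate.
            if (status == 'FAILED' && message?.contains('StaleElementReferenceException')) {
                println "Rerun candidate: ${testCaseContext.getTestCaseId()}"
                // There is no obvious call here to rerun the test case on the spot,
                // which is exactly the gap I am asking about.
            }
        }
    }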

Or could you provide any better solutions for my context?

1 Like

I have one new test case A and one existing test case B. To handle the dependency, I have to add test cases A and B to one test suite.
Scenario 1: When I run that test suite in Katalon Studio, the test results are uploaded to TestOps (per the configuration in Project > Setting > Katalon Platform).
Scenario 2: For regression testing, when I rerun failed test suites locally in Katalon Studio, the test results are also uploaded to TestOps.
But I want only the test results from scenario 2 to contribute to the flakiness of the test cases.

My first solution is to manually turn the “Automatically upload all test reports to Katalon Platform” setting on or off, depending on my needs.
But are there any smarter ways to meet this need?

1 Like