@arvind.choudhary Hi there! That’s a great question. It’s helpful to think of Katalon Studio and TrueTest as complementary tools that address two different but equally important testing perspectives.
Here is a simple breakdown of how they differ:
Katalon Studio (including Recorder): This is designed from a development perspective. It focuses on the functional requirements and technical specifications defined by your team to ensure the software works as intended.
TrueTest: This is designed from a user-journey perspective. It uses AI to identify how real users are actually navigating your application.
In short, while Studio ensures your code meets its requirements, TrueTest enhances your quality coverage by revealing the actual paths your users take in the real world. By using both, you get a much more complete picture of your application’s health.
I hope this clarifies things! Does this help answer your question, or can I provide more detail on a specific use case?
When will Katalon Studio come as a web version? And even in the desktop version, there should be an auto-update feature.
I’ve replied to a similar question here
For example, I am currently on version 10.2.3 and everything is working fine, but I see a newer version has come to market. Shifting now takes a lot of effort, and the bigger problem is that we need to ask the entire team to update at the same time; otherwise we get error messages and the project doesn't behave nicely when we jump from one version to another.
Just a reminder in case you don’t know: you can keep multiple Studio versions installed on the same workstation as a backup, so you always have a stable work environment.
Will AI recording support data-driven testing by detecting input patterns and suggesting parameterization during recording?
Thank you so much for this great suggestion.
It is actually already on our enhancement list. When we first began developing the AI Recording Agent, we realized there are so many possibilities for AI-generated artifacts, and your idea aligns perfectly with that vision.
While I can’t provide a specific timeline or promise a roadmap date here, it is a pleasure to receive such high-quality feedback from our users. Insights like yours are exactly what help us prioritize the most impactful features for the community.
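To make the idea concrete, here is a minimal sketch of how detecting input patterns across recorded runs could surface parameterization candidates. Everything here (the tuple shape, the `setText` action name, the `suggest_parameters` helper) is illustrative, not the actual Recording Agent API:

```python
from collections import defaultdict

def suggest_parameters(recordings):
    """Given several recorded runs of the same flow, flag input fields
    whose typed values differ between runs as parameterization candidates.

    `recordings` is a list of runs; each run is a list of
    (action, locator, value) tuples. All names are hypothetical."""
    values_by_locator = defaultdict(set)
    for run in recordings:
        for action, locator, value in run:
            if action == "setText":  # only typed inputs vary with data
                values_by_locator[locator].add(value)
    # A locator that received more than one distinct value looks data-driven.
    return {loc: sorted(vals)
            for loc, vals in values_by_locator.items()
            if len(vals) > 1}

run1 = [("click", "btn_login", None), ("setText", "input_user", "alice")]
run2 = [("click", "btn_login", None), ("setText", "input_user", "bob")]
print(suggest_parameters([run1, run2]))
# {'input_user': ['alice', 'bob']}
```

A real implementation would of course work on live recording events rather than replayed tuples, but the core signal is the same: the same field, different values, suggests a variable.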
Thanks everyone for joining Ask Katalon Anything (AKA)!
We loved the questions and the thoughtful discussions around test recording and agentic workflows.
Next steps:
We’ll keep responding to any remaining questions in the thread over the next few days.
Rewards will be reviewed and shared with selected participants soon.
Big thanks again to everyone who participated. See you at the next AKA!
Drop your ideas for the next AKA: what would you like to discuss with our PMs and Engineers?
Thanks so much for the suggestion. I agree that having clearer guidelines upfront would make these sessions smoother and help everyone get value from them.
Just to share a bit of context on what happened last time: the goal wasn’t to stop anyone from asking questions. It was mainly about keeping things fair and productive for the whole group.
And thank you again for your dedication and support; we always really appreciate it. We’ll make this better from the next session onward.
@depapp
Hey hey, thanks for the questions, and sorry for joining the AKA late. I think we can all agree that AI makes work easier across the board.
It definitely lowers the learning curve for no-/low-code users, which is exactly the segment Katalon Studio, and Katalon overall, is focused on serving right now.
With the Katalon Studio Recording Agent, we’re using Studio Engine to actually perform actions on the AUT. That means captured test objects are much more accurate than using an LLM alone.
@snehass @arvind.choudhary Regarding test maintenance, we’re working on a Semantic Locator and a supporting mechanism that lets you generate automated scripts even when the AUT isn’t available yet. This will also strengthen AI-based locators and take AI Self-healing to another level, making maintenance far less of a concern.
I’d say Katalon Studio is designed to be very friendly for no-code and low-code users. We offer a free version with the core features QA engineers need for their daily work, so it’s easy to get started with test automation.
Whether you’re new or experienced, you can move fast and focus on testing rather than setup complexity or technical constraints. Building on that, we’re taking it a step further in the age of AI: making the automation experience even more intuitive and accessible.
Looking ahead, Katalon Studio will support natural language scripting - not just generating tests, but also editing and maintaining them through simple, human-readable instructions.
That’s the vision: making automation feel less like coding (or not like coding at all) and more like communicating your intent.
We do! StudioAssist already supports this capability today.
Moving forward, all AI-generated outputs will include a diff view, so you can easily review, validate, and control any changes made to your test scripts.
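For anyone who wants this kind of review workflow today, the same idea can be reproduced with a plain unified diff of the script before and after an AI edit. This is just an illustration of the review concept using Python’s standard `difflib`, not how StudioAssist renders its diff view; the Groovy snippets inside the strings are made-up examples:

```python
import difflib

original = """\
WebUI.openBrowser('')
WebUI.setText(findTestObject('input_user'), 'alice')
"""
ai_edited = """\
WebUI.openBrowser('')
WebUI.setText(findTestObject('input_user'), username)
WebUI.click(findTestObject('btn_login'))
"""
# Unified diff of the script before and after an AI edit, so every
# change can be inspected line by line before it is accepted.
diff = difflib.unified_diff(
    original.splitlines(), ai_edited.splitlines(),
    fromfile="before_ai", tofile="after_ai", lineterm="")
print("\n".join(diff))
```

The point of a diff view is exactly this: the AI never changes a script silently; every added, removed, or edited line is visible for validation first.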
Katalon keeps your testing secure and enterprise-ready with:
Private Instance — A single-tenant, isolated environment where your data and infrastructure are fully dedicated to your organization. Supports offline license activation, so teams in air-gapped or restricted networks can operate without internet connectivity.
TestCloud Tunnel — Enables secure cloud-based test execution against internal applications behind your firewall, without exposing your network to the public internet.
Compliance — SOC 2 Type II, ISO 27001/27017, GDPR, with AES-256 encryption at rest and TLS 1.2+ in transit.
The Recording Agent doesn’t directly address locator stability, since it relies on the Studio Engine to capture objects. Its main role is to improve the recording experience.
Locator stability is more tied to the recording engine itself, and that’s something we’re continuously enhancing.
In the meantime, have you tried Smart Locator? It’s a strong complement to traditional XPath or CSS and can help make your tests much more resilient.
Semantic Locator, one of the new AI initiatives we’re working on, is designed to address exactly this problem.
When elements change, the agent can identify and resolve them by searching for alternatives based on their semantic meaning, rather than relying purely on traditional locators.
Instead of depending only on XPath or CSS, you can set Semantic Locator as the default strategy. This helps reduce flakiness and removes much of the concern around dynamic or changing elements.
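Conceptually, the fallback works like this: try the primary selector first, and if it no longer matches, pick the candidate element whose semantic attributes (visible text, role, labels) best match what was originally recorded. The sketch below is a toy model of that idea under assumed names (`resolve`, `semantic_score`, a DOM modeled as a list of dicts); it is not the real Semantic Locator implementation:

```python
def semantic_score(target, candidate):
    """Fraction of the target's semantic attributes (e.g. text, role,
    aria-label) that the candidate element still matches."""
    if not target:
        return 0.0
    hits = sum(1 for k, v in target.items() if candidate.get(k) == v)
    return hits / len(target)

def resolve(primary_locator, semantic_target, dom, find, threshold=0.6):
    """Try the primary (XPath/CSS) locator first; if it no longer matches,
    fall back to the best-scoring element by semantic similarity.
    `find` stands in for whatever engine queries the DOM; all names here
    are hypothetical."""
    element = find(dom, primary_locator)
    if element is not None:
        return element
    best = max(dom, key=lambda el: semantic_score(semantic_target, el),
               default=None)
    if best and semantic_score(semantic_target, best) >= threshold:
        return best
    return None

# The submit button's id changed, breaking the recorded "#submit" selector,
# but its meaning (a button labeled "Submit") has not.
dom = [
    {"css": "#submit-v2", "text": "Submit", "role": "button"},
    {"css": "#cancel", "text": "Cancel", "role": "button"},
]
find = lambda dom, sel: next((el for el in dom if el["css"] == sel), None)
target = {"text": "Submit", "role": "button"}
el = resolve("#submit", target, dom, find)
print(el["css"])  # prints #submit-v2: recovered by meaning, not by selector
```

The threshold is the interesting design knob: too low and the strategy heals onto the wrong element, too high and it gives up on legitimate matches.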
The Recording Agent is still new and currently in beta, so it handles simpler use cases at this stage. That said, intelligently breaking long recordings into reusable components is exactly the kind of capability we’re building toward.
Your experience is a perfect example of why this matters: 700+ test steps working locally but failing on a VM with a “method too long” error, with no prior warning from the tool. That shouldn’t happen. Studio should have flagged this early and suggested a modular breakdown before you hit that wall at execution time.
What we’re working on with AI capabilities is this kind of smart guidance: detecting overly long or complex flows and recommending breakdown into reusable keywords/test components during the authoring phase, not after failure. The goal is to shift intelligence left — from execution-time errors to recording-time recommendations. In the meantime, a good practice is to keep test cases under 200-300 steps and proactively break longer flows into Custom Keywords or Call Test Case steps.
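The shift-left idea can be illustrated with a trivial authoring-time check: count the steps in a recorded flow and warn, with a suggested split, before anything is executed. The threshold, helper name, and message are all illustrative (based on the 200-300 step guidance above), not a real Studio feature:

```python
MAX_STEPS = 250  # illustrative threshold, per the 200-300 step guidance

def lint_flow(steps, max_steps=MAX_STEPS):
    """Warn at authoring time when a recorded flow is too long and
    propose splitting it into reusable chunks. A sketch of the
    'shift left' idea, not an actual Studio API."""
    if len(steps) <= max_steps:
        return []
    n_chunks = -(-len(steps) // max_steps)  # ceiling division
    return [f"Flow has {len(steps)} steps (limit {max_steps}): "
            f"consider splitting into {n_chunks} Custom Keywords "
            f"or Call Test Case steps."]

warnings = lint_flow(["step"] * 700)
print(warnings[0])
```

Running this on a 700-step flow flags it immediately, which is exactly the moment a modular breakdown is cheap, instead of after a failed VM run.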
Thanks for raising this! Feedback like yours directly shapes how we prioritize these AI improvements.
You should definitely give StudioAssist a try. Even if you don’t have test data ready, it can generate it for you, then you can just follow up with another prompt and see how StudioAssist handles the rest.
Great question, and I completely understand the frustration around managing multiple licenses.
Here’s the good news — we hear you. The vision has always been to bring everything together, and that’s exactly where we’re heading. Without giving too much away, keep an eye out for the True Platform — the idea is simple: pay once, get everything.
As for generative AI test script generation from manual test cases — that capability is very much part of the roadmap and the broader platform vision. The goal is to make recording, AI-generated scripts, and manual-to-automated conversion all part of a unified experience, without needing to purchase separate modules.
I mentioned the True Platform in the previous answer. Our vision is to provide a fully autonomous testing workflow powered by AI — from manual test cases to automated scripts, auto-execution, and report generation, all within one platform. No context switching, no separate tools, no manual intervention in between.
Frankly speaking, AI can handle a large portion of the work based on the input, but human validation is absolutely essential. It’s always worth taking a moment to review and verify the results, since AI can occasionally make mistakes, and sometimes in subtle ways that are easy to miss. If we leverage AI capabilities with care, it can truly be a game-changer.