[KataConnect #16 Recap] AI-Powered Manual-to-Automation Test Conversion

Dear Indonesian QA Community and everyone,

Thank you for showing up with curiosity and heart. KataConnect #16 was all about getting practical with AI: what it changes, what it doesn’t, and how we can use it to test smarter without losing our quality mindset. Below is a quick recap of the highlights, then you can open each dropdown to read the original answers exactly as shared.

Key takeaways

  • Manual testing is changing, not fading. Many shared how the real shift is in how we think, learning to guide, question, and validate results rather than just execute steps.
  • Clarity matters more than ever. The clearer your test context, the better support and insight you get from any tool or teammate.
  • Self-healing and small wins count. Maintenance will always be work, but smarter setups and clear structure can make it lighter.
  • Quality still needs human eyes. Pipelines bring speed, yet intuition, curiosity, and common sense are what catch the things automation can’t.
  • Start tiny, grow steady. The most successful teams pick one repetitive area, try something new, learn from it, and build momentum from there.

Detailed questions and answers are noted below :backhand_index_pointing_down:

With AI features in place, how has the role of manual testers evolved within your team? Are there new skills they need to learn?

Manual testers' skills have to evolve to work with AI-augmented solutions: instructing through prompts, interpreting AI suggestions, and reviewing agent results and test analyses.

AI isn’t replacing manual testers, it’s helping us level up. The key now is learning how to work with AI: how to prompt it, how to review what it suggests, and how to still apply our own judgment. At the end of the day, tools are getting smarter, but quality still depends on how we think. So I would suggest leveling up as soon as possible.

Does Katalon’s AI provide actionable insights or recommendations based on previous test results? Could you share an example?

Katalon’s AI analyzes past test runs, compares them with the latest results, and provides actionable suggestions: it flags flaky tests, recommends test coverage improvements, highlights frequently failing test areas, and isolates environment issues from script failures.
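As an illustration of the kind of analysis described here (a simple sketch, not Katalon's actual implementation), flaky tests can be separated from consistently failing ones by looking at mixed results across identical runs:

```python
def classify_tests(history):
    """Classify each test from its recent run history.

    history: dict mapping test name -> list of booleans (True = pass),
    ordered oldest to newest. Names and structure here are illustrative.
    """
    report = {}
    for name, runs in history.items():
        if all(runs):
            report[name] = "stable"
        elif not any(runs):
            # Always failing: likely a real defect or a broken script.
            report[name] = "consistently failing"
        else:
            # Mixed pass/fail on the same test: a flakiness signal.
            report[name] = "flaky"
    return report

runs = {
    "login_test":    [True, True, True],
    "checkout_test": [True, False, True, False],
    "search_test":   [False, False, False],
}
print(classify_tests(runs))
```

Real tooling adds more signals (timing, environment, failure messages), but the core idea is the same: compare the latest run against history before deciding what a failure means.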

Can AI read and understand the entire library and source objects in Katalon (object repository, test data, custom keywords), enabling more accurate recommendations or automation?

It can read your existing code and files, but results are better and more accurate if you provide the context explicitly rather than asking the AI to figure it out or explore on its own.

How can AI help reduce the effort and time required to create and maintain test scripts, which are often one of the biggest challenges in automation testing?

One of AI’s strongest capabilities is “writing,” and it is exceptionally fast at it. But we, as QA, are still the “analysts” and “reviewers.” AI can accelerate the work, but humans remain responsible for designing, reviewing, and validating the output.

For maintenance, Katalon already provides a built-in solution called “self-healing.” You can capture the primary locator using Katalon Record or Object Spy, or add locators manually.
When the primary locator breaks, the system automatically switches to an alternate locator.
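The fallback idea behind self-healing can be sketched in plain Python (an illustrative model only, not Katalon's internals): try the primary locator first, then fall through to the alternates until one matches.

```python
def find_with_healing(find, locators):
    """Try each locator in priority order; return the first element found.

    find: a callable that returns an element or raises LookupError,
    standing in for a real driver call such as Selenium's find_element.
    locators: locator strings, primary first, alternates after.
    """
    for locator in locators:
        try:
            element = find(locator)
            if locator != locators[0]:
                # A real tool would log this and suggest updating the object.
                print(f"self-healed: fell back to {locator!r}")
            return element
        except LookupError:
            continue
    raise LookupError(f"no locator matched: {locators}")

# Simulated page where the primary id changed after a UI update.
page = {"css=#submit-v2": "<button>"}

def fake_find(locator):
    if locator in page:
        return page[locator]
    raise LookupError(locator)

element = find_with_healing(fake_find, ["css=#submit", "css=#submit-v2"])
```

The key design point is that healing is a runtime fallback: the test keeps running, and the broken primary locator is surfaced for a human to review later.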

To what extent can AI detect changes in the application (e.g., UI element updates) and automatically adjust object recognition without manual intervention?

This hasn’t been tested extensively yet.

Can AI analyze test execution results and provide automated insights, such as root cause analysis for failed tests?

In some cases, yes, AI can provide automated insights. However, it may still “hallucinate” root causes.
For example, it may report “element not found” when the real issue is overlapping elements.
Validation and human review remain essential.
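A rule-based triage of failure messages (a deliberately naive sketch, not how any particular tool works) shows both the idea and its limits: the classifier below would also report “element not found” when the true cause is an overlapping element intercepting the interaction, which is exactly why human review stays essential.

```python
def triage(message):
    """Map a raw failure message to a coarse root-cause bucket."""
    msg = message.lower()
    if "no such element" in msg or "not found" in msg:
        return "element not found"
    if "timeout" in msg or "timed out" in msg:
        return "timeout / slow environment"
    if "assert" in msg:
        return "assertion failure (possible product bug)"
    return "unclassified"

print(triage("NoSuchElementException: no such element: #submit"))
```

AI-based analysis is far richer than keyword matching, but the failure mode is similar: a plausible label that a person still has to validate against the actual application state.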

How do you keep the QA team’s quality mindset strong when automation takes over most testing activities?

Even when most testing is automated, the quality mindset has to stay strong. Automation helps us test faster, but it doesn’t think for us.

We need to focus on curiosity, critical thinking, and collaboration. Quality isn’t just about running scripts — it’s about asking why something works, not just how it’s tested.
If we don’t maintain critical thinking and domain knowledge, automation and AI won’t be as valuable.

Many QA teams are still in the transition from manual to automation. What are the most common challenges, and how can AI-powered solutions help overcome them?
  1. Skill gap: Many testers lack programming or tool expertise. Low-code/no-code AI testing platforms reduce that barrier.
  2. Maintenance: Frequent application updates cause broken or outdated tests. AI can auto-heal tests, detect flakiness, and suggest updates, reducing maintenance effort.

Beyond skills, the hardest part is mindset — moving from manual to automation means thinking about scalability and reusability.
AI can help bridge the gap by assisting with test generation and analysis, but teams must still build the right habits.

As a new company moving to automation, does the system have to be stable first before implementing it? What’s the best way to prepare for Katalon Studio adoption?

(Discussed live during the session.)

Many QA practitioners remain skeptical of AI-powered testing, believing it’s not yet reliable. How can teams prove its real value, and what metrics show effectiveness?

To assess AI effectiveness, measure speed of execution — for example, compare how fast a bug report or test case is created using AI versus manually.

To overcome skepticism, demonstrate practical use cases:

  • Generate tests or scripts directly from requirements.
  • Track measurable impact: faster scripting, fewer manual errors, quicker fixes, easier maintenance.

Once teams see tangible improvements, acceptance follows naturally.

Katalon integrates with DevOps for regression automation. Is that sufficient to ensure quality, or is manual regression still necessary?

CI/CD-integrated regression is essential for speed and consistency but not entirely sufficient.
Manual testing still adds value for complex logic, edge cases, and visual verification.
A hybrid approach ensures broader coverage and higher confidence in quality.

What’s the best way to restart the roadmap of shifting from manual to automation when the team is already overloaded with daily work and documentation?

Start small and prioritize high-impact areas.
Begin with repetitive test scenarios and run a pilot project with clear goals and timelines.

If possible, form a small focus team dedicated to automation using low-code/no-code tools.
In parallel, integrate with a test management platform to collect results, manage requirements, and streamline documentation.

Iterate based on early outcomes — evaluate, adjust, and then scale gradually.

:speech_balloon: Please tell us

What’s your biggest takeaway from this session?
Which idea resonated most with you?
And what would you love to see in KataConnect #17?

-The Katalon Community Team