[Webinar] Revolutionize the Role and Skills of QA Testers and Quality Engineers in the Age of AI

Updated on Oct 23 with the recording of the session.



Hi all, :wave:

Artificial Intelligence (AI) is transforming the way software testing is done. Quality Assurance (QA) testers and Quality Engineers (QEs) need to adapt to the new challenges and opportunities that AI brings.

Join us on 2023-10-05T16:00:00Z as our speakers explore and analyze how the industry’s focus will shift from viewing QA and QE as mere job titles to understanding them as specialized crafts that are influenced and enhanced by AI technologies.

Key takeaways

  • An overview of the significance of AI in the current technological landscape, particularly its role in amplifying the functions and responsibilities of testers.

  • The importance of adopting a new mindset that embraces next-generation technologies to enhance the craft of testing.

  • A discussion on the challenges and reservations professionals may have about incorporating AI and how to discern credible advancements from hype.

  • An analysis of how AI specifically impacts the activities, speed, and skills required in QA and QE.

  • Guidance on the immediate learning objectives for QA and QE professionals and what to anticipate for future developments.

Our speakers

Mahesh Venkataraman - Managing Director at Accenture
Mahesh is an Information Technology professional with over 35 years of experience in software engineering, quality engineering, and Cloud architectures. He holds 12 patents, is a frequent speaker at international conferences, and is visiting faculty at premier institutes.

Mike Verinder - Chief Community Advocate at Katalon, Inc.
Mike has 20+ years of experience in digital technology. He is an expert in Automation, Cloud services, AI, DevOps, Digital Transformation, and more. Mike also founded and has managed the Selenium Automation Users Community of 181K+ members.

Mush Honda - Chief Quality Architect at Katalon, Inc.
Mush is a senior engineering leader with over 20 years of experience in Quality Engineering and Agile software development. He has developed scalable test automation solutions to facilitate teams’ transition to an automation-first, quality engineering mindset.

Lucio Daza - VP of Product Marketing at Katalon, Inc.
Lucio has over 16 years of experience in the software industry in multiple roles as a solution architect, product manager, and marketing manager. At Katalon, he leads the Product Marketing team to show the value of products to enterprises around the globe.

Save your seat


Thank you very much for tuning in, please find the recording of the session below :point_down:

2 Likes

Good to see @mike.verinder on board.

3 Likes

Thanks Russ :wink:
You always bring great perspective when you go to these things too. First thing I do when I get on these things is to see if you were able to make it.

3 Likes

Well, that was interesting…

In my view, there’s a disconnect between where we need to go and what we (you guys) have planned. Like I intimated in the chat, that is not ChatGPT - at least, not in its present form. Let me expand on what I was trying to say in the chat (which I always find hard to use – too limiting!)

LLMs, as they currently stand, do not lend themselves well to the overall aims of Testing. What I see people doing (Katalon is just one example) is a mad scramble to “plug in” some AI (like ChatGPT) to help/aid relatively minor parts of test development and tooling. That’s fine, as far as it goes, but it’s not “the thing”. It’s not the tool itself, it’s a minor add-on that lets the corp say they have AI tooling.

Broadly speaking, what we need is an AI “model” that can be trained on the AUT. Its sole purpose when initially applied is to learn, understand and map the AUT, page by page, element by element, click by click, touch by touch.

The first stage is called “discovery” → what are the possible states (of each page) of the AUT. Let the model run for however long that takes. I’m imagining many, many hours.

Second stage: Ratify the states model. This is intended to have the AI learn which parts of the AUT have potential for flakiness; in coding terms, which parts of the nascent testing model need to address “waits” and “presence”. Of course, this helps make the model become more robust. And again, let it do this for as long as it takes, long enough to expose it to variances in network response times, etc.
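To make the two stages concrete, here is a minimal sketch (all names hypothetical, not any real Katalon API) of what the discovery/ratification bookkeeping could look like: the bot logs every observed transition, and any action whose outcome varied across runs is flagged as needing waits/presence handling.

```python
# Hypothetical sketch of "discovery" + "ratification" bookkeeping.
# The bot records every (state, action, next_state) triple it observes;
# an action that led to different outcomes across runs is a crude
# flakiness signal.

class DiscoveryLog:
    def __init__(self):
        # (state, action) -> {next_state: observation count}
        self.transitions = {}

    def observe(self, state, action, next_state):
        outcomes = self.transitions.setdefault((state, action), {})
        outcomes[next_state] = outcomes.get(next_state, 0) + 1

    def flaky_actions(self):
        # Actions whose outcome varied need "waits" / "presence" checks.
        return [key for key, outcomes in self.transitions.items()
                if len(outcomes) > 1]

log = DiscoveryLog()
log.observe("login", "submit", "dashboard")
log.observe("login", "submit", "dashboard")
log.observe("login", "submit", "spinner")  # slow-network run
print(log.flaky_actions())  # → [('login', 'submit')]
```

The longer the ratification stage runs, the more network variance the log captures, and the sharper the flakiness signal becomes.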

Once these stages are complete, write a test in the normal way, or, throw one of your old tests at the model → the model explains why your test is likely to fail, and/or, how you can improve your overall approach based on its understanding of the AUT’s “quirks” which it gathered during the discovery and ratification stages.

Now let’s go back to the states I mentioned earlier. I don’t propose these remain “locked up” in the model, rather, I expect them to be made available as wrapped APIs usable by any TC… I’m imagining something like…

WebUIAI.gotoState("autBeforeLoginState")
WebUIAI.gotoState("autFillShoppingCart")
WebUIAI.verifyState("autFilledShoppingCart")

Of course, it’s down to implementation details as to whether those methods are better modeled as…

WebUIAI.State.apply(states.autBeforeLoginState)
WebUIAI.State.apply(states.autFillShoppingCart)
WebUIAI.State.assert(states.autFilledShoppingCart)

With that in place, it’s not difficult to imagine this…


WebUIAI.State.apply(states.A)
WebUIAI.State.apply(states.C)

Error: Cannot move from state "A" to state "C" while skipping state "B"
The AUT does not support this test scenario because it's not a possible 
user story or route.

Suggestion:

WebUIAI.State.apply(states.A)
WebUIAI.State.verify(states.A)

WebUIAI.State.apply(states.B)
WebUIAI.State.verify(states.B)

WebUIAI.State.apply(states.C)
WebUIAI.State.verify(states.C)

The above, I believe, contains the rudiments of a better AI-based tool than anything on offer at the present time. The key benefit, that everything it “learns” is offered back to testers as robust, well-tested, ratified testing code, is something that will prove a killer app, I’m sure.
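The skip-detection error above can be sketched in a few lines once the discovery stage has produced a state graph. This is a toy illustration (StateModel, its edge map, and the state names are all hypothetical): apply() only allows transitions along learned edges, and when a test tries to jump states it reports the route the model actually knows.

```python
from collections import deque

class StateModel:
    """Hypothetical: a state map learned during 'discovery'."""

    def __init__(self, edges):
        self.edges = edges      # state -> set of directly reachable states
        self.current = None

    def apply(self, state):
        # First state, or a directly reachable one: allowed.
        if self.current is None or state in self.edges.get(self.current, set()):
            self.current = state
            return
        path = self._shortest_path(self.current, state)
        if path:
            skipped = path[1:-1]
            raise ValueError(
                f"Cannot move from state {self.current!r} to {state!r} "
                f"while skipping {skipped!r}. "
                f"Suggested route: {' -> '.join(path)}")
        raise ValueError(f"No route from {self.current!r} to {state!r}")

    def _shortest_path(self, start, goal):
        # Plain BFS over the learned state graph.
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges.get(path[-1], set()) - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return None

model = StateModel({"A": {"B"}, "B": {"C"}})
model.apply("A")
try:
    model.apply("C")   # skips "B", like the error shown above
except ValueError as e:
    print(e)
```

The point is that the rejection message is not a generic failure; it is derived from routes the model has actually observed, so the suggestion it offers back is ratified behaviour, not a guess.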

(No, I’m not done yet :nerd_face: )

WCAG 2.0 and WAI (here and here) are not being addressed by ANY automation tool anywhere (I’m excluding in-browser plugins that help manually test tiny portions of a single webpage – tools like ANDI, for example).

I mentioned this (I think poorly) in the chat. Let me make this absolutely clear, here:

There are laws* around the world demanding conformance to WCAG 2.0. This is not a topic we can any longer brush aside. Testing and Automation software needs to get on board with this stuff NOW.

*Don’t take my word for it, just look at the list of countries/governments that have enacted laws and policies around accessibility:

“Updates in progress” ← indeed.

I repeat, there is no automation software for testing this immense area of opportunity.

And frankly, I don’t see how it can be done robustly without some kind of AI assistance.

Here is the output from ANDI for this web page – the one you’re reading:

65 alerts. Says it all.

Thanks Russ !

I agree there is more to be done. Heck, our TrueTest (atg) solution that is about to come out of beta is a great example. But did you realize we have LLM-based solutions as well?
At the end of our chat I mentioned that I think “we’ve only just begun”. I expect you’ll see AI solutions of all makes and sizes really come out next year… my guess: a 300 percent increase in product offerings from the overall market (not just from Katalon either)…

I always love to hear your thoughts Russ. Looking forward to getting you on video one of these days :sunglasses:

Thanks for hanging out my friend

Hi Russ, great ideas, as always!
I would consider your suggestions as two products even though they can be combined into one. There are tools for accessibility testing. You named ANDI for checking single pages, you can use Katalon Studio integrating with Axe for better coverage (see how). They might not cover all requirements stated by WCAG and WAI, but they at least help to automate verifying a certain number of rules.
The idea of creating a map of the AUT can be done in multiple ways. Using a bot to explore the states and paths is one of them. Please check out TrueTest, our latest product that leverages user interactions to map out all application states and transitions in user journey maps. From those maps, a bot explores the testing environment to recreate similar paths and generates test cases to cover those paths/user flows. TrueTest will then identify gaps between the two environments in terms of user flows and traffic. We have a lot of ideas for extending TrueTest’s capabilities, which I will be happy to discuss in private conversations.
Please try out the above solutions and let us know your thoughts.
Thank you very much.

3 Likes