Recently I have watched ChatGPT spread around the world, and I really like using it. I wonder whether any AI tools and techniques can be applied to testing. Generally, what are some of the latest trends in using AI for testing? Can anyone share some examples with the forum? I really do not know how AI has been used in testing.
Other than generating code snippets prone to errors?
Perhaps the Visual Testing feature proposed by Katalon makes more sense, although it is prone to error too.
Shortly speaking, as long as a given AI is not able to (continuously) learn from (new) mistakes, it has no relevance to the testing process; it will only generate more and more wrong solutions.
It is yet another trained monkey.
Trained wrong, it will always give wrong solutions.
Trained "well", it may give acceptable solutions… for a while.
I will never use such tools.
In our case, it helps us evaluate different test cases that we can take into account for the functionalities we describe.
In terms of code, it is more about queries for library methods or variations of the same code, just to "play" a little bit.
But it is true that it is practical for non-technical profiles or those who are starting to automate.
Any more examples to consider?
I've seen a lot of shops use AI when creating test data. I've also seen a lot of shops do very "high level" code reviews using AI. ChatGPT is just one aspect of AI.
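To make the test-data idea a bit more concrete, here is a rough sketch of what LLM-assisted test data generation can look like. It assumes the official OpenAI Python client and an API key in the environment; the prompt, model name, and record fields are only illustrative, and a human should still review whatever comes back.

```python
# Sketch: asking an LLM to draft structured test data. Assumes the official
# OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. Fields and model name are illustrative.
import json
from openai import OpenAI

client = OpenAI()

prompt = (
    "Generate 5 JSON records for testing a user-registration form. "
    "Fields: name, email, country, age. Include edge cases such as "
    "very long names, unusual but valid emails, and boundary ages. "
    "Return only a JSON array, with no extra commentary."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model should do
    messages=[{"role": "user", "content": prompt}],
)

# The model may still wrap the output or drift from the schema;
# this is a sketch, not production code.
records = json.loads(response.choices[0].message.content)
for record in records:
    print(record)  # review before feeding into any test suite
```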
Years ago, and not ChatGPT related, I created a system that learned from multiple instances of CI and Jira. It helped predict future bug risks and supported future budget planning.
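Just to illustrate the general idea (this is a toy sketch, not that system): pull a few metrics per component out of CI and Jira data, then fit a simple classifier to estimate bug risk. It assumes scikit-learn, and every feature name and number below is made up; a real setup needs far more data and validation.

```python
# Toy sketch of predicting bug risk from CI/Jira-style metrics.
# All features, values, and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per component. Columns: recent CI failure rate, commits in the
# last sprint, open Jira bugs, average time-to-fix in days.
X = np.array([
    [0.05,  3,  1,  2.0],
    [0.40, 25,  9, 14.0],
    [0.10,  8,  2,  3.5],
    [0.55, 30, 12, 20.0],
    [0.02,  2,  0,  1.0],
    [0.35, 18,  7, 10.0],
])
# 1 = a high-severity bug appeared in the following sprint, 0 = it did not.
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Estimated risk for a component with fresh metrics.
risk = model.predict_proba([[0.30, 20, 6, 9.0]])[0][1]
print(f"estimated bug risk: {risk:.2f}")
```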
I've found that the best people who can really envision solutions here come with a creative mind but have very sound engineering capabilities.
Cheers.
Exactly. That's where we are right now (with ChatGPT at least).
Well, that's the key. Learning!
Our company uses AI technologies to analyze data coming from various networking appliances to help debug and predict various issues.
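To illustrate the general idea (this is not our actual pipeline), one common approach to this kind of telemetry analysis is unsupervised anomaly detection. Here is a minimal sketch assuming scikit-learn; the appliance metrics and values are made up.

```python
# Generic sketch: flag unusual appliance telemetry with an IsolationForest.
# Not any vendor's real pipeline; metrics and numbers are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: CPU %, memory %, packet error rate, latency (ms). One row per sample.
telemetry = np.array([
    [35, 60, 0.001, 12],
    [38, 62, 0.002, 11],
    [36, 61, 0.001, 13],
    [37, 63, 0.002, 12],
    [92, 95, 0.200, 480],   # looks nothing like the rest
    [34, 59, 0.001, 12],
])

detector = IsolationForest(contamination=0.2, random_state=0)
labels = detector.fit_predict(telemetry)   # -1 = anomaly, 1 = normal

for row, label in zip(telemetry, labels):
    if label == -1:
        print("possible issue:", row)
```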
So, yeah, AI can help with a lot of things, if used properly.
ChatGPT as it is now? I don't see any real value in it.
So I am treating such implementations only as demo versions, to show what they may do.
How those will evolve, time will tell.
Oh, lots of perspectives. Do you think it would be valuable to share some more AI tools and techniques and give people tutorials on how to approach them? I love to play around with those, maybe for fun or for learning.
Sure! Maybe for fun or for learning. We can use AI as a "complement". We can watch how it develops in the market and consider different points of view. Nowadays, AIs are evolving quickly; it is not only about ChatGPT.
I found an article which claimed that AI will not be replacing Manual QA Engineers any time soon…
Some of the points the author made seem pretty valid, such as:
- Limited Understanding of Context: While AI models are trained on large datasets, they still cannot fully understand the context of a specific application or business domain. This is in contrast with manual QA engineers, who can better understand the product and its business objective.
- Manual QA engineers are better at understanding users' experiences with the product in a way that AI, at the moment, cannot. An engineer can put themselves in the shoes of a user and assess how user-friendly a program, service, or application is, while AI is better suited to finding discrepancies and errors in code.
- Manual QA engineers can understand the context of bugs, which makes them better at debugging than AI. Again, AI algorithms can detect simple errors and can be taught to analyze code, but they cannot comprehend why certain conditions cause issues or how the system works as a whole.
@albert.vu Agreed. All summed up by "product knowledge" and "situational awareness".
However, if ATG can be shown to provide a wealth of coverage that would otherwise take a human many man-months to develop and ratify, there may be some orgs that promote its use to get a solid head start.
Of course, a human will be involved to ratify ATG's output.
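For anyone curious what that "head start plus human ratification" flow could look like, here is a rough sketch: ask a model to draft pytest cases for a small function and write them to a file for a person to review before they go anywhere near the suite. It assumes the OpenAI Python client; the function, model name, and prompt are placeholders, not any particular ATG product.

```python
# Rough sketch of the "head start" idea: have a model draft pytest cases,
# then save them as a draft for a human to review and ratify.
# The function under test and the prompt wording are made up.
from openai import OpenAI

SOURCE = '''
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, never below zero."""
    return max(price * (1 - percent / 100), 0.0)
'''

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Write pytest tests for this function, covering "
                   "boundary and invalid inputs:\n" + SOURCE,
    }],
)

# Draft only: a human still reviews this file before it joins the suite.
with open("test_apply_discount_draft.py", "w") as f:
    f.write(response.choices[0].message.content)
```

The point is the workflow rather than the code: the generated file is only a draft, and a person ratifies it, exactly as described above.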