When using AI in testing, I wonder whether it can involve collecting sensitive data from users, for example when testing a banking application. How should we ensure privacy?
I don’t think there’s a definitive answer to this question yet, as the technology is still growing and learning, so it will require lots and lots of data to be trained on before it becomes sufficient at whatever tasks we assign to it.
However, I think companies could still ensure that their customers’/consumers’ privacy remains secure by adopting a “privacy-by-design” principle that builds privacy protection into each stage of the software development lifecycle, e.g. by using data encryption.
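As one concrete flavour of privacy-by-design, teams can redact sensitive values from test data and logs before anything leaves their environment for an AI-assisted tool. This is just a minimal sketch in Python with made-up patterns and a hypothetical `redact` helper, not part of any real Katalon (or other vendor) API:

```python
import re

# Illustrative patterns only; a real deployment would need a vetted,
# much more thorough set of rules for its own data.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),  # 13-16 digit card numbers
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with labelled placeholders
    before the text is logged or sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label.upper()}>", text)
    return text

log_line = "Payment by alice@example.com with card 4111 1111 1111 1111 failed"
print(redact(log_line))
# → Payment by <EMAIL> with card <CARD> failed
```

The idea is simply that the sanitisation step sits in the pipeline by design, so raw customer data never reaches the AI component in the first place.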
Thinking about it is one thing; taking action is another.
The EU is already worried about this, and at least one country has taken action:
Ericsson detailed in a blog post how they ensured the “trustworthiness” of AI across three key interfaces (the AI input data, the AI in their network, and the AI output data) by imposing requirements on their employees based on legal obligations, customer requests, and, I quote, “what we see as best practices and attempts to make these requirements aligned to the direction in which we see industry moving”.
Now, that last bit was rather vague and raises some questions: whose best practices? And are those best practices specific to the telecommunications domain in which they operate?
It was a lot of corporate jargon, so I’ll let you decide for yourself.
Apple is following the trend: Apple becomes the latest company to ban ChatGPT for internal use • The Register
Just found this. I’m curious to see how Katalon will ensure the privacy of its customers, and even free users, with this new feature. @albert.vu @vu.tran
… after this paragraph I’m rather hyped to see how this new feature stacks up against Katalon’s competitors.
@pedro.ramsey Right now we are industry-leading, with this and many more updates coming out this year.