In this article, Greg Sypolt talks in brief about the role of AI as a software testing assistant. I chose this piece because it combines a field I’m interested in (AI and machine learning) with the content of my software testing course. I am interested in AI task automation already, so a piece that dovetails these two topics is a perfect fit. The author has chops as well — he oversaw the conversion from manual to automated testing at his company, and offers training to help other teams transition to automation.
Sypolt starts off by outlining three uses of AI in testing:
- Automatic test case generation that reduces the level of effort (which he abbreviates as “LOE”) while maintaining consistent standards.
- Generating test code or pseudocode from user stories; a user story is a use case or use sequence for some kind of product (software or hardware). A rough sketch of what that might look like follows this list.
- Codeless test automation, or testing without writing tests as code.
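To make the second idea concrete, here is a minimal sketch of turning a user story into a test skeleton. This is my own illustration, not Sypolt’s tooling: the story template, the function name, and the pytest-style stub it emits are all assumptions I made for the example.

```python
# Sketch (my own, not from the article): parse an
# "As a <role>, I want <goal> so that <benefit>" story into a test stub.
import re

def story_to_test_skeleton(story: str) -> str:
    """Turn a templated user story into a pytest-style test stub."""
    match = re.match(
        r"As an? (?P<role>.+?), I want (?P<goal>.+?) so that (?P<benefit>.+)",
        story, re.IGNORECASE,
    )
    if not match:
        raise ValueError("Story does not follow the expected template")

    # Build a test name from the goal, e.g. "log in" -> test_log_in
    name = "test_" + re.sub(r"\W+", "_", match["goal"]).strip("_").lower()
    return (
        f"def {name}():\n"
        f"    # Role: {match['role']}\n"
        f"    # Expected benefit: {match['benefit']}\n"
        f"    # TODO: arrange, act, assert\n"
        f"    assert False, 'not yet implemented'\n"
    )

print(story_to_test_skeleton(
    "As a customer, I want to log in with my email so that I can see my orders"
))
```

A real AI-driven generator would obviously do far more than pattern-match a template, but even this toy version shows how a structured story can seed a test with a name, a purpose, and a to-do list.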
He then outlines the necessity of properly training the testing bots, and some of the difficulties that may be involved:
- Identifying the proper training algorithms.
- Collecting a massive quantity of useful data.
- Ensuring that bots behave in a reasonable fashion from a given set of inputs, and ensuring that they exhibit new behavior (generate new tests) when the inputs change.
- The training process never really ends; each new set of tests and results gives the bots another data point to learn from (a feedback loop along the lines of the sketch below).
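That last point is essentially a loop: generate tests, run them, feed the results back in as training data, repeat. Here is a bare-bones sketch of that loop; the `TestBot`, `generate_tests`, and `run_suite` pieces are placeholders I invented for illustration, not anything the article specifies.

```python
# Sketch of the never-ending training loop (placeholder logic throughout).
from dataclasses import dataclass, field

@dataclass
class TestResult:
    test_name: str
    passed: bool

@dataclass
class TestBot:
    history: list[TestResult] = field(default_factory=list)

    def learn_from(self, results: list[TestResult]) -> None:
        # Every run, pass or fail, becomes another data point.
        self.history.extend(results)

    def generate_tests(self) -> list[str]:
        # Placeholder: a real bot would use its accumulated history here.
        return ["test_login", "test_checkout"]

def run_suite(tests: list[str]) -> list[TestResult]:
    # Placeholder runner: pretend everything passes.
    return [TestResult(name, passed=True) for name in tests]

bot = TestBot()
for release in range(3):      # in practice this loop never really ends
    results = run_suite(bot.generate_tests())
    bot.learn_from(results)   # feed results back in as training data
```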
I firmly believe that we are at the very start of a revolution in machine learning. Why not apply those principles to software testing? There are, of course, a couple of issues that arise which Sypolt didn’t mention: quality of tests and accuracy of tests.
A point firmly pushed by other articles and texts I’ve read is that quality is more important than quantity. There can be little difference between running ten tests and running one hundred tests if the extra ninety don’t teach us much of anything. The trick isn’t to create an AI that merrily runs thousands upon thousands of unit tests; it’s to create one that identifies the important tests which reveal faults we don’t know about and confines itself to executing exactly those. A tiny sketch of that selection idea appears below.
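One simple way to picture “run only the important tests” is to score each test by how often it has revealed a fault before and execute only the top of that ranking. The scoring rule here is my own simplification for illustration, not something the article prescribes.

```python
# Sketch: rank tests by past fault-revealing history and keep only the top few.
fault_history = {
    "test_login": 5,         # times this test has caught a real fault
    "test_checkout": 1,
    "test_profile_page": 0,
    "test_search": 3,
}

def select_tests(history: dict[str, int], budget: int) -> list[str]:
    """Return the 'budget' tests most likely to reveal a fault."""
    ranked = sorted(history, key=history.get, reverse=True)
    return ranked[:budget]

print(select_tests(fault_history, budget=2))  # ['test_login', 'test_search']
```

A real system would need a smarter notion of “likely to reveal a fault” than a raw count, but the principle is the same: spend the test budget where it teaches us the most.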
It’s also very important to ensure that the AI has learned properly and is up to standards — and that means testing the AI. Which means testing the AI that tests the AI, and it’s digital turtles all the way down.
I can take away two things from this article. Firstly, that it is reasonable to combine two fields I’m interested in (AI and testing), and that resources exist or will exist to support working with them together. Secondly, that the field of testing is constantly and rapidly changing. Additional learning is crucial, just as AI systems continue to learn from each new piece of data.
From the blog CS@Worcester – orscsblog by orscsblog and used with permission of the author. All other rights reserved by the author.