David Colwell, VP of AI/ML at Tricentis, attended the Worldwide AI Webinar to discuss the challenges and solutions in software testing. Here are the highlights of his talk.
Challenges in software testing
Software testing, as David explained, is the process used to validate that software is of high quality and ready for release.
As software technology advanced and it became easier to ship apps rapidly on cloud-based systems, delivery times began to shrink. Engineers concluded that testing was impeding the delivery of innovative software, so they reached for automation.
Yet test automation ran into a different problem: the robots executing the tests were fast, but they were not good at adapting to change. A Google study found that merely 1.21% of test failures actually found defects, while the majority were flaky tests, tests that cannot learn and adapt.
The problem we were left with was that many of these automated, robotic validations were static and weren't learning.
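To illustrate the kind of brittleness David describes, here is a minimal, hypothetical sketch (not from his talk) of a static UI check: the site URL, the fixed wait, and the "buy-now" element id are all assumptions, and any redesign or slow page load makes the test fail without finding a real defect.

```python
# A hypothetical static UI check: fast to run, but unable to adapt to change.
import time

from selenium import webdriver
from selenium.webdriver.common.by import By


def test_checkout_button_static():
    driver = webdriver.Chrome()
    try:
        driver.get("https://shop.example.com")  # hypothetical app under test
        time.sleep(2)  # fixed wait: flaky whenever the page loads slowly
        # Static locator: renaming this id in a redesign fails the test,
        # producing a "failure" that finds no defect.
        button = driver.find_element(By.ID, "buy-now")
        assert button.is_displayed()
    finally:
        driver.quit()
```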
Eventually, it became clear that these solutions were not good enough; the state of the art had to be extended with techniques that did not yet exist. This is where, according to David, testing started to go beyond the world of functional, logical, algorithmic programming and reach into machine learning.
3 recommendations for a better project
David Colwell had three suggestions for any company that was looking to develop and test its own software:
Be careful of what you research
David’s first piece of advice was to be careful of what you research.
Since researching from scratch is incredibly time-consuming, if a solution already exists on the market, it is much better to buy it, integrate it, and see whether you get value out of it early on.
Then, if you believe you can turn this into a market differentiator for your company, solve the problem first with something that already exists, and focus your research on the differentiator from there.
Opt for different data
Speaking from experience at Tricentis, David shared that his team thought that increasing the size of the data pool would drastically improve the accuracy of their machine learning models. In practice, they were adding more of the same type of data to the stack, which did nothing to increase its diversity.
They made far more progress by building synthetic data generation algorithms instead of labeling more data. The lesson learned is to approach your data as an engineering problem with an engineering solution, and to look for ways to increase the diversity of the data rather than its volume.
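The sketch below shows the general idea, not Tricentis's actual pipeline: the attribute pools (widget types, themes, locales, resolutions) and function names are hypothetical. Because each example is generated rather than collected, its label is known by construction, and sampling attributes independently spreads coverage across rare combinations instead of piling up more copies of the common case.

```python
# A minimal sketch of diversity-first synthetic data generation (assumed
# attribute pools; not an actual Tricentis implementation).
import random

WIDGETS = ["button", "checkbox", "dropdown", "text_field", "slider"]
THEMES = ["light", "dark", "high_contrast"]
LOCALES = ["en-US", "de-DE", "ja-JP", "ar-SA"]
RESOLUTIONS = [(1920, 1080), (1366, 768), (414, 896)]


def generate_synthetic_example(rng: random.Random) -> dict:
    """Sample each attribute independently so rare combinations appear,
    and record the ground-truth label for free (no manual labeling)."""
    widget = rng.choice(WIDGETS)
    return {
        "widget_type": widget,  # label is known because we generated it
        "theme": rng.choice(THEMES),
        "locale": rng.choice(LOCALES),
        "resolution": rng.choice(RESOLUTIONS),
    }


def generate_dataset(n: int, seed: int = 42) -> list[dict]:
    rng = random.Random(seed)
    return [generate_synthetic_example(rng) for _ in range(n)]


if __name__ == "__main__":
    data = generate_dataset(1000)
    # Measure diversity as distinct attribute combinations, not row count.
    combos = {(d["widget_type"], d["theme"], d["locale"], d["resolution"])
              for d in data}
    print(f"{len(data)} examples covering {len(combos)} distinct combinations")
```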
Persistence pays in machine learning
As David pointed out, persistence, while difficult, pays off in machine learning in a way that it doesn't in many other software endeavors.
In traditional software construction, if you meet a difficult problem, it's often a good idea to pivot and try to find an easier way to solve it.
In machine learning, you are already operating at the bleeding edge of technology. So if you can push through and find that breakthrough, you will be able to offer something unique, which allows you to be first to market.