In the current era of rapid software development, there is little time to test the software, and even when there is, there is no time to create valuable artifacts afterwards. If that is so, do we really care to audit the relevance of our test cases and test data? The question is "Why?" Why should one care about an existing set of test cases?
Recently I got an opportunity to re-learn one of the very basics of software testing. We are part of a legacy product team whose product is used by a limited but significant number of users. For a long time, the product was well tested and up to the mark for this close circle of users. Then an acquisition happened, and the marketing team of the parent company started adding big new clients. Suddenly, support issues started pouring in, and it naturally took a while to understand the cause. A satisfied customer base and a stable product were the reasons the company decided to go aggressively for revenue multiplication. But who wants to face red-faced customers? After all, it is all about business and values.
Well, a few top managers' jobs were at stake, as the issue was initially put down to some unknown managerial or implementation lapse. On a closer look at the support issues, the business and quality teams collaboratively identified that users with fresh eyes had unearthed valid defects in flows that had never been exercised or tested. What went wrong? The development team, coders and QA included, revisited the process and found that almost 90% of the test cases had been in use for a long time with the same steps and the same dataset, and those cases were still passing. New pairs of eyes, with different perceptions and business flows, were the real trigger for these issues. Whenever a new module or feature was added to the product, the team had regressed the legacy modules with the same set of test cases. A rather old but still applicable theory, the Pesticide Paradox, was identified as the root cause. A couple of theories and methodologies exist to handle this situation. A majority of researchers blame test automation as the main cause of the pesticide paradox effect, since the same set of steps is used to regress every new release.
In this paper, I would like to challenge the contemporary theory of handling the pesticide paradox effect. Precaution is always better than cure: with a new approach and vision, we can use automation itself as a weapon to fight the effect. A few techniques are discussed here which not only make your test suite more robust but are also proven ways to treat the pesticide paradox effect.
Extreme Test Script Development
Borrowing from the Extreme Programming proposal, pairs of testers develop the test scripts together, while continuous review of the scripts is done by a different set of testers. This collaborative approach to both static and dynamic execution of the test scripts will give extra strength to your test suite.
Use of Dynamic and Random Dataset
Avoiding static keywords could be one of the major strengths of your automation suite. With static keywords, only a limited path can be tested, while the behavior of the software may vary drastically with one changed setting or one different flow. Preparing dynamic and random datasets will make sure your software is tested against the majority of possibilities.
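As a minimal sketch of this idea, the generator below produces a randomized user record on every run instead of a fixed fixture; the field names and value ranges here are purely illustrative, not from any real product. Passing a seed keeps a failing run reproducible.

```python
import random
import string

def random_user(seed=None):
    """Generate a randomized user record instead of a static fixture.

    The seed makes a run reproducible: log it with each test run so a
    failure found with random data can be replayed exactly.
    """
    rng = random.Random(seed)
    name = "".join(rng.choices(string.ascii_lowercase, k=rng.randint(3, 12)))
    return {
        "username": name,
        "email": f"{name}@example.com",
        "age": rng.randint(13, 99),
        # Varying the locale exercises flows a single static keyword never hits.
        "locale": rng.choice(["en_US", "de_DE", "ja_JP", "ar_SA"]),
    }
```

Feeding each regression run a freshly seeded dataset like this means the same script walks a slightly different path every release, which is exactly what blunts the pesticide effect.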
Simulate End User Scenarios
Try to achieve the basic principle of successful testing, i.e. simulating end-user behavior as closely as possible. Accepting the limitation that 100% coverage is unattainable, we can still bring our dataset closer and closer to the real world. Modern techniques such as genetic algorithms can be used to develop an effective, optimized dataset that better depicts real user experience.
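To make the genetic-algorithm idea concrete, here is a hedged sketch: test vectors are evolved toward whatever fitness function you supply (e.g. branch coverage achieved, or similarity to logged production inputs). The encoding, rates, and the `fitness` callable are all assumptions for illustration.

```python
import random

def evolve_dataset(fitness, pop_size=20, gene_len=8, generations=50, seed=0):
    """Evolve integer test vectors toward higher fitness.

    `fitness` maps a vector to a score, e.g. coverage hit or closeness
    to real user behavior. Returns the best vector found.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 9) for _ in range(gene_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, gene_len)       # single-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # occasional mutation
                child[rng.randrange(gene_len)] = rng.randint(0, 9)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Because the survivors of each generation are carried over unchanged, the best vector never regresses; the dataset only drifts closer to whatever "real world" the fitness function encodes.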
Continuous Maintenance of Manual and Automated Cases
Regularly converting manual cases to automated cases gives your manual test team extra room to perform additional exploratory and monkey testing. Newly converted automated test cases, run with additional datasets, can also serve as shock absorbers.
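A small sketch of such a conversion, with a hypothetical `authenticate` standing in for the real product call: the original manual scenario becomes one row in a data table, and the "shock absorber" rows are the extra datasets the manual case never tried.

```python
def authenticate(username, password):
    """Stand-in for the real product call (assumed for illustration)."""
    return bool(username) and len(password) >= 8

# Row 1 is the original manual scenario; the rest are additional datasets
# that act as shock absorbers around it.
LOGIN_CASES = [
    ("alice", "s3cretpass", True),
    ("", "s3cretpass", False),        # empty username
    ("bob", "short", False),          # password below minimum length
    ("søren", "pässwörd123", True),   # non-ASCII input the manual case never tried
]

def run_login_suite():
    """Return the (username, password) pairs whose outcome did not match."""
    return [(u, p) for u, p, want in LOGIN_CASES
            if authenticate(u, p) != want]
```

Each release, new rows can be appended to `LOGIN_CASES` without touching the test logic, so the converted case keeps gaining reach instead of going stale.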
Don't Compromise on the Number of Test Cases
Who cares how much time your automated test cases take to complete? I don't see a valid point in someone asking "Why is your automation regression test suite taking <N> hrs.?" The question might be fair for a quick smoke or build acceptance run. But when the responsibility is to certify the build for release, I see no reason to compromise on the number of test cases executed.