There are concerns among Software Testing practitioners that metrics derived from test cases continue to be used as the primary measures of product quality, testing effectiveness and efficiency, and management efficiency. There is truth in this concern, and we need to look at it closely: how to use test cases effectively as artifacts, how to manage them, and how not to use them.
As the product changes over its life cycle (frequently during the initial stages, because of dynamically changing requirements, features, and design), the verification and validation artifacts produced by Software Testing also change. These need not be test cases; they may be point checks along the way to make sure the product is being built to do what it is supposed to do, and that it does so in the way it is supposed to. Test cases that validate and verify the end product are written after the product is developed.
Often, test cases written for the end product are used to generate metrics that do not necessarily reflect testing effectiveness or product quality. For example, the number of test cases passed against the total number of test cases is a very popular ‘metric’ on test management dashboards. A rising percentage of passing test cases may give the team a sense of confidence, but it is a vanity metric: it does not indicate the real quality of the product or of the individual features and modules involved.
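To make the vanity-metric point concrete, here is a minimal sketch (with hypothetical test IDs and severity labels — none of this comes from any particular tool) showing how two builds with an identical pass rate can carry very different risk:

```python
# Two hypothetical builds with the same pass percentage but very different
# quality: a raw pass rate hides *which* checks failed and how severe they are.

def pass_rate(results):
    """Fraction of test cases that passed."""
    return sum(1 for r in results if r["passed"]) / len(results)

def critical_failures(results):
    """IDs of failed checks marked critical."""
    return [r["id"] for r in results if not r["passed"] and r["severity"] == "critical"]

# Build A: the only failure is a cosmetic check.
build_a = [
    {"id": "TC-1", "passed": True,  "severity": "critical"},
    {"id": "TC-2", "passed": True,  "severity": "critical"},
    {"id": "TC-3", "passed": False, "severity": "cosmetic"},
    {"id": "TC-4", "passed": True,  "severity": "minor"},
]

# Build B: same 75% pass rate, but the failure is in a critical flow.
build_b = [
    {"id": "TC-1", "passed": False, "severity": "critical"},
    {"id": "TC-2", "passed": True,  "severity": "cosmetic"},
    {"id": "TC-3", "passed": True,  "severity": "minor"},
    {"id": "TC-4", "passed": True,  "severity": "minor"},
]

print(pass_rate(build_a) == pass_rate(build_b))  # True: 0.75 for both
print(critical_failures(build_a))                # []
print(critical_failures(build_b))                # ['TC-1']
```

The dashboard number is identical for both builds; only a severity- or feature-aware view distinguishes them.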
There are approaches that do away with test cases altogether and use artifacts such as mind maps to note down test scenarios; these are easy to change as the product changes, because they are not as hard-coded as test cases. While this approach makes life easier for test engineers, management will find it difficult to gauge how much testing has been done and to get a feel for quality.
An important purpose of an artifact like a test case is to enable repeatability and automation. Testing is not a ‘do once and forget’ activity. Many portions of testing need to be repeated during regression: when a defect is introduced in a stable piece of software, we re-run the set of test activities done earlier to make sure the fix for the defect does not affect the surrounding functionality or the non-functional aspects. Regression is usually done through automated checks, and for automated checks you need to code the test scenarios (which are effectively test cases). Keeping your test cases both as mind maps and as test code in the automation suite is duplication, and, as with any duplicated data, errors can creep in during the duplication.
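One way to avoid that duplication is to let the scenario list itself live in the automation suite as data driving the checks. A minimal sketch, assuming a hypothetical product function `apply_discount` (invented here purely for illustration):

```python
# The scenario table below *is* the test case list: it is readable as
# documentation and executable as the automated regression check, so there
# is no second copy (mind map, spreadsheet) to drift out of date.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical product code under test."""
    if not (0 <= percent <= 100):
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# (name, price, discount %, expected result)
SCENARIOS = [
    ("no discount",      100.0,   0, 100.0),
    ("typical discount", 100.0,  25,  75.0),
    ("full discount",    100.0, 100,   0.0),
]

def test_discount_scenarios():
    for name, price, pct, expected in SCENARIOS:
        assert apply_discount(price, pct) == expected, name
```

Test frameworks such as pytest support this style natively (parameterized tests), but the idea is framework-independent: one artifact, maintained in one place, serving both humans and the automation suite.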
To summarize, these are the considerations for test cases or test-case-like artifacts:
- They should be easy to write and maintain
- They should be loosely coupled, so that changes are easily incorporated (this deserves a mention of its own, apart from maintainability)
- They should be easily translatable to automation
- They should feed into other methods that indirectly indicate product quality to management (rather than a direct count of test cases passed and failed)
- They should link easily with the validation and verification artifacts used in other phases of software development (such as requirements, architecture, design, and code)
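The last consideration, linking test cases to other artifacts, can be made mechanical rather than manual. A minimal sketch (the requirement ID `REQ-101` and the `validate_password` stub are hypothetical, invented for illustration) of tagging automated checks with the requirement they verify:

```python
# Record which requirement each automated check covers, so a traceability
# report (requirement -> covering checks) can be generated from the suite
# itself instead of being maintained by hand.

TRACEABILITY = {}

def verifies(requirement_id):
    """Decorator that registers a check against a requirement ID."""
    def wrap(fn):
        TRACEABILITY.setdefault(requirement_id, []).append(fn.__name__)
        return fn
    return wrap

def validate_password(password: str) -> bool:
    # Stub standing in for real product code.
    return len(password) >= 8

@verifies("REQ-101")  # hypothetical requirement: reject weak passwords
def test_rejects_empty_password():
    assert validate_password("") is False

@verifies("REQ-101")
def test_rejects_short_password():
    assert validate_password("abc") is False

# TRACEABILITY now maps 'REQ-101' to both check names, ready for a report.
```

With such links in place, a gap report (requirements with no covering checks) falls out of the same data, which is a far more useful management signal than a raw pass count.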
As always, the disclaimer is that your specific scenario of software product development might be unique, and context drives what is being followed in your scenario. The above are general guidelines as best practices for test cases or test-case-like artifacts.
Feel free to get in touch with me to discuss and get my inputs on designing test cases and other test artifacts for your organisation’s scenario. Happy to help!