Due to time pressure and the urgency to deliver, testing activities are sometimes skipped. In this article, we will take a deep look into such practices and how they ultimately affect end-users and customers.
The arguments put forward for skipping testing activities take several forms:
- There is no time to test. We will find out about any potential bugs when customers encounter them. Testing is not important; delivery is. We should get this into the hands of customers as soon as possible.
- The issues that might occur are minor. They wouldn’t affect the major functionality of this product/module, so we can skip testing.
- Testing is a waste of time. During development itself, we have made sure there is enough quality using BDD, ATDD, TDD, and the like. We have also ensured quality through ‘shift-left’ practices, and we have automation in place.
… and so on.
Skipping testing activities now leads to heavy cost and heartburn later
We recently came across a major platform meltdown in which millions of computers were affected because a faulty patch for a driver was rolled out. Many health and travel websites around the world went down, and millions of people were affected. The outage could have been avoided if a simple test of the patch had been performed to make sure it was working fine, but that test was skipped, leading to major downtime. The cost and effects of the downtime were felt widely, and several people from the testing and quality community raised concerns about why that test was not done. In fact, on average, every six months or so there is an incident that leads to such downtime, and in many cases it is because of insufficient or no testing. So we should realize that skipping testing activities for the sake of saving time is not a prudent thing to do. We should take the time to test and make sure that the product works correctly and as expected.
Minor functionality introduction is not the same as a minor quality issue
In these days of incremental check-ins and CI/CD, it has become common to think, ‘Oh, I am just checking in a minor feature or a snippet of code. It won’t affect the functionality of the module or create any issues.’ This thinking is wrong. Even very minor functionality can have implications for the overall working of the system. Thorough integration testing, along with system and solution testing, needs to be done to see whether anything broke. The implications of the change need to be analyzed carefully, based on the interoperability of the features and modules, and the relevant test scenarios, as decided by the team, need to be planned and executed.
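To make this concrete, here is a minimal, hypothetical sketch in Python (the function names and values are invented for illustration, not taken from any real codebase). A one-line ‘minor’ change to a shared helper passes its own unit test, yet silently breaks a dependent module that nobody touched; only a test that exercises the two pieces together catches it.

```python
# Hypothetical example: a "minor" change to a shared helper.
# Before the change, format_amount returned a plain number string ("42.50").
# The "minor" new feature prepends a currency symbol ("$42.50").

def format_amount(value: float) -> str:
    return f"${value:.2f}"  # the one-line "minor" change

# The helper's own unit test was updated with the change, so it passes.
def test_format_amount_unit():
    assert format_amount(42.5) == "$42.50"

# A dependent module, untouched in this check-in, still expects the old format.
def parse_invoice_total(formatted: str) -> float:
    return float(formatted)  # float("$42.50") raises ValueError

# Only a test that exercises both modules together exposes the breakage.
def test_invoice_roundtrip_integration():
    assert parse_invoice_total(format_amount(42.5)) == 42.5

if __name__ == "__main__":
    test_format_amount_unit()             # passes: the change looks harmless in isolation
    test_invoice_roundtrip_integration()  # raises ValueError: the "minor" change broke a neighbour
```

The unit test alone would have let this check-in through; it is the integration-level scenario, planned around how the modules interoperate, that reveals the problem.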
Skipping testing because of ‘shift-left’ and development practices
For the sake of fast releases, testing is sometimes labelled ‘manual regression testing’ and belittled. When confronted with data and proof, the next excuse is, ‘Oh, you could just do exploratory testing.’ Testing is not just exploratory. None of the shift-left or development practices like BDD, TDD, or ATDD can replace thorough testing in the pursuit of great quality. Those practices are all good for building with better quality, but they can never assure quality. For that matter, even testing cannot ‘assure’ quality: quality can only be analyzed through testing, never assured.
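As a small, hedged illustration (the function and values are invented for this sketch): code built test-first can have all of its TDD tests green and still carry a defect those tests never probed, because TDD can only check what someone thought to specify. It takes testing of the built product, here an exploratory probe of an unspecified input, to reveal it.

```python
# Hypothetical sketch: a function developed test-first (TDD).
# Every test written during development passes, yet quality is not assured.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return price * (1 - percent / 100)

# The TDD suite written alongside the code: all of these pass.
def test_apply_discount_tdd_suite():
    assert apply_discount(100.0, 50) == 50.0
    assert apply_discount(200.0, 25) == 150.0
    assert apply_discount(80.0, 0) == 80.0

# An exploratory tester tries an input the specification never mentioned:
# a discount above 100% yields a negative price, i.e. the shop pays the
# customer. No development-time practice surfaced what nobody thought to ask.
def test_exploratory_edge_case():
    assert apply_discount(100.0, 150) >= 0.0  # fails: returns -50.0

if __name__ == "__main__":
    test_apply_discount_tdd_suite()  # passes: development-time checks are green
    test_exploratory_edge_case()     # fails: testing reveals what TDD did not assure
```

The point is not that TDD is bad; it is that a green development-time suite analyzes only the questions asked during development, which is exactly why it cannot ‘assure’ quality.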
Conclusion
If there’s pressure to skip testing, or any type of testing such as module, integration, system, solution, interoperability, or regression testing, or specialized areas like performance, security, or accessibility, when they are relevant to the situation at hand, please show data or evidence that the testing is required and push back. Skipping testing at any level is not healthy for the product or its quality. If your organisation needs an assessment of the types of testing to be done, or assistance with any type of testing, please feel free to get in touch with me. Glad to help.