It is encouraging that governments around the world are taking cognizance of vulnerabilities in software and pushing for reforms that hold vendors liable for finding and fixing vulnerabilities in their products before governments buy them. This is indeed a good move, and it places Software Testing at the center of preventing such vulnerabilities.
The article I read about this makes it clear that for software to be considered by the US DoD, there should be no known vulnerabilities. The key word is “known”, because no one can prove that any software is free of vulnerabilities. We can only test rigorously and, to the best of our abilities, make vulnerabilities known if they are present. Unknown vulnerabilities remain unknown until they are found, and they could very well surface and create issues after the software is deployed; there is nothing we can do about that. Given the level of dependencies in the software supply chain, with so many open-source components, we are aware of the risks. With distributed cloud deployments, the risks add up or multiply in unpredictable ways. But a first step towards preventing software from shipping with known vulnerabilities is welcome, and should be implemented. The regulation under consideration relies on the National Institute of Standards and Technology’s National Vulnerability Database (NVD), which lists known vulnerabilities.
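To make the idea of “no known vulnerabilities” concrete, here is a minimal sketch of auditing a dependency list against a set of known vulnerabilities. The package names, versions, and CVE identifiers are entirely made up for illustration; a real check would query the NVD rather than a hard-coded list.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KnownVulnerability:
    cve_id: str
    package: str
    affected_below: str  # versions strictly below this are affected

def parse_version(v: str) -> tuple:
    """Turn '1.2.0' into (1, 2, 0) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def audit(dependencies: dict, known_vulns: list) -> list:
    """Return the IDs of known vulnerabilities matching the installed versions."""
    findings = []
    for vuln in known_vulns:
        installed = dependencies.get(vuln.package)
        if installed and parse_version(installed) < parse_version(vuln.affected_below):
            findings.append(vuln.cve_id)
    return findings

# Hypothetical data for illustration only.
deps = {"examplelib": "1.2.0", "otherlib": "3.4.1"}
vulns = [
    KnownVulnerability("CVE-0000-0001", "examplelib", "1.3.0"),
    KnownVulnerability("CVE-0000-0002", "otherlib", "3.0.0"),
]
print(audit(deps, vulns))  # examplelib 1.2.0 is below 1.3.0, so it is flagged
```

Even a toy check like this shows the limitation the paragraph describes: the audit can only flag what is already in the list, never the unknowns.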
Are Software Vulnerabilities Just About Security?
That brings to my Software Testing mind the question of whether ‘no vulnerabilities’ should be limited to security. Though security vulnerabilities are a good place to start, my humble opinion is that we should not stop there. In real-world scenarios, software outages occur because of a combination of factors:
- Software failing to scale under load and stress
- Networks unable to handle the traffic
- User interface malfunctions (not security-related per se) complicating matters
- Databases/datastores malfunctioning
- … and so on
As seen in many recent real-world outages, multiple factors interacted with each other to cause the failures. While it is great to start with the known-vulnerabilities list and make sure those are mitigated, we should assess the sturdiness of software from a holistic perspective, taking all aspects of the solution into consideration, not just security.
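Because outages arise from interacting factors, a test plan should cover not just single faults but their combinations. The sketch below, using illustrative labels for the factors listed above, enumerates every single fault and every pairwise interaction as candidate scenarios.

```python
from itertools import combinations

# Failure modes drawn from the factors listed above (illustrative labels).
failure_modes = [
    "load/stress overload",
    "network congestion",
    "UI malfunction",
    "datastore malfunction",
]

# Single faults plus every pairwise interaction -- the combinations
# from which real outages tend to emerge.
scenarios = [frozenset(combo)
             for size in (1, 2)
             for combo in combinations(failure_modes, size)]

print(len(scenarios))  # 4 single faults + 6 pairs = 10 scenarios
```

Even four factors yield ten scenarios; as factors grow, the combinations multiply, which is exactly why a purely security-focused checklist undercounts the real risk surface.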
Solution Testing Is The Key
I cannot stress enough the importance of rigorous solution testing in areas like defense, or for that matter in any large enterprise deployment that affects the lives of millions of people. Solution testing should not be trivialized to end-to-end acceptance testing of a few criteria. Each and every way the system could break should be thought through and tested, and mitigation plans and procedures should be put in place. Needless to say, there will be multiple protocols involved (not just security protocols), and we need to understand the various parameters of each of these protocols and how they behave in different scenarios. As the testing effort is initiated, it is important to involve the relevant domain experts who know the systems and protocols deeply, rather than relying only on generic software developers and testers.
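One way to act on “understand the various parameters of each protocol” is to build an explicit test matrix over those parameters. The parameters below are hypothetical placeholders; a domain expert would supply the real protocols and the values worth varying.

```python
from itertools import product

# Hypothetical protocol parameters to vary during solution testing.
params = {
    "transport": ["tcp", "udp"],
    "tls_version": ["1.2", "1.3"],
    "payload_size": ["small", "large"],
}

# Full cartesian product: one test scenario per parameter combination.
test_matrix = [dict(zip(params, values)) for values in product(*params.values())]

print(len(test_matrix))  # 2 * 2 * 2 = 8 scenarios
```

A full cartesian product grows quickly, so in practice the domain experts would prune it to the combinations that are actually reachable in the deployed system.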
The Way Forward
To summarize, assessing security vulnerabilities against a database of known issues is a good starting point, but it is just that. Software-hardware solutions are far more complex and behave in unpredictable ways, so rigorous Software Testing practices have to be applied to foresee and plan for every scenario that can be anticipated. Domain experts are key to shaping the test strategy for solution testing, making the software sturdy and resilient under real-world conditions.
Please feel free to get in touch with me to discuss testing strategies for solution testing.