I am a fan of running only the relevant checks (automated/scripted) at scale, because running every automated check takes a lot of time when you have a large code base with changes being committed frequently. The real challenge is coming up with an accurate dependency graph, i.e. knowing which modules need to be checked when a specific file changes. Makefiles can and should help here, but have you felt that it takes more than just the file dependencies make provides?
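For what it's worth, GNU make can dump its full dependency database with `make --print-data-base` (`-p`), usually combined with `--question` (`-q`) so nothing actually gets built. Here is a minimal sketch, in Python, of scraping target-to-prerequisite pairs from that dump. The parsing is deliberately naive (it ignores pattern rules, variables, and anything make computes dynamically), so treat it as a starting point rather than a robust parser:

```python
import re
import subprocess
from collections import defaultdict

def make_dependency_graph(makefile_dir: str) -> dict[str, set[str]]:
    """Naively scrape target -> prerequisites pairs from GNU make's
    database dump (`make -pq` builds nothing, just prints the database)."""
    dump = subprocess.run(
        ["make", "--print-data-base", "--question"],
        cwd=makefile_dir,
        capture_output=True,
        text=True,
    ).stdout

    graph: dict[str, set[str]] = defaultdict(set)
    for line in dump.splitlines():
        # Match explicit rules like "app.o: app.c app.h"; skip comments,
        # variable assignments, and make's special targets (.PHONY etc.).
        m = re.match(r"^([^#.:=\s][^:=]*):(?!=)\s*(.+)$", line)
        if m:
            target, prereqs = m.group(1).strip(), m.group(2).split()
            graph[target].update(prereqs)
    return dict(graph)
```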
There seems to be no straightforward answer to this that I know of right now. As a tester, I am skeptical, and it's good to be skeptical in this case too. Some approaches in the industry speak about a dependency graph (and I am not sure whether that graph is derived from the makefile), but from both a functional and a non-functional standpoint, I feel it is good to add dependencies to this 'dependency graph' through human oversight, in addition to what's in the makefile.
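To make that concrete, here is one way the human-curated overlay could look: a hand-maintained mapping merged on top of whatever came out of make, plus a reverse walk that answers "which check suites should run for these changed files?". All the names here (`CURATED_DEPS`, `CHECKS_BY_MODULE`, the modules and scripts) are illustrative assumptions, not any known tool, and for simplicity the toy example identifies modules with make targets:

```python
from collections import defaultdict

# Hypothetical human-curated edges that make knows nothing about, e.g. a
# config file read at runtime, or a cross-service contract.
CURATED_DEPS = {
    "billing/rates.yaml": {"billing", "invoicing"},
    "api/schema.json": {"api", "client_sdk"},
}

# Hypothetical mapping from a module to the check suites guarding it.
CHECKS_BY_MODULE = {
    "billing": ["checks/billing_rules.sh"],
    "invoicing": ["checks/invoice_totals.sh"],
    "api": ["checks/api_contract.sh"],
    "client_sdk": ["checks/sdk_smoke.sh"],
}

def reverse_graph(graph: dict[str, set[str]]) -> dict[str, set[str]]:
    """Invert target -> prerequisites into prerequisite -> dependents."""
    rev: dict[str, set[str]] = defaultdict(set)
    for target, prereqs in graph.items():
        for p in prereqs:
            rev[p].add(target)
    return rev

def checks_to_run(changed_files, make_graph):
    """Collect everything affected by the change, then map it to checks."""
    rev = reverse_graph(make_graph)
    affected: set[str] = set()
    stack = list(changed_files)
    while stack:  # transitive closure over the reversed make graph
        node = stack.pop()
        for dependent in rev.get(node, ()):
            if dependent not in affected:
                affected.add(dependent)
                stack.append(dependent)
    for f in changed_files:  # human-added dependencies layered on top
        affected |= CURATED_DEPS.get(f, set())
    suites = []
    for module in sorted(affected):
        suites.extend(CHECKS_BY_MODULE.get(module, []))
    return suites
```

With this sketch, a change to `api/schema.json` would pull in the API contract and SDK smoke checks even if no makefile rule mentions that file, which is exactly the kind of edge I would want a human to be able to add.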
It is not going to be simple, given the inter-dependencies of the design considerations we usually deal with, and with everything automated there are inherent risks! But it is indeed a worthwhile effort to at least take a look at this and come up with some plausible considerations. It would help when you have thousands of scripts waiting to run and it would not be prudent to run every check for every single change.
Note that I am keeping 'testing' out of this, because testing involves a combination of human effort and automation. Just to be clear, I am only talking about the automated checks planned for the pipeline in this case.
If you would like to talk about this, give me a buzz.