Software Testing: AI-assisted Context-Driven Testing

Many organisations are probably already doing this informally, but I wanted to put it down formally in this blog post: “Software Testing: AI-assisted Context-Driven Testing”.

I am not going to talk much about AI and how it can help testing in general, because there are plenty of articles, blogs, and ‘conference talks’ on that already. What I would like to do here is delve a bit into how generative AI can play a role in context-driven testing, and into its advantages and pitfalls. The CEOs and CTOs of the world can then fill the gaps, connect the dots, and apply the idea in their own contexts.

Context-driven Testing and Exploratory Testing

I’ll take a moment to clarify where context-driven testing and exploratory testing sit in the scheme of things. I don’t want to use a diagram here, as I think it would be overkill, and a whole bunch of people would come up with variants of it, which I would like to avoid. I would just say that what is called ‘exploratory testing’ is a method within context-driven testing. The term ‘exploratory testing’ stems from the idea of exploring the system ‘free-handedly’, without a scripted plan, but of course within the set limits of what we have set out to test, not ‘wool-gathering’, ‘puttering around’, or going ‘out-and-about’. Unfortunately, the phrase ‘exploratory testing’ has been misused in place of ‘human testing’ by a set of people in the Agile world. Human testing is not just about exploration. Exploration is one tool in the gamut of context-driven testing. So when we say ‘testing’, context-driven testing is a methodology for how to go about testing, and exploratory testing is one of the tools in that methodology. With that said, we will now look at context-driven testing.

Context-driven Testing Revisited

A bit of an introduction is in order for those who are uninitiated in context-driven testing. The definition from the Context-Driven School of Testing goes something like this:

Context-Driven Testing (CDT) is a software testing philosophy that emphasizes tailoring the testing strategies and techniques to the specific context of a project. This context includes factors such as project goals, risks, budget, technical environment, regulatory requirements, and the skills and experience of the testing team.

The core principles are based on the ideas of Cem Kaner, James Bach, and Bret Pettichord, the founders of the Context-Driven School of Testing.

While this definition is great from a high-level view of the project (as you can see from the various factors listed), I am more interested in the specific context of where we are while testing a system or a module. As you might appreciate, a scripted set of checks can only accomplish so much in aiding the testing, because the context in which those checks (automated or otherwise) are executed is read by the human. So the human, looking at the context, is the best decision-maker for orienting the further course of the test(s).

How can AI help?

Generative AI can look at the knowledge it has been fed, based on the questions it is posed, and come up with recommendations on how to orient the further course of the test(s). Of course, this is subject to several conditions, which I want to spell out (a minimal sketch of what this could look like follows the list):

  • Whether the knowledge base of the generative AI is of limited and curated scope. A limited and curated scope gives a higher probability of correct recommendations than an environment that is constantly changing and non-curated (like the whole Internet).
  • Whether the knowledge base curation is done by personnel who are well-versed in the business/industry/domain that the system deals with.
  • Whether the generative AI system is fine-tuned to effectively ‘fuzz’ around with the various permutations and combinations that are possible in a given context.
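
To make this concrete, here is a minimal sketch of what such an assistant could look like, assuming the conditions above are met. Everything in it is illustrative: `query_model` is a hypothetical stand-in for whatever generative-AI system you run, and the curated knowledge base is reduced to a plain list of notes.

```python
# A minimal, illustrative sketch: the human supplies the testing context
# (charter and observations so far), and the model suggests where to take
# the test(s) next. `query_model` is a hypothetical stand-in for your
# generative-AI system; it is stubbed out here so the sketch runs as-is.

from dataclasses import dataclass, field


@dataclass
class TestingContext:
    charter: str                                           # what this session set out to test
    observations: list[str] = field(default_factory=list)  # findings so far


def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a generative-AI system that is
    fine-tuned on (or retrieves from) a limited, curated knowledge base."""
    # A real implementation would call your model here; this stub just
    # returns a canned answer so the sketch is runnable.
    return ("- Try boundary values on the discount field\n"
            "- Re-test the flow with an expired login session")


def recommend_next_tests(ctx: TestingContext, curated_notes: list[str]) -> str:
    """Assemble the current testing context into a prompt and ask the model
    for recommendations on the further course of the test(s)."""
    prompt = (
        "You are assisting a context-driven testing session.\n"
        f"Charter: {ctx.charter}\n"
        "Observations so far:\n"
        + "\n".join(f"- {o}" for o in ctx.observations)
        + "\nCurated domain notes:\n"
        + "\n".join(f"- {n}" for n in curated_notes)
        + "\nSuggest the next few test ideas as a short list."
    )
    return query_model(prompt)


ctx = TestingContext(
    charter="Explore the checkout flow for pricing errors",
    observations=["Discount applied twice when the coupon is re-entered"],
)
print(recommend_next_tests(ctx, ["Coupons are single-use per order"]))
```

Note that the human still supplies the context (the charter and the observations); the model only suggests directions, which is the point of calling this assistance rather than automation.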

So what’s the verdict?

In my personal opinion, building and maintaining a generative-AI model that can assist in context-driven testing seems simple, but it is not easy. This is especially true when you are in an agile environment, with requirements changing frequently and the design/implementation in flux. It is also true of early-stage startups, where you might need to pivot a lot. In these environments it might be technically possible to keep updating the model as things change, but from a business and economic perspective, the ROI would be tough to justify.

But in environments where a system is relatively stable, with its core functionality intact, it should be possible to build a decent generative-AI model to assist in context-driven testing. Of course, the model can learn from the humans if something changes and update its recommendations accordingly.
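
As a sketch of that human-in-the-loop learning, and assuming the same toy knowledge base as in the earlier sketch, feeding the tester’s verdicts back into the curated notes could look like this (again, `record_feedback` and the list-based knowledge base are illustrative, not a real pipeline):

```python
# Sketch of the human-in-the-loop update mentioned above: when the tester
# overrides or corrects a recommendation, the verdict is recorded back into
# the curated knowledge base so future recommendations reflect it. The
# list-based knowledge base is illustrative; a real system would use a
# retrieval store or a fine-tuning pipeline.

def record_feedback(knowledge_base: list[str], recommendation: str,
                    tester_verdict: str) -> None:
    """Append the tester's verdict so it becomes part of future context."""
    knowledge_base.append(
        f"Recommendation {recommendation!r} -> tester verdict: {tester_verdict}"
    )


kb = ["Coupons are single-use per order"]
record_feedback(kb, "Re-test the flow with an expired login session",
                "Not applicable: sessions never expire in this product")
print(kb[-1])
```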

One has to keep in mind that the model is also learning, and so testing should not rely totally on the model’s recommendations. Human oversight is essential. To me, generative-AI assistance in context-driven testing seems to be an ‘interesting add-on’ rather than a core, reliable partner in testing. And it comes with its own risks, which I will delve into in a different post or ‘conference talk’.

Hope you got a good glimpse of ‘Software Testing: AI-assisted Context-Driven Testing’. If you are thinking about introducing AI into your testing efforts, or you are already into it and need recommendations on how to make it better, please feel free to get in touch with me. Glad to help!
