In the quest for Software Quality, Software Security is an important pillar, and I have been writing regularly on security of late. An interesting perspective came up recently when someone said that handling security is akin to playing a game of chess. And then there is the recent impact of so-called “AI” (read: LLMs), which could throw a monkey wrench into the (already monkeyed-up) works! Let’s look at some considerations on the psychology of security in this blog post.
First of all, I agree with the view that software security is a people game, like chess. Automate as much as you can, but attackers will always find a new way into the system through a social approach. Social is the key; in fact, cybersecurity treats social engineering as a core concern. A whole lot of social engineering can be thwarted by plain human vigilance (not clicking suspicious links, not answering irrelevant questions, not responding to dubious emails, and so on). This is all neat and fine as long as the attacks are limited in scope, but when defending your system feels like playing chess against multiple opponents attacking from various angles at once, fatigue naturally sets in and mistakes happen.
Now, automation can play a huge role in helping out in those tough, repetitive situations. Personally, I am a huge fan of Robotic Process Automation (RPA), where software robots execute solid, pre-identified business rules. It takes away the fatigue, the rules can be iteratively refined based on observed behavior, and there is no ambiguity about how a given situation is handled. Sounds wonderful to me, but AI, and generative AI at that?
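To make the contrast concrete, here is a minimal sketch of the kind of deterministic rule a software robot might apply to incoming email metadata. The rules, keywords, and field names are hypothetical illustrations, not any particular RPA product's API; the point is that the same input always produces the same decision.

```python
# Hypothetical deterministic rules an RPA bot might apply to email metadata.
# Every name and rule below is illustrative, not a real product's API.

SUSPICIOUS_TLDS = {".zip", ".xyz", ".top"}            # example deny-list
TRUSTED_DOMAINS = {"example.com", "partner.example"}  # example allow-list

def classify_email(sender: str, subject: str, has_attachment: bool) -> str:
    """Apply fixed business rules in order; the first match decides."""
    domain = sender.rsplit("@", 1)[-1].lower()

    # Rule 1: known-good senders pass straight through.
    if domain in TRUSTED_DOMAINS:
        return "deliver"

    # Rule 2: an unknown sender on a suspicious top-level domain is quarantined.
    if any(domain.endswith(tld) for tld in SUSPICIOUS_TLDS):
        return "quarantine"

    # Rule 3: urgency keywords plus an attachment from an unknown sender
    # get escalated to a human reviewer.
    if has_attachment and any(w in subject.lower() for w in ("urgent", "invoice", "password")):
        return "escalate"

    # Default: no rule fired, so route to the normal review queue.
    return "review"

if __name__ == "__main__":
    print(classify_email("billing@pay-now.xyz", "URGENT invoice", True))  # quarantine
    print(classify_email("alice@example.com", "Lunch?", False))           # deliver
```

Run it twice, or a thousand times, and the answers never change; that determinism is exactly what lets the rules be audited and refined iteratively.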
Generative AI is a next-word-driven technology that guesses its next output (a word) based on probability. It is great at generating text from the data it has, but can we apply that to a security situation?
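For comparison with the rule-based sketch above, here is a toy illustration of that core mechanic: picking the next word by weighted probability. The tiny distribution is invented for illustration; a real LLM derives its probabilities from billions of learned parameters.

```python
# Toy illustration of next-word selection by probability.
# The distribution below is invented; a real model computes it from context.
import random

# Hypothetical probabilities for the word following "reset your ..."
next_word_probs = {
    "password": 0.55,
    "account": 0.25,
    "router": 0.15,
    "expectations": 0.05,
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick one candidate word, weighted by its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The same prompt can yield a different continuation on every run.
print([sample_next_word(next_word_probs) for _ in range(5)])
```

The same input can produce different outputs on different runs, which is precisely the kind of ambiguity I worry about in a security context.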
So, these are my major issues:
- The data an LLM has could be incomplete or outright wrong, yet decisions end up being made on top of it
- Probability (on which artificial neural networks are built) is not a substitute for human heuristics
- Generative AI is not an effective method for suggesting security solutions
For a pillar as important to software quality as security, we need as much assurance as possible. In an already heuristic situation like social engineering, adding the ambiguity of generative AI does not help devise a solution; it works against one. Thus I would conclude that, for automation in software security situations involving security psychology, RPA is a better choice than an LLM.
For detailed consulting related to software quality and testing, feel free to get in touch with me.