Now that AI reasoning capabilities are advancing rapidly and becoming widely accessible, many argue that generative AI will usher in a new era of exploitation: more zero-days, more vulnerabilities, more sophisticated attacks, at higher frequency. The emergence of new exploitation techniques will significantly increase the number of new vulnerabilities. We have seen in the past what happens when there is an extraordinary rise in the supply of new vulnerabilities: it leads to more cyber-attacks.
However, generative AI also carries a promise. AI can write secure code far more rigorously than a human programmer, and it will keep improving exponentially, resulting in substantially more secure applications and more resilient infrastructure, less prone to potential zero-days and vulnerabilities.
At HolistiCyber, we encourage our clients to shift gears and adjust their speed of adaptation: adopting AI-assisted secure coding while staying in control with our Offensive Framework Methodology (OFM).
The human factor is a key component of our Offensive Framework Methodology (OFM). Humans are a crucial element across every sphere of the cyber defense array: humans to whom we grant access, such as clients, partners, affiliates, and vendors; humans to whom we grant privileged permissions, such as administrators; human users who are granted rights to access, alter, read, or execute resources; and human developers whom we trust to write our applications.
People make errors: they neglect, forget, or unintentionally ignore secure coding principles, mishandle deserialization, and so on. Therefore, when generative AI starts hunting for zero-days and vulnerabilities, it will easily uncover errors made, or rather neglected, by human programmers. We have seen that some classes of code flaws took decades to discover. For threat actors, these are the most valuable diamonds: they reside in code that is "considered" secure, "clean" of breaches for many years, precisely because they took so long to find. For instance, crafting an attack vector based on a race condition requires hard work, whether for a threat agent or for a security team testing the code. Even with security testing tools and automation, it demands precise control over the details and powerful simulation capabilities to compute a high volume of permutations. If the race condition exploits a time-of-check to time-of-use (TOCTOU) window, it requires precise control over thread or process execution.

Another example is business logic vulnerabilities. Automated tools struggle to detect them because they do not directly violate syntax, memory safety, or input validation; they require a deep understanding of what the code should do. That is exactly the kind of understanding and analysis AI will be able to perform (e.g., reasoning about e-commerce or banking business logic). Both flaw classes are sketched in the example below.
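To make these two flaw classes concrete, here is a minimal, hypothetical Python sketch. All function names (check_and_read, open_then_check, order_total) are illustrative, not drawn from any real codebase: the first function contains a classic TOCTOU window between the permission check and the file open; the last one is a business logic flaw that no syntax, memory safety, or input validation checker would flag.

```python
import os

# --- TOCTOU race condition (time-of-check to time-of-use) ---
# The permission check and the open() are two separate system calls.
# An attacker who swaps the file in the window between them (e.g.,
# replacing it with a symlink to a sensitive file) bypasses the check.
def check_and_read(path: str) -> str:
    if os.access(path, os.R_OK):           # time of check
        with open(path) as f:              # time of use: file may have changed
            return f.read()
    raise PermissionError(path)

# Safer pattern: open first, then operate on the already-opened
# descriptor, so there is no window in which the target can be swapped.
def open_then_check(path: str) -> str:
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)  # POSIX flag: refuse symlinks
    with os.fdopen(fd) as f:
        return f.read()

# --- Business logic flaw: syntactically perfect, logically broken ---
# Nothing here violates memory safety or input validation, yet passing
# quantity=-5 yields a negative total, i.e., the shop pays the customer.
def order_total(unit_price: float, quantity: int) -> float:
    return unit_price * quantity           # missing: quantity > 0 check
```

Finding the first flaw by brute force means winning a microsecond-scale race over many permutations; finding the second means knowing that quantities in an order must be positive. Both demand exactly the contextual reasoning attributed to AI above.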
AI will be more powerful in actual exploitation, in areas where contemporary automation struggles, such as heap spraying to bypass ASLR and DEP, or non-trivial deserialization exploits. And AI will be more powerful at detecting vulnerabilities, such as side-channel vulnerabilities (e.g., Spectre, Meltdown), which demand microarchitectural knowledge that AI can absorb along with all the exhausting detail. The sketch below shows why deserialization in particular rewards this kind of capability.
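As a simplified illustration, here is a minimal Python sketch using the standard pickle module; the class name Payload is hypothetical and the payload is deliberately benign (it only prints a message). Real "non-trivial" deserialization exploits chain gadgets already present in the target's dependencies rather than defining their own class, which is what makes them hard for today's automation to find.

```python
import pickle

# pickle invokes __reduce__ during deserialization, letting a crafted
# byte stream nominate an arbitrary callable to run. Here the "payload"
# just prints a message; a real exploit would run attacker-chosen code.
class Payload:
    def __reduce__(self):
        return (print, ("code executed during unpickling!",))

malicious_bytes = pickle.dumps(Payload())

# The victim never instantiates Payload; merely loading the bytes
# triggers the callable.
pickle.loads(malicious_bytes)  # prints: code executed during unpickling!
```

Defenses such as refusing pickle for untrusted input, or restricting which globals an Unpickler may resolve via its find_class hook, close this particular door; the harder, gadget-chaining variants are where AI-scale search changes the economics.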
Organizations that do not adjust their speed of adaptation will fall behind and face an enormous influx of new zero-days and vulnerabilities. The key to tackling this phenomenon is calibrating your speed of adopting AI-assisted secure coding and aligning it with a secure STD STP security assurance program. Depending on the organization's architecture, business impact analysis, and digital environments, cybersecurity governance can maintain a balanced program of AI secure coding combined with human-in-control sign-off and testing. Time is of the essence. We urge you to consider fostering these new approaches.
For more information, feel free to contact one of our account managers or sales team.