Many of my security colleagues and friends have asked me whether ChatGPT and similar AI technologies will really change the cyberwarfare game as we know it. My answer is yes. AI-generated code has the potential to drive groundbreaking changes in the development and security industries, with far-reaching repercussions - both positive and negative. A great tool for malicious attackers can be a game changer for defenders as well. As a security researcher, these are the kinds of technologies that both excite me and keep me up at night. In this blog, I will break down some of the arguments surrounding ChatGPT's impact on the security community.
Technological sophistication not required
It is very likely that ChatGPT is already being used by malicious actors across different attack vectors. Just as it simplifies processes in other fields, AI can simplify the manual steps attackers must take to build attacks, eliminating the need for deep technical understanding. It will become much easier for attackers to undertake sophisticated, widespread and rapidly executed campaigns, and finding exploitable flaws in code will be easier thanks to this technology. Attackers are already leveraging AI for social engineering attacks like phishing, using ChatGPT to write messages in different languages or craft messages that are more credible in the eyes of a target. While these tools have significant value for attackers, their benefits apply equally to defenders and cybersecurity researchers. For example, ChatGPT can be a highly effective tool for defenders looking for vulnerabilities in code: if the technology can perform static code analysis, it becomes another great defensive tool for security research.
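To make the static-analysis idea concrete, here is a minimal sketch of the kind of check an AI-assisted (or plain rule-based) scanner performs. The vulnerable snippet, the `SQLI_PATTERN` regex and the `find_sqli_candidates` helper are all hypothetical illustrations, not part of any real tool; a crude pattern match like this only hints at what a model reasoning over code could flag.

```python
import re

# Hypothetical snippet a defender might scan: the SQL query is built
# with %-string formatting, a classic injection pattern.
VULNERABLE_SNIPPET = '''
def get_user(cursor, username):
    query = "SELECT * FROM users WHERE name = '%s'" % username
    cursor.execute(query)
'''

# Crude heuristic: a quoted string containing SELECT, followed by a
# closing quote and the %-formatting operator.
SQLI_PATTERN = re.compile(
    r'["\'].*\bSELECT\b.*["\']\s*%', re.IGNORECASE
)

def find_sqli_candidates(source: str) -> list[int]:
    """Return 1-based line numbers that look like injectable SQL."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if SQLI_PATTERN.search(line)
    ]

# The formatted-query line is flagged as a candidate finding.
print(find_sqli_candidates(VULNERABLE_SNIPPET))
```

The point is not the regex itself - real static analyzers, and language models even more so, reason about data flow rather than surface patterns - but it shows the shape of the task both attackers and defenders can now hand to an AI.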
Data that is lost isn’t dead
Another example of AI's dual benefit to both attackers and defenders concerns data from previous breaches. Data breaches often release enormous amounts of information that floats around dormant for years. To turn this dormant data into something malicious, attackers must analyze it at a refined level, drawing clear connections within the massive amount of information. AI could accelerate that work by taking a huge data set and surfacing connections between patterns, people and vulnerable information from the breach. But just as this can complement the attackers' efforts, it can also help researchers understand and preempt those efforts and identify entities that have been breached.
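The "making connections" step above can be sketched in a few lines. The breach dumps and field names below are invented toy data, and `correlate_by_email` is a hypothetical helper; real correlation at breach scale involves fuzzy matching and far messier records, but the core idea - joining separate leaks on a shared identifier to build a richer profile - looks like this:

```python
from collections import defaultdict

# Toy records standing in for two separate (hypothetical) breach dumps.
breach_a = [
    {"email": "alice@example.com", "password_hash": "5f4dcc3b"},
    {"email": "bob@example.com", "password_hash": "e99a18c4"},
]
breach_b = [
    {"email": "alice@example.com", "phone": "+1-555-0100"},
    {"email": "carol@example.com", "phone": "+1-555-0199"},
]

def correlate_by_email(*dumps):
    """Merge records that share an email address across dumps."""
    profiles = defaultdict(dict)
    for dump in dumps:
        for record in dump:
            profiles[record["email"].lower()].update(record)
    return dict(profiles)

profiles = correlate_by_email(breach_a, breach_b)
# alice appears in both dumps, so her merged profile combines the
# password hash from one breach with the phone number from the other.
print(sorted(profiles["alice@example.com"]))
```

This is exactly the kind of tedious cross-referencing that AI can do at scale - for an attacker assembling targets, or for a researcher identifying which entities have been exposed.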
We are only beginning to understand how advanced AI capabilities like OpenAI's new chatbot, ChatGPT, can be turned against our code and bring previous breaches back to life. Keep in mind that the next iteration of ChatGPT, GPT-4, is expected to be released soon and may represent a very big leap in attacker capabilities. Security professionals and developers would do well to keep up with this AI revolution, and with the attackers leveraging it.
About the author
Omer Yaron is the Head of Research at Enso Security, the first Application Security Posture Management (ASPM) tool used daily by AppSec teams to enforce, manage and scale a robust AppSec program, all without interfering with development.