The fanfare around artificial intelligence may be dying down, but according to security experts, its impact on defense strategy, red teaming and other aspects of cybersecurity could be long-lasting.
Large language models have the potential to cut the time spent on threat analysis by as much as 80 percent, estimates Vicente Diaz, threat intelligence strategist at VirusTotal, a crowdsourced threat intelligence platform acquired by Google LLC.
“There’s all the hype around AI, but the thing is that it actually works for different stuff,” Diaz said. “We are making advances. It’s not like we’ve solved security yet, of course, but we are making everyone’s life easier. In this case, what we are analyzing is how LLMs can help us analyze malware binaries, reverse engineering, which basically means spending a lot of time and needing a lot of expertise to make sense of what the malware is doing. And, well, if an LLM can do it for us, that’s a nice step forward.”
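That kind of triage is already practical to prototype. Below is a minimal sketch of LLM-assisted reverse-engineering triage, assuming access to an OpenAI-compatible chat-completions API; the model name, system prompt and Ghidra-style snippet are illustrative assumptions, not VirusTotal's actual pipeline.

```python
# Minimal sketch: ask a chat model to explain a decompiled function.
# Assumes the openai Python client (>=1.0) and an OPENAI_API_KEY in the
# environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def summarize_decompiled(code: str) -> str:
    """Ask the model what a decompiled function appears to do."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You are a malware reverse engineer. Summarize the "
                        "behavior of this code and flag anything suspicious."},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content

# Example input: pseudocode recovered by a decompiler such as Ghidra.
snippet = """
void sub_401000(char *path) {
    HANDLE h = CreateFileA(path, GENERIC_READ, 0, 0, OPEN_EXISTING, 0, 0);
    /* ... reads the file and exfiltrates its contents ... */
}
"""
print(summarize_decompiled(snippet))
```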
Diaz spoke with theCUBE Research’s John Furrier and Savannah Peterson at mWISE 2024, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed AI’s contributions to security infrastructure and code generation. (* Disclosure below.)
AI transforms red teaming, threat testing
Cybersecurity is currently experiencing the first wave of use cases for LLMs. These include identifying malware behavior and analyzing key parts of its code.
“Everything you need to do for this pen testing, for this red teaming, you can use LLMs to create code for you, to analyze the stuff and to give you the best way to go, to give you an answer to something that is not trivial,” said Diaz. “And little by little, we are getting to the point that they are able to orchestrate everything for us and find answers to complex questions … We can expect that we can to some extent maybe fully automate this in the future. Maybe we can have a constant red teaming exercise going on and evolving.”
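The “constant red teaming exercise” Diaz envisions is essentially a feedback loop: a model proposes the next test, a harness runs it in isolation, and the outcome shapes the following round. Here is a minimal sketch of that loop, where propose_test() and run_in_sandbox() are hypothetical stand-ins for a real LLM call and real sandbox tooling.

```python
# Minimal sketch of a continuous red-teaming loop. Both helpers below are
# hypothetical stubs standing in for an LLM call and a sandboxed test run.
import random

def propose_test(history: list[str]) -> str:
    """Stand-in for an LLM call that drafts the next test case,
    conditioned on what previous rounds found."""
    probes = ["port-scan", "sql-injection", "phishing-template", "priv-esc"]
    return random.choice(probes)

def run_in_sandbox(test: str) -> bool:
    """Stand-in for executing the generated test against an isolated
    replica of the target; True means a weakness surfaced."""
    return random.random() < 0.2

history: list[str] = []
for round_no in range(10):  # in practice this loop would run continuously
    test = propose_test(history)
    found = run_in_sandbox(test)
    history.append(f"round {round_no}: {test} -> {'hit' if found else 'miss'}")
    if found:
        print(f"[!] weakness surfaced by {test!r}; filing for remediation")
print("\n".join(history))
```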
Social engineering attacks have been among the most common threats, and according to Diaz, security teams are still working out how much data is being generated for malicious purposes. He emphasizes that the human factor remains crucial to cybersecurity, though the exact human role in the process will likely transform as AI grows more advanced.
“Artificial intelligence was just a boring thing more related to math,” he said. “And now you see the real implications that look like magic … We had constraints in the past. One of the biggest ones was the size of the prompt that we could use. Now that this is changing and as LLMs are evolving very fast, we don’t have these constraints … With all this together, we are starting to get better and better results.”
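Before long-context models, analysts worked around the prompt-size constraint Diaz describes with a map-reduce pattern: split a long disassembly listing into chunks that fit the context window, summarize each chunk, then summarize the summaries. A minimal sketch follows, assuming a rough four-characters-per-token estimate and a hypothetical llm_summarize() stand-in for a real model call.

```python
# Minimal sketch of context-window chunking for a long disassembly listing.
# The 4-characters-per-token ratio is a rough assumption, and llm_summarize()
# is a stub standing in for a real model call.
def chunk(text: str, max_tokens: int = 4000, chars_per_token: int = 4):
    step = max_tokens * chars_per_token
    return [text[i:i + step] for i in range(0, len(text), step)]

def llm_summarize(text: str) -> str:
    """Stand-in for a real model call."""
    return text[:80] + "..."  # placeholder summary

def summarize_long_listing(listing: str) -> str:
    partials = [llm_summarize(c) for c in chunk(listing)]  # map step
    return llm_summarize("\n".join(partials))              # reduce step

print(summarize_long_listing("mov eax, 1\n" * 10000))
```

As context windows grow to hundreds of thousands of tokens, much of this scaffolding becomes unnecessary, which is the shift Diaz is pointing to.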