RAD Security Combines AI With Behavioral Analytics to Improve Cybersecurity
RAD Security this week at the Black Hat USA 2024 conference revealed it has added artificial intelligence (AI) capabilities to its cloud detection and response (CDR) platform as part of an ongoing effort to reduce dependencies on signatures that need to be developed before threats can be detected.
Additionally, the company has added a Findings Center to track incidents, along with additional support for open source images and the ability to track version details in the RAD Open Source Catalog.
Finally, the RAD Security platform is now available as an add-on to the Amazon Elastic Kubernetes Service (EKS).
RAD Security CTO Jimmy Mesta said AI based on multiple large language models (LLMs) that the company has fine-tuned provides a more accurate method for detecting threats, one that should dramatically reduce the number of false positive alerts generated by platforms that rely on signatures. RAD Security already relies on behavioral analytics to detect threats; those capabilities are now being augmented with generative AI technologies, he added.
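The core idea behind behavioral detection, as opposed to signature matching, can be illustrated with a toy sketch. This is a hypothetical simplification for illustration only, not RAD Security's actual implementation: a baseline of the processes a workload normally runs is learned during an observation window, and anything outside that baseline is flagged as anomalous.

```python
class WorkloadBaseline:
    """Toy behavioral baseline for a single workload.

    During a learning window, record the processes the workload
    normally runs; afterward, flag any process not in that set.
    """

    def __init__(self):
        self.seen = set()

    def learn(self, process: str) -> None:
        # Record normal behavior observed during the baseline window.
        self.seen.add(process)

    def is_anomalous(self, process: str) -> bool:
        # No signature needed: anything outside the learned
        # baseline is treated as a deviation worth inspecting.
        return process not in self.seen


# Example: a web workload that normally runs nginx and sh.
baseline = WorkloadBaseline()
baseline.learn("nginx")
baseline.learn("sh")
```

A real system would baseline far richer signals (syscalls, network destinations, file access) and decay stale entries, but the principle is the same: detection keys off deviation from observed behavior rather than a pre-written signature.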
The addition of AI makes it easier to identify, for example, attacks involving multiple stages that all target the same vulnerability. Cybersecurity teams can then opt to either quarantine the affected workloads or terminate them altogether to close the loop between detection and remediation, noted Mesta.
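Correlating multiple stages against one vulnerability, then choosing a response, can be sketched as follows. The `Alert` shape, stage names and the quarantine-versus-terminate thresholds here are hypothetical choices for illustration, not the platform's actual logic:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Alert:
    workload: str  # e.g. a pod or container identifier
    cve: str       # vulnerability the activity targets
    stage: str     # e.g. "recon", "exploit", "persistence"


def correlate(alerts):
    """Group alerts by (workload, CVE).

    Multiple distinct attack stages hitting the same vulnerability
    on the same workload suggest a staged attack rather than noise.
    """
    stages = defaultdict(set)
    for a in alerts:
        stages[(a.workload, a.cve)].add(a.stage)
    return {key: s for key, s in stages.items() if len(s) >= 2}


def respond(stages):
    # Illustrative policy: quarantine (isolate but preserve for
    # forensics) at two stages, terminate outright at three or more.
    return "terminate" if len(stages) >= 3 else "quarantine"
```

For instance, "recon" plus "exploit" alerts against the same CVE on one workload would correlate into a single staged-attack finding and trigger a quarantine, while a lone alert on another workload would be left below the correlation threshold.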
Of course, not every change in behavior represents a cybersecurity event. AI tools will also make it simpler to distinguish between malicious and benign activity, said Mesta.
Ultimately, the goal is to eliminate the need for signatures as part of an effort to reduce the overall level of noise being generated by cybersecurity tools, he added.
It’s not clear at what rate organizations are embracing AI to improve cybersecurity, but after some initial skepticism, many cybersecurity professionals are rapidly approaching a point where they prefer to work for organizations that provide access to these tools. The more low-level toil these tools eliminate, the more time cybersecurity professionals will have to focus on threats that actually endanger the business versus tracking down yet another set of false alarms.
The challenge, of course, is finding the funding needed to acquire these tools before cybercriminals use the same AI technologies to increase both the volume and sophistication of the cyberattacks being launched. In effect, cybersecurity teams are now involved in an AI arms race with adversaries that typically have a lot more financial resources at their disposal.
On the plus side, it’s getting easier than ever to replace legacy cybersecurity platforms with modern platforms that are infused with those capabilities, said Mesta.
Regardless of approach, the pace at which cyberattacks are being launched is already far greater than just about any cybersecurity team can handle without relying on increased automation. AI is simply the next logical wave of automation. Over time, organizations that lack these capabilities will find themselves victimized more frequently as cybercriminals discover more weaknesses to exploit.