Fortinet Adds Generative AI Tool to Security Operations Portfolio
Fortinet today added a generative artificial intelligence (AI) tool to its portfolio to eliminate a range of manual tasks that security operations teams would otherwise need to perform.
John Maddison, chief marketing officer for Fortinet, said Fortinet Advisor will initially be made available for the security information and event management (SIEM) and security orchestration, automation and response (SOAR) platforms that Fortinet provides before being incorporated into the rest of the company’s offerings.
Through a natural language interface, Fortinet Advisor summarizes incident analyses, generates syntax to optimize threat intelligence queries, offers remediation guidance and creates playbook templates.
Fortinet Advisor is based on a mix of large language models (LLMs) that Fortinet invokes across multiple use cases. It joins more than 40 other tools that the company has previously augmented with various types of AI models on behalf of 700,000 customers, noted Maddison.
Generative AI uses machine learning algorithms to train an LLM that generates content in response to prompts from an end user. The goal is to automatically create text, images and code based on the data used to train the model. Fortinet extends the capabilities of various LLMs by exposing them to security data it curates, specifically to enable security operations teams to automate a wide range of tasks.
The overall goal is to reduce the manual toil and drudgery that, over time, drives increased staff turnover, noted Maddison.
It’s not clear whether generative AI might eventually democratize security operations, but it’s already apparent that the level of expertise required to be an effective member of a cybersecurity team is going to decline. In addition, the amount of time it takes to onboard a new team member should also be reduced.
Generative AI is already being used to automate a wide range of tasks and processes, so applying it to security operations is inevitable. As always, cybersecurity teams will need to verify the accuracy of the results generated, but as generative AI technologies continue to advance, the amount of effort required to, for example, create an incident report should drop considerably. In fact, as the overall level of cognitive load on cybersecurity professionals continues to decline, the number of individuals willing to fill job vacancies might increase.
At the very least, generative AI should make it simpler for other members of an IT team to help execute security operations tasks as more functions become automated.
One way or another, generative AI will soon be applied to every cybersecurity function. In fact, most cybersecurity professionals are not going to want to work for organizations that don’t provide them with AI tools to help level what today is a decidedly uneven playing field. The issue now is determining how quickly cybersecurity teams can gain access to those tools at a time when cybercriminals are undoubtedly making similar investments. The only real difference is that the financial resources cybercriminals have at their disposal far exceed the budget a typical cybersecurity team is allocated.