Why GenAI fails at full SOC automation
The growing prevalence of generative AI has led some in the security community to hope that it could eliminate the need for professionals to work long overnight shifts in a Security Operations Center (SOC) by taking over the role currently filled by human analysts. AI-enabled tools can certainly help reduce analyst workloads: every SOC environment already uses algorithms to help filter out false positives. As of now, however, no automation can fully replace a human being.
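To make the filtering point concrete, here is a minimal sketch of the kind of rule-based false-positive suppression SOC pipelines already automate. The alert fields, IP addresses, and thresholds are hypothetical illustrations, not any specific product's schema.

```python
# Hypothetical suppression rules applied before an alert reaches an analyst.
KNOWN_SCANNERS = {"10.0.0.5", "10.0.0.6"}  # assumed internal vulnerability scanners

def is_likely_false_positive(alert: dict) -> bool:
    """Return True if simple heuristics suggest the alert is noise."""
    # Internal vulnerability scanners trigger many benign detections.
    if alert.get("src_ip") in KNOWN_SCANNERS:
        return True
    # A low-severity alert that fired only once is usually noise.
    if alert.get("severity", 0) <= 2 and alert.get("count", 1) == 1:
        return True
    return False

alerts = [
    {"src_ip": "10.0.0.5", "severity": 8, "count": 3},    # scanner traffic
    {"src_ip": "203.0.113.9", "severity": 1, "count": 1}, # one-off, low severity
    {"src_ip": "203.0.113.9", "severity": 9, "count": 5}, # worth a human's time
]
escalated = [a for a in alerts if not is_likely_false_positive(a)]
```

Rules like these cut volume, but they only encode judgments an analyst has already made; they do not replace the analyst making new ones.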
There are a few reasons for this. The first is training data: data in a SIEM environment is voluminous and varied, and collecting and preparing it is expensive and time consuming. Enterprises generate a colossal quantity of data daily, and ingesting and processing it remains a persistent problem even for humans. At present, using this data to train AI is extremely resource intensive, making timely delivery of results difficult. The second is tool integration. There is currently no consistent data-formatting standard across security tools, which can make it difficult for them to talk to each other. An AI tool would have to process and comprehend data in sometimes radically different formats, a problem that scales with the number of tools it must interface with.
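The integration problem can be illustrated with a small sketch: two tools report the same event in radically different formats, and anything consuming both, whether a human, a playbook, or an AI, must first normalize them into a shared schema. The field names and log formats below are hypothetical.

```python
import json

def parse_kv_log(line: str) -> dict:
    """Parse a key=value style log line, as emitted by many syslog-era tools."""
    return dict(pair.split("=", 1) for pair in line.split())

def normalize(event: dict, mapping: dict) -> dict:
    """Rename tool-specific fields to a shared schema via a per-tool mapping."""
    return {common: event[local] for common, local in mapping.items() if local in event}

# Tool A emits flat key=value text; Tool B emits nested JSON (both hypothetical).
raw_a = "src=192.0.2.1 dst=198.51.100.7 act=blocked"
raw_b = '{"source": {"ip": "192.0.2.1"}, "destination": {"ip": "198.51.100.7"}, "action": "allowed"}'

event_a = normalize(parse_kv_log(raw_a), {"src_ip": "src", "dst_ip": "dst", "action": "act"})
b = json.loads(raw_b)
event_b = {"src_ip": b["source"]["ip"], "dst_ip": b["destination"]["ip"], "action": b["action"]}
```

Every additional tool needs its own parser and mapping, which is exactly the scaling burden described above.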
Although the technology is not yet at the level where AI can take over from an analyst completely, there is still hope that AI-based tools can be leveraged to reduce the workload. Researchers anticipate that by 2027, generative AI will be able to reduce false positive rates in security detection by as much as 30%. However, given the likelihood of new attacks that themselves make use of generative AI, analysts in the field expect that more human labor, not less, will be required.