For all the noise surrounding “agentic AI” in cybersecurity, security operations centers are still wrestling with the same fundamental questions: What does AI genuinely improve today? Where does it fall short? How can organizations tell the difference?
In the recent Sophos webinar, “The Agentic SOC: Separating Signal from Noise,” Kyle Falkenhagen, Sophos’ senior vice president of product management, laid out one of the clearest, most candid explanations of both sides of the story. Crucially, he focused on what defenders are actually experiencing in the field — not the hype cycle promises that have overwhelmed the market.
In a landscape where almost every cybersecurity vendor now claims to offer some form of AI and many more have “agent‑washed” traditional tools, that distinction has become more important than ever.
The market swell vs. operational reality
Despite the explosion of AI‑labeled tools, relatively few organizations are using AI in production today. As Falkenhagen noted, Gartner estimates that fewer than a quarter of enterprises currently rely on AI‑enhanced cybersecurity tools, even though the hype around them is louder than ever.
That mismatch creates a dilemma. Many SOCs feel pressure to “do something with AI,” yet most are still evaluating where it fits — or whether it fits at all. And behind the hype lies a practical truth: SOC teams are already under strain.
False positives remain the No. 1 detection problem for most teams, and alert volumes continue to exceed what human analysts can realistically manage.
These operational challenges are precisely where AI earns its keep today. When deployed thoughtfully, AI helps teams focus on real threats rather than the endless swirl of benign events.
Where AI is genuinely moving the needle
The first area where AI has matured is detection — not the flashy generative capabilities that grab headlines, but the years‑deep investment in behavioral models, machine learning, and NLP that quietly run in the background. As Falkenhagen said, “AI in detection isn’t new and it isn’t hype. It’s been quietly making security products better for years.”
That foundation is now supporting newer layers of automation in alert triage. This is where AI can help by evaluating massive volumes of telemetry in real time, scoring alerts in the context of an organization’s environment, and pushing only the meaningful ones toward analysts. Sophos alone generates more than 34 million detections daily.
“The SANS 2025 Detection and Response Survey found that 73% of security teams named false positives as their top detection challenge, up from previous years and still climbing,” Falkenhagen said. “That's not a minor annoyance. That's the defining operational problem ... and the threat landscape is accelerating the problem.”
Instead of wading through an unfiltered firehose of alerts, teams receive a refined, contextualized queue — something that allows actual security work to begin earlier and proceed faster.
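To make the triage idea concrete, here is a minimal sketch of contextual alert scoring. The field names, weights, and threshold are illustrative assumptions, not a Sophos schema or algorithm; real systems use trained models over far richer telemetry.

```python
from dataclasses import dataclass

# Hypothetical alert record; fields are illustrative, not a real product schema.
@dataclass
class Alert:
    source: str
    severity: int          # 1 (low) .. 5 (critical)
    asset_is_critical: bool
    seen_before: int       # how often this exact pattern fired recently

def triage_score(alert: Alert) -> float:
    """Combine simple contextual signals into a single priority score."""
    score = alert.severity / 5.0
    if alert.asset_is_critical:
        score += 0.3                              # weight crown-jewel assets up
    score -= min(alert.seen_before, 10) * 0.05    # repeated benign patterns decay
    return max(score, 0.0)

def triage(alerts, threshold=0.6):
    """Return only the alerts worth an analyst's attention, highest score first."""
    scored = [(triage_score(a), a) for a in alerts]
    return [a for s, a in sorted(scored, key=lambda x: -x[0]) if s >= threshold]
```

The point of the sketch is the shape of the pipeline, not the numbers: every alert gets scored in context, and only the fraction that clears a bar ever reaches a human queue.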
That same theme carries through to investigations. AI can correlate data across logs, endpoints, authentication sources, and network activity far faster than humans can. During the webinar, Falkenhagen described how AI‑assisted investigations now construct timelines automatically, surface relevant indicators, and trace identity‑based attack chains — often in minutes.
For teams used to manually stitching together evidence across multiple tools, this shift is profound. It doesn’t eliminate the need for human judgment, but it accelerates everything that happens before that judgment is applied. With 88% of ransomware executing outside business hours, agentic AI provides the round‑the‑clock vigilance humans can’t sustain. And as 59% of organizations face severe skills shortages, AI scales your team’s capabilities, so critical threats are caught and contained no matter when they hit.
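The timeline construction Falkenhagen describes can be approximated, at its simplest, as merging events from multiple telemetry sources and ordering them per identity. The event records and field names below are invented for illustration; production systems correlate across entity graphs, not a single `user` key.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative events from different telemetry sources (fields are hypothetical).
events = [
    {"ts": "2025-06-01T02:14:05", "source": "auth",     "user": "svc-backup", "action": "failed_login"},
    {"ts": "2025-06-01T02:16:02", "source": "endpoint", "user": "svc-backup", "action": "new_process: psexec.exe"},
    {"ts": "2025-06-01T02:14:40", "source": "auth",     "user": "svc-backup", "action": "login"},
    {"ts": "2025-06-01T02:19:30", "source": "network",  "user": "svc-backup", "action": "smb_to: DC01"},
]

def build_timelines(events):
    """Group events by identity and order them chronologically,
    approximating the timeline an analyst would assemble by hand."""
    timelines = defaultdict(list)
    for e in events:
        timelines[e["user"]].append(e)
    for user in timelines:
        timelines[user].sort(key=lambda e: datetime.fromisoformat(e["ts"]))
    return dict(timelines)

timeline = build_timelines(events)["svc-backup"]
for e in timeline:
    print(e["ts"], e["source"], e["action"])
```

Even this toy version shows why the automation matters: the failed login, the successful login, the lateral-movement tooling, and the SMB traffic only tell a story once they are pulled out of three separate tools and put in order.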
Natural language queries are also lowering the barrier for junior analysts who don’t have deep SIEM expertise. Asking for “all accounts with more than 50 failed logins in the last hour” is now a plain English interaction instead of a multi‑line query.
“AI becomes an equalizer. It gives less-experienced analysts the kind of contextual enrichment and guided investigation that previously required years of expertise,” Falkenhagen said.
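For contrast, this is roughly the hand-written logic that the plain-English question above ("all accounts with more than 50 failed logins in the last hour") replaces. The log format and function are assumptions for illustration, standing in for whatever multi-line SIEM query a senior analyst would otherwise write.

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

def accounts_over_threshold(log_events, now=None, window_minutes=60, threshold=50):
    """Return accounts with more than `threshold` failed logins in the window —
    the manual equivalent of the natural-language question in the article."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(minutes=window_minutes)
    fails = Counter(
        e["account"]
        for e in log_events
        if e["action"] == "failed_login"
        and datetime.fromisoformat(e["ts"]) >= cutoff
    )
    return [acct for acct, n in fails.items() if n > threshold]
```

A natural-language interface compresses all of this (time windowing, filtering, aggregation, thresholding) into one sentence, which is exactly the barrier-lowering effect described above.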
What AI still doesn’t do well
For all these advances, AI still has limits, and overreliance on these tools can come with its own risks.
The first limitation is business context. AI can suggest containment actions, but it can’t weigh the operational consequences.
“AI doesn’t know your business,” Falkenhagen said.
For example, shutting down a compromised server may be technically correct but devastating for revenue if the timing is wrong.
AI also struggles with true novelty. Threats that fall outside existing patterns — zero‑day chains, new social engineering techniques, insider risk — often require human reasoning to piece together.
And then there’s the skill erosion problem. If analysts spend years simply approving AI decisions, rather than building investigations themselves, their expertise can atrophy. Gartner warns that up to 75% of SOC teams may face this by 2030.
“We’ll have a generation of security professionals who can supervise AI but can’t function without it,” Falkenhagen said.
Communication is the final boundary. AI can recommend actions, but it can’t yet manage stakeholder conversations, breach notifications, or executive briefings. Those tasks require nuance, empathy, and a deep understanding of business impact — qualities SOC leaders can’t afford to hand over to AI.
Separating the real from the rebranded
Where does all this leave organizations trying to evaluate AI‑driven tools?
The first question any potential buyer should ask is: How does the AI actually work? Not in a marketing sense, but in an architectural one. If a vendor can’t explain its models, data sources, or decision logic, that’s a warning sign.
Additionally, consider what principles govern it. AI must be human‑centered, transparent, and accountable, or it risks introducing new failure modes instead of preventing them. SOC leaders should look for evidence of human oversight and see proof that analysts can override the system easily when needed.
The third question is the simplest and often the most revealing: What happens when the AI is wrong?
“If a vendor can’t explain how their AI works, what principles govern it, and what happens when it’s wrong — move on. These three questions cut through the noise faster than any analyst report,” Falkenhagen said.
Want to learn more about how Sophos uses AI across all our products and services? Check out our Solution Brief here. You can also watch the full webinar available on demand now.



