Responsible AI FAQs
Sophos has been at the forefront of AI-driven cybersecurity for nearly a decade, combining advanced technologies with human expertise to defend against evolving threats. Our platform integrates both deep learning and generative AI capabilities, forming the largest AI-native security platform in the industry.
These FAQs outline Sophos’s approach to responsible AI, helping customers understand how we use AI across our products, what models are involved, and how customer data is handled securely and responsibly.
AI capabilities and models
How does Sophos use AI in its products and services?
Sophos integrates AI across its platform through two complementary approaches: discriminative machine learning (ML) and generative AI (GenAI). ML models apply learned patterns to perform tasks such as identifying malicious files in milliseconds, even if the file has never been seen before. These models power capabilities like behavioral anomaly detection, automated triage, and predictive threat scoring.
GenAI, meanwhile, creates new content from input data. It enables capabilities such as summarizing threat activity, interpreting attacker behavior, enabling natural language search, and prioritizing patching based on exploit likelihood. Together, ML models and GenAI technologies are embedded throughout our products and services, with more than 50 models currently in use and growing.
Learn more about Sophos's AI capabilities here. For details on GenAI-powered features in Sophos XDR, see here.
Which AI models are used by Sophos products?
Sophos uses a combination of proprietary ML models and third-party large language models (LLMs). Models are selected based on their suitability for specific product tasks and are integrated into our threat intelligence platform, Intelix, which supports multiple products.
Which third-party LLMs are used in Sophos products?
Sophos products use OpenAI’s GPT series of models hosted on Azure and Anthropic’s Claude series of models hosted on AWS Bedrock.
Does Sophos maintain an inventory of AI models used in Sophos products and services?
Yes. Sophos maintains an AI inventory that outlines the models used across our platform. The inventory is reviewed and updated incrementally.
Customer data usage and privacy
Does Sophos use customer data to train its ML models?
Sophos uses cybersecurity-relevant data to train and improve internal ML models. This includes threat telemetry, signals of malicious activity, and attack patterns collected from customer environments, which provide real-world signals that help our models detect anomalous behaviors, predict triage outcomes, and identify suspicious activity with greater accuracy.
By learning from this data, Sophos's AI systems and products remain responsive to emerging threats and adapt quickly to changes in attacker behavior. This collective learning approach ensures that insights gained from one environment contribute to stronger defenses for all customers, enhancing protection across the entire Sophos ecosystem.
All use of customer data in this context is subject to the Sophos End User Terms of Use (including Product Privacy Datasheets) and the Data Processing Agreement, which describe how data is collected, processed, and safeguarded.
Does Sophos send customer data to train third-party LLMs?
Sophos does not share any data with third parties to fine-tune or train their LLM models.
Who can access customer data submitted via AI features?
Access is restricted to authorized personnel under strict privacy and security protocols. Customer data submitted to AI features is treated like all other customer data processed by Sophos.
Do any of the LLM providers store customer data?
No, none of our LLM providers store or retain customer inputs or outputs.
Customer data is transmitted to the LLM provider to perform the specific task requested, such as generating a case summary, interpreting a query, or executing another model-driven function. Each data request is sent to the LLM provider individually over an SSL encrypted service solely to process the requested task.
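To make the stateless, per-request handling described above concrete, here is a minimal illustrative sketch (not Sophos code; all names are hypothetical). Each task request is fully self-contained, carries only the data needed for that task, and would be transmitted over a TLS-encrypted connection; no conversation state or customer data is retained between calls.

```python
import json

# Illustrative sketch only: each GenAI task is an independent,
# self-contained request -- nothing is retained between calls.
def build_task_request(task: str, payload: str) -> dict:
    """Build a single stateless request for one model-driven task."""
    return {
        "task": task,       # e.g. "case_summary" or "query_interpretation"
        "input": payload,   # only the data needed for this one task
        "store": False,     # provider must not retain inputs or outputs
    }

def serialize_for_tls(request: dict) -> bytes:
    """Serialize the request body; in practice this payload would be
    sent over an HTTPS (TLS-encrypted) connection to the hosted model."""
    return json.dumps(request).encode("utf-8")
```

Because each request is independent, no cross-request context accumulates at the provider, which is what allows the no-retention guarantee above to hold per task.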
Does the use of GenAI-enabled features respect the customer-selected Sophos Central region?
Yes. Any customer data processed by GenAI-enabled features within Sophos products is handled in accordance with the customer's selected Sophos Central region.
Are third-party AI vendors contractually bound by privacy and data security requirements?
Yes. All vendors and contractors are bound by Sophos's privacy and data security agreements.
Can customers opt-out of having their data used to train Sophos’s proprietary ML models?
Sophos is committed to keeping our customers safe from constantly evolving cyber threats. To do this effectively, we need to continuously improve our threat detection models so they can identify new and emerging risks – such as unusual behavior, novel malware, or sophisticated phishing tactics. Improving these ML models requires learning from real-world data. Threat telemetry collected from customer environments therefore plays a critical role in helping us refine our detection capabilities and helping our customers stay ahead of attackers. As such, Sophos does not currently offer an opt-out mechanism specifically for the use of customer data in ML model training.
Any data used for model improvement is handled securely and, wherever possible, anonymized or aggregated to protect customer and individual identities.
Customer data is not used to train third-party LLMs.
Safety and security
How does Sophos validate the performance of AI models before deployment?
Sophos uses a multi-stage validation process tailored to the model type and intended use. For ML models, performance is assessed against curated datasets that reflect real-world conditions. We also apply statistical benchmarks and regression testing to ensure consistency across releases. For GenAI models, validation includes scenario-based evaluations and human reviews to confirm alignment with expected behavior and safety standards.
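The regression-testing step above can be pictured as a simple release gate. This is a hypothetical sketch, not Sophos's actual pipeline: a candidate model's benchmark metrics must not fall below the current production baseline by more than a small tolerance before it can ship.

```python
# Hypothetical regression gate: a candidate ML model must match or
# exceed the baseline model on every benchmark metric, within a
# small tolerance, to be eligible for release.
def passes_regression_gate(candidate: dict, baseline: dict,
                           tolerance: float = 0.005) -> bool:
    """Return True if every baseline metric is preserved (within
    `tolerance`) by the candidate model's benchmark results."""
    return all(
        candidate[metric] >= baseline[metric] - tolerance
        for metric in baseline
    )
```

A gate like this makes "consistency across releases" an automated check rather than a manual judgment: any metric regression beyond the tolerance blocks deployment.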
What testing processes are in place to ensure AI reliability across different environments?
Sophos conducts environment-specific testing to ensure AI models perform reliably across diverse deployment contexts. This includes sandbox simulations, cross-platform compatibility checks, and stress testing under variable network conditions. GenAI models undergo controlled interaction testing to confirm responsiveness and robustness across user types and usage patterns. We also collect in-app feedback on the accuracy of AI responses from customers and from internal users through our dogfooding program, and this feedback is addressed at every stage.
What safety mechanisms prevent harmful outputs?
Sophos applies tailored safeguards depending on the type of model. For internal ML models, the most critical risk is a false positive – incorrectly identifying a benign file as malicious. To mitigate this, Sophos implements multiple layers of compensating controls before any model is allowed to make a blocking decision. For GenAI, Sophos uses input and output guardrails to prevent misuse. These include moderation APIs from third-party providers to screen for harmful content and ensure responsible interactions.
How does Sophos test against adversarial and prompt injection attacks?
Sophos uses input and output guardrails to protect GenAI models from adversarial manipulation, including prompt injection attacks. These safeguards are designed to detect and block malicious or unintended prompts before they can influence model behavior. If an issue is identified, Sophos has mechanisms in place to iteratively improve guardrail performance, just as it does with other AI models.
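As an illustration of the input-guardrail idea described above (a simplified sketch, not Sophos's implementation; the patterns are hypothetical examples), a screening step can reject prompts containing known injection phrasings before they ever reach the model.

```python
import re

# Illustrative input guardrail: screen an incoming prompt for common
# injection patterns before forwarding it to the model. Real guardrails
# combine many such checks with model-based classifiers.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (your|the) system prompt",
    r"you are now\b",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may proceed, False if it should be
    blocked and logged for guardrail improvement."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Blocked prompts feed back into the iterative improvement loop mentioned above: each detected attempt is a labeled example for strengthening the guardrails.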
How does Sophos detect and mitigate bias in AI models?
Sophos employs a multi-layered approach to bias mitigation across its AI models. ML models used for threat detection are continuously trained and validated against diverse, real-world datasets to ensure high accuracy and minimize false positives across varied environments and user profiles. For GenAI features, Sophos leverages third-party models, such as those available through Amazon Bedrock, that include built-in bias mitigation capabilities. These are complemented by internal validation and monitoring processes to ensure fairness and relevance.
Are Sophos's AI models audited?
Yes. Sophos uses self-hosted observability platforms to monitor model activity, including inputs and outputs, for defined periods. All access to and changes made within AI models and their training data are logged. These logs support internal oversight and accountability.
How does Sophos handle AI-related incidents?
AI-related incidents are managed using the same rigorous protocols applied to software issues. This includes staged rollouts, rollback mechanisms, and root cause analysis to identify and prevent recurrence. These safeguards help ensure that any issues are addressed quickly and effectively.
Governance and compliance
What principles guide Sophos's AI development?
Sophos’s approach to AI is grounded in a set of core principles that emphasize human expertise, robustness, and responsible governance. These principles guide how models are developed, deployed, and monitored across our platform, and serve as the foundation for our broader AI governance efforts. You can read about our AI Principles here.
What internal structures govern AI development and use?
AI oversight at Sophos is cross-functional. A dedicated AI Steering Committee provides strategic direction, while product, legal, and engineering teams collaborate on day-to-day governance. Each engineering team is accountable for the models they own, with access controls and responsibilities aligned to Sophos’s broader software development standards.
Are Sophos's practices aligned with established standards or frameworks?
Yes. Sophos follows established security and compliance frameworks and maintains a list of product certificates and declarations of conformity. More information is available in the certifications section of the Sophos Trust Center.
Is there a human-in-the-loop system for AI systems?
Sophos products and features may operate automatically, depending on customer configuration. However, all GenAI capabilities – including Assistant, Agents, Summaries, and Search – are designed to operate under human oversight by the customer. GenAI systems do not make changes autonomously that could impact a customer’s environment. For any sensitive actions, including policy changes and response operations, human approval is required before execution.
It is the customer’s responsibility to review outputs generated by GenAI features and ensure that qualified personnel are involved in decision-making where appropriate. While these tools are designed to support and accelerate security operations, they are not a substitute for human judgment, especially in high-stakes or context-sensitive scenarios. Sophos encourages customers to maintain a "human-in-the-loop" approach to ensure responsible use of AI and to uphold the integrity and security of their environments.
How does Sophos comply with the EU AI Act?
At Sophos, we recognize the importance of regulatory compliance as a cornerstone of trust and reliability in AI technologies. We're committed to creating responsible AI by design, and our Responsible AI Principles take into account regulatory proposals and their evolution, including the EU AI Act.
Sophos is actively preparing for compliance with the EU AI Act, which will come into effect in phases, and has taken a comprehensive approach to assessing its product suite and customer-facing features. As part of this effort, we have conducted a review of our AI capabilities and mapped them against the AI Act's risk classification framework.
Sophos has already implemented the EU AI Act's requirements for prohibited practices under Article 5.