Stopping Data Leaks at the Speed of AI
Symantec brings DLP scanning to Google Agent Gateway
- Existing DLP investments (policies, EDM/IDM, workflows) remain critical, but must extend to new AI agent-driven data flows that dramatically expand the risk of data exfiltration.
- Agentic AI introduces high-speed, autonomous data exchanges across LLMs, tools, and other agents, creating unmonitored leakage points unless enforced at the communication layer.
- The Symantec integration with Google Cloud embeds DLP at the Agent Gateway, enabling real-time inspection and enforcement across all agent traffic without requiring application changes—delivering centralized, infrastructure-level protection.
Your organization has invested years in building a robust Data Loss Prevention (DLP) program. You know the drill: Detection rules tuned across hundreds of data identifiers. Exact Data Matching (EDM) profiles mapped to your customer databases. Indexed Document Matching (IDM) fingerprints covering your classified documents. Incident remediation workflows that route violations to the right teams. Compliance frameworks validated by auditors.
The acronyms and effort can seem endless, but that investment was worth it: it is the bedrock for securing the next wave of enterprise computing.
AI agents are rewriting how enterprise applications move data. Unlike the chatbots of two years ago, today's agents are autonomous actors. They send prompts containing sensitive context to large language models (LLMs). They call tools—databases, APIs, code repositories, CRM systems—via protocols like the Model Context Protocol (MCP). They delegate tasks to other agents. Each of these actions is a data flow, and each data flow is a potential exfiltration vector. According to IDC, by 2027, agent use across the Global 2000 will increase tenfold—with token and API call loads rising a thousandfold. The data exfiltration surface is scaling faster than security teams can track.
The detection logic—knowing what sensitive data looks like—doesn't change. But the deployment points must.
Symantec (part of Broadcom’s Enterprise Security Group) is collaborating with Google Cloud to bring enterprise DLP scanning to Google's Agent Gateway—the network-level enforcement point for all agentic AI communications. Your existing DLP policies, detection rules, and incident workflows extend to every agent data flow—LLM inference, tool calls, and agent delegations—without starting from scratch.
The agentic AI data exfiltration problem
AI agents create entirely new data flows—between agents and LLMs, between agents and tools, and between agents themselves. These flows happen inside application logic and require new enforcement points to inspect and govern them, introducing significant operational risk along with implications for trust and privacy.
- LLM inference. Agents send prompts to language models, and those prompts routinely contain sensitive context: customer records retrieved from a database, internal documents pulled from a knowledge base, conversation histories accumulated over a session. On the response side, models may synthesize PII from context, leak memorized training data, or expose sensitive reasoning chains. Every inference call is a bidirectional data flow between the agent and the model.
- MCP tool calls. Agents autonomously read databases, call APIs, and write files using MCP—a standard protocol for agent-to-tool communication. Every tool call and resources/read operation moves data between systems. An agent retrieving customer records from one tool and passing them to an external analytics API via another is not a hypothetical—it is the design pattern these frameworks encourage.
- Agent-to-agent delegations. In multi-agent architectures, agents pass tasks, context, and results to each other. Sensitive data propagates across agent boundaries, often accumulating additional context at each hop. A planning agent that hands customer data to a research agent that hands summarized findings to a reporting agent creates a chain of data flows—each one a potential leakage point.
Unlike human users, agents operate at machine speed—thousands of inference calls and tool invocations per minute, with no human reviewing what data moves where. Without DLP enforcement at the agent communication layer, these data flows go completely uninspected, leaving them exposed to data security risks.
Consider a realistic scenario: A financial services agent retrieves customer portfolio data via an MCP tool call, sends it as context in an LLM inference request for analysis, then passes the model's response—which now contains synthesized PII—to an external reporting API via another tool call. Two leakage points. Zero DLP visibility.
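The scenario above can be made concrete with a toy scanner. This is a minimal sketch, not Symantec's detection engine: a single regex stands in for the EDM/IDM profiles and hundreds of content identifiers a real DLP policy would evaluate, and the payload strings are invented for illustration.

```python
import re

# Toy stand-in for DLP policy evaluation. A real engine would use
# EDM/IDM profiles and 300+ content identifiers, not one regex.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan(payload: str) -> list[str]:
    """Return the sensitive matches found in an agent data flow."""
    return SSN_PATTERN.findall(payload)

# Leakage point 1: the LLM inference prompt built from the tool response.
prompt = "Analyze portfolio risk for client John Doe, SSN 123-45-6789."
# Leakage point 2: the outbound tool call carrying the synthesized result.
tool_call = '{"method": "tools/call", "params": {"report": "Client 123-45-6789 is overweight in tech."}}'

for name, payload in [("llm_prompt", prompt), ("tool_call", tool_call)]:
    hits = scan(payload)
    if hits:
        print(f"{name}: {len(hits)} violation(s) -> {hits}")
```

Both flows trip the scanner, which is exactly the visibility that is missing when no enforcement point sits on the agent's communication path.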
Symantec + Google Cloud: DLP for the Agent Gateway
This is why Symantec is working with Google Cloud to integrate DLP scanning as a Service Extension for the Agent Gateway—the network-level enforcement point in Google's Agent Cloud platform that governs all agent traffic, including LLM inference calls, MCP tool calls, and client-to-agent communications.
The Agent Gateway streams full request and response bodies to Service Extensions via gRPC, giving partners like Symantec the ability to inspect every prompt, every completion, every tool call, and every tool response in real-time.
Symantec DLP handles sensitive data detection, including PII, PHI, financial data, intellectual property, and credentials, using enterprise DLP APIs with capabilities like EDM, IDM, and 300+ content identifiers that no cloud-native guardrails solution can match.
What this looks like in the real world
Financial Services (Portfolio Management Agent)
A portfolio rebalancing agent retrieves customer records via MCP tool calls, then sends them as context in LLM inference requests for analysis. DLP scans the LLM prompt for SSNs and account numbers before it reaches the model. On the response side, DLP scans for synthesized PII before results are returned. DLP also inspects tool call responses for sensitive data flowing into the agent. The result: an auditable trail proving data protection was enforced at both the inference and tool call layers.
Healthcare (Clinical Research Agent)
A clinical research agent sends patient case summaries to an LLM for diagnostic reasoning. DLP detects PHI—patient identifiers, diagnosis codes, medication histories—in the prompt payload before it reaches the model. On the response side, DLP scans for memorized patient data or synthesized identifiers in the model's output. HIPAA compliance is enforced regardless of which LLM provider the agent calls.
How DLP scanning works at the Agent Gateway
Symantec's DLP Traffic Extension—a Service Extension that already inspects API traffic on Google Cloud Application Load Balancer—is the foundation for this integration. The Agent Gateway uses the same EXT_PROC_GRPC protocol, so the extension can be adapted to the Agent Gateway.

Here is what happens when an agent sends a request:
- Agent sends an LLM inference request or MCP tool call. The agent's outbound traffic flows through the Agent Gateway.
- Gateway routes to DLP Traffic Extension. The gateway streams the full request body to the DLP Service Extension via gRPC callout.
- DLP inspects the payload. The Traffic Extension parses the content and calls Symantec DLP APIs for policy evaluation.
- Extension returns enforcement decision. Based on policy evaluation, the extension returns one of: block the request, redact sensitive fields within the payload while preserving message structure, or allow with incident logging and DLP status headers.
- Traffic continues or stops. The gateway enforces the decision. If allowed, the request reaches its destination. If blocked, the agent receives a policy violation response.
The same flow applies on the response path—model completions and tool responses are inspected before they return to the agent.
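The decision step above can be sketched as a pure function over the scanned payload. The verdict names and the JSON-aware redaction are illustrative; the actual extension expresses its decision through the gateway's gRPC callout protocol, and detection is done by the Symantec DLP APIs rather than the toy regex used here.

```python
import json
import re
from enum import Enum

SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy SSN-like detector

class Verdict(Enum):
    ALLOW = "allow"
    REDACT = "redact"
    BLOCK = "block"

def redact_json(payload: str) -> str:
    """Redact sensitive values while preserving the JSON message structure."""
    def walk(node):
        if isinstance(node, dict):
            return {k: walk(v) for k, v in node.items()}
        if isinstance(node, list):
            return [walk(v) for v in node]
        if isinstance(node, str):
            return SENSITIVE.sub("[REDACTED]", node)
        return node
    return json.dumps(walk(json.loads(payload)))

def enforce(payload: str, policy: Verdict) -> tuple[Verdict, str]:
    """Apply the configured policy action to a request or response body."""
    if not SENSITIVE.search(payload):
        return Verdict.ALLOW, payload          # clean traffic passes through
    if policy is Verdict.REDACT:
        return Verdict.REDACT, redact_json(payload)
    if policy is Verdict.BLOCK:
        return Verdict.BLOCK, ""               # gateway returns a violation response
    return Verdict.ALLOW, payload              # allow, but log an incident

body = '{"params": {"note": "SSN 123-45-6789 on file"}}'
verdict, out = enforce(body, Verdict.REDACT)
print(verdict.value, out)
```

The key property of the redact path is that only string leaves are rewritten, so the message stays valid JSON and downstream tools or models can still parse it.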
Two powerful backend deployment models
The DLP Traffic Extension connects to Symantec DLP APIs via one of two backend deployment models:
- DDS (Distributed Detection Service)—Customer-deployed Docker containers that run within the customer's own infrastructure. Content is scanned locally and never leaves the customer boundary—only metadata (policy name, severity, incident ID) egresses to the centralized management console. DDS delivers low latency and full data sovereignty, making it ideal for regulated industries with strict data residency requirements.
- CDS (Cloud Detection Service)—A managed SaaS detection service hosted by Broadcom. CDS provides the same DLP APIs without requiring customers to deploy and manage infrastructure. Suitable for organizations that prioritize operational simplicity over data sovereignty.
Both DDS and CDS expose the same Symantec DLP APIs with Exact Data Matching (EDM), Indexed Document Matching (IDM), and 300+ pre-built content identifiers covering PII, PHI, financial data, credentials, intellectual property, and more.
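Since both backends expose the same APIs, the choice reduces to a deployment decision the Traffic Extension can be pointed at. The sketch below captures that trade-off; the endpoint URLs are hypothetical placeholders, not real service addresses.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DetectionBackend:
    """Where the Traffic Extension sends content for policy evaluation."""
    name: str
    endpoint: str         # hypothetical URL, for illustration only
    content_leaves: bool  # does scanned content leave the customer boundary?

# DDS: containers in the customer's own infrastructure;
# only incident metadata egresses to the management console.
DDS = DetectionBackend("dds", "https://dds.internal.example.com/v1/scan", False)
# CDS: Broadcom-hosted SaaS; content is sent to the managed service.
CDS = DetectionBackend("cds", "https://cds.example.broadcom.com/v1/scan", True)

def choose_backend(requires_data_sovereignty: bool) -> DetectionBackend:
    """Same DLP APIs either way; the trade-off is sovereignty vs. operations."""
    return DDS if requires_data_sovereignty else CDS
```

A regulated bank would pass `requires_data_sovereignty=True` and keep content in its VPC; a team optimizing for operational simplicity would take CDS.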
What gets inspected across traffic types:
- LLM inference (HTTP)—Prompt payloads and model completions. Example: Detect customer SSNs in prompts sent to an external model; catch memorized PII in completions.
- MCP tool calls (JSON-RPC)—tools/call parameters, tool responses, and resources/read content. Example: Detect PHI in patient records retrieved by an agent; block credentials in API responses.
For MCP traffic, the gateway's protocol-aware matching allows enterprises to be selective: inspect tools/call requests to external services while skipping ping and initialize methods. This reduces unnecessary scanning overhead without sacrificing coverage.
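One way to picture that protocol-aware matching is a predicate over the JSON-RPC method name. The method sets below are illustrative; in practice the match rules are configuration on the gateway, not hard-coded lists.

```python
import json

# Methods that move data are worth scanning; lifecycle chatter is not.
# (Illustrative sets; real match rules are gateway configuration.)
SCAN_METHODS = {"tools/call", "resources/read"}
SKIP_METHODS = {"ping", "initialize", "notifications/initialized"}

def should_scan(raw_message: str) -> bool:
    """Decide whether an MCP JSON-RPC message needs DLP inspection."""
    method = json.loads(raw_message).get("method", "")
    if method in SKIP_METHODS:
        return False
    return method in SCAN_METHODS

print(should_scan('{"jsonrpc": "2.0", "method": "tools/call", "params": {}}'))
print(should_scan('{"jsonrpc": "2.0", "method": "ping"}'))
```

Filtering on the method name before invoking detection keeps the expensive content scan off the high-frequency, data-free lifecycle messages.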
Why gateway-level DLP?
Application-level controls—prompt engineering, system instructions, input validation—are necessary but insufficient. They can be bypassed via prompt injection, tool poisoning, or simple misconfiguration. They require every application team to implement DLP correctly. And they provide no centralized visibility.
Gateway-level DLP is infrastructure-layer enforcement. It operates independently of the agent's code, framework, or LLM provider. Zero application code changes required—agents don't need to be DLP-aware. Policies are enforced consistently across all agents, all LLM providers, all tools, all MCP servers through a single chokepoint.
At the Agent Gateway, two complementary layers work together: Google Model Armor handles AI-native threats (prompt injection, jailbreaks, tool poisoning), while Symantec DLP handles sensitive data detection (PII, PHI, financial data, IP, credentials). Both run as Service Extensions on the same gateway, delivering Defense in Depth (DiD) without imposing additional network hops.
This doesn't replace agent-level guardrails or SDK-level scanning. It complements them. DiD means multiple independent layers, so if an application-level control fails, the gateway catches what it missed.
What's next?
Symantec is working with Google Cloud to bring enterprise DLP scanning to the Agent Gateway ecosystem. This builds on an existing collaboration: the Symantec DLP Traffic Extension already inspects API traffic on Google Cloud Application Load Balancer as a Service Extension.
If your organization is evaluating agentic AI governance on Google Cloud, we invite you to engage with the Symantec and Google Cloud teams to discuss early access and design partnership opportunities. Check out the Symantec DDS documentation to learn more about the DLP Distributed Detection Service that powers this integration.
Your DLP program can extend to cover the next generation of enterprise data flows. To make that possible, Symantec and Google Cloud are working together to ensure that as enterprises adopt agentic AI, enterprise-grade data protection is built into the infrastructure from day one.
And in this new reality, data protection must extend to every layer of the AI stack. The Agent Gateway is where that journey begins on Google Cloud.
To learn how Symantec DDS is already enabling new DLP deployment patterns—including data sovereignty, real-time API scanning, and AI safety guardrails—read our launch announcement: The Data Sovereignty Paradox: Getting DLP Without Giving Up Data Control.