Interview with a DLP Expert: A Blueprint for Scaling, Extensibility, and Resilience
A candid look into the dynamic evolution of the security industry through the lens of a Data Loss Prevention insider
- An industry veteran shares practical insights and best practices for a fast, future-ready DLP program.
- A strong DLP strategy must scale on demand to balance the speed and depth of scanning.
- The right tools can strengthen your security stance and protect your business from data loss.
As the threat landscape continues to shift under defenders’ feet, so does the challenge of securing data. Remote work, sprawling cloud services, and increasingly sophisticated cyber threats make data loss feel less like “if” and more like “when.”
No organization can face that reality alone. Every incident is a reminder that vigilance, collaboration, shared insight, and continuous improvement are key to staying ahead.
Few know this challenge better than security legend Bharat Pallod, who has spent nearly 25 years building and refining data protection strategies. Drawing on his expertise, he shared what organizations need today and tomorrow to scale, extend, and build resilience into their data loss prevention (DLP) strategy.
A quarter-century journey with data security
The first question I asked him came easily:
Can you walk us through your experience? What are the key shifts and learnings you’ve observed throughout your career?
“My security journey began in 2001, and I’ve witnessed the industry’s dynamic evolution firsthand.
“A core learning has been how diverse, and often extreme, customer needs can be, proving that one-size-fits-all approaches don’t work. Depending on the enterprise, requirements vary widely, from rapid DLP scans on small data sets for compliance to deep forensic analysis on massive volumes. Customers demand both speed and comprehensiveness.”
Bharat also addressed cost-effective scanning for archives and the challenges of remote site scanning with limited bandwidth, emphasizing one major point: Data growth is staggering.
“From gigabytes in Y2K to petabytes and exabytes today, this isn’t just volume—it’s data across hundreds of new storage solutions. From an engineering view, it’s rewarding yet challenging to keep pace. The core challenge is providing extensibility and adaptability to meet evolving customer use cases across this vast, growing data landscape.”
You mentioned the paradoxical demand for both speed and comprehensiveness when scanning enormous data sets. What are some innovative approaches to address this?
“This paradoxical demand is central to our work. Many organizations, especially those prioritizing IP and privacy, prefer to keep data on-premises, leveraging private cloud solutions like Broadcom’s VCF (VMware Cloud Foundation) for efficient hardware use.”
From his work guiding Symantec’s DLP Network Discover and End User Remediation solutions, Bharat shared a closer look at how his team is solving this paradox today.
“Our engineering team prioritizes cost effectiveness and hardware utilization. Our High Speed Discovery (HSD) deployment option within DLP Network Discover exemplifies this.”
Bharat broke down how this option pays off in the real world:
Scalability on Demand: “A simple two-server HSD cluster (master, worker) scales instantly. Need faster scans? Add more HSD Worker Nodes via VMs or servers. Don’t need speed? Remove them, freeing up hardware.”
Zero-Friction Management: “Scaling is automated. DLP admins don’t need to manually register or de-register nodes. The system intelligently integrates changes.”
Unparalleled Performance: “With proper supporting infrastructure, it’s possible to achieve 1TB/hour and beyond using Symantec DLP's comprehensive policies.”
Intelligent Load Distribution: “HSD clusters automatically distribute policy application load across available worker nodes.”
Dynamic Resource Allocation During Live Scans: “You can add or remove Worker Nodes during ongoing scans. No need to stop and restart—hardware is integrated automatically. This approach delivers velocity for petabyte-scale data while respecting customer hardware investment and operational flexibility.” (A simplified sketch of this master/worker pattern follows below.)
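To put the scaling model in perspective: at 1TB/hour, a single pipeline would need roughly 1,000 hours, well over a month, to cover a petabyte, which is why adding worker nodes matters so much. Broadcom doesn’t publish HSD’s internals, so what follows is only a minimal sketch of the general master/worker queue pattern Bharat describes; every class, method, and name here is hypothetical, not Symantec DLP’s actual API.

```python
# Illustrative only: a minimal master/worker scan queue in the spirit
# of HSD's "scalability on demand." All names are hypothetical.
import queue
import threading

class ScanMaster:
    """Holds a shared queue of file batches and the set of live workers."""

    def __init__(self):
        self.batches = queue.Queue()
        self.active = set()            # worker IDs currently registered
        self.lock = threading.Lock()

    def register_worker(self, worker_id):
        # "Zero-friction management": a node joins with no manual steps.
        with self.lock:
            self.active.add(worker_id)
        threading.Thread(target=self._run, args=(worker_id,), daemon=True).start()

    def deregister_worker(self, worker_id):
        # Removing a node mid-scan is safe: it simply stops pulling new
        # batches, and the remaining work goes to the other workers.
        with self.lock:
            self.active.discard(worker_id)

    def _run(self, worker_id):
        while True:
            with self.lock:
                if worker_id not in self.active:
                    return
            try:
                batch = self.batches.get(timeout=1)
            except queue.Empty:
                continue
            scan_batch(worker_id, batch)   # apply DLP policies to this batch
            self.batches.task_done()

def scan_batch(worker_id, batch):
    print(f"{worker_id}: scanning {len(batch)} files")

# Usage: enqueue work, then grow the cluster while the scan runs.
master = ScanMaster()
for i in range(10):
    master.batches.put([f"file-{i}-{j}" for j in range(100)])
master.register_worker("worker-1")
master.register_worker("worker-2")   # added mid-scan; no restart needed
master.batches.join()
```

The key property this pattern gives you is the one Bharat highlights: workers only pull from the shared queue, so adding or removing a node changes throughput without interrupting the scan itself.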
How does HSD ensure continuous extensibility to new and emerging storage platforms?
“The rapid rise of new storage solutions is a constant. These platforms often provide sandboxed environments, including native encryption. For DLP to categorize and protect data effectively, it needs deep, logical access, often through native integration (e.g., Atlassian APIs, SharePoint REST API).
“This is where DLP HSD’s extensible architecture truly shines. We’ve heavily invested in robust extensible hooks—Service Provider Interfaces, or SPIs. Available to Broadcom’s internal security developers, these SPIs allow us to rapidly develop new connectors for emerging storage. This modular approach keeps us agile. Using these SPIs, we’ve already developed connectors for SMB, NFS, DFS, and PST files, ensuring protection wherever data resides, even as the landscape rapidly evolves.”
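The SPIs themselves are internal to Broadcom and not publicly documented, but the general pattern is a familiar one: define a small connector contract, then let each storage platform plug in behind it. Below is a minimal, hypothetical sketch of that idea; the interface, decorator, and connector names are invented for illustration and are not Symantec’s actual SPI.

```python
# Hypothetical connector SPI, illustrating the plug-in pattern only.
from abc import ABC, abstractmethod
from typing import Iterator

class StorageConnector(ABC):
    """Contract the scan engine relies on, independent of the platform."""

    @abstractmethod
    def list_items(self, root: str) -> Iterator[str]:
        """Yield the IDs/paths of items under the given root."""

    @abstractmethod
    def read_item(self, item_id: str) -> bytes:
        """Return an item's content for policy evaluation."""

CONNECTORS: dict[str, type] = {}

def connector(scheme: str):
    """Registering a class under a scheme makes a new storage platform
    scannable without touching the scan engine itself."""
    def wrap(cls):
        CONNECTORS[scheme] = cls
        return cls
    return wrap

@connector("smb")
class SmbConnector(StorageConnector):
    # A real connector would speak the SMB protocol; stubbed here.
    def list_items(self, root):
        yield f"{root}/example.docx"

    def read_item(self, item_id):
        return b"example content"

# The engine resolves a connector by scheme and stays platform-agnostic.
conn = CONNECTORS["smb"]()
for item in conn.list_items("//fileserver/share"):
    content = conn.read_item(item)
```

The design payoff is the agility Bharat describes: the engine depends only on the contract, so supporting a new platform means writing one connector, never modifying the engine.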
What other aspects do you consider when designing scanning solutions like DLP HSD?
“Beyond raw power and extensibility, our DLP HSD design philosophy is like engineering an F1 race car: optimizing every component for unparalleled performance and control. We aim to excel on both straightaways and challenging turns.”
Bharat ran me through how Symantec DLP Network Discover embodies this with sophisticated mechanisms to tailor scanning.
Inventory Mode Scanning (The ‘Gears’ for Control): “Like choosing a gear, Inventory mode allows precision scanning to create an inventory of sensitive data hotspots in the organization, prioritizing remediation without generating deep content-based incidents. It’s about matching scan intensity to immediate needs.”
Scan Throttling (The ‘Accelerator’ for Optimized Pace): “Similar to fine-tuning an accelerator, throttling precisely controls scanning rates (files or data per minute), ensuring optimal resource use and minimal network impact.”
End User Remediation (EUR) & Automatic Remediation Technique (ART) (The ‘Tires’ for Navigating Incidents): “When faced with many or varied DLP incidents, EUR and ART act like perfect F1 tires. EUR delegates incident handling to end users, while ART automates incident responses. This reduces security team workload, accelerates remediation, and maintains pace even under pressure.” (A short sketch of the first two controls follows this list.)
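To ground the metaphor, here is a minimal sketch of what the first two controls, inventory mode and throttling, could look like in code, assuming a simple files-per-minute budget. The function and parameter names are hypothetical; the real DLP Network Discover exposes these as product settings, not an API.

```python
# Illustrative sketch of inventory-mode scanning plus scan throttling.
# All names are hypothetical, not actual DLP Network Discover settings.
import time

def looks_sensitive(path):
    return path.endswith((".pdf", ".docx"))   # stub location-level check

def deep_content_scan(path):
    pass  # placeholder: a full scan applies DLP policies to content

def run_scan(files, mode="inventory", max_files_per_minute=600):
    budget = 60.0 / max_files_per_minute      # seconds allotted per file
    hotspots = []
    for path in files:
        started = time.monotonic()
        if mode == "inventory":
            # "Gears": record where sensitive data likely lives, without
            # generating a deep content-based incident per file.
            if looks_sensitive(path):
                hotspots.append(path)
        else:
            deep_content_scan(path)           # full policy evaluation
        # "Accelerator": sleep off any unused budget so the scan never
        # exceeds the configured rate or saturates the network.
        spent = time.monotonic() - started
        if spent < budget:
            time.sleep(budget - spent)
    return hotspots

# Example: inventory a handful of files at a capped rate.
print(run_scan([f"doc{i}.pdf" for i in range(5)], max_files_per_minute=6000))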
He added, “Our goal is to ensure customers hold pole position in the ongoing race to secure the ever-growing data their organizations generate. We’ve meticulously crafted these solutions with decades of experience, continuously extending DLP Network Discover (HSD) to secure diverse customer needs, including proprietary solutions.”
What happens when the car breaks down?
“That’s resilience—the invisible yet vital chassis and recovery system of our F1 car. We’ve engineered DLP Network Discover for robust resilience to transient and catastrophic disruptions. Our commitment is continuous operation and data integrity.”
Bharat noted this is achieved through:
Intelligent Scan Checkpointing & Resume: “Scans resume automatically from the last good checkpoint after any disruption, avoiding costly rescans of massive datasets.” (A simplified checkpointing sketch follows below.)
Distributed Architecture for Fault Tolerance: “If a component fails, the system isolates the problem and redistributes workload to healthy nodes, minimizing impact and ensuring operations continue.”
Robust Data Consistency Management: “Built-in mechanisms ensure scanned data and generated metadata remain consistent and accurate during pauses, resumes, or redistributions. We prioritize integrity.”
Automated Health Monitoring and Self-Healing: “Like F1 telemetry, Network Discover proactively detects anomalies and often initiates self-healing to restore components or re-route tasks.”
“Our resilience strategy ensures DLP Network Discover gracefully recovers, protects data integrity, and keeps your critical security posture strong even when the track gets bumpy.”
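As a back-of-the-envelope illustration of the checkpoint-and-resume idea, here is a minimal sketch that persists progress to a JSON file and resumes after a crash. Symantec’s actual checkpoint format and mechanism are not public; everything below, including the file name and helpers, is assumed for illustration.

```python
# Minimal checkpoint-and-resume sketch; file name and helpers are
# assumptions for illustration, not Symantec's actual mechanism.
import json
import os

CHECKPOINT = "scan.checkpoint.json"

def load_checkpoint():
    """Resume from the last good checkpoint, if one exists."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return set(json.load(f)["done"])
    return set()

def save_checkpoint(done):
    # Write-then-rename keeps the checkpoint itself consistent even if
    # the process dies mid-write (cf. "robust data consistency").
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"done": sorted(done)}, f)
    os.replace(tmp, CHECKPOINT)

def scan_with_resume(files, scan_one, checkpoint_every=100):
    done = load_checkpoint()              # skip work finished pre-crash
    for count, path in enumerate(sorted(files), start=1):
        if path in done:
            continue                      # already scanned; no rescan
        scan_one(path)
        done.add(path)
        if count % checkpoint_every == 0:
            save_checkpoint(done)
    save_checkpoint(done)                 # final checkpoint at completion

# Example: if this run is interrupted, the next invocation resumes
# from the last saved checkpoint rather than rescanning everything.
scan_with_resume([f"file-{i}" for i in range(250)], scan_one=print)
```

The point of the atomic write-then-rename is that a crash can never leave behind a half-written checkpoint, which is one simple way to honor the data consistency guarantee described above.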
Defend your data into the future
Just as in racing, securing data isn’t about avoiding bumps in the road—it’s about building the car that keeps going, no matter how the track changes. Bharat’s experience shows that the winning strategy is one built on speed, adaptability, and resilience.
Want to see how faster, stronger data protection performs under pressure? Contact your in-region expert for a demo.