Read This Before Adopting AI
How to keep your data safe before activating tools like Copilot
- Some AI tools can amplify classic data security risks, and as many as half of office workers compound that risk by using unauthorized AI platforms.
- Unlabeled and misclassified content is a major liability.
- AI makes it easy for users to share more than they should, but it doesn't have to be that way.
- Visibility into prompts, apps and access is necessary and possible with the right strategy.
AI tools like Microsoft Copilot are quickly becoming the norm in modern workplaces—and for good reason. In a 2024 survey, 75% of companies had already adopted generative AI (genAI), with top leaders seeing a return of up to $10 for every $1 invested. And four in 10 (43%) are seeing the greatest ROI from productivity use cases.
But while the momentum for these powerful productivity boosters is real, so is the need for preparation. From shadow AI risks to data lifecycle blind spots, today’s security and IT teams are asked to move fast, but often without the visibility, labeling discipline or policy enforcement needed to keep sensitive data safe.
It’s a lot to process. To make it easier, experts from Broadcom’s information and email security team recently hosted the webinar, “How to Be Data Smart Before You Turn On Copilot.” The message was clear: you can’t safely unlock AI’s potential until you first get your data under control.
In this honest conversation, Broadcom experts outlined the value of a deliberate approach to securing your data before flipping the AI switch, so you can reap the benefits of Copilot without the pain of an avoidable oversight. We’ve summarized some of the webinar’s critical takeaways below.
The real risks of rushing AI adoption
In enterprises across all industries, AI adoption is accelerating fast. “Let’s do AI” has become a board-level rallying cry, motivated by the promise of a competitive edge and the fear of being left behind. Meanwhile, the departments in these large organizations aren’t complaining. Tools like Copilot are already automating workflows, simplifying tasks and boosting productivity.
But between top-down pressure and bottom-up excitement, some organizations are integrating AI before confirming their data is ready—without clear labeling, enforcement or visibility into how tools interact with sensitive information. What risks are we talking about, exactly? Let’s get into specifics.
Data exposure via prompts
Once data is uploaded to a public model like ChatGPT or NotebookLM, an organization loses control over where it goes or how it might be used. Just because a tool lives within a trusted platform like Office 365 doesn’t mean it should have access to every file.
AI responses to user prompts aren’t automatically deleted, either. That means the AI might later surface sensitive documents from executives, HR or legal and deliver them to the wrong users, internally or externally. Furthermore, even if a document has been deleted, the tool’s memory may still retain its content.
Misleading permissions
If Copilot offers content, it must be safe to use, right? Not always. AI tools can retrieve data that users technically have access to but wouldn’t otherwise know about. Because Copilot returns that information so easily, it gives users the impression that the data is fair game to use however they want, even if their access stems from a mistake or permissions oversight.
Unlabeled or misclassified content
Much of the data in platforms like Office 365 has never been properly labeled or classified. That’s a problem when AI tools are introduced—anything unlabeled might be considered accessible by default. Unfortunately, manual labeling is often done incorrectly or forgotten altogether.
Even when a document starts out harmless, new sensitive content can be added over time without a manual label update to reflect the change. And AI-generated content like meeting summaries and collaborative documents often lacks labels altogether unless a DLP system intervenes.
Shadow AI and third-party exposure
Employees are now exploring tools on their own, often without approval or oversight. A recent study found that 50% of workers use unapproved AI tools, and most wouldn’t stop even if banned. These apps may store data in unknown regions or fail to support corporate data deletion requests. Nearly 21% of ChatGPT’s prompt traffic goes to the free tier, where inputs can be retained and used for training. A single prompt from a personal email account could send sensitive company data to a model with no way to get it back.
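To make app discovery concrete, here is a minimal, illustrative Python sketch of what spotting shadow AI traffic can look like at the network level. The domain list, log format and column names are assumptions invented for this example; they are not a description of how any particular proxy or Symantec product works.

```python
import csv
from collections import Counter

# Hypothetical watchlist of genAI service domains; extend it to match
# whatever your own proxy or firewall actually logs.
GENAI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "notebooklm.google.com",
    "claude.ai",
    "gemini.google.com",
}

def discover_ai_usage(proxy_log_path: str) -> Counter:
    """Count requests to known genAI domains, grouped by user and host.

    Assumes a CSV proxy log with 'user' and 'host' columns; real logs
    need their own parser for whatever format your gateway emits.
    """
    usage = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                usage[(row.get("user", "unknown"), host)] += 1
    return usage

if __name__ == "__main__":
    for (user, host), count in discover_ai_usage("proxy.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

Even a rough tally like this tends to surface surprises, which is exactly why dedicated app discovery belongs in the rollout plan rather than after the fact.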
What to do before you turn it on
Whether your organization is just beginning to explore Copilot or already rolling it out, it’s worth taking a moment to assess how prepared your data is. It’s clear these tools are making a difference for organizations, and with the right strategy, you can use them confidently—without compromising control.
Get visibility into your sensitive data
Before flipping the switch on these tools, you need a clear view of where your sensitive data lives. Ensuring consistent classification and establishing data loss prevention (DLP) policies can prevent the wrong data from flowing into—and out of—AI tools unintentionally.
Start by scanning your existing Office 365 content, correcting mislabeled files and putting enforcement in place that ensures new, AI-generated content (meeting notes or collaborative documents) gets reviewed and labeled properly. Some of these controls are already available within Microsoft 365, including the “block content analysis” attribute that prevents Copilot from using certain documents for inference. But keep in mind: those settings only work if labels are in place.
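As a thought exercise (not a substitute for a real DLP scan), here is a small Python sketch of the underlying idea: sweep exported content for sensitive patterns and flag anything that looks sensitive but carries no label. The patterns, file layout and label inventory are assumptions made for this illustration; a production scan would use your DLP platform’s detection engine and labeling data.

```python
import re
from pathlib import Path

# Illustrative patterns only; real DLP policies use far richer detection
# (exact data matching, ML classifiers, document fingerprints).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

# Hypothetical label inventory: path -> sensitivity label (None = unlabeled).
# In practice this would come from your labeling or DLP platform's export.
label_inventory = {
    "exports/board-minutes.txt": None,
    "exports/team-lunch.txt": "Public",
}

def flag_unlabeled_sensitive(root: str) -> list[tuple[str, str]]:
    """Return (path, pattern_name) pairs for unlabeled files that look sensitive."""
    findings = []
    for path in Path(root).rglob("*.txt"):
        rel = str(path)
        if label_inventory.get(rel) is not None:
            continue  # already labeled; assume downstream controls apply
        text = path.read_text(errors="ignore")
        for name, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(text):
                findings.append((rel, name))
    return findings

if __name__ == "__main__":
    for rel, name in flag_unlabeled_sensitive("exports"):
        print(f"REVIEW: {rel} matches '{name}' but has no sensitivity label")
```

The point isn’t the script itself; it’s the workflow it represents: find the sensitive-but-unlabeled content first, fix the labels, and only then let Copilot loose on the data.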
Inspect prompts and discover AI activity
It’s no longer enough to simply secure documents. Prompt inspection and app discovery are also non-negotiables. Organizations need to see what tools employees are using across the organization, what they’re inputting into those tools and whether those tools have been properly sanctioned or restricted. A solid DLP solution can extend protection beyond files to inspecting prompts and monitoring which apps employees are using—that means real-time insight into app behavior, traffic and user-level actions.
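Here is a deliberately simplified Python sketch of the prompt-inspection idea: check each prompt against DLP-style rules before it ever reaches an external AI tool. The rule set and the inspect_prompt function are hypothetical stand-ins; a real deployment would reuse the detection engine and incident workflow your DLP platform already provides.

```python
import re

# Minimal DLP-style rules for prompts; a production system would apply the
# same detection logic that already governs email and file transfers.
BLOCK_RULES = [
    ("ssn", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    ("internal_marker", re.compile(r"\b(confidential|internal only)\b", re.I)),
]

def inspect_prompt(user: str, app: str, prompt: str) -> bool:
    """Return True if the prompt may be sent; log and block otherwise."""
    for rule_name, pattern in BLOCK_RULES:
        if pattern.search(prompt):
            # In a real deployment this event would flow to the DLP console
            # for incident response, not just to stdout.
            print(f"BLOCKED: user={user} app={app} rule={rule_name}")
            return False
    print(f"ALLOWED: user={user} app={app}")
    return True

if __name__ == "__main__":
    inspect_prompt("jdoe", "chatgpt", "Summarize our confidential merger plan")
    inspect_prompt("jdoe", "copilot", "Draft a meeting agenda for Friday")
```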
How the right DLP solution helps
Even the best strategy is no good without the right industry-leading tools to implement it. Symantec DLP gives you the visibility, control and automation needed to protect sensitive data as it interacts with AI. You can scan and classify your existing Office 365 content as it’s created or modified, easily surface mislabeled or unlabeled files and enable teams to apply corrections through policy-based controls.
Symantec also supports Microsoft’s “block content analysis” attribute, preventing sensitive documents from being used in Copilot inference or training. Most importantly, Symantec DLP confirms the labeling is accurate before applying those settings. It also detects what employees are entering into third-party AI tools like ChatGPT or NotebookLM, and monitors which apps are in use across the organization—even the inevitable ones security hasn’t approved yet. Symantec can also block access to unsanctioned AI tools.
Don’t cede control of your data
AI is already shaping how work gets done and how businesses grow. The organizations that get adoption right won’t just be those that moved first. They’ll be the ones who moved deliberately, with the proper controls in place to prevent their data from popping up in the wrong place.
Watch the on-demand webinar for more insights into keeping your sensitive data from exposure via AI.

