AI is Only As Smart As Its Access—And That’s The Problem
Part 1 of 2: AI tools are only as good as the data they can access, are given, or were trained on.
- When it comes to AI tools, context and identity must follow every request.
- Modern AI systems need indicators that reveal the completeness of the data they’re working with.
- Authorization and enforcement shouldn’t stop at the edge—every LLM-to-MCP Server connection needs continuous verification.
When I first heard about Model Context Protocol (MCP) Servers—described as “like plugging in a USB for adding data to an AI tool”—my immediate thought was: accurate, yet terrifying.
While MCP Servers help distribute where data is stored, hosted, and retrieved, as an identity and cybersecurity professional with a background in financial services, I immediately grew concerned about context, scope, and privilege issues.
What happens when AI gets access across domains
AI tools powered by MCP Servers often have access to data across multiple domains, raising serious concerns for the security and privacy of that information. Whether in financial firms, government agencies, or healthcare, ensuring proper access and maintaining data integrity defines how organizations minimize risk and avoid costly compliance fines.
Here are some real-world examples to consider:
Ethical walls in financial firms
These walls are designed to separate public and non-public information between departments and personnel. Establishing strict controls helps prevent insider trading and conflicts of interest, requiring robust physical and virtual protections to document and enforce wall-crossing situations.
Allowing AI tools, like large language models (LLMs) and retrieval-augmented generation (RAG) pipelines, to aggregate information from these segregated areas via MCP Servers could expose a firm to significant risks.
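To make that risk concrete, here is a minimal sketch of how an ethical-wall check might gate MCP retrieval on a per-source, per-request basis. The department names and the `WALLS` mapping are hypothetical, not any firm’s actual policy:

```python
# Departments separated by an ethical wall: a user assigned to one side
# may not pull data from the other without a documented wall-crossing.
# (Hypothetical mapping for illustration only.)
WALLS = {
    "investment_banking": {"equity_research", "sales_trading"},
    "equity_research": {"investment_banking"},
    "sales_trading": {"investment_banking"},
}

def may_query(user_dept: str, source_dept: str,
              wall_crossing_approved: bool = False) -> bool:
    """Return True if the user's department may query the source's domain."""
    blocked = source_dept in WALLS.get(user_dept, set())
    return (not blocked) or wall_crossing_approved

# An AI tool aggregating across MCP servers must apply this check
# per source, per request -- not once at login.
sources = ["equity_research", "compliance", "sales_trading"]
allowed = [s for s in sources if may_query("investment_banking", s)]
print(allowed)  # ['compliance'] -- walled-off domains are excluded
```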
Federal government data classification
The US Federal Government classifies data as Public, Confidential, Secret, Top Secret, and beyond. Access to these levels is typically granted through various security clearances and special access programs and is often compartmentalized by design.
Connecting MCP Servers to various classification domains for AI tools may lead to analyses or results that vary based on each user’s access level.
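A similar gate applies here, but along two axes: clearance level and compartment membership. Below is a rough sketch of filtering which MCP servers a query may even reach; the level ordering, server names, and compartment tags are illustrative assumptions, not actual classification policy:

```python
# Hierarchical clearance levels (illustrative ordering).
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def accessible(user_level: str, user_compartments: set,
               server_level: str, server_compartments: set) -> bool:
    """A server is reachable only if the user's clearance dominates its
    level AND the user holds every compartment the server requires."""
    return (LEVELS[user_level] >= LEVELS[server_level]
            and server_compartments <= user_compartments)

# Hypothetical MCP servers, each tagged with a level and compartments.
servers = [
    ("open-data-mcp", "public", set()),
    ("intel-mcp", "secret", {"program_x"}),
    ("sap-mcp", "top_secret", {"program_x", "program_y"}),
]

user_level, user_compartments = "secret", {"program_x"}
reachable = [name for name, lvl, comps in servers
             if accessible(user_level, user_compartments, lvl, comps)]
print(reachable)  # ['open-data-mcp', 'intel-mcp'] -- results vary per user
```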
Agentic AI in Healthcare
Agentic artificial intelligence (AI) refers to autonomous AI systems that make decisions and take actions with limited human intervention. In healthcare, such systems could gather blood work, medication lists, and personal and familial medical history to suggest new prescriptions. This raises the question, “Would my family’s history be subject to HIPAA laws?”
MCP Servers, in this context, could connect to various domains across hospitals, clinics, and third-party vendors. Securing these interactions and controlling access presents a complex, multi-step and multi-dimensional challenge to solve, particularly when considering regulations like HIPAA.
Fundamentally, the AI tool chains follow the same path: Client/Agent → RAG → LLM → MCP Server → downstream data source.
Two main challenges surround MCP
The multi-step challenge
Many organizations are publishing MCP Servers to facilitate data access. That means AI tools are being built to pull from multiple sources—from just one to hundreds of MCP Servers—creating undeniable business value in associating disparate information streams for on-demand analysis.
But, several questions arise:
- What if only some of the MCP servers are available due to uptime issues?
- What if the MCP’s target data sources are unreachable because of network issues?
- What if the requester is not authorized to access some (or any) of the MCP’s sources?
Not only do these scenarios call into question the trustworthiness of the data available to the LLMs, they also raise further concerns (one way to track the resulting gaps is sketched after this list):
- Should users have logical restrictions, only accessing a subset of MCP servers?
- How will the RAG/LLM distinguish between previously learned data, a partially fresh set of data, and data that is now blocked for business reasons?
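Here is a minimal sketch of an aggregation layer that fans a query out to several MCP servers while recording which sources actually answered. The server names, simulated outcomes, and the `fetch` stub are hypothetical stand-ins for real MCP client calls:

```python
from dataclasses import dataclass, field

# Hypothetical per-server outcomes standing in for live MCP calls.
SIMULATED = {
    "hr-mcp": ["doc-a", "doc-b"],
    "finance-mcp": PermissionError,   # requester lacks authorization
    "legal-mcp": ConnectionError,     # server or data source unreachable
}

def fetch(server: str, query: str, user_token: str) -> list:
    """Stand-in for a real MCP call; raises on outage or denial."""
    outcome = SIMULATED[server]
    if isinstance(outcome, type) and issubclass(outcome, Exception):
        raise outcome(server)
    return outcome

@dataclass
class AggregateResult:
    documents: list = field(default_factory=list)
    reached: list = field(default_factory=list)      # servers that answered
    unreachable: list = field(default_factory=list)  # down or network-blocked
    denied: list = field(default_factory=list)       # requester not authorized

    @property
    def completeness(self) -> float:
        total = len(self.reached) + len(self.unreachable) + len(self.denied)
        return len(self.reached) / total if total else 0.0

def aggregate(servers, query, user_token) -> AggregateResult:
    result = AggregateResult()
    for server in servers:
        try:
            result.documents += fetch(server, query, user_token)
            result.reached.append(server)
        except PermissionError:
            result.denied.append(server)       # authz failure: scope the answer
        except (ConnectionError, TimeoutError):
            result.unreachable.append(server)  # availability failure: flag gaps
    return result

r = aggregate(list(SIMULATED), "q3 budget summary", "token-alice")
print(f"{r.completeness:.0%} of sources answered")  # 33% -- tell the LLM, don't hide it
```

The point is the bookkeeping: a downstream LLM can then be told it is answering from one of three sources rather than silently presenting partial data as complete.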
The multi-dimensional challenge
With the emergence of MCP Servers, LLMs now pull data from an increasingly diverse set of sources. Those sources, however, are accessed via pre-existing APIs that already enforce identity authentication and authorization.
AI client tools must act within the context of the agent’s or end user’s identity through the RAG, LLM, and MCP Server chain, presenting that identity to each downstream data source. Even assuming the identity token is universally available, the problem becomes multi-dimensional: access context must be handled carefully at every layer, or it can lead to unauthorized access, data leakage, or misinformed AI outputs.
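As a sketch of what that identity propagation could look like, consider the following; `RequestContext` and the call signatures are illustrative assumptions, not a real MCP or RAG SDK:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RequestContext:
    user_id: str           # who actually asked
    identity_token: str    # e.g., an OAuth access token or signed JWT
    agent_id: str | None   # the acting agent, if any (on-behalf-of flows)

def rag_retrieve(query: str, ctx: RequestContext) -> list:
    # The RAG layer does not authenticate as itself; it forwards ctx
    # unchanged to every downstream MCP server it consults.
    return [mcp_query(server, query, ctx) for server in ("docs-mcp", "crm-mcp")]

def mcp_query(server: str, query: str, ctx: RequestContext) -> str:
    # Each MCP server re-validates the token and applies ITS OWN
    # authorization policy -- never a shared service account.
    return f"{server}: results scoped to {ctx.user_id}"

ctx = RequestContext(user_id="alice", identity_token="<jwt>", agent_id="assistant-1")
print(rag_retrieve("open incidents", ctx))
```

The design choice that matters is the absence of a shared service account: every hop acts as the end user, so every hop can be independently denied.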
A Practical Example
Suppose Alice and Bob both request the same information. The data returned varies depending on each user’s authorization level and scope of access. Yet, LLMs and RAG tools will ingest, compile, and present this data without explicit awareness of these differences.
So we must ask:
- How can LLMs ensure they present the correct scope of data depending on the end user who posed the question?
- Should this responsibility fall on the AI tool itself?
Typically, an LLM’s confidence depends on matching a client’s query against the available data, but this assumes all of the data is always available and contextually relevant—which is not always the case.
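A minimal sketch of Alice and Bob’s situation, using a hypothetical corpus and scope mapping:

```python
# Hypothetical documents, each tagged with the scope required to see it.
CORPUS = [
    {"text": "Q3 revenue grew 8%.",               "scope": "public"},
    {"text": "Pending acquisition of ExampleCo.", "scope": "deal_team"},
]

USER_SCOPES = {"alice": {"public", "deal_team"}, "bob": {"public"}}

def retrieve(question: str, user: str) -> dict:
    docs = [d["text"] for d in CORPUS if d["scope"] in USER_SCOPES[user]]
    # Tag the answer with who it was scoped for, so downstream layers
    # never present Alice's context as if it were Bob's.
    return {"user": user, "sources": docs}

print(retrieve("What's new this quarter?", "alice"))  # sees both documents
print(retrieve("What's new this quarter?", "bob"))    # sees only the public one
```

Tagging results with the user they were scoped for also matters for caching: an answer computed under Alice’s scope must never be served to Bob.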
If we don’t keep AI honest, who will?
Today, imaginative answers, also known as hallucinations, are a genuine concern in AI systems. But what happens when recommendations are made with insufficient, partial, or missing data? How can consumers, agents, or end-users be warned when their information is incomplete? Let’s map it back to our previous examples.
- When it comes to financial firms, users should only access the MCP servers appropriate for their role within the company, ensuring they only see the scope of data they are authorized to view.
- In government, access to MCP servers must align with a user’s security clearance. Granted, this may be a tad over-simplified, but these restrictions are critical to keeping sensitive data secure, compartmentalized, and compliant.
- Agentic AI systems used in healthcare require clear, robust metrics on both the completeness and freshness of data before making medical decisions or recommendations, as the sketch after this list illustrates. This safeguards patient privacy and strengthens compliance.
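For example, a simple data-quality gate might look like the following; the thresholds, field names, and server names are illustrative assumptions, not clinical guidance:

```python
from datetime import datetime, timedelta, timezone

def safe_to_recommend(sources: list[dict],
                      min_completeness: float = 0.9,
                      max_age: timedelta = timedelta(days=30)) -> bool:
    """Refuse to act unless enough sources answered, recently enough."""
    answered = [s for s in sources if s["reachable"]]
    completeness = len(answered) / len(sources)
    now = datetime.now(timezone.utc)
    fresh = all(now - s["retrieved_at"] <= max_age for s in answered)
    return completeness >= min_completeness and fresh

sources = [
    {"name": "lab-results-mcp", "reachable": True,
     "retrieved_at": datetime.now(timezone.utc)},
    {"name": "pharmacy-mcp", "reachable": False, "retrieved_at": None},
]
if not safe_to_recommend(sources):
    print("Escalate to a human: data is incomplete or stale.")
```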
As we move forward:
- Agents, Clients, RAGs, LLMs, and MCP Servers need to carry forward the end user’s identity to ensure the right context is applied during data retrieval.
- They also must understand the completeness of data being retrieved to prevent misinformed recommendations.
- Security teams must also ensure that access to various MCP Servers is controlled appropriately.
First, authorize. Then, enforce.
Security-minded AI practices and tooling should strongly consider private infrastructure (such as a private cloud) to strengthen security across the AI tool chain, and should ensure the appropriate contexts are used when gathering data. Here are some simple steps for each process, each followed by a brief sketch of what it could look like:
Step one: Authorization
- Identify request origin and requester
- Authorize access to appropriate sources
- Verify successful data contact
- Provide transparency throughout the tool chain on data completeness
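Tying those four steps together, a request planner might look roughly like this; `authorize` and `ping` are hypothetical stand-ins for a real policy engine and an MCP reachability check:

```python
def authorize(user: str, server: str) -> bool:
    # Step 2: consult policy (roles, walls, clearances) per source.
    # Hypothetical inline policy for illustration.
    policy = {"alice": {"docs-mcp", "crm-mcp"}, "bob": {"docs-mcp"}}
    return server in policy.get(user, set())

def ping(server: str) -> bool:
    # Step 3: verify we can actually contact the data behind the MCP.
    return server != "crm-mcp"  # simulate one outage

def plan_request(user: str, servers: list[str]) -> dict:
    # Step 1: the requester's identity arrives with the request itself.
    granted = [s for s in servers if authorize(user, s)]
    contactable = [s for s in granted if ping(s)]
    # Step 4: surface completeness to the rest of the tool chain.
    return {
        "user": user,
        "queryable": contactable,
        "denied": [s for s in servers if s not in granted],
        "unreachable": [s for s in granted if s not in contactable],
        "completeness": len(contactable) / len(servers) if servers else 0.0,
    }

print(plan_request("alice", ["docs-mcp", "crm-mcp"]))
# {'user': 'alice', 'queryable': ['docs-mcp'], ..., 'completeness': 0.5}
```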
Step two: Enforcement
- Ensure trust at every communication stage across the tool chain
- Continuously confirm authorization for each request
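Enforcement then means re-verifying at every hop, not once at the edge. A toy sketch follows, where `verify` stands in for real token validation (signature, expiry, audience) and the hop names are illustrative:

```python
import time

def verify(token: dict, hop: str) -> None:
    """Each stage independently checks the token before doing any work."""
    if token["expires_at"] < time.time():
        raise PermissionError(f"{hop}: token expired; re-authenticate")
    if hop not in token["audience"]:
        raise PermissionError(f"{hop}: token not valid for this stage")

def run_chain(token: dict, query: str) -> str:
    for hop in ("client", "rag", "llm", "mcp-server", "data-source"):
        verify(token, hop)  # continuous authorization: per request, per hop
    return f"result for {query}"

token = {
    "subject": "alice",
    "audience": {"client", "rag", "llm", "mcp-server", "data-source"},
    "expires_at": time.time() + 300,  # short-lived by design
}
print(run_chain(token, "open incidents"))
```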
Together, authorization and enforcement form the backbone of a secure AI tool chain, but even with these controls in place, there are still gaps the industry needs to address.
What the industry needs next
AI tool chains must reliably capture, carry, and forward user identity, ensuring the right context is preserved every time data is accessed or analyzed.
With the emergence of MCP servers and distributed data sources, organizations need stronger confidence measures—signals that help LLMs, RAGs, agents, and clients understand the completeness and reliability of the data they’re working with. Without this clarity, even well-secured systems can drift into partial results and blind spots.
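One possible shape for such a signal is a small structure that travels with every retrieval result; the field names below are assumptions, not an existing standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConfidenceSignal:
    sources_queried: int     # how many MCP servers we tried
    sources_answered: int    # how many actually returned data
    identity_verified: bool  # was the end user's token honored end-to-end?
    stale_sources: int = 0   # answered, but past a freshness threshold

    def summary(self) -> str:
        pct = self.sources_answered / max(self.sources_queried, 1)
        return (f"{pct:.0%} of sources answered; "
                f"{self.stale_sources} stale; "
                f"identity {'verified' if self.identity_verified else 'NOT verified'}")

print(ConfidenceSignal(5, 3, True, 1).summary())
# 60% of sources answered; 1 stale; identity verified
```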
This level of maturity won’t come from a single tool or control—it needs coordinated efforts across identity, infrastructure, and AI teams.
For more on how Broadcom’s Identity Management group tackles these multi-step and multi-dimensional challenges head on, catch the next installment of our series.