Description

If you think Copilot only shows what you’ve already got permission to see—think again. One wrong Graph permission and suddenly your AI can surface data your compliance team never signed off on. The scary part? You might never even realize it’s happening.

In this video, I’ll break down the real risks of unmanaged Copilot access—how sensitive files, financial spreadsheets, and confidential client data can slip through. Then I’ll show you how to lock it down using Graph permissions, DLP policies, and Purview—without breaking productivity for the people who actually need access.

When Copilot Knows Too Much

A junior staffer asks Copilot for notes from last quarter’s project review, and what comes back isn’t a tidy summary of their own meeting—it’s detailed minutes from a private board session, including strategy decisions, budget cuts, and names that should never have reached that person’s inbox. No breach alerts went off. No DLP warning. Just an AI quietly handing over a document it should never have touched.

This happens because Copilot doesn’t magically stop at a user’s mailbox or OneDrive folder. Its reach is dictated by the permissions it’s been granted through Microsoft Graph. And Graph isn’t just a database—it’s the central point of access to nearly every piece of content in Microsoft 365. SharePoint, Teams messages, calendar events, OneNote, CRM data tied into the tenant—it all flows through Graph if the right door is unlocked. That’s the part many admins miss.

There’s a common assumption that if I’m signed in as me, Copilot will only see what I can see. Sounds reasonable. The problem is, Copilot itself often runs with a separate set of application permissions. If those permissions are broader than the signed-in user’s rights, you end up with an AI assistant that can reach far more than the human sitting at the keyboard.
And in some deployments, those elevated permissions are handed out without anyone questioning why.

Picture a financial analyst working on a quarterly forecast. They ask Copilot for “current pipeline data for top 20 accounts.” In their regular role, they should only see figures for a subset of clients. But thanks to how Graph has been scoped in Copilot’s app registration, the AI pulls the entire sales pipeline report from a shared team site that the analyst has never had access to directly. From an end-user perspective, nothing looks suspicious. But from a security and compliance standpoint, that’s sensitive exposure.

Graph API permissions are effectively the front door to your organization’s data. Microsoft splits them into delegated permissions—acting on behalf of a signed-in user—and application permissions, which allow an app to operate independently. Copilot scenarios often require delegated permissions for content retrieval, but certain features, like summarizing a Teams meeting the user wasn’t in, can prompt admins to approve application-level permissions. And that’s where the danger creeps in. Application permissions ignore individual user restrictions unless you deliberately scope them.

These approvals often happen early in a rollout. An IT admin testing Copilot in a dev tenant might click “Accept” on a permission prompt just to get through setup, then replicate that configuration in production without reviewing the implications. Once in place, those broad permissions remain unless someone actively audits them. Over time, as new data sources connect into M365, Copilot’s reach expands without any conscious decision. That’s silent permission creep—no drama, no user complaints, just a gradual widening of the AI’s scope.

The challenge is that most security teams aren’t fluent in which Copilot capabilities require what level of Graph access.
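One way to build that fluency is to turn the delegated-versus-application distinction into a small review script. The sketch below is illustrative, not a Microsoft tool: it assumes you have exported an app registration’s granted permissions from Entra ID into a simple list of scope/type pairs, and the watch list of broad application scopes is a starting-point assumption you would tune to your tenant.

```python
# Minimal sketch: flag application-level Graph permissions for manual review.
# The scope names are real Graph permission names; the export format
# (a list of dicts) is an assumed, simplified shape for illustration.

# Application permissions with tenant-wide reach, regardless of the
# signed-in user's own rights (assumed watch list; extend as needed).
BROAD_APP_SCOPES = {
    "Files.Read.All",
    "Sites.Read.All",
    "Chat.Read.All",
}

def review_grants(grants):
    """Return grants that deserve a manual review: application-type
    permissions whose scope is tenant-wide."""
    return [
        g for g in grants
        if g["type"] == "Application" and g["scope"] in BROAD_APP_SCOPES
    ]

# Example export from a hypothetical Copilot app registration.
grants = [
    {"scope": "Files.Read",     "type": "Delegated"},    # user-scoped: fine
    {"scope": "Sites.Read.All", "type": "Application"},  # tenant-wide: flag
    {"scope": "Chat.Read.All",  "type": "Application"},  # tenant-wide: flag
]

for g in review_grants(grants):
    print(f"REVIEW: {g['scope']} granted at application level")
```

The point of a script like this is less the code than the habit: every application-level grant gets an explicit justification on record, instead of surviving from that first “Accept” click.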
They might see “Read all files in SharePoint” and assume it’s constrained by user context, not realizing that the permission is tenant-wide at the application level. Without mapping specific AI scenarios to the minimum necessary permissions, you end up defaulting to whatever was approved in that initial setup. And the broader those rights, the bigger the potential gap between expected and actual behavior.

It’s also worth remembering that Copilot’s output doesn’t come with a built-in “permissions trail” visible to the user. If the AI retrieves content from a location the user would normally be blocked from browsing, there’s no warning banner saying “this is outside your clearance.” That lack of transparency makes it easier for risky exposures to blend into everyday workflows.

The takeaway here is that Graph permissions for AI deployments aren’t just another checkbox in the onboarding process—they’re a design choice that shapes every interaction Copilot will have on your network. Treat them like you would firewall rules or VPN access scopes: deliberate, reviewed, and periodically revalidated. Default settings might get you running quickly, but they also assume you’re comfortable with the AI casting a much wider net than the human behind it. Now that we’ve seen how easily the scope can drift, the next question is how to find those gaps before they turn into a full-blown incident.

Finding Leaks Before They Spill

If Copilot were already surfacing data it shouldn’t, would you even notice? For most organizations, the honest answer is no. It’s not that the information would be posted on a public site or blasted to a mailing list. The leak might show up quietly inside a document draft, a summary, or an AI-generated answer—and unless someone spots something unusual, it slips by without raising alarms.

The visibility problem starts with how most monitoring systems are built.
They’re tuned for traditional activities—file downloads, unusual login locations, large email sends—not for the way an AI retrieves and compiles information. Copilot doesn’t “open” files in the usual sense. It queries data sources through Microsoft Graph, compiles the results, and presents them as natural language text. That means standard file access reports can look clean, while the AI is still drawing from sensitive locations in the background.

I’ve seen situations where a company only realized something was wrong because an employee casually mentioned a client name that wasn’t in their department’s remit. When the manager asked how they knew that, the answer was, “Copilot included it in my draft.” There was no incident ticket, no automated alert—just a random comment that led IT to investigate. By the time they pieced it together, those same AI responses had already been shared around several teams.

Microsoft 365 gives you the tools to investigate these kinds of scenarios, but you have to know where to look. Purview’s Audit feature can record Copilot’s data access in detail—it’s just not labeled with a big flashing “AI” badge. Once you’re in the audit log search, you can filter by the specific operations Copilot uses, like `SearchQueryPerformed` or `FileAccessed`, and narrow that down by the application ID tied to your Copilot deployment. That takes a bit of prep: you’ll want to confirm the app registration details in Entra ID so you can identify the traffic.

From there, it’s about spotting patterns. If you see high-volume queries from accounts that usually have low data needs—like an intern account running ten complex searches in an hour—that’s worth checking. Same with sudden spikes in content labeled “Confidential” showing up in departments that normally don’t touch it.
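That filtering step can be prototyped offline against an exported audit log. A minimal sketch, assuming the records have been exported to JSON: the record shape here is simplified for illustration (real unified audit records carry many more fields), and the app ID is a placeholder you would replace with your own Copilot deployment’s app registration ID from Entra ID.

```python
import json

# Operations mentioned above; the app ID below is a made-up placeholder
# for the Copilot app registration in your tenant.
COPILOT_APP_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
WATCHED_OPS = {"SearchQueryPerformed", "FileAccessed"}

def copilot_events(records):
    """Keep only records tied to the Copilot app ID and watched operations."""
    return [
        r for r in records
        if r.get("AppId") == COPILOT_APP_ID and r.get("Operation") in WATCHED_OPS
    ]

# Simplified sample export for illustration.
sample = json.loads("""[
  {"Operation": "FileAccessed",
   "AppId": "00000000-0000-0000-0000-000000000000",
   "UserId": "intern@contoso.com"},
  {"Operation": "MailItemsAccessed",
   "AppId": "00000000-0000-0000-0000-000000000000",
   "UserId": "cfo@contoso.com"},
  {"Operation": "SearchQueryPerformed",
   "AppId": "11111111-1111-1111-1111-111111111111",
   "UserId": "someone@contoso.com"}
]""")

for event in copilot_events(sample):
    print(event["Operation"], event["UserId"])
```

The same two-filter idea (operation plus app ID) carries over directly to the audit log search UI or an exported report: without the app ID filter, Copilot’s traffic is indistinguishable from every other Graph client in the tenant.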
Purview can flag label activity, so if a Copilot query pulls in a labeled document, you’ll see it in the logs, even if the AI didn’t output the full text.

Role-based access reviews are another way to connect the dots. By mapping which people actually use Copilot, and cross-referencing with the kinds of data they interact with, you can see potential mismatches early. Maybe Finance is using Copilot heavily for reports, which makes sense—but why are there multiple Marketing accounts hitting payroll spreadsheets through AI queries? Those reviews give you a broader picture beyond single events in the audit trail.

The catch is that generic monitoring dashboards won’t help much here. They aggregate every M365 activity into broad categories, which can cause AI-specific behavior to blend in with normal operations. Without creating custom filters or reports focused on your Copilot app ID and usage patterns, you’re basically sifting for specific grains of sand in a whole beach’s worth of data. You need targeted visibility, not just more visibility.

It’s not about building a surveillance culture; it’s about knowing, with certainty, what your AI is actually pulling in. A proper logging approach answers three critical questions: What did Copilot retrieve? Who triggered it? And did that action align with your existing security and compliance policies? Those answers let you address issues with precision—whether that means adjusting a permission, refining a DLP rule, or tightening role assignments. Without that clarity, you’re left guessing, and guessing is not a security strategy.

So rather than waiting for another “casual comment” moment to tip you off, it’s worth investing the time to structure your monitoring so Copilot’s footprint is visible and traceable. This way, any sign of data overexposure becomes a managed event, not a surprise. Knowing where the leaks are is only the first step.
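The pattern-spotting described above reduces to simple aggregation once Copilot’s events are extracted. A sketch under stated assumptions: the event shape, the per-user query threshold, and the sample accounts are all illustrative; in practice you would set thresholds per role and feed in real audit data.

```python
from collections import Counter

# Assumed baseline: more than this many Copilot queries in the review
# window is "unusual" for a low-data-need account. Tune per role.
QUERY_THRESHOLD = 10

def flag_heavy_users(events, threshold=QUERY_THRESHOLD):
    """Return users whose query volume exceeds the threshold, sorted."""
    counts = Counter(e["user"] for e in events)
    return sorted(u for u, n in counts.items() if n > threshold)

def labeled_hits(events, label="Confidential"):
    """Return events where a query touched a document with this label."""
    return [e for e in events if e.get("label") == label]

# Illustrative sample: an intern account with unusual volume, and one
# query that pulled in a labeled document.
events = (
    [{"user": "intern@contoso.com"} for _ in range(12)]
    + [{"user": "analyst@contoso.com", "label": "Confidential"}]
)

print(flag_heavy_users(events))   # high-volume accounts to review
print(len(labeled_hits(events)))  # labeled-document touches to review
```

Both checks answer one of the three questions above (what was retrieved, and by whom); whether the action aligned with policy is the human review step that follows.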
The real goal is making sure they can’t happen again—and that’s where the right guardrails come in.

Guardrails That Actually Work

DLP isn’t just for catching emails with credit card numbers in them. In the context of Copilot, it can be the tripwire that stops sensitive data from slipping into an AI-generated answer that gets pasted into a Teams chat or exported into a document leaving your tenant. It’s still the same underlying tool in Microsoft 365, but the way you configure it for AI scenarios needs a different mindset.

The gap is that most organizations’ DLP policies are still written with old-school triggers in mind—email attachments, file downloads to USB drives, co

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support.