Microsoft has acknowledged that a software bug allowed its AI assistant, Microsoft 365 Copilot Chat, to summarize confidential emails — even when Data Loss Prevention (DLP) policies were in place to block such access.
The issue, tracked internally as CW1226324, was first detected on January 21 and was initially reported by BleepingComputer.
For those unaware, Microsoft 365 Copilot Chat is an AI-powered assistant integrated into workplace apps such as Word, Excel, PowerPoint, Outlook, and OneNote. It allows users to ask questions, summarize documents, and generate content directly within those tools.
What Went Wrong
According to Microsoft, the bug caused Copilot’s “work tab” chat feature to incorrectly read and summarize emails stored in users’ Sent Items and Drafts folders — including messages labeled as confidential.
These emails were protected by sensitivity labels and DLP policies, which are designed to stop sensitive data from being accessed or shared inappropriately by automated systems. However, due to what Microsoft described as a “code issue,” those protections were not properly enforced in this instance.
In simple terms, even when Copilot was told not to read certain emails, the AI assistant summarized content it was supposed to ignore.
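For readers unfamiliar with how these protections are set up, the sketch below shows the general shape of a Microsoft Purview DLP policy and rule created through Security & Compliance PowerShell. It uses a generic built-in sensitive information type as the example condition; the Copilot-specific exclusion at issue in this incident is configured through Purview’s sensitivity-label and Copilot controls rather than this exact rule, so treat the snippet as illustrative only.

```powershell
# Illustrative sketch only: the general shape of a Purview DLP policy and rule.
# The condition and action below are generic examples and do not reproduce the
# Copilot-specific exclusion described in this article.

# Connect to Security & Compliance PowerShell (ExchangeOnlineManagement module).
Connect-IPPSSession

# Create a DLP policy scoped to all Exchange mailboxes.
New-DlpCompliancePolicy -Name "Protect Confidential Mail" `
    -ExchangeLocation All `
    -Mode Enable

# Add a rule that blocks access when content matches a built-in sensitive
# information type. Real deployments would typically key off sensitivity
# labels rather than this sample condition.
New-DlpComplianceRule -Name "Block Sensitive Mail Content" `
    -Policy "Protect Confidential Mail" `
    -ContentContainsSensitiveInformation @{Name = "Credit Card Number"} `
    -BlockAccess $true
```

In the incident Microsoft describes, policies of this kind remained configured correctly; the problem was that Copilot’s chat feature did not honor them for content in Drafts and Sent Items.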
Was Sensitive Data Exposed?
Microsoft has stressed that the issue did not expose data to unauthorized individuals. In a statement, a Microsoft spokesperson said:
“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop. This did not provide anyone access to information they weren’t already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access. A configuration update has been deployed worldwide for enterprise customers.”
In other words, Copilot surfaced information only to users who already had access to those emails — but it still processed content that should have been excluded from AI summarization.
Fix Rolled Out
Microsoft said it began rolling out a fix in early February and has since deployed a configuration update worldwide for enterprise customers. The company continues to monitor the rollout and is contacting some affected customers to confirm the issue is fully resolved.
However, the Redmond giant has not disclosed how many organizations or users were affected, nor has it provided a final timeline for full remediation.
The incident has been categorized as an “advisory,” which typically indicates limited scope or impact.
Growing AI Security Concerns
The incident comes at a time when businesses are rapidly integrating generative AI tools into everyday workflows, drawn by promises of greater speed and productivity. However, experts warn that these systems also introduce new security risks, particularly because AI assistants process vast amounts of sensitive corporate data.
Although Microsoft says the issue has now been fixed, the Copilot bug serves as a cautionary reminder that enterprise AI tools must be carefully governed. As organizations continue adopting AI at scale, ensuring strict confidentiality safeguards will remain an ongoing challenge.
