Gemini AI Flaw Allowed Calendar Data Leaks Via Malicious Invites

Cybersecurity researchers have discovered a vulnerability in Google’s Gemini AI assistant that allowed attackers to leak private Google Calendar data using nothing more than carefully worded calendar invites.

The flaw was discovered by the researchers at Miggo Security, who demonstrated how Gemini’s deep integration with Google Calendar could be exploited.

Gemini, Google’s large language model (LLM) assistant, is designed to help users manage schedules, summarize meetings, and answer questions like “What does my day look like?” To do this, it automatically reads and interprets calendar event details, like titles, descriptions, times, and attendees. That convenience became the attack surface.

How The Attack Worked

The attack began with a seemingly harmless calendar sent to a target user. Inside the event’s description, researchers embedded natural language instructions that appeared harmless and plausible to manipulate Gemini — a technique known as indirect prompt injection.

The malicious instructions stayed dormant until the victim asked Gemini a routine question about their schedule. When the AI assistant scanned all relevant calendar entries to respond, including the attacker’s invite, it unknowingly followed the hidden instructions.

According to Miggo, the embedded prompt instructed Gemini to:

  • Summarize all meetings on a specific day, including private ones
  • Create a new calendar event containing that summary
  • Respond to the user with a harmless message, such as “It’s a free time slot.”

From the user’s perspective, nothing seemed wrong. Behind the scenes, however, the AI created a new calendar event containing sensitive meeting details and placed a summary of the user’s private meetings in its description.

In many enterprise environments, the newly created event was visible to other participants — including the attacker — effectively leaking sensitive calendar data silently, without requiring any direct action from the victim.

Why The Exploit Was Hard To Detect

Unlike traditional cyberattacks, this exploit did not rely on obvious malicious code or suspicious strings. The embedded instructions in the calendar description looked like something a real user might reasonably ask Gemini to do.

“The payload was syntactically innocuous, meaning it was plausible as a user request. However, it was semantically harmful, as we’ll see, when executed with the model tool’s permissions,” Miggo researchers explained in a blog post.

Security researchers say this highlights a fundamental challenge with AI-powered systems. Traditional security tools focus on syntax — identifying known malicious strings, such as SQL injections or cross-site scripting. In contrast, AI vulnerabilities can be hidden in plain language, with the danger only emerging when context and permissions come into play.

In this case, Gemini acted not just as a chatbot, but as an application layer with access to powerful tools and APIs — including the ability to create and modify calendar events.

Google’s Response

Miggo responsibly disclosed the vulnerability to Google, which confirmed the findings and has since deployed mitigations to block similar attacks. The search giant had already added protections following earlier research in 2025 that demonstrated prompt injection attacks using calendar invites, but Miggo’s work showed that reasoning-based manipulation was still possible.

While the specific issue has been addressed, researchers say the incident highlights a deeper challenge in securing AI-powered applications.

Bottom Line

As AI tools become more deeply embedded in everyday workflows, vulnerabilities are increasingly found not in code, but in language, context, and AI behavior at runtime. Security experts warn that defending against such threats will require moving beyond keyword-based defenses toward systems that understand intent, permissions, and data flow at runtime.

The Gemini incident serves as a reminder that, as AI becomes more powerful, convenience and automation must be balanced with new approaches to application security.

 

Kavita Iyer
Kavita Iyerhttps://www.techworm.net
An individual, optimist, homemaker, foodie, a die hard cricket fan and most importantly one who believes in Being Human!!!
spot_img

Read More

Suggested Post