Researchers Find 30+ Security Flaws In AI Coding Tools

Security researchers have uncovered more than 30 serious vulnerabilities across a range of AI-powered coding tools and IDE extensions that could be weaponized for data theft, configuration tampering, and remote code execution (RCE).

The newly identified vulnerability class, dubbed “IDEsaster” by Ari Marzouk (MaccariTA), the researcher behind the discovery, shows how AI agents can be manipulated through prompt injection to misuse legitimate IDE features.

The vulnerabilities span popular tools such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, Cline, Claude Code, and Gemini CLI. Researchers found that 100% of tested AI IDEs were vulnerable, with 24 CVEs assigned so far.

“I think the fact that multiple universal attack chains affected each and every AI IDE tested is the most surprising finding of this research,” Marzouk told The Hacker News.

How IDEsaster Works

The IDEsaster chain relies on three components:

  • Prompt injection – manipulating an AI model by feeding hidden or malicious instructions inside files, URLs, or text that the user might not notice.
  • Auto-approved AI tool actions – many AI agents are allowed to read or change files without first getting permission from the user.
  • Legitimate IDE features – features that normally help developers, such as reading JSON parsing config files, or other routine IDE functions, can be exploited once an AI agent is tricked.

Unlike earlier AI-related vulnerabilities that depended on buggy tools, IDEsaster exploits legitimate IDE capabilities — turning normal development features into pathways for data exfiltration or RCE.

Real-World Exploit Scenarios

Researchers demonstrated several high-impact attacks. Some of the most serious flaws include:

  1. Remote JSON Schema (e.g., CVE-2025-49150, CVE-2025-53097): Attackers can force the IDE to fetch a remote schema containing sensitive data, sending it to an attacker-controlled domain.
  2. IDE Settings Overwrite (e.g., CVE-2025-53773, CVE-2025-54130): A prompt injection can edit IDE configuration files like “.vscode/settings.json or “.idea/workspace.xml so that the IDE executes a malicious file.
  3. Multi-Root Workspace Settings (e.g., CVE-2025-64660): Attackers can alter workspace settings to load writable executable files and run malicious code automatically.

In all cases, these attacks require no user interaction once the malicious prompt is processed. The entire process can happen without the user noticing — and without reopening or refreshing the project.

Why AI Makes IDEs More Vulnerable

The core issue is that LLMs cannot reliably distinguish between normal content and embedded malicious instructions. A single poisoned file name, diff output, or pasted URL can manipulate the model.

“Any repository using AI for issue triage, PR labeling, code suggestions, or automated replies is at risk of prompt injection, command injection, secret exfiltration, repository compromise and upstream supply chain compromise,” Aikido researcher Rein Daelman warned.

Marzouk stressed that the industry must adopt a new mindset — Secure for AI” — meaning developers must anticipate how AI-driven features could be abused in the future, not just how they function today.

How Developers Can Protect Themselves

Researchers suggest several precautions for developers using AI IDEs:

  • Only work with trusted projects, files, and repositories
  • Connect only to trusted Model Context Protocol (MCP) servers and monitor them continuously for changes
  • Review URLs and external sources for hidden instructions or characters
  • Always configure the AI agent to require a human-in-the-loop whenever possible

For developers building AI IDEs, experts urge:

  • Applying least privilege principles to LLM tools
  • Continuously monitoring old and new IDE features for potential attack vectors
  • Assuming prompt injection is always possible, and the agent can be breached
  • Reducing prompt injection vectors
  • Hardening system prompts and limiting LLM selection
  • Sandboxing command execution
  • Adding egress controls to prevent unauthorized data transfer
  • Testing tools for path traversal, information leakage, and command injection

With millions of developers now relying on AI-powered IDEs, the push to embed security into their design has never been more urgent.

 

Kavita Iyer
Kavita Iyerhttps://www.techworm.net
An individual, optimist, homemaker, foodie, a die hard cricket fan and most importantly one who believes in Being Human!!!
spot_img

Read More

Suggested Post