AI Vulnerabilities - MCP Git Serve & Copilot's Reprompt

The "Confused Deputy": Inside the Anthropic MCP Git Server Exploit

A critical security flaw has been uncovered in the official Model Context Protocol (MCP) Git server, exposing a dangerous intersection between agentic AI and supply chain security. Researchers identified three distinct vulnerabilities (CVE-2025-68143, CVE-2025-68144, CVE-2025-68145) in Anthropic’s reference implementation, mcp-server-git, which allows AI assistants like Claude Desktop or Cursor to interface with Git repositories. By chaining these flaws, an attacker can achieve full Remote Code Execution (RCE) on a developer's machine simply by asking the AI to summarize a malicious repository. This "Zero-Click" style attack highlights the fragility of current tool-use safeguards when facing indirect prompt injection.

The technical mechanics of this attack are a textbook example of the "confused deputy" problem. The attack relies on placing hidden instructions within a repository’s text files (such as a README.md or issue ticket). When the LLM ingests this context, it unknowingly follows the malicious instructions to trigger the vulnerable tools. Specifically, the exploit chains a path traversal flaw to bypass allowlists, an unrestricted git_init command to create repositories in arbitrary locations, and argument injection in git_diff to execute shell commands. Essentially, the AI is tricked into modifying its own environment—writing malicious Git configurations—under the guise of performing standard version control tasks.

This discovery serves as a stark warning for the rapidly growing ecosystem of AI agents and MCP architecture. While the vulnerabilities have been patched in the latest versions, they demonstrate that "human-in-the-loop" approval mechanisms can be bypassed if the agent itself is compromised before presenting a plan to the user. For developers and security engineers, this reinforces the need for strict sandboxing of MCP servers; granting an AI agent direct access to local system tools requires treating the agent's context window as an untrusted input vector, much like a traditional SQL injection point.

Reprompt: Understanding the Single-Click Vulnerability in Microsoft Copilot

Reprompt is a newly disclosed AI security vulnerability affecting Microsoft Copilot that researchers say enables single-click data theft: a user only needs to click a crafted Copilot link for the attack to start. Varonis Threat Labs reported the issue in January 2026, highlighting how it can silently pull sensitive information without requiring plugins or complicated user interaction.

What makes Reprompt notable is its use of Copilot’s q URL parameter to inject instructions directly into the assistant’s prompt flow. Researchers described a “double-request” technique—prompting the assistant to perform the same action twice—where the second run can bypass protections that block the first attempt. After that foothold, “chain-request” behavior can let an attacker continue steering the session through dynamic follow-up instructions from an attacker-controlled server, enabling stealthy, iterative data extraction even if the user closes the chat.

The risk is amplified because it can operate without add-ons, meaning it can succeed in environments where defenders assume “no plugin” equals “lower risk.” Reports noted the exposure was primarily tied to Copilot Personal, while Microsoft 365 Copilot enterprise customers were described as not affected. Microsoft has since patched the vulnerability as of mid-January 2026, but Reprompt is a useful reminder that LLM apps need URL/prompt hardening, stronger guardrails against multi-step bypass patterns, and careful controls on what authenticated assistants can access by default.