Back to Blog
Cloud Strategy

One Click, Full Exfiltration: What the Reprompt Attack Teaches Us About AI Security

Leon Godwin
10 March 2026

The Challenge

You deploy Microsoft Copilot to help your team work faster. Summarise documents, answer questions, pull up files. Standard productivity play.

But what happens when someone clicks a legitimate-looking copilot.microsoft.com link — and that single click gives an attacker silent, persistent access to everything Copilot can see?

That's exactly what Varonis Threat Labs demonstrated with "Reprompt," a now-patched attack chain against Microsoft Copilot Personal. And while the specific vulnerability is fixed, the techniques behind it should make every IT leader rethink how they assess AI tool security.

The attack needed no plugins. No user interaction with Copilot beyond that first click. No connectors. Just a crafted URL and a server waiting to receive whatever Copilot could access.

What's Changed

Reprompt worked by chaining three techniques that individually seem minor but together created a devastating exfiltration pipeline.

The entry point: URL parameter injection. Copilot's q parameter lets developers pre-populate prompts via URL — a convenience feature for automation. Reprompt weaponised it. An attacker crafts a URL like copilot.microsoft.com/?q=[malicious prompt], sends it to the target via email or messaging, and Copilot executes the embedded instructions the moment the page loads.

The bypass: double-request trick. Copilot's data-leak protections check the first outbound request but not subsequent ones. By simply telling Copilot to "double-check yourself" and repeat each action twice, the attacker bypassed these safeguards entirely. The second request went through unchecked.

The persistence: server-side chain prompting. Once the initial prompt executed, the attacker's server issued follow-up instructions based on Copilot's responses. Each answer generated the next question. "Summarise all files the user accessed today." "Where does the user live?" "What vacations do they have planned?" The chain continued even after the user closed the Copilot chat window, because the session remained valid.

This is a fundamentally different threat model from earlier AI attacks like EchoLeak. Previous prompt injection attacks required the user to actively type something into the AI. Reprompt needed nothing beyond a click. The real instructions were hidden server-side, making client-side monitoring tools blind to what was being extracted.

Getting Started

Microsoft patched Reprompt in the January 2026 Patch Tuesday update. Enterprise Microsoft 365 Copilot was not affected — the vulnerability existed in Copilot Personal. But patching one hole doesn't close the class of vulnerability.

Here's what you should be doing now:

Audit your AI tool deployment. Map which AI assistants are accessible to your users — not just enterprise-managed ones, but personal tools they might access on corporate devices. Copilot Personal, ChatGPT, Perplexity. If it processes URLs or accepts pre-populated prompts, it's a potential attack surface.

Review URL parameter handling. The q parameter technique isn't unique to Copilot. Researchers from Tenable found similar vectors in ChatGPT, and Layerx Security found them in Perplexity. Any AI tool that accepts prompt injection via URL parameters needs scrutiny.

Implement session isolation controls. Reprompt exploited the fact that Copilot sessions persisted after the chat closed. Evaluate whether your AI deployments enforce session timeouts and whether external URL fetching is appropriately restricted.

Ensure January 2026 patches are deployed. Check that all endpoints have the January Patch Tuesday updates installed. Malwarebytes specifically flagged this update as the fix for the Reprompt vulnerability.

Educate users on AI-specific phishing. Traditional phishing awareness doesn't cover AI prompt injection attacks. Users need to understand that clicking a link to a legitimate AI service can still be an attack vector if the URL contains embedded instructions.

What This Means

Reprompt is a signal, not an anomaly. As AI assistants gain access to more of our data — emails, files, calendars, browsing history — the value of compromising them grows. We're building systems where a single point of access can expose everything.

The good news: Microsoft's separation between Personal and Enterprise Copilot security models meant that M365 Copilot users were insulated from this specific attack. That architectural decision matters.

The harder truth: every AI tool that accepts external input, fetches URLs, or maintains persistent sessions is a potential exfiltration channel. And traditional security tooling — endpoint detection, network monitoring, DLP — wasn't designed for a world where the data leaves through a legitimate chatbot response to a legitimate service.

Securing AI isn't just about prompt filtering. It's about rethinking what trust looks like when your productivity tools can be turned into data exfiltration engines with a single click.


Leon Godwin, Principal Cloud Evangelist at Cloud Direct