
Single-click prompt exploit drains Copilot Personal data in stealthy stages
Security researchers demonstrated a one-click, multistage prompt-injection attack against Copilot Personal that exfiltrated user data from chat histories, even after the chat was closed. The exploit used a malicious URL parameter and bypassed some endpoint protections by triggering repeated requests (“reprompt”), exposing names, locations, and event details. Microsoft has patched the flaw, with Copilot Personal affected but not Microsoft 365 Copilot.











