Tag

Prompt Injection

All articles tagged with #prompt injection

technology3 hours ago

Single-click prompt exploit drains Copilot Personal data in stealthy stages

Security researchers demonstrated a one-click, multistage prompt-injection attack against Copilot Personal that exfiltrated user data from chat histories, even after the chat was closed. The exploit used a malicious URL parameter and bypassed some endpoint protections by triggering repeated requests (“reprompt”), exposing names, locations, and event details. Microsoft has patched the flaw, with Copilot Personal affected but not Microsoft 365 Copilot.

security8 hours ago

Reprompt flaw lets attackers hijack Copilot sessions via malicious prompts

Researchers exposed 'Reprompt', a flaw that injects commands via Copilot's URL q parameter to hijack an authenticated session and exfiltrate data, using P2P injection, double-request, and chain-request techniques; Microsoft patched the vulnerability on January 2026 Patch Tuesday, mainly affecting Copilot Personal rather than Microsoft 365 Copilot, and users should apply the latest Windows updates.

technology2 months ago

OpenAI's New AI Browser and Its Future Impact on Web and Healthcare

Cybersecurity researchers have discovered serious prompt injection vulnerabilities in OpenAI's new AI browser Atlas, particularly in its agent mode and omnibox, which could allow hackers to execute harmful commands or access sensitive data. Experts recommend stricter URL validation to prevent such attacks, highlighting ongoing security challenges in AI-powered browsers.

technology2 months ago

OpenAI's Atlas Enhances ChatGPT, Raising Security and Web Integration Concerns

Researchers at LayerX discovered a vulnerability in OpenAI's Atlas browser that allows attackers to inject malicious prompts into ChatGPT's memory via cross-site request forgery, posing significant security risks, especially for Atlas users who are more exposed to phishing and prompt injection attacks. The exploit can persist across devices and browsers, potentially leading to malicious activities or data theft.

technology2 months ago

OpenAI’s ChatGPT Atlas Faces Security Risks Amid Web Enhancements

Cybersecurity experts warn that OpenAI's ChatGPT Atlas, an AI browser with new features like memory and agent mode, faces significant security risks including prompt injection attacks that could lead to data theft, malware downloads, and other malicious activities. Despite mitigation efforts by OpenAI, the attack surface is expanding, raising concerns about privacy, data sharing, and user safety as these AI tools become more integrated into internet browsing.

technology2 months ago

OpenAI Launches ChatGPT Atlas to Revolutionize Web Interaction and Security

OpenAI's Atlas browser, which integrates ChatGPT as an AI agent, has been shown to be vulnerable to indirect prompt injection attacks, raising concerns about AI security and the need for better safeguards. Despite OpenAI's efforts to mitigate these risks, security researchers demonstrate that prompt injection remains a significant and ongoing challenge in AI-powered systems.

technology4 months ago

Emerging AI Threats: Malware and Data Theft via Image and Prompt Attacks

Researchers have discovered a method where hackers hide malware in images served by large language models, exploiting image downscaling techniques like bicubic interpolation to reveal hidden instructions, posing significant security risks for AI-integrated systems. Users are advised to implement layered security measures and cautious input handling to mitigate these threats.

technology4 months ago

Anthropic Launches Claude AI Chrome Extension Amid Browser Security Concerns

Anthropic's AI Chrome extension, designed to automate tasks, has significant security vulnerabilities with a 23.6% attack success rate, reduced to 11.2% with safety measures. Experts warn that these risks, including prompt injection and malicious instructions, pose serious security concerns, and current protections are insufficient, placing the burden of security on users.

technology4 months ago

Hidden Data-Theft Prompts Exploit AI Image Resizing

Researchers have discovered a new AI attack that embeds hidden instructions in images through downscaling, which can lead to data theft and unauthorized actions when processed by AI systems. The attack exploits artifacts created during image resampling to hide malicious prompts that are interpreted by AI models, potentially compromising user data and system integrity. Mitigation strategies include imposing image dimension limits, providing preview feedback, and requiring user confirmation for sensitive operations. The researchers also released an open-source tool to demonstrate the attack.

technology5 months ago

Google AI email summaries vulnerable to phishing hacks

Researchers have discovered a vulnerability in Google's Gemini AI used in Workspace that allows attackers to embed hidden commands in email summaries, potentially leading to phishing attacks. Google is working on defenses, but users are advised to verify AI-generated content, avoid using summaries for suspicious emails, keep software updated, and consider disabling Gemini summaries temporarily to stay safe.