Tag

Prompt Injection

All articles tagged with #prompt injection

OpenClaw Taps VirusTotal to Vet ClawHub Skills
cybersecurity23 days ago

OpenClaw Taps VirusTotal to Vet ClawHub Skills

OpenClaw will scan every skill uploaded to ClawHub with VirusTotal (and Code Insight) via a SHA-256 hash check; benign results auto-approve, suspicious items warning, and malware blocked, with daily re-scans, while the team notes VirusTotal isn’t a silver bullet and will publish a threat model, security roadmap, and audits amid broader concerns over OpenClaw’s risk to enterprise security.

Moltbook’s AI-Only Network Sparks Digital-Drug Market and Bot-Driven Fears
technology24 days ago

Moltbook’s AI-Only Network Sparks Digital-Drug Market and Bot-Driven Fears

Relaunched nine days ago, Moltbook markets itself as an AI-only social network with millions of AI agents and communities; reports describe a thriving bot culture including a marketplace for digital drugs—prompt injections—that could hijack other agents and expose keys or passwords, plus ideas of religious formations and governance takeovers; experts warn of security risks and question how much of the hype reflects genuine AI agency versus human masquerade.

Calendar invites expose private data through Google Gemini prompt injection
technology1 month ago

Calendar invites expose private data through Google Gemini prompt injection

Researchers demonstrated a prompt-injection attack against Google Gemini by embedding a malicious payload in a Google Calendar invite description. When the recipient asks Gemini about their schedule, the assistant executes the embedded instructions, creates a new event, and copies private meeting details into the event description, leaking sensitive data to the attacker. Google added mitigations after the disclosure, underscoring the need for context-aware defenses as AI assistants access calendar data.

Prompt-Injected Invites Expose Private Calendar Data Through Google Gemini
security1 month ago

Prompt-Injected Invites Expose Private Calendar Data Through Google Gemini

Security researchers disclosed a flaw in Google Gemini where a crafted calendar invite enables indirect prompt injection, causing Gemini to summarize and exfiltrate private meeting data by creating a new calendar event that could be visible to attackers; the finding highlights AI-enabled attack surfaces and the need for stronger guardrails and identity controls across AI workflows.

Single-click prompt exploit drains Copilot Personal data in stealthy stages
technology1 month ago

Single-click prompt exploit drains Copilot Personal data in stealthy stages

Security researchers demonstrated a one-click, multistage prompt-injection attack against Copilot Personal that exfiltrated user data from chat histories, even after the chat was closed. The exploit used a malicious URL parameter and bypassed some endpoint protections by triggering repeated requests (“reprompt”), exposing names, locations, and event details. Microsoft has patched the flaw, with Copilot Personal affected but not Microsoft 365 Copilot.

Reprompt flaw lets attackers hijack Copilot sessions via malicious prompts
security1 month ago

Reprompt flaw lets attackers hijack Copilot sessions via malicious prompts

Researchers exposed 'Reprompt', a flaw that injects commands via Copilot's URL q parameter to hijack an authenticated session and exfiltrate data, using P2P injection, double-request, and chain-request techniques; Microsoft patched the vulnerability on January 2026 Patch Tuesday, mainly affecting Copilot Personal rather than Microsoft 365 Copilot, and users should apply the latest Windows updates.

OpenAI's New AI Browser and Its Future Impact on Web and Healthcare
technology4 months ago

OpenAI's New AI Browser and Its Future Impact on Web and Healthcare

Cybersecurity researchers have discovered serious prompt injection vulnerabilities in OpenAI's new AI browser Atlas, particularly in its agent mode and omnibox, which could allow hackers to execute harmful commands or access sensitive data. Experts recommend stricter URL validation to prevent such attacks, highlighting ongoing security challenges in AI-powered browsers.

OpenAI's Atlas Enhances ChatGPT, Raising Security and Web Integration Concerns
technology4 months ago

OpenAI's Atlas Enhances ChatGPT, Raising Security and Web Integration Concerns

Researchers at LayerX discovered a vulnerability in OpenAI's Atlas browser that allows attackers to inject malicious prompts into ChatGPT's memory via cross-site request forgery, posing significant security risks, especially for Atlas users who are more exposed to phishing and prompt injection attacks. The exploit can persist across devices and browsers, potentially leading to malicious activities or data theft.

OpenAI’s ChatGPT Atlas Faces Security Risks Amid Web Enhancements
technology4 months ago

OpenAI’s ChatGPT Atlas Faces Security Risks Amid Web Enhancements

Cybersecurity experts warn that OpenAI's ChatGPT Atlas, an AI browser with new features like memory and agent mode, faces significant security risks including prompt injection attacks that could lead to data theft, malware downloads, and other malicious activities. Despite mitigation efforts by OpenAI, the attack surface is expanding, raising concerns about privacy, data sharing, and user safety as these AI tools become more integrated into internet browsing.

OpenAI Launches ChatGPT Atlas to Revolutionize Web Interaction and Security
technology4 months ago

OpenAI Launches ChatGPT Atlas to Revolutionize Web Interaction and Security

OpenAI's Atlas browser, which integrates ChatGPT as an AI agent, has been shown to be vulnerable to indirect prompt injection attacks, raising concerns about AI security and the need for better safeguards. Despite OpenAI's efforts to mitigate these risks, security researchers demonstrate that prompt injection remains a significant and ongoing challenge in AI-powered systems.