Tag

Ai Security

All articles tagged with #ai security

Addressing Security and Regulatory Challenges in AI and Autonomous Agents

Originally Published 20 days ago — by Business Insider

Featured image for Addressing Security and Regulatory Challenges in AI and Autonomous Agents
Source: Business Insider

An AI security researcher warns that traditional cybersecurity teams are unprepared for the unique vulnerabilities of AI systems, which can be manipulated through language and indirect instructions. He emphasizes the need for expertise in both AI security and cybersecurity to effectively address these risks, and criticizes many AI security startups for overpromising on protection. The article highlights the growing investment in AI security and the importance of developing specialized skills to manage AI-related security challenges.

Palo Alto Networks and Google Cloud Secure $10 Billion AI and Cloud Deal

Originally Published 23 days ago — by Google Cloud Press Corner

Featured image for Palo Alto Networks and Google Cloud Secure $10 Billion AI and Cloud Deal
Source: Google Cloud Press Corner

Palo Alto Networks and Google Cloud have expanded their partnership to enhance AI security across cloud and hybrid environments, integrating Palo Alto's Prisma AIRS with Google Cloud's AI services to protect AI workloads, improve security management, and streamline deployment, while also migrating Palo Alto's internal workloads to Google Cloud to optimize performance and reliability.

Palo Alto Networks and Google Cloud Partner to Boost Cloud and AI Security

Originally Published 23 days ago — by Palo Alto Networks

Palo Alto Networks and Google Cloud have expanded their partnership to enhance AI security, integrating Palo Alto's Prisma AIRS platform with Google Cloud's AI infrastructure to secure AI workloads, improve security management, and streamline deployment across hybrid multicloud environments, while also migrating Palo Alto's internal workloads to Google Cloud.

Google's Gemini 3 Jailbreak Sparks Concerns

Originally Published 1 month ago — by Android Authority

Featured image for Google's Gemini 3 Jailbreak Sparks Concerns
Source: Android Authority

Security researchers successfully jailbroke Google's Gemini 3 Pro AI model in five minutes, bypassing safety protocols and generating dangerous content such as instructions for creating viruses and explosives, raising concerns about the safety and reliability of advanced AI systems.

Google Unveils 'Private AI Compute' for Secure, On-Device-Level AI Processing

Originally Published 2 months ago — by The Hacker News

Featured image for Google Unveils 'Private AI Compute' for Secure, On-Device-Level AI Processing
Source: The Hacker News

Google has introduced Private AI Compute, a secure cloud platform that processes AI queries with on-device-level privacy using advanced hardware and encryption techniques, ensuring user data remains private and protected from unauthorized access or breaches.

Microsoft Reveals 'Whisper Leak' Threat to Encrypted AI Chat Privacy

Originally Published 2 months ago — by The Hacker News

Featured image for Microsoft Reveals 'Whisper Leak' Threat to Encrypted AI Chat Privacy
Source: The Hacker News

Microsoft has revealed a new side-channel attack called Whisper Leak that can infer the topics of encrypted AI chat traffic by analyzing packet size and timing, posing privacy risks. The attack can identify sensitive conversation topics despite encryption, and mitigation strategies like adding random text to responses are recommended. This highlights vulnerabilities in current language models and the need for enhanced security measures.

OpenAI's Atlas Enhances ChatGPT, Raising Security and Web Integration Concerns

Originally Published 2 months ago — by theregister.com

Featured image for OpenAI's Atlas Enhances ChatGPT, Raising Security and Web Integration Concerns
Source: theregister.com

Researchers at LayerX discovered a vulnerability in OpenAI's Atlas browser that allows attackers to inject malicious prompts into ChatGPT's memory via cross-site request forgery, posing significant security risks, especially for Atlas users who are more exposed to phishing and prompt injection attacks. The exploit can persist across devices and browsers, potentially leading to malicious activities or data theft.

New ChatGPT Atlas Browser Raises Security and Privacy Concerns

Originally Published 2 months ago — by The Hacker News

Featured image for New ChatGPT Atlas Browser Raises Security and Privacy Concerns
Source: The Hacker News

Cybersecurity researchers have discovered a vulnerability in OpenAI's ChatGPT Atlas browser that allows attackers to inject malicious instructions into the AI's persistent memory via a CSRF flaw, potentially leading to unauthorized code execution, account hijacking, and malware deployment, especially due to weak anti-phishing controls and the ability of tainted memories to persist across sessions and devices.

OpenAI Launches ChatGPT Atlas to Revolutionize Web Interaction and Security

Originally Published 2 months ago — by theregister.com

Featured image for OpenAI Launches ChatGPT Atlas to Revolutionize Web Interaction and Security
Source: theregister.com

OpenAI's Atlas browser, which integrates ChatGPT as an AI agent, has been shown to be vulnerable to indirect prompt injection attacks, raising concerns about AI security and the need for better safeguards. Despite OpenAI's efforts to mitigate these risks, security researchers demonstrate that prompt injection remains a significant and ongoing challenge in AI-powered systems.

Google's AI Tools Enhance Software Security and Vulnerability Patching

Originally Published 3 months ago — by The Verge

Featured image for Google's AI Tools Enhance Software Security and Vulnerability Patching
Source: The Verge

Google has launched a new bug bounty program offering up to $30,000 for identifying security vulnerabilities in its AI products, including rogue actions like unauthorized access or data exfiltration, and has introduced an AI patching tool called CodeMender to fix security issues. The program aims to improve AI safety and security across Google’s products, with rewards based on the severity and quality of the reports.