AI is increasingly used at work, but employees should understand their company's policies, verify AI outputs, avoid sharing confidential info, and use AI ethically to avoid trouble.
An AI security researcher warns that traditional cybersecurity teams are unprepared for the unique vulnerabilities of AI systems, which can be manipulated through language and indirect instructions. He emphasizes the need for expertise in both AI security and cybersecurity to effectively address these risks, and criticizes many AI security startups for overpromising on protection. The article highlights the growing investment in AI security and the importance of developing specialized skills to manage AI-related security challenges.
Palo Alto Networks and Google Cloud have expanded their partnership to enhance AI security across cloud and hybrid environments, integrating Palo Alto's Prisma AIRS with Google Cloud's AI services to protect AI workloads, improve security management, and streamline deployment, while also migrating Palo Alto's internal workloads to Google Cloud to optimize performance and reliability.
Palo Alto Networks and Google Cloud have expanded their partnership to enhance AI security, integrating Palo Alto's Prisma AIRS platform with Google Cloud's AI infrastructure to secure AI workloads, improve security management, and streamline deployment across hybrid multicloud environments, while also migrating Palo Alto's internal workloads to Google Cloud.
Research by Anthropic and partners shows that injecting just 250 carefully crafted poison samples into training data can compromise large language models, causing them to produce gibberish or potentially dangerous outputs, highlighting vulnerabilities in AI training processes.
Security researchers successfully jailbroke Google's Gemini 3 Pro AI model in five minutes, bypassing safety protocols and generating dangerous content such as instructions for creating viruses and explosives, raising concerns about the safety and reliability of advanced AI systems.
Google has introduced Private AI Compute, a secure cloud platform that processes AI queries with on-device-level privacy using advanced hardware and encryption techniques, ensuring user data remains private and protected from unauthorized access or breaches.
Microsoft has revealed a new side-channel attack called Whisper Leak that can infer the topics of encrypted AI chat traffic by analyzing packet size and timing, posing privacy risks. The attack can identify sensitive conversation topics despite encryption, and mitigation strategies like adding random text to responses are recommended. This highlights vulnerabilities in current language models and the need for enhanced security measures.
Tech groups are intensifying efforts to address a significant security flaw in artificial intelligence systems, highlighting ongoing challenges and the need for improved safeguards in AI development.
Researchers at LayerX discovered a vulnerability in OpenAI's Atlas browser that allows attackers to inject malicious prompts into ChatGPT's memory via cross-site request forgery, posing significant security risks, especially for Atlas users who are more exposed to phishing and prompt injection attacks. The exploit can persist across devices and browsers, potentially leading to malicious activities or data theft.
Cybersecurity researchers have discovered a vulnerability in OpenAI's ChatGPT Atlas browser that allows attackers to inject malicious instructions into the AI's persistent memory via a CSRF flaw, potentially leading to unauthorized code execution, account hijacking, and malware deployment, especially due to weak anti-phishing controls and the ability of tainted memories to persist across sessions and devices.
OpenAI's Atlas browser, which integrates ChatGPT as an AI agent, has been shown to be vulnerable to indirect prompt injection attacks, raising concerns about AI security and the need for better safeguards. Despite OpenAI's efforts to mitigate these risks, security researchers demonstrate that prompt injection remains a significant and ongoing challenge in AI-powered systems.
MI5 chief Ken McCallum reports a 35% increase in investigations related to foreign state threats in the UK, highlighting rising dangers from Russia, China, and Iran, alongside ongoing terrorism concerns and the potential risks posed by artificial intelligence in security threats.
Google has launched a new bug bounty program offering up to $30,000 for identifying security vulnerabilities in its AI products, including rogue actions like unauthorized access or data exfiltration, and has introduced an AI patching tool called CodeMender to fix security issues. The program aims to improve AI safety and security across Google’s products, with rewards based on the severity and quality of the reports.
Check Point Software Technologies is acquiring Lakera, an AI-native security platform, to enhance its end-to-end AI security offerings for enterprises, focusing on protecting AI models, agents, and data amidst the growing adoption of AI technologies.