Critical LangChain Vulnerabilities Threaten AI System Security
Originally published 16 days ago by The Hacker News

A critical security flaw in LangChain Core (CVE-2025-68664) allows attackers to use serialization injection to steal secrets and manipulate LLM responses; users are urged to update to patched versions to mitigate the risk.
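The vulnerability class is easiest to see with a toy example. The sketch below is illustrative only and does not reproduce LangChain's actual serialization code: it shows a hypothetical naive deserializer (`naive_loads`) that treats a `{"type": "secret", "id": ...}` marker as an instruction to read an environment variable, which is the general pattern serialization injection abuses when attacker-influenced data reaches the loader.

```python
import json
import os

# Hypothetical sketch (NOT LangChain's real implementation): a naive
# deserializer that resolves "secret" markers from the environment.
def naive_loads(text: str):
    def hook(obj):
        # Any dict shaped like a secret marker is replaced with the
        # value of the named environment variable.
        if obj.get("type") == "secret":
            return os.environ.get(obj["id"], "")
        return obj
    return json.loads(text, object_hook=hook)

# If an attacker can influence the serialized string, they can smuggle
# in a secret marker and have the loader resolve it on their behalf.
os.environ["DEMO_API_KEY"] = "sk-demo-123"  # stand-in for a real secret
attacker_payload = '{"note": "hello", "x": {"type": "secret", "id": "DEMO_API_KEY"}}'
data = naive_loads(attacker_payload)
```

Here `data["x"]` ends up holding the environment secret, even though the caller only meant to load a harmless note. The mitigation in the patched releases follows the usual defense for this class of bug: never let untrusted input decide which markers the deserializer interprets.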