A critical security vulnerability has been disclosed in LangChain Core, the foundational Python package used in building large language model (LLM)‑powered applications, raising alarm across the AI and software development communities. Tracked as CVE‑2025‑68664 and dubbed “LangGrinch,” this flaw can allow attackers to steal sensitive secrets, manipulate LLM behavior, and even trigger arbitrary code execution in affected systems.

LangChain has become one of the most widely deployed libraries in modern AI workflows, with millions of monthly downloads and extensive use in production systems worldwide. The discovery of a core vulnerability of such high severity — CVSS score: 9.3/10.0 — underscores the emerging risks where AI application frameworks and traditional cybersecurity intersect.

What Is the LangGrinch Vulnerability?

LangChain Core, also known as langchain‑core, provides essential interfaces and model‑agnostic abstractions used by developers to build AI agents, workflows and composite LLM apps. The LangGrinch flaw stems from a serialization injection vulnerability in two key methods:

  • dumps()
  • dumpd()

These functions are used to convert Python objects into serialized formats — a routine task when saving intermediate results, streaming outputs, or caching agent states. However, they fail to properly sanitize certain dictionary keys, specifically the reserved "lc" marker.

Under normal operation, LangChain uses the "lc" key internally to represent its own serialized objects. But when user‑controlled input containing this key flows through the serialization process, the system can mistake it for a trusted object marker instead of untrusted data. During deserialization, this opens the door to instantiate unsafe objects, leak secrets, or facilitate further compromise.
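The ambiguity can be illustrated with plain dictionaries. This is a simplified sketch using only the standard library, not LangChain's actual serializer: the point is that an internal serialized object and attacker-supplied data carrying an "lc" key are structurally indistinguishable to a naive dispatcher.

```python
import json

# Shape resembling what LangChain uses internally for serialized objects (simplified).
internal = {"lc": 1, "type": "constructor", "id": ["langchain", "example"], "kwargs": {}}

# Attacker-controlled content, e.g. emitted by a prompt-injected model response.
untrusted = json.loads(
    '{"lc": 1, "type": "constructor", "id": ["langchain", "example"], "kwargs": {}}'
)

def looks_like_internal_object(obj):
    # A deserializer that dispatches purely on the "lc" key cannot tell them apart.
    return isinstance(obj, dict) and "lc" in obj

print(looks_like_internal_object(internal))   # True
print(looks_like_internal_object(untrusted))  # True: untrusted data passes as trusted
```

Because both values satisfy the same structural check, any downstream logic keyed on "lc" treats the attacker's data as a trusted framework object.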

How Attackers Can Exploit the Flaw

1. Prompt Injection Leads to Serialization Abuse

The biggest risk vector for LangGrinch is through prompt injection, a technique attackers use to trick LLM systems into producing harmful or unexpected structured content. In this case, malicious actors can craft prompts that generate output containing "lc" structures. When this output passes through LangChain’s serialization pipeline, the unsafe object gets instantiated later during deserialization, leading to exploit paths not intended by the developers.

This makes LangGrinch particularly dangerous — it requires no direct execution of external code, yet can elevate a seemingly harmless LLM prompt into a serious security risk. The flaw blurs traditional trust boundaries in AI workflows: untrusted model output is mistaken for safe internal structures.
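The end-to-end flow can be sketched with toy stand-ins (these functions are illustrative, not LangChain APIs): model output containing an "lc"-shaped payload survives a serialize/deserialize round trip unescaped, and the deserializer then "resolves" it as if it were an internal object.

```python
import json

def fake_model_output():
    # A prompt-injected model is coaxed into emitting an "lc"-shaped payload
    # inside otherwise ordinary structured output.
    return {"answer": "ok", "metadata": {"lc": 1, "type": "secret", "id": ["SOME_API_KEY"]}}

def naive_dumps(obj):
    # Stand-in for a serializer that does not escape the reserved "lc" key.
    return json.dumps(obj)

def naive_loads(text):
    # Stand-in for a deserializer that trusts any dict carrying an "lc" marker.
    def hook(d):
        if "lc" in d:
            return ("RESOLVED_OBJECT", d.get("id"))
        return d
    return json.loads(text, object_hook=hook)

round_tripped = naive_loads(naive_dumps(fake_model_output()))
print(round_tripped["metadata"])  # the untrusted payload was "resolved" as an object
```

The fix direction is the inverse: escape or reject the reserved key on the way in, so untrusted content can never reach the resolution step.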

2. Secrets and Environment Variables at Risk

One of the most serious consequences is exfiltration of sensitive secrets. Many AI applications store critical credentials — such as API keys, database connection strings, cloud provider tokens, and vector database secrets — in environment variables or in application metadata. If deserialization is invoked with secrets_from_env=True — previously the default configuration — attackers can force these environment variables into the unsafe object stream and extract them.

This access can empower threat actors to pivot further, access backend systems, or compromise linked infrastructure long after the initial exploit.
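A toy loader illustrates the failure mode (a simplified sketch; the function, payload shape, and env-var name are illustrative, not LangChain's real API): when env-based secret resolution is enabled, an attacker-shaped "secret" node names an environment variable and receives its value.

```python
import json
import os

# Stands in for a real credential held in the process environment.
os.environ["DEMO_API_KEY"] = "sk-demo-123"

def toy_load(payload, secrets_from_env=False):
    # Simplified: a {"lc": 1, "type": "secret", "id": [...]} node names an env var.
    if isinstance(payload, dict) and payload.get("type") == "secret":
        name = payload["id"][0]
        if secrets_from_env:
            # The env var's value flows into attacker-visible output.
            return os.environ.get(name)
        raise ValueError(f"unresolved secret: {name}")
    return payload

attacker_payload = json.loads('{"lc": 1, "type": "secret", "id": ["DEMO_API_KEY"]}')
print(toy_load(attacker_payload, secrets_from_env=True))  # sk-demo-123
```

With the flag off (the patched default behavior described below), the same payload fails loudly instead of leaking the credential.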

3. Manipulating Object Instantiation

Beyond secret theft, the flaw enables attackers to instantiate classes within trusted namespaces, such as:

  • langchain_core
  • langchain
  • langchain_community

This means attackers might cause the system to create arbitrary objects of their choosing, potentially turning a deserialization chain into a remote code execution (RCE) vector under certain conditions — especially via template engines such as Jinja2, which interpret string payloads as executable templates.
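Why "trusted namespaces" are not a sufficient boundary can be shown with a small resolver (an illustrative sketch using `collections` as a stand-in for the LangChain namespaces above): even when the module is allowlisted, the attacker still chooses which class inside it gets instantiated, and with what arguments.

```python
import importlib

# Stand-in for the "trusted" langchain/langchain_core/langchain_community namespaces.
ALLOWED_NAMESPACES = {"collections"}

def resolve_class(path):
    # Resolves ["module", ..., "ClassName"] within an allowed namespace.
    # The namespace check passes, but the class and constructor args are
    # still attacker-controlled.
    module_name, class_name = path[0], path[-1]
    if module_name not in ALLOWED_NAMESPACES:
        raise ValueError("namespace not allowed")
    return getattr(importlib.import_module(module_name), class_name)

# Attacker picks any class the allowed namespace exports:
cls = resolve_class(["collections", "Counter"])
obj = cls("aab")
print(obj)  # Counter({'a': 2, 'b': 1})
```

This is why the patched releases add a per-class allowlist rather than trusting whole namespaces: a single dangerous class reachable inside a trusted module is enough to build an exploit chain.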

Why This Vulnerability Is So Dangerous

There are several reasons LangGrinch poses a significant threat:

Embedded Deep Within AI Workflows

LangChain Core is not a peripheral library — it sits at the heart of many LLM pipelines, handling serialization and deserialization used in agent orchestration, streaming, caching, and more. As such, the attack surface is massive and not limited to optional modules.

Exploitable Through AI Interaction Itself

Most vulnerabilities rely on traditional software bugs such as buffer overflows or SQL injection. LangGrinch, however, leverages AI prompt outputs, making the attack surface one step removed: the system trusts model outputs and then processes them insecurely. This represents a novel intersection of AI logic and classic serialization flaws — a hybrid that even seasoned developers might overlook.

Existing Tools May Not Detect It

Because the vulnerability resides in serialization logic and relies on LLM outputs rather than malformed HTTP requests or suspicious binaries, many traditional security tools (like static analysis, signature‑based detection, or network layer monitoring) may fail to spot exploitation attempts.

Affected Versions and Patches

LangGrinch affects a wide range of LangChain releases due to its placement in the core library:

  • langchain‑core versions ≥ 1.0.0 and < 1.2.5
  • langchain‑core versions < 0.3.81

These versions are vulnerable until patched.

LangChain maintainers have responded rapidly to the disclosure by issuing patches:

  • langchain‑core 1.2.5 and later
  • langchain‑core 0.3.81 and later

Security enhancements in these updates include:

  • An allowlist parameter to restrict which classes can be deserialized.
  • Jinja2 template support disabled by default to prevent template‑based execution paths.
  • secrets_from_env set to False by default, preventing unintentional exposure of environment variables.

Developers are strongly advised to update immediately if they use a vulnerable version of LangChain in production systems or development pipelines.

LangChain.js Also Impacted

While CVE‑2025‑68664 specifically applies to the Python langchain‑core package, a related serialization injection flaw has been identified in the JavaScript implementation of LangChain. This separate vulnerability — CVE‑2025‑68665 — carries a slightly lower CVSS score (8.6) but still poses significant risk to Node.js‑based applications.

Affected npm packages include versions of:

  • @langchain/core
  • langchain

Patched versions for JavaScript require upgrading to fixed releases such as:

  • @langchain/core 1.1.8 and later
  • langchain 1.2.3 and later

This highlights that the serialization issue reaches beyond Python, affecting multiple language ecosystems in the LangChain environment.

Real‑World Exposure and Developer Impact

LangChain is widely embedded in modern AI stacks, with hundreds of thousands of developers — from startups to large enterprises — leveraging its abstractions for rapid LLM‑based application development. Because LangGrinch resides in deep serialization workflows, nearly any system that relies on agentic chains, metadata streaming, automated staging, or structured output handling could be affected.

Developers participating in online communities have already reported emergency patch activity and heightened scanning for affected serialization call sites. Since the vulnerability can be triggered indirectly through seemingly innocuous interactions, thorough codebase audits and rapid upgrade adoption are now top priorities.

Expert Recommendations for Mitigation

Security teams and development leaders should take the following steps:

1. Immediate Patch Deployment

Ensure all LangChain instances — both Python and JavaScript — are updated to patched versions. Update build systems, CI/CD pipelines, and runtime environments to use secure releases.

2. Audit Serialization Points

Identify all locations in your codebase where serialization/deserialization occurs, especially when untrusted data could enter the pipelines. Apply input validation, sanitization, and clear separation between internal framework structures and external content.
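One defensive pattern is to validate untrusted structures before they ever reach a serialization pipeline. The helper below is a sketch of that idea, not an official LangChain API: it recursively rejects any dict carrying the reserved "lc" key.

```python
def sanitize_untrusted(obj):
    """Recursively reject dicts carrying the reserved 'lc' key before they
    reach a serialization pipeline (defensive sketch, not an official API)."""
    if isinstance(obj, dict):
        if "lc" in obj:
            raise ValueError("reserved 'lc' key found in untrusted input")
        return {k: sanitize_untrusted(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [sanitize_untrusted(v) for v in obj]
    return obj

# Ordinary model output passes through unchanged:
clean = sanitize_untrusted({"answer": "ok", "meta": {"source": "model"}})

# A payload smuggling the reserved marker is rejected loudly:
try:
    sanitize_untrusted({"meta": {"lc": 1, "type": "constructor"}})
except ValueError as exc:
    print("blocked:", exc)
```

Rejecting (rather than silently stripping) the reserved key makes injection attempts visible in logs, which also helps the monitoring step described below.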

3. Configure Secure Defaults

Adopt the updated defaults introduced in the patch — disable automatic environment secret loading, restrict object allowlists, and block unsafe template engines unless explicitly required and vetted.

4. Monitor AI Prompt Injection

Train development and security teams to treat LLM outputs as untrusted data until verified — much like user inputs. Prompt injection risks are real and can have far‑reaching consequences when they affect serialization logic.

What This Means for the Future of AI Security

The LangGrinch vulnerability reinforces a key lesson for the rapidly evolving world of AI application security: Traditional exploitation techniques can intersect with AI‑specific workflows in unexpected ways. Serialization injection is nothing new, but when combined with prompt injection and large‑scale AI agents that accept unverified model outputs, new attack surfaces emerge that require careful architectural consideration.

As AI frameworks continue to innovate, development teams must treat LLM‑generated content and agent orchestrations as potentially hostile inputs unless carefully validated and protected. This shift in security mindset — from classic web exploits to AI vector attacks — will be central to defending tomorrow’s AI‑powered systems.

Conclusion

The discovery of CVE‑2025‑68664 (“LangGrinch”) in LangChain Core is a major cybersecurity event for the AI developer ecosystem. With the ability to expose environment secrets, manipulate agent behavior through prompt injection, and potentially enable further exploitation, this flaw demonstrates how AI frameworks can inadvertently introduce high‑impact risks when core serialization processes mishandle untrusted data.

Developers and organizations must patch vulnerable versions immediately, audit serialization touchpoints, and adopt secure programming practices that treat LLM output with appropriate skepticism. Only through proactive defense and careful architectural planning can AI‑driven systems balance innovation with robust security.