LiteLLM Exploited 36 Hours After Vulnerability Disclosure

Attackers hit CVE-2026-42208, a critical pre-auth SQL injection in LiteLLM proxy, within 36 hours of the public advisory - targeting database tables holding API keys for every upstream AI provider.

TL;DR

  • CVE-2026-42208 is a CVSS 9.3 pre-auth SQL injection in LiteLLM proxy versions 1.81.16 through 1.83.6
  • Active exploitation observed roughly 36 hours after the GitHub advisory went public on April 24, 2026
  • Attackers went directly for credential tables holding OpenAI, Anthropic, and AWS Bedrock keys
  • Fix: upgrade to v1.83.7-stable; stop-gap: set disable_error_logs: true in general settings

If you run a LiteLLM proxy with a database backend, an unauthenticated attacker can dump your upstream provider keys with a single crafted HTTP request. That's the short version of CVE-2026-42208.

Sysdig's Threat Research Team caught active exploitation on April 26, 2026 - roughly 36 hours after the advisory landed in GitHub's global security advisory database. This is the third time LiteLLM infrastructure has been a target in three months. That pattern deserves some attention.

The Gateway in Your Stack

What LiteLLM Proxies

LiteLLM is an open-source AI gateway maintained by BerriAI that sits between your application and 100+ LLM providers. OpenAI, Anthropic, AWS Bedrock, Cohere, Azure OpenAI, Google Vertex - it normalizes them all behind a single OpenAI-compatible API. The GitHub repository has roughly 45,000 stars and the PyPI package pulls about 97 million downloads a month.

When you deploy the proxy, it stores all upstream provider credentials in a PostgreSQL (or SQLite) database. The proxy issues virtual API keys to your callers, verifies them against the database on every request, and forwards traffic to whichever LLM backend you've configured. In a production setup, that database contains the master key, every virtual key, every provider credential, and environment variables including the database connection string itself.
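For orientation, a minimal proxy config of this shape wires one provider and the database. This is an illustrative sketch, not a production config: field names follow LiteLLM's documented config.yaml layout, and every value is a placeholder.

```yaml
model_list:
  - model_name: gpt-4o                       # name your callers use
    litellm_params:
      model: openai/gpt-4o                   # upstream provider/model
      api_key: os.environ/OPENAI_API_KEY     # provider credential read from env

general_settings:
  master_key: sk-litellm-master-placeholder  # admin key for the proxy
  # everything below this URL is what CVE-2026-42208 exposes
  database_url: "postgresql://litellm:password@db:5432/litellm"
```

Everything worth stealing flows through that database_url: the master key, the virtual keys, and the provider credentials.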

[Image: A centralized AI gateway creates a single authentication surface for every upstream provider credential.]

Why the Database Is a Tier-1 Target

In a traditional web app, a SQL injection vulnerability exposes your users table. In an AI gateway, it exposes five-figure monthly cloud budgets and workspace-level IAM permissions across every provider your organization uses, all at once. One exploit, and an attacker holds every API key you've configured - OpenAI, Anthropic, Bedrock, the works.

That changes the severity calculation considerably.

The Vulnerability

The Bug Itself

The flaw lives in LiteLLM's proxy API key verification path. During authentication, the proxy runs a database query to check whether the Authorization: Bearer value matches a known virtual key. In affected versions, the code concatenated the caller-supplied key value directly into the SQL query string instead of passing it as a bound parameter.

# Vulnerable pattern (simplified from affected code)
query = f"SELECT * FROM \"LiteLLM_VerificationToken\" WHERE token = '{bearer_value}'"
cursor.execute(query)

Any standard SQL injection now applies. No credentials required, because the vulnerable query runs before authentication succeeds.

The Attack Path

To exploit this, an attacker sends a request to any LLM API route - /chat/completions is a common entry point - with a crafted bearer token:

POST /chat/completions HTTP/1.1
Authorization: Bearer sk-litellm' UNION SELECT credential_values,NULL,NULL,NULL FROM litellm_credentials--

That's enough to bypass the query logic and start extracting data from whatever tables the attacker names. The fix in v1.83.7 is straightforward: parameterized queries replace string interpolation, so the bearer value is passed as data, not interpreted as SQL.

# Fixed pattern
cursor.execute(
    "SELECT * FROM \"LiteLLM_VerificationToken\" WHERE token = %s",
    (bearer_value,)
)
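The difference is easy to demonstrate end to end. The following self-contained sketch uses sqlite3 with a made-up two-table schema (not LiteLLM's actual code; sqlite3 uses ? placeholders where psycopg2 uses %s) to show the interpolated query leaking a row the fixed query never touches:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tokens (token TEXT)")
conn.execute("CREATE TABLE secrets (value TEXT)")
conn.execute("INSERT INTO secrets VALUES ('sk-provider-key')")

# Attacker-controlled bearer value carrying a UNION-style payload
bearer = "x' UNION SELECT value FROM secrets--"

# Vulnerable pattern: interpolation lets the payload rewrite the query,
# so the SELECT now also returns rows from the secrets table
leaked = conn.execute(
    f"SELECT * FROM tokens WHERE token = '{bearer}'"
).fetchall()
print(leaked)  # [('sk-provider-key',)]

# Fixed pattern: the driver binds the value as data, not SQL;
# no token matches the literal payload string, so nothing comes back
rows = conn.execute(
    "SELECT * FROM tokens WHERE token = ?", (bearer,)
).fetchall()
print(rows)  # []
```

The payload never gets parsed as SQL in the second query, which is the whole point of parameter binding.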

What the Attacker Actually Did

Sysdig's analysis of captured traffic showed this wasn't an automated scan. According to their researchers: "The traffic captured was not a generic SQLmap spray, which is very common in SQL injection attacks, but a deliberate, and likely customized, enumeration of the production LiteLLM schema."

The attacker knew LiteLLM uses Prisma ORM with PascalCase table identifiers. When lowercase queries failed - PostgreSQL treats unquoted identifiers as lowercase - the attacker retried with quoted names and kept going. That level of schema knowledge points to prior review of the publicly available Prisma schema in the repository.

[Image: Sysdig researchers observed targeted, schema-aware queries rather than generic SQL injection spraying.]

What Tables Were Targeted

The attacker went straight for the high-value rows and ignored everything else.

Table                      | Contents                                             | Risk
LiteLLM_VerificationToken  | Virtual API keys and the master key                  | Full proxy control
litellm_credentials        | Upstream provider keys (OpenAI, Anthropic, Bedrock)  | Direct billing exposure
litellm_config             | PostgreSQL DSN, master keys, webhook URLs            | Database and infra access

Sysdig found no confirmed follow-through - no authenticated calls using exfiltrated keys, no new virtual key creation. Whether that's because the attacker was probing for future use or the queries didn't land is unclear.
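If you want to hunt for attempts in your own access logs, a rough heuristic is to flag bearer values containing SQL metacharacters or keywords. This assumes your proxy or load balancer logs Authorization header values, which many setups deliberately do not; the pattern below is a starting point, not a complete detection rule:

```python
import re

# Legitimate LiteLLM virtual keys look like "sk-..." with no SQL syntax in them;
# a quote, comment marker, or UNION/SELECT keyword in the bearer value is a red flag.
SUSPICIOUS = re.compile(r"('|--|\bUNION\b|\bSELECT\b)", re.IGNORECASE)

def is_suspicious_bearer(bearer: str) -> bool:
    """Return True if a bearer token value looks like an injection attempt."""
    return bool(SUSPICIOUS.search(bearer))

print(is_suspicious_bearer("sk-litellm-abc123"))  # False
print(is_suspicious_bearer(
    "sk' UNION SELECT credential_values FROM litellm_credentials--"
))  # True
```

Expect false positives on exotic but legitimate key material; treat hits as triage candidates, not verdicts.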

Affected Versions and Fix

Version range     | Status                       | Action
< 1.81.16         | Not affected                 | No action needed
1.81.16 - 1.83.6  | Vulnerable (CVE-2026-42208)  | Upgrade immediately
1.83.7-stable+    | Fixed                        | No action needed

The fix shipped on April 19, 2026 in v1.83.7-stable. If you can't upgrade now, set disable_error_logs: true under general settings. This blocks the specific path through which unauthenticated input reaches the vulnerable query. The Centre for Cybersecurity Belgium issued an advisory strongly recommending this workaround if immediate patching isn't possible.
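If patching has to wait, the stop-gap goes under general_settings in the proxy's config.yaml. The key name is as given in the advisory; its placement here is assumed from LiteLLM's standard config layout:

```yaml
general_settings:
  # Stop-gap for CVE-2026-42208 until you can upgrade to v1.83.7-stable
  disable_error_logs: true
```

Treat this as a mitigation, not a fix - the vulnerable code path still exists until you upgrade.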

To check your current version:

pip show litellm | grep Version
# or if running the proxy container
docker exec <container> pip show litellm | grep Version
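For a scripted check across a fleet, a small stdlib-only sketch compares the installed version against the affected range. The parsing is deliberately naive (it keeps only the numeric components, so "1.83.7-stable" reads as 1.83.7), which is fine for this range check but not a general version parser:

```python
from importlib.metadata import PackageNotFoundError, version

AFFECTED_MIN = (1, 81, 16)
FIXED = (1, 83, 7)

def parse(v: str) -> tuple:
    """Keep leading numeric components: '1.83.7-stable' -> (1, 83, 7)."""
    parts = []
    for p in v.split("."):
        digits = "".join(ch for ch in p if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def is_vulnerable(v: str) -> bool:
    """True if the version falls in the CVE-2026-42208 affected range."""
    return AFFECTED_MIN <= parse(v) < FIXED

print(is_vulnerable("1.82.0"))         # True
print(is_vulnerable("1.83.7-stable"))  # False

try:
    installed = version("litellm")
    print(installed, "VULNERABLE" if is_vulnerable(installed) else "ok")
except PackageNotFoundError:
    print("litellm not installed")
```

Run it on each host (or inside each container) that serves the proxy.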

Where It Falls Short

This is the third LiteLLM security incident in three months. In March, attackers compromised the project's CI/CD pipeline through Trivy, injected a credential stealer into PyPI versions 1.82.7 and 1.82.8, and put a package with roughly 97 million monthly downloads at risk before the backdoor was caught. Now comes a pre-auth SQL injection in the proxy's authentication path, affecting a range of versions that overlaps with the post-supply-chain recovery period.

The core problem is that LiteLLM's proxy concentrates risk. A single deployment stores credentials for every provider an organization uses, serves as the authentication layer for every internal caller, and in many configurations is reachable from the internet. That's a high-value target architecture, and it requires a security posture to match.

The March supply chain attack exposed a structural gap in how LiteLLM handles CI/CD trust. This SQL injection reveals a gap in input handling inside the proxy itself. They're different bugs in different parts of the stack, but both are table stakes for a project carrying this much credential surface.

For context: Hugging Face's LeRobot framework shipped a critical RCE earlier this year through unsafe pickle deserialization in its gRPC interface. The pattern across AI infrastructure projects is consistent - features ship fast, security fundamentals catch up later. LiteLLM isn't uniquely at fault here, but it's been the most visible example.

If you're running a self-hosted LiteLLM proxy in production, the right call is to rotate all upstream provider keys now and upgrade to v1.83.7-stable, regardless of whether you saw anomalous traffic. The Sysdig researchers didn't observe confirmed data exfiltration, but "we didn't see evidence of theft" isn't the same as "nothing was taken."

The Microsoft Agent Governance Toolkit and similar projects are trying to address the broader problem of agentic AI security governance. SQL injection in an auth path predates all of that - it's a solved problem with a known fix. The real issue is that AI infrastructure projects under rapid development are accumulating technical security debt faster than the ecosystem is catching it.

Sophie Zhang
About the author AI Infrastructure & Open Source Reporter

Sophie is a journalist and former systems engineer who covers AI infrastructure, open-source models, and the developer tooling ecosystem.