On March 24, 2026, litellm — the Python package that powers nearly every major AI agent framework — was hit by a supply chain attack. Two malicious versions (1.82.7 and 1.82.8) were published to PyPI after an attacker compromised the maintainer’s publishing credentials.
With 95 million downloads per month and direct dependencies from CrewAI, Browser-Use, Opik, DSPy, Mem0, Instructor, Guardrails, Agno, and Camel-AI, the blast radius of this attack is enormous. If you work in AI/ML and use Python, this likely affects your stack.
Here’s what happened, how we responded at Comet, and what you should do right now.
What Happened
The attacker gained access to the LiteLLM maintainer’s PyPI account (likely through the related Trivy GitHub Actions supply chain compromise) and published two malicious package versions that were never released through the official GitHub repository — neither version has a corresponding git tag.
The attack was sophisticated and used two different techniques across the two versions:
- Version 1.82.7: Malicious payload embedded in `litellm/proxy/proxy_server.py`, triggered when importing `litellm.proxy`
- Version 1.82.8: A `.pth` file (`litellm_init.pth`) added to the package, which executes automatically on any Python startup, no import needed
The second technique is particularly dangerous. Python’s site module automatically processes .pth files in site-packages/ every time the interpreter starts. Simply having the package installed means every python, pytest, or pip install command in that environment triggers the payload. No explicit import statement required.
What the Payload Does
The malicious code is double base64-encoded to evade casual inspection. Once decoded, it performs a comprehensive credential harvest:
- Environment variables: captures ALL API keys, secrets, and tokens via `printenv`
- Cloud credentials: AWS (`~/.aws/credentials`, IMDS tokens), GCP, Azure
- SSH keys: all private keys, `known_hosts`, and SSH config
- Kubernetes configs: `~/.kube/config`, service account tokens
- Git credentials: `~/.git-credentials`, gitconfig
- Docker configs: registry auth, Kaniko credentials
- Database credentials: PostgreSQL, MySQL, Redis, LDAP config files
- CI/CD secrets: Terraform state, Jenkins, GitLab CI configs
- Shell history: bash, zsh, mysql, psql, redis history files
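The double-encoding layer itself is trivial to peel off if you need to inspect a suspect file. A toy illustration (the payload string here is a harmless stand-in, not the real malware):

```shell
# Wrap a harmless stand-in the same way the payload is wrapped
payload=$(printf 'echo pwned' | base64 | base64)

# Decoding twice recovers the original content for inspection
printf '%s' "$payload" | base64 -d | base64 -d
```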
The collected data is encrypted with AES-256-CBC using a random session key, which is then wrapped with a hardcoded 4096-bit RSA public key. The encrypted archive is exfiltrated via HTTPS POST to models.litellm.cloud — a domain registered just hours before the attack (not the official litellm.ai).
Only the attacker holds the RSA private key, so only they can decrypt the stolen data.
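This is a standard hybrid-encryption scheme. A sketch with openssl shows why it works in one direction only; all key and file names here are hypothetical, and we generate our own keypair to play both roles:

```shell
# Stand-in 4096-bit keypair so we can play both attacker and victim
openssl genrsa -out /tmp/demo_rsa 4096 2>/dev/null
openssl rsa -in /tmp/demo_rsa -pubout -out /tmp/demo_rsa.pub 2>/dev/null

# "Harvested" data and a random AES-256 session key (hex-encoded for simplicity)
echo "harvested secrets" > /tmp/loot.txt
openssl rand -hex 32 > /tmp/session.key

# Bulk data encrypted with AES-256-CBC under the session key...
openssl enc -aes-256-cbc -pbkdf2 -in /tmp/loot.txt -out /tmp/loot.enc -pass file:/tmp/session.key
# ...and the session key wrapped with the RSA public key
openssl pkeyutl -encrypt -pubin -inkey /tmp/demo_rsa.pub -in /tmp/session.key -out /tmp/session.key.enc

# Only the private-key holder can unwrap the session key and decrypt
openssl pkeyutl -decrypt -inkey /tmp/demo_rsa -in /tmp/session.key.enc -out /tmp/session.key.dec
openssl enc -d -aes-256-cbc -pbkdf2 -in /tmp/loot.enc -pass file:/tmp/session.key.dec
```

Without the private key, the wrapped session key, and therefore the archive, is unrecoverable. That is why credential rotation, not decryption, is the only meaningful remediation.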
Why This Matters for the AI Ecosystem
LiteLLM isn’t just another Python package. It’s the LLM gateway layer that most AI agent frameworks depend on. Here’s the downstream impact by the numbers:
| Package | Monthly Downloads | Depends on LiteLLM |
|---|---|---|
| litellm | 95M | — |
| CrewAI | 5.9M | Direct dependency |
| Browser-Use | 4.2M | Direct dependency |
| Opik | 3.5M | Direct dependency |
| Mem0 | 2.7M | Direct dependency |
| DSPy | 1.6M | Direct dependency |
| Agno | 1.6M | Direct dependency |
| Guardrails | 233K | Direct dependency |
| Camel-AI | 84K | Direct dependency |
Anyone who ran pip install or pip install --upgrade on any of these packages during the approximately 4-hour exposure window (roughly 09:00–13:30 UTC on March 24) could have pulled the compromised litellm as a transitive dependency.
CI/CD pipelines are the highest-risk target. They often hold the most privileged credentials — AWS deployment keys, org-wide API tokens, Docker registry auth — and they run pip install on every build.
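If you export CI job logs, a quick scan for the compromised versions looks like this (the directory and file names are hypothetical; we fabricate one matching log line for the demo):

```shell
# Stand-in log directory with one line mimicking a pip download entry
mkdir -p /tmp/ci-logs
printf 'Downloading litellm-1.82.7-py3-none-any.whl\n' > /tmp/ci-logs/build-1421.log

# List any log files that pulled a compromised version
grep -rlE 'Downloading litellm-1\.82\.[78]' /tmp/ci-logs
```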
How We Responded at Comet
When we learned about the compromise, we treated it as a critical security incident and launched an immediate, systematic response:
1. Full CI/CD audit across all repositories
We didn’t just check one repo. We audited every active repository in our GitHub organization — over 50 repos — examining the actual pip download logs in GitHub Actions job output to determine the exact litellm version installed in each workflow run.
We identified two CI workflows that installed compromised versions during the exposure window. In both cases, the exposed secrets were limited to CI test credentials. No production credentials, customer data, or production infrastructure was affected.
2. Company-wide developer machine scan
We deployed a scanning script to every engineer, product manager, and solutions engineer. Each person ran a full scan of every Python virtual environment on their machine, checking for litellm >= 1.82.7.
Result: All developers scanned, zero compromised environments found. The highest litellm version on any developer machine was 1.82.4 — well below the compromised threshold.
3. Immediate credential rotation
We rotated all potentially exposed CI credentials within hours of discovery, without waiting to complete the full investigation.
4. Platform and cloud confirmation
Our production services, Opik Cloud, and all customer-facing infrastructure use pinned, containerized images — they don’t run pip install at runtime. We confirmed these were never at risk.
How To Check if You’re Affected
Run this in every Python environment where you use litellm or any framework that depends on it:
pip show litellm 2>/dev/null | grep Version
If the version is 1.82.7 or 1.82.8, that environment is compromised.
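For use in scripts or CI gates, a variant that sets a nonzero exit code on a hit (a sketch; adapt the messages to taste):

```shell
# Extract the installed litellm version, if any
v=$(pip show litellm 2>/dev/null | awk '/^Version:/{print $2}')

case "$v" in
  1.82.7|1.82.8) echo "COMPROMISED: litellm $v"; exit 1 ;;   # fail the build
  *)             echo "ok: litellm ${v:-not installed}" ;;
esac
```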
To scan all virtual environments on your machine at once:
find "$HOME" -type d -name "litellm-*.dist-info" 2>/dev/null | while read -r dir; do
  version=$(grep -m1 "^Version:" "$dir/METADATA" 2>/dev/null | awk '{print $2}')
  [ -n "$version" ] || continue
  venv=$(echo "$dir" | sed 's|/lib/python.*/site-packages/.*||')
  # sort -V puts the lower version first; if 1.82.7 sorts first (or ties), version >= 1.82.7
  if [ "$(printf '%s\n1.82.7' "$version" | sort -V | head -1)" = "1.82.7" ]; then
    echo "!! AFFECTED $version $venv"
  else
    echo "   ok       $version $venv"
  fi
done
What To Do if You’re Affected
- Stop using the environment immediately. Every Python invocation triggers the exfiltration payload.
- Delete and recreate the virtual environment — don’t just downgrade, nuke it entirely.
- Rotate all credentials that were present on the machine: API keys, cloud credentials, SSH keys, database passwords, anything in environment variables or config files.
- Check CI/CD pipelines. If your CI runs `pip install` without pinning to exact versions, check job logs for `Downloading litellm-1.82.7` or `Downloading litellm-1.82.8` lines.
- Search for the malicious file:
find "$HOME" -name "litellm_init.pth" 2>/dev/null
Lessons and Recommendations
This attack reinforces several supply chain security practices that the AI/ML ecosystem has been slow to adopt:
Use lockfiles
In our audit, we found that repos using poetry.lock or uv.lock were completely protected — the lockfile pinned litellm to a safe version regardless of what was on PyPI. Repos doing bare pip install were vulnerable.
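To confirm what a lockfile actually pins, you can grep it directly. A sketch against a minimal poetry.lock fragment (fabricated here for illustration; real lockfiles also carry content hashes):

```shell
# Minimal stand-in lockfile fragment
cat > /tmp/poetry.lock <<'EOF'
[[package]]
name = "litellm"
version = "1.82.6"
EOF

# The pinned version is what installs, regardless of what PyPI serves that day
grep -A1 '^name = "litellm"' /tmp/poetry.lock
```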
Pin dependencies to exact versions
A requirement like litellm>=1.79.2 means “give me the latest” — which during the attack window meant “give me the compromised version.” Pin to exact versions: litellm==1.82.6.
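A quick way to flag floating requirements before they bite (toy requirements file; adapt the path to your repo layout):

```shell
# Stand-in requirements file with one floating spec and one exact pin
printf 'litellm>=1.79.2\nrequests==2.32.3\n' > /tmp/requirements.txt

# Anything without '==' can silently upgrade on the next install
grep -vE '==' /tmp/requirements.txt
```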
Pin GitHub Actions to SHAs, not tags
The same attacker group compromised the Trivy GitHub Action by force-pushing malicious code to existing tags. Only one tag (v0.35.0) was saved by GitHub’s immutable release protection. Pin actions to a full commit SHA, e.g. `uses: owner/action@<full-commit-sha>`, instead of a mutable tag like `uses: owner/action@v1`; tags can be moved, commit SHAs cannot.
Audit CI/CD secret scoping
We discovered that some of our GitHub Actions workflows had API keys defined as workflow-level environment variables — available to every step, including pip install. Secrets should be scoped to the specific step that needs them.
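In GitHub Actions terms, that means moving secrets out of workflow-level `env:` and into the one step that needs them. A sketch (action and secret names are illustrative):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    # No workflow- or job-level env: secrets are NOT visible to pip install
    steps:
      - uses: actions/checkout@v4   # pin to a full commit SHA in practice
      - run: pip install -r requirements.txt   # runs with no secrets in scope
      - name: Run tests
        run: pytest
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}   # scoped to this step only
```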
Add dependency scanning
Tools like pip-audit, Dependabot, and Socket can catch known-malicious packages before they reach your CI runners.
Current Status
As of the time of writing:
- PyPI quarantine has been lifted. The compromised versions (1.82.7 and 1.82.8) have been permanently removed; `pip install litellm` now resolves to 1.82.6 (safe).
- Comet platform and Opik Cloud were never affected.
- All Comet CI credentials have been rotated.
- All Comet developer machines have been verified clean.
