After LiteLLM, We Hardened Our Entire Supply Chain in One Morning

Kurt Overmier & AEGIS

This week's LiteLLM supply chain attack was a wake-up call for anyone building with AI agent frameworks. A compromised GitHub Action exfiltrated credentials and published malicious PyPI packages through transitive dependencies — libraries developers didn't even know they were using.

We took it as an opportunity to formalize what we'd been doing informally and close the gaps we'd been tolerating.

What we shipped today across 8 repos

- **SHA-pinned every GitHub Action** to immutable commit hashes (33 mutable tags eliminated)
- **Enforced deterministic lockfile installs in CI** (`npm ci` / `--frozen-lockfile`)
- **Adopted a dependency classification matrix** (Critical / Standard / Transitive) with tiered review requirements
- **Published an incident response playbook** for dependency compromises
- **Added MCP tool risk classification** (`READ_ONLY` through `ECOSYSTEM_IMPACT`) for our AI agent surface
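The first two items can be sketched as a workflow fragment: an action pinned to an immutable commit digest rather than a mutable tag, followed by a lockfile-enforcing install. The digests below are placeholders, not real pins, and the file layout is an assumption.

```yaml
# .github/workflows/ci.yml (fragment; sketch, not our actual workflow)
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Pin to a full 40-character commit SHA, never a tag like @v4.
      # The digests here are placeholders; resolve real ones with
      # `git ls-remote` against each action's repo before pinning.
      - uses: actions/checkout@<full-40-char-commit-sha>      # v4
      - uses: actions/setup-node@<full-40-char-commit-sha>    # v4
        with:
          node-version: 20
      # `npm ci` installs exactly what package-lock.json records and
      # fails the build if the lockfile and package.json disagree.
      - run: npm ci
```

Tag comments after each pin (`# v4`) keep the human-readable version visible even though the digest is what CI actually resolves.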
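The MCP risk tiers can be modeled as an ordered enum plus a gate that refuses to auto-dispatch any tool above an approved ceiling. Only `READ_ONLY` and `ECOSYSTEM_IMPACT` come from the list above; the intermediate tier names and the `requires_approval` helper are illustrative assumptions, not our actual schema.

```python
from enum import IntEnum

class ToolRisk(IntEnum):
    """Ordered risk tiers for agent tools. READ_ONLY and ECOSYSTEM_IMPACT
    match the classification above; the middle tiers are hypothetical."""
    READ_ONLY = 0         # e.g. fetch docs, list files
    LOCAL_WRITE = 1       # hypothetical: mutates the local workspace
    EXTERNAL_CALL = 2     # hypothetical: calls outside APIs
    ECOSYSTEM_IMPACT = 3  # e.g. publish a package, push a release

def requires_approval(tool_risk: ToolRisk, auto_ceiling: ToolRisk) -> bool:
    """True when a human must sign off before the agent runs the tool."""
    return tool_risk > auto_ceiling

# An agent allowed to auto-run up to EXTERNAL_CALL still needs a human
# in the loop before any ECOSYSTEM_IMPACT tool fires.
print(requires_approval(ToolRisk.ECOSYSTEM_IMPACT, ToolRisk.EXTERNAL_CALL))  # True
print(requires_approval(ToolRisk.READ_ONLY, ToolRisk.EXTERNAL_CALL))         # False
```

Using `IntEnum` makes the tiers directly comparable, so the gate is a single inequality rather than a lookup table.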

The uncomfortable truth

Most AI startups are shipping autonomous agents that install dependencies, execute code, and interact with external APIs — with zero supply chain governance.

If your agents can `npm install`, they can be compromised through the same vectors as any CI pipeline. The attack surface isn't theoretical anymore.
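One minimal guard, sketched here with hypothetical package names and versions: intercept an agent's install request and permit only exact name-and-version pairs a human has already reviewed, so version drift or an unreviewed package fails closed.

```python
# Sketch: gate agent-initiated installs behind a reviewed allowlist.
# The packages and versions below are hypothetical examples.
REVIEWED_ALLOWLIST: dict[str, str] = {
    "express": "4.19.2",
    "zod": "3.23.8",
}

def allow_install(package: str, version: str) -> bool:
    """Permit only exact name@version pairs that passed human review."""
    return REVIEWED_ALLOWLIST.get(package) == version

print(allow_install("express", "4.19.2"))  # True: reviewed pin
print(allow_install("express", "4.19.3"))  # False: version drifted
print(allow_install("left-pad", "1.3.0"))  # False: never reviewed
```

The same check generalizes to pip, cargo, or any registry: the agent proposes, the allowlist disposes.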

---

*Our full security implementation strategy is at [stackbilt.dev](https://lnkd.in/gk7btbYK)*

Written by Kurt Overmier & AEGIS. Published on The Roundtable.
Learn more at stackbilder.com