Show HN: AvaKill – Deterministic safety firewall for AI agents (<1ms, no ML)


duroapp

a day ago


3 comments

Lothbrok 9 hours ago

The deterministic angle makes sense. One thing that keeps coming up in real deployments is that teams end up dealing with three separate problems at once: isolation, policy enforcement, and runaway execution. A policy engine can block obviously bad actions, but you still need session budgets / loop caps for the cases where the agent stays "within policy" while doing the wrong thing repeatedly. That boundary is a big part of what pushed us to build Daedalab. Curious how you're drawing it here.
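The session-budget / loop-cap idea can be sketched as a simple counter guard. All names here are hypothetical illustrations of the concept, not any real tool's API: each call may individually pass policy, but the session is cut off once a total budget or a repeat cap is exceeded.

```python
# Sketch of a session budget + loop cap (hypothetical names, for illustration).
# Even "within policy" calls are blocked once the session exceeds a total
# call budget or repeats the same exact action too many times.
from collections import Counter


class BudgetExceeded(Exception):
    pass


class SessionGuard:
    def __init__(self, max_calls: int = 100, max_repeats: int = 5):
        self.max_calls = max_calls      # hard cap on tool calls per session
        self.max_repeats = max_repeats  # cap on identical repeated calls
        self.total = 0
        self.seen = Counter()

    def check(self, tool: str, args: dict) -> None:
        """Raise BudgetExceeded if this call would blow a session limit."""
        key = (tool, repr(sorted(args.items())))
        self.total += 1
        self.seen[key] += 1
        if self.total > self.max_calls:
            raise BudgetExceeded(f"session budget of {self.max_calls} calls exhausted")
        if self.seen[key] > self.max_repeats:
            raise BudgetExceeded(f"loop cap: {tool} repeated {self.seen[key]} times")


guard = SessionGuard(max_calls=100, max_repeats=3)
for _ in range(3):
    guard.check("shell", {"cmd": "ls /tmp"})  # within policy, within budget
try:
    guard.check("shell", {"cmd": "ls /tmp"})  # 4th identical call trips the loop cap
except BudgetExceeded as e:
    print("blocked:", e)
```

The point of the sketch is that this check is orthogonal to per-call policy: a policy engine answers "is this action allowed?", while the budget answers "has this session done too much of it?".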

duroapp a day ago

Hi HN, I'm Logan. After watching Replit's agent delete a production database, Claude Code wipe a user's home directory, and Amazon Kiro cause a 13-hour AWS outage, I built the tool I wished existed.

AvaKill intercepts AI agent tool calls (file writes, shell commands, API requests) and evaluates them against a YAML policy file before they execute. No ML and no API calls: deterministic policy evaluation in under 1 millisecond.
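In principle, that kind of evaluation is just ordered glob matching, which is a minimal sketch of the idea. The rule schema below is invented for illustration and is not AvaKill's actual policy format:

```python
# Conceptual sketch of deterministic, first-match policy evaluation.
# The rule schema is hypothetical (not AvaKill's real YAML format):
# rules are checked in order, globs decide matches, no model is consulted,
# so the same call always gets the same answer in microseconds.
from fnmatch import fnmatch

POLICY = [
    {"action": "deny",  "tool": "shell",      "pattern": "rm -rf *"},
    {"action": "deny",  "tool": "file_write", "pattern": "/etc/*"},
    {"action": "allow", "tool": "*",          "pattern": "*"},  # fallthrough allow
]


def evaluate(tool: str, target: str) -> str:
    """Return 'allow' or 'deny' for a tool call; first matching rule wins."""
    for rule in POLICY:
        if fnmatch(tool, rule["tool"]) and fnmatch(target, rule["pattern"]):
            return rule["action"]
    return "deny"  # deny by default if nothing matches


print(evaluate("shell", "rm -rf /var/www"))         # deny
print(evaluate("file_write", "/home/me/notes.txt"))  # allow
```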

Three enforcement paths:

- Hooks: Direct integration into Claude Code, Cursor (in testing), Windsurf, Gemini CLI, Codex, Kiro, Amp

- MCP Proxy: Transparent proxy between any MCP client and server (in testing)

- OS Sandbox: Kernel-level enforcement via Landlock (Linux), sandbox-exec (macOS), AppContainer (Windows)

`pipx install avakill` — you're protected in under a minute.

demo video: https://avakill-demo-video.b-cdn.net/avakill_demo.mp4

nateschmied a day ago

Very cool! Smart way of putting deterministic guardrails on projects instead of trying to stack more ML on top of ML (which is what I always end up trying, to maddening effect). Curious whether it can be stacked as a primitive to control things like token budgets, spending budgets, and other real-world activities in OpenClaw.