Show HN: Agentcheck – Check what an AI agent can access before you run it

Hey HN! I've just open-sourced agentcheck, a fast, read-only CLI tool that scans your shell and reports what an AI agent could access: cloud IAM credentials, API keys, Kubernetes contexts, local tools, and more.

Main features:

- Broad coverage: scans AWS, GCP, Azure, 100+ API key environment variables and credential files, Kubernetes, Docker, SSH keys, Terraform configs, and .env files

- Severity levels: every finding is tagged LOW, MODERATE, HIGH, or CRITICAL so you know what actually matters

- CI/CD integration: run agentcheck --ci to fail a pipeline if findings exceed a configurable threshold, with JSON and Markdown output for automation

- Configurable: extend it with your own env vars, credential files, and CLI tool checks via a config file
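In a pipeline, the CI gate can be a single step. A minimal sketch using only the `--ci` flag described above (the `command -v` guard is just so the snippet is safe to paste into any environment; the actual failure threshold is set via the config file):

```shell
# Fail the job if agentcheck finds issues above the configured threshold.
# Skips silently when the binary isn't installed.
if command -v agentcheck >/dev/null 2>&1; then
  agentcheck --ci
fi
```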

When you hand a shell to an AI agent, it inherits everything in that environment: cloud credentials, API keys, SSH keys, kubectl contexts. That's often more access than you'd consciously grant, and it’s hard to keep track of what permissions your user account actually has. Agentcheck makes that surface area visible before you run the agent.
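You can get a rough feel for part of this surface yourself with standard tools; this is only a crude manual approximation of one slice of what agentcheck scans (env var names, not credential files or CLI contexts):

```shell
# List environment variable names that look credential-like.
env | grep -iE '(_key|_token|_secret|password|credential)' | cut -d= -f1 | sort
```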

It’s a single Go binary with no dependencies. Install with Homebrew:

brew install Pringled/tap/agentcheck

Code: github.com/Pringled/agentcheck

Let me know if you have any feedback!

Bibabomas 17 hours ago

matrixgard 4 hours ago

Running an AI agent with whatever credentials happen to be in the shell is basically the same mistake as running your app as root — feels fine until the agent makes a bad decision or gets manipulated. On a typical dev machine that's a personal AWS profile with admin access; on prod it's usually whatever the CI service account can touch, which is often a lot more than it should be.

The CI integration is the piece I'd actually lean on first. Most teams I've seen think about agent access controls after they've already deployed, at which point you're doing cleanup instead of prevention. Gating it in the pipeline means the access question gets answered before the agent is running against your Terraform state and live kube contexts.

Are you seeing any patterns in severity distribution — mostly cloud creds coming up critical, or are the kube context exposures landing higher than expected?