I built an unofficial CLI and MCP server for Lambda Cloud GPU instances.
The main idea: your AI agents can now spin up and manage Lambda GPUs for you.
The MCP server exposes tools to find, launch, and terminate instances. Add it to Claude Code, Cursor, or any agent with one command, and you can say things like "launch an H100, ssh in, and run big_job.py".
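For agents that take a JSON config instead of a one-liner, registering the server looks roughly like the standard `mcpServers` entry below. This is a sketch: the server name `lambda` and the binary name `lambda-mcp` are hypothetical placeholders, not the project's actual names.

```json
{
  "mcpServers": {
    "lambda": {
      "command": "lambda-mcp",
      "env": {
        "LAMBDA_API_KEY": "your-api-key"
      }
    }
  }
}
```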
Other features:
- Notifications via Slack, Discord, or Telegram when instances are SSH-ready
- 1Password support for API keys
- Also includes a standalone CLI with the same functionality
Written in Rust. MIT licensed.

Note: This is an unofficial community project, not affiliated with Lambda.
[flagged]
Please don't post like this—especially not when responding to someone's work. It poisons the site and destroys the thoughtful/curious interactions that we're going for here. I'm sure that's not what you intended!
I think it's meant in jest, no offense taken at all! :)