Show HN: Prompt-to-proof: reproducible LLM eval with hash-chained receipts

prompt-to-proof is an open-source toolkit to (1) measure LLM streaming latency and throughput and (2) run a small, reproducible code eval, with hash-chained receipts you can verify after the fact. It targets the OpenAI-style /chat/completions API, so it works with OpenAI itself or with local servers like vLLM and llama.cpp.
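To illustrate the idea behind hash-chained receipts, here is a minimal sketch in Python. The field names and genesis value are hypothetical, not the actual prompt-to-proof format: each receipt commits to the previous receipt's hash, so tampering with any earlier entry breaks every later link.

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed genesis value for the first link

def make_receipt(record: dict, prev_hash: str) -> dict:
    """Append one record to the chain by hashing it together with
    the previous receipt's hash (canonical JSON, SHA-256)."""
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {**body, "hash": digest}

def verify_chain(receipts: list) -> bool:
    """Recompute every hash and check each link points at its predecessor."""
    prev = GENESIS
    for r in receipts:
        body = {"record": r["record"], "prev": r["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if r["prev"] != prev or r["hash"] != expected:
            return False
        prev = r["hash"]
    return True
```

Editing any record (say, swapping a model's output) changes its recomputed hash, which no longer matches the stored one, and verification fails for that entry and everything after it.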

Qendresahoti

3 days ago


1 comment