DeepSeek-V4 on Day 0: From Fast Inference to Verified RL with SGLang and Miles

lmsys.org

29 points

mji

6 hours ago


2 comments

Palmik an hour ago

Similar article for vLLM: https://vllm-website-pdzeaspbm-inferact-inc.vercel.app/blog/...

Benchmarks from InferenceX: https://inferencex.semianalysis.com/inference (for whatever reason, they don't use apples-to-apples setups to compare the different engines).

I find it odd that SGLang, vLLM, and TRT-LLM don't seem to want to publish benchmarks against each other. They used to, but now there seems to be some unspoken rule against it.

At least we get a comparison against "other OSS engine" this time, but that could be HF's Transformers as well :)

  • imjonse 26 minutes ago

    They're OSS projects in friendly competition, both working toward the goal of providing alternatives to the big closed-source players. No need for jabs.