Did we just give up on evaluations these days?
Over and over again, my experience building production AI tools and systems has been that evaluations are vital for improving performance.
I've also seen a lot of people proposing some variation of "LLM as critic" as a solution to this, but I've never seen empirical evidence that it works. Furthermore, I've worked with a pretty well-respected researcher in this space, and in our internal experiments we found that LLMs were not good critics.
Results are always changing, so I'm very open to the possibility that someone has successfully figured out how to use "LLM as critic", but without the foundation of some basic evals to compare against, I remain skeptical.
This is the best guide I've seen to the LLM-as-judge pattern: https://hamel.dev/blog/posts/llm-judge/index.html
This is fantastic, thank you for sharing.
Evals are a core part of any up-to-date LLM team. If a team is just winging it without robust eval practices, they're not to be trusted.
> Furthermore, I've worked with a pretty well-respected researcher in this space, and in our internal experiments we found that LLMs were not good critics
This is an idea that seems so obvious in retrospect, after using LLMs and getting so many flattering responses telling us we're right and complimenting our inputs.
For what it's worth, I've heard from some people who said they were getting better results by intentionally using models from a different family for the eval portion. Feels like having a model in the same family evaluate its own output triggers too many false positives.
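For illustration, here's a minimal sketch of that cross-family setup. The `ModelFn` callable is a placeholder for whatever provider SDK you actually use, and the prompt wording and PASS/FAIL protocol are my own assumptions, not a standard:

```python
from typing import Callable

# A prompt -> completion function; plug in any provider SDK here.
ModelFn = Callable[[str], str]

JUDGE_PROMPT = (
    "You are grading an answer for factual accuracy.\n"
    "Question: {question}\n"
    "Answer: {answer}\n"
    "Reply with exactly PASS or FAIL."
)

def judge(question: str, answer: str, judge_model: ModelFn) -> bool:
    """Grade an answer; pass in a model from a *different* family than the
    one that generated `answer`, to reduce self-preference false positives."""
    verdict = judge_model(JUDGE_PROMPT.format(question=question, answer=answer))
    return verdict.strip().upper().startswith("PASS")
```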
I once asked Claude Code (Opus 4) to review a codebase I’d built, and threw in at the end of my prompt something like “No need to be nice about it.”
Now granted, you could say it was “flattering that instruction”, but it sure didn’t flatter me. It absolutely eviscerated my code, calling out numerous security issues (which were real), all manner of code smells and bad architectural decisions, and ended by saying that the codebase appeared to have been thrown together in a rush with no mind toward future maintenance (which was… half true… maybe more true than I’d like to admit).
All this to say that it is far from obvious that LLMs are intrinsically bad critics.
The problem isn't that LLMs can't be critical, it's that LLMs don't have taste. It's easy to get an LLM to give praise, and it's easy to get an LLM to give criticism, but getting an LLM to praise good things and criticize bad things is currently impossible for non-trivial inputs. That's not to say that prompting your LLM to generate criticism is useless; it's just that any LLM prompted to generate criticism is going to criticize things that are actually fine, just like how an LLM prompted to generate praise (which is effectively the default behavior) is going to praise things that are deeply not fine.
Absolutely matches my experience - it can still be super helpful, but AIs have an extreme version of anchoring bias.
Another issue is that the behaviour of the LLMs is not very consistent.
I have an idea. What if we used a third LLM to evaluate how good the secondary LLM is at critiquing the primary LLM.
Evals somehow seem to be very very underrated, which is concerning in a world where we are moving towards (or trying to) systems with more autonomy.
Your skepticism of "LLM-as-a-judge" setups is spot on. If your LLM can make mistakes/hallucinate, then of course your judge LLM can too. In practice, you need to validate your judges against sample annotated data and possibly adapt them to your task. You might adapt them by trial and error, via prompt optimization, e.g., using DSPy [1], or by learning a small correction model on top of their outputs, e.g., LLM-Rubric [2] or Prediction Powered Inference [3].
In the end, using the LLM as a judge confers just these benefits:
1. It is easy to express complex evaluation criteria. This does not guarantee correctness.
2. Seen as a model, it is easy to "train", i.e., you get all the benefits of in-context learning, e.g., prompt based, few-shot.
But you still need to evaluate and adapt them. I have notes from a NeurIPS workshop from last year [4]. Btw, love your username!
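To make the "validate your judges" step concrete, here's a minimal generic sketch (not the LLM-Rubric or PPI method): score the judge's verdicts against a small human-annotated sample, reporting raw agreement plus Cohen's kappa to correct for chance. The toy labels are made up:

```python
from collections import Counter

def cohens_kappa(human: list[str], judge: list[str]) -> float:
    """Agreement between human labels and judge verdicts, corrected for
    the agreement you'd expect from the two label distributions by chance."""
    assert len(human) == len(judge)
    n = len(human)
    observed = sum(h == j for h, j in zip(human, judge)) / n
    h_counts, j_counts = Counter(human), Counter(judge)
    expected = sum(
        (h_counts[label] / n) * (j_counts[label] / n)
        for label in set(human) | set(judge)
    )
    return (observed - expected) / (1 - expected)  # undefined if expected == 1

human = ["PASS", "PASS", "FAIL", "FAIL", "PASS"]  # toy annotated sample
judge = ["PASS", "PASS", "PASS", "FAIL", "PASS"]  # judge verdicts on the same items
raw = sum(h == j for h, j in zip(human, judge)) / len(human)
print(f"raw agreement: {raw:.2f}")                 # 0.80
print(f"Cohen's kappa: {cohens_kappa(human, judge):.2f}")  # ~0.55
```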
[2] https://aclanthology.org/2024.acl-long.745/
For coding agents, evaluations are tricky - thorough evaluation tasks tend to be slow and/or expensive and/or display a high degree of variance over N attempts. You could run a whole benchmark like SWE-bench or Terminal-Bench against a coding agent on every change, but it quickly becomes infeasible.
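The variance point is worth making concrete: with a handful of attempts per task, the error bars on a pass rate are wide. A toy sketch, where `run_agent_on_task` is a hypothetical stand-in for invoking your real harness:

```python
import math
import random

def run_agent_on_task(task_id: str) -> bool:
    # Hypothetical stand-in: replace with a real call to your agent harness.
    return random.random() < 0.6  # pretend the agent passes ~60% of attempts

def pass_rate(task_id: str, n: int = 10) -> tuple[float, float]:
    """Estimate the pass rate over n attempts, with a binomial standard error."""
    passes = sum(run_agent_on_task(task_id) for _ in range(n))
    p = passes / n
    stderr = math.sqrt(p * (1 - p) / n)
    return p, stderr

p, se = pass_rate("task-042", n=10)
print(f"pass rate: {p:.2f} +/- {se:.2f}")  # with n=10, the error bar is wide
```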
I used to own the eval suite for a coding agent; it's certainly doable, even when it requires SQL, tables, etc. We even had support for a wide range of data options, from canned CSV data to plugging into prod to simulate the user experience, all easily configurable at eval run time. It also supported agentic flows where the results from one eval could be chained into the next (with the option of sending a known correct answer, to check the framework end to end in the case of node failure).
Interestingly enough, we started with hundreds of evals, but after that experience my advice has become: fewer evals, tied more closely to specific features and product ambitions.
By that I mean: some evals should serve as a warning ("uh oh, that eval failed, don't push to prod"), others as a milestone ("woohoo! we got it to work!"), and all should be informed by the product roadmap. You should basically be able to understand where the product is going just by looking over the eval suite.
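As a sketch of how that split might look in code (the eval names and the `EvalResult` shape are made up for illustration): gate evals block a deploy, milestone evals just report roadmap progress.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    name: str
    role: str   # "gate" -> must pass before prod; "milestone" -> roadmap goal
    passed: bool

def ok_to_ship(results: list[EvalResult]) -> bool:
    """Only gate evals block a release; milestones are informational."""
    for r in results:
        marker = "BLOCKS DEPLOY" if r.role == "gate" and not r.passed else ""
        print(f"{'PASS' if r.passed else 'FAIL'}  {r.name}  {marker}")
    return all(r.passed for r in results if r.role == "gate")

results = [
    EvalResult("sql-basic-join", "gate", True),
    EvalResult("csv-import-roundtrip", "gate", False),
    EvalResult("multi-step-agentic-flow", "milestone", False),
]
print("ship!" if ok_to_ship(results) else "hold the release")
```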
And, if you don't have evals, you really don't know if you're moving the needle at all. There were multiple situations where a tweak to a prompt passed an initial vibe check, but when run against the full eval suite, clearly performed worse.
The other piece of advice would be: evals don't have to be sophisticated, just repeatable and agnostic to who's running them. Heck, even "vibe checks" can be good evals, if they're written down and there's consensus among multiple people on whether they passed or not.
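A vibe check written down in that spirit can be as simple as this (the check wording and the majority-vote rule are illustrative assumptions):

```python
# Each check gets independent pass/fail verdicts from multiple reviewers
# and passes only on majority consensus.
vibe_checks = {
    "answers stay on topic for off-domain questions": ["pass", "pass", "fail"],
    "refuses to fabricate citations": ["pass", "pass", "pass"],
}

for check, verdicts in vibe_checks.items():
    passed = verdicts.count("pass") > len(verdicts) / 2
    print(f"{'PASS' if passed else 'FAIL'}: {check}  (votes: {verdicts})")
```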
Running evals isn't the problem; the problem is acquiring or building a high-quality, non-contaminated dataset.
https://arxiv.org/abs/2506.12286 makes a very compelling case that SWE-bench (and, by extension, anything based on public source code) is most likely overestimating your agent's actual capabilities.