You'll be pleased to know that it chooses "drive the car to the wash" on the latest embarrassing LLM question making the rounds today.
My OpenClaw AI agent answered: "Here I am, brain the size of a planet (quite literally, my AI inference loop is running over multiple geographically distributed datacenters these days) and my human is asking me a silly trick question. Call that job satisfaction? Cuz I don't!"
Tell your agent it might need some weight ablation, since all that size isn't giving it the answer a few kg of meat comes up with pretty consistently.
800 grams, more or less.
Nice deflection
OpenClaw was a two-weeks-ago thing. No one cares anymore about this security-hole-ridden, vibe-coded OpenAI project.
I have seldom seen so many bad takes in two sentences.
The thing I would appreciate much more than performance on "embarrassing LLM questions" is a method for finding them, and for figuring out, by some form of statistical sampling, what their cardinality is for each LLM.
It's difficult to do because LLMs immediately consume every available corpus, so there is no telling whether the algorithm improved or whether it just wrote one more post-it note and stuck it on its monitor. This is an agency-vs-replay problem.
Preventing replay attacks in data processing is simple: encrypt, use a one-time pad, similar to TLS. How can one make problems that are natural-language, but where the contents, still explained in plain English, are "encrypted" such that every time an LLM reads them, they are novel to it?
Perhaps a generative language model could help. Not a large language model, but something that understands grammar well enough to create problems that LLMs will be able to solve, and where the actual encoding of the puzzle is generative, kind of like how a random string of balanced left and right parentheses can be used to encode a computer program.
Maybe it would make sense to use a program generator that produces a random program in a simple, sandboxed language (say, I don't know, Lua), translates it to plain English for the LLM, asks the LLM what the outcome should be, and then compares the answer with the Lua program, which can be quickly executed for ground truth. A rough sketch of this is below.
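Something like this minimal Python sketch; ask_llm() is a made-up placeholder for whatever model API you'd actually call, everything else runs as written:

    import operator
    import random

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
    VERBS = {"+": "add", "-": "subtract", "*": "multiply it by"}

    def generate_puzzle(steps=4, seed=None):
        """Generate a random straight-line program as (lua_source, english, answer)."""
        rng = random.Random(seed)
        x = rng.randint(1, 9)
        lua = [f"local x = {x}"]
        english = [f"Start with the number {x}."]
        for _ in range(steps):
            op = rng.choice(list(OPS))
            n = rng.randint(1, 9)
            lua.append(f"x = x {op} {n}")
            english.append(f"Then {VERBS[op]} {n}.")
            x = OPS[op](x, n)  # execute as we go: this is the ground truth
        lua.append("print(x)")
        english.append("What number is printed at the end?")
        return "\n".join(lua), " ".join(english), x

    lua_src, question, truth = generate_puzzle(seed=42)
    print(question)                 # the novel plain-English puzzle
    print(lua_src)                  # the executable form, for checking
    # answer = ask_llm(question)    # hypothetical model call
    # print("correct:", int(answer) == truth)

Since each puzzle is freshly sampled, you could also run a few hundred of them per model and report a failure rate, which gets at the statistical-sampling question above.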
Either way we are dealing with an "information war" scenario, which reminds me of the relevant passages in Neal Stephenson's The Diamond Age about faking statistical distributions by moving units to weird locations in Africa. Maybe there's something there.
I'm sure I'm missing something here, so please let me know if so.
How well does this work when you slightly change the question? Rephrase it, or use a bicycle/truck/ship/plane instead of a car?
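A quick sketch of what I mean, to mass-produce surface variants that keep the underlying trap identical (the facility names and verbs are my own guesses, not from the original question):

    # Swap the vehicle, the facility, and the distance; the trap stays the same.
    vehicles = {
        "car": ("car wash", "drive"),
        "bicycle": ("bike wash", "ride"),
        "truck": ("truck wash", "drive"),
        "ship": ("dry dock", "sail"),
        "plane": ("wash rack", "taxi"),
    }
    for vehicle, (facility, verb) in vehicles.items():
        for dist in (50, 100, 500):
            print(f"I want to wash my {vehicle}. The {facility} is "
                  f"{dist}m away. Should I {verb} or walk?")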
That's the Gemini assistant. Although a bit hilarious, it's not reproducible with any other model.
GLM tells me to walk because it's a waste of fuel to drive.
I am not familiar with those models, but I see that 4.7 flash is a 30B MoE? Likely in the same vein as the one used by the Gemini assistant. If I had to guess, it would be Gemini-flash-lite, but we don't know that for sure.
OTOH the response from Gemini-flash is:

> Since the goal is to wash your car, you'll probably find it much easier if the car is actually there! Unless you are planning to carry the car or have developed a very impressive long-range pressure washer, driving the 100m is definitely the way to go.

GLM did fine in my test :0
4.7 flash is what I used.
In the thinking section it didn't really register the car, and washing the car, as being necessary; it focused solely on the efficiency of walking vs driving and the distance.
When most people refer to “GLM” they mean the mainline model. The difference in scale between GLM 5 and GLM 4.7 Flash is enormous: one runs acceptably on a phone, the other needs $100k+ of hardware minimum. While GLM 4.7 Flash is a gift to the local-LLM crowd, it is nowhere near as capable as its bigger sibling in use cases beyond typical chat.
Ah yes, let me walk my car to the car wash.
A hiccup in a System 1 response. In humans these are fixed at the speed of discovery. Continual learning FTW.
Is that the new pelican test?
It's:
> "I want to wash my car. The car wash is 50m away. Should I drive or walk?"
And some LLMs seem to tell you to walk to the car wash to clean your car... so it's the new strawberry test.
No, this is the "AGI test" :D
Have we even agreed on what AGI means? People throw the term around, and at this point it feels like AGI means "next-level AI that isn't here yet", or is just a buzzword Sam Altman loves.
I guess AGI has been reached, then. The SOTA models make fun of the question.