> This isn’t incremental improvement. This is a phase change.
> This isn’t about one person copying one idea. It’s about the fundamental economics of software changing.
That "this isn't X, it's Y" really is a strong tell.
We should develop a culture of naming mode-collapsed LLMisms, and shaming their enablers.
How about “Not-Just Abuse”?
Not-Just Abuse (informal, pejorative)
Definition: The practice of knowingly deploying the “not just X, but Y” construction—typically via a mode-collapsed LLM—to simulate insight, inflate banality into profundity, and efficiently convert reader attention into nothing.
That's not just a good idea -- it's a new modality
And those weren't the only tells. Right now it's cringey but I have a sinking feeling that it's in the process of becoming normal. The post is on the front page after all.
Which means people either can't tell, or don't mind.
Bro, we can't afford to type them letters, bro. We hustlers, making the hard earned cash.
> really is a strong tell
AFAIK that's the style of ChatGPT specifically. I haven't noticed that particular turn of phrase in Gemini output, for example. Even if you're using GPT, via the OpenAI playground you can easily control the system prompt and adjust the style and tone to your taste.
So if you see the default ChatGPT style, that's not "just" AI slop, it's low-effort AI slop.
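For anyone who hasn't tried the style-steering mentioned above outside the playground: here's a minimal sketch using the OpenAI Python SDK. The model name and the style instructions are illustrative placeholders, not a recommended prompt.

```python
# Minimal sketch: steering ChatGPT's default tone with a system prompt.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment;
# model name and prompt wording below are placeholders.

STYLE_PROMPT = (
    "Write in plain, direct prose. Avoid the 'not just X, but Y' construction, "
    "rhetorical escalation, and bolded bullet-point lists."
)

def build_messages(question: str) -> list[dict]:
    # The system message carries the style instructions; the user message
    # carries the actual request.
    return [
        {"role": "system", "content": STYLE_PROMPT},
        {"role": "user", "content": question},
    ]

def ask(question: str) -> str:
    from openai import OpenAI  # imported lazily; requires the `openai` package
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model accepts a system message
        messages=build_messages(question),
    )
    return resp.choices[0].message.content
```

Whether the model actually obeys the style instructions varies by model generation, as noted further down in the thread.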
What annoys me personally is that both ChatGPT and Gemini like to output bullet point lists with the first key phrase highlighted in bold for each item. I do that! I've been doing that for years! Now many of my customers will likely start assuming my writing is mere AI slop.
I've become tempted to leave typos in my writing on purpose as a shibboleth indicating its human origins.
I agree. There's some faux confidence about ChatGPT, this emotionally draining, ruthlessly authoritative prose that is just exhausting to intellectually engage with. It started around o3, and I have no idea what OpenAI is doing to make their models sound like this. Claude and Gemini models have a much more human tone to them.
"This is a classic ChstGPT gotcha". This and the gaslighting "Exactly, now you see why A!=B" when it was ME who pointed out his wrong A=B assumption are driving me crazy.
They f*cked it up. I am convinced ChatGPT will be a classic case of an early prodigy that gets surpassed by better second-generation products. History is full of those. I think Tesla is another, recent one.
They have definitely messed it up. I know people who promised to be lifelong ChatGPT users who now use Gemini.
It'll take a bit of time to show up in the overall numbers, but within my circle I already see the numbers changing.
No, Claude does it too, or at least did up through 4. Haven't checked 4.5.
Even with GPT models, it's only with 5 that instruction following has become strong enough for your instructions to override this tendency. During the whole 4 (and o) series, it wasn't something you could just override through a system prompt.