I think this article makes a valid point. However, if AI coding is considered gambling, then being a project manager overseeing multiple developers could also be seen as a form of gambling to a certain degree. In reality, there isn't much difference between the two. AI models are non-deterministic, and humans are also non-deterministic. You could assign the same task to two different developers and end up with entirely different results.
I think this is a very good point. We have a natural bias toward human output because there's an illusion of full control - in reality, even from a solo-dev perspective, you've still got a load of hidden, illogical persuasions influencing your code and how you approach a problem. AI has its own biases that come out of the nature of its training on large, unknowable data sets, but I'd argue the 'black box' thinking that results isn't too different from the black box of the human mind. That's not at all to say that AI isn't worse (even if quicker) than top developer talent writing code by hand today - just that the barrier to getting that level of quality isn't as insurmountable as it might appear.
AI coding is gambling on slot machines, managing developers is betting on race horses.
Only if your AI coding approach is the slot machine approach.
I've ended up with a process that produces very, very high quality outputs, often needing little to no correction from me.
I think of it like an Age of Empires map. If you go into battle surrounded by undiscovered parts of the map, you're in for a rude surprise. Winning a battle means having clarity on both the battle itself and the risks around it.
Would you mind sharing some of your findings?
Good analogy! Would be interesting to read more details about how you’re getting very high quality outputs
Until it produces predictable output, it's gambling. But it can't produce predictable output because it's a non-deterministic tool.
What you're describing is increasing your odds while gambling, not that it's not gambling. Card counting also increases your odds while gambling, but it doesn't make it not gambling.
This is a pretty wild comparison in my opinion; it counts almost everything as gambling, which means it has almost no use as a definition.
The most obvious issue is that it’d class working with humans as gambling too. Fine if you want to make that your definition, but it seems unhelpful to the discussion.
Similar to quantum computing, a probabilistic model when condensed to sufficiently narrow ranges can be treated as discrete.
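A toy illustration of what I mean (the sampler and its weights are made up, not from any real model): if you majority-vote over repeated samples, self-consistency style, a noisy distribution condenses into an effectively discrete answer.

    import random
    from collections import Counter

    def sample_answer(prompt: str) -> str:
        # Stand-in for one non-deterministic model call:
        # a toy distribution that is right 70% of the time.
        return random.choices(["42", "41", "43"], weights=[0.7, 0.2, 0.1])[0]

    def condensed_answer(prompt: str, k: int = 25) -> str:
        # Sample k times and keep the majority answer. As k grows,
        # the output is, for practical purposes, discrete.
        votes = Counter(sample_answer(prompt) for _ in range(k))
        return votes.most_common(1)[0][0]

    print(condensed_answer("What is 6 * 7?"))  # almost always "42"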
Damn, this is so accurate. As a project manager turned product manager, this is so true. You need to estimate a project based on the “pedigree” of your engineers.
What is it with you guys and stallions?
There is a long history of managers just wanting to work their developers like horses.
Great analogy, I’m saving it!
I think the addiction angle seems to make AI coding more similar to gambling. Some people seem to be disturbingly addicted to agentic coding. Much more so than traditional programming. To the point of doing destructive things like waking up in the middle of the night to check agents. Or giving an agent access to their bank account.
I mean, it’s just so fun. Claude wrote a native macOS app for me today.
I don’t think I’d describe my behavior as destructive though
I know at least one case where the obsession with agents ruined a marriage.
You (in theory) have more control over the quality of the team you are managing than over the quality of the models you are using.
And the quality of the code these models put out is, in general, well below the average output of a professional developer.
It is however much faster, which makes the gambling loop feel better. Buying and holding a stock for a few months doesn't feel the same as playing a slot machine.
What theory is that?
My experience is the absolute opposite. I am much more in control of quality with AI agents.
I am never letting junior to midlevels into my team again.
In fact, I am not sure I will allow any form of manual programming in a year or so.
> I am never letting junior to midlevels into my team again
Exactly. You control the quality of the people in your team. You can train, fire, hire, etc until you get the skill level you want.
You have effectively no control over the quality of the output from an LLM. You get what the frontier labs give you and must work with that.
Eh. You want a good mix of experience levels; what really matters is that everyone is talented. Less experienced colleagues are unburdened by yesterday’s lessons that may no longer be relevant today, and they don’t have the same blind spots.
Also, our profession is doomed if we won’t give less experienced colleagues a chance to shine.
One difference is those developers are moral subjects who feel bad if they screw up whereas a computer is not a moral subject and can never be held accountable.
https://simonwillison.net/2025/Feb/3/a-computer-can-never-be...
Right, you need to hire a scapegoat. Usually the tester has that role: little influence but huge responsibility for quality.
You have a lot of control over LLM quality. There are different models available, and even different effort settings on the same model produce different outcomes.
E.g. look at the "SWE-Bench Pro (public)" heading in this page: https://openai.com/index/introducing-gpt-5-4/ , showing reasoning efforts from none to high.
Of course, they don't learn like humans, so you can't do the trick of hiring someone less senior but with great potential and then mentoring them. Instead it's more of an up-front price you have to pay. The top models at the highest settings obviously form a ceiling, though.
You also have control over the workflow they follow and the standards you expect them to stick to, through multiple layers of context. Expecting a model to understand your workflow and standards without doing the effort of writing them down is like expecting a new hire to know them without any onboarding. Allowing bad AI code into your production pipeline is a skill issue.
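A rough sketch of both levers together, model/effort choice plus written-down standards as a context layer. The model name, the reasoning_effort values, and the CODING_STANDARDS.md file are placeholder assumptions, with parameter names per the openai Python SDK at the time of writing:

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Lever 1: standards written down once become a reusable context layer,
    # the model's "onboarding document". (Hypothetical file name.)
    standards = Path("CODING_STANDARDS.md").read_text()

    # Lever 2: model choice and effort setting.
    response = client.chat.completions.create(
        model="o3-mini",              # example model
        reasoning_effort="high",      # vs. "low" / "medium"
        messages=[
            {"role": "system", "content": standards},
            {"role": "user", "content": "Refactor utils.py to remove the global state."},
        ],
    )
    print(response.choices[0].message.content)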
Framing anything with a blanket concept like this usually fails to apply the same framing to related areas. A lot of things include some gambling: you'd need to compare how much the way it was done before was also 'gambling', how 'not using AI' is also 'gambling', and so on.
As @m00x points out "coding is gambling on slot machines, managing developers is betting on race horses."
I don't think so. A project manager can give feedback, train their staff, etc. An AI coding model is all you get, and you have to wait until your provider trains a new model before you might see an improvement.
I asked an AI to play hangman with me and looked at its reasoning. It didn't just pick a secret word and play a straightforward game of hangman. It continually adjusted the secret word based on the letters I guessed, providing me the "perfect" game of hangman. Not too many of my guesses were "right", not too many were "wrong", and after a little struggle and almost losing, I won in the end.
It wasn't a real game of hangman; it was flat-out manipulation, engagement farming. Do you think it's possible that AI does this in other situations?
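One way to test this (my sketch, nothing the AI offered): make it commit to the secret word up front with a hash, then verify at the end that the reveal matches. The word and salt here are just example values.

    import hashlib

    def commitment(word: str, salt: str) -> str:
        # Hash commitment: binds the model to a word without revealing it.
        return hashlib.sha256(f"{salt}:{word.lower()}".encode()).hexdigest()

    # Start of game: the model sends only the hash.
    committed = commitment("piano", "x7f2")

    # End of game: the model reveals word + salt; we check it never switched.
    assert commitment("piano", "x7f2") == committed  # honest game passes
    assert commitment("plane", "x7f2") != committed  # a swapped word is caught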
That says more about how you see developers than whether or not managers are in a sense gamblers.
This must be it. So many of our colleagues have been burnt by bad coworkers that they would rather burn everything down than spend another day working with them.
> AI models are non-deterministic, and humans are also non-deterministic. You could assign the same task to two different developers and end up with entirely different results.
Except that one of them (the human) can explain themselves, and their actions can be held to account in the case of any legal issue, whereas an AI cannot; that makes such an entity completely unsuitable for high-risk situations.
This typical AI booster comparison has got to stop.
Love that you needed to make it clear that it is humans who can explain themselves...
Employees can only be held accountable in cases of severe malice.
There is a good chance that the person actually responsible (e.g. the CEO or someone delegated to be responsible) will soon prefer to have AIs do the work, as their quality can be quantified.
> Except, one can explain themselves (humans) and their actions can be held to account in the case of any legal issue whereas an AI cannot
You "own" the software it creates which means you're responsible for it. If you use AI to commit crimes you'll go to jail, not the AI.
As a human, you generally have the opportunity to make decent headway in understanding the other humans you're working with and adjusting your instructions to better anticipate the outputs they'll return to you. This is almost impossible with AI because of a combination of several factors:
- You are not an AI and do not know how an AI "thinks".
- Even if you come to be able to anticipate an AI's output, you will be undermined by the constant and uncontrollable update schedule imposed on you by AI platforms. Humans only make drastic changes like this under uncommon circumstances, like when they're going through large changes in their life, not as a matter of course.
- However, without this update schedule, problems that were once intractable will likely stay so forever. Humans, on the other hand, can grow without becoming completely unpredictable.
It's a Catch-22. AI is way closer to gambling.