> If the job were mainly about producing syntactically valid code, then of course A.I. would be on a direct path to replacing large parts of the profession. But that was never the highest-value part of the work. The value was always in judgment.
> The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.
How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.
The author tries to answer this:
> That process is not optional. It is how engineers acquire and elevate their competency. If early-career engineers use A.I. to remove all struggle from the learning loop, they are hurting their development.
but in a world where writing code by hand (the "struggle") is "artisanal" and "outdated", calling this process non-optional (which I agree with) is contradictory.
How juniors and fresh grads do that with an AI that is designed to give you whatever answer you need in a given moment is unclear to me. I don't see how that's possible, but maybe I'm thinking too myopically.
Myopia is inevitable, to some extent. It's very hard to project this stuff.
Socrates warned (as recorded in Plato's Phaedrus) about what was being lost as philosophy was becoming written rather than oral... and he was right.
We can't even understand what was lost. Many methods of learning and thinking became entirely lost. You could say they were redundant, and they were. But... writing largely replaced oral traditions. It didn't just augment them.
He was that old school coder who had the skills to do philosophy and be an intellectual without writing. Writing was an augmentation for him. But for the new cohort... it was a new paradigm and old paradigm skills became absent.
It is very hard to imagine skilled coders becoming skilled without necessity pressing that skill acquisition. The diligent student will acquire some basic "manual coding" skill... but mostly the skill development will happen wherever the hard work is.
I'd say that by purging stuff from the brain we are losing thinking itself. Thinking is manipulating ideas and concepts in your head, assembling and linking them. The fewer things there are, the more primitive the result. You cannot juggle without objects to juggle, and connecting the dots results in trivial patterns when you have just a couple of dots.
> I'd say that by purging stuff from the brain we are losing thinking itself
The idea that there will be less to think about seems a bit short-sighted. Humans are very good at moving to higher levels of abstraction, often with more complexity to deal with, not less.
A lot of parallels between your statement and Socrates' comments on the transition to writing.
Interestingly, he placed a lot of importance on memory... where you emphasize manipulation of concepts.
I’ve grown to appreciate this aspect of standard examination as I’ve gotten older. Everyone wants to say “oh, you can just look it up now”, but how can you come up with higher level thinking, when you don’t have the fundamentals in your mind?
It's true for all automation: we do get more comfort. We build systems so that we humans have as little struggle as possible, not realising that struggle is the only reason for existence. By eliminating it, we are erasing ourselves from this world.
Automation is also for reducing drudgery - the work that prevents us from meaningful struggle by taking up resources that can be better applied elsewhere. Not all struggle (or pain) is created equal.
I wouldn’t count on reduced drudgery. The assembly line automated many movements needed for manufacturing. But which work involved more drudgery: craftsman-style car production, or standing on an assembly line at Ford?
With any new technology, subsequent drudgery depends on the technology, its concomitant economics, and the imagination of the people using it.
This kind of argument flies in the face of the fact that plenty of inherited rich people seem to lead very happy lives. Of course, they do find things to struggle with, but it's much more pleasant to struggle to score 72 at the golf course or to outbid a rival for a piece of contemporary art than to struggle for basic needs.
I don’t share your idea of a happy life.
I can live a happy life without struggling for basic needs and without playing golf all day long. If you strip every obligation from life, then you merely exist, you don't live.
Facing challenges and overcoming obstacles, plus friends and family, are what make me happy. When you’re rich, most people only care about your money, not the person you are. And I think that’s exactly what a happy life is about.
I guess to each their own. But in the little free time I have as a non-rich person, I like to face low-stakes challenges I choose myself; in my case those currently are mostly learning Chinese and learning to play a musical instrument. Those still provide obstacles, difficulties, the feeling of progress, and moments of success/failure, but I can do them at my own pace and with no serious consequences if I fail.
I can imagine I could be perfectly happy with a life full of challenges of that kind, instead of being forced to work at given scheduled times which often imply I spend less time with my son than I would like, including days I don't feel like it, and including boring tasks (I love my job, but like almost every job, it also has its paperwork, pointless meetings, etc.), knowing I depend on that work to live.
In short, I think we all do need the challenge, the struggle, the successes and the failures, otherwise life would just be boring and pointless. But I don't think we (or at least I) need the obligation component and the high stakes.
What you mention about the rich attracting people focused on money rings true, but it would be moot if AI led us all to lead lives more similar to the rich, which was the point here. (Of course, there's also the issue of whether there is widespread or unequal access to AI, but that's another story...).
We will never fundamentally get rid of thinking; it's coupled to navigating the 3D reality we live in.
And we don't need words to think; cognitive problem solving and language processing are separate processes [1]
We will shift the problems we need to think about. Same as always; humanity isn't building stone pyramids anymore. Did we stop thinking? No, we just thought about a different to-do list.
[1] https://www.scientificamerican.com/article/you-dont-need-wor...
We also never run out of fuel. There will always be some energy left here and there to tap into.
Yeah, but where the comparison with philosophy falls short is this: if we lost some ways of thinking, it was gradual and most didn't notice.
Software code, on the other hand, is extremely formal: either it works perfectly as intended, it works poorly and keeps breaking in various edge cases, or it just doesn't work (the last two are variants of the same dysfunction; technically it's a binary state). There is no scenario where broken code somehow ends up working and delivering, or maybe one in a trillion, sometimes.
Also the change is so fast that the failure is immediately obvious to everybody; it's not a gradual change of thinking over a few decades or generations.
LLMs are getting impressive, but anybody claiming there is no massive long-term harm to reaching what we now call proper seniority is... I don't know, delusional, a junior who never walked that long and hard-won path, doing PR for LLMs at all costs, or some other similar type. Or they simply have some narrow use case that works great for them long-term, which definitely can't be transferred to the whole industry, like one-man indie game dev.
I would argue it's virtually impossible going forward for a junior engineer to run that harder path.
Because the easier path seemingly delivers what's expected of them. Sigh, they may even be required to take the faster path.
I've seen many juniors unable to walk that necessary path before LLMs were a thing.
You aren't thinking myopically; it's a fundamental contradiction, the root of which is in how human brains take in and understand new information. No amount of pontification or hedging, as this and all other "thinkpieces" on the issue engage in, will change that. It is beyond preference and perspective. The skills pertaining to a task are produced only by doing that very task. Prompting, alone or even predominantly, is too far removed from that task. The models can only write the code.
AI has not yet fully aligned with human thinking, but some people stir up euphoria that it's surpassing human thinking. Only after aligning and surpassing could AI think from an outside view; for now it is still inside-out.
you learn by struggling and slogging through, even as a senior if your shit breaks it's on you to understand why. no LLM will shortcut that process for you (even asking LLMs why something is wrong requires you to actually understand it eventually, aka LEARNING). how that happens is up to the person.
i don't understand all this fear projected as if people won't have agency of learning just because LLMs make it easier to do certain things. i don't think it's contradictory at all. half the people here will never have to wrangle the bullshit i dealt with 20 years ago and i'm sure when i was dealing with it there was another 20 years of bullshit before me lol.
if you vibe code your app with no regard for the underlying code you will pay the price for it at some point in the future, anybody worth their salt will slow down enough to figure it out the "artisanal" way.
I'd argue that the engineers of 20 years ago were better than the engineers of today because they were significantly more resource constrained and for example, would never use a 300mb javascript library for a profile page.
John Carmack did praise restraint of resources when he recalled his early days working as a lone contractor and as an employee of Softdisk, when he and the team had to push out games on a very tight schedule.
I think this extends to other parts of life, too. I still remember fondly playing a game over and over again back in high school, when I did not have the Internet and had to borrow CDs from my friends. But when I went to university and had access to pretty much every game freely on the intranet, I rarely did that anymore. That’s why I always think an abundance of X may not be the best option for me. That probably includes money, too.
As a percentage of good to mediocre, maybe. Engineers of 40 years ago were probably better than engineers of 20 years ago: there were fewer of them, and they had more constraints to deal with. Democratization of technology makes it easier for more people to use. That applies to programming as much as to just using a computer.
Understanding something and learning something are not the same things.
Almost none of my operational knowledge came from writing code, but a lot sure came from reading code in the debugging process.
One thing worth mentioning is that even before AI, only some small subset of engineers had experienced building systems from scratch, inventing new ways of doing things, root-causing complex problems, or even writing a lot of code. Most software engineering is maintenance, or mundane, or not productive.
Even in a world with a lot of AI-generated code, there can still be people who get enough exposure to doing hard things. That's definitely true at this point in time, when AI can't really do all those hard things anyway, but it will hold even after it can.
You don't need to build systems from scratch to acquire problem-solving skills. Even routine maintenance problems require digging into documentation, looking at GitHub issues, and doing root-cause analysis. These skills are eliminated by reliance on AI, and there is no fallback if one never acquired them in the first place.
> I don't see how that's possible, but maybe I'm thinking too myopically.
you are thinking too myopically.
We have people who can still do maths well after the introduction of the calculator. We have people who can still spell after the introduction of spell check.
Juniors only need to train without using AI to gain the skills needed; that's called education. If they choose to rely solely on AI and gimp their own education, that's on them.
> We have people who can still do maths well after the introduction of the calculator.
I assume by "do maths" you mean doing simple calculations, like adding a bunch of small numbers, in one's head. That's because in many situations it's more convenient to do so, than using a calculator. So the skill is preserved / practiced, because a calculator is too cumbersome to use. The skills of most people settle at the equilibrium where it takes the same effort to take out the calculator and focus on typing, as it would to strain the brain doing it without a calculator.
> We have people who can still spell after the introduction of spell check.
When using spell check to fix your document, you automatically learn to spell. Your skills improve by using the tool. A better analogy to AI would be an email client with a "Fix all and send"-button, where you never look at the output of the spell checker.
I would also argue that most school systems forbid the use of a calculator for the first couple of years (at least that's how it was in Germany a few decades ago). The same goes for writing by hand. You can spell-check by looking the word up and then manually correcting it.
Both require manual "labor", which leads to learning.
And calculators took decades to become widespread. So we could learn of their side effects before they became mainstream.
Also to note: calculators merely solve intermediary steps, while LLMs are increasingly designed to do full-blown work in one shot. Longer context, deeper thinking, agentic loops.
No. These tools are very good at creating an illusion of learning, without any learning. When you watch them do stuff, you think: yeah, I've got this. Once they are gone, you realize all your supposed skill is gone too. Getting a skill requires deliberate practice. You can use AI for that, but just using AI is not that.
Why no? It sounds like you agree with the person you replied to
There's an old Latin proverb, "Qui scribit, bis legit", which translates to "he who writes reads twice".
In practice, what this means is that you can read some subject many times, but you would still struggle to reproduce the content by yourself. That is why, when learning, it is not sufficient to just read the material several times.
Yes, but currently I don't know of a single company in my area that doesn't make you use AI daily because of the supposed productivity increase. That means juniors also absolutely have to use AI, probably sabotaging their learning process in the long run.
Why is it always so consistently a comparison to a technology of a fundamentally different order? Perhaps what has been lost is the ability to recognise distinct and incommensurable categories.
> We have people who can still do maths well after the introduction of the calculator.
Arithmetic is a very, very small subset of math.