The "spreadsheet" example video is kind of funny: guy talks about how it normally takes him 4 to 8 hours to put together complicated, data-heavy reports. Now he fires off an agent request, goes to walk his dog, and comes back to a downloadable spreadsheet of dense data, which he pulls up and says "I think it got 98% of the information correct... I just needed to copy / paste a few things. If it can do 90 - 95% of the time consuming work, that will save you a ton of time"
It feels like either finding that 2% that's off (or dealing with the 2% error) will be the time-consuming part in a lot of cases. I mean, this is nothing new with LLMs, but as these use cases encourage users to input more complex tasks that are more integrated with our personal data (and at times money, as hinted at by all the "do task X and buy me Y" examples), "almost right" seems like it has the potential to cause a lot of headaches. Especially when the 2% error is subtle and buried in step 3 of 46 of some complex agentic flow.
> how it normally takes him 4 to 8 hours to put together complicated, data-heavy reports. Now he fires off an agent request, goes to walk his dog, and comes back to a downloadable spreadsheet of dense data, which he pulls up and says "I think it got 98% of the information correct...
This is where the AI hype bites people.
A great use of AI in this situation would be to automate the collection and checking of data. Search all of the data sources and aggregate links to them in an easy place. Use AI to search the data sources again and compare against the spreadsheet, flagging any numbers that appear to disagree.
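Roughly, the "compare and flag" pass could look like the sketch below (made-up placeholder data; the two pass functions stand in for whatever AI/search tooling actually pulls numbers out of the source documents):

    # Sketch of the "second pass" idea: extract each figure twice,
    # independently, and flag any cell where the two passes disagree.

    def first_pass():
        # values the agent originally put into the spreadsheet (placeholder)
        return {"Q1 revenue": 1_200_000, "Q2 revenue": 1_150_000, "Q3 revenue": 990_000}

    def second_pass():
        # values from an independent re-read of the sources (placeholder)
        return {"Q1 revenue": 1_200_000, "Q2 revenue": 1_510_000, "Q3 revenue": 990_000}

    def flag_disagreements(a, b, tol=0.01):
        """Return cells whose values differ by more than `tol` (relative) or are missing."""
        flags = []
        for key in sorted(set(a) | set(b)):
            if key not in a or key not in b:
                flags.append((key, a.get(key), b.get(key)))
            elif abs(a[key] - b[key]) > tol * max(abs(a[key]), abs(b[key]), 1):
                flags.append((key, a[key], b[key]))
        return flags

    for cell, v1, v2 in flag_disagreements(first_pass(), second_pass()):
        print(f"CHECK {cell}: pass 1 = {v1}, pass 2 = {v2}")

The point is that the human only has to look at the flagged cells, not re-derive the whole sheet.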
Yet the AI hype train takes this all the way to the extreme conclusion of having AI do all the work for them. The quip about 98% correct should be a red flag for anyone familiar with spreadsheets, because it's rarely simple to identify which 2% is actually incorrect without reviewing everything.
This same problem extends to code. People who use AI as a force multiplier - having it do the thing for them while reviewing each step as they go, and disengaging to work manually when that's more appropriate - have much better results. The people who YOLO it with prompting cycles until the code passes tests and then submit a PR are causing problems almost as fast as they're developing new features in non-trivial codebases.
From John Dewey's Human Nature and Conduct:
“The fallacy in these versions of the same idea is perhaps the most pervasive of all fallacies in philosophy. So common is it that one questions whether it might not be called the philosophical fallacy. It consists in the supposition that whatever is found true under certain conditions may forthwith be asserted universally or without limits and conditions. Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned. Because the success of any particular struggle is measured by reaching a point of frictionless action, therefore there is such a thing as an all-inclusive end of effortless smooth activity endlessly maintained.
It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.”
The proper use of these systems is to treat them like an intern or new grad hire. You can give them the work that none of the mid-tier or senior people want to do, thereby speeding up the team. But you will have to review their work thoroughly because there is a good chance they have no idea what they are actually doing. If you give them mission-critical work that demands accuracy or just let them have free rein without keeping an eye on them, there is a good chance you are going to regret it.
I’ve never experienced an intern who was remotely as mediocre and incapable of growth as an LLM.
What about the coach's ability to improve their instruction?
The point of coaching a Junior is so they improve their skills for next time
What would be the point of coaching an LLM? You will just have to coach it again and again
What about it?
Isn't the point of an intern or new grad that you are training them to be useful in the future, acknowledging that for now they are a net drain on resources?
An overly eager intern with short term memory loss, sure.
And working with interns requires more work for the final output compared to doing it yourself.
For this example - Let’s replace the word “intern” with “initial-stage-experts” or something.
There’s a reason people invest their time with interns.
Yeah, people complaining about accuracy of AI-generated code should be examining their code review procedures. It shouldn’t matter if the code was generated by a senior employee, an intern, or an LLM wielded by either of them. If your review process isn’t catching mistakes, then the review process needs to be fixed.
This is especially true in open source where contributions aren’t limited to employees who passed a hiring screen.
This is taking what I said further than intended. I'm not saying the standard review process should catch the AI-generated mistakes. I'm saying this work is at the level of someone who can and will make plenty of stupid mistakes. It therefore needs to be thoroughly reviewed by the person using the AI before it is even up to the standard of a typical employee's work that the normal review process generally assumes.
Yep, in the case of open source contributions as an example, the bottleneck isn't contributors producing and proposing patches, it's a maintainer deciding if the proposal has merit, whipping (or asking contributors to whip) patches into shape, making sure it integrates, etc. If contributors use generative AI to increase the load on the bottleneck it is likely to cause a negative net effect.
This very much. Most of the time, it's not a code issue, it's a communication issue. Patches are generally small, it's the whole communication around it until both parties have a common understanding that takes so much time. If the contributor comes with no understanding of his patch, that breaks the whole premise of the conversation.
I can still complain about the added workload of inaccurate code.
If 10 times more code is being created, you need 10 times as many code reviewers.
Plus the overhead of coordinating the reviewers as well!
"Corporate says the review process needs to be relaxed because its preventing our AI agents from checking in their code"
98% sure each commit doesn’t corrupt the database, regress a customer feature, open a security vulnerability. 50 commits later … (which is like, one day for an agentic workflow)
It’s only a 64% chance of corruption after 50 such commits at a 98% per-commit success rate.
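The compounding math, for anyone who wants to check it (assuming independent, equally risky commits):

    # Chance that at least one of n independent commits goes wrong,
    # given each commit is "98% safe" on its own.
    p = 0.98
    for n in (1, 10, 50, 250):
        print(f"{n:3d} commits -> {1 - p**n:.0%} chance of at least one bad one")
    # 1 commit -> 2%, 10 -> 18%, 50 -> 64%, 250 -> 99%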
I would be embarrassed to be at OpenAI releasing this and pretending the last 9 months haven't happened... waxing poetic about the "age of agents" - absolutely cringe and pathetic
Or as I would like to put it, LLM outputs are essentially the Library of Babel. Yes, it contains all of the correct answers, but might as well be entirely useless.
”The people who YOLO it with prompting cycles until the code passes tests and then submit a PR are causing problems almost as fast as they’re developing new features in non-trivial codebases.”
This might as well be the new definition of “script kiddie”, and it’s the kids that are literally going to be the ones birthed into this lifestyle. The “craft” of programming may not be carried by these coming generations and possibly will need to be rediscovered at some point in the future. The Lost Art of Programming is a book that’s going to need to be written soon.
Oh come on, people have been writing code with bad, incomplete, flaky, or absent tests since automated testing was invented (possibly before).
It's having a good, useful and reliable test suite that separates the sheep from the goats.*
Would you rather play whack-a-mole with regressions and Heisenbugs, or ship features?
* (Or you use some absurdly good programming language that is hard to get into knots with. I've been liking Elixir. Gleam looks even better...)
It sounds like you’re saying that good tests are enough to ensure good code even when programmers are unskilled and just rewrite until they pass the tests. I’m very skeptical.
It may not be a provable take, but it’s also not absurd. This is the concept behind modern TDD (as seen in frameworks like cucumber):
Someone with product knowledge writes the tests in a DSL
Someone skilled writes the verbs to make the DSL function correctly
And from there, any amount of skill is irrelevant: either the tests pass, or they fail. One could hook up a markov chain to a javascript sourcebook and eventually get working code out.
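Not Cucumber itself, but a toy sketch of the split being described: the spec is plain text that someone with product knowledge could write, and the "verbs" are the only part that needs programming skill. Anything that makes every spec line pass counts as working, which is the premise being debated here.

    # Toy Gherkin-ish runner: a plain-text spec drives a small set of "verbs".
    SPEC = """
    given a cart with 2 items priced 10
    when a 50 percent discount is applied
    then the total is 10
    """

    class Cart:
        def __init__(self):
            self.prices = []
            self.discount = 0.0
        def total(self):
            return sum(self.prices) * (1 - self.discount)

    def run(spec):
        cart = Cart()
        for line in filter(None, (l.strip() for l in spec.splitlines())):
            words = line.split()
            if words[0] == "given":        # "given a cart with N items priced P"
                n, price = int(words[4]), float(words[7])
                cart.prices = [price] * n
            elif words[0] == "when":       # "when a X percent discount is applied"
                cart.discount = float(words[2]) / 100
            elif words[0] == "then":       # "then the total is T"
                expected = float(words[4])
                assert abs(cart.total() - expected) < 1e-9, f"FAIL: {line} (got {cart.total()})"
                print(f"PASS: {line}")

    run(SPEC)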
> One could hook up a markov chain to a javascript sourcebook and eventually get working code out.
Can they? Either the DSL is so detailed and specific as to be just code with extra steps, or there is a lot of ground not covered by the test cases, with landmines that a million monkeys with typewriters could unwittingly step on.
The bugs that exist while the tests pass are often the most brutal - hard to find and understand in the first place, and worse when they occasionally reveal that a fundamental assumption was wrong.
Tests are just for the bugs you already know about
They're also there to prevent future bugs.
So, it is here to stay. If you're unable to write good code with it, that doesn't mean everyone is writing bad code with it.
“The quip about 98% correct should be a red flag for anyone familiar with spreadsheets”
I disagree. Receiving a spreadsheet from a junior means I need to check it. If this gives me infinite additional juniors I’m good.
It's this popular pattern of HN comments - expecting AI to behave deterministically correctly - while the whole world operates on "stochastically correct" all the time…
In my experience the value of junior contributors is that they will one day become senior contributors. Their work as juniors tends to require so much oversight and coaching from seniors that they are a net negative on forward progress in the short term, but the payoff is huge in the long term.
I don't see how this can be true when no one stays at a single job long enough for this to play out. You would simply be training junior employees to become senior employees for someone else.
So this has been a problem in the tech market for a while now. Nobody wants to hire juniors for tech because even at FAANGs the average tenure is what, 2-3 years? There's no incentive for companies to spend the time, money, and productivity hit to train juniors properly. When the current cohort ages out, a serious problem is going to occur, and it won't be pretty.
Exactly this
And it should go without saying that LLMs do not have the same investment/value tradeoff. Whether or not they contribute like a senior or junior seems entirely up to luck
Prompt skill is too flaky and unreliable to ensure good output from LLMs
When my life was spreadsheets, we were expected to get to the point of being 99.99% right.
You went from “do it again” to “go check the newbie's work”.
To get to that stage your degree of proficiency would be “can make out which font is wrong at a glance.”
You wouldn’t be looking at the sheet, you would be running the model in your head.
That stopped being a stochastic function, with the error rate dropping significantly - to the point that making a mistake had consequences tacked on to it.
The act of trying to make that 2% appear "minimal, dismissible" seems at times like a mass psychosis in the AI world.
A few comparisons:
> Pressing the button: $1
> Knowing which button to press: $9,999

Those 2% copy-paste changes are the $9,999 and might take as long to find as the rest of the work.
Also: SCE to AUX.
I also find that validating data can be much faster than calculating data. It's like when you're in algebra class and you're told to "solve for X". Once you find the value for X you plug it into the equation to see if it fits, and it's 10x faster than solving for X originally.
Regardless of if AI generates the spreadsheet or if I generate the spreadsheet, I'm still going to do the same validation steps before I share it with anyone. I might have a 2% error rate on a first draft.
Of course, the Pareto principle is at work here. In an adjacent field, self-driving, they have been working on the last "20%" for almost a decade now. It feels kind of odd that almost no one is talking about self-driving now, compared to how hot of a topic it used to be, with a lot of deep, moral, almost philosophical discussions.
> The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
— Tom Cargill, Bell Labs
In my experience in enterprise software engineering, at this stage we are able to shrink the coding time by ~20%, depending on the kind of code/tests.
However, CI/CD remains tricky. In fact, when AI agents start building autonomously, merge trains become a necessity…
> It feels kind of odd that almost no one is talking about self-driving now, compared to how hot of a topic it used to be
Probably because it's just here now? More people take Waymo than Lyft each day in SF.
It's "here" if you live in a handful of cities around the world, and travel within specific areas in those cities.
Getting this tech deployed globally will take another decade or two, optimistically speaking.
Given how well it seems to be going in those specific areas, it seems like it's more of a regulatory issue than a technological one.
Ah, those pesky regulations that try to prevent road accidents...
If it's not a technological limitation, why aren't we seeing self-driving cars in countries with lax regulations? Mexico, Brazil, India, etc.
Tesla launched FSD in Mexico earlier this year, but you would think companies would be jumping at the opportunity to launch in markets with less regulation.
So this is largely a technological limitation. They have less driving data to train on, and the tech doesn't handle scenarios outside of the training dataset well.
Do we even know what % of Waymo rides in SF are completely autonomous? I would not be surprised if more of them are remotely piloted than they've let on...
Can you name any of the specific regulations that robotaxi companies are lobbying to get rid of? As long as robotaxis abide by the same rules of the road as humans do, what's the problem? Regulations like "you're not allowed to have robotaxis unless you pay me, your local robotaxi commissioner, $3/million/year" aren't going to be popular with the populace, but unfortunately for them, they don't vote. I'm sure we'll see holdouts, and multiple companies in multiple markets complaining about the local taxi cab regulatory commission, but there's just so much of the world without robotaxis right now (summer 2025) that I doubt it's anything more than the technology being brand spanking new.
Maybe, but it's also going to be a financial issue eventually too
My city had Car2Go for a couple of years, but it's gone now. They had to pull out of the region because it wasn't making them enough money
I expect Waymo and any other sort of vehicle ridesharing thing will have the same problem in many places
But it seems the reason for that is that this is a new, immature technology. Every new technology goes through that cycle until someone figures out how to make it financially profitable.
This is a big moving of the goalposts. The optimists were saying Level 5 would be purchasable everywhere by ~2018. They aren’t purchasable today, just hail-able. And there’s a lot of remote human intervention.
And San Francisco doesn’t get snow.
Hell - SF doesn't have motorcyclists or other vehicular traffic driving on the wrong side of the road.
Or cows sharing the thoroughfares.
It should be obvious to all HNers that have lived in or travelled to developing / global south regions - driving data is cultural data.
You may as well say that self driving will only happen in countries where the local norms and driving culture is suitable to the task.
A desperately anemic proposition compared to the science fiction ambition.
I’m quietly hoping I’m going to be proven wrong, but we’re better off building trains, than investing in level 5. It’s going to take a coordination architecture owned by a central government to overcome human behavior variance, and make full self driving a reality.
I'm in the Philippines now, and that's how I know this is the correct take. Especially this part:
"Driving data is cultural data."
The optimists underestimate a lot of things about self-driving cars.
The biggest one may be that in developing and global south regions, civil engineering, design, and planning are far, far away from being up to snuff to a level where Level 5 is even a slim possibility. Here on the island I'm on, the roads, storm water drainage (if it exists at all) and quality of the built environment in general is very poor.
Also, a lot of otherwise smart people think that the increment between Level 4 and Level 5 is like the increments between the other levels, when in fact the jump from Level 4 to Level 5 automation is the biggest one and the hardest to successfully accomplish.
Most people live within a couple hours of a city, though, and I think we'll see robotaxis on a majority of continents by 2035. The first couple of cities and continents will take the longest, but after that it's just a money question, and rich people have a lot of money. The question then is: is the taxi cab consortium, which still holds a lot of power despite Uber, large enough in each city in the world to prevent Waymo from getting a foothold, for every city in the world that Google has offices in?
Yeah where they have every inch of SF mapped, and then still have human interventions. We were promised no more human drivers like 5-7 years ago at this point.
Human interventions.
High speed connectivity and off vehicle processing for some tasks.
Density of locations to "idle" at.
There are a lot of things that make all these services work, and those things mean they can NOT scale.
These are all solvable but we have a compute problem that needs to be addressed before we get there, and I haven't seen any clues that there is anything in the pipeline to help out.
The typical Lyft vehicle is a piece of junk worth less than $20k, while the typical Waymo vehicle is a pretend luxury car with $$$ of equipment tacked on.
Waymo needs to be providing 5-10x the number of daily rides as Lyft before we get excited
Well, if we say these systems are here, it still took 10+ years between prototype and operational system.
And as I understand it: these are systems, not individual cars that are intelligent and just decide how to drive from immediate input. These systems still require some number of human wranglers and worst-case drivers, and there's a lot of specific-purpose code rather than nothing-but-neural-network, etc.
Which is to say, "AI"/neural nets are an important technology that can achieve things, but they can give the illusion of doing everything instantly by magic, and they generally don't do that.
It’s past the hype curve and into the trough of disillusionment. Over the next 5,10,15 years (who can say?) the tech will mature out of the trough into general adoption.
GenAI is the exciting new tech currently riding the initial hype spike. This will die down into the trough of disillusionment as well, probably sometime next year. Like self-driving, people will continue to innovate in the space and the tech will be developed towards general adoption.
We saw the same during crypto hype, though that could be construed as more of a snake oil type event.
The Gartner hype cycle assumes a single fundamental technical breakthrough, and describes the process of the market figuring out what it is and isn't good for. This isn't straightforwardly applicable to LLMs because the question of what they're good for is a moving target; the foundation models are actually getting more capable every few months, which wasn't true of cryptocurrency or self-driving cars. At least some people who overestimate what current LLMs can do won't have the chance to find out that they're wrong, because by the time they would have reached the trough of disillusionment, LLM capabilities will have caught up to their expectations.
If and when LLM scaling stalls out, then you'd expect a Gartner hype cycle to occur from there (because people won't realize right away that there won't be further capability gains), but that hasn't happened yet (or if it has, it's too recent to be visible yet) and I see no reason to be confident that it will happen at any particular time in the medium term.
If scaling doesn't stall out soon, then I honestly have no idea what to expect the visibility curve to look like. Is there any historical precedent for a technology's scope of potential applications expanding this much this fast?
> If scaling doesn't stall out soon, then I honestly have no idea what to expect the visibility curve to look like. Is there any historical precedent for a technology's scope of potential applications expanding this much this fast?
Lots of pre-internet technologies went through this curve. PCs during the clock speed race, aircraft before that during the aeronautics surge of the 50s, cars when Detroit was in its heyday. In fact, cloud computing was enabled by the breakthroughs in PCs which allowed commodity computing to be architected in a way to compete with mainframes and servers of the era. Even the original industrial revolution was actually a 200-year-ish period where mechanization became better and better understood.
Personally I've always been a bit confused about the Gartner Hype Cycle and its usage by pundits in online comments. As you say it applies to point changes in technology but many technological revolutions have created academic, social, and economic conditions that lead to a flywheel of innovation up until some point on an envisioned sigmoid curve where the innovation flattens out. I've never understood how the hype cycle fits into that and why it's invoked so much in online discussions. I wonder if folks who have business school exposure can answer this question better.
> If scaling doesn't stall out soon, then I honestly have no idea what to expect the visibility curve to look like.
We are seeing diminishing returns on scaling already. LLMs released this year have been marginal improvements over their predecessors. Graphs on benchmarks[1] are hitting an asymptote.
The improvements we are seeing are related to engineering and value-added services. This is why "agents" are the latest buzzword most marketing is clinging to. This is expected, and good, in a sense. The tech is starting to deliver actual value as it's maturing.
I reckon AI companies can still squeeze out a few years of good engineering around the current generation of tools. The question is what happens if there are no ML breakthroughs in that time. The industry desperately needs them for the promise of ASI, AI 2027, and the rest of the hyped predictions to become reality. Otherwise it will be a rough time when the bubble actually bursts.
The problem with the approach of LLMs and all other modern statistical, large-data-driven solutions is that it tries to collapse the entire problem space of general problem solving into a combinatorial search over permutations of previously solved problems. Yes, this approach works well for many problems, as we can see from the results, given the huge amounts of data and processing utilized.
One implicit assumption is that all problems can be solved with some permutations of existing solutions. The other assumption is the approach can find those permutations and can do so efficiently.
Essentially, the true-believers want you to think that rearranging some bits in their cloud will find all the answers to the universe. I am sure Socrates would not find that a good place to stop the investigation.
Right. I do think that just the capability to find and generate interesting patterns from existing data can be very valuable. It has many applications in many fields, and can genuinely be transformative for society.
But, yeah, the question is whether that approach can be defined as intelligence, and whether it can be applicable to all problems and tasks. I'm highly skeptical of this, but it will be interesting to see how it plays out.
I'm more concerned about the problems and dangers of this tech today, than whatever some entrepreneurs are promising for the future.
> We are seeing diminishing returns on scaling already. LLMs released this year have been marginal improvements over their predecessors. Graphs on benchmarks[1] are hitting an asymptote.
This isn't just a software problem. If you go look at the hardware side you see that same flat line (IPC is flat generation over generation). There are also power and heat problems that are going to require some rather exotic and creative solutions if companies are looking to hardware for gains.
The Gartner hype cycle is complete nonsense, it's just a completely fabricated way to view the world that helps sell Gartner's research products. It may, at times, make "intuitive sense", but so does astrology.
The hype cycle has no mathematical basis whatsoever. It's a marketing gimmick. Its only value in my life has been to quickly identify people who don't really understand models or larger trends in technology.
I continue to be, but on introspection probably shouldn't be, surprised that people on HN treat it as some kind of gospel. The only people who should respect it are others in the research marketing space, as it's a perfect example of how to dupe people into paying for your "insights".
Could you please expand on your point about expanding scopes? I am waiting earnestly for all the cheaper services that these expansions promise. You know, cheaper white-collar services like accounting, tax, healthcare, etc. The last reports saw accelerating service inflation. Someone is lying. Please tell me who.
Hence why I said potential applications. Each new generation of models is capable, according to evaluations, of doing things that previous models couldn't that prima facie have potential commercial applications (e.g., because they are similar to things that humans get paid to do today). Not all of them will necessarily work out commercially at that capability level; that's what the Gartner hype cycle is about. But because LLM capabilities are a moving target, it's hard to tell the difference between things that aren't commercialized yet because the foundation models can't handle all the requirements, vs. because commercializing things takes time (and the most knowledgeable AI researchers aren't working on it because they're too busy training the next generation of foundation models).
It sounds like people should just ignore those pesky ROI questions. In the long run, we are all dead so let’s just invest now and worry about the actual low level details of delivering on the economy-wide efficiency later.
As capital allocators, we can just keep threatening the worker class with replacing their jobs with LLMs to keep the wages low and have some fun playing monopoly in the meantime. Also, we get to hire these super smart AI researcher people (aka the smartest and most valuable minds in the world) and hold the greatest trophies. We win. End of story.
It's saving healthcare costs for those who solved their problem and never go in, which would not be reflected in service inflation numbers.
Back in my youthful days, educated and informed people chastised using the internet to self-diagnose and self-treat. I completely missed the memo on when it became a good idea to do so with LLMs.
Which model should I ask about this vague pain I have been having in my left hip? Will my insurance cover the model service subscription? Also, my inner thigh skin looks a bit bruised. Not sure what’s going on? Does the chat interface allow me to upload a picture of it? It won’t train on my photos right?
> or if it has, it's too recent to be visible yet
It's very visible.
Silicon Valley, and VC money has a proven formula. Bet on founders and their ideas, deliver them and get rich. Everyone knows the game, we all get it.
That's how things were going till recently. Then FB came in and threw money at people and they all jumped ship. Google did the same. These are two companies famous for throwing money at things (Oculus, metaverse, G+, quantum computing) and then properly face-planting with them.
Do you really think that any of these people believe deep down that they are going to have some big breakthrough? Or do you think they all see the writing on the wall and are taking the payday where they can get it?
Liquidity in search of the biggest holes in the ground. Whoever can dig the biggest holes wins. Why or what you get out of digging the holes? Who cares.
The critics of the current AI buzz certainly have been drawing comparisons to self driving cars as LLMs inch along with their logarithmic curve of improvement that's been clear since the GPT-2 days.
Whenever someone tells me how these models are going to make white collar professions obsolete in five years, I remind them that the people making these predictions 1) said we'd have self driving cars "in a few years" back in 2015 and 2) the predictions about white collar professions started in 2022 so five years from when?
> said we'd have self driving cars "in a few years" back in 2015
And they wouldn't have been too far off! Waymo became L4 self-driving in 2021, and has been transporting people in the SF Bay Area without human supervision ever since. There are still barriers — cost, policies, trust — but the technology certainly is here.
People were saying we would all be getting in our cars and taking a nap on our morning commute. We are clearly still a pretty long ways off from self-driving being as ubiquitous as it was claimed it would be.
There are always extremists with absurd timelines on any topic! (Didn't people think we'd be on Mars in 2020?) But this one? In the right cities, plenty of people take a Waymo morning commute every day. I'd say self-driving cars have been pretty successful at meeting people's expectations — or maybe you and I are thinking of different people.
Reminds me of electricity entering the market and the first DC power stations setup in New York to power a few buildings. It would have been impossible to replicate that model for everyone. AC solved the distance issue.
That's where we are at with self driving. It can only operate in one small area, you can't own one.
We're not even close to where we are with 3d printers today or the microwave in the 50s.
I think people don't realize how much models have to extrapolate still, which causes hallucinations. We are still not great at giving all the context in our brain to LLMs.
There's still a lot of tooling to be built before it can start completely replacing anyone.
- [deleted]
How profound. No one has ever posted that exact same thought before on here. Thank you.
Okay, but the experts saying self driving cars were 50 years out in 2015 were wrong too. Lots of people were there for those speeches, and yet, even the most cynical take on Waymo, Cruise and Zoox’s limitations would concede that the vehicles are autonomous most of the time in a technologically important way.
There’s more to this than “predictions are hard.” There are very powerful incentives to eliminate driving and bloated administrative workforces. This is why we don’t have flying cars: lack of demand. But for “not driving?” Nobody wants to drive!
This is the exact same issue that I've had trying to use LLMs for anything that needs to be precise such as multi-step data pipelines. The code it produces will look correct and produce a result that seems correct. But when you do quality checks on the end data, you'll notice that things are not adding up.
So then you have to dig into all this overly verbose code to identify the 3-4 subtle flaws with how it transformed/joined the data. And these flaws take as much time to identify and correct as just writing the whole pipeline yourself.
I'll get into hot water with this, but I still think LLMs do not think like humans do - as in, the code is not the result of trying to recreate a correct thought process in a programming language, but some sort of statistically-most-likely string that matches the input requirements.
I used to have a non-technical manager like this - he'd watch out for the words I (and other engineers) said and in what context, and would repeat them back mostly in accurate word contexts. He sounded remarkably like he knew what he was talking about, but would occasionally make a baffling mistake - like mixing up CDN and CSS.
LLMs are like this: I often see Cursor with Claude making the same kind of strange mistake, only to catch itself in the act and fix the code (but what happens when it doesn't?).
I think that if people say LLMs can never be made to think, that is bordering on a religious belief - it'd require humans to exceed the Turing computable (note also that saying they never can is very different from believing current architectures never will - it's entirely reasonable to believe it will take architectural advances to make it practically feasible).
But saying they aren't thinking yet or like humans is entirely uncontroversial.
Even most maximalists would agree at least with the latter, and the former largely depends on definitions.
As someone who uses Claude extensively, I think of it almost as a slightly dumb alien intelligence - it can speak like a human adult, but makes mistakes a human adult generally wouldn't, and that combination breaks the heuristics we use to judge competency, and often leads people to overestimate these models.
Claude writes about half of my code now, so I'm overall bullish on LLMs, but it saves me less than half of my time.
The savings improve as I learn how to better judge what it is competent at, and where it merely sounds competent and needs serious guardrails and oversight, but there's certainly a long way to go before it'd make sense to argue they think like humans.
Everyone has this impression that our internal monologue is what our brain is doing. It's not. We have all sorts of individual components that exist totally outside the realm of "token generation". E.g. the amygdala does its own thing in handling emotions/fear/survival, fires in response to anything that triggers emotion. We can modulate that with our conscious brain, but not directly - we have to basically hack the amygdala by thinking thoughts that deal with the response (don't worry about the exam, you've studied for it already)
LLMs don't have anything like that. Part of why they aren't great at some aspects of human behaviour. E.g. coding, choosing an appropriate level of abstraction - no fear of things becoming unmaintainable. Their approach is weird when doing agentic coding because they don't feel the fear of having to start over.
Emotions are important.
I don't think you'll get into hot water for that. Anthropomorphizing LLMs is an easy way to describe and think about them, but anyone serious about using LLMs for productivity is aware they don't actually think like people, and run into exactly the sort of things you're describing.
I just wrote a post on my site where the LLM had trouble with 1) clicking a button, 2) taking a screenshot, 3) repeat. The non-deterministic nature of LLMs is both a feature and a bug. That said, read/correct can sometimes be a preferable workflow to create/debug, especially if you don't know where to start with creating.
I think it's basically equivalent to giving that prompt to a low paid contractor coder and hoping their solution works out. At least the turnaround time is faster?
But normally you would want a more hands-on back and forth to ensure the requirements actually capture everything, validation that the results are good, layers of review, etc., right?
It seems to be a mix between hiring an offshore/low level contractor and playing a slot machine. And by that I mean at least with the contractor you can pretty quickly understand their limitations and see a pattern in the mistakes they make. While an LLM is obviously faster, the mistakes are seemingly random so you have to examine the result much more than you would with a contractor (if you are working on something that needs to be exact).
the slot machine is apt. insert tokens, pull lever, ALMOST get a reward. Think: I can start over, manually, or pull the lever again. Maybe I'll get a prize if I pull it again...
and of course, you pay whether the slot machine gives a prize or not. Between the slot machine psychological effect and sunk cost fallacy I have a very hard time believing the anecdotes -- and my own experiences -- with paid LLMs.
Often I say, I'd be way more willing to use and trust and pay for these things if I got my money back for output that is false.
If the contractor is producing unusable code, they won't be my contractor anymore.
In my experience using small steps and a lot of automated tests work very well with CC. Don’t go for these huge prompts that have a complete feature in it.
Remember the title “attention is all you need”? Well you need to pay a lot of attention to CC during these small steps and have a solid mental model of what it is building.
Yeah but once you break things down into small enough steps you might as well just code it yourself.
"It feels like either finding that 2% that's off (or dealing with 2% error) will be the time consuming part in a lot of cases."
This is the part you have wrong. People just won't do that. They'll save the 8 hours and just deal with 2% error in their work (which reduces as AI models get better). This doesn't work with something with a low error tolerance, but most people aren't building the next Golden Gate Bridge. They'll just fix any problems as they crop up.
Some of you will be screaming right now "THAT'S NOT WORTH IT", as if companies don't already do this to consumers constantly, like losing your luggage at the airport or getting your order wrong. Or just selling you something defective, all of that happens >2% of the time, because companies know customers will just deal-with-it.
- [deleted]
I think the question then is what the human error rate is... We know we're not perfect... So if you're 100% rested and only have to find the edge-case bug, maybe you'll usually find it, versus being burned out from getting it 98% of the way there and failing to see the 2%-of-the-time bugs... The wording here is tricky, but I think what we'll find is that this helps us get that much closer... Of course, when you spend your time building out 98% of the thing yourself, you sometimes have a deeper understanding of it, so finding the 2% edge case is easier/faster, but only time will tell.
The problem with this spreadsheet task is that you don't know whether you got only 2% wrong (just rounded some numbers) or way more (e.g. did it get confused and mistook a 2023 PDF with one from 1993?), and checking things yourself is still quite tedious unless there's good support for this in the tool.
At least with humans you have things like reputation (has this person been reliable) or if you did things yourself, you have some good idea of how diligent you've been.
Would be insane to expect an AI to just match us, right… Nooooo, if it pertains to computers/automation/AI, it needs to be beyond perfect.
Right? Why are we giving grace to a damn computer as if it's human? How are people defending this? If it's a computer, I don't care how intelligent it is. 98% right is actually unacceptable.
Distinguishing whether a problem is 0.02^n for error or 0.98^n for accuracy is emerging as an important skill.
Might explain why some people grind up a billion tokens trying to make code work only to have it get worse while others pick apart the bits of truth and quickly fill in their blind spots. The skillsets separating wheat from chaff are things like honest appreciation for corroboration, differentiating subjective from objective problems, and recognizing truth-preserving relationships. If you can find the 0.02^n sub-problems, you can grind them down with AI and they will rapidly converge, leaving the 0.98^n problems to focus human touch on.
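A quick way to see the difference between the two regimes (independent trials assumed): in a 0.02^n sub-problem every extra check multiplies the residual error away, while in a 0.98^n problem every extra chained step eats into the odds that the whole thing is right.

    # 0.02^n: chance that n independent 2%-error checks are ALL wrong (corroboration)
    # 0.98^n: chance that n chained 98%-reliable steps are ALL right (long pipelines)
    for n in (1, 3, 5, 10, 20, 50):
        print(f"n={n:2d}   0.02^n = {0.02**n:.2e}   0.98^n = {0.98**n:.2f}")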
I've worked at places that are run on spreadsheets. You'd be amazed at how often they're wrong IME
There is a literature on this.
The usual estimate you see is that about 2-5% of spreadsheets used for running a business contain errors.
It takes my boss seven hours to create that spreadsheet, and another eight to render a graph.
Exciting stuff
> "I think it got 98% of the information correct... I just needed to copy / paste a few things. If it can do 90 - 95% of the time consuming work, that will save you a ton of time"
"Hello, yes, I would like to pollute my entire data store" is an insane a sales pitch. Start backing up your data lakes on physical media, there is going to be an outrageous market for low-background data in the future.
semi-related: How many people are going to get killed because of this?
The bigger takeaway here is: will his boss allow him to walk his dog, or will he see available downtime and try to fill it with more work?
More work, without a doubt - any productivity gain immediately becomes the new normal. But now with an additional "2%" error rate compounded on all the tasks you're expected to do in parallel.
95% of people doing his job will lose theirs. 1 person will figure out the 2% that requires a human in the loop.
I do this kind of job and there is no way I am doing this job in 5-10 years.
I don't even think it is my company that is going to adapt to let me go but it is going to be an AI first competitor that puts the company I work for out of business completely.
There are all these massively inefficient dinosaur companies in the economy that are running digitized versions of paper shuffling and a huge number of white collar bullshit jobs built on top of digitized paper shuffling.
Wage inflation has been eating away at the bottom line on all these businesses since Covid and we are going to have a dinosaur company mass extinction event in the next recession.
IMO the category error being made is thinking that LLMs are going to agentically do the digitized paper shuffling and put digitized paper shufflers out of work. That is not the problem for my job. The issue is agents, built from the ground up, making the very concept of digitized paper shuffling null and void. A relic of the past that can't compete in the economy.
I don't know why everyone is so confident that jobs will be lost. When we invented power tools did we fire everyone that builds stuff, or did we just build more stuff?
if you replace "power tools" with industrial automation it's easy to cherry pick extremes from either side. Manufacturing? a lot of jobs displaced, maybe not lost.
It compounds too:
At a certain point, relentlessly checking whether the model has got everything right is, in turn, more effort than… doing it yourself.
Moreover, is it actually a 4-8 hour job? Or is the person not using the right tool, is the better tool a sql query?
Half these “wow ai” examples feel like “oh my plates are dirty, better just buy more”.
People say this, but in my experience it’s not true.
1) The cognitive burden is much lower when the AI can correctly do 90% of the work. Yes, the remaining 10% still takes effort, but your mind has more space for it.
2) For experts who have a clear mental model of the task requirements, it’s generally less effort to fix an almost-correct solution than to invent the entire thing from scratch. The “starting cost” in mental energy to go from a blank page/empty spreadsheet to something useful is significant. (I limit this to experts because I do think you have to have a strong mental framework you can immediately slot the AI output into, in order to be able to quickly spot errors.)
3) Even when the LLM gets it totally wrong, I’ve actually had experiences where a clearly flawed output was still a useful starting point, especially when I’m tired or busy. It nerd-snipes my brain from “I need another cup of coffee before I can even begin thinking about this” to “no you idiot, that’s not how it should be done at all, do this instead…”
>The cognitive burden is much lower when the AI can correctly do 90% of the work. Yes, the remaining 10% still takes effort, but your mind has more space for it.
I think their point is that 10%, 1%, whatever %, the type of problem is a huge headache. In something like a complicated spreadsheet it can quickly become hours of looking for needles in the haystack, a search that wouldn't be necessary if AI didn't get it almost right. In fact it's almost better if it just gets some big chunk wholesale wrong - at least you can quickly identify the issue and do that part yourself, which you would have had to in the first place anyway.
Getting something almost right, no matter how close, can often be worse than not doing it at all. Undoing/correcting mistakes can be more costly as well as labor intensive. "Measure twice cut once" and all that.
I think of how in video production (edits specifically) I can get you often 90% of the way there in about half the time it takes to get it 100%. Those last bits can be exponentially more time consuming (such as an intense color grade or audio repair). The thing is with a spreadsheet like that, you can't accept a B+ or A-. If something is broken, the whole thing is broken. It needs to work more or less 100%. Closing that gap can be a huge process.
I'll stop now as I can tell I'm running a bit in circles lol
I understand the idea. My position is that this is a largely speculative claim from people who have not spent much time seriously applying agents for spreadsheet or video editing work (since those agents didn’t even exist until now).
“Getting something almost right, no matter how close, can often be worse than not doing it at all” - true with human employees and with low quality agents, but not necessarily true with expert humans using high quality agents. The cost to throw a job at an agent and see what happens is so small that in actual practice, the experience is very different and most people don’t realize this yet.
In the context of a budget that's really funny too. If you make an 18 trillion dollar error just once, no big deal, just one error, right?
By that definition, the ChatGPT app is now an AI agent. When you use ChatGPT nowadays, you can select different models and complement these models with tools like web search and image creation. It’s no longer a simple text-in / text-out interface. It looks like it is still that, but deep down, it is something new: it is agentic… https://medium.com/thoughts-on-machine-learning/building-ai-...
I think this is my favorite part of the LLM hype train: the butterfly effect of dependence on an undependable stochastic system propagates errors up the chain until the whole system is worthless.
"I think it got 98% of the information correct..." how do you know how much is correct without doing the whole thing properly yourself?
The two options are:
- Do the whole thing yourself to validate
- Skim 40% of it, 'seems right to me', accept the slop and send it off to the next sucker to plug into his agent.
I think the funny part is that humans are not exempt from similar mistakes, but a human making those mistakes again and again would get fired. Meanwhile an agent that you accept to get only 98% of things right is meeting expectations.
This depends on the type of work being done. Sometimes the cost of verification is much lower than the cost of doing the work, sometimes it's about the same, and sometimes it's much more. Here's some recent discussion [0]
[0] https://www.jasonwei.net/blog/asymmetry-of-verification-and-...
> I think the funny part is that humans are not exempt from similar mistakes, but a human making those mistakes again and again would get fired. Meanwhile an agent that you accept to get only 98% of things right is meeting expectations.
My rule is that if you submit code/whatever and it has problems you are responsible for them no matter how you "wrote" it. Put another way "The LLM made a mistake" is not a valid excuse nor is "That's what the LLM spit out" a valid response to "why did you write this code this way?".
LLMs are tools, tools used by humans. The human kicking off an agent, or rather submitting the final work, is still on the hook for what they submit.
"a human making those mistakes again and again would get fired"
You must be really desperate for anti-AI arguments if this is the one you're going with. Employees make mistakes all day every day and they don't get fired. Companies don't give a shit as long as the cost of the mistakes is less than the cost of hiring someone new.
> Meanwhile an agent that you accept to get only 98% of things right is meeting expectations.
Well yeah, because the agent is so much cheaper and faster than a human that you can eat the cost of the mistakes and everything that comes with them and still come out way ahead. No, of course that doesn't work in aircraft manufacturing or medicine or coding or many other scenarios that get tossed around on HN, but it does work in a lot of others.
Definitely would work in coding. Most software companies can only dream of a 2% defect rate. Reality is probably closer to 98%, which is why we have so much organisational overhead around finding and fixing human error in software.
I wonder if you can establish some kind of confidence interval by passing data through a model x number of times. I guess it mostly depends on subjective/objective correctness, as well as correctness within a certain context that you may not know whether the model is aware of. Either way, it sounds like more corporate drudgery.
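A sketch of that idea, assuming repeated runs are roughly independent: run the extraction k times per cell and flag the cells where the runs disagree too much. The extract function here just simulates a noisy model call; it's a stand-in, not a real API.

    import random
    from collections import Counter

    def extract(document, cell):
        """Stand-in for a model call that pulls one value out of a document.
        Simulates an extractor that misreads a cell 10% of the time."""
        truth = {"total_revenue": "1,200,000", "headcount": "48"}
        return truth[cell] if random.random() < 0.9 else "???"

    def flag_low_agreement(document, cells, k=5, threshold=0.8):
        """Run the extraction k times per cell; flag cells whose most common
        answer shows up in fewer than `threshold` of the runs."""
        flagged = {}
        for cell in cells:
            answers = Counter(extract(document, cell) for _ in range(k))
            _, count = answers.most_common(1)[0]
            if count / k < threshold:
                flagged[cell] = dict(answers)
        return flagged

    print(flag_low_agreement("report.pdf", ["total_revenue", "headcount"]))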
> how do you know how much is correct
Because it's a budget. Verifying them is _much_ cheaper than finding all the entries in a giant PDF in the first place.
> the butterfly effect of dependence on an undependable stochastic system
We're using stochastic systems for a long time. We know just fine how to deal with them.
> Meanwhile an agent that you accept to get only 98% of things right is meeting expectations.
There are very few tasks humans complete at a 98% success rate either. If you think "build spreadsheet from PDF" comes anywhere close to that, you've never done that task. We're barely able to recognize objects in their default orientation at a 98% success rate. (And in many cases, deep networks outperform humans at object recognition)
The task of engineering has always been to manage error rates and risk, not to achieve perfection. "butterfly effect" is a cheap rhetorical distraction, not a criticism.
There are in fact lots of tasks people complete immediately at a 99.99% success rate on the first iteration, or 99.999% after self- and peer-checking of the work.
Perhaps importantly, checking is a continual process: errors are identified as they are made and corrected while still in context, instead of being identified later by someone completely devoid of any context, a task humans are notably bad at.
Lastly, it's important to note the difference between an overarching task comprising many subtasks and the subtasks themselves.
Something which fails at each subtask 2% of the time has a miserable 18% failure rate on an overarching task comprising 10 subtasks; by 20 subtasks it has failed on 1 in 3 attempts. Worse, a failing human knows they don't know the answer, while the failing AI produces not only wrong answers but convincing lies.
Failure to distinguish between human failure and AI failure in nature or degree of errors is a failure of analysis.
> There are in fact lots of tasks people complete immediately at 99.99% success rate at first iteration or 99.999% after self and peer checking work
This is so absurd that I wonder if you're trolling. Humans don't even have a 99.99% success rate in breathing, let alone any cognitive tasks.
> Humans don't even have a 99.99% success rate in breathing
Will you please elaborate a little on this?
Humans cough or otherwise have to clear their airways about 1 in every 1,000 breaths, which is a 99.9% success rate.
That’s quite good given the complexity and fragility of the system and the chaotic nature of the environment.
I have a friend who's vibe-coding apps. He has a lot of them, like 15 or more, but most are only 60–90% complete (almost every feature is only 60-90% complete), which means almost nothing works properly. Last time he showed me something, it was sending the Supabase API key in the frontend with write permissions, so I could edit anything on his site just by inspecting the network tab in developer tools. The amount of technical debt and security issues building up over the coming years is going to be massive.
How well does the average employee do it? The baseline is not what you would do yourself, but what it would take to task someone else with doing it.
98% correct spreadsheets are going to get so many papers retracted.
2% wrong is $40,000 on a $2m budget.
Great point. Plus, working on your laptop on a couch is not ideal for deep excel work
Yes - and that is especially true for high-stakes processes in organizations. For example, accounting, HR benefits, taxation needs to be exactly right.
Yes. Any success I have had with LLMs has been by micromanaging them. Lots of very simple instructions, look at the results, correct them if necessary, then next step.
Honestly, though, there are far more use cases where 98% correct is equivalent to perfect than situations that require absolute correctness, both in business and for personal use.
Lol the music and presentation made it sound like that guy was going to talk about something deep and emotional not spreadsheets and expense reports.
- [deleted]
Totally agree.
Also, do you really understand what the numbers in that spreadsheet mean if you have not been participating in pulling them together?
> It feels like either finding that 2% that's off (or dealing with 2% error) will be the time consuming part in a lot of cases.
The last '2%' (and in some benchmarks 20%) could cost as much as $100B+ more to make it perfect consistently without error.
This requirement does not apply to generating art. But for agentic tasks, an error rate of 20% at worst or 2% at best may still be unacceptable.
As you said, if the agent makes an error in any of the steps in an agentic flow or task, the entire result would be incorrect and you would need to check over the entire work again to spot it.
Most will just throw it away and start over; wasting more tokens, money and time.
And no, it is not "AGI" either.
- [deleted]
It now will take him 4-8 hours plus a $200 monthly bill, a win-win for everybody.
I see it as a good reason why people aren’t going to lose their jobs that much.
It just makes people quite a bit faster at what they're already doing.
- [deleted]
I am looking forward to learning why this is entirely unlike working with humans, who in my experience commit very silly and unpredictable errors all the time (in addition to predictable ones), but additionally are often proud and anxious and happy to deliberately obfuscate their errors.
You can point out the errors to people, which will lead to fewer issues over time as they gain experience. The models, however, don't do that.
I think there is a lot of confusion on this topic. Humans as employees have the same basic problem: You have to train them, and at some point they quit, and then all that experience is gone. Only: The teaching takes much longer. The retention, relative to the time it takes to teach, is probably not great (admittedly I have not done the math).
A model forgets "quicker" (in human time), but can also be taught on the spot, simply by pushing necessary stuff into the ever increasing context (see claude code and multiple claude.md on how that works at any level). Experience gaining is simply not necessary, because it can infer on the spot, given you provide enough context.
In both cases having good information/context is key. But here the difference is of course, that an AI is engineered to be competent and helpful as a worker, and will be consistently great and willing to ingest all of that, and a human will be a human and bring their individual human stuff and will not be very keen to tell you about all of their insecurities.
but the person doing the job changes every month or two.
there's no persistent experience being built, and each newcomer to the job screws it up in their own unique way
The models do do that, just at the next iteration of the model. And everyone gains from everyone's mistakes.
I call it a monkey's paw for this exact reason.