> There is a lot of debate whether AI will surpass humans in all economically viable skills (AGI, by one definition).
Actually very little debate. We get a lot of unsubstantiated hype from companies like OpenAI, Anthropic, Google, Microsoft. So-called AI has barely made a dent in economic activities, and no company makes money from it. Tech journalism repeatedly fails to question the PR narrative (read Ed Zitron).
> Regardless of whether this will happen, or when, many people already have lost their jobs in part due to the emerging capabilities of AI models…
Consider the more likely explanation: many companies over-hired a few years ago and have since cut jobs. Focus on stock price in an uncertain economy leads to layoffs, and it's easier to blame AI than to admit C-suite incompetence. Fear of the AI boogeyman gives employers the upper hand in hiring and salary negotiations, and keeps employees in line.
People lost their jobs because of tax changes.
Yes. When the dust settles and the data gets crunched I expect tax code changes and a glut of money looking for returns will explain the layoffs much better than AI doing anything useful.
And the tax-code explanation makes sense, because it was a Trump tax, and a lot of these companies support him, so they have to toe the line.
It couldn't be that people lost jobs because of the policies they voted for.
> Actually very little debate
Isn’t the debate about timeline?
Do you really believe there is a 0% chance that AI will surpass humans in all economically viable skills in the next, say, 100,000 years? If not, what confidence do you have in it?
I didn’t make any claims about the long-term prospects of AI, not sure where you got that.
100,000 years — 20 times longer than recorded history — presents a ridiculous horizon to predict anything. Whatever happens on that time scale will have nothing to do with anyone now alive, or tech companies laying people off today.
Based on the current state of AI I don’t think it replaces much economically viable human activity. Over time our definition of “economically viable” will change, as it has for the last 100,000 years.
> I didn’t make any claims about the long-term prospects of AI
Actually you did. Someone else said that “There is a lot of debate whether AI will surpass humans in all economically viable skills” and you said, “Actually very little debate.”
The absence of a timeline means that either you are talking about a very long timeline or you enjoy being vague so that you can feel like your predictions turn out to be correct. Which is it?
> Based on the current state of AI I don’t think it replaces much economically viable human activity
How much is not much? 5% or 22%?
> Over time our definition of “economically viable” will change, as it has for the last 100,000 years
What was our definition of "economically viable" 100,000 years ago? What is it today?
>So-called AI has barely made a dent in economic activities, and no company makes money from it.
Tell that to NVidia!
I have "programmed" Copilot to do some Q&A work. For my org the team should scale to 50 to make it effective. My idea and implementation will keep us to ~20 with the capability of dropping it to 5 (organically). I don't think anyone will double my salary though, so there is your real life impact. Now take that and apply it to at least "10 areas" in every org above 10k headcount. It won't change life on earth, but considering that this has just began...
Oh, I just meant the post I was replying to said
>and no company makes money from it.
and I was pointing out NVidia are making very good money, in a "selling shovels during a gold rush" sense.
> Actually very little debate. We get a lot of unsubstantiated hype from companies like OpenAI, Anthropic, Google, Microsoft
Would you really consider the Nobel laureates Geoffrey Hinton¹, Demis Hassabis² and Barack Obama³ not worth listening to on this matter? Demis is the only one with ulterior motives to hype it up, but compared to normal tech CEOs he certainly has quite a bit of proven impact (Alphafold, AlphaZero) to be worth listening to.
> AI has barely made a dent in economic activities
AI companies' revenues are growing rapidly, reaching the tens of billions. The claim that it's just a scapegoat for inevitable layoffs seems fanciful when there are many real-life cases of AI tools performing equivalent person-hours work in white-collar domains.
https://www.businessinsider.com/how-lawyer-used-ai-help-win-...
To claim it is impossible that AI could be at least a partial cause of layoffs requires an unshakable belief that AI tools could not even be labor-multiplying (as in allowing one person to perform more work at the same level of quality than they would otherwise). To assume that this has never happened by this point in 2025 requires a heavy amount of denial.
That being said, I could cite dozens of articles, numerous takes from leading experts, scientists, legitimate sources without conflicts of interest, and I'm certain a fair portion of the HN regulars would not be swayed one inch. Lively debate is the lifeblood of any domain that prides itself on intellectual rigor, but a lot of the dismissal of the actual utility of AI, the impending impacts, and its implications feels like reflexive coping.
I would really really love to hear an argument that would convince me that AGI is impossible, or far away, or that all the utility I get from Claude, o3, or Gemini is just a trick of scale and memorization, entirely orthogonal to anything akin to general human-like intelligence. However, I have not heard a good argument. The replies I get are largely ad hominems toward tech CEOs, dismissive characterizations of the tech industry at large, and thought-terminating quips that hold no ontological weight.
1: https://www.wired.com/story/plaintext-geoffrey-hinton-godfat...
2: https://www.axios.com/2025/05/21/google-sergey-brin-demis-ha...
3: https://www.youtube.com/watch?v=72bHop6AIcc
4: https://www.cio.com/article/4012162/ai-begins-to-reshape-the...
> Would you really consider the Nobel laureates Geoffrey Hinton¹, Demis Hassabis² and Barack Obama³ not worth listening to on this matter?
Obama, no. Geoff Hinton has his opinions and I’ve listened to them. For every smart person who believes in AI and AGI happening soon you can find other smart people who argue the other way.
> AI companies' revenues are growing rapidly, reaching the tens of billions.
Trading stock and Azure credits don’t equal revenue. OpenAI, the leader in the AI industry, loses billions every quarter. Microsoft and Google and Meta subsidize their work from other profitable activities. The profit isn’t there.
> The claim that it's just a scapegoat for inevitable layoffs seems fanciful when there are many real-life cases of AI tools performing equivalent person-hours work in white-collar domains. https://www.businessinsider.com/how-lawyer-used-ai-help-win-...
A few questionable anecdotes? Given the years since ChatGPT and the billions invested I’d expect more tectonic changes than “It wrote my term paper.” Companies have not replaced employees with AI doing the same job at any scale. You simply can’t find honest examples of that except for call centers that got automated and offshored decades ago.
> To claim it is impossible that AI could be at least a partial cause of layoffs requires an unshakable belief that AI tools could not even be labor-multiplying (as in allowing one person to perform more work at the same level of quality than they would otherwise). To assume that this has never happened by this point in 2025 requires a heavy amount of denial.
I might agree, but I didn’t make that claim. AI tools probably can add value as tools. If you find a real example of AI taking over a professional job at scale let us know.
> That being said, I could cite dozens of articles, numerous takes from leading experts, scientists, legitimate sources without conflicts of interest, and I'm certain a fair portion of the HN regulars would not be swayed one inch. Lively debate is the lifeblood of any domain that prides itself on intellectual rigor, but a lot of the dismissal of the actual utility of AI, the impending impacts, and its implications feels like reflexive coping.
We could have better discussions if the AI industry wasn’t led by chronic liars and frauds, people making ridiculous self-serving predictions not backed by anything that resembles science. AI gets literally shoved down our throats with no demonstrated or measurable benefit, poor accuracy, severe limitations, heavy costs that get subsidized by investors and the public. Forget about the energy and environmental impact. Which “side” acts in good faith in the so-called debate?
> I would really really love to hear an argument that convinced me that AGI is impossible, or far away, or that all the utility I get from Claude, o3 or Gemini are all just tricks of scale and memorization entirely orthogonal to something somewhat akin to general human-like intelligence. However, I have not heard a good argument.
I have. You just need to take the critics seriously. No one can even define intelligence or AGI, but they sure can sell it to FOMO CIOs.
> Obama, no. Geoff Hinton has his opinions and I’ve listened to them. For every smart person who believes in AI and AGI happening soon you can find other smart people who argue the other way.
So your argument is "smart people are wrong sometimes so I will say he is one of those people". Not an incorrect argument in principle, but it can be used to arbitrarily dismiss any opinion you disagree with. I think what's more important is the delta of average sentiment among experts. The general opinion you'd get talking to reasonably smart, well informed tech-adjacent people on the impending approach of AGI is very different now than a few years ago, and these are people who can tell the difference between marketing hype and legitimate impacts.
>Trading stock and Azure credits don’t equal revenue. OpenAI, the leader in the AI industry, loses billions every quarter. Microsoft and Google and Meta subsidize their work from other profitable activities. The profit isn’t there.
They're losing money because they're spending a lot on talent, datacenter buildup, and training runs. This makes sense given the returns on scaling. This is like how people said "Amazon never made a profit" for years, even though their money was being put into growth. A lack of profitability during growth isn't a weird exception proving that OpenAI and other AI companies are scams; it's pretty much standard for the tech industry, given the benefits of being a loss leader when money invested early is instrumental to relative market dominance.
ChatGPT has 500 million active users. Over half of its billions in revenue comes from ChatGPT Plus subscriptions (which is mostly individuals paying for it, not B2B swap schemes).
> I’d expect more tectonic changes than “It wrote my term paper.”
My specific example was a bigger deal than "It wrote my term paper". I know you used that example as shorthand for a tool of trivial utility, but a tool that can effectively write at the college level is invariably a big deal. However, it seems that every advance in AI just shifts the goalposts.
> I might agree, but I didn’t make that claim. AI tools probably can add value as tools. If you find a real example of AI taking over a professional job at scale let us know.
Your claim was that it's unreasonable to assert that layoffs were in part due to AI. My claim is that AI being merely labor-multiplying is enough for it to be an impetus for layoffs. Even if it's impossible for an AI tool to completely replace one job fully on its own with no human input, if a senior engineer can use an AI tool that lets them perform the work of 1.5 equivalent engineers, then the company needs only 2/3 of its original number of engineers for the same output, which would prompt a layoff of the least productive engineers once their contribution becomes redundant.
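To make that arithmetic explicit (a toy model, assuming a uniform hypothetical 1.5x multiplier and interchangeable output):

  # Toy model: headcount needed for constant output when each
  # engineer's productivity is multiplied by m.
  def required_headcount(original_headcount: int, multiplier: float) -> float:
      # before: output = original_headcount * 1.0
      # after:  output = new_headcount * multiplier
      # solving for constant output: new_headcount = original_headcount / multiplier
      return original_headcount / multiplier

  print(required_headcount(30, 1.5))  # 20.0, i.e. 2/3 of the original 30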
No one doubts that a smaller percentage of humans are employed as farmers today than in the 1800s, but it would be silly to claim that, because no robot exists that can autonomously perform every task a farmer does, the massive reduction in farm jobs since the 1800s could not be even partly due to technological advancement. No single farmer has been "fully replaced", but the integration of labor-multiplying tools allowed a smaller number of farmers to do the work of far more, which is effectively what is happening with AI tools.
> We could have better discussions if the AI industry wasn’t led by chronic liars and frauds, people making ridiculous self-serving predictions not backed by anything that resembles science. AI gets literally shoved down our throats with no demonstrated or measurable benefit, poor accuracy, severe limitations, heavy costs that get subsidized by investors and the public. Forget about the energy and environmental impact. Which “side” acts in good faith in the so-called debate?
Every booming industry is rife with those seeking to maximally exploit its growth potential. This was true with the emergence of electricity, the dot-com boom, mobile phones, etc. Anyone with a brain could see that in any new flashy domain there is a lot of deceptive hype, but anyone with a brain could also see that these new advancements did massively impact the world and in many ways improved productivity and general technological enablement.
> I have. You just need to take the critics seriously. No one can even define intelligence or AGI, but they sure can sell it to FOMO CIOs.
You still haven't provided any substantive arguments beyond "AI company CEOs are all a pack of liars", outright denial of any claim I make, and reflexive dismissal of any citation I provide.
The substance of your arguments seems to rely on retreating to a bastion of impossible-to-certify claims, using the lack of absolute consensus in one direction as evidence that the opposite claim must be correct. Claiming that "no one can define intelligence or AGI" has no bearing on whether LLMs and the like are effectively performing many tasks humans consider essential to intelligence. These semantic hideaways would be sensible shelter if this were 1999 and machine intelligence were a far-off fantasy, but ignoring the realities emerging across the economy and society is a rhetorical flourish you can only pull off for so long.
As much as I’d like to continue debating with you I think we will just continue talking past each other.
I didn’t address the possibility of AGI, the consensus or “delta” of smart people, the utility of LLMs, or the prospect of OpenAI et al. someday making a profit in my original comment. You raised those issues and then put words in my mouth.
My original comment amounted to an Occam’s Razor argument: we can explain layoffs in terms of tax changes, economic and political uncertainty, over-hiring during COVID, and management incompetence. The impact of AI remains to be seen. The hype and fear-mongering serves the AI companies in a blatantly obvious way, so we should take their claims skeptically. Maybe it will all come true, but even if we get AGI next week that won’t explain the layoffs in tech and other sectors that started several years ago.
In a few years we will have more evidence to support (or not) AGI, whatever it means. And we will have more evidence of the real economic impact of AI. Right now no one can claim expertise, just opinions.
I feel it's important to note the kinds of discussions that commonly recur on this site yet offer no benefit to either party. Some of our disagreements lie in the realm of speculative confidence, so whether either of us is correct comes down to our biases and trust in various sources. But some points of disagreement don't factor in the very real nuts-and-bolts realities and are tiresome to see brought up. I believe you and many others, out of fear of or animosity toward the implications of near-term AGI, are engaged in motivated reasoning. Even the smartest people fall victim to this folly; even Einstein for a while could not accept the implications of quantum mechanics.
You seem like a very smart person, so I implore you to audit the motivations driving your conclusions, and whether you would be willing to entertain ideas that are correct despite being unpalatable. The ability to accept and effectively grapple with unsavory realities is a massively useful skill in a world where long held assumptions are being upended every day.
> I believe you and many others, out of fear of or animosity toward the implications of near-term AGI, are engaged in motivated reasoning.
Not fear or animosity. I have over 40 years in software development. I've seen numerous fads and supposedly revolutionary tech ideas pushed hard only to fade away or fail to deliver. Not long ago we had to endure the metaverse hype, and before that blockchain. No-code/low-code has been around in some form as a holy grail for management for decades. I haven't stayed employable into my sixties by ignoring new things; I try to discern the useful and real from the hyped and fake. I don't claim to know how LLMs or anything else will eventually play out, but I've seen this money-driven hype train and these cults of personality before.
> Even the smartest people fall victim to this folly; even Einstein for a while could not accept the implications of quantum mechanics.
That of course works both ways.
If so-called AI offers benefits I will adapt, just as I’ve adapted to many new things during my career. If it fizzles out or falls short of the promises I will still have a job. Right now the AI coding tools add nothing for me. Other programmers may have different experiences.
Programmers with years of experience in multiple business domains, like me, have the least to fear from AI. But excluding young people just starting their careers, at a time when an LLM can plausibly do the same work under supervision, will choke off the path to senior developer and the mentoring and real-world experience it requires. AI may create a demographic crisis even if it never actually replaces senior programmers. The damage will come from management short-sightedness and short-term profit seeking, not the technology itself.
In any case the question of what AI can do or might do only has a little to do with layoffs in tech and other sectors, and that was the original topic.