The ‘white-collar bloodbath’ is all part of the AI hype machine

cnn.com

708 points

lwo32k

7 days ago


1283 comments

simonsarris 7 days ago

I think the real white collar bloodbath is that the end of ZIRP was the end of infinite software job postings, and the start of layoffs. I think it's easy to now point to AI, but it seems like a canard for the huge thing that already happened.

just look at this:

https://fred.stlouisfed.org/graph/?g=1JmOr

In terms of magnitude the effect of this is just enormous and still being felt; postings never recovered to pre-2020 levels, and may never. (With pre-pandemic job postings indexed to 100, software is at 61.)

Maybe AI is having an effect on IT jobs though, look at the unique inflection near the start of 2025: https://fred.stlouisfed.org/graph/?g=1JmOv

For another point of comparison, construction and nursing job postings are higher than they were pre-pandemic (about 120 and 116 respectively, against a pre-pandemic index of 100; banking jobs still hover around 100).

I feel like this is almost going to become lost history because the AI hype is so self-insistent. People a decade from now will think Elon slashed Twitter's employee count by 90% because of some AI initiative, and not because he simply thought he could run a lot leaner. We're on year 3-4 of a lot of other companies wondering the same thing. Maybe AI will play into that eventually. But so far companies have needed no such crutch for reducing headcount.

  • rglover 6 days ago

    IMO this is dead on. AI is a hell of a scapegoat for companies that want to save face and pretend that their success wasn't because of cheap money being pumped into them. And in a world addicted to status games, that's a gift from the heavens.

    • esperent 6 days ago

      ZIRP is an American thing? In that case maybe we could try comparisons with the job markets in other developed Western countries that didn't have this policy. If it was because of ZIRP, then their job markets should show clearly different patterns.

      • rglover 5 days ago

        ZIRP was a central banking thing, not just an American phenomenon. At least in the tech industry, the declines we're seeing in job opportunities are a result of capital being more expensive for VCs, meaning fewer investments are made (both in new and existing businesses), meaning there's less cash to hire and expand with. It just felt like the norm because ZIRP ran more or less uninterrupted for 10 years.

        You're right that we should see comparisons in other developed countries, but with SV being the epicenter of it all, you'd expect the fallout to at least appear more dramatic in the U.S.

        And an overwhelming number of (focusing exclusively on the U.S.) tech "businesses" weren't businesses (i.e., little to no profitability). At best they were failed experiments, and at worst, tax write-offs for VCs.

        So, what looked like a booming industry (in the literal, "we have a working, profitable, cash-flowing business here" sense) was actually just companies being flooded with investment cash that they were eager to spend in pursuit of rapid growth. Some found profitability, many did not.

        Again, IMO, AI isn't so much the cause as it is the bandage over the wound of unprofitability.

      • Armisael16 6 days ago

        There isn't anything magical about precisely zero percent interest rates; the behavior we see is mostly a smooth extension of what happens at slightly higher rates, which is where the EU was.

        And of course ZIRP was pioneered in Japan, not the US.

  • perrygeo 6 days ago

    Such an important point. I've long suspected the end of ZIRP had a much greater influence on white collar work than we acknowledge. AI is going to take all the negative press, but the flow of capital is ultimately what determines how the business works, which determines what software gets built. Conway's law 101. The white collar bloodbath is more of a haircut to shed waste accumulated during the excesses of ZIRP.

    • hoosieree 6 days ago

      AI also happens to be a perfect scapegoat: CEOs who over-hired get to shift the blame to this faceless boogeyman, and (bonus!) new hires are more desperate/willing to accept worse compensation.

    • steveBK123 6 days ago

      ZIRP, and then the final gasp of COVID-bubble over-hiring.

      At least in my professional circles the number of late 2020-mid 2022 job switchers was immense. Like 10 years of switches condensed into 18-24 months.

      Further, lots of anecdotes from people who saw their company/org/team double or triple in size compared to 2019.

      Despite some waves of Mag7 layoffs, I think we are still digesting what was essentially an over-hiring bubble.

    • steve_adams_86 6 days ago

      Is it negative press for AI, or is it convincing some investors that it's actually causing a tectonic shift in the workforce and economy? It could be positive in some sense, though ultimately negative, because the outcomes are unlikely to live up to the perceived impact or imagined progress of the technology.

  • e40 7 days ago

    Also, Section 174's amortization of software development costs played a big role.

    • Lu2025 6 days ago

      I agree, the R&D change is what triggered the 2022 tech layoffs. Coders used to be effectively free; all this play with the Metaverse and such was on the public dime. As soon as a company had to spend real money, it all came crashing down.

      • rbultje 6 days ago
        5 more

        This is a weird take. Employees are supposed to be business expenses; that's the core idea of running a business: profit = revenue - expenses, where expenses are personnel / materials, and you pay taxes on profit. Since the R&D change, businesses can't fully expense employees and need to pay (business) taxes on their salaries. Employees - of course - still pay personal taxes as well (as was always the case).
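
        As a rough sketch of the first-year cash effect (hypothetical numbers; this assumes the post-2022 rule of five-year amortization for domestic software R&D with a half-year convention, so only ~10% is deductible in year one):

          # Hypothetical numbers, not any particular company's.
          revenue = 1_000_000
          dev_salaries = 800_000
          tax_rate = 0.21  # US federal corporate rate

          # Before the change: salaries fully expensed in the year paid.
          taxable_before = revenue - dev_salaries        # 200,000
          tax_before = tax_rate * taxable_before         # 42,000

          # After the change: only ~10% of the R&D spend is deductible in year one.
          year_one_deduction = dev_salaries * 0.10       # 80,000
          taxable_after = revenue - year_one_deduction   # 920,000
          tax_after = tax_rate * taxable_after           # 193,200

          print(tax_before, tax_after)  # same cash out the door, ~4.6x the tax bill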

        • e40 6 days ago
          3 more

          Yeah, free is a bit of an odd take. The end of ZIRP plus Section 174 was a huge simultaneous blow to tech.

          I would add one more: me-too-ism from CEOs following Musk after the Twitter reductions. I think many tech CEOs (e.g., Zuck) hate their workforce with a passion and used the layoff culture to unwind things and bring their workforce to heel (you might be less vocal in this sort of environment... think of the activists that used to work at Google).

          • Lu2025 6 days ago
            2 more

            > me too-ism from CEOs following Musk after the twitter reductions

            I see evidence of collusion. My friends at several tech companies (software and hardware) received very similar-sounding emails in a similar time frame. I think the goal was "salary compression". Management was terrified of the turnover and salary growth, so they decided to act. They threw a bunch of people onto the labor market at once to cool it down. It would normalize eventually, but you don't need long: fired H-1B holders have to find a new job within 2 months or self-deport.

            • e40 6 days ago

              Totally agree. They wanted to mess with supply/demand to lower salaries. A lot of very highly paid people were laid off or forced out. RTO is really about shedding people, too, so let's not forget about that.

        • sharpshadow 6 days ago

          If a software engineer on an R&D project is using an AI service to develop the software, does the bill count as a regular business expense or does it fall under Section 174?

    • jki275 6 days ago

      That's about to get repealed it looks like.

      • latchkey 6 days ago
        15 more

        TACO

        • immibis 6 days ago
          14 more

          For those unaware, the "TACO trade" is when Wall Street investors trade based on the principle that "Trump Always Chickens Out". For example, buying in a tariff-induced dip on the principle that he'll probably repeal the tariffs.

          Now that someone's said to Trump's face that Wall Street thinks he always chickens out, he may or may not stop doing it.

          • JumpCrisscross 6 days ago
            9 more

            > Now that someone's said to Trump's face that Wall Street thinks he always chickens out, he may or may not stop doing it

            The point is he’s powerless not to. The alternative is allowing a bond rout to trigger a bank collapse, probably in rural America. He didn’t do the prep that produces actual leverage. (Xi did.)

            • alfiedotwtf 6 days ago
              4 more

              This was the most interesting thing I found during the past few weeks - even “The US President is the most powerful man in the world” can’t win a war against the bond market.

              • JumpCrisscross 6 days ago
                3 more

                > even “The US President is the most powerful man in the world” can’t win a war against the bond market

                "You will not find it difficult to prove that battles, campaigns, and even wars have been won or lost primarily because of logistics" (D. D. Eisenhower).

                Trump did zero preparation for this trade war. It's still unclear what the ends are, with opposing and contradictory aims being messaged. We launched the war simultaneously against everyone. The formula used to calculate tariffs doesn't make sense. And Trump decided to blow out the deficit and kneecap U.S. state capacity at the same time he's negotiating against himself on trade.

                The U.S. President can take on the bond market. Most simply by taking the budget into surplus, thereby threatening its existence. But Trump didn't do that. He didn't even pretend he was going to do that. Instead, he's strategically put himself in a position where he has to chicken out, and it honestly seems like he's surrounded himself with people who are too high, drunk and/or stupid to see that. He's the poker player who shows up at the table, goes all in, looks at his cards and folds in one move.

                • immibis 6 days ago
                  2 more

                  There's no end - it's just Trump following his learned or innate behaviour.

                  Same behaviour that bankrupted every institution he's ever been in charge of before. The definition of insanity is doing the same thing again and expecting different results.

                  It's possible he'll stop chickening out to win his internal argument against that reporter who said he always chickens out. Feeling like he's winning seems to be important to him and he holds grudges for a long time. In that case the American economy goes bye bye.

                  We already know he wants to end the dollar reserve currency status, because he said so - trade deficit and reserve currency status are different words for the same thing.

                  • fuzzfactor 5 days ago

                    Trump has never been good enough for the financial structures he has been finagled into the top position of.

                    So many dumpster fires but only a few official bankruptcies, well that's always what's on the table and anything goes.

                    Back in the 20th century almost everybody knew that Trump was not trustworthy, especially not with money, give me a break, that's what made him such a tragic/comic character.

                    It's almost like people forget with any org where he is the ultimate decision-maker, if there is challenging debt with no quick way out, he is more likely than most to declare bankruptcy. Otherwise it would require acumen he has never had to right a faltering ship. Plus he would be bogged down when he wanted to shift his focus to schemes that were more promising to him personally. Like other pie-in-the-sky deals back then, or something like his memecoin today. So many times in different orgs with different/leading personalities it's only a declaration away anyhow. Not normally on the menu for the best of the real decent businessmen, but what do you do when you get one that's far from the best and not even decent?

                    If there were some deep insight into his personal financial situation over the years, especially recently, there might be a more accurate picture whether he would be inclined to "one day" just decide to declare the whole USA bankrupt and move on to greener pastures himself. Or if the decision has already been made, who knew? Or would believe it yet anyway?

                    Any President could always have made more money doing something else, the whole time it's only been a matter of integrity, or lack thereof.

            • harmmonica 6 days ago
              3 more

              Can you expand on "probably in rural America"? Do you just mean that those smaller community banks are more at risk if rates rise? If so, because they issue more variable rate debt? Or is there something else?

              edit: grammar

              • JumpCrisscross 6 days ago

                > Do you just mean that those smaller community banks are more at risk if rates rise? If so, because they issue more variable rate debt? Or is there something else?

                Current issue is community banks have 3x the commercial real estate exposure of other banks [1]. They're also less liquid and have a lower ROA. So in cases where the shock comes from outside the financial sector, they tend to be the first we worry about.

                [1] https://www.fdic.gov/quarterly-banking-profile 33% vs 11% of total assets

            • lazide 6 days ago

              Never assume a narcissist will take the sane way out when their game blows up in their face.

          • wonderwonder 6 days ago

            Yeah, I thought the same thing. The steel tariff announcement is the first real test. Announcing the US Steel merger? / purchase? at the same time is, I think, part of the plan. I think he is going to stick this one out to prove them wrong. It would be interesting to see if TACO is even real; I could see someone on Wall Street opening a ton of puts, making the story up and then leaking the fake story to the reporter.

            Mission accomplished.

          • darepublic 6 days ago
            2 more

            Don't look a gift taco in the mouth

            • esaym 6 days ago

              This

          • latchkey 6 days ago

            And the reason why I said it is because 174 is part of Trump's Cut^3 bill from 2017. DOE 174.

  • leptons 6 days ago

    What's happening now is similar to what happened during the 2000s "dot-com bubble burst". Having barely survived that time, I saw this one coming, and people told me I was crazy when I told them to hold on to their jobs and quit job-hopping, because the job-hopper is very often the first one to get laid off.

    In 2000 I moved cities and I had a job lined up at a company that was run by my friends. I had about 15 good friends working at the company, including the CEO, and I was guaranteed the job in software development there. The interview was supposed to be just a formality. So I moved, and went in to see the CEO, and he told me he could not hire me; the funding was cut and there was a hiring freeze. I was devastated. Now what? Well, I had to freelance and live on whatever I could scrape together, which was a few hundred bucks a month, if I was lucky. Fortunately the place I moved into was a big house with my friends who worked at said company, and since my rent was so low at the time, they covered me for a couple of years. I did eventually get some freelance work from the company, but things did not really recover until about 2004, when I finally got a full-time programming job, after 4 very difficult years.

    So many tech companies over-hired during covid, there was a gigantic bubble happening with FAANG and every other tech company at the time. The crash in tech jobs was inevitable.

    I feel bad for people who got left out in the cold this time, I know what they are going through.

    • yellow_lead 6 days ago

      Those are some great friends. Aside from job hoppers, I noticed there are a lot of company loyalists getting canned too, though (e.g., people who worked at MSFT for 10 years).

      • leptons 6 days ago

        It's not exactly the same this time around; the dot-com bubble was a bit different, but both then and now were preceded by huge hiring bubbles and valuations that were stupid. Now, 25 years later, it's a little different: tech has advanced and AI means cutting the fat out of a lot of companies, even Microsoft.

        AI is somewhat creating a similar bubble now, because investors still have money and the current AI efforts are way over-hyped. $6.5 billion paid to acquihire Jony Ive is a symptom of that.

  • jameslk 7 days ago

    Keynes suggested that by 2030, we'd be working 15-hour workweeks, with the rest of the time used for leisure. Instead, we chose consumption, and helicopter money gave us bullshit jobs so we could keep buying more bullshit. This is fairly evident from the fact that when the helicopter money runs out, all the bullshit jobs get cut.

    AI may give us more efficiency, but the gains will be filled with more bullshit jobs and consumption, not more leisure.

    • autobodie 7 days ago

      Keynes lived in a time when the working class was organized and exerting its power over its destiny.

      We live in a time that the working class is unbelievably brainwashed and manipulated.

      • pif 6 days ago

        > Keynes lived in a time when the working class ...

        Keynes lived in a time when the working class could not buy cheap from China... and complain that everybody else was doing the same!

      • kergonath 7 days ago
        32 more

        He was extrapolating, as well. Going from children in the mines to the welfare state in a generation was quite something. Unfortunately, progress slowed down significantly for many reasons but I don’t think we should really blame Keynes for this.

        > We live in a time that the working class is unbelievably brainwashed and manipulated.

        I think it has always been that way. Looking through history, there are many examples of turkeys voting for Christmas and propaganda is an old invention. I don’t think there is anything special right now. And to be fair to the working class, it’s not hard to see how they could feel abandoned. It’s also broader than the working class. The middle class is getting squeezed as well. The only winners are the oligarchs.

        • FabHK 6 days ago
          6 more

          > progress slowed down significantly for many reasons

          I think progress (in the sense of economic growth) was roughly in line with what Keynes expected. What he didn't expect is that people, instead of getting 10x the living standard with 1/3 the working hours, rather wanted to have 30x the living standard with the same working hours.

          • Ray20 6 days ago
            4 more

            It's not really clear where he got this from.

            Throughout human history, starting with the spread of agriculture, increased labor efficiency has always led to people consuming more, not to them working less.

            Moreover, throughout the 20th century, we saw several periods in different countries when wages rose very rapidly - and this always led to a temporary average increase in hours worked. Because when a worker is told "I'll pay you 50% more" - the answer is usually not "Cool, I can work 30% less", but "Now I'm willing to work 50% more to get 2x of the pay".

            • jjk166 6 days ago
              3 more

              > Throughout human history, starting with the spread of agriculture, increased labor efficiency has always led to people consuming more, not to them working less.

              Can you give a single example where that happened?

              During the industrial revolution it was definitely not what happened. In the late 1700s laborers typically averaged around 80 hours per week. In the 1880s this had decreased to around 60 hours per week. In the 1920s the average was closer to 48 hours per week. By the time Keynes was writing, the 40 hour work week had become standard. Average workweek bottomed out in the mid 1980s in the US and UK at about 37 hours before starting to increase again.

              • Ray20 5 days ago
                2 more

                > 80 hours per week ... 60 hours per week

                That never was the case (except for short periods after salary increases).

                And this is not a question where there could be any speculation: in those days there were already people collecting such statistics, and we have a bunch of diaries describing the actual state of affairs, both from the workers themselves and from those who organized their labor - and everything shows that few people worked more than 50 hours a week on average.

                Most likely, the myth about 80 hours a week stems from the fact that such weeks really were common, but the work came in the format of "work a week or two or a month at 80 hours, then for a week or two or a month you don't work, spend money, arrange your life".

                There is also agriculture, which employed a significant part of the population in the past. There, on average, there was usually even less than 40 hours of productive work a week; it's just that timing is of great importance, and there are bottlenecks, so when necessary you have to work 20 hours a day, which is compensated by periods when the workload is much less than 6 hours a day.

                • jjk166 19 hours ago

                  It most certainly was the case. As you correctly point out, people were collecting such statistics at the time, we know how much they worked and they worked a lot. In London from 1750 to 1800 the average male laborer worked over 4000 hours per year, and the typical year had 307 workdays. We have records of employment that list who worked which days at particular businesses, and court cases where witnesses testified about their work schedules, and we know of complaints from people at this time about the excessive amount of time they worked.

                  Take the Philadelphia carpenters' strike in 1791, where they were on strike demanding a reduction in hours to a 60 hour work week. The strike was unsuccessful. In the 1820s there was a so called "10 Hour Day" labor movement in New York City (note that at this time people worked 6 days a week). In the 1840s mill workers in Massachusetts attempted to get the state legislature to intervene and reduce their 74 hour workweeks. This was also unsuccessful. Martin Van Buren signed an executive order limiting workdays for federal employees to 10 hours per day. The first enforceable labor law in the US came in 1874, which set a limit of 60 hours in a workweek for women in Massachusetts.

        • ireadmevs 7 days ago
          25 more

          There’s no middle class. You either have to work for a living or you don’t.

          • dagw 6 days ago
            18 more

            You either have to work for a living or you don’t

            The words 'have to' are doing a lot of work in that statement. Some people 'have to' work to literally put food on the table; other people 'have to' work to be able to make payments on their new yacht. The world is full of people who could probably live out the rest of their lives without working any more, but doing so would require drastic lifestyle changes they're not willing to make.

            I personally think the metric should be something along the lines of how long would it take from losing all your income until you're homeless.

            • nosianu 6 days ago
              2 more

              > I personally think the metric should be something along the lines of how long would it take from losing all your income until you're homeless.

              What income? Income from a job, or from capital? A huge difference. The latter is also a lot harder to lose (it takes gross incompetence or a revolution), while the former is much easier.

              • dagw 6 days ago

                Yea, should have been clearer. Income from work (or unemployment benefits) in this case. Someone who works to essentially supplement their income, but could live off their capital, is in a very different position than someone for whom work is their only source of income or wealth.

            • dgfitz 6 days ago
              11 more

              The sentence works without those two words. “You either work for a living or you don’t.”

              Now what?

              • dagw 6 days ago
                10 more

                Now it comes down to how you define 'for a living'. You still need to differentiate between people who work to survive, people who work to finance their aspirational lifestyle, and people who have all the money they could possibly need and still work because they either see it as a calling or they just seem to like working. Considering all these people in the same 'class' is far too simplistic.

                • dgfitz 6 days ago
                  8 more

                  Enh, to me it’s not, either you work or you don’t.

                  • dagw 6 days ago
                    7 more

                    So someone on the edge of poverty, balancing two or three minimum wage jobs just to make ends meet, should be considered part of the same class as the CEO of Microsoft or Google? Hell most people on the Forbes list 'work' in at least some meaning of the word, even if many of them effectively work for themselves.

                    What about the trust fund kid working part time at an art gallery just because they like the scene and hanging out with artists? Same class?

                    And on the flip side, are pensioners, the unemployed, and people on permanent disability part of the same class as the dilettante children of billionaires?

                    • dgfitz 5 days ago
                      6 more

                      Are we talking about the verb or an ideal? Either you work or you don’t.

                      • dagw 5 days ago
                        4 more

                        Are we talking about the verb or an ideal?

                        We are talking about class, and if we should be making distinctions between groups of people who work for a living based on their wealth, income, and economic stability. I believe there is a fundamental class difference between people who work, but are rich enough to stop working whenever they want, those who can't quite stop working but are comfortable enough to easily go 6 month without a pay check, and people who are only a couple of missed pay checks away from literal homelessness.

                        • dgfitz 5 days ago
                          3 more

                          I guess I lost the plot. Same point: either you work or you don't. I grew up not knowing if I would have dinner that night because we were so poor. I learned I needed to work to eat. I don't care that rich people are rich, I only care about myself and my family.

                          Coddling poor people is so severely out of touch with their reality that they most likely resent the hell out of you for it; I know I did.

                          • komali2 5 days ago
                            2 more

                            Nobody's coddling anyone here. Acknowledging the reality of class in society isn't doing anything but analysis.

                            The original claim was a proposal to increase the resolution of class analysis one degree "higher" than Marx and no longer differentiate between the modern proletariat (working class), bourgeois (middle class), and aristocracy (upper class), in this case proposing to lump together the bourgeois and proletariat because they both have to work or they'll starve to death.

                            In this world, being born from the orifice of an aristocrat means you never have to work ("have to" meaning "or you'll die of exposure"). That's a frank reality. If your reaction to being born from a non-aristocratic orifice is to shrug your shoulders and accept reality, great, nobody's trying to take that from you.

                            However, you seem to be taking it a step further and suggesting that the people pointing out that this nature of society is unfair are somehow wrong to do so. I disagree. I think it is perfectly valid to be born from whatever orifice, declare the obvious unfairness of the situation, and work to balance things out for people. That's not coddling, it's just ensuring that we all benefit in a just way from the work of your grandfather. Cause right now, someone has stolen the value of his work from you, and that's why you (and I) had to work so hard to get where we are today.

                            If you love that you had to work so hard, fine. I could take it or leave it. Instead of working a double through school I would have preferred to focus more on my studies, get higher grades, and find better internships instead of slinging sandwiches. Personally I look at the extraordinary wealth of the aristocrat class and I think, "is it more important that they're allowed to own 3 yachts or that all the children of our society can go to college?" I strongly believe any given country will be much stronger if it has fewer yachts and more college-educated people. Or people with better access to healthcare. Or people with better transit options to work. Etc.

                            • dagw 4 days ago

                              I strongly believe any given country will be much stronger if it has less yachts and more college-educated people.

                              And even if you strongly disagree with that statement, it is important to have a framework within which your opinion of that statement can be analysed.

                      • happymellon 4 days ago

                        There were articles on AI that linked to how it's used inside Microsoft.

                        Satya Nadella doesn't read his emails, and doesn't write responses. He subscribes to podcasts and then gets them summarised by AI.

                        He turns up to the office and takes home obscene amounts of money for doing nothing except playing with toys and pretending he's working.

                        They are "working", but they are actually just playing. And I think that's the problem with some of these comments: they aren't distinguishing between work and what is basically a hobby.

                        > What about the trust fund kid working part time at an art gallery just because they like the scene and hanging out with artists?

                        It's a hobby. They don't have to do it, and if they got fired for gross misconduct they could find alternative things to pass the time.

            • mantas 6 days ago
              2 more

              Homeless, or losing your current house? Downsizing and/or moving somewhere cheaper could go a long way. Yet losing their current level of housing is what most people want to avoid.

              • dagw 6 days ago

                Either works, but homeless is more absolute. For some, downsizing means moving into their car, and for others it means moving into a 3000 sq ft house, with a smaller pool, in the third nicest neighbourhood in town. But yea, losing your house and being forced to drastically downsize against your will is no doubt traumatic in both cases.

            • larrled 6 days ago
              2 more

              “from losing all your income until you're homeless.”

              I'm willing to bet you haven't lived long enough to know that's more or less a proxy for old age. :) That aside, even homeless people acquire possessions over time. If you have a lot of homeless in your neighborhood, try to observe that. In my area, many homeless have semi-functional motor homes. Are they legit homeless, or are they "homeless oligarchs"? I can watch any of the hundreds of YouTube channels devoted to "van life." Is a 20-year-old who skipped college their family could have afforded, and is instead living in an $80k van and making money from streaming, "legit homeless"? The world is not so black and white, it will turn out in the long run.

          • d4mi3n 7 days ago
            4 more

            While you're not wrong about what differentiates those with wealth from those without, I think it ignores a lot of nuance.

            Does one have savings? Can they afford to spend time with their children outside of working day to day? Do they have the ability to take reasonable risks without chancing financial ruin in pursuit of better opportunities?

            These are things we typically attribute to someone in the middle class. I worry that boiling down these discussions to "you work and they don't" misses a lot of opportunity for tangible improvement to quality of life for a large number of people.

            • hobs 6 days ago
              3 more

              It doesn't - it's a battle cry for the working classes (i.e., anyone who actually works) to realize they are being exploited by those that simply do not.

              If you have an actual job and an income constrained by your work output, you could be middle class, but you could also recognize that you are getting absolutely ruined by the billionaire class (no matter what your level of working wealth).

              • SpicyLemonZest 6 days ago
                2 more

                I'm really not convinced that I and my CEO share a common class interest against the billionaires, and I'm not particularly interested in standing together to demand that both of us need to be paid more.

                • hobs 6 days ago

                  I don't know how to convince you that the two of you are struggling against each other when you should be in common cause, but in my experience, if the CEO thinks of themselves as a temporarily embarrassed billionaire, then I can see why you'd resent them. That doesn't change the facts of the matter though.

          • freefrog334433 6 days ago

            Traditionally there were the English upper class, who had others work for them, and the working class, who did the work. Doctors and bankers were the middle class, because they owned houses with 6-8 servants running them; so while they worked, they also had plenty of people working for them.

            I agree with your point. Now doctors are working class as well.

          • laughing_man 6 days ago

            That's reductive. The middle class in the US commonly describes people who have access to goods and services in moderation. You aren't poor just because you can't retire.

      • eastbound 7 days ago
        9 more

        It is very possible that foreign powers use AI to generate social media content en masse for propaganda. If anything, the internet up to 2015 seemed open for discussion and swaying by real people's opinions (and mockery of the elite classes), while manipulation and manufactured consent became the norm after 2017.

        • kergonath 7 days ago

          > It is very possible that foreign powers use AI to generate social media content in mass for propaganda.

          No need for AI. Troll farms are well documented and were in action before transformers could string two sentences together.

        • amarcheschi 7 days ago
          5 more

          The Italian party Lega (in the governing coalition) has been using deepfakes for some time now. It's not only ridiculous, it's absolutely offensive to the people they mock - von der Leyen, other Italian politicians...

          • genewitch 7 days ago
            4 more

            Queen Ursula deserves to be mocked.

            • karmakurtisaani 6 days ago
              3 more

              Even from an angle that destabilizes the EU and so directly benefits Russia?

              • genewitch 3 days ago

                My answer to this would be, i think: "well, if my mocking Ursula Von Der Leyen destabilizes the EU then maybe the EU shouldn't exist."

                right?

                pact of steel?

                anyone?

              • eastbound 6 days ago

                Yes. She’s not an elected representative. And she’s been utterly ineffective at threatening Russia with her soft stance (Yes, in war, strong words are weak actions). Her place is back in Hunger Games, starving everybody for the greater good of the elite class.

        • rusk 7 days ago

          This is a pre-/post-Snowden-and-Schrems thing; those cases challenged the primary economic model of the internet as a surveillance machine.

          All the free money dried up and the happy clapping Barney the Dinosaur Internet was no more!

      • hoseyor 6 days ago
        5 more

        He also lived in a time when the intense importance and function of a moral and cultural framework for society was taken for granted. He would have never imagined the level of social and moral degeneration of today.

        I will not go into specifics because the authoritarians still disagree and think everything is fine with degenerative debauchery and try to abuse anyone even just pointing to failing systems, but it all does seem like civilization ending developments regardless of whether it leads to the rise of another civilization, e.g., the Asian Era, i.e., China, India, Russia, Japan, et al.

        Ironically, I don’t see the US surviving this transitional phase, especially considering it essentially does not even really exist anymore at its core. Would any of the founders of America approve of any of America today? The forefathers of India, China, Russia, and maybe Japan would clearly approve of their countries and cultures. America is a hollowed out husk with a facade of red, white, and blue pomp and circumstance that is even fading, where America means both everything and nothing as a manipulative slogan to enrich the few, a massive private equity raid on America.

        When you think of the Asian countries, you also think of distinct and unique cultures that all have their advantages and disadvantages, the true differences that make them true diversity that makes humanity so wonderful. In America you have none of that. You have a decimated culture that is jumbled with all kinds of muddled and polluted cultures from all over the place, all equally confused and bewildered about what they are and why they feel so lost only chasing dollars and shiny objects to further enrich the ever smaller group of con artist psychopathic narcissists at the top, a kind of worst form of aristocracy that humanity has yet ever produced, lacking any kind of sense of noblesse oblige, which does not even extend to simply not betraying your own people.

        • komali2 6 days ago
          3 more

          That a capitalist society might achieve a 15 hour workweek if it maintained a "non debauched culture" and "culture homogeneity" is an extraordinary claim I've never seen a scrap of evidence for. Can you support this extraordinary claim?

          That there's any cultural "degenerative debauchery" is an extraordinary claim. Can you back up this claim with evidence?

          "Decimated," "muddled," and "polluted" imply you have an objective analysis framework for culture. Typically people who study culture avoid moralizing like this because one very quickly ends up looking very foolish. What do you know that the anthropologists and sociologists don't, to where you use these terms so freely?

          If I seem aggressive, it's because I'm quite tired of vague handwaving around "degeneracy" and identity politics. Too often these conversations are completely presumptive.

          • chucksmash 6 days ago
            2 more

            > That there's any cultural "degenerative debauchery" is an extraordinary claim. Can you back up this claim with evidence?

            What's the sense in asking for examples? If one person sees ubiquitous cultural decay and the other says "this is fine," I think the difference is down to worldview. And for a pessimist and an optimist to cite examples at one another is unlikely to change the other's worldview.

            If a pessimist said, "the opioid crisis is deadlier than the crack epidemic and nobody cares," would that change the optimist's mind?

            If a pessimist said, "the rate of suicide has increased by 30% since the year 2000," would that change the optimist's mind?

            If a pessimist said, "corporate profits, wealth inequality, household debt, and homelessness are all at record highs," ...?

            And coming from the other side, all these things can be Steven Pinker'd if you want to feel like "yes there are real problems but actually things are better than ever."

            There was a book that said something about "you will recognize them by their fruit." If these problems are the fruit born of our culture, it's worth asking how we got here instead of dismissing it with "What do you know that the anthropologists and sociologists don't?"

            • komali2 5 days ago

              Sure some things are subjective but wide-ranging and vague claims are unactionable and therefore imo should simply be ignored. If someone's going to say something like that I think it's worth challenging them to get specific and actionable.

              I also wholeheartedly disagree that, vaguely, diversity has something to do with the reduction of material conditions, or gay people, or whatever tf, so I wanted to allow the op the opportunity to be demonstrably wrong. They wouldn't take it of course, because there's no evidence for what they claim, because it's a ridiculous assertion.

              The reasons things are the way they are today are identifiable and measurable. Rent is high mostly because housing is an investment vehicle and supply is locked by a functional cartel. Homelessness is high mostly because of a lack of universal healthcare. Crime is continually dropping despite what the media says, and immigrants commit less crime per capita than any other demographic group - but the jails remain full because the USA engages in a demonstrably ineffective retributive justice system.

              I'm so tired of conservatives walking around flinging every which way their feelings as facts. Zizek has demonstrated the potential value of a well considered conservative ideology, and unfortunately today all we get from that side is vague (or explicit) bigotry.

              The OP didn't just claim that there's cultural degeneracy happening (which, again, they didn't define very well), they blamed real-world outcomes on it. That's a challengeable premise.

        • mlinhares 6 days ago

          Oh the prized Asian magic, more civilized, less mixed, the magical place.

          Capitalism arrives for everyone, Asia is just late for the party. Once it eventually financializes everything, the same will happen to it. Capitalism eventually eats itself, doesn't matter the language or how many centuries your people might have.

      • 1776smithadam 6 days ago

        Keynes didn't anticipate social media

    • ccorcos 6 days ago

      If you work 15 hours/week then presumably someone who chose to work 45 hours/week would make 3x more money.

      This creates supply-demand pressure for goods and services. Anything with limited supply such as living in the nice part of town will price out anyone working 15 hours/week.

      And so society finds an equilibrium…

      • jjk166 6 days ago
        3 more

        Presumably the reduction to a 15 hour workweek would be much the same as the reduction to the 40 hour workweek - everyone takes the same reduction in total hours and increase in hourly compensation encoded in labor laws specifically so there isn't this tragedy of the commons.

        • ccorcos 3 days ago
          2 more

          Unless the law forbids working more than 15 hours per week, the numbers will shift around but the supply-demand market equilibrium will remain approximately the same.

          If minimum wage goes up 40/15 = 267%, then the price of your coffee will go up 267% because the coffeeshop owner needs to pay 267% more to keep the cafe staffed.

          The 40 hour work week is something of a cultural equilibrium. But we've all heard of doctors, lawyers, and bankers working 100h weeks, which affords them some of the most desirable real estate in the world...

          • jjk166 20 hours ago

            > Unless the law forbids working more than 15 hours per week, the numbers will shift around but the supply-demand market equilibrium will remain approximately the same.

            Require anyone working over 15 hours to be paid time and a half overtime. If you want to hire one person to work 40 hours per week, that is 30% more expensive than hiring 3 people to work the same number of hours. In some select instances sure, having a single person do the job is worth the markup, and some people will be willing to work those hours, just like today you have some people working over 40, but in general the market will demand reduction in working hours.
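
            Back-of-the-envelope (a sketch under those assumptions: straight pay up to 15 hours, 1.5x beyond):

              # Cost of covering 40 hours/week with one worker vs. three,
              # assuming overtime (1.5x) kicks in after 15 hours.
              wage = 1.0                                 # normalized hourly wage
              one_worker = 15 * wage + 25 * wage * 1.5   # 52.5 wage-units
              three_workers = 40 * wage                  # 40 wage-units, nobody over 15h
              print(one_worker / three_workers - 1)      # ~0.31, i.e. roughly 30% more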

            Similarly, there is a strong incentive to work enough hours to be counted as a full time employee, so the marginal utility of that 35th hour is pretty high currently, whereas if full time benefits and labor protections started at 15 hours, then the marginal utility of that 35th hour would be substantially less.

            > If minimum wage goes up 40/15 = 267%, then the price of your coffee will go up 267% because the coffeeshop owner needs to pay 267% more to keep the cafe staffed.

            That would be true if 100% of the coffee shop's revenue went to wages. Obviously that's not the case. In reality, the shop is buying ingredients, paying rent for the space, paying off capex for the coffee making equipment, utilizing multiple business services like accounting and marketing, and hopefully at the end of the day making some profit. Realistically, wages for a coffee shop are probably 20-30% of revenue. So to cover the increased cost of labor, prices would have to rise 53%. Note that in this scenario you also have 267% more money to spend on coffee.
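
            To put rough numbers on the pass-through (a sketch, assuming a labor share of about 30% of revenue, per the range above; a share nearer a third gives the ~53% figure):

              # Price increase needed to cover the higher wage bill when
              # labor is only part of the coffee shop's cost structure.
              wage_multiplier = 40 / 15      # ~2.67x the hourly wage for the same weekly pay
              labor_share = 0.30             # assumed share of revenue going to wages
              price_multiplier = (1 - labor_share) + labor_share * wage_multiplier
              print(price_multiplier)        # ~1.50, i.e. roughly a 50% rise, not 267%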

            Of course there are some more nuances as prices in general inflate. Ultimately though, the equilibrium you reach is that people working minimum wage for a full workweek wind up able to afford 1 minimum-wage workweek worth of goods and services. This holds true in the long term regardless of what level minimum wage is or how long a workweek is. Indeed you could just as easily have everyone's wages stay exactly the same but we are all working less, then we all have less money and there is a deflationary effect but in the long term we wind up at the same situation. Ideally, you'd strike a balance between these two which reaches the same end state with a reasonably steady money supply.

            > The 40 hour work week is something a cultural equilibrium.

            No, it isn't. It is an arbitrary convention, one in a long series which had substantially different values in the past. It has remained constant because it is encoded in law in such a way that it is no longer subject to simple pressures of labor supply and demand.

            > But we've all heard of doctors, lawyers, and bankers working 100h weeks which affords them some of the most desirable real estate in the world...

            There are a lot more than just doctors and lawyers and bankers working long hours. 37% of Americans work 2 full-time jobs, and most of them aren't exactly in a position to afford extremely desirable real estate. If the workweek were in an equilibrium due to supply and demand, wouldn't these people just be working more hours at their regular jobs?

    • tim333 7 days ago

      I think something Keynes got wrong there, and much AI job discussion ignores, is that people like working, subject to the job being fun. Look at the richest people with no need to work - Musk, Buffett etc. Still working away, often well past retirement age, with no need for the money. Keynes himself, wealthy and probably tenured, kept working away on his theories. In the UK you can quite easily do nothing by going on disability allowance, and many do, but they are not happy.

      There can be a certain snobbishness with academics where they are like of course I enjoy working away on my theories of employment but the unwashed masses do crap jobs where they'd rather sit on their arses watching reality TV. But it isn't really like that. Usually.

      • trinix912 6 days ago
        11 more

        The reality of most people is that they need to work to financially sustain themselves. Yes, there are people who just like what they do and work regardless, but I think we shouldn't discount the majority which would drop their jobs or at least work less hours had it not been out of the need for money.

        • tim333 6 days ago
          10 more

          Although in democracies we've largely selected that system. I've been to socialist places - Cuba, and Albania before communism collapsed - where a lot of people didn't do much but were still housed and fed (not very well - ration books), yet no one seems to want to vote that stuff in.

          • trinix912 6 days ago
            5 more

            The thing about those systems is you'd have to forgo the entire notion of private property and wealth as we currently know it for it to work out. Even then, there would be people who wouldn't want to work/contribute, and a majority who would contribute the bare minimum (like you're saying). The percentage of people who'd work because they like it wouldn't be much higher than it is now. Or it might be even lower, as money wouldn't be as much of a factor in one's life.

            • hx8 6 days ago
              4 more

              It seems like a democratic system could both maintain private property and make sure all of its citizens' basic needs are satisfied (food, housing, education, medical). I don't see how these two are mutually exclusive, unless you take the hard line that taxation is theft.

              • AnimalMuppet 6 days ago

                I think more people take a soft line. Taxation isn't theft, but too much taxation is theft.

                I don't know that I've ever heard this rationally articulated. I think it's a "gut feel" that at least some people have.

                If taxes take 10% of what you make, you aren't happy about it, but most of us are OK with it. If taxes take 90% of what you make, that feels different. It feels like the government thinks it all belongs to them, whereas at 10%, it feels like "the principle is that it all belongs to you, but we have to take some tax to keep everything running".

                So I think the way this plays out in practice is, the amount of taxes needed to supply everyones' basic needs is across the threshold in many peoples' minds. (The threshold of "fairness" or "reasonable" or some such, though it's more of a gut feel than a rational position.)

              • Ray20 6 days ago
                2 more

                >food, housing, education, medical

                Those are literally unlimited needs; the term "basic" does not apply to them.

                • hx8 4 days ago

                  I'm not sure what you mean by "unlimited needs". These things are definitely finite, and can be basic.

          • ptero 6 days ago

              While they didn't do much at work and could coast forever, they still had to show up and sit out the hours. And this does seem to correlate highly with ration books. Which are also not Amazon-fulfilled, but require going to a store, waiting in line, worrying that the rations would run out, yada yada.

            I'll take capitalism with all its warts over that workers paradise any day.

          • sotix 6 days ago
            3 more

            How did you visit Albania before communism collapsed? I thought it was closed off from the world.

            • tim333 6 days ago
              2 more

                Well, it was in the middle period, when communism had collapsed in some places but Albania was still communist. They did tourist day trips from Corfu to raise some hard currency. It's only about a mile from Albania at the closest point.

              • sotix 5 days ago

                Yeah, I’ve been to that part of the world. That’s really cool. I didn’t know it was available to tour at that time.

      • timacles 6 days ago
        4 more

        What percentage of people would you say like working for fun? Would you really claim they make up a significant portion of society?

        Even I, who work a job that I enjoy, building things that I'm good at, with almost no stress, find after 10-15 years that I would much rather spend time with my family, or even spend a day doing nothing, than spend another hour doing work for other people. The work never stops coming and the meaninglessness is stronger than ever.

        • ghaff 6 days ago

          I think a lot of people would work fewer hours and probably retire earlier if money were absolutely not in the equation. That said, it's also true that there are a lot of things you realistically can't do on your own--especially outside of software.

        • tim333 6 days ago
          2 more

          Well - I guess you are maybe typical in quite liking the work but wanting to do less hours? I saw some research that hunter gatherers work about 20 hours a week - maybe that's an optimum.

          • chipsrafferty 6 days ago

            A lot of people like the work they do, but they also like the things they do when they aren't working - more.

      • navane 6 days ago

        Meanwhile your examples of happy workers are all billionaires who do whatever tf they want, and your example of sad non-workers is disabled people.

    • Slow_Hand 6 days ago

      Not to undercut your point - because you’re largely correct - but this is my reality. I have a decent-paying job in which I work roughly 15 hrs a week. Sometimes more when work scales up.

      That said, I'm not what you'd call a high-earning person (I earn < 100k). I simply live within my means and do my best to curb lifestyle creep. In this way, Keynes' vision is a reality, but it's a mindset, and we also have to know when enough wealth is enough.

      • oblio 6 days ago
        8 more

        You're lucky. Most companies don't accept that. Frequently, even when they have part time arrangements, the incentives are such that middle managers are incentivized to squeeze you (including squeezing you out), despite company policies and HR mandates.

        • Slow_Hand 6 days ago

          I am lucky. I work for a very small consultancy (3 people plus occasional contractors) and am paid a fraction of our net income.

          The arrangement was arrived at because the irregular income schedule makes an hourly wage or a salary a poor option for everyone involved. I’m grateful to work for a company where the owners value not only my time and worth but also value a similar work routine themselves.

        • ghaff 6 days ago
          3 more

          40 hours/week is of course just an established norm for a lot of people and companies. But two 20 hour/week folks tend to cost more than one 40 hour/week person for all sorts of reasons.

          • babuloseo 6 days ago
            2 more

            source?

            • ghaff 6 days ago

              Well, for starters people probably want health insurance in the US which often starts at some percentage of full-time. Various other benefits. Then two people are probably just more overhead to manage than one. Though they may offer more flexibility.

        • tonyedgecombe 6 days ago
          3 more

          Which is a shame, because I bet most knowledge workers aren't putting in more than three or four hours of solid work. The rest of the time they are just keeping a seat warm.

          • gibbitz 5 days ago
            2 more

            Spoken like middle management. If a knowledge worker is only putting in 4 hours, they're either mismanaged or dead weight. Fire their manager and see if they are more effective; if not, then let them go. As a developer I routinely work 9-hour days without lunch, and so do the others on my team and most people I've worked with as a developer. Myths like the 10x developer and the lazy 4-hour knowledge worker are like the myth of the welfare queen. We really need to be more aware that when we complain about 5% of people, it becomes 100% to those outside of the field.

            • tonyedgecombe 4 days ago

              >As a developer I routinely work 9 hour days without lunch and so do the others on my team and most people I've worked with as a developer.

              I've come across people like you and they don't produce as much value as they think.

      • howtoquitwell 3 days ago

        I'm working hard on this one. I'm down to a three-day week, and am largely keeping the boundaries around those other four.

        It came about late last year when the current employer started getting gently waved off in early funding pitches. That resulted in some thrash, forced marches to show we could ship, and the attendant burnout for me and a good chunk of the team I managed. I took a hard look at where the company was and where I was, and decided I didn't have another big grind in me right now.

        Rather than just quit like I probably would have previously, I laid it out to our CEO in terms of what I needed: more time taking care of my family and myself, less pressure to deliver impossible things, and some broad idea of what I could say "no" to. Instead of laughing in my face, he dug in, and we had a frank conversation about what I _was_ willing to sign up for. That in turn resulted in a (slow, still work-in-progress) transition where we hired a new engineering leader and I moved into a customer-facing role with no direct reports.

        Now I work a part-time schedule, so I can do random "unproductive" things like repair the dishwasher, chaperone the kid's field trip, or spend the afternoon helping my retired dad make a Costco run. I can reasonably stop and say, "I _could_ pay someone to do that for me, but I actually have time this week and I can just get it done" and sometimes I...actually do, which is kind of amazing?

        ...and it's still fucking hard to watch the big, interesting decisions and projects flow by with other people tackling them and not jump in and offer to help. B/c no matter what a dopamine ride that path can be, it also leads to late nights and weekends working and traveling and feeling shitty about being an absentee parent and partner.

    • seydor 6 days ago

      Most people are leisuring at work (by Keynes-era standards) and also getting paid for it

    • onlyrealcuzzo 4 days ago

      > Keynes suggested that by 2030, we’d be working 15 hour workweeks, with the rest of the time used for leisure.

      I suspect he didn't factor in how many people would be retired and on entitlements.

      We're not SUPER far from that now, when you factor in how much more time off the average person has now, how much larger a percentage of the population is retired, and how much of a percentage is on entitlements.

      The distribution is just very unequal.

      I.E. if you're the median worker, you've probably seen almost no benefit, but if you're old or on entitlements, you've seen a lot of benefits.

    • JumpCrisscross 6 days ago

      > Keynes suggested that by 2030, we’d be working 15 hour workweeks

      Most people with a modest retirement account could retire in their forties to working 15-hour workweeks somewhere in rural America.

      • steveBK123 6 days ago
        3 more

        The trade is you need to live in VHCOL city to earn enough and have a high savings rate. Avoid spending it all on VHCOL real estate.

        And then after living at the center of everything for 15-20 years be mentally prepared to move to “nowhere”, possibly before your kids head off to college.

        Most cannot meet all those conditions and end up on the hedonic treadmill.

        • JumpCrisscross 6 days ago
          2 more

          > you need to live in VHCOL city to earn enough and have a high savings rate

          Yes to the latter, no to the former. The states with the highest savings rates are Connecticut, New Jersey, Minnesota, Massachusetts and Maryland [1]. Only Massachusetts is a top-five COL state [2].

          > then after living at the center of everything for 15-20 years be mentally prepared to move to “nowhere”

          This is the real hurdle. Ultimately, however, it's a choice. One chooses to work harder to access a scarce resource out of preference, not necessity.

          [1] https://en.wikipedia.org/wiki/List_of_U.S._states_by_savings...

          [2] https://en.wikipedia.org/wiki/List_of_U.S._states_by_savings...

          • steveBK123 4 days ago

            CT & NJ being top of the list points to the great NYC metropolitan wage premium though doesn't it? MA at #4 picks up Boston, MD at #5 picks up DC, etc.

            CA is probably nowhere on the list because Silicon Valley is such a small part of the state that its premium gets diluted in the state-level average.

            I am not finding a clear definition of this index, but it appears to be $saved/$income (or $saved/$living expenses), right? So 114% in CT dollars is probably way more than 102% in Kansas dollars.

            It's also worth noting the point I was making: if you take a "one year's NYC income in savings" amount of money and relocate to, say, New Mexico, the money goes a lot further than trying to do the opposite!

    • runeks 4 days ago

      Keynes also convinced us that high unemployment and high inflation couldn't happen at the same time. This was proven wrong in the early 1970s.

    • laughing_man 6 days ago

      It's more likely 15% of the workforce will have jobs. They'll be working eighty hour weeks and making just enough to keep them from leaving.

    • raincom 6 days ago

      Now one has to work 60 hours to afford housing (rent/mortgage) and insurance (health, home, automotive). Yes, food is cheap if one can cook.

    • WorkerBee28474 6 days ago

      > Keynes suggested that by 2030, we’d be working 15 hour workweeks

      Yeah, I'd say I get up to 15 hours of work done in a 40 hour workweek.

    • latentsea 5 days ago

      It's still not 2030 yet. It could still happen.

    • gosub100 6 days ago

      > Instead, we chose consumption

      instead, corporations chose to consume us

    • SarahC_ 7 days ago

      "Bullshit jobs" are the rubbish required to keep the paperwork tidy, assessed and filed. No company pays someone to do -nothing-.

      AI isn't going to generate those jobs, it's going to automate them.

      ALL our bullshit jobs are going away, and those people will be unemployed.

      • tim333 6 days ago
        18 more

        I foresee programmers replaced by AI and the people who programmed becoming pointy-haired bosses to the AI.

        • dgfitz 6 days ago
          5 more

          I foresee that when people only employ AI for programming, it quickly hits the point where they train on their own (usually wrong) code and it spirals into an implosion.

          When kids stop learning to code for real, who writes GCC v38?

          This whole LLM thing is just the next bitcoin/NFT. People had a lot of video cards and wanted to find a new use for them. In my small brain it’s so obvious.

          • tim333 6 days ago
            2 more

            LLMs maybe but there will be other algorithms.

            • dgfitz 6 days ago

              For sure, same point though.

          • nhod 6 days ago
            2 more

            i dunno, i have gotten tons of real work done with LLM’s. i just had o3 edit a contract and swap out pieces of it to make it work with SOW’s instead of embedding the terms directly in the contract. i used to have to do that myself and have a lawyer review it. (i’ve been working with contracts for 30 years, i know enough now to know most basic contract law even though IANAL.) i’ve vibe coded a whole bunch of little things i would never have done myself or hired someone to do. i have had them extract data in seconds that would have taken forever. there is without question real utility in LLM’s and they are also without question getting better very fast.

            to compare that to NFT’s is pretty disingenuous. i don’t know anyone who has ever accomplished anything with an NFT. (i’m happy to be wrong about that, and i have yet to find a single example).

            • dgfitz 6 days ago

              There is without question value to LLMs, I absolutely agree.

              Trying to make them more than they are is the issue I have. Let them be great at crunching words, I’m all about that.

              Pretending that OpenAI is worth billions of dollars is a joke, when I can get 90% of the value they provide for free, on my own mediocre hardware.

        • hansmayer 6 days ago
          12 more

          Ha-ha, this is very funny :) Say, have you ever tried seriously using AI tools for programming? Because if you have, and still believe this, I may have a bridge/Eiffel Tower/railroad to sell you.

          • vidarh 6 days ago
            10 more

            The majority of my code over the last few months has been written by LLMs. Including systems I rely on for my business daily.

            Maybe consider it's not all on the AI tools if they work for others but not for you.

            • hansmayer 6 days ago
              8 more

              Sure man, maybe also share that bit with your clients and see how excited they'll be to learn their vital code or infrastructure may be designed by a stochastic system (*reliable a solid number of times).

              • vidarh 6 days ago
                7 more

                My clients are perfectly happy about that, because they care about the results, not FUD. They know the quality of what I deliver from first-hand experience.

                Human-written code also needs review, and is also frequently broken until subjected to testing and iteration, so our processes are built around proper QA and proper reviews, and then the original source does not matter much.

                It's, however, a lot easier to force an LLM into a straitjacket of enforced linters, enforced test-suite runs, enforced sanity checks, and enforced processes at a level that human developers would quit over, and so as we build out the harness around the AI code generation, we're seeing the quality of that code increase a lot faster than the quality delivered by human developers. It still doesn't beat a good senior developer, but it does often deliver code that handles tasks I could never hand to my juniors.

                (In fact, the harness I'm forcing my AI generated code through was written about 95%+ by an LLM, iteratively, with its own code being forced through the verification steps with every new iteration after the first 100 lines of code or so)

                • hansmayer 6 days ago
                  6 more

                  So to summarise - the quality of the code you generate with an LLM is increasing a lot faster, but somehow never reaching senior level. How is that a lot faster, if it never reaches the (fairly modest) goal? But that's not the end of it. Your mid-junior LLMs are also enforcing quality gates and harnesses on the rest of your LLM mid-juniors. If only there were some proof of that, like a project demo, so it could at least look believable...

                  • vidarh 6 days ago
                    5 more

                    It's a lot faster compared to new developers, who still cost orders of magnitude more from day 1. It's not cost-effective to hand every task to someone senior. I still have juniors on teams because in the long term we still need actual people who need a path to becoming senior devs, but in financial terms they are now a drain.

                    You can feel free not to believe it, as I have no plans to open up my tooling anytime soon - though partly because I'm considering turning it into a service. In the meantime these tools are significantly improving the margins for my consulting, and the velocity increases steadily as every time we run into a problem we make the tooling revise its own system prompt or add additional checks to the harness it runs to avoid it next time.

                    A lot of it is very simple. E.g., a lot of these tools can produce broken edits. They'll usually realise and fix them, but adding an edit tool that forces the code through syntax checks / linters, for example, saved a lot of pain. As does forcing regular test and coverage runs, not just on builds.
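
                    To make the shape of that concrete, here's a simplified sketch of the gating loop (illustrative only, not the real tooling: the model call is stubbed out, and ruff/pytest just stand in for whatever linter and test suite you actually enforce):

                      import subprocess

                      def propose_edit(task, feedback=None):
                          """Placeholder for the LLM step: ask the model to (re)write files
                          for `task`, showing it the previous lint/test output if any."""
                          raise NotImplementedError

                      def gate():
                          """Run the deterministic checks; return (ok, combined output).
                          ruff/pytest are stand-ins for whatever checks you enforce."""
                          output, ok = [], True
                          for cmd in (["ruff", "check", "."], ["pytest", "-q"]):
                              proc = subprocess.run(cmd, capture_output=True, text=True)
                              output.append(proc.stdout + proc.stderr)
                              ok = ok and proc.returncode == 0
                          return ok, "\n".join(output)

                      def run(task, max_rounds=5):
                          feedback = None
                          for _ in range(max_rounds):
                              propose_edit(task, feedback)   # model edits the working tree
                              ok, feedback = gate()          # linter + tests decide, not the model
                              if ok:
                                  return True                # only now is the change offered for review/commit
                          return False                       # give up and escalate to a human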

                    For one of my projects I now let this tooling edit without asking permission, and just answer yes/no to whether it can commit once it's ready. If no, I'll tell it why and review again when it thinks it's fixed things, but a majority of commit requests are now accepted on the first try.

                    For the same project I'm now also experimenting with asking the assistant to come up with a todo list of enhancements for it based on a high level goal, then work through it, with me just giving minor comments on the proposed list.

                    I'm vaguely tempted to let this assistant reload its own modified code when tests pass and leave it to work on itself for a while and see what comes of it. But I'd need to sandbox it first. It's already tried (and was stopped by a permissions check) to figure out how to restart itself to enable new functionality it had written, so it "understands" when it is working on itself.

                    But, by all means, you can choose to just treat this as fiction if it makes you feel better.

                    • hansmayer 6 days ago
                      4 more

                      No, I am not disputing whatever productivity gains you seem to be getting. I was just curious whether LLMs feeding data into each other can work that well, knowing how long it took OpenAI to make ChatGPT properly count the number of "R"s in the word "strawberry". There is this effect called "Habsburg AI". I reckon the syntax-check and linting stuff is straightforward, as it adds a deterministic element to it, but what do you do about the trickier stuff like dreamt-up functions and code packages? Unsafe practices like exposing sensitive data in cleartext, Linux commands which are downright the opposite of what was prompted, etc.? That comes up a fair amount of the time and I am not sure that LLMs are going to self-correct here without human input.

                      • vidarh 5 days ago
                        3 more

                        It doesn't stop them from making stupid mistakes. It does reduce the amount of time I have to deal with the stupid mistakes that they know how to fix if the problem is pointed out to them, so that I can focus on tighter diffs of cleaner code.

                        E.g. a real example: The tooling I mentioned at one point early on made the correct functional change, but it's written in Ruby and Ruby allows defining methods multiple times in the same class - the later version just overrides the former. This would of course be a compilation error in most other languages. It's a weakness of using Ruby with a careless (or mindless) developer...

                        But Rubocop - a linter - will catch it. So forcing all changes through Rubocop and just returning the errors to the LLM made it recognise the mistake and delete the old method.

                        It lowers the cognitive load of the review. Instead of having to wade through and resolve a lot of cruft and make sense of unusually structured code, you can focus on the actual specific changes and subject those to more scrutiny.

                        And then my plan is to experiment with more semantic checks of the same style as what Rubocop uses, but less prescriptive, of the type "maybe you should pay extra attention here, and explain why this is correct/safe" etc. An example might be to trigger this for any change that involves reading a key or password field or card number whether or not there is a problem with it, and both trigger the LLM to "look twice" and indicate it as an area to pay extra attention to in a human review.
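
                        Even a dumb trigger is enough for that kind of second look, something like this (illustrative only; the patterns and names here are made up):

                          import re

                          # Example patterns for "sensitive-looking" identifiers; tune per codebase.
                          SENSITIVE = re.compile(r"password|secret|api[_-]?key|card[_-]?number|token", re.I)

                          def review_flags(diff):
                              """Return 'pay extra attention' prompts for changed lines that touch
                              sensitive-looking identifiers, whether or not anything is actually wrong."""
                              flags = []
                              for line in diff.splitlines():
                                  if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
                                      if SENSITIVE.search(line):
                                          flags.append("Pay extra attention here and explain why this "
                                                       "is correct/safe: " + line.strip())
                              return flags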

                        It doesn't need to be perfect, it just needs to provide enough of a harness to make it easier for humans in the loop to spot the remaining issues.

                        • hansmayer 5 days ago
                          2 more

                          Right, so you understand that any dev who already uses, for example, GitHub Copilot with various code-syntax extensions already achieves whatever it is that your new service is delivering? I'd spare myself the effort if I were you.

                          • vidarh 5 days ago

                            It didn't start with the intent of being a service; I started with it because there were a number of things that Copilot or tools like Claude Code don't do well enough that annoyed me, and spending a few hours was sufficient to get to the point where it's now my primary coding assistant because it works better for me for my stack, and because I can evolve it further to solve the specific problems I need solved.

                            So, no, I'll keep doing this because doing this is already saving me effort for my other projects.

            • Lu2025 6 days ago

              > written by LLMs

              Writing code is often easier than reading it. I suspect that coders soon will face what translators face now: fixing machine output at 2x to 3x less pay.

          • tim333 6 days ago

            I tried and they weren't that good. I'm gazing into the future a little.

      • antonvs 6 days ago

        > "Bullshit jobs" are the rubbish required to keep the paperwork tidy, assessed and filed.

        It's also the jobs that involve keeping people happy somehow, which may not be "productive" in the most direct sense.

        One class of people that needs to be kept happy are managers. What makes managers happy is not always what is actually most productive. What makes managers happy is their perception of what's most productive, or having their ideas about how to solve some problem addressed.

        This does, in fact, result in companies paying people to do nothing useful. People get paid to do things that satisfy a need that managers have perceived.

      • r0s 6 days ago

        AI is going to 10x the amount of bullshit, fully automating the process.

        NONE of the bullshit jobs are going away, there will simply be bigger, more numerous bullshit.

    • pmlnr 7 days ago

      Keynes was talking about work in every sense, including house chores. We're well below 15 hours of house chores by now, so that part became true.

      • LeonB 7 days ago
        8 more

        Washing machines created a revolution where we could now expend 1/10th of the human labour to wash the same amount of clothes as before. We now have more than 10 times as much clothes to wash.

        I don’t know if it’s induced demand, revealed preference or Jevons’ paradox, maybe all 3.

        • B1FF_PSUVM 6 days ago

          > We now have more than 10 times as much clothes to wash.

          OK, but I doubt we're washing 10 times as many clothes, unless people are wearing them for one hour between washes...

        • jjk166 6 days ago

          > We now have more than 10 times as much clothes to wash.

          Citation needed.

        • tim333 6 days ago
          5 more

          I saw some research once that the hours women spend doing housework haven't changed. I think that's because of human nature, not anything to do with the tech.

          • AuryGlenz 6 days ago
            3 more

            That's nonsense. It used to take women a full workday per week just to wash clothes.

            • AStonesThrow 6 days ago

              Well, now we can own more clothes! And we can wash them more often! And rather than specialist washerwomen, everyone can/must use the laundry-room robots!

          • Lu2025 6 days ago

            [flagged]

      • itishappy 7 days ago

        We've got 10 whole hours left over for "actual" work!

        (Quotes because I personally have a significantly harder time doing bloody housework...)

      • leoedin 7 days ago
        6 more

        Clearly you don’t have children!

        • antonvs 6 days ago
          3 more

          Life pro tip: teach your children to do chores.

          • aleph_minus_one 6 days ago
            2 more

            > Life pro tip: teach your children to do chores.

            Before teaching your children to do chores: x hours per week for chores

            After teaching your children to do chores: y hours per week to have annoying discussions with the child, and X hours per week cautioning the children to do the chores and ensuring that your children do the chores properly. Here X > x.

            Additional time for you: -((X-x)+y), where X>x and additionally y > 0.

            • nrclark 5 days ago

              I did a lot of chores growing up. Looking back, X>x was true for the first few months of each new chore; but X died down to zero as time went on.

        • tim333 6 days ago
          2 more

          I was thinking it's a function of the social setting. Single bloke 1h/week. Couple 5h/week. With kids continuous. Or some such.

          • leoedin 6 days ago

            I imagine standards have also shifted. It just wouldn’t have been possible to wash a child’s clothes after one wear before the invention of the washing machine. People also had far less clothing that they could have even needed to wash.

      • autobodie 7 days ago
        2 more

        Source? Keynes was a serious economist, not a charlatan futurist.

        • itishappy 7 days ago

          John Maynard Keynes (1930) - Economic Possibilities for our Grandchildren

          > For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter-to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!

          http://www.econ.yale.edu/smith/econ116a/keynes1.pdf

          https://www.aspeninstitute.org/wp-content/uploads/files/cont...

  • digitcatphd 7 days ago

    As of now yes. But we are still in day 0.1 of GenAI. Do you think this will be the case when o3 models are 10x better and 100x cheaper? There will be a turning point but it’s not happened yet.

    • godelski 7 days ago

      Yet we're what? 5 years into "AI will replace programmers in 6 months"?

      10 years into "we'll have self driving cars next year"

      We're 10 years into "it's just completely obvious that within 5 years deep learning is going to replace radiologists"

      Moravec's paradox strikes again and again. But this time it's different and it's completely obvious now, right?

      • hn_throwaway_99 7 days ago
        17 more

        I basically agree with you, and I think the thing that is missing from a bunch of responses that disagree is that it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling. That is, most folks were pretty astounded by the gains you could get from just stuffing more training data into these models, but like someone who argues a 15 year old will be 50 feet tall based on the last 5 years' growth rate, people who are still arguing that past growth rates will continue apace don't seem to be honest (or aware) to me.

        I'm not at all saying that it's impossible some improvement will be discovered in the future that allows AI progress to continue at a breakneck speed, but I am saying that the "progress will only accelerate" conclusion, based primarily on the progress since 2017 or so, is faulty reasoning.

        • godelski 6 days ago
          8 more

            > it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling
          
          What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.

          I don't know about the rest, but I spoke up because I didn't want to hit a brick wall, I want to keep going! I still want to keep going! But if accurate predictions (with good explanations) aren't a reason to shift resource allocation, then we just keep making the same mistake over and over. We let in the conmen and the people who get so excited by success that they become blind to the pitfalls.

          And hey, I'm not saying give me money. This account is (mostly) anonymous. There are plenty of people who made accurate predictions and tried working in other directions but never got funding to test how their methods scale up. We say there are no alternatives, but nothing else has been given a tenth of the effort. Apples and oranges...

          • antonvs 6 days ago
            4 more

            > What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.

            You need to model the business world and management more like a flock of sheep being herded by forces that mostly don't have to do with what actually is going to happen in future. It makes a lot more sense.

            • godelski 6 days ago

                > mostly don't have to do with what actually is going to happen
              
              Yet I'm talking about what did happen.

              I'm saying we should have memory. Look at predictions people make. Reward accurate ones, don't reward failures. Right now we reward whoever makes the craziest predictions. It hasn't always been this way, so we should go back to being less crazy.

            • cgio 6 days ago

              Practically no one is herded by what is actually going to happen, hardly even by what is expected to happen. Business pretends that it is driven by expectations, but it is mostly driven by the past, as in financial statements. What is the bonus we can get this year? There is of course strategic thinking, and I don't want to discount that part of business, but it is not what drives most of these AI-as-a-cost-saving-measure decisions. This is the unimaginative part of AI application and as such is relegated to the unimaginative managers.

            • johnnyanmac 6 days ago

              > “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

              It's all a big hype bubble, and not only is no one in the industry willing to pop it, they actively defend against popping a bubble that is clearly rupturing on its own. It's emblematic of how modern businesses no longer care about a proper 10-year portfolio and care more about how to make the next quarter look good.

              There's just no skin in the game, and everyone's ransacking before the inevitable fire instead of figuring out how to prevent the fire to begin with.

          • oblio 6 days ago

            > What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.

            Those people always do that. Shouting about cryptocurrencies and NFTs from the rooftops 3-4 years ago, now completely gone.

            I suspect they're the same people, basically get rich quick schemers.

          • brrt 6 days ago
            2 more

            Sure, you were right.

            But if you had been wrong and we now had superintelligence, the upside for its owners would presumably be great.

            ... Or at least that's the hypothesis. As a matter of fact intelligence is only somewhat useful in the real world :-)

            • generic92034 6 days ago

              I am not sure the owners would remain owners in the case of real superintelligence, though.

        • HDThoreaun 6 days ago
          7 more

          I don't see any wall. Gemini 2.5 and o3/o4 are incredible improvements. Gen AI is miles ahead of where it was a year ago, which was miles ahead of where it was 2 years ago.

          • dismalaf 6 days ago
            3 more

            The actual LLM part isn't much better than a year ago. What's better is that they've added additional logic and made it possible to intertwine traditional, expert-system style AI plus the power of the internet to augment LLMs so that they're actually useful.

            This is an improvement for sure, but LLMs themselves are definitely hitting a wall. It was predicted that scaling alone would allow them to reach AGI level.

            • munksbeer 5 days ago
              2 more

              > It was predicted that scaling alone would allow them to reach AGI level.

              This is a genuine attempt to inform myself. Could you point me to those sorts of claims from experts at the top?

              • hn_throwaway_99 4 days ago

                There were definitely people "at the top" who were essentially arguing that more scale would get you to AGI - Ilya Sutskever of OpenAI comes to mind (e.g. "next-token prediction is enough for AGI").

                There were definitely many other prominent researchers who vehemently disagreed, e.g. Yann LeCun. But it's very hard for a layperson (or, for that matter, another expert) to determine who is or would be "right" in this situation - most of these people have strong personalities to put it mildly, and they often have vested interests in pushing their preferred approach and view of how AI does/should work.

          • asadotzler 6 days ago
            3 more

            The improvements have less to do with scaling than with adding new techniques like better fine-tuning and reinforcement learning. The infinite scaling we were promised, which only required more content and more compute to reach god tier, has indeed hit a wall.

            • munksbeer 5 days ago
              2 more

              I probably wasn't paying enough attention, but I don't remember that being the dominating claim that you're suggesting. Infinite scaling?

              • hn_throwaway_99 4 days ago

                People were originally very surprised that you could get so much functionality by just pumping more data and adding more parameters to models. What made OpenAI initially so successful is that they were the first company willing to make big bets on these huge training runs.

                After their success, I definitely saw a ton of blog posts and general "AI chatter" that to get to AGI all you really needed to do (obviously I'm simplifying things a bit here) was get more data and add more parameters, more "experts", etc. Heck, OpenAI had to scale back its pronouncements (GPT 5 essentially became 4.5) when they found that they weren't getting the performance/functionality advances they expected after massively scaling up their model.

        • mark_l_watson 6 days ago

          I basically agree with you also, but I have a somewhat contrarian view of scaling -> brick wall. I feel like applications of powerful local models are stagnating, perhaps because Apple has not done a good job so far with Apple Intelligence.

          A year ago I expected a golden age of local model intelligence integrated into most software tools, and more powerful commercial tools like Google Jules to be something used perhaps 2 or 3 times a week for specific difficult tasks.

          That said, my view of the future is probably now wrong, I am just saying what I expected.

      • jjani 7 days ago
        9 more

        > Yet we're what? 5 years into "AI will replace programmers in 6 months"?

        Realistically, we're 2.5 years into it at most.

        • hansmayer 6 days ago
          8 more

          No, the hype cycle started around 2019, slowly at first. The technology this is built with is more like 20 years old, so no, we are not at most 2.5 years into it, really.

          • jjani 6 days ago

            If you can quote anyone well-known saying we'd be replacing programmers in 6 months back in 2019, I'd be interested to read it.

          • micromacrofoot 6 days ago
            6 more

            we're 2.5 years into the current hype trend, no way was this mainstream until at least 2022

            • godelski 6 days ago
              5 more

              GPT3 dropped in 2020. That's when it hit mainstream

              • AuryGlenz 6 days ago
                2 more

                GPT3 wasn't that impressive. GPT 3.5 is when it became "oh wow, this could really change things," and that was 2022.

              • og_kalu 5 days ago

                GPT-3 shook the research world but it was by no means mainstream until the ChatGPT release in Nov 2022.

              • micromacrofoot 5 days ago

                I feel like no one was really talking about this stuff until midjourney and dalle, but I can agree to disagree

      • tim333 7 days ago
        20 more

        Four years into people mocking "we'll have self driving cars next year" while they are on the street daily driving around SF.

        • xorcist 6 days ago
          5 more

          They are self-driving the same way a tram or subway can be self-driving. They operate within a tightly bounded, designated area. They're not competing with human drivers. Still a marvel of human engineering, just quite expensive compared with other forms of public transport. It just doesn't compete in the same space and likely never will.

          • tim333 6 days ago
            4 more

            They are literally competing with human uber drivers in the area they operate and also having a much lower crash and injury rate.

            I admit they don't operate everywhere - only certain routes. Still they are undoubtedly cars that drive themselves.

            I imagine it'll be the same with AGI. We'll have robots / AIs that are much smarter than the average human and people will be saying they don't count because humans win X Factor or something.

            • chipsrafferty 6 days ago

              Self-driving vehicles can only exist in cities of extreme wealth like SF. Try running them in Philadelphia and see what happens.

            • hansmayer 6 days ago
              2 more

              How are they competing, if their routes are limited?

              • mediaman 6 days ago

                The cotton gin processed short fiber cotton, but not long fiber cotton.

                Did the cotton gin therefore not compete with human labor?

        • hansvm 6 days ago

          They're driving, but not well in my (limited) interactions with them. I had a waymo run me completely out of my lane a couple months ago as it interpreted 2 lanes of left turn as an extra wide lane instead (or, worse, changed lanes during the turn without a blinker or checking its sensors, though that seems unlikely).

        • HarHarVeryFunny 6 days ago
          8 more

          Yes, but ...

          The argument that self-driving cars should be allowed on public roads as long as they are statistically as safe as human drivers (on average) seems valid, but of course none of these cars have AGI... they perform well in the anticipated simulator conditions in which they were trained (as long as they have the necessary sensors, e.g. Waymo's lidar, to read the environment in reliable fashion), but will not perform well in emergency/unanticipated conditions they were not trained on. Even outside of emergencies, Waymos still sometimes need to "phone home" for remote assistance in knowing what to do.

          So, yes, they are out there, perhaps as safe on average as a human (I'd be interested to see a breakdown of the stats), but I'd not personally be comfortable riding in one since I'm not senile, drunk, a teenager, a hothead, or distracted (using a phone while driving), etc - not part of the class that is dragging the human safety stats down. I'd also not trust a Tesla where penny-pinching, or just arrogant stupidity, has resulted in a sensor-poor design liable to failure modes like running into parked trucks.

          • jkestner 6 days ago
            4 more

              I'd not personally be comfortable riding in one since I'm not senile, drunk, teenager, hothead, distracted (using phone while driving), etc - not part of the class that are dragging the human safety stats down.
            
            The challenge is that most people think they’re better than average drivers.

            • HarHarVeryFunny 6 days ago
              3 more

              I'm not sure what the "challenge" is there, but certainly true in terms of human psychology.

              My point was that if you are part of one of these accident-prone groups, you are certainly worse than average, and are probably safer (both for yourself, and everyone around you) in a Waymo. However, if you are an intelligent non-impaired experienced driver, then maybe not, and almost certainly not if we're talking about emergency and dangerous situations which is where it really matters.

              • YokoZar 6 days ago
                2 more

                How can you know if you're a good driver in an emergency situation? We don't exactly get a lot of practice.

                • HarHarVeryFunny 6 days ago

                  Sure, you don't know how well any specific driver is going to react in an emergency situation, and some are going to be far worse than others (e.g. panicking, or not thinking quickly enough), but the human has the advantage of general intelligence and therefore NOT having to rely on having had practice at the specific circumstance they find themselves in.

                  A recent example - a few weeks ago I was following another car in making a turn down a side road, when suddenly that car stops dead (for no externally apparent reason), and starts backing up fast about to hit me. I immediately hit my horn and prepare to back up myself to get out of the way, since it was obvious to me - as a human - that they didn't realize I was there, and without intervention would hit me.

                  Driving away I watch the car in my rear view mirror and see it pull a U-turn to get back out of the side road, making it apparent why they had stopped before. I learned something, but of course the driverless car is incapable of learning, and certainly has no theory of mind, and would behave same as last time - good or bad - if something similar happened again.

          • johnnyanmac 6 days ago
            2 more

            In my view, as long as companies don't want to be held liable for an accident, they shouldn't be on roads. They need to be extremely confident, to the point of putting their money where their mouths are. That's true "safety".

            That's the main difference with a human driver. If I take an Uber and we crash, that driver is liable. Waymo would fight tooth and nail to blame anything else.

            • oblio 6 days ago

              Mercedes is doing this for specific places and conditions.

          • tim333 5 days ago

            Well, it depends on the details. I'd trust a Waymo as much as an Uber but I'm pretty skeptical of the Tesla stuff they are launching in Austin.

        • godelski 6 days ago
          4 more

          I'm quoting Elon.

          I don't care about SF. I care about what I can buy as a typical American. Not as an enthusiast in one of the most technologically advanced cities on the planet.

          • horns4lyfe 6 days ago
            3 more

            They’re in other cities too…

            • godelski 6 days ago
              2 more

              Other cities still don't make it available to the average American.

              You read the words but missed their meaning

      • roenxi 7 days ago
        15 more

        As far as I've seen we appear to already have self driving vehicles, the main barriers are legal and regulatory concerns rather than the tech. If a company wanted to put a car on the road that beetles around by itself there aren't any crazy technical challenges to doing that - the issue is even if it was safer than a human driver the company would have a lot of liability problems.

        • RivieraKid 7 days ago

          This is just not true, Waymo, MobilEye, Tesla and Chinese companies are not bottlenecked by regulations but by high failure rate and / or economics.

        • risyachka 6 days ago

          They are only self-driving in very controlled environments: a few very well-mapped cities with good roads, in good weather.

          And it took, what, like 2 decades to get there. So no, we don't have self-driving, not even close. Those examples look more like hard-coded solutions for custom test cases.

        • jeffreygoesto 7 days ago

          What? If that stuff worked, liability would never come into play. How can you state that it works and claim liability problems at the same time?

        • apwell23 7 days ago
          11 more

          > the main barriers are legal and regulatory concerns rather than the tech

          they have failed in SF, Phoenix, and other cities that rolled out the red carpet for them

          • roenxi 7 days ago
            10 more

            Pretty solid evidence that self driving cars already exist though.

            • godelski 6 days ago
              2 more

              As prototypes, yes. But that's like pointing to Japanese robots in the 80's and expecting robot butlers any day now. Or maybe Boston dynamics 10 years ago. Or when OpenAI was into robotics.

              There's a big gap between seeing something work in the lab and being ready for real world use. I know we do this in software, but that's a very abnormal thing (and honestly, maybe not the best)

              • xnx 6 days ago

                Waymo is doing 250k paid rides/week.

            • laserlight 6 days ago
              2 more

              When people say “we'll have self-driving cars next year”, I understand that self-driving cars will be widespread in the developed world and accessible to those who pay a premium. Given the status quo, I find it pointless to discuss the semantics of whether they exist or not.

              • godelski 6 days ago

                Especially considering it would be weird to say "we'll have <something> next year" when we've technically had it for decades.

                And more specifically, I'm referencing Elon, where the context is that it's going to be a software push into Teslas that people already own

            • antonvs 6 days ago
              3 more

              You're confusing "exist" with "viable".

              When someone talks about "having" self-driving cars next year, they're not talking about what are essentially pilot programs.

              • roenxi 6 days ago

                I don't think that is a reasonable generalisation. A lot of people would have been talking about the first person to take a real trip in a car that drives itself. A record that is in the past.

                Not to mention that HN gets really tetchy about achieving specifically SAE Level 5 when in practice some pretty basic driver assist tools are probably closer to what people meant. It reminds me of a gentleman I ran into who was convinced that the OpenAI DoTA bot with a >99% win rate couldn't really be said to be playing the game. If someone can take their hands off the wheel for 10 minutes we're there in a common language sense; the human in the car isn't actively in control.

              • FabHK 6 days ago

                Good point. On the "exist" interpretation, we've "had" flying cars for several decades.

            • pydry 7 days ago
              2 more

              I remember one reason Phoenix was chosen as a trial location was that it was supposed to be one of the easiest places to drive.

              It's pretty damning that it failed there.

              • jkestner 6 days ago

                Yeah, it’s a big grid with wide streets. Did it fail there? If so I imagine it’s just due to lack of business—there are almost no taxis in Phoenix. Mostly just from the airport.

      • hansmayer 6 days ago

        100% this. I always argue that groundbreaking technologies are clearly groundbreaking from the start. It is almost a bit like a film: if you have to struggle to get into it in the first few minutes, you may as well spare yourself the rest.

      • tsunamifury 7 days ago
        14 more

        [flagged]

        • seanhunter 7 days ago
          4 more

          I consulted a radiologist more than 5 years after Hinton said that it was completely obvious that radiologists would be replaced by AI in 5 years. I strongly suspect they were not an AI.

          Why do I think this?

          1) They smelled slightly funny. 2) They got the diagnosis wrong.

          OK maybe #2 is a red herring. But I stand by the other reason.

          • HDThoreaun 6 days ago
            2 more

            I know a radiologist and we talk a decent bit about AI usage in the field. Every radiologist today is making heavy use of AI. They pre-screen everything, and from what I understand it has led to massive productivity gains. It hasn't led to job losses yet, but there's so much money on the line it really feels to me like we're just waiting for the straw that breaks the camel's back. No one wants to be the first to fully get rid of radiologists, but once one hospital does, the rest will quickly follow suit.

            • lazide 6 days ago

              One word - liability.

          • fulafel 6 days ago

            The quote appears to be “We should stop training radiologists now, it’s just completely obvious within five years deep learning is going to do better than radiologists.”

            So there's some room for interpretation, the weaker interpretation is less radical (that AI could beat humans in radiology tasks in 5 years).

        • godelski 7 days ago

          I named 3 things...

          You're going to have to specify which 2 you think happened

        • hengheng 7 days ago
          6 more

          I have a fusion reactor to sell to you.

          • laserlight 6 days ago
            5 more

            Some people are ahead of you by 3.5 years [0]:

            > Helion has a clear path to net electricity by 2024, and has a long-term goal of delivering electricity for 1 cent per kilowatt-hour. (!)

            [0] https://blog.samaltman.com/helion

            • antonvs 6 days ago
              2 more

              You're missing the big picture. Helion can still make their goal. Once they have a working fusion reactor they can use the energy to build a time machine.

              • laserlight 6 days ago

                Of course, silly me. I should put more practice time into 4D chess.

            • alphager 6 days ago
              2 more

              We're halfway into 2025 and you're citing a goal they should have reached by 2024. Did they reach that goal?

        • croes 7 days ago

          Where did it happen?

          They try it, but it’s not reliable

        • apwell23 7 days ago

          did you by any chance send money to nigerian prince ?

      • gardenhedge 6 days ago

        Over ten years for the "we'll have self-driving cars next year" spiel

    • directevolve 7 days ago

      We’re already heading toward the sigmoid plateau. The GPT 3 to 4 shift was massive. Nothing since has touched that. I could easily go back to the models I was using 1-2 years ago with little impact on my work.

      I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved. But the base model powering the whole operation seems stuck.

      • threeseed 7 days ago
        3 more

        > I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved

        It really hasn't.

        The problem is that a GenAI system needs to not only understand the large codebase but also the latest stable version of every transitive dependency it depends on. Which is typically in the order of hundreds or thousands.

        Having it build a component with 10 year old, deprecated, CVE-riddled libraries is of limited use especially when libraries tend to be upgraded in interconnected waves. And so that component will likely not even work anyway.

        I was assured that MCP was going to solve all of this but nope.

        • HumanOstrich 7 days ago
          2 more

          How did you think MCP was going to solve the issue of a large number of outdated dependencies?

          • threeseed 7 days ago

            That large number of outdated dependencies is baked into the LLM "index", which can't be rapidly refreshed because of the training costs.

            MCP would allow it to instead get this information at run-time from language servers, dependency repositories etc. But it hasn't proven to be effective.
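
            The run-time lookup itself is the easy part - e.g. asking the package index for the latest release instead of trusting whatever versions were frozen into the training data (a rough sketch using PyPI's JSON API, nothing MCP-specific; the function name is made up):

              import json
              import urllib.request

              def latest_version(package):
                  """Ask PyPI at run time for the latest released version of `package`,
                  rather than relying on versions memorised during training."""
                  url = f"https://pypi.org/pypi/{package}/json"
                  with urllib.request.urlopen(url) as resp:
                      return json.load(resp)["info"]["version"]

              # e.g. latest_version("requests") -> the current release, not a years-old guess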

      • chrsw 6 days ago
        3 more

        > I could easily go back to the models I was using 1-2 years ago with little impact on my work.

        I can't. GPT-4 was useless for me for software development. Claude 4 is not.

        • directevolve 6 days ago
          2 more

          Interesting, what type of dev work do you do? Performance does vary widely across languages and domains.

          • chrsw 6 days ago

            Embedded software for robotics.

    • solumunus 7 days ago

      I use LLMs daily and love them, but at the current rate of progress it’s just not really something worth worrying about. Those who are hysterical about AI seem to think LLMs are getting exponentially better when in fact diminishing returns are hitting hard. Could some new innovation change that? It’s possible, but it’s not inevitable, or at least not necessarily imminent.

      • kbelder 6 days ago

        I agree that the core models are only going to see slow progression from here on out, until something revolutionary happens... which might be a year from now, or maybe twenty years. Who knows.

        But we are going to see a huge explosion in how those models are integrated into the rest of the tech ecosystem. Things that a current model could do right now, if only your car/watch/videogame/heart monitor/stuffed animal had a good working interface into an AI.

        Not necessarily looking forward to that, but that's where the growth will come.

    • threeseed 7 days ago

      How are we in 0.1 of GenAI ? It's been developed for nearly a decade now.

      And each successive model that has been released has done nothing to fundamentally change the use cases that the technology can be applied to i.e. those which are tolerant of a large percentage of incoherent mistakes. Which isn't all that many.

      So you can keep your 10x better and 100x cheaper models because they are of limited usefulness let alone being a turning point for anything.

      • Flemlo 7 days ago
        5 more

        A decade?

        The explosion of funding, awareness, etc. only happened after the GPT-3 launch

        • hyperadvanced 7 days ago

          Funding is behind the curve. Social networks existed in 2003 and Facebook became a billion dollar company a decade later. AI horror fantasies from the 90’s still haven’t come true. There is no god, there is no Skynet.

        • sbm_au 6 days ago

          AlphaGo beating the top human player was in 2016. To my memory, that was one of the first public breakthroughs of the new era of machine learning.

          Around 2010 when I was at university, a friend did their undergraduate thesis on neural networks. Among our cohort it was seen as a weird choice and a bit of a dead-end from the last AI winter.

        • imtringued 7 days ago
          2 more

          That was five years ago not yesterday.

          • Flemlo 7 days ago

            I didn't say yesterday.

            Nonetheless it took OpenAI until Nov 2022 to reach 1 million users.

            The overall awareness and breakthrough probably wasn't in 2020.

    • nothercastle 7 days ago

      I think they will be 10-100x cheaper; I'd be really surprised if we even doubled the quality, though.

    • makeitdouble 7 days ago

      How does it work if they get 10x better in 10 years ? Everything else will have already moved on and the actual technology shift will come from elsewhere.

      Basically, what if GenAI is the Minitel and what we want is the internet.

    • nradov 7 days ago

      10× better by what metric? Progress on LLMs has been amazing but already appears to be slowing down.

      • jaggederest 7 days ago
        8 more

        All these folks are once again seeing the first 1/4 of a sigmoid curve and extrapolating to infinity.

        • drodgers 7 days ago
          7 more

          No doubt from me that it’s a sigmoid, but how high is the plateau? That’s also hard to know from early in the process, but it would be surprising if there’s not a fair bit of progress left to go.

          Human brains seem like an existence proof for what’s possible, but it would be surprising if humans also represent the farthest physical limits of what’s technologically possible without the constraints of biology (hip size, energy budget etc).

          • leoedin 7 days ago
            2 more

            Biological muscles are proof that you can make incredibly small and forceful actuators. But the state of robotics is nowhere near them, because the fundamental construction of every robotic actuator is completely different.

            We’ve been building actuators for 100s of years and we still haven’t got anything comparable to a muscle. And even if you build a better hydraulic ram or brushless motor driven linear actuator you will still never achieve the same kind of behaviour, because the technologies are fundamentally different.

            I don’t know where the ceiling of LLM performance will be, but as the building blocks are fundamentally different to those of biological computers, it seems unlikely that the limits will be in any way linked to those of the human brain. In much the same way the best hydraulic ram has completely different qualities to a human arm. In some dimensions it’s many orders of magnitudes better, but in others it’s much much worse.

            • lazide 6 days ago

              Biological muscles come with a lot of baggage, very constrained operating environments, and limited endurance.

              It’s not just that ‘we don’t know how to build them’, it’s that the actuators aren’t a standalone part - and we don’t know how to build (or maintain/run in industrial enviroments!) the ‘other stuff’ economically either.

          • audunw 7 days ago

            I don’t think it’s hard to know. We’re already seeing several signs of being near the plateau in terms of capabilities. Most big breakthroughs these days seem to be in areas where we haven’t yet spent the effort on training and model engineering, like recent improvements in video generation. So of course we could get improvements in areas where we haven’t tried to use ML yet.

            For text generation, it seems like the fast progress was mainly due to feeding the models exponentially more data and exponentially more compute power. But we know that the growth in data is over. The growth in compute has shifted from a steep curve (just buy more chips) to a slow curve (we have to build exponentially more factories if we want exponentially more chips).

            I’m sure we will have big improvements in efficiency. I’m sure nearly everyone will use good LLMs to support them in their work, and they may even be able to do all they need to do on-device. But that doesn’t make the models significantly smarter.

          • jaggederest 7 days ago

            The wonderful thing about a sigmoid is that, just as it seems like it's going exponential, it goes back to linear. So I'd guess we're not going to see 1000x from here - I could be wrong, but I think the low hanging fruit has been picked. I would be surprised in 10 years if AI were 100x better than it is now (per watt, maybe, since energy devoted to computing is essentially the limiting factor)

            The thing about the latter 1/3rd of a sigmoid curve is, you're still making good progress, it's just not easy any more. The returns have begun to diminish, and I do think you could argue that's already happening for LLMs.

          • formerly_proven 7 days ago

            Progress so far has been half and half technique and brute force. Overall technique has now settled for a few years, so that's mostly in the tweaking phase. Brute force doesn't scale by itself, and semiconductors have been running into a wall for the last few years. Those (plus stagnating outcomes) seem decent reasons to suspect the plateau is nigh.

          • GoblinSlayer 7 days ago

            Human brains are easy to do, just run evolution for neural networks.

      • elif 7 days ago
        15 more

        With autonomous vehicles, the narrative of imperceptibly slow, incremental change chasing 9's is still the zeitgeist, despite an actual 10x improvement in fatality rates compared to human drivers already existing.

        There is a lag in how humans are reacting to AI, which is probably a reflexive aspect of human nature. There are so many strategies being employed to downplay progress in a technology which did not exist 3 years ago and now represents a frontier of countless individual disciplines.

        • intended 7 days ago
          13 more

          This is my favorite thing to point out from the day we started talking about autonomous vehicles on tech sites.

          If you took a Tesla or a Waymo and dropped it into a tier 2 city in India, it would stop moving.

          Driving data is cultural data, not data about pure physics.

          You will never get to full self driving, even with more processing power, because the underlying assumptions are incorrect. Doing more of the same thing will not achieve the stated goal of full self driving.

          You would need to have something like networked driving, or government supported networks of driving information, to deal with the cultural factor.

          Same with GenAI - the tooling factor will not magically solve the people, process, power and economic factors.

          • yusina 7 days ago

            > You would need to have something like networked driving, or government supported networks of driving information, to deal with the cultural factor.

            Or actual intelligence. That observes its surroundings and learns what's going on. That can solve generic problems. Which is the definition of intelligence. One of the obvious proofs that what everybody is calling "AI" is fundamentally not intelligent, so it's a blatant misnomer.

          • binoct 7 days ago
            5 more

            One of my favorite things to question about autonomous driving is the goalposts. What do you mean the “stated goal of full self driving”, which is unachievable? Any vehicle, anywhere in the world, in any conditions? That seems an absurd goal that ignores the very real value in having vehicles that do not require drivers and are safer than humans but are limited to certain regions.

            Absolutely driving is cultural (all things people do are cultural), but given the tens of millions of miles driven by Waymo, clearly it has managed the cultural factor in the places it has been deployed. Modern autonomous driving is about how people drive far more than the rules of the road, even on the highly regulated streets of western countries. Absolutely the constraints of driving in Chennai are different, but what is fundamentally different? What about it demands an impossible leap in processing power to operate there?

            • LegionMammal978 7 days ago
              3 more

              > What do you mean the “stated goal of full self driving”, which is unachievable? Any vehicle, anywhere in the world, in any conditions? That seems an absurd goal that ignores the very real value in having vehicles that do not require drivers and are safer than humans but are limited to certain regions.

              I definitely recall reading some thinkpieces along the lines of "In the year 203X, there will be no more human drivers in America!" which was and still is clearly absurd. Just about any stupidly high goalpost you can think of has been uttered by someone in the world early on.

              Anyway, I'd be interested in a breakdown on reliability figures in urban vs. suburban vs. rural environments, if there is such a thing, and not just the shallow take of "everything outside cities is trivial!" I sometimes see. Waymo is very heavily skewed toward (a short list of) cities, so I'd question whether that's just a matter of policy, or whether there are distinct challenges outside of them. Self-driving cars that only work in cities would be useful to people living there, but they wouldn't displace the majority of human driving-miles like some want them to.

              • jhbadger 6 days ago
                2 more

                I mean, even assuming the technical challenges to self-driving can be solved, it is obvious that there will still be human drivers because some humans enjoy driving, just as there are still people who enjoy riding horses even after cars replaced horses for normal transport purposes. Although as with horses, it is possible that human driving will be seen as secondary and limited to minor roads in the future.

            • intended 6 days ago

              I'd appreciate it if we didn't hurry past the acknowledgement that self driving will be a cultural artifact. It's been championed as a purely technical one, and pointing this out has been unpopular since day 1, because it didn't gel with the zeitgeist.

              As others will attest, when adherence to driving rules is spotty, behavior is highly variable and unpredictable. You need to have a degree of straight-up aggression if you want to be able to handle an auto driver who is cheating the laws of physics.

              Another example of something that's obvious based on crimes in India: people can and will come up to your car during a traffic jam, tap your chassis to make it sound like there was an impact, and then snatch your phone from the dashboard when you roll your window down to find out what happened.

              This is simply to illustrate and contrast how pared down technical intuitions of "driving" are, when it comes to self driving discussions.

              This is why I think level 5 is simply not happening, unless we redefine what self driving is, or the approach to achieving it. I feel there's more to be had from a centralized traffic orchestration network that supplements autonomous traffic, rather than trying to solve it onboard the vehicle.

          • atleastoptimal 7 days ago
            5 more

            Why couldn’t an autonomous vehicle adapt to different cultures? American driving culture has specific qualities and elements to learn, same with India or any other country.

            Do you really think Waymos in SF operate solely on physics? There are volumes of data on driver behavior, when to pass, change lanes, react to aggressive drivers, etc.

            • huntertwo 6 days ago

              Yeah exactly. It’s kind of absurd to take the position that it’s impossible to have “full self driving” because Indian driving is different than American driving. You can just change the model you’re using. You can have the model learn on the fly. There are so many possibilities.

            • intended 6 days ago
              3 more

              Because this statement, unfortunately, ends up moving the underlying goal posts about what self driving IS.

              And the point that I am making, is that this view was never baked into the original vision of self driving, resulting in predictions of a velocity that was simply impossible.

              Physical reality does not have vibes and is more amenable to prediction than human behavior. Or cow behavior, or wildlife, if I were to include some other places.

              • huntertwo 6 days ago
                2 more

                Marketers gonna market. But if we ignore the semantics of what full self driving actually means for a minute, there are still a lot of possibilities for self driving in the future. It takes longer than we perceive initially because we don’t have insight into the nuances needed to achieve these things. It’s like when you plan a software project: you think it’s going to take less time than it does because you don’t have a detailed view until you’re already in the weeds.

                • intended 6 days ago

                  To quote someone else, if my grandmother had wheels, she would be a bicycle.

                  This is a semantic discussion, because it is about what people mean when they talk about self driving.

                  Just ditching the meaning is unfair, because goddamit, the self driving dream was awesome. I am hoping to be proved wrong, but not because we moved our definition.

                  Carve a separate category out, which articulates the updated assumptions. Redefining it is a cop out and dare I say it, unbecoming of the original ambition.

                  Networked Autonomous vehicles?

          • gwicks56 6 days ago

            "If you took a Tesla or a Waymo and dropped into into a tier 2 city in India, it will stop moving."

            Lol. If you dropped the average westerner into Chennai, they would either: a) stop moving b) kill someone

        • yusina 7 days ago

          > a technology which 3 years ago did not exist

          Decades of machine learning research would like to have a word.

    • ricardobayes 7 days ago

      Frankly, we don't know. That "turning point" that seemed so close for many technologies never came for some of them. Think 3D-printing, which was supposed to take over manufacturing. Or self-driving, which has been "just around the corner" for a decade now. And still is probably a decade away. Only time will tell if GenAI/LLMs are color TV or 3D TV.

      • kergonath 7 days ago
        2 more

        > Think 3D-printing that was supposed to take over manufacturing.

        3D printing is making huge progress in heavy industries. It’s not sexy and does not make headlines but it absolutely is happening. It won’t replace traditional manufacturing at huge scales (either large pieces or very high throughput). But it’s bringing costs way down for fiddly parts or replacements. It is also affecting designs, which can be made simpler by using complex pieces that cannot be produced otherwise. It is not taking over, because it is not a silver bullet, but it is now indispensable in several industries.

        • godelski 7 days ago

          You're misunderstanding the parent's complaint and frankly the complaints with AI. Certainly 3D printing is powerful and has changed things. But you're forgetting that 30 years ago people were saying there would be one in every house, because a printer can print a printer, and that this would revolutionize everything because you could just print anything at home.

          The same thing with AI. You'd be blind or lying if you said it hasn't advanced a lot. People aren't denying that. But people are fed up with constantly being promised the moon and getting a cheap plastic replica instead.

          The tech is rapidly advancing and doing good. But it just can't keep up with the bubble of hype. That's the problem. The hype, not the tech.

          Frankly, the hype harms the tech too. We can't solve problems with the tech if we're just throwing most of our money at vaporware. I'm upset with the hype BECAUSE I like the tech.

          So don't confuse the difference. Make sure you understand what you're arguing against. Because it sounds like we should be on the same team, not arguing against one another. That just helps the people selling vaporware

      • Ray20 6 days ago
        2 more

        >Think 3D-printing that was supposed to take over manufacturing

        This was never the case, and it is obvious to anyone who has ever been to a factory that does mass-produced plastics.

        >Or self-driving, that is "just around the corner" for a decade now.

        But it really is around the corner; all that remains is to accept it. That is, to start building and modifying road infrastructure and changing traffic rules to enable the effective integration of self-driving cars into road traffic.

        • arthurbrown 6 days ago

          What modifications to infrastructure do you anticipate being needed?

    • xnx 6 days ago

      > 5 years into "AI will replace programmers in 6 months"?

      Programmers that don't use AI will get replaced by those that do (not just by mandate, but by performance).

      > 10 years into "we'll have self driving cars next year"

      They're here now. Waymo does 250K paid rides/week.

      • player1234 3 days ago

        How have you measured this performance boost?

    • johnnyanmac 6 days ago

      There's a lot of "when" people are betting on, and not a lot of action to back it. If "when" is 20 years, then I've still got plenty of career ahead of me before I need to worry about that.

    • AvAn12 6 days ago

      Remember when RPA was going to replace everyone?

      • AvAn12 6 days ago

        Or low-code / no-code?

    • croes 7 days ago

      If, not when.

    • apwell23 7 days ago

      > Do you think this will be the case when o3 models are 10x better and 100x cheaper?

      why don't you bring it up then.

      > There will be a turning point but it’s not happened yet.

      do you know something that rest of us don't ?

  • fallingknife 6 days ago

    ZIRP had little to do with it. Tech is less levered than any other major industry. What happened is that growth expectations for large tech companies were way out of line with reality and finally came back down to earth when the market realized that the big tech cos are actually mature, profitable companies and not just big startups. The fact that this happened at the same time ZIRP ended is a coincidence.

  • wonderwonder 6 days ago

    Saw something similar the other day. X was awash with stories that IBM was laying off several thousand people in its HR department due to AI. Then over the course of the day the story shifted to IBM outsourcing them all to India. It was a very interesting transition; it seemed intentional.

    • Lu2025 6 days ago

      IBM seemed to outsource recruiting to Indian firms too and it's awful. The accounts who contact me on LinkedIn are grossly unprofessional and downright nasty.

  • xorcist 6 days ago

    > because he simply thought he could run a lot leaner

    Because he suddenly had to pay interest on that gigantic loan he (and his business associates) took to buy Twitter.

    It may not be the only reason for everything that happened, but it sure is simple and has some very good explanatory power.

    • huntertwo 6 days ago

      Other companies have different reasons to cut costs, but the incentive is still there.

      • xorcist 6 days ago

        Stocks are valued against the risk-free interest rate, or so the saying goes.

        Doubling the interest rate from 0.1% to 0.2% already does something to your DCF models, and in this case we went from zero (or in some cases negative) to several percentage points. Of course stock prices tanked. That's what any schoolbook will tell you, and that's what any investor will expect.

        Companies thus have to start turning dials and adjusting parameters to make the number go up again.
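
        A back-of-the-envelope sketch of that sensitivity (illustrative numbers only, nothing here is from the thread): discount a single cash flow of 100 arriving ten years out at a few different rates.

          def present_value(cash_flow, rate, years):
              # textbook discounted cash flow: PV = CF / (1 + r) ** years
              return cash_flow / (1 + rate) ** years

          for rate in (0.001, 0.002, 0.05):
              print(f"rate={rate:.1%}  PV of 100 in 10y = {present_value(100, rate, 10):.1f}")

        The move from 0.1% to 0.2% barely registers (about 99 vs 98), but going from near zero to 5% knocks roughly 40% off a cash flow that is ten years out, and growth companies are valued almost entirely on cash flows that far away.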

  • hombre_fatal 6 days ago

    Why would you interpret data cut off at 2020 so that you're just looking at a covid phenomenon? The buttons don't seem to do anything on that site, but why not consider 2010-2025?

    That said, the vibe has definitely shifted. I started working in software in uni around 2009, and for every job I've had, I applied to fewer than 10 positions and got a couple of offers. Now I barely get responses despite having 10x the skills and experience I had back then.

    Though I don't think AI has anything to do with it, probably more the explosion of cheap software labor on the global market, and you have to compete with the whole world for a job in your own city.

    Kinda feels like some major part of the gravy train is up.

    • lbotos 6 days ago

      It looks like that specific graph only starts in 2020...

      • hombre_fatal 6 days ago

        Why not just find one that starts in 2022 then. It would look even more dire.

  • niuzeta 6 days ago

    FRED continues to amaze me with the kind of data they have available.

    • brfox 6 days ago

      That's from Indeed. And Indeed has fewer job postings overall [https://fred.stlouisfed.org/series/IHLIDXUS]. Should we normalize the software jobs with the total number of Indeed postings? Is Indeed getting less popular or more popular over this time period? Data is complicated.

      • simonsarris 6 days ago

        Look at that graph again. It's indexed to 100 on Feb 1, 2020. It's now at 106. In other words, after all the pandemic madness, the total number of job postings on Indeed is slightly larger than it was before, not smaller.

        But for software, it's a lot smaller.
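
        For anyone who wants to do the normalization brfox suggests, it is a one-liner; the software figure below is a hypothetical placeholder, only the 106 comes from the comment above, and both series are indexed to 100 in Feb 2020.

          software_index = 60.0   # hypothetical value for the software-postings index
          total_index = 106.0     # overall Indeed postings index, per the comment above

          # software postings relative to overall posting volume on the same platform
          normalized = software_index / total_index * 100
          print(f"software index relative to total (rebased to Feb 2020 = 100): {normalized:.1f}")  # ~56.6

        Since total postings sit slightly above their 2020 baseline, normalizing this way only makes the software decline look marginally worse, so the picture doesn't change much.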

  • oblio 6 days ago

    > People a decade from now will think Elon slashed Twitter's employee count by 90% because of some AI initiative, and not because he simply thought he could run a lot leaner.

    That part is so overblown. Twitter was still trying to hit moonshots. X is basically in "keep the lights on" mode, as Musk doesn't need more. Yeah, if Google decides it doesn't want to grow anymore, it can probably cut its workforce by 90%. And it will be as irrelevant as IBM within 10 years at most.

    • aikinai 6 days ago

      What moonshots has Twitter gone for in the last decade? Feature velocity is also higher since the acquisition.

      • oblio 6 days ago

        "Moonshots" was probably a bad term. Twitter devs used to be very active in open source, in Scala, actors, etc in particular. Fairly sure that's all dead. From most reports the majority of current Twitter devs are basically visa-shackled to the company.

      • nova22033 6 days ago

        What happened to X, the payment app?

  • 1vuio0pswjnm7 5 days ago

    ZIRP jobs, n., jobs the compensation for which is derived from zero interest loans, often in the form of venture capital, instead of reserves, profits or other sources

    "When interest rates return to normal levels, the ZIRP jobs will disappear." -- Wall Street analyst

  • lozenge 7 days ago

    Macroeconomic policy always changes, recessions come and go, but it's not a permanent change in the way e-commerce or AI is.

  • bawolff 7 days ago

    Honestly, if anything i think AI is going to reverse the trend. Someone is going to have to be hired to clean up after them.

    • notepad0x90 7 days ago

      I think they said that about outsourcing software dev jobs. The reality is somewhere in the middle. Extreme cases will need cleanup, but overall it's here to stay, maybe with more babysitting.

      • godelski 7 days ago

        I think the reality is Lemon Market Economics. We'll sacrifice quality for price. People want better quality but the truth is that it's a very information asymmetric game and it's really hard to tell quality. If it wasn't, we could all just rely on Amazon reviews and tech reviewers. But without informed consumers, price is all that matters even if it creates a market nobody wants.

    • tempodox 7 days ago

      If anyone will actually bother with cleaning up.

    • xkcd1963 7 days ago

      That's the impression I got. Things overall just get worse in quality because people rely too much on low wages and copy-pasting LLM answers.

      • hattmall 7 days ago

        I think that's true in software development. A lot of the focus is on coding because that's really the domain of the people interested in AI, because ultimately they ARE software. But the killer app isn't software; it's anything where the operation is formulaic: the formula can be tedious to figure out, but once you know it you can confirm it's correct by working backwards. Software has far too many variables, not least of which is the end user. On the other hand, things like accounting, finance, and engineering are far more suitable for trained models and back-testing for conformity.

      • autobodie 7 days ago
        3 more

        Get worse for who? The ruling class will simply never care how bad things get for working people if things are getting better for the ruling class.

        • shswkna 7 days ago

          The central problem with this statement is that we expect others to care, but we do not expect this from ourselves.

          We have agency, whether we are brainwashed or not. If we cared about ourselves, we wouldn’t need another class, or race, or whatever other grouping to do this for us.

        • xkcd1963 7 days ago

          I meant just regular products. As an example, if I log in to bitpanda in the browser, the parts of the page that should hold the translations hold the translation keys instead. There are countless examples like that, and many security issues as well.

          Regarding class struggle, I think class division has always existed, but we, the masses, have all the tools to improve our situation.

  • gmerc 7 days ago

    The flaw with the ZIRP narrative is that companies managed to raise more money than ever the moment they had a somewhat believable story instead of the crypto/web3/metaverse nonsense.

  • intalentive 6 days ago

    Yes. Tech is clearly a beneficiary of the Cantillon Effect.

  • _rm 6 days ago

    Always disheartening how much people forget and tolerate the underlying deliberate human absurdity that created these events.

    Almost no one has seen a world where the price of money wasn't centrally planned, with a committee of experts deciding it based on gut feel, like they did in command economies such as the Soviet Union.

    And then thousands of people's lives are disrupted as the interest rate swings wildly due purely to government action (corona lockdowns and the Fed's ZIRP response), and somehow it all just ends up with people talking about AI instead.

    The true wrongdoers get absolutely no consequences, and we all just carry on like there's no problem. Often because our taxes go to paying hordes of academics and economists to produce layers and layers of sophisticated propaganda that of course this system is the best one.

    Absurd and shitty world.

  • leflambeur 7 days ago

    It's simply the old Capital vs Labor struggle. CEOs and VCs all sing in the same choir, and for the past 3 years the tune is "be leaner".

    p.s.: I'm a big fan of yours on Twitter.

    • saubeidl 7 days ago

      Except Labor in Tech is unique in that it has zero class consciousness and often actively roots for their exploiters.

      If we were to unionize, we could force this machine to a halt and shift the balance of power back in our favor.

      But we don't, because many of us have been brainwashed to believe we're on the same side as the ones trying to squeeze us.

      • GoblinSlayer 7 days ago
        3 more

        >If we were to unionize

        Last time it was tried the union coerced everyone to root for their exploiters. People that unionize aren't magically different.

        • le-mark 6 days ago
          2 more

          What “last time” are you referring to specifically?

          • salawat 6 days ago

            I am also curious.

      • wnc3141 6 days ago

        I think the issue at play here is the quickly changing job descriptions, RSUs, and the higher-paid bunch benefiting from very unequal pay across a job category.

    • godelski 7 days ago

        > the tune is "be leaner".
      
      Seems like they're happy to start cutting limbs to lose weight. It's hard to keep cutting fat if you've been aggressively cutting fat for so long. If the last CEO did their job there shouldn't be much fat left.

      • chii 7 days ago
        2 more

        > If the last CEO did their job there shouldn't be much fat left

        funny how that fat analogy works...because the head (brain) has a lot more fat content than muscles/limbs.

        • godelski 7 days ago

          I never thought to extend the analogy like that, but I like it. It's showing. I mean look how people think my comments imply I don't know what triage is. Not knowing that would be counter to everything I'm saying, which is that a lot of these value numbers are poor guestimates at best. Happens every time I bring this up. It's absurd to think we could measure everything in terms of money. Even economists will tell you that's silly

      • leflambeur 7 days ago
        4 more

        Yet this will continue until it grinds to a halt.

        It's amazing and cringy the level of parroting performed by executives. Independent thought is very rare amongst business "leaders".

        • godelski 7 days ago
          3 more

          Let's make the laptops thinner. This way we can clean the oil off of the keyboard, putting it on the screen.

          At this point I'm not sure it's lack of independent thought so much as lack of thought. I'm even beginning to question if people even use the products they work on. Shouldn't there be more pressure from engineers at this point? Is it yes men from top to bottom? Even CEOs seem to be yes men in response to shareholders, but that's like being a yes man to the wind.

          When I bring this stuff up I'm called negative, a perfectionist, or told I'm out of touch with customers and/or don't understand "value". Idk, maybe they're right. But I'm an engineer. My job is to find problems and fix them. I'm not negative, I'm trying to make the product better. And they're right, I don't understand value. I'm an engineer; it's not my job to make up a number about how valuable some bug fix is or isn't. What is this, "Whose Line Is It Anyway?" If you want made-up dollar values go ask the business monkeys, I'm a code monkey.

          • andsoitis 7 days ago
            2 more

            > I'm an engineer, it's not my job to make up a number about how valuable some bug fix is or isn't.

            So you think all bugs are equally important to fix?

            • godelski 7 days ago

              No, of course not. That would be laughably absurd. So do you think I'm trolling or you're misunderstanding? Because who isn't familiar with triage?

              Do you think every bug's monetary value is perfectly aligned with user impact? Certainly that isn't true. If it were we'd be much better at security and would be more concerned with data privacy. There's no perfect metric for anything, and it would similarly be naïve to think you could place a dollar value on everything, let alone accurately. That's what I'm talking about.

              My main concern as an engineer is making the best product I can.

              The main concern of the manager is to make the best business.

              Don't get confused and think those are the same things. Hopefully they align, but they don't always.

  • treyd 6 days ago

    That inflection point seems to start, more specifically, on the day of the new administration's inauguration.

  • jt2190 6 days ago

    It’s a shame that this is the top comment because it’s backward looking (“here’s why white-collar workers lost their jobs in the last year”) instead of looking forward and noticing that even if interest rates are reduced back to zero these jobs will not be performed by humans ever again. THAT is the message here. These workers need to retrain and move on.

    • SR2Z 6 days ago

      > even if interest rates are reduced back to zero these jobs will not be performed by humans ever again

      It's not like companies laid off whole functions. These jobs will continue to be performed by humans - ZIRP just changes the number of humans and how much they get paid.

      > These workers need to retrain and move on.

      They only need to "retrain" insofar as they keep up with the current standards and practices. Software engineers are not going anywhere.

  • Mistletoe 6 days ago

    This is so cool. Had no idea FRED had data like this. They have everything.

    • cyanydeez 6 days ago

      give trump a few more years, and that probably will change.

  • Lu2025 6 days ago

    > unique inflection near the start of 2025

    I wonder what happened in January 2025...

  • fennecfoxy 4 days ago

    Inflation & mismanagement.

  • bootsmann 7 days ago

    [flagged]

    • pydry 7 days ago

      Trump didn't kick off the layoffs.

      It was the war with Russia that drove the Fed to raise interest rates in 2022 - a measure that was intended to curb inflation triggered by spikes in the prices of economic inputs (gas, oil, fertilizer, etc.).

      The tech layoffs started later that year.

      Widespread job cuts are an intended effect of raising interest rates - more unemployed = less spending = keeps a lid on inflation.

      AI is just cashing in on the trend.

      • bojan 6 days ago
        24 more

        "War with Russia" sounds like someone willingly started that war, and Russia was the target.

        Of course, nothing is further from the truth. "Russian invasion of Ukraine" is what should be written there.

        • andyferris 6 days ago
          2 more

          Fully agreed, but I suspect that was written that way because the Fed was rather more worried about the fact Russia was at war and under western sanctions, than that Ukraine was busy defending itself.

          Perhaps "Russia's war" would have been a better phrasing that captures both spirits (but it's not a phrase you hear said much).

          • pbhjpbhj 6 days ago

            You think "Russia's war" captures more of the global relevance than "Russia's invasion of Ukraine"?

            For example, Ukraine was a very important food supplier -- one of the top grain suppliers in the World -- and the invasion caused shortages of some foods. Another example is that Ukraine provided a good source of iron ore for EU-based manufacture. If nothing else that would be important to USAmericans as indicating a market opportunity.

            Without that invasion and Putin's inspiration, would Trump have threatened invasion of USA's neighbours? That's got to be vital to USA finances too.

        • didntcheck 6 days ago
          6 more

          I don't see how that follows at all. "War with x" is a factual statement with no implications of moral culpability in either direction

        • pydry 6 days ago
          14 more

          Your demands of an absolute committment to maintaining the domestic establishment's war narrative while making a technical point have been noted. Slava Ukraini.

          • SpicyLemonZest 6 days ago
            2 more

            Who, precisely, do you consider to be the domestic establishment? Neither the President of the United States nor his Secretary of Defense subscribe to this narrative.

            • pydry 6 days ago

              The deep state, most European leaders + states plus a huge chunk of Congress.

              Also even some of Trump's team.

              The establishment doesn't suddenly get swapped out because a new President gets in (even if the tides are shifting).

          • sham1 6 days ago
            11 more

            Truth is the best narrative, and it's better than – perhaps unconsciously – downplaying the culpability of the Russian Federation for the war.

            Heroyam Slava.

            • pydry 6 days ago
              10 more

              It's curious how the "true narrative" people fight with such a passion for so frequently coincides with the dominant war narrative currently pushed by the imperial power center they live under.

              That is, until 10 years later when they have a new narrative about a different military rival. They quietly stop pushing the old narrative and everyone quietly admits the old one was kinda bullshit all along.

              E.g. my views didn't change one iota since 2003, but at some point these views magically stopped earning me a "Saddam sympathizer" moniker from people who demanded unthinking ideological commitment.

              It works the same way with people who live under and unthinkingly consume Russian imperialist propaganda too. The more passionate ones make routine demands for ideological purity similar to the one above.

              • dcow 6 days ago

                What annoys me is that war is war. Of course there’s an aggressor and a defender; that’s how war works. A bunch of people die. War happens because there is no peaceful answer to "who is right". There is no global true narrative. Does it really matter which side is justified? No. The side that wins writes history. But that doesn’t stop a bunch of people so far removed from the war that it will never affect them even one tiny bit from burning cycles arguing about which narrative is the nice "true" one. Likely so they can feel good and comfy and morally superior when they close their eyes at night.

              • N7lo4nl34akaoSN 6 days ago
                8 more

                How would you characterize the conflict?

                • pydry 6 days ago
                  7 more

                  Proxy war between two rival imperial power centers dueling for influence over a strategic chunk of land and sea.

                  • N7lo4nl34akaoSN 6 days ago

                    so "war with Russia" becomes "US proxy war with Russia over Ukraine" - fair enough

                  • mopsi 6 days ago
                    5 more

                    From earlier:

                      That is, until 10 years later when they have a new narrative about a different military rival. They quietly stop pushing the old narrative and everyone quietly admits the old one was kinda bullshit all along. /---/ It works the same way with people who live under and unthinkingly consume Russian imperialist propaganda too. 
                    
                    It certainly does. The Russian war against Ukraine began with unmarked soldiers, nicknamed "little green men," and Russia denying any involvement, claiming instead that Ukraine was in the midst of a civil war. When the latest Russian weapons appeared in Ukraine, Russia claimed that tourists must've bought them from military surplus stores.

                    Then we went through a lot of bullshit - that Ukrainian nationalists were committing genocide in Donbas, or that Ukraine was secretly developing nuclear and biological weapons.

                    Now, 10 years later, the narrative has shifted to how this has always been a major confrontation with the USA and NATO, a "proxy war". No doubt, it will shift many more times. Looking forward to when the current "supreme commander" Putin will be regarded as a failure, much like Gorbachev, and blamed for causing the difficult 2030s.

                    • pydry 6 days ago
                      4 more

                      It began with a false flag terrorist attack on civilians in Maidan square, which was used to oust a democratically elected president who was extremely popular in the east and south.

                      Nobody has ever been jailed for this terrorist attack and all the evidence points to Ukrainian fascists being culpable, including:

                      * The Berkut who were there being tried, and the trial falling through because all of them were too obviously very far away from the protestor-controlled hotel where the sniper's nest was set up.

                      * A Ukrainian war hero who had no reason to lie who was there telling people who was responsible (before being thrown in jail).

                      * A group of the snipers (mercenaries who were there who never got paid) went public.

                      It was as much a proxy war back then, it was just fought under the surface with NGO agitators instead of weapons deliveries.

                      • mopsi 5 days ago
                        3 more

                        Has there ever been a terrorist attack that was not a "false flag" according to internet loonies? :D

                        The fact that you have to make something like this up within the first ten words of your narrative really shows just how detached from reality it is.

                        I wonder what narratives will dominate after the war, when reality sets in: hundreds of thousands dead and never returning home; several times as many disabled, many of them severely; the might and pride of the Russian military sunk or blown up; returned soldiers running massive criminal rings like in the 1990s; state budget empty from massive military spending, leaving people to survive on their own as safety nets crumble. Some conspiracy story about snipers 10+ years ago in another country doesn't really cut it, and getting beaten in an imagined confrontation with the "collective West" sounds really pathetic too, especially when the other side didn't even step into the boxing ring. The USAF hasn't flown a single sortie against Russia, yet strategic bombers are already burning on airfields like in the opening hours of Operation Barbarossa.

                        • pydry 5 days ago
                          2 more

                          >Has there ever been a terrorist attack that was not a "false flag" according to internet loonies? :D

                          Reichstag fire. All the nutter conspiracy theorists think Hitler did it. Obviously you know better.

                          >The fact that you have to make something like this up

                          Evidence doesn't mean much to some people. They will follow the narrative of their leaders, whether it is dictated via Moscow blabbing about biolabs or via Washington claiming it allied with freedom-loving democrats in Ukraine rather than Nazi goons.

                          • mopsi 5 days ago

                            Excellent example. In case you're not aware (as the snark suggests), the broad consensus among historians since the 1960s holds that the Reichstag was indeed not set on fire by the Nazis.

      • nxm 6 days ago
        11 more

        2 trillion dollars in unnecessary Covid-related spending, when the Covid impact was already winding down, was the key reason for inflation. "$2000 checks!" was the campaign slogan.

        • xorcist 6 days ago

          People sometimes conveniently forget that inflation historically has taken some 12-24 months to trickle through the economic system. That was the case this time, too. And the first inflationary impulses, famous for being "transitory", actually came before the Russian invasion of Eastern Europe.

        • johnnyanmac 6 days ago
          7 more

          We're blaming the 1000 dollar stimulus checks to the people and not the massive PPP loans that the government never bothered to collect on? It's amazing how well billionaires trained us to fight amongst one another as they ransack in broad daylight.

          • Workaccount2 6 days ago
            5 more

            No. The checks were mostly meaningless.

            The near zero interest rates, pause on student loan payments, pause on rent payments, doubling of unemployment pay, and then the dustings of stimulus checks and bonus childcare checks, all while most white collar workers just continued working like nothing happened, created an incredibly cash rich environment that most people have never seen before.

            And the PPP loan handouts to business owners just threw more gas on the fire.

            • ghaff 6 days ago
              2 more

              A lot of things were going on in the early 2020s that at least anecdotally seem to have disproportionately affected software jobs so I’m skeptical it’s purely an interest rate phenomenon. But the consensus does seem to be that software has gone from being a ridiculously easy job market by professional job standards to at least a moderately challenging one.

              Software in the US has (aside from maybe finance) been an almost uniquely well-compensated field. That will probably adjust over time especially given the inflow of grads primarily in it for the money.

              • mrkramer 6 days ago

                >A lot of things were going on in the early 2020s that at least anecdotally seem to have disproportionately affected software jobs so I’m skeptical it’s purely an interest rate phenomenon.

                The software industry had probably been over-hiring ever since the dot-com bubble, because after the bubble burst, revenue and profits grew rapidly and the hiring never really stopped. I would rather blame the managers who constantly pushed for more workers instead of increasing the productivity of the existing workforce.

            • dh2022 6 days ago

              Add the Inflation Reduction Act which did the exact opposite of its title (increased government spending when the labor market was extremely tight)

          • fallingknife 6 days ago

            It's all the same money printing. The issue is that people generally believe that emergency measures were justified in early 2020 when the crisis hit and there were so many unknowns, but not justified a year later when the virus was already endemic and the vaccine was out.

        • mempko 6 days ago

          You have it backwards: inflation causes an increase in the money supply. When prices rise, it forces people to take on more debt, causing an increase in the money supply. Those $2000 checks actually probably dampened inflation for a short while. Most people used those checks to pay down debt (which destroys money).

        • bgwalter 6 days ago

          That's one of the factors. In Europe at least the other factor is high energy prices after the broken turbine theater and subsequent destruction of Nord Stream.

          Prices and unemployment really started to rise after that. The EU buys overpriced LNG from the US, so the US is somewhat isolated from that. But the US is not isolated against the general economic downturn worldwide.

          Politicians do not care. Merz, with barely 25% approval of the German population, continues the policies outlined by Hegseth during his visit to the EU. Trump still plays theater to appease his MAGA base, but Senators Rubio and Graham increasingly start holding the reins.

      • frontfor 6 days ago
        4 more

        I don’t believe the war specifically drove the Fed to raise interest rates. Inflation and asset prices had risen sharply in the year prior to the war.

        • andyferris 6 days ago
          2 more

          There was a specific and particular expectation (and even patience) for inflation to drop naturally as the supply chains again reached equilibrium after Covid.

          Russia's invasion of Ukraine, however, caused a whole bunch of economic inputs like energy and fertilizer to spike, and central banks worldwide didn't want economies to "get used to" constant high inflation rates, causing a perpetual problem.

          • Workaccount2 6 days ago

            Work from home was the wrench in the government's plan. If the pandemic had happened in 2000, the stimulus would have been needed, as the tools for remote work were way too poor back then.

            But instead all the productivity workers just switched to their home office and things just kept working. The stimulus should have been shut off in early-mid 2021 when this was abundantly clear. But the government let it run because people were so jubilant in the money shower.

        • larrled 6 days ago

          Biden credited the inflation to Putin, claiming that 70% was due to Putin’s price hikes.

          That was not entirely true.

          Trump’s pandemic spending (lockdowns, vaccines…), and subsequently Biden’s, but most importantly the curiously named Inflation Reduction Act were obvious drivers. You can’t stimulate an already overheated economy to the tune of 2 trillion without getting Larry Summers a bit worked up.

      • fallingknife 6 days ago

        Raising interest rates has nothing to do with the 2022 war. If it did, rates would have come back down. Interest rate increases don't help with supply/demand driven price spikes. They do help with money supply and aggregate demand driven inflation, which was the cause of our recent inflation (that started way before Russia invaded Ukraine). The war was a convenient excuse because it deflects responsibility.

        And remember when they first said inflation was "transitory" and caused by supply chain issues from the economy reopening after covid? They didn't raise interest rates then because, like I mentioned above, interest rates don't help with supply shocks. If they did, the Fed would have raised rates then.

      • brandall10 6 days ago

        Anecdotally, I detected a cooling starting in March of 2022.

        I had been actively looking for months prior to that, and it went from a few recruiters a day reaching out to a few a week.

      • mempko 6 days ago

        You are wrong: Trump's 2017 tax cut bill had a provision that kicked in and caused the layoffs. Engineers became more expensive because companies now had to amortize their costs over 5 years instead of deducting them immediately.
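
        For context, this is presumably the Section 174 change, which took effect for tax years starting in 2022 and requires domestic R&D costs (including developer salaries) to be amortized over 5 years instead of being deducted in full. A rough sketch of the year-one effect, with made-up numbers and a half-year convention assumed:

          payroll = 1_000_000          # hypothetical annual developer payroll
          tax_rate = 0.21              # US federal corporate rate

          deduction_before = payroll           # old rule: fully deductible in year one
          deduction_after = payroll / 5 / 2    # 5-year amortization, half-year convention: 10% in year one

          extra_taxable_income = deduction_before - deduction_after
          print(f"extra tax in year one: ${extra_taxable_income * tax_rate:,.0f}")  # ~$189,000

        The deduction comes back in later years, but for a company that was roughly breaking even, a year-one tax bill like that per million dollars of payroll is a real incentive to shrink the payroll.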

      • bubbleRefuge 6 days ago
        2 more

        There is no proof that higher interest rates lead to greater unemployment. In fact, macro employment kind of boomed during the referenced period. I'd posit that higher rates actually boosted macro employment stats. Why? Because higher rates = higher income to rich people via the interest income channel = higher fed budget deficits (gov is a net payer of interest) = higher GDP = lower unemployment, ceteris paribus.

        • lxgr 6 days ago

          This is completely backwards. When interest rates are high, the expected returns of equity investments have to be even higher to justify the risk over risk-free fixed income assets.

          And that's only the indirect effect on equity funding; debt funding just directly becomes more expensive.

    • hoseyor 7 days ago

      [flagged]

      • quonn 7 days ago
        2 more

        Why would it have anything to do with AI? Generative AI has been widely used for two years and the drop is exactly around January 20. What happened in AI around that time?

  • csomar 7 days ago

    The Elon Musk experiment is the worst anchor that can be used for comparison, since the dude destabilized Twitter (re-branding, random layoffs, etc...). I'd be more interested in companies that went leaner but did it in a sane manner. The Internet user base grew between 2022 and now, but Twitter might have lost users in that time period and certainly didn't make any new innovations beyond trying to charge its users more and confusing them.

idkwhattocallme 7 days ago

I worked at two different $10B+ market cap companies during ZIRP. I recall in most meetings over half of the knowledge workers attending were superfluous. I mean, we hired someone on my team to attend cross functional meetings because our calendars were literally too full to attend. Why could we do that? Because the company was growing and hiring someone to attend meetings wasn't going to hurt the skyrocketing stock. Plus hiring someone gave my VP more headcount and therefore more clout. The market only valued company growth, not efficiency. But the market always capitulates to value (over time). When that happens all those overlay hires will get axed. Both companies have since laid off 10K+. AI was the scapegoat. But really, a lot of the knowledge worker jobs it "replaces" weren't providing real value anyway.

  • hn_throwaway_99 7 days ago

    This is so true. We had an (admittedly derogatory) term we used during the rise in interest rates: "zero interest rate product managers". Don't get me wrong, I think great product managers are worth their weight in gold, but I encountered so many PMs during the ZIRP era who were essentially just Jira-updaters and meeting-schedulers. The vast majority of folks I see in tech who are having trouble getting hired now are people who were in those "adjacent" roles - think agile coaches, TPMs, etc. (I have a ton of sympathy for these folks - many of them worked hard for years and built their skills - but these roles were always somewhat "optional").

    I'd also highlight that beyond over-hiring being responsible for the downturn in tech employment, I think offshoring is way more responsible for the reduction in tech than AI when it comes to US jobs. Video conferencing tech didn't get really good and ubiquitous (especially for folks working from home) until the late teens, and since then I've seen an explosion of offshore contractors. With so many folks working remotely anyway, what does it matter if your coworker is in the same city or a different continent, as long as there is at least some daily time overlap (which is also why I've seen a ton of offshoring to Latin America and Europe over places like India).

    • catigula 7 days ago

      Off-shoring is pretty big right now, but what shocks me is that when I walk around my company campus I see obscene numbers of people visibly and culturally from, mostly, India and China. The idea that such a massive share of this workforce couldn't possibly have been filled by domestic grads is pretty hard to engage with. These are low-level business and accounting analyst positions.

      Both sides of the aisle retreated from domestic labor protection for their own different reasons so the US labor force got clobbered.

      • ajmurmann 7 days ago
        48 more

        I am VERY pro-immigration. I do have concerns about the H1B program though. IMO it's not great for either immigrant or non-immigrant workers, because it creates a class of workers for whom it's harder to change employers, which weakens their negotiation position. If this is the case for enough of the workforce it artificially depresses wages for everyone. I want to see a reform that makes it much easier for H1B workers to change employers.

        • Spooky23 7 days ago
          6 more

          In context of tech, H1B is great for the money people in the US and India. It suppresses wages in both countries and is a powerful plum for employee “loyalty”. There’s a whole industry of companies stoking the pipeline of cheap labor and corrupting the hiring process.

          In big dollar markets, the program is used more for special skills. But when a big bank or government contractor needs marginally skilled people onshore, they open an office in Nowhere, Arizona, and have a hard time finding J2EE developers. So some company from New Jersey will appear and provide a steady stream of workers making $25/hr.

          The calculus is that more H1B = less offshore.

          The smart move would be to just let in skilled workers from India, China, etc. with a visa that doesn’t tie them to an employer. That would end the abusive labor practices and probably reduce the number of lower-end workers, or the incentive to deny entry-level employment to US nationals.

          • senderista 7 days ago

            H1-B also makes CS masters programs a cash cow for US schools.

          • rightbyte 7 days ago
            4 more

            How does H1B suppress wages in India?

            • Aeolun 7 days ago
              2 more

              All those people skilled enough to get hired in the US (for a massive increase in wages) don’t try to get similar positions in India; thus, nobody has to compete to pay for them.

              • bernawil 6 days ago

                I don't think so. You can argue emigration takes away supply on the labor side. Why would prices go down? Quite the contrary. I don't think it necessarily raises salaries in India though, because that market seems to have a hard cap somewhere around 36k/year, but it sure does open up positions for newcomers.

            • antithesizer 7 days ago

              Because it suppresses wages in the US, so Indian employers do not need to offer as much compensation to keep local workers who are considering emigrating.

        • catigula 7 days ago
          31 more

          I want to use you as a bit of a sounding board, so don't take this as negative feedback.

          The problem is that the left, which was historically pro-labor, abdicated this position for racial reasons, and the right was always about maximizing the economic zone.

          • hn_throwaway_99 7 days ago
            2 more

            I saw a report recently about the political left in Denmark, who are basically one of the only progressive movements in countries that understood what it takes to maintain support, and hence Denmark has had much less of a rise in support for far right parties than other countries in the world. Here's an article, https://www.nytimes.com/2025/02/24/magazine/denmark-immigrat....

            Basically, progressives in Denmark have argued for very strict immigration rules, the essential argument being that Denmark has an expensive social welfare state, and to get the populace to support the high taxes needed to pay for this, you can't just let anyone in who shows up on your doorstep.

            The American left could learn a ton of lessons from this. I may loath Greg Abbott for lots of reasons, but I largely support what he did bussing migrants to NYC and other liberal cities. Many people in these cities wanted to bask in the feelings of moral superiority by being "sanctuary cities", but public sentiment changed drastically when they actually had to start bearing a large portion of the cost of a flood of migrants.

            • ajmurmann 6 days ago

              Is there a reason social benefits must be available to immigrants? It seems like those could be tied to citizenship, or to something like a minimum amount of lifetime taxes someone must have paid.

          • c0redump 6 days ago

            I mostly agree with you, but i think there’s something you got wrong. The democrat establishment didn’t abdicate their pro-labor position for reasons of racial equity- this was only ever a cover story.

            The real reason is that they are totally beholden to powerful business interests that benefit from mass immigration, and the ensuing suppression of American labor movements. The racial equity bit is just the line that they feed to their voters.

          • ajmurmann 6 days ago
            3 more

            I think the problems are more complex and much harder to fix and more depressing. The actual policies by the Democratic party have been "pro-worker". Biden was strongly pro-union. I am hard pressed to think of any policy by the Biden administration that was focused on racial issues. However, it seems like the perception of the Democratic party is largely mixed in with leftists who don't even like the party.

            I think the real problem is that the median voter is unable to, has no time to, or has no interest in understanding basic economics and second-order consequences. We see this on both sides of the aisle. Policies like caps on credit card interest rates, rent control, or no tax on tips are very, very popular while also being obviously bad after thinking about them for just 1 minute.

            This is compounded by there being relatively little discussion of policies like that. They get reported on but not discussed and analyzed. This takes us back to your point about the perception of the Democratic party. The media (probably because the median voter prefers it) will instead discuss issues that are more emotionally relatable, like the border being "overwhelmed", trans athletes, etc. which makes it less likely to get people to think about economic policy.

            This causes a preference for simple policies that seem to aim straight for the goal. Rent too high? Prohibit higher rent! Credit card fees too high? Prohibit high fees! Immigrants lower wages? Have fewer immigrant!

            Telling the median voter that H1-B visa holders are lowering wages due to the high friction of changing sponsors, and that the solution is to loosen the visa restrictions, is hardly going to go over well with much of the electorate. Likely only part of that problem statement will even reach most voters, in the form of "H1-B visas lower wages". Someone who simply takes that simplified issue and runs with cutting down further on immigration will be much more likely to succeed, given how public opinion is currently formed.

            All this stuff is why I love learning about policy and absolutely loath politics.

            • DenisM 6 days ago
              2 more

              I’ve read some analysis that many swing voters supported Trump because they were unhappy with the economic situation, not due to culture wars. In their minds, and words, Trump may change at least something while the Democrats will certainly change nothing. Whatever pro-labor policies Biden had, they didn’t move the needle.

              What do you think of that?

              • ajmurmann 6 days ago

                I think that all statistics show us that the economy was very strong, especially compared to other countries. We did the impossible and had a soft landing. However, we also learned that the public prefers unemployment over inflation, even if real wages go up. People see their wage increases as earned even if it's just a market adjustment.

                Further, I'm very disappointed that the median voter doesn't seem to understand or care about the policies they vote for. Tariffs and deportations are recipes to cause more inflation, yet here we are.

          • SpicyLemonZest 7 days ago
            23 more

            Employment-based immigration policy just isn't controversial outside of very specific bubbles. Everyone who's considered the problem seriously, left and right, realizes that the H1B system is bad and a point-based system is the way to go, which is why it's been part of every immigration reform proposal for over a decade with essentially no controversy. If this were the only aspect of immigration policy, or if people felt it was important enough to pull it out of broad immigration reform, it would pass in a heartbeat.

            • Aeolun 7 days ago
              11 more

              Japan will let everyone that can get a job in (and is willing to do the immigration process for them). This seems like a perfectly fair way to do things. If you don’t have a job, and can’t find a new one in 3-6 months, you have to leave again.

              Don’t understand why other countries make it harder.

              • tjpnz 7 days ago

                Japan (the country) doesn't do this. You still need a company to sponsor you and not every company can.

              • Ray20 6 days ago

                Because other countries are not Japan. If, say, the US were to pursue a similar policy, it would receive over 200 million immigrant workers and see near-zero employment among the native population in the first two years.

              • jajko 7 days ago
                6 more

                Switzerland is the same. It has by far the best-implemented immigration policies in all of Europe, if only Germany's and France's egos would step down a notch, acknowledge their mistakes, and take inspiration from a clearly more successful neighbour. Switzerland has 3x more immigration than the next country and it just works, long term.

                The EU would flourish economically and there would be no room for the ultra-conservative right to gain any real foothold (which is 95% just the failed-immigration topic, just like Brexit was).

                Alas, we are where we are; they slowly backpedal but it's too little, too late, as usual. I blame Merkel for half of the EU's woes. She really was a horrible leader of an otherwise very powerful nation, made much weaker and less resilient by her flawed policies and her failure to grok where the world is heading.

                Btw, she still acknowledges nothing and keeps thinking how great she was. Also a nuclear physicist who turned off all the existing nuclear plants too early, so Germany has to import massive amounts of electricity from coal-burning plants. You can't make it up.

                • throwaway2037 5 days ago

                  First, I assume you are talking about highly skilled immigration to Switzerland. Does Swiss immigration policy also apply to non-highly skilled immigration? (Leave aside refugees for this discussion.)

                  How does Switzerland keep local companies from hiring workers on low wages to compete against locals? How do they police it?

                • ajmurmann 6 days ago
                  4 more

                  Does Switzerland not take any refugees?

                  • jajko 6 days ago
                    3 more

                    Yes, some, but those are very different from economic migrants, and their numbers are small compared to those migrants.

                    • ajmurmann 6 days ago
                      2 more

                      What do you think caused the very high numbers of refugees in other European countries? I thought they were all supposed to be refugees from war and not economic refugees. In fact I thought economic refugees were just economic migrants and not something European countries let in under refugee rules.

                      • SpicyLemonZest 5 days ago

                        The big difference that's been highly relevant recently (https://www.swissinfo.ch/eng/politics/un-criticises-restrict...) is the application of asylum rules to civil war. You only have a right to international asylum if you can't find refuge in your home country - but what does "can't" mean, precisely, when your country is split into multiple warring factions along hazy front lines? There's a lot of room for interpretation.

              • throwaway2037 7 days ago
                2 more

                Can you give more details here? I don't fully understand your post.

                • Aeolun 5 days ago

                  Immigration based on “I have someone willing to pay me to work” (and go through the immigration process) is essentially unlimited. Immigration based on “I’m a poor refugee, please help me” is nearly nonexistent (helps they’re an island).

            • catigula 7 days ago
              11 more

              My understanding is that Bernie Sanders used to say that mass immigration was a "Koch brothers thing" and his tune on this has since changed to align with "progressive" ideas, but I might be mistaken.

              I already know that the right-wing supports h1bs, Trump himself said so.

              • gosub100 7 days ago

                He recently addressed Congress and brought up the abuse of H1B, such as for entry-level accounting positions. The program was meant to fill shortages in highly skilled positions. Now it's being abused to cheat new grads out of jobs and depress wages.

              • SpicyLemonZest 7 days ago
                8 more

                Even in his most immigration-skeptical era (https://www.computerworld.com/article/1367869/bernie-sanders...), Sanders always acknowledged that some companies genuinely need a skilled immigration program to hire the global best and brightest. And note his line about "offshore outsourcing companies"; the issue's become even less controversial now that the balance of H1B sponsors is shifting towards large American tech companies who genuinely pay market rate.

                • bradlys 7 days ago
                  6 more

                  What if tech roles at big tech companies actually paid more like the prestigious finance firms in NYC?

                  People in tech are so quick to shoot themselves in the foot.

                  • SpicyLemonZest 7 days ago
                    2 more

                    Not sure what you're aiming to get out of this comparison. Software engineers make quite a bit more at prestigious tech companies than they do at prestigious finance firms in NYC, and prestigious finance firms in NYC extensively recruit people from outside the US. Even if you want to compare engineers in tech to bankers in finance, I'm not sure Goldman is paying all that much better than OpenAI these days.

                    • throwaway2037 7 days ago

                      Why do people think Goldman pays software developers so well? They do not. They pay whatever is required compared to their competition (mostly other ibanks). There is a tiny sliver (less than 5%) of the dev staff who work in front office and are called "Strats". (Some other banks have "Strats" [Morgan?] or put you into a quant team to pay you more [JPM/UBS/etc].) They make about 25-50% more money compared to vanilla software devs in the IT division.

                  • fijiaarone 7 days ago
                    2 more

                    The job of the high paid people in finance at prestigious firms is to look nice in an expensive suit. Know many people in tech with those qualifications?

                    • bradlys 7 days ago

                      I'd be good at it but I won't get hired cause I didn't go to the right boarding school.

                      Tech has its barriers too. Most people I've met in tech come from relatively rich families. (Families where spending $70k+/yr on college is not a major concern for multiple kids - that's not normal middle class at all even for the US)

                  • throwaway2037 7 days ago

                    Regarding the first sentence, it is already true for software developers. You can (and probably will) make more money at FAANG compared to global ibanks in NYC.

                • catigula 7 days ago

                  I don't really think that is what's being discussed here.

                  Even literal Nazis were exempted from immigration controls on the basis of extreme merit.

              • DonHopkins 7 days ago

                >Trump himself said so

                TACO Trump himself said he'd reveal his health care plan in two weeks, many many years ago, many many times. But then he chickened out again and again and again and again and again. So what the buk buk buk are you talking about?

          • bernawil 6 days ago

            was the left ever truly anti-immigration? I genuinely ask. Because the last leftwing explicitly pro-union movement I can remember was the late 90s/2000s anti-globalists, the ones that used to protest the G7 summits and the like. But they were in favor of immigration, so it always seemed contradictory. Anyway, it's not like the right doesn't have its own equally contradictory positions.

        • bdangubic 7 days ago
          2 more

          amen! that will never happen though, nothing ever happens here that helps the workers and whatever rights we have now are slowly dwindling (immigrants or otherwise…)

          • andrekandre 7 days ago

              > nothing ever happens here that helps the workers and whatever rights we have now are slowly dwindling
            
            it's almost as if we need a 'workers party' or something... though i'd imagine first-past-the-post in the u.s. makes that difficult.
        • kstrauser 7 days ago
          8 more

          I agree with all of that. I've seen employers treat workers with H1B visas as slaves, basically. Local employees had a pretty decent work-life balance, but H1B employees got calls at 8PM on a Friday night to add a feature. And why not? What were they going to do, quit (and have, what is it, something like 48 hours to get out of the country)?

          I felt enormous sympathy for my coworkers here with that visa. Their lives sucked because there was little downside for sociopathic managers to make them suck.

          Most frustrating was when they were doing the same kind of work I was doing, like writing Python web services and whatnot. We absolutely could hire local employees to do those things. They weren't building quantum computers or something. Crappy employers gamed the system to get below-market-rate-salary employees and work them like rented mules. It was infuriating.

          • lokar 7 days ago
            7 more

            It sucks that people are treated that way.

            While working at Google I worked with many many amazing H1B (and other kinds) visa holders. I did 3 interviews a week, sat on hiring committees (reading 10-15 packets a week) and had a pretty good gauge of what we could find.

            There was just no way I could see that we could replace these people with Americans. And they got paid top dollar and had the same wlb as everyone else (you could not generally tell what someone’s status was).

            • kstrauser 7 days ago

              I fully, completely support the idea of visa programs running like that. If you want to pay top dollar for someone with unique skills to move here and help build our economy, I am fully behind this.

              But wanna use it as a way to undercut American jobs with 80-hour-a-week laborers, as I've personally witnessed? Nah.

              My criticisms against the H1B program are completely against the companies who abuse it. By all means, please do use it to bring in world-class scientists, researchers, and engineers!

            • c0redump 6 days ago

              This was true up until pretty recently. CS has come to be seen as a “prestigious” degree, and SWE as a “prestigious” career. Lots of kids who, 10 years ago, would have studied law, medicine, finance, or hard sciences, are studying CS. At my alma mater, CS is the largest major by a huge margin. The result of all this is there is a massive supply of smart and capable American citizens with formal training trying to break in to the job market, with limited success, due in no small part to the labor oversupply caused by immigration.

              https://www.linkedin.com/posts/jamesfobrien_tech-jobs-have-d...

            • guestbest 7 days ago
              4 more

              If the foreign candidates were so much superior to locally born candidates, as you explained, why not just open a campus in that country and save the best employees from having to uproot themselves from their native culture?

              • lokar 7 days ago
                2 more

                Good question. In many cases they did. The Zurich office has people from all over Europe.

                But, for existing teams they wanted (reasonably) to avoid splitting between locations. So you need someone local.

                • disgruntledphd2 7 days ago

                  I think the real reason for hiring locally is both that communication works better, and that the higher ups don't want to give the impression that their jobs could also be outsourced.

              • ajmurmann 6 days ago

                Time zones can be a real issue even with remote work. There are of course also arguments for in-person collaboration.

      • yobbo 7 days ago
        9 more

        > The idea that literally massive amounts of this workforce couldn't possibly be filled by domestic grads

        One theory is that the benefit they might be providing over domestic "grads" is lack of prerequisites for promotion above certain levels (language, cultural fit, and so on). For managers, this means the prestige of increased headcount without the various "burdens" of managing "careerists". For example, less plausible competition for career-ladder jobs which can then be reserved for favoured individuals. Just a theory.

        • boredatoms 7 days ago
          3 more

          I think that would backfire as the intrinsic culture of the company changes as it absorbs more people. Verticals would form from new hires who did manage to get promoted

          • catigula 7 days ago

            It's also not correct to view people as atomized individuals. People band together on shared culture and oftentimes ethnicity.

          • bradlys 7 days ago

            Which is exactly what has happened. Anyone in the industry for 15 years can easily see this.

        • A4ET8a8uTh0_v2 7 days ago
          2 more

          I will admit that this is the most plausible explanation of this phenomenon that explains the benefit to managers I have read on this issue so far.

          • catigula 7 days ago

            Putting aside economic incentives, which the wealthy were eager to reap, the vast majority of the technical labor force in this country came and still comes from (outside of SF) a specific race and we have huge incentives that literally everyone reading this has brushed up against, whether in support or against, to alter that racial makeup.

            Obviously the only real solution to creating an artificial labor shortage is looking externally from the existing labor force. Simply randomly hiring underserved groups didn't really make sense because they weren't participants.

            Where I work, we have two main goals when I'm involved in the technical hiring process: hire the cheapest labor and try to increase diversity. I'm not necessarily against either, but those are our goals.

        • throwaway2037 7 days ago
          3 more

          Careerists: What does this term mean?

          • TexanFeller 6 days ago
            2 more

            People more concerned about getting a promotion than they are taking pride in doing quality work that makes a difference. Corporate rubrics for promotion have little to do with doing great work and careerists focus heavily on playing these stupid games set up by HR execs.

            • throwaway2037 2 days ago

              Former President Obama (of the US) calls this a "false choice". Can you be both focused on the next promotion and providing lots of value in your current role? I think the answer is yes. Of course, there are people who seem to produce nothing but get promoted... in the case of software engineers, they are mostly promoted on the principle of "competence": you are a good software dev... so now you run this team (regardless of whether you'd be a good manager!).

      • spoaceman7777 7 days ago

        It's also worth noting that it's almost entirely native born Americans that are pushing back against nepotism. Extreme nepotism is still the norm (an expectation even) in most South and East Asian cultures. And it's quite readily acknowledged if you speak to newer hires who haven't realized yet that it is best kept quiet.

        It's a hard truth for many Americans to swallow, but it is the truth nonetheless.

        Not to say there isn't an incredible amount of merit... but the historical impact of rampant nepotism in the US is widely acknowledged, and this newer manifestation should be acknowledged just the same.

      • gedy 7 days ago
        4 more

        I was working at a SoCal company a couple years ago (where I’m from), and we had a lot of Chinese and Indian folks. I remember cracking up when one of the Indian fellows pulled me aside and asked me where I was from, because I sounded so different with my accent and lingo. He thought I was from some small European country, lol.

        • catigula 7 days ago
          2 more

          Just to note, interpersonally I find pretty much any group to be great on average, but as a participant in US labor who is sympathetic to other US laborers, this is clearly not something I can support.

          • hluska 7 days ago

            You can’t support having a good enough relationship with coworkers from outside of your country that you can relate cheerful anecdotes about them?

        • tcdent 7 days ago

          The language I use being from southern California has, on more than one occasion, sparked conversation about it.

          Sorry, dude, it's like, all I know.

      • therealpygon 7 days ago

        My opinion is that off-shore teams are also going to be some of the jobs more easily replaced, because many of these are highly standardized with instructions due to the turnover they have. I wouldn’t be surprised if these outsourcing companies are already working toward that end. They are definitely automating and/or able to collect significant training data from the various tools they require their employees to use for customers.

      • lostlogin 7 days ago
        4 more

        > The idea that literally massive amounts of this workforce couldn't possibly be filled by domestic grads is pretty hard to engage with.

        I hear this argument where I live for various reasons, but surely it only ever comes down to wages and/or conditions?

        If the company paid a competitive rate (ie higher), locals would apply. Surely blaming a lack of local interest is rarely going to be due to anything other than pay or conditions?

        • catigula 7 days ago
          3 more

          The company having access to the global labor force is the problem we're explicitly discussing. This isn't seen as something desirable by US workers.

          • lanstin 7 days ago
            2 more

            I was born in NC, and I have mostly experienced the large amount of immigration as a positive. Most of the people I grew up with were virulently anti-intellectual, mocking math and science learning, and most of them have gone on to be realtors and business folks, bankers even. All the people I've met from China or South Asia (the two demographics I work most closely with) value learning and science and math - not in some "let's have STEM summer camps" way, but when they meet some new 8-year-old they will ask them to solve some math problems (something precisely 1 of my kids' dozens of relatives does).

            I enjoy meeting the very smart people from all sorts of backgrounds - they share the values of education and hard work that my parents emphasized, and they have an appreciation for what we enjoy as software engineers; US born folks tend to have a bit of entitlement, and want success without hard work.

            I interview a fair number of people, and truly first-rate minds are a limited resource - there are only so many in each city (and not everyone will want to or be able to move for a career). Even with "off-shoring", one finds that after hiring in a given city for a while it gets harder, and the efficient thing to do is to open a branch in a new city.

            I don't know, perhaps the realtors from my class get more money than many scientists or engineers, and certainly more than my peers in India (whose salaries have gone from 10% of mine to about 40% of mine in the past decade or two), but the point is the real love of solving novel problems - in an industry where success leads to many novel problems.

            Hard work, interesting problems, and building things that actual people use - these are the core value prop for software engineering as a career; the money is pretty new and not the core; finding people who share that perspective is priceless. Enough money to provide a good start to your children and help your family is good, but never the heart of the matter.

            • tilne 6 days ago

              Money is absolutely the heart of the matter for employees and employers. I think it’s ridiculous to suggest otherwise.

      • underlipton 7 days ago
        3 more

        We all get 5 conspiracy theories before we advance from "understandably suspicious, given the complexity of the modern world" to "reliable tinfoil purchasers", and one of mine is that the prevalence of Indian execs and, to a lesser extent, Indian and Chinese workers in tech is a backdoor concession to countries who could open a demographic can of whoop-ass on us if they really wanted to. We let them bleed off the ambitious intellectuals who could become a political issue for their elite, and ours get convenient scapegoats for why businesses can't hire, train, and pay domestic workers well. As far as top men are concerned, it's a good deal.

        Nadella ascending to the leadership of Micro"I Can't Believe It's Not Considered A State-Sponsored Defense Corp"soft is what got my mildly xenophobic (sorry) gears turning.

        • hluska 7 days ago
          2 more

          Edited:

          Actually disregard, this isn’t worth it, but I don’t grant any freebies.

      • jayd16 7 days ago
        7 more

        I mean, aren't 3 out of 8 humans from India or China? If the company is big enough to appeal to a global applicant pool, it's a bit expected.

        • sokoloff 7 days ago
          6 more

          It's presumably (from context) a company campus in the US that they're talking about. I wouldn't expect 3 of 8 people legally authorized to work in the US to be Chinese or Indian combined.

          Other than a few international visitors, I’d expect the makeup to look like the domestic tech worker demographics rather than like the global population demographics.

          • bradlys 7 days ago
            4 more

            Also, anyone who has worked in these companies knows it's much larger than 3 out of 8… comical to act like it's only 3/8.

            • senderista 7 days ago
              3 more

              I estimate AWS engineering is maybe 80% Indian and another 10% Chinese. Less at higher levels though.

              • bradlys 7 days ago
                2 more

                It always blows my mind that 75% of H1B admittance is Indian. Then you live in SFBA for 10 years and it's not really a surprise anymore.

                • c0redump 6 days ago

                  Certain suburbs of Seattle (Redmond, Bothell) are pretty much entirely Indian.

          • apex3stoker 7 days ago

            I think most software companies hire from computer science graduates of US colleges. It's likely that international students make up a large percentage of these graduates.

      • renewiltord 7 days ago
        6 more

        [flagged]

        • VonTum 7 days ago
          2 more

          What a weird crabs-in-a-bucket argument against unions. "Don't empower yourself and the rest of your colleagues because they might get powerful enough to kick you out"?

          The whole reason H1Bs were invented is to disempower the existing workforce. Not reaching for a (long overdue) tool of power for tech workers is playing right into their hand.

        • catigula 7 days ago
          3 more

          The funny thing is that you're not wrong, and this is yet another feather in the cap of the "foreign labor are literal scabs" argument.

          • throwaway2037 5 days ago

            This comment made me laugh. I have not seen the term "scab" since the late 1980s when there were a bunch of union strikes in my area. It is funny to see it applied to white collar (office) workers.

            Edit: I found this funny quote describing a scab from the early 1900s:

            https://en.wikipedia.org/wiki/Jack_London#Diatribe_about_sca...

                > After God had finished the rattlesnake, the toad, and the vampire, he had some awful substance left with which he made a scab. A scab is a two-legged animal with a corkscrew soul, a water brain, a combination backbone of jelly and glue. Where others have hearts, he carries a tumor of rotten principles.
          • renewiltord 7 days ago

            The history of unions and the past of the AFL-CIO is filled with successful lobbying to prevent immigrants from becoming American. They're not going to stop suddenly today.

            Knowing one’s enemy is key to fighting them.

    • adamtaylor_13 7 days ago

      I’m realizing that 100% of all product managers I have ever worked with were just ZIRP-PMs.

      I have never once worked with a product manager who I could describe as “worth their weight in gold”.

      Not saying they don’t exist, but they’re probably even rarer than you think.

    • boogieknite 7 days ago

      First job out of college I was one of these PMs. Luckily I figured it out quickly and would spend maybe 2 hours a day working, 6 hours a day teaching myself to program. I can't believe that job existed and they gave it to me. One of my teammates was moved to HR and he was distraught over how he actually had work to do.

    • icedchai 7 days ago

      I worked at a small company with more PMs than developers. It was incredible how much bull it created.

    • cavisne 6 days ago

      My theory for these PMs is it's basically a cheap way to take potential entrepreneurs off the market. It's hard to predict if a startup will succeed, but one genre of success is having a Type A "fake it till you make it" non-technical cofounder who can keep raising long enough to get product-market fit.

      These types all go to the same schools and do really well, interview the same, and value the prestige of working in big tech. So it's pretty easy to identify them and offer them a great career path and take them off the market.

      Technical founders are way trickier to identify as they can be dropouts, interview poorly, not value the prestige etc.

    • aswegs8 6 days ago

      How are TPMs optional? In my experience they provide more value than PMs that don't understand technology.

      • hn_throwaway_99 6 days ago

        Perhaps the terminology differs between companies, but in my experience TPM means technical program manager. For large projects they were responsible for creating project Gantt charts, identifying blockers early, and essentially "greasing the wheels" between disparate teams.

        Again, IMO the good ones added a lot of value by making sure no balls got dropped, which is easy to do with large, multi-team projects. Most of them, though, did a lot of just "status checks" and meeting updates.

  • mlsu 7 days ago

    I suspect that these "AI layoffs" are really "interest rate" layoffs in disguise.

    Software was truly, truly insane for a bit there. Straight out of college, no-name CS degree, making $120k, $150k (back when $120k really meant $120k)? The music had to stop on that one.

    • spamizbad 7 days ago

      Yeah, my spiciest take is that Jr. Dev salaries really started getting silly during the 2nd half of the 2010s. It was ultimately supply (too little) and demand (too much) pushing them upward, but it was a huge signal we were in a bubble.

      • LPisGood 7 days ago
        6 more

        As someone who entered the workforce just after this, I feel like I missed the peak. A ton of those people got boatloads of money, great stock options, and many years of experience that they can continue to leverage for excellent positions.

        • trade2play 7 days ago
          4 more

          I joined in 2018.

          Honestly it was 10 years too late. The big innovations of the 2010 era were maturing. I’ve spent my career maintaining and tweaking those, which does next to zero for your career development. It’s boring and bloated. On the bright side I’ve made a lot of money and have no issues getting jobs so far.

          • Aeolun 7 days ago
            2 more

            I think my career started in 2008? That was a great time to start for the purpose of learning, but a terrible one for compensation. Basically nobody knew what they were doing, and software wasn't yet the ticket to free money that it became later.

            • dustingetz 7 days ago

              Data engineering was free money for nothing at all circa 2014; they got paid about 1.5x a fullstack application developer for 0.5x the work, because frontend/UI work was considered soft and unworthy.

          • lurking_swe 7 days ago

            there’s always interesting work out there. It just doesn’t always align with ethical values, good salary, or work life balance. There’s always a trade off.

            For example, think of SpaceX, Waymo, parts of US national defense, and the sciences (cancer research, climate science - analyzing satellite images, etc.). They are doing novel work that's certainly not boring!

            I think you’re probably referring to excitement and cutting edge in consumer products? I agree that has been stale for a while.

        • idkwhattocallme 7 days ago

          Don't worry, there is always another bubble on the horizon

    • nyarlathotep_ 7 days ago

      The irony now is that 120k is basically minimum wage for major metros (and in most cases that excludes home ownership).

      Of course, that growth in wages in this sector was a contributing factor to home/rental price increases as the "market" could bear higher prices.

      • rekenaut 7 days ago
        11 more

        I feel that saying "120k is basically minimum wage for major metros" is absurd. As of 2022, there are only three metro areas in the US that have a per capita income greater than $120,000 [1] (Bay Area and Southwest Connecticut). Anywhere else in the US, 120k is doing pretty well for yourself, compared to the rest of the population. The average American working full time earns $60k [2]. I'm sure it's not a comfortable wage in some places, but "basically minimum wage" just seems ignorant.

        [1] https://en.wikipedia.org/wiki/List_of_United_States_metropol...

        [2] https://en.wikipedia.org/wiki/Personal_income_in_the_United_...

        • lamename 7 days ago
          8 more

          I disagree. Your data doesn't make the grandparent's assertion false. Cost of living != per capita or median income. Factoring in sensible retirement, expensive housing, inflation, etc., I think the $120k figure may not be perfect, but it is close enough to reality.

          • BlueTemplar 7 days ago
            6 more

            Since when "minimum wage" means "sensible retirement" ?

            More like it means ending up with government-provided bare minimum handouts to not have you starve (assuming you somehow manage to stay on minimum wage all your life).

            • lamename 7 days ago
              5 more

              We agree, minimum wage doesn't mean that. And in a large metro area, that's why $120k is closer to min wage than to a good standard of living and building retirement.

              • tekla 6 days ago
                3 more

                Absolutely absurd. I lived in NYC making well less than that for years and was perfectly comfortable.

                The "min wage" of HN seems to be "living better than 98% of everyone else"

                • lamename 6 days ago
                  2 more

                  Adjusted for inflation? Without (crippling) debt accrual, and with an adequate emergency fund, retirement, etc.? Did you have children or childcare expenses? These all eat into that total compensation quickly these days, which is the main argument in this particular thread of replies.

                  • tekla 5 days ago

                    No to kids, yes to everything else (except debt, did have lots of school loans)

          • nyarlathotep_ 7 days ago

            Correct, I mean in the sense of "living a standard of life that my parents and friends parents (all of very, very modest means) had 20 years ago when I was a teenager."

            I mean a real wage associated with standards of living that one took for granted as "normal" when I was young.

        • impossiblefork 7 days ago
          2 more

          It actually is basically minimum wage for major metros.

          If I took a job for ~100k in Washington, I'd live worse than I did as a PhD student in Sweden. It would basically suck. I'm not sure ~120k would make things that different.

          • nyarlathotep_ 6 days ago

            Yep exactly. I mean "maintaining a basic material standard of living that even non 'high-earners' had twenty years ago"

            The erosion of the standard of living in the US (and the West more broadly) is not something to be ignored in any discussion of wages.

      • alephnerd 7 days ago
        16 more

        CoL in London or Dublin is comparable to much of the US, but new grad salaries are in the $30-50k range.

        The issue is salary expectations in the US are much higher than those in much of Western Europe despite having similar CoL.

        And $120k for a new grad is only a tech specific thing. Even new grad management consultants earn $80-100k base, and lower for other non-software roles and industries.

        • ponector 7 days ago
          5 more

          I recently saw an open position for a senior dev with a 60k salary and hybrid 3 days per week in London. Insane!

          • alephnerd 7 days ago
            3 more

            Yep. And costs are truly insane in Greater London. Bay Area level housing prices and Boston level goods prices, but Mississippi or Alabama level salaries.

            But that's my point - salaries are factored based on labor market demands and comparative performance of your macroeconomy (UK high finance and law salaries are comparable with the US), not CoL.

            • lurk2 7 days ago
              2 more

              > Boston level goods prices

              I’ve never been to Boston. Why are the prices high there?

              • tilne 6 days ago

                They keep throwing the tea in the harbah

          • zelphirkalt 6 days ago

            I mean, seeing an open position does not mean that position is ever filled. It could also be a fake position, posted to create the "we are growing and hiring!" impression, or one mandated by law to be advertised but made artificially worse because they already have someone internal they want to move into it.

        • FirmwareBurner 7 days ago
          9 more

          >but new grad salaries are in the $30-50k range

          But in the UK and Ireland they get free healthcare, paid vacation, sick leave and labor protections, no?

          • alephnerd 7 days ago
            4 more

            The labor protections are basically ignored (you will be expected to work off-the-clock hours in any white collar role), and the free healthcare portion gets paid out of employers' pockets via taxes, so it comes out about the same as a $70-80k base (and associated taxes) would in much of the US.

            There's a reason you don't see much new grad hiring in France (where they actually try to enforce work hours), and they consequently have a high youth unemployment rate.

            Though even these new grad roles are at risk of moving to CEE, where governments are giving massive tax holidays to the tune of $10-20k per employee if you invest enough.

            And the skills gap I mentioned about CS in the US exists in Western Europe as well. CEE, Israel, and India are the only large tech hubs that still treat CS as an engineering discipline instead of only as a form of applied math.

            • lazyasciiart 7 days ago

              > The labor protections are basically ignored (you will be expected to work off the clock hours in any white collar role),

              I happen to have a sibling in consulting who was seconded from London to New York for a year, doing the same work for the same company, and she found the work hours in NY to be ludicrously long (and not for a significant productivity gain: more required time-at-desk). So there are varying levels of "expected to work off the clock hours".

            • 0xpgm 7 days ago
              2 more

              What is the difference between treating CS as an engineering discipline vs a branch of applied math?

              • kilpikaarna 7 days ago

                (According to this guy apparently) low level vs algorithms focus. CE or CS basically.

          • __turbobrew__ 7 days ago
            3 more

            > free healthcare

            I pay over 40% effective tax rate. Healthcare is far from free.

            • FirmwareBurner 6 days ago
              2 more

              But your health problems won't bankrupt you or make you homeless I presume.

              • GoatInGrey 6 days ago

                The vast majority of Americans, who carry health insurance, also will not be bankrupted by health problems. Though they will earn far greater amounts of money for their families by working in the US compared to the UK.

        • rcpt 7 days ago

          Maybe the EU is different but in the US there's no software engineering union. Our wages are purely what the market dictates.

          Think they're too high? You're free to start a company and pay less.

      • bravesoul2 7 days ago
        5 more

        Yeah, 120k is the maximum I have earned over 20 years in the industry. I started off circa 40k; maybe that's 70k adjusted for inflation. Not in the US.
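        For concreteness, here's a minimal sketch of that inflation adjustment. The CPI index values below are illustrative assumptions (roughly US-style figures), not data for any particular country or start year:

            # Minimal sketch of the inflation adjustment mentioned above.
            # Both CPI index values are assumed purely for illustration.
            def adjust_for_inflation(amount, cpi_then, cpi_now):
                """Scale a nominal amount by the ratio of price levels."""
                return amount * (cpi_now / cpi_then)

            starting_salary = 40_000          # nominal starting salary (~20 years ago)
            cpi_then, cpi_now = 195.3, 314.2  # assumed index values: start year vs. today

            print(f"~{adjust_for_inflation(starting_salary, cpi_then, cpi_now):,.0f} in today's money")
            # roughly 64,000 with these assumed figures - the same ballpark as the ~70k above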

        • lostlogin 7 days ago
          4 more

          It’s always going to be difficult to compare countries. Things like healthcare, housing, childcare, schooling, taxes and literally every single thing are going to differ.

          • bravesoul2 7 days ago
            3 more

            The arbitrage is: when you are young and healthy, get that US salary and save, then retreat home in your 40s and 50s. Stay healthy, of course.

            • adaptbrian 7 days ago
              2 more

              Lots of tech folks get burnt out without knowing it. If you're tired all the time, drastically alter your diet; it could change your life for the better.

    • catigula 7 days ago

      That really only happened in HCOL areas.

      • bravesoul2 7 days ago

        HCOL wasn't the driver though. It was the abundance of investment and the desire to hire. If the titans could collude to pay engineers half as much, they would. They tried.

      • xp84 7 days ago

        Sure, but there was a massive concentration of such people in those areas.

  • bachmeier 7 days ago

    > I mean, we hired someone on my team to attend cross functional meetings because our calendars were literally too full to attend.

    Some managers read Dilbert and think it's intended as advice.

    • trhway 7 days ago

      AI has also been consuming Dilbert as part of its training...

      • DonHopkins 7 days ago
        3 more

        Worse yet, AI has been consuming Scott Adams quotes as part of its training...

        "The reality is that women are treated differently by society for exactly the same reason that children and the mentally handicapped are treated differently. It’s just easier this way for everyone. You don’t argue with a four-year old about why he shouldn’t eat candy for dinner. You don’t punch a mentally handicapped guy even if he punches you first. And you don’t argue when a women tells you she’s only making 80 cents to your dollar. It’s the path of least resistance. You save your energy for more important battles." -Scott Adams

        "Women define themselves by their relationships and men define themselves by whom they are helping. Women believe value is created by sacrifice. If you are willing to give up your favorite activities to be with her, she will trust you. If being with her is too easy for you, she will not trust you." -Scott Adams

        "Nearly half of all Blacks are not OK with White people. That’s a hate group." -Scott Adams

        "Based on the current way things are going, the best advice I would give to White people is to get the hell away from Black people. Just get the fuck away. Wherever you have to go, just get away. Because there’s no fixing this. This can’t be fixed." -Scott Adams

        "I’m going to back off from being helpful to Black Americas because it doesn’t seem like it pays off. ... The only outcome is that I get called a racist." -Scott Adams

        • dennis_jeeves2 6 days ago
          2 more

          >Worse yet

          Should have been 'better still'.

          • DonHopkins 5 days ago

            Thank you!!! It's so awesome when an unrepentant racist piece of shit chimes in to perfectly prove my point!

            I swear, folks: dennis_jeeves2 is not my sock puppet, the way Scott "plannedchaos" Adams is his own sock puppet and biggest fan.

            Scott Adams Poses as His Own Fan on Message Boards to Defend Himself:

            https://comicsalliance.com/scott-adams-plannedchaos-sockpupp...

            >Dilbert creator Scott Adams came to our attention last month for the first time since the mid to late '90s when a blog post surfaced where he said, among other things, that women are "treated differently by society for exactly the same reason that children and the mentally handicapped are treated differently. It's just easier this way for everyone."

            >Now, he's managed to provoke yet another internet maelstorm of derision by popping up on message boards to harangue his critics and defend himself. That's not news in and of itself, but what really makes it special is how he's doing it: by leaving comments on Metafilter and Reddit under the pseudonym PlannedChaos where he speaks about himself in the third person and attacks his critics while pretending that he is not Scott Adams, but rather just a big, big fan of the cartoonist.

            >And what makes it really, really special is the level of spectacular ego and hilarious self-congratulation suddenly on display in the comments when you realize they were written by Scott Adams' number one fan... Scott Adams. [...]

  • icedchai 7 days ago

    I've worked at smaller companies where half the people in the meetings were just there because they had nothing else to do. Lots of "I'm a fly on the wall" and "I'll be a note taker" types. Most of them contributed nothing.

    • xp84 7 days ago

      My friend's company (he was VP of Software & IT at a non-tech company) had a habit of meetings with no particular agenda and no decisions that needed making. Just meeting because it was on the calendar, discussing any random thing someone wanted to blab about. Not how my friend ran his team but that was how the rest did.

      Then they had some disappointing results due to their bad decision-making elsewhere in the company, and they turned to my friend and said "Let's lay off some of your guys."

      • osigurdson 7 days ago

        It is almost like once a company gets rolling, there is sufficient momentum to keep it going even if many layers aren't doing very much. The company becomes a kind of meta-economic zone where nothing really matters. Politics and fights emerge between departments and layers but have nothing to do with making a better product or service. This can go on for decades if the moat is large enough.

    • Nasrudith 7 days ago

      The first mistake is thinking that contribution must be in the form of output instead of ingestion. Of course, meetings often aren't the most efficient way to do that; it's more about being forced to listen (at least officially) so there isn't an excuse.

      • icedchai 6 days ago

        This is true, but generally speaking there should be more people "producing" than "ingesting." This is often not the case. Most meetings are useless, and this has become much worse in modern times. Example: agile "scrum" and its daily stand ups, which inevitably turn into status reports.

        At some point in the 2000's, every manager decided they needed weekly 1:1's, resulting in even more meetings. Many of these are entirely ineffective. As one boss told me, "I've been told I need to have 1:1's, so I'm having them!" I literally sat next to him and talked every day, but it was a good time to go for coffee...

  • PeterStuer 7 days ago

    "Hiring someone gave my VP more headcount and therefore more clout"

    Which is the sole reason automation will not make most people obsolete until the VP level themselves are automated.

    • dlivingston 7 days ago

      No, not if the metric by which VPs get clout changes.

      • monkeyelite 7 days ago

        That metric is evaluated deep in the human psyche.

      • thfuran 7 days ago

        The more cloud spend the better. Take 10% of it as a bonus?

      • 0xpgm 7 days ago

        It's about to change to doing more with less headcount and higher AI spend

    • Nasrudith 7 days ago

      Automation is just one form of "face a sufficiently competitive marketplace such that the company can no longer tolerate the dead-weight loss of their egos".

  • JSR_FDED 7 days ago

    I don’t doubt there’s a lot of knowledge workers who aren’t adding value.

    I’m worried about the shrinking number of opportunities for juniors.

    • hn_throwaway_99 7 days ago

      I agree with this, but I still think that offshoring is much more responsible for this than AI.

      I have definitely seen real world examples where adding junior hires at ~$100k+ is being completely forgone when you can get equivalent output from someone making $40k offshore.

  • phendrenad2 7 days ago

    To the contrary - they were providing value to the VP who benefitted from inflated headcount. That's "real value"; it's just that a rogue agent is misaligned with the company's goals.

    And AI cannot provide that kind of value. Will a VP in charge of 100 AI agents be respected as much as a VP in charge of 100 employees?

    At the end of the day, we're all just monkeys throwing bones in the air in front of a monolith we constructed. But we're not going to stop throwing bones in the air!

    • idkwhattocallme 7 days ago

      True! I golfed with the president of the division on a Friday (during work) and we got to the root of this. Companies would rather burn money on headcount (counted as R&D) than show profits and pay the government taxes. When you have 70%+ margin on your software, you have money to burn. Dividends back to shareholders were not rewarded during ZIRP.

      On VPs being respected: at the companies I worked at, I found VPs and their directs were like nobles in a feudal kingdom, constantly quibbling/battling for territory. There were alliances with others and full-on takeouts at points. One VP described it as Game of Thrones. Not sure how this all changes when your kingdom is a bunch of AI agents that presumably anyone can operate.

      • lotsofpulp 7 days ago
        5 more

        > Companies would rather burn money on headcount (counted as R&D) than show profits and pay the govt taxes

        The data does not support this. The businesses with the highest market caps are the ones with the highest earnings.

        https://companiesmarketcap.com/

        Sort by # of employees and you get a list of companies with lower market caps.

        • trade2play 7 days ago
          2 more

          Google's/Facebook's earnings are so high they can afford to be wildly wasteful with headcount and still be market leaders.

          • Ekaros 7 days ago

            Those two are perfect examples of burning insane amounts of money and still showing profits beyond that... the whole metaverse investment, and all the products that Google has abandoned. Even refunding all the payments, as with Stadia...

        • versteegen 7 days ago
          2 more

          If you sort by number of employees you get companies where those employees aren't in R&D divisions.

          • lotsofpulp 7 days ago

            Their comment reads to me as if businesses hire employees (regardless of the work they do, since we are discussing employees that don't do anything) because investors consider employees as R&D (even useless ones).

            Either way, there is no data I have seen to suggest market cap correlates with number of employees. The strongest correlation I see is to net income (aka profit), and after that would be growing revenues and/or market share.

    • BriggyDwiggs42 7 days ago

      We really oughta work on setting up systems that don’t waste time on things like this. Might be hard, but probably would be worth the effort.

  • paulcole 7 days ago

    Just curious, did you put yourself in the superfluous category either time?

    • idkwhattocallme 7 days ago

      Ultimately (and sadly) yes. While I never habitually or intentionally attended meetings just to look busy, I did work on something I knew was a long shot to create value for the business. I worked on 0-1 products that a more disciplined company would not (or should not) have attempted. I left both of my own accord, seeing the writing on the wall.

      • dehrmann 7 days ago

        > I worked on 0-1 products that if the company was more disciplined would not (or should not) have attempted.

        You said you were at large companies, so this is a hard call to make. A lot of large companies work on lots of small products knowing they probably won't work, but one of them might, so it's still worth it to try. It's essentially the VC model.

  • 827a 7 days ago

    Half of everyone at most large companies could be retired with no significant impact to the company's ability to generate revenue. The problem has always been figuring out which half.

  • lukev 7 days ago

    Whenever I think about AI and labor, I can't help thinking about David Graeber's [Bullshit Jobs](https://en.wikipedia.org/wiki/Bullshit_Jobs).

    And there's multiple confounding factors at play.

    Yes, lots of jobs are bullshit, so maybe AI is a plausible excuse to downsize and gain efficiency.

    But also, the dynamic that causes the existence of bullshit jobs hasn't gone away. In fact, assuming AI does actually provide meaningful automation or productivity improvement, it might well be the case that the ratio of bullshit jobs increases.

    • throw234234234 5 days ago

      It's hard to automate something that is hard to define, so I see the productive jobs/workers being punished by AI more so than those jobs. Generally, what I see anecdotally:

      - Value creators (i.e., the ones historically carrying companies, per the 80%/20% rule) are generally the ones cautious and/or fearful of AI. They are the ones who carried most of the company. Their output is measurable and definable, and so able to be automated.

      - The people in the jobs you mention in your post conversely are usually the ones most excited about AI. The ones in meetings all day, in the corporate machine. By definition their job is already not well defined anyway - IMV this is harder to automate. They are often there for other reasons other than "productive output" - e.g. compliance, nepotism, stakeholder management, etc.

    • alvah 7 days ago

      Exactly. For as long as I can remember, in any organisation of any reasonable size I have worked in, you could get rid of the ~50% of the headcount who aren't doing anything productive without any noticeable adverse effects (on the business at least; obviously the effects on the individuals would be somewhat adverse). This being the case, there are obviously many factors other than pure efficiency keeping people employed, so why would an AI revolution on its own create some kind of massive Schumpeterian shockwave?

      • ryandrake 7 days ago
        4 more

        People keep tossing around this 50% figure like it's a fact, but do you really think these companies have half their staff just not doing anything? It seems absurd, and I honestly don't believe it.

        Everywhere I've ever worked, we had 3-4X more work to do than staff to do it. It was always a brutal prioritization problem, and a lot of good projects just didn't get done because they ended up below the cut line, and we just didn't have enough people to do them.

        I don't know where all these companies are that have half their staff "not doing anything productive" but I've never worked at one.

        What's more likely? 1. Companies are (for reasons unknown) hiring all these people and not having them do anything useful, or 2. These people actually do useful things, but HN commenters don't understand those jobs and simply conclude they're doing nothing?

        • trade2play 7 days ago

          All of the big software companies are like the parent describes, in most of their divisions.

          Managers always want more headcount. Bigger teams. Bigger scope. Promotions. Executives have similar incentives or don’t care. That’s the reason why they’re bloated.

        • alvah 7 days ago
          2 more

          Have you heard of Twitter? 80-90% reduction in numbers, visible effects to the user (resulting from the headcount cuts, not the politics of the owner)? Pretty much zero.

          • hnaccount_rng 7 days ago

            That's a difficult example. I don't think anyone would reasonably expect the engineering artifact twitter.com to break. But the business artifact did break, at least to a reasonable degree. Ad revenue is still down (per business news, and the ads I'm seeing are from less well-resourced brands). And yes, that has to do with "answering emails with poop emojis" and "laying off content checkers".

  • ozim 7 days ago

    Bad part is all those guys attending meetings start feeling important. They start feeling like they are doing the job.

    I’ve seen those guys it is painful to watch.

  • ivape 7 days ago

    I've said this many times, that the abundance and wealth of the tech industry basically provided vast amounts of Universal Basic Income to a variety of roles (all of agile is one example). We're at a critical moment where we actually have to look at cost-cutting on this UBI.

  • federiconafria 7 days ago

    "my VP more headcount and therefore more clout"

    This had me thinking, how are they going to get "clout", by comparing AI spending?

  • daxfohl 7 days ago

    Agree, but two questions:

    First, is AI really a better scapegoat? "Reducing headcount due to end of ZIRP" maybe doesn't sound great, but "replacing employees with AI" sounds a whole lot worse from a PR perspective (to me anyway).

    Second, are companies actually using AI as the scapegoat? I haven't followed it too closely, but I could imagine that layoffs don't say anything about AI at all, and it's mostly media and FUD inventing the correlation.

    • ledauphin 7 days ago

      the one does actually sound worse because... it's actually worse. it clarifies that the companies themselves were playing games with people's livelihoods because of the potential for profit.

      whereas "AI" is intuitively an external force; it's much harder to assign blame to company leadership.

      • daxfohl 7 days ago
        2 more

        I'd read the first as adjusting to market demand, not playing with people's lives. If it were construed as playing with lives, that could apply to basically any investment.

        • tilne 6 days ago

          Agreed. You could make the case that employment in general is playing with someone’s life.

    • leflambeur 7 days ago

      isn't the scapegoat he or she who gets sacrificed? I think engineers are that

  • __turbobrew__ 7 days ago

    Turns out 50% of white collar jobs are just daycare for adults.

  • matthest 7 days ago

    Does anyone else think the fact that companies hire superfluous employees (i.e. bullshit jobs) is actually fantastic?

    Because they don't have to do that. They could just operate at max efficiency all the time.

    Instead, they spread the wealth a bit by having bullshit jobs, even if the existence of these jobs is dependent on the market cycle.

    • nyarlathotep_ 7 days ago

      > Does anyone else think the fact that companies hire superfluous employees (i.e. bullshit jobs) is actually fantastic?

      I do.

      It's much more important that people live a dignified life and be able to feed their families than "increasing shareholder value" or whatever.

      I'm a person that would be hypothetically supportive of something like DOGE cuts, but I'd rather have people earning a living even with Soviet-style make work jobs than unemployed. I don't desire to live in a cutthroat "competitive" society where only "talent" can live a dignified life. I don't know if that's "wealth distribution" or socialism or whatever; I don't really care, nor make claim it's some airtight political philosophy.

      • andrekandre 7 days ago

          > It's much more important that people live a dignified life and be able to feed their families than "increasing shareholder value" or whatever.
        
        It's just my intuition, but talking to many people around me, I get the feeling this is why people on both the "left" and "right" are, in a lot of ways (for lack of a better word), irate at the system as a whole... if that's true, I doubt AI will improve the situation for either...
      • leflambeur 7 days ago
        2 more

        tech bros think not only that that system is good, but that they'd be the winners

        • tilne 6 days ago

          I think the more optimistic interpretation would be that companies eliminating bullshit jobs would provide signal on which jobs aren’t bullshit, and then individuals and the job prep/education systems could align to this.

          That’s very optimistic! I don’t fully agree with it, but I certainly know some very intelligent people that I wish were contributing more to the world than they do as a pawn in a game of corporate chess.

  • disambiguation 7 days ago

    > But really, a lot of the knowledge worker jobs it "replaces" weren't providing real value anyway.

    I think quotes around "real value" would be appropriate as well. Consider all the great engineering it took to create Netflix, valued at $500b - which achieves what SFTP does for free.

    • jsnider3 7 days ago

      Netflix's value comes from being convenient and compatible with the copyright system in a way sharing videos P2P definitely isn't.

      • disambiguation 7 days ago

        I'm not advocating for p2p, but rather drawing attention to the word "value" and what it means to create it. For example, would netflix as a piece of software hold any value if the company were to suddenly lose all its copyrights and IP licenses? Whereas something like an operating system or excel has standalone utility, netflix is only as valuable as its IP. The software isn't designed to create value, but instead to fully utilize the value of a piece of property. It's an important distinction to keep in mind especially when designing such software. Now consider that in the streaming world there isn't just netflix, but prime, Hulu, HBO, etc. Etc.

        The parent comment was complaining about certain employees' contributions to "real value" or lack thereof. My question is, how do you ascertain the value of work in this context, where the software isn't what's valuable but the IP is? And further, how do you justify working on a product that's already a solved problem and still refer to it as "creating 'real' value"?

      • lazyasciiart 7 days ago

        And their increasingly restrictive usage policies are basically testing how important the 'convenient' piece is.

tdeck 7 days ago

Maybe someone can help me wrap my head around this in a different way, because here's how I see it.

If these tools are really making people so productive, shouldn't it be painfully obvious in companies' output? For example, if these AI coding tools were an amazing productivity boost in the end, we'd expect to see software companies shipping features and fixes faster than ever before. There would be a huge burst in innovative products and improvements to existing products. And we'd expect that to be in a way that would be obvious to customers and users, not just in the form of some blog post or earnings call.

For cost center work, this would lead to layoffs right away, sure. But companies that make and sell software should be capitalizing on this, and only laying people off when they get to the point of "we just don't know what to do with all this extra productivity, we're all out of ideas!". I haven't seen one single company in this situation. So that makes me think that these decisions are hype-driven short term thinking.

  • acrooks 6 days ago

    I wonder if some of this output will take a while to be visible en masse.

    For example, I founded a SaaS company late last year which has been growing very quickly. We are on track to pass $1M ARR before the company's first birthday. We are fully bootstrapped, 100% founder owned. There are 2 of us. And we feel confident we could keep up this pace of growth for quite a while without hiring or taking capital. (Of course, there's an argument that we could accelerate our growth rate with more cash/human resources)

    Early in my career, at different companies, we often solved capacity problems by hiring. But my cofounder and I have been able to turn to AI to help with this, and we keep finding double digit percentage productivity improvements without investing much upfront time. I don't think this would have been remotely possible when I started my career, or even just a few years ago when AI hadn't really started to take off.

    So my theory as to why it doesn't appear to be "painfully obvious": you've never heard of most of the businesses getting the most value out of this technology, because they're all too small. On average, the companies we know about are large. It's very difficult for them to reinvent themselves on a dime to adapt to new technology - it takes a long time to steer a ship - so it will take a while. But small businesses like mine can change how we work today and realize the results tomorrow.

    • AndrewKemendo 6 days ago

      This is exactly how it’s going down

      Companies that needed to hire 10 people to grow, only need to hire 9 now

      In less than 5 years that’s going to be 7 or 6 people

      I’m doing more with 5 engineers than I was able to do with 15 just 10 years ago

      Part of that is that libraries etc. have matured too, but we've reached the point, from a developer perspective, where you don't need to build new technologies; you just need to put what exists together in new ways

      All the parts exist for any technology to be built, it’s about composition and distribution at this point

    • mixmastamyk 6 days ago

      Curious, if you don’t mind mentioning: what AIs are you using (besides the obvious Claude, etc.), and for what, to augment your reach?

      • acrooks 4 days ago
        2 more

        I think it's important to start with identifying your bottlenecks, and work from there to determine the solutions you need. In the case of our business, I feel that my time is best spent talking to customers and prospects. These discussions directly impact revenue, retention, product strategy, etc.

        So then I start thinking ... what sort of things am I doing that take me away from talking to customers? I spend a lot of time on implementation. I spend a lot of time on administrative sales tasks (chasing people for meetings, writing proposals, negotiating contracts). I spend a lot of time on meeting prep and follow-up. And many more. So I'm always on the hunt for tools with a problem already in mind.

        In terms of specific tools...

        Claude is a great backbone for a lot. Both the chatbot but also the API. I use the chatbot to help me write proposals and review contracts. I used it to write scripting to automate our implementation process which was once quite manual and is now a button click.

        Cursor has been a game changer. In particular, it means that we spend very little time on bugfixes and small features. This keeps my CTO almost 100% focused on big picture needle-moving projects. We are now doing some research into things like Codex/Claude Code to see how we could improve this further.

        Another app that I really love is called Granola. It automatically joins all of my meetings, writes notes, reminds me what promises I made, helps me write follow-up emails, and helps me prep for meetings.

        Finally, we use an email client called Sedna (disclaimer: I used to work at Sedna) which is fully programmable. We've been building our own internal tooling (leveraging the Claude API) on top of Sedna to help automate different workflows. For example, my inbox is now perfectly prioritised. In many cases, when I receive emails from customers, an AI has already written a draft that I can review and send. I know there are a lot of out-of-the-box tools out there like Fyxer to help with things like this, but I've really appreciated the ability to get exactly what we want by building certain things ourselves.
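
        To make that last point concrete, here's a minimal sketch of the kind of Claude-API draft-reply tooling I'm describing (the model name, prompt and function are illustrative, not our actual code):

            import anthropic

            client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

            def draft_reply(customer_email: str) -> str:
                """Ask the model for a draft reply that a human reviews before sending."""
                response = client.messages.create(
                    model="claude-3-5-sonnet-latest",  # illustrative model name
                    max_tokens=600,
                    system=("You draft concise, friendly replies to customer emails. "
                            "Never commit to pricing or dates; flag anything needing a human decision."),
                    messages=[{"role": "user", "content": customer_email}],
                )
                return response.content[0].text

            print(draft_reply("Hi, can you walk me through your onboarding process?"))

        The real integration obviously involves the email client's API and a lot of prompt tuning, but the core call is roughly that simple.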

    • aitchnyu 5 days ago

      Do existing teams (and ossified office politics) benefit from n-times-faster devs? I've witnessed (implied) Gantt charts shaped such that shrinking the dev activities wouldn't shrink the chart.

  • topspin 7 days ago

    "shouldn't it be painfully obvious in companies' output?"

    No.

    The bottleneck isn't intellectual productivity. The bottleneck is a legion of other things; regulation, IP law, marketing, etc. The executive email writers and meeting attenders have a swarm of business considerations ricocheting around in their heads in eternal battle with each other. It takes a lot of supposedly brilliant thinking to safely monetize all the things, and many of the factors involved are not manifest in written form anywhere, often for legal reasons.

    One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields. Another is art "creatives": graphic artists in particular. They're early victims and likely to be fully supplanted in the near future. A little further on and it'll be writers, actors, etc.

    • ImaCake 7 days ago

      Maybe this means that LLMs are ultimately good for small business. If large business is constrained by being large and LLMs are equally accessible to 5 people or 100, then surely what we will see is increased productivity in small companies?

      • topspin 7 days ago

        My direct experience has been that even very small tech businesses contend with IP issues as well. And they don't have the means to either risk or deliberately instigate a fight.

    • throwaway2037 7 days ago

          > One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.
      
      Great point. The perfect example: (From Wiki):

          > In 2024, Hassabis and John M. Jumper were jointly awarded the Nobel Prize in Chemistry for their AI research contributions for protein structure prediction.
      
      AFAIK: They are talking about DeepMind AlphaFold.

      Related: (Also from Wiki):

          > Isomorphic Labs Limited is a London-based company which uses artificial intelligence for drug discovery. Isomorphic Labs was founded by Demis Hassabis, who is the CEO.
      • SirHumphrey 7 days ago
        4 more

        I think AlphaFold is where current AI terminology starts breaking down. Because in some real sense, AlphaFold is primarily a statistical model - yes, it's interesting that they developed it using ML techniques, but from the use standpoint it's little different than perturbation based black boxes that were used before that for 20 years.

        Yes, it's an example of ML used in science (other examples include NN based force fields for molecule dynamics simulations and meteorological models) - but a biologist or meteorologist usually cares little how the software package they are using works (excluding the knowledge of different limitation of numerical vs statistical models).

        The whole "but look, AI in science" thing seems to me like a motte-and-bailey argument to imply the use of AGI-like MLLM agents that perform independent research - currently a much less successful approach.

        • vhcr 6 days ago
          3 more
          • immibis 6 days ago

            Yeah, but also AI now means LLMs and they're not LLMs.

          • SirHumphrey 4 days ago

            Not really my point.

            I specifically didn't call LLMs a statistical model - while they technically are, it's obvious they are something more. While intelligence is a hard concept to pin down, current gen LLMs already can do most (knowledge work) based things better than most people (they are better writers than most people, they can program better than most people, they are better at math than most people, have better medical knowledge than most people...). If the human is the mark of intelligence - it has been achieved.

            AlphaFold is something else though. I work with something similar (specifically FNOs for biophysical simulations), and the insight that data-only models perform better than physics-based models is novel - I think the Nobel prize was deservedly awarded - however the thing is still closer to a curve fit than to LLMs regarding intelligence - or in other words, it's about as "intelligent" as perturbation-based black boxes were.

    • csomar 7 days ago

      > where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.

      Can you give an example, say in medicine, where AI made a significant advancement? That is, we're talking about neural networks and up (i.e., LLMs), not some local optimization.

      • pkroll 7 days ago
        2 more

        https://arxiv.org/abs/2412.10849

        "Our study suggests that LLMs have achieved superhuman performance on general medical diagnostic and management reasoning"

        • squigz 7 days ago

          This isn't really applying LLMs to research in novel ways.

    • bawolff 7 days ago

      Even still, in theory this should free up more money to hire more lawyers, marketers, etc. The effect should still be there, presuming the market isn't saturated with new ideas.

      • xkcd1963 7 days ago
        7 more

        Something else will get expensive in the meantime, e.g. it doesn't matter how much you earn, landlords will always increase rent to the limit because a living space is a basic necessity

        • bawolff 7 days ago
          6 more

          No, landlords will increase rent as much as they can because they like money (they call it capitalism for a reason). This is true of all goods, both essential and non-essential. All businesses follow the rule of supply and demand when setting prices or quickly go out of business.

           In the scenario being discussed - if a bunch of companies hired a whole bunch of lawyers, marketers, etc., that might make salaries go up due to increased demand (but probably not by a huge amount, as tech isn't the only industry in the world). That still first requires companies to be hiring more of these types of people for that effect to happen, so we should still see some of the increased output even if there is a limiting factor. We would also notice the salary of those professions going up, which so far hasn't happened.

          • immibis 5 days ago
            2 more

            It's an observed effect that rent increases until everyone is just as miserable as before. Regulatory capture of the building industry might have something to do with it, but you can't just say it doesn't happen.

            • bawolff 5 days ago

              Sure I can. Go to different cities. Observe how they have different average rents. Observe how they change over time.

          • xkcd1963 7 days ago
            3 more

            you say no for no reason. read what I wrote again

            • bawolff 6 days ago
              2 more

              Perhaps you could more clearly articulate your point if you think i am missing it.

              • xkcd1963 6 days ago

                » landlords will always increase rent to the limit

                In your own words, a business will run out of business quickly if supply and demand do not match. So unless you are confident that there will be a buyer, you cannot raise prices infinitely.

                » because a living space is a basic necessity

                While most people can live without a Netflix subscription (hence they cannot raise prices infinitely and still expect to find buyers), most people prefer to live in housing. Housing is a basic necessity, hence as a landlord you can confidently raise prices up to the limit of affordability.

                » Something else will get expensive in the meantime

                Lets assume electricity prices get really cheap because humanity discovers fusion reaction. Well guess what, now landlords will increase the rents again because they can.

                Hope I expressed myself to your liking. I mean, you just waltz in here and start lecturing people about capitalism; maybe you should change your career path and become a teacher.

    • SteveNuts 7 days ago

      >A little further on and it'll be writers, actors, etc.

      The tech is going to have to be absolutely flawless, otherwise the uncanny-valley nature of AI "actors" in a movie will be as annoying as when the audio and video aren't perfectly synced in a stream. At least that's how I see it..

      • Izkata 7 days ago
        7 more

        This was made a little over a week ago: https://www.reddit.com/r/IndiaTech/comments/1ksjcsr/this_vid...

        For most of them I'm not seeing any of those issues.

        • PeterHolzwarth 7 days ago
          3 more

          I get what you mean, but the last year has been a story of sudden limits and ceilings of capability. The (damned impressive) video you post is a bunch of extremely brief snippets strung together. I'm not yet sure we can move substantially beyond that to something transformative or pervasively destructive.

          A couple years ago, we thought the trend was without limits - a five second video would turn into a five minute video, and keep going from there. But now I wonder if perhaps there are built in limits to how far things can go without having a data center with a billion Nvidia cards and a dozen nuclear reactors serving them power.

          Again, I don't know the limits, but we've seen in the last year some sudden walls pop up that change our sense of the trajectory down to something less "the future is just ten months away."

          • genewitch 7 days ago
            2 more

            Approximately 1 second was how long AI could hold it together. If you had a lot of free time you could extend that out a bit, but it'll mess something up. So generally people who make them will run it slow-motion. This is the first clip I've seen with it at full speed.

            The quick cuts thing is a huge turnoff so if they have a 15 second clip later on, I missed it.

            When I say "1second" I mean that's what I was doing with automatic1111 a couple years ago. And every video I've seen is the same 30-60 generated frames...

            • kevinventullo 6 days ago

              For the record, the entire video was quick cuts.

        • meander_water 7 days ago

          I wonder if this is going to change the ad/marketing industry. People generally put up with shitty ads, and these will be much cheaper to produce. I dread what's coming next.

        • IncreasePosts 6 days ago

          There might be a reason it is a series of 3 second clips

        • techpineapple 6 days ago

          I mean, it’s very uncanny valley, I would not want to watch a full movie of that. It’s so close! I mean it could be next year! Or it could be 20 years,

      • kevinventullo 6 days ago

        It does not need to be flawless. It needs to be good enough to put butts in seats.

    • pera 7 days ago

      Bullshit: Chatbots are not failing to demonstrate a tangible increase in companies' output because of regulations and IP law, they are failing because they are still not good for the job.

      LLMs only exist because the companies developing them are so ridiculously powerful that they can completely ignore the rule of law, or if necessary even change it (as they are currently trying to do here in Europe).

      Remember we are talking about a technology created by torrenting 82 TB of pirated books, and that's just one single example.

      "Steal all the users, steal all the music" and then lawyer up, as Eric Schmidt said at Stanford a few months ago.

    • Teever 6 days ago

      Maybe in some industries and for some companies and their products but not all.

      Like let's take operating systems as an example. If there are great productivity gains from LLMs, why aren't companies like Apple, Google and MS shipping operating systems with vastly fewer bugs and cleaning up backlogged user feature requests?

    • throwawayffffas 6 days ago

      The things you mention in that legion of other things are actually areas where LLMs do better than they do at intellectual productivity. They can spew entire libraries of marketing BS, summarize decades of legal precedents, and fill out mountains of red-tape checklists.

      They have trouble with debugging obvious bugs though.

  • throwaway2037 7 days ago

    Regarding the impact of LLMs on non-programming tasks, check out this one:

    https://www.ft.com/content/4f20fbb9-a10f-4a08-9a13-efa1b55dd...

        > The bank [Goldman Sachs] now has 11,000 engineers among its 46,000 employees, according to [CEO David] Solomon, and is using AI to help draft public filing documents.
    
        > The work of drafting an S1 — the initial registration prospectus for an IPO — might have taken a six-person team two weeks to complete, but it can now be 95 per cent done by AI in minutes, said Solomon.
    
        > “The last 5 per cent now matters because the rest is now a commodity,” he said.
    
    In my eyes, that is major. Junior ibankers are not cheap -- they make about 150K USD per year minimum (total comp).

    • fourside 6 days ago

      This is certainly interesting and I don’t want to readily dismiss it, but I sometimes question how reliable these CEO anecdotes are. There’s a lot of pressure to show Wall Street that you’re at the forefront of the AI revolution. It doesn’t mean no company is achieving great results, but it’s hard to separate the real anecdotes from the hype.

    • asadotzler 6 days ago

      Claims by companies with an interest in AI without supporting documentation are just that, claims, and probably more PR and marketing than anything.

    • noisy_boy 6 days ago

      I mean that's such a text heavy area anyway. I am not an expert in filing S1 but won't a lot of it be more or less boilerplate + customisations specific to the offering? Any reasonably advanced model should be able to take you a good chunk of the way. Then iterate with a verifier type model + a few people to review; even with iterations that should definitely shorten the overall time. It seems like such a perfect use case for an LLM - what am I missing that is hidden in the scepticism of the sibling comments?
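
      To make the iterate-with-a-verifier idea concrete, here's a rough sketch of a draft-then-review loop (the model name, prompts, and using one model in both roles are placeholder assumptions, not anything a bank actually runs):

          import anthropic

          client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

          def ask(system: str, prompt: str) -> str:
              msg = client.messages.create(
                  model="claude-3-5-sonnet-latest",  # placeholder model name
                  max_tokens=2000,
                  system=system,
                  messages=[{"role": "user", "content": prompt}],
              )
              return msg.content[0].text

          draft = ask("You draft S-1 risk-factor sections from supplied company facts.",
                      "Company facts: <placeholder>. Draft the 'Risk Factors' section.")

          for _ in range(3):  # a few verify/revise rounds before humans take over
              issues = ask("You are a strict reviewer of S-1 drafts. List concrete problems, or reply OK.",
                           draft)
              if issues.strip().upper().startswith("OK"):
                  break
              draft = ask("Revise the draft to address the reviewer's issues.",
                          f"Draft:\n{draft}\n\nIssues:\n{issues}")

      The last few per cent - checking against the real filings and the lawyers' judgment - is the part that stays human, which matches the 95/5 split described above.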

  • zkry 6 days ago

    I find that this is on point. I've seen a lot of charts on the AI-hype side of things showing exponential growth of AI agent fleets being used for software development (starting in 2026 of course). Take this article for example: https://sourcegraph.com/blog/revenge-of-the-junior-developer

    Ok, so by 2027 we should be having fleets of autonomous AI agents swarming around every bug report and solving it x times faster than a human. Cool, so I guess by 2028 buggy software will be a thing of the past (for those companies that fully adopt AI of course). I'm so excited for a future where IT projects stop going overtime and overbudget and deliver more value than expected. Can you blame us for thinking this is too good to be true?

  • hombre_fatal 6 days ago

    This is like asking if tariffs are so bad, why don't you notice large price swings in your local grocer right now?

    In complex systems, you can't necessarily perceive the result of large internal changes, especially not with the tiny amount of vibes sampling you're basing this on.

    You really don't have the pulse on how fast the average company is shipping new code changes, and I don't see why you think you would know that. Shipping new public end-use features isn't even a good signal, it's a downstream product and a small fraction of software written.

    It's like thinking you are picking up a vibe related to changes in how many immigrants are coming into the country month to month when you walk around the mall.

    • jayd16 6 days ago

      Maybe not a great analogy. The market reacted instantly and you can see prices fluctuate almost as fast as tariff policy.

  • bawolff 7 days ago

    Realistically, it's because layoffs have a high reputational cost. AI provides an excuse that lets companies do layoffs without suffering the reputation hit. In essence, AI hype makes layoffs cheaper.

    Doesn't really matter if AI actually works or not.

    • zelphirkalt 6 days ago

      I would dispute that there is no reputation cost when you replace human work with LLMs.

      • bawolff 6 days ago

        Sure, I don't think it's none, just less.

        It also matters a bit where the reputation cost hits. Layoffs can spook investors because it makes it look like the company is doing poorly. If the reputation hit for ai is to non-investors, then it probably matters less.

      • jayd16 5 days ago

        It's not about consumer reputation, it's about the financial reputation. Slashing headcount can look desperate. AI makes it sound innovative, or at least that's the idea.

  • casualscience 7 days ago

    In big companies, this is a bit slower due to the need to migrate entrenched systems and org charts into newer workflows, but I think you are seeing more productivity there too. Where this is much more obvious is in indie games and software where small agile teams can adopt new ways of working quickly...

    E.g. look at the indie games count on steam by year: https://steamdb.info/stats/releases/?tagid=492

    • bojan 6 days ago

      The number of critically acclaimed games remains the same though. So for now we're getting quantity, but not the quality.

      • karagenit 6 days ago

        What if the number of game critics just hasn’t increased, and since they can only play/review a fixed number of games each year due to time constraints, the number that they acclaim each year hasn’t grown? Not saying this is necessarily the case, just suggesting the possibility.

    • kaibee 6 days ago

      Has the number of games released with 95%+ reviews increased, though? And how much of that is due to the pandemic? It's anecdotal, but the game-dev Discord I'm in has had a decent reduction in # of regulars since the tail end of the pandemic, 24-25. And ironically, I was one of them until recently. I think people actually just had more time.

  • CMCDragonkai 7 days ago

    It's cause there are still bottlenecks. AI is definitely boosting productivity in specific areas, but the total system output is bottlenecked. I think we will see these bottlenecks get rerouted or refactored in the coming years.

    • _heimdall 7 days ago

      > AI is definitely boosting productivity in specific areas

      What makes you so sure of the productivity boost when we aren't seeing a change in output?

    • tdeck 7 days ago

      What do you think the main bottlenecks are right now?

      • CMCDragonkai 7 days ago

        Informational complexity bottlenecks. So many things are shackled to human decision making loops. If we were truly serious, we would unshackle everything and let it run wild. Would be chaotic, but chaos creates strange attractors.

      • kergonath 7 days ago
        2 more

        Quality control, for one. The state of commercial software is appalling. Writing code itself is not enough to get a useable piece of software.

        LLMs are also not very useful for long term strategy or to come up with novel features or combinations of features. They also are not great at maintaining existing code, particularly without comprehensive test suites. They are good at coming up with tests for boiler plate code, but not really for high-level features.

        • fhd2 7 days ago

          Considering how software is increasingly made out of separate components and services, integration testing can become pretty damn difficult. So quite often, the public release is the first serious integration test.

          From my experience, this stuff is rarely introduced to save developers from typing in the code for their logic. Actual reasons I observe:

          1. SaaS sales/marketing pushing their offerings on decision makers - software being a pop culture, this works pretty well. It can be hard for internal staff to push back on What Everyone Is Using (TM). Even if it makes little to no sense.

          2. Outsourcing liability, maintenance, and general "having to think about it". Can be entirely valid, but often it indeed comes from an "I don't want to think of it" kind of place.

          I don't see this stuff slowing down GenAI (or speeding it up), mainly because it usually has little to do with saving time or money.

    • esperent 7 days ago

      > It's cause there are still bottlenecks

      How do you know this? What are the bottlenecks?

      • financltravsty 7 days ago
        4 more

        [flagged]

        • californical 7 days ago
          3 more

          I feel like one of us must be in a bit of our own bubble.

          The company that I work for is currently innovating very fast (not LLM related), creating so much value for other companies that they have never gotten from any other business.. I know this because when they switch to our company, they tell us how much better our software product is compared to anything they've ever used. It has tons of features that no other company has. That's all I can say without doxxing too much.

          I feel like it's unimaginative to say:

          > What more tech is there to sell besides LLM integrations?

          I have like 7 startup ideas written down in my notes app for software products that I wish I had in my life, but don't have time to work on, and can't find anything that exists for it. There is so much left to create

          • financltravsty 7 days ago
            2 more

            I speak only from a very high-level POV. From a lower-level/in the "trees" -- yes, I don't disagree whatsoever with your characterization that a single company can achieve that. I know of many many many products I use (tech even!) that I could create exponentially better alternatives for, as well.

            Now, there come a few considerations I don't believe you have factored in:

            - Just because your company has struck gold: does that mean that pathway is available or realistic enough for everyone else; and to a more important point, is it /significant/ enough that it can scoop the enormous amount of tech talent on the market currently and in the future? I don't believe so.

            - Segueing, "software products that I wish I had in my life." Yes, I too have many ideas, BUT: is the market (the TAM if you will) significant enough to warrant it? Ok, maybe it is -- how will you solve for distribution? Fulfillment is easy, but how are you going to not only identify prospective customers (your ICP), find them and communicate to them, and then convince them to buy your product, AND do this at scale, AND do this with low enough churn/CAC and high enough retention/CLTV, AND is this the most productive and profitable use of your time and resources?

            Again, ideas are easy -- we all have them. But the execution is difficult. In the SaaS/tech space, people are burned out from software. Everyone is shilling their vibe-coded SaaS or latest app. That market is saturated, people don't care. Consumer economy is suffering right now due to the overall economy and so on. Next avenue is enterprise/B2B -- cool, still issues: buyer fatigue; economic uncertainty leading to anemic budgets and paralysis while the "fog" clears. No one is buying -- unless you can guarantee they can make money or you can "weather the storm" (see: AI, and all the top-down AI mandates every single PE co and board is shoving down exec teams throats).

            I'm talking in very broad strokes on the most impactful things. Yes, there is much to create -- but who is going to create it and who is going to buy it (with what money?). This is a people problem, not a tech problem. I'm specifically talking about: "what more tech is there to sell -- that PEOPLE WILL BUY -- besides LLM integrations?" Again, I see nothing -- so I have pivoted towards finance and selling money. Money will not go out of fashion for a while (because people need it for the foreseeable future).

            Ask yourself, if you were fired right now at this moment: how easy would it be for you to get another job? Quite difficult unless you find yourself lucky enough to have a network of people that work in businesses that are selling things that people are buying. Otherwise, good luck. You would have more luck consulting -- there are many many many "niche" products and projects that need to be done on small scales, that require good tech talent, but have no hope of being productized or scaled (hint!).

            • californical 7 days ago

              It’s really interesting to hear your points.

              I do think I may struggle a bit to find something comparable to my current company, but we’re also hiring right now. And it’s a very small company in the grand scheme of things, even though we have customers much bigger.

              I guess having that experience makes me think that there must be a lot of other small companies working in their own interesting niche, providing a valuable product for a subset of major companies. You just don’t usually know they exist unless you need their specific niche.

              But I recognize your points too. It seems like the B-to-C space is really tricky right now, and likely fits closer with what you’re describing.

              I think that the flip side is that a company doesn’t need to make it big to be successful. If you can hire 5 developers and bring in $2m/yr, there’s nothing at all wrong with that as a business. Maybe we will get lucky and the market will trend towards more of those to fill in the void that you mentioned. I think it could lead to a lot of innovation and a really healthy tech world! But maybe it’s just being overly optimistic to think that might be the path forward :)

    • jayd16 7 days ago

      We'll take cheaper over faster but is that the case? If it's not cheaper or faster what is the point?

  • econ 7 days ago

    The days of hating on idea men seem over.

    I don't get it either. You hire someone in the hope of ROI. Some things work, some kinda don't. Now people will be n times more productive, therefore you should hire fewer people??

    That would mean you have no ideas. It says nothing about the potential.

  • AznHisoka 7 days ago

    “we'd expect to see software companies shipping features and fixes faster than ever before. There would be a huge burst in innovative products and improvements to existing products.”

    Shipping features faster != innovation or improvements to existing products

    • tdeck 7 days ago

      Granting that those don't fully overlap, is that relevant to the point? I'm not seeing either.

      • AznHisoka 7 days ago
        2 more

          Because they're just pushing out stuff that nobody might even need or want to buy. Because it's not even necessarily leading to more revenue. Software companies aren't factories. More stuff doesn't mean more $$$ made

        • ngruhn 7 days ago

          Unfortunately, I think it does. Even if customers don't want all that extra stuff and will never use it, it sells better.

      • switchbak 6 days ago

        Our jobs are full of a lot more than just writing code. In my case it seems like it's helping to accelerate a portion of the dev cycle, but that's a fairly small portion, say 20%, and even a big impact on that just gets dominated by the other phases that haven't been accelerated.

        I’m not as bullish as some are on the impact of AI, but it does feel nice when you can deliver something in a fraction of the time it used to take. For me, it’s more useful as a research and idea exploration tool, less so about writing code. Part of that is that I’m in Scala land, so it just tends to not work as well as a more mainstream language.

        We haven’t used it to help the product management and solution exploration side, which seems to be a big constraint on our execution.

    • epgui 7 days ago

      And?

  • wcfrobert 7 days ago

    If AI makes everyone 10x engineers, you can 2x the productive output while reducing headcount by 5x.

    Luckily software companies are not ball bearings factories.

    • tikhonj 7 days ago

      unluckily, too many corporate managers seem to think they are :/

    • danenania 6 days ago

      > If AI makes everyone 10x engineers, you can 2x the productive output while reducing headcount by 5x.

      Why wouldn't you just 10x the productive output instead?

      • wcfrobert 6 days ago
        4 more

        I don't think it would be trivial to increase demand by 10x (or even 2x) that quickly. Eventually, a publicly traded company will get a bad quarter, at which point it's much easier to just reduce the number of employees. In both scenarios, there's no need for any new-hire.

        • danenania 6 days ago
          3 more

          I think there’s always demand for more software and more features. Have you ever seen a team without a huge backlog? The demand is effectively infinite.

          • tilne 6 days ago
            2 more

            Isn’t a lot of stuff in the backlog because it’s not important enough to the bottom line to prioritize?

            • danenania 5 days ago

              Right, that’s kind of the whole point. If it’s in the backlog, someone thinks it’s valuable, but you might never get to it because of other priorities. If you’re 10x more productive, that line gets pushed a lot farther out, and your product addresses more people’s needs, has fewer edge case bugs, and so on.

              If the competition instead uses their productivity boost to do layoffs and increase short term profits, you are likely to outcompete them over time.

    • hoosieree 6 days ago

      Or a 4-hour workweek.

  • 0xbadcafebee 6 days ago

    Productivity results in increased profit, not necessarily output. They don't need to innovate, make new products, or improve things. They just need to make their shit cheaper so their profit margin is higher. If you can just keep churning out more money, there is no need to improve anything.

  • godelski 7 days ago

      > shipping features and fixes faster than ever before
    
    Meanwhile Apple duplicated my gf's contact, creating duplicate birthdays on my calendar. It couldn't find duplicates despite matching name, nickname, phone number, birthdays, and that both contacts were associated with her Apple account. I manually merged and ended up with 3 copies of her birthday in my calendar...

    Seriously, this shit can be solved with a regex...

    The number of issues like these I see is growing exponentially, not decreasing. I don't think it's AI though, because it started before that. I think these companies are just overfitting whatever silly metrics they have decided are best
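
    To be concrete, a toy sketch of the kind of normalize-and-match pass I mean (field names and sample data are made up; real contact records are messier):

        import re
        from collections import defaultdict

        contacts = [
            {"name": "Jane Doe", "phone": "+1 (555) 010-2345", "birthday": "1990-04-02"},
            {"name": "jane doe", "phone": "555-010-2345",      "birthday": "1990-04-02"},
        ]

        def dedup_key(contact):
            # Normalize: keep only the phone's digits (last 10, to ignore country code),
            # lowercase the name, and include the birthday.
            digits = re.sub(r"\D", "", contact["phone"])[-10:]
            return (contact["name"].strip().lower(), digits, contact["birthday"])

        groups = defaultdict(list)
        for c in contacts:
            groups[dedup_key(c)].append(c)

        duplicates = [g for g in groups.values() if len(g) > 1]
        print(duplicates)  # both records land in the same group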

  • kraig911 7 days ago

    Effort in this equation isn't measured in man-hours saved but dollars saved. We all know this is BS and isn't going to manifest this way. It's tantamount to giving framers a nail gun versus a hammer. We'll still be climbing the same rafters and doing the same work.

  • diego_moita 6 days ago

    That is a smart question.

    In 1987 the economist Robert Solow said "You can see the computer age everywhere but in the productivity statistics".

    We should remark he said this long before the internet, web and mobile, so probably the remark needs an update.

    However, I think it cuts through the salesmen's hype. Anytime we see these kinds of claims we should reply "show me the numbers". I'll wait until economists make these big claims, and will not trust CEOs and salesmen.

    • marcosdumay 6 days ago

      > so probably the remark needs an update

      Only if you want to add "internet, web, and mobile" before "age". Otherwise it doesn't need any change.

      But that phrase is about the productivity statistics, not about computers or actual productivity.

      • tilne 6 days ago
        2 more

        You’re saying that the internet, web, and mobile haven’t improved productivity?

        • marcosdumay 4 days ago

          They didn't change the productivity statistics a bit.

          The problem with computers not changing the productivity statistics is one of the great mysteries economists argue about. It's very clear nowadays that there are problems on both the "statistics" and "productivity" sides of it, but the internet, web, and mobile didn't change anything.

  • whstl 6 days ago

    Any boost of productivity in the coding part is quickly absorbed by other inefficiencies in the software-making process, unfortunately.

    AI also helps immensely in creating those other inefficiencies.

    • klabb3 6 days ago

      Intuitively I agree. In the long run, we’ll know better. But for now, nobody truly knows what the new equilibrium is.

      That said: it’s one type of work that is getting dramatically cheaper. The debate is about the scope and quality of that labor, not whether it’s cheap or fast (it is). But if anything negative (errors, faults) compound, and the correction can NOT be done with the same tools, then you must still have humans triage errors. In my experience, bad code can already have negative value (it costs more to fix than rewrite).

      In the medium term, the actual scope and ability for different tasks will remain unknown. It takes a lot of time to gather the experience to tell if something was a bad idea – just look at the graveyard of design patterns, languages and software practices. Many of them enjoyed the spotlight for a decade before the fallout hit.

      Anyway, while the abilities are unknown, AI will be used everywhere for everything – which is only wise if it’s truly better at every general task – despite every available data about it shows vastly different ability in different domains/problem types. Many of those things will be both (a) worse than humans and (b) expensive to reverse, with compounding effects.

      The funny thing is I have already seen enthusiasts basically acknowledging this but explaining that those compounding issues (think tech debt) is the right choice now because better AI will fix those issues in the future. To me, this feels like the early formations of religion (not metaphorically even). And I have a feeling that the goalpost moving from both sides will lead to an unfalsifiability deadlock in the debate.

  • antithesizer 6 days ago

    Before enterprise AI systems are allowed to spread their wings, first they need to support existing processes. Once they're able to generate the same customer-facing results relatively autonomously, then they'll have the opportunity to improve those results. So the first place to look for their impact is, I'd wager, cost-cutting. So watch those quarterly earnings reports.

  • strangattractor 7 days ago

    Most significant technology takes almost a generation to be fully adopted. I think it is unlikely we are seeing the full effect of LLMs at the moment.

    Content producers are blocking scrapers of their sites to prevent AI companies from using their content. I would not assume that AI is either inevitable or on an easy path to adoption. AI certainly isn't very useful if what it "knows" is out of date.

    • asadotzler 6 days ago

      In 10 years, with the same amount of money and time that's been pumped into AI (still a financial black hole), we had the entire broadband internet build-out completed, and the internet was responsible for adding a trillion dollars a year to the global economy.

      • tilne 6 days ago
        2 more

        If your point is that AI/LLMs aren’t as transformative as broadband internet, I don’t think anyone here is seriously making that claim, right?

        • chasd00 6 days ago

          Maybe not here, but the news and social media seem to think LLMs are even more transformative than the printing press, let alone the internet.

  • ccorcos 7 days ago

    AI tools seem to be most useful for little things. Fixing a little bug, making a little change. But those things aren’t always very visible or really move the needle.

    It may help you build a real product feature quicker, but AI is not necessarily doing the research and product design which is probably the bottleneck for seeing real impact.

    • droopyEyelids 7 days ago

      If they're fixing all the little bugs that should give everyone much more time to think about product design and do the research.

      • jajko 7 days ago
        2 more

        Or a lot of small fixes all over the place. Yet in reality we don't see this anywhere; not sure what exactly that means.

        Maybe overall complexity creeping up rolls over any small gains, or devs are becoming more lazy and just copy-paste LLM output without a serious look at it?

        My company didn't even adopt or allow use of LLMs in any way for anything so far (private client data security is more important than any productivity gains, which anyway seem questionable when looking around... and serious data breaches can easily end up with fines in the hundreds-of-millions ballpark).

        • ccorcos 6 days ago

          It’s also possible that all of these gains fixing bugs are simply improving infrastructure and stability rather than finding new customers and opening up new markets.

          Having worked on software infrastructure, it's a thankless job. Your most heroic work has little visibility and the result is that nothing catastrophic happened.

          So maybe products will have better reliability and fewer bugs? And we all know there’s crappy software that makes tons of money, so there isn’t necessarily a strong correlation.

      • ccorcos 7 days ago

        Assuming a well functioning business, yes.

  • grumpymuppet 7 days ago

    The problem with this sort of analysis is that it's incremental and balanced across a large institution usually.

    I think the reality is less like a switch and more like there are just certain jobs that get easier and you just need fewer people overall.

    And you DO see companies laying off people in large numbers fairly regularly.

    • simonsarris 7 days ago

      > And you DO see companies laying off people in large numbers fairly regularly.

      Sure but, so far, too regularly to be AI-gains-driven (at least in software). We have some data on software job postings and the job apocalypse, and corresponding layoffs, coincided with the end of ultra-low interest rates. If AI had a recent effect this year or last, its quite tiny in comparison.

      https://fred.stlouisfed.org/graph/?g=1JmOr

      so one can argue more is to come, but it's hard to see how it's had a real effect on jobs/layoffs thus far.

    • hyperadvanced 7 days ago

      Layoffs happen because cash is scarce. In fact, cash is so scarce for anything that’s not “AI” that it’s basically nonexistent for startup fundraising purposes.

  • vharish 6 days ago

    Overall, the amount of code that's being deployed to production has definitely increased.

  • hansmayer 6 days ago

    Well, it sort of evens out. You see the developers are pushed to use the AI to generate a lot of LoC-Slop, but then they have to fix all the bugs, security issues and hallucinated packages that were thrown in by the magic-machines. But at least some deluded MBA can BS about being "AI-first".

  • autobodie 7 days ago

    No, we would see profits increase, and we have been seeing profits increase.

  • mNovak 6 days ago

    I mean, if a mega corp like Google or Amazon had plus/minus 10% of their headcount, as a lay observer I don't think I'd really be able to detect the difference in output either.

    That doesn't mean it isn't a real productivity gain, but it might be spread across enough domains (bugs, features, internal tools, experiments) to not be immediately or "painfully obvious".

    It'll probably get more obvious if we start to see uniquely productive small teams seeing success. A sort of "vibe-code wonder".

  • bjt12345 7 days ago

    The problem seems to be two-fold.

    Firstly, the capex is currently too high for all but the few.

    This is a rather obvious statement, sure. But the impact is that a lot of companies "have tried language models and they didn't work", and the capex they put in was laughable.

    Secondly, there's a corporate paralysis over AI.

    I received a panicky policy statement written in legalese forbidding employees from using LLMs in any form. Written both out of a panic regarding intellectual property leaking and a panic about how to manage and control staff moving forward.

    I think a lot of corporates still clutch at this view that AI will push workforce costs down and are secretly wasting a lot of money failing at this.

    The waste is extraordinary, but it's other people's money (it's actually the shareholders' money) and it's seen as being all for a good cause and not something to discuss after it's gone. I can never get it discussed.

    Meanwhile, at a grass-roots level, I see AI being embraced and improving productivity; every second IT worker is using it. It's just that, because of this corporate panicking and mismanagement, its value is not yet measured.

    • bawolff 7 days ago

      > Firstly, the capex is currently too high for all but the few.

      > This is a rather obvious statement,

      Nobody is saying companies have to make LLMs themselves.

      SaaS is a thing.

      • bjt12345 7 days ago
        2 more

        By SaaS I assume you mean public LLMs; the problem is the hand-wringing occurring over intellectual property leaking from the company. Companies are actually writing policies banning their use.

        In regards to private LLMs, the situation has become disappointing in the last 6 months.

        I can only think of Mistral as being a genuine vendor.

        But given the limitations in context window size, fine tuning is still necessary, and even that requires capex that I rarely see.

        But my comment comes from the fact that I heard from several sources, smart people say "we tried language models at work and it failed".

        However in my discussion with them, they have no concept of the size of the datacentres used by the webscalers.

        • singron 6 days ago

          It's not clear to me that fine-tuning is even capex. If you fine tune new models regularly, that's opex. If you mean literally just the GPUs, you would presumably just rent them right? (Either from cloud providers for small runs or the likes of sfcompute for large runs) Or do you imagine 24/7 training?

    • tdeck 7 days ago

      This is a good reminder that every org is different. However some companies like Microsoft are aggressively pushing AI tools internally, to a degree that is almost cringe.

      • throwaway2037 7 days ago
        2 more

        I don't want to shill for LLMs-for-devs, but I think this is excellent corporate strategy by Microsoft. They are dog-fooding LLMs-for-devs. In a sense, this is R&D using real world tests. It is a product manager's dream.

        The Google web-based office productivity suite is similar. I heard a rumor that at some point Google senior mgmt said that nearly all employees (excluding accounting) must use Google Docs. I am sure that they fixed a huge number of bugs plus added missing/blocking features, which made the product much more competitive vs MSFT Office. Fifteen years ago, Google Docs was a curiosity -- an experiment for just how complex web apps could become. Today, Google Docs is the premier choice for new small businesses. It is cheaper than MSFT Office, and "good enough".

        • singron 6 days ago

          Google docs has gotten a little better in that time, but it's honestly surprisingly unchanged. I think what really changed is that we all stopped wanting to layout docs for printing and became happier with the simpler feature set (along with collaboration and distribution).

      • bjt12345 7 days ago

        But this is often a mixture of these two things.

        The tools are often cringe because the capex was laughable. E.g. one solution, the trial was done using public LLMs and then they switched over to an internally built LLM which is terrible.

        Or, secondly, the process is often cringe because the corporate aims are laughable.

        I've had an argument with a manager making a multi-million dollar investment in a zero coding solution that we ended up throwing in the bin years later.

        They argued that they are going with this bad product because "they don't want to have to manage a team of developers".

        They responded "this product costs millions of dollars, how dare you?"

        How dare me indeed...

        They promptly left the company but it took 5 years before it was finally canned, and plenty of people wasted 5 years of their career on a dead-end product.

  • ivape 7 days ago

    Companies are not accepting that their entire business will mostly go away. They are mostly frogs boiling in water; that's why they are kinda just incorporating these little chat bots and LLMs into their business, but the truth of the matter is it's all going away and it's impossible to believe. Take something like JIRA: it's entirely laughable because a simple LLM can handle entire project management with freaking voice and zero programming. They just don't believe that's the reality; we're talking about a Kodak moment.

    Worker productivity is secondary to business destruction, which is the primary event we're really waiting for.

    • nradov 7 days ago

      That's silly. You still need a way to track and prioritize tasks even if you use voice input. Jira may be replaced with something better, built around an LLM from the ground up. But the basic project management requirements will never go away.

      • ivape 7 days ago
        12 more

        Yes, that's quite easy. I say "Hey reorganize the tasks like-so, prioritize this, like so", and if I really need to, I can go ahead and hook up some function calls but I suspect this will be unnecessary with a few more LLM iterations (if even that). You can keep running from how powerful these LLMs are, but I'll just sit and wait for the business/startup apocalypse (which is coming). Jira will not be replaced by something better, it'll be replaced by some weekend project a high schooler makes. The very fact that it's valued at over a billion dollars in the market is just going to be a profound rug pull soon enough.
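
        To be concrete about "hook up some function calls", I mean something like the tool-use sketch below (the tool name, schema and model are made up for illustration, not a product):

            import anthropic

            client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

            # Hypothetical tool the LLM can call against your task tracker.
            tools = [{
                "name": "reprioritize_task",
                "description": "Move a task to a new priority position in the backlog.",
                "input_schema": {
                    "type": "object",
                    "properties": {
                        "task_id": {"type": "string"},
                        "new_priority": {"type": "integer", "description": "1 = highest"},
                    },
                    "required": ["task_id", "new_priority"],
                },
            }]

            response = client.messages.create(
                model="claude-3-5-sonnet-latest",  # illustrative model name
                max_tokens=512,
                tools=tools,
                messages=[{"role": "user",
                           "content": "Bump the login-timeout bug above everything else in the sprint."}],
            )

            # If the model decided to call the tool, apply it to the tracker here.
            for block in response.content:
                if block.type == "tool_use":
                    print(block.name, block.input)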

        So let me keep it real, I am shorting Atlassian over the next 5 years. Asana is another, there's plenty of startup IPOs that need to be shorted to the ground basically.

        • petersellers 7 days ago

          If replacing Jira is really as easy as you claim, then it would have happened by now. At the very least, we'd be getting hit by a deluge of HN posts and articles about how to spin up your very own project management application with an LLM.

          I think that this sentiment, along with all of the hype around AI in general, is failing to grasp a lot of the complexity around software creation. I'm not just talking about writing the code for a new application - I'm talking about maintaining that application, ensuring that it executes reliably and correctly, thinking about the features and UX required to make it as frictionless as possible (and voice input isn't the solution there, I'm very confident of that).

          • ivape 7 days ago

            You are not understanding what I am saying. I am saying it's the calm before the storm: soon everyone realizes they are paying a bunch of startups for literally no comparative value, given AI. First the agile people are going to get fired, then the devs are just going to go "oh yeah, I just manage everything in my LLM".

            I'll be here in a year, we can have this exact discussion again.

            • petersellers 7 days ago

              I understand what you are saying, I just don't agree with it.

              "AI" is not going to wholesale replace software development anytime soon, and certainly not within a year's time because of the reasons I mentioned. The way you worded your post made it sound like you believed that capability was already here - nevertheless, whether you think it's here now or will be here in a year, both estimates are way off IMO.

            • theshackleford 6 days ago

              > I’ll be here in a year

              Me too. Mostly so I can laugh though.

        • zelphirkalt 6 days ago

          If there was only one consequence, and that consequence is Jira and Atlassian being destroyed, then I am all for it!

          Realistically though, they might incorporate that high schooler's software into Jira to make it even more bloated, and they will sell it to your employer soon enough! Then team lead Chris will enter your birthday and your vacation days into it too, without asking you, so it can also do vacation planning. Next thing you know, Atlassian sells you out and you receive unsolicited AI calls about your holiday planning.

        • hooverd 7 days ago

          What sort of assurances can I get from that weekend project? I think we're going to build even more obscene towers of complexity as nobody knows how anything works anymore, because they choose not to.

          • ivape 7 days ago

            What assurances do you get from the internals of an LLM?

            • ofjcihen 6 days ago

              I think in your rush to respond you may have accidentally made a solid point against your argument.

              • ivape 6 days ago

                No, not really. The people behind the LLMs don't really know why it keeps getting better with more compute and data; they are literally just trying shit. Yet the world has seen just how useful the thing is. We don't have any assurances from the damn thing, yet it's the most useful thing we ever made (at least software-wise).

          • bdangubic 6 days ago

            this is a choice to make though… smart teams will know how everything works…

        • subpixel 6 days ago

          I agree with you to a point.

          In smaller businesses some roles won’t need to be hired anymore.

          Meanwhile in big corps, some roles may transition from being the source of presumed expertise to being one neck to choke.

          I’d love it not to be true, but the truth is Jira is to projects what Slack/Teams are to messaging. When everybody is a project manager Jira gets paid more, not less.

    • badsectoracula 7 days ago

      > Take something like JIRA, it's entirely laughable because a simple LLM can handle entire project management with freaking voice with zero programming

      When I used a not-so-simple LLM to make it act as a text adventure game, it could barely keep track of the items in my inventory, so TBH I am a little bit skeptical that an LLM can handle entire project management - even without voice.

      Perhaps it might be able to use tools/MCP/RPC to call out to real project management software (something like the sketch below) and pretend to be your accountant/manager/whoever, but I wouldn't call that the LLM itself doing the project management task - and someone would need to write that project management software.
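
      For illustration only, a minimal sketch of what that "someone would need to write" could look like: a tiny MCP tool server built with the MCP Python SDK's FastMCP helper, where the task state lives in the server rather than in the model's context (the in-memory dict and the tool names are hypothetical):

          from mcp.server.fastmcp import FastMCP

          mcp = FastMCP("task-tracker")

          # State lives here, not in the LLM's context window, so nothing
          # gets "forgotten" between turns.
          _tasks: dict[int, dict] = {}

          @mcp.tool()
          def create_task(title: str, priority: int = 3) -> int:
              """Create a task and return its id."""
              task_id = len(_tasks) + 1
              _tasks[task_id] = {"title": title, "priority": priority, "done": False}
              return task_id

          @mcp.tool()
          def list_tasks() -> list[dict]:
              """Return every task so the model can re-read them on demand."""
              return [{"id": task_id, **task} for task_id, task in _tasks.items()]

          if __name__ == "__main__":
              mcp.run()

      The point being: the LLM supplies the conversational front end, but the bookkeeping that Jira does today still has to be implemented and hosted somewhere.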

      • ivape 7 days ago

        There are innovative ways to accomplish the consistency you seek for the example application you mentioned. They are coming a lot sooner than you think, but hey, this thread is a bit of a poker game before the flop; I'm just placing my bet - you can call the bluff.

        We just have to wait for the cards to flip, and that’s happening on a quadratic curve (some say exponential).

        • mucha 6 days ago

          Free beer tomorrow.

  • marcosdumay 6 days ago

    I don't think extra productivity in software development was ever reflected in established companies building things faster.

    The more likely scenario is that if those tools made developers so much more productive, we would see a large surge in new companies, with 1 to 3 developers creating things that were previously deemed too hard for them to do.

    But it's still possible that we didn't give people enough time yet.

  • wiseowise 7 days ago

    I will never understand this argument. If you have a super tool that can magically double your output, why would you suddenly double your output publicly? So that you now essentially do twice the work for the same money? You use it to work less while your output stays static or marginally improves - that's the smart play.

    Note: I'm talking about your run-of-the-mill SE wagie work, not startups where your food is based on your output.

    • conradkay 7 days ago

      That only works if you're one of very few people with the tool. Otherwise the rest of your team is now 2x as productive as you.

      • wiseowise 7 days ago

        That’s assuming they were as productive as me in the first place.

        • imtringued 7 days ago

          How would you know? What if they are following your strategy and are hiding their "power level"?

          • wiseowise 6 days ago

            If they were hiding their "power level" and just matching mine (or my previous level), what incentive do they have to suddenly double it if they were hiding it in the first place?

sevensor 7 days ago

What AI is going to wipe out is white collar jobs where people sleepwalk through the working day and carelessly half ass every task. In 2025, we can get LLMs to do that for us. Unfortunately, the kind of executive who thinks AI is a legitimate replacement for actual work does not recognize the difference. I expect to see the more credulous CEOs dynamiting their companies as a result. Whether the rest of us can survive this remains to be seen. The CEOs will be fine, of course.

  • const_cast 7 days ago

    > What AI is going to wipe out is white collar jobs where people sleepwalk through the working day and carelessly half ass every task.

    The only reason this existed in the first place is because measuring performance is extremely difficult, and becomes more difficult the more complex a person's job is.

    AI won't fix that. So even if you eliminate 50% of your employees, you won't be eliminating the bottom 50%. At worst - and probably what happens on average - your choices are about as good as random chance. So you end up with the same proportion of shitty workers as you had before. At the very worst, you actively select for the poorest workers because you have some shitty metrics, which happens more often than we'd all like to think.
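
    A toy simulation of that "random chance" point (the 20% figure is made up purely for illustration): cutting half the staff at random leaves the proportion of low performers roughly unchanged.

        import random

        # Hypothetical workforce: 20% low performers.
        workers = ["good"] * 80 + ["bad"] * 20

        # A "layoff" that is no better than chance: keep a random half.
        survivors = random.sample(workers, k=len(workers) // 2)

        # Averages out to ~0.20, i.e. the same share of weak workers as before.
        print(survivors.count("bad") / len(survivors))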

  • cjs_ac 7 days ago

    There's a connection to the return to office mandates here: the managers who don't see how anyone can work at home are the ones who've never done anything but yap in the office for a living, so they don't understand how sitting somewhere quiet and just thinking counts as work or delivers value for the company. It's a critical failure to appreciate that different people do different things for the business.

    • Jubijub 7 days ago

      That is a hugely simplistic take that tells me you never managed people or coordinated work across many people. I mean, I am more productive individually at home too, and so probably are all the folks on my team. But we don't always work independently from each other, at which point having some days in common is a massive booster.

      • cjs_ac 7 days ago

        There is a spectrum: at one extremity is mandatory in-office presence every day; at the other is a fully-remote business. For any given individual, and for any given team, the approach needs to be placed on that spectrum according to what it is that that individual or team does. I'm not arguing in favour of any position on that spectrum; I'm arguing against blanket mandates that don't involve any consideration for what individuals in the business do.

  • einpoklum 7 days ago

    I haven't worked in the US, and I have not yet worked in a company where such employees exist. Some are slower, some are faster or more efficient or productive - but they're all, every one of them, under the pressure of too many tasks assigned to them, and it's always obvious that more personnel are needed but the budget (supposedly) precludes it.

    So, what you're describing is a mythical situation for me. But - US corporations are fabulously rich, or perhaps I should say highly-valued, and there are lots of investors to throw money at things I guess, so maybe that actually happens.

    • ryandrake 7 days ago

      No, it's the same in the US, too. I don't know what these mythical companies are where people are saying 50% of the workforce does nothing, but I've never seen such a place. Everywhere I've ever worked had way more projects to get done than people available to do them. Everyone was working at capacity.

  • xg15 7 days ago

    > What AI is going to wipe out is white collar jobs where people sleepwalk through the working day and carelessly half ass every task.

    Note that AI wipes out the jobs, but not the tasks themselves. So if that's true, as a consumer, expect more sleepwalked, half-assed products, just created by AI.

  • richardw 7 days ago

    CEOs will be fine until their customers disappear. Are the AIs going to click ads and buy iPhones?

  • psadauskas 7 days ago

    AIs are great at generating bullshit, so if your job involves generating bullshit, you're probably on the chopping block.

    I just wish that instead of getting more efficient at generating bullshit, we could just eliminate the bullshit.

    • TeMPOraL 7 days ago

      > AIs are great at generating bullshit, so if your job involves generating bullshit, you're probably on the chopping block.

      That covers the majority of sales, advertising and marketing work. Unfortunately, replacing people with AI there will only make things worse for everyone.

    • potatoman22 7 days ago

      Some of the best applications of LLMs I've seen are for reducing bullshit. My goal for creating AI products is to let us act more like humans and less like oxen. I know it's idealistic, but I need to act with some goal.

  • throw234234234 5 days ago

    Nah - those people have the bandwidth/time to justify their value, in my experience. They are also usually the people managing the productive ones.

    It's the people who are constantly working - too busy to be seen, producing output and keeping the lights on, with no time for the "games" - that AI is going for. Their jobs are easier to define since they are productive and do "something", so it's easy to market AI products for these use cases. After all, these people are usually not the ones in charge of the purse strings in most organisations, for better or worse.

  • habosa 6 days ago

    I think it’s actually going to save those people. They can vibe code themselves just enough output to survive where before they did next to nothing. In relative terms, they’ll get a much much higher productivity boost from AI than the already high-performing Staff engineer.

    Management will be thrilled.

  • leeroihe 7 days ago

    [flagged]

    • geraneum 7 days ago

      > It's a perversion of the free market

      We can, together, overcome such challenges when we accept that "The purpose of a system is what it does".

      • TeMPOraL 7 days ago

        There's a "purpose of a system", but there's also a purpose which we want that system to serve, and which prompts us to correct the system should it deviate from the goals we set for it.

      • doctorwho42 6 days ago

        That is a simplistic idea that I am scared has spread far and wide.

        A system is a tool; it does have a use/purpose in the simplistic sense. But how we use the tool is ultimately the crux of the issue, for we can use that hammer to build houses or tear them down, to build concentration camps or simply to injure someone directly.

        No, the purpose of a tool/system is generally determined by the guiding philosophy of the user or society. Unfortunately, society has replaced its philosophy (at least in America) with the economic system of capitalism; i.e. capitalism for capitalism's sake.

    • xanthor 7 days ago

      So you think the free market should serve social ends?

    • abletonlive 7 days ago

      Thanks for saying it out loud. As part of my job I meet a lot of people who think the same way as you, and they aren't willing to say it out loud.

      It's about protecting your work, even if an LLM can do it better.

      The only way an LLM can devalue your work is if it can do it better than you. And I don't just mean quality, I mean as a function of cost/quality/time.

      Anyway, we can be enemies I don't care - I've been getting rid of roles that aren't useful anymore as much as I can. I do care that it affects them personally but I do want them to be doing something more useful for us all whatever that may be.

      • horns4lyfe 7 days ago

        lol “I do care, but not enough to actually care”

        • abletonlive 7 days ago

          Caring doesn't mean that you stop everything you're doing to address someone's needs. That would be a pretty binary world, and it's a convenient way to look at motives when you don't want nuance.

          Caring about climate change doesn't mean you need to spend your entire life planting trees instead of doing what you're doing.

  • Der_Einzige 7 days ago

    Consulting companies like the Big 4 where this happens most are bigger/stronger than ever (primarily due to AI related consulting). Try again.

    • sevensor 7 days ago

      What makes you think productive work is what consulting companies are selling? They're there for laundering accountability. When you bring in consultants to roll out your corporate AI strategy, and it all falls apart in a few years, you can say, "we were following best practices, nobody could have anticipated X," where X is whatever failure mode ultimately tanks the AI strategy.

      • SpicyLemonZest 7 days ago

        Do you think that it's possible in principle to have a better or worse corporate AI strategy? I do, and because I do, it seems clear that companies paying top dollar are doing so because they expect a better one. There's no reason to pay KPMG's rates if all you need is a fall guy.

        Most criticisms I see of management consulting seem to come from the perspective, which I get the sense you subscribe to, that management strategy is broadly fake so there's no underlying thing for the consultants to do better or worse on. I don't think that's right, but I'm never sure how to bridge the gap. It'd be like someone telling me that software architecture is fake and only code is real.

        • ElevenLathe 7 days ago

          I'm willing to believe that one can be better or worse at management, and that in principle somebody could coach you on how to get better.

          That said, how would we measure if our KPMG engagement worked or not? There's no control-group company, so any comparison will have to be statistical or vibes-based. If there is a large enough sample size this can work: I'm sure there is somebody out there who can prove management consulting works for dentist practices in mid-size US cities or whatever, though any well-connected group that discovers this information can probably make more money by just doing a rollup of them. This actually seems to be happening in many industries of this kind. Why consult on how to be a more profitable auto repair business when you can do a leveraged buyout of 30 of them, make them all more profitable, and pocket that insight yourself? I can understand if you're a poorly-connected individual short on capital, but the big consulting firms are made up entirely of well-connected people who rub elbows with rich people all day.

          Fundamentally, there will never be enough data to prove that IBM engaging McKinsey on AI in 2025 will have made any difference in IBM's bottom line. There's only one IBM and only one 2025!

        • PeterStuer 7 days ago

          The fall guy market is very sensitive to credentials. "I hired Joey Blows from Juice-My-AI" just doesn't have that CYA shield of approval.

        • Der_Einzige 7 days ago

          Given that "design patterns" as a concept basically doesn't exist outside of Java and a few other languages no one actually uses, I'm apt to believe that "software architecture is fake and only code is real".

          • SAI_Peregrinus 7 days ago

            Design patterns (as in commonly re-used designs that solve commonly encountered problems) exist in every language used enough to have commonly encountered problems. Gang-of-Four style named design patterns are mostly a Java thing, and repeatedly lead to the terrible outcome of (hopefully junior) developers hunting for a problem on which to use the design pattern they just learned about.

      • code_for_monkey 7 days ago

        you hire consultants so you can cut staff and quality, but the CEOs were already going to do that.

    • airstrike 7 days ago

      Consulting companies don't sell productive advice. They sell management insurance.

    • code_for_monkey 7 days ago

      I think this is the kind of logic you wind up with when you start with the assumption that the Big 4 tell the truth about absolutely everything all the time

CKMo 7 days ago

There's definitely a big problem with entry-level jobs being replaced by AI. Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?

Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person. In many cases, it's both. I work with some people who I believe have the capacity and potential to one day be competent, but the time and resource investment to make that happen is too much. I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now. If I handed it off to them I would not get it fast, and I would need to also go through it with them in several back-and-forth feedback-review loops to get it to a state that's usable.

Given they are human, this would push back delivery times by 2-3 business days. Or... I can prompt and handhold an AI to get it done in 3 hours.

Not that I'm saying AI is a god-send, but new grads and entry-level roles are kind of screwed.

  • ChrisMarshallNY 7 days ago

    This is where the horrific disloyalty of both companies and employees comes to bite us in the ass.

    The whole idea of interns is that internships are training positions. They are supposed to be a net negative.

    The idea is that they will either remain at the company after their internship, or move to another company, taking the priorities of their trainers with them.

    But nowadays, with corporate HR actively doing everything it can to screw over employees, and employees being so transient that they can barely remember the name of their employer, the whole thing is kind of a worthless exercise.

    At my old company, we trained Japanese interns. They would often relocate to the US, for 2-year visas, and became very good engineers, upon returning to Japan. It was well worth it.

    • neilv 7 days ago

      I agree that interns are pretty much over in tech. Except maybe for an established company to do as a semester/summer trial/goodwill period for students near graduation. You usually won't get work output worth the mentoring cost, but you might identify a great potential hire, and be on their shortlist.

      Startups are less enlightened than that about "interns".

      Literally today, in a startup job posting to a top CS department, they're looking for "interns" to bring (not learn) hot experience developing AI agents to the startup, for... $20/hour, and get called an intern.

      It's also normal for these startup job posts to be looking for experienced professional-grade skills in things like React, Python, PG, Redis, etc., and still calling the person an intern, with a locally unlivable part-time wage.

      Those startups should stop pretending they're teaching "interns" valuable job skills, admit that they desperately need cheap labor for their "ideas person" startup leadership to do things they can't do, and cut the "intern" in as a founding engineer with meaningful equity. Or, if they can't afford to pay a livable and plausibly competitive startup wage, maybe those "interns" are technical cofounders.

    • FirmwareBurner 7 days ago

      >At my old company, we trained Japanese interns. They would often relocate to the US, for 2-year visas, and became very good engineers,

      Damn, I wish that was me. Having someone mentor you at the beginning of your career, instead of having to self-learn and fumble your way around never knowing if you're on the right track or not, is a massive force multiplier that pays massive dividends over your career. It's like entering the stock market with $1 million in capital vs $100. You're also less likely to build bad habits if somebody with experience teaches you early on.

      • dylan604 7 days ago

        I really think the loss of the mentor/apprentice type of experience is one of those baby-with-the-bathwater losses. There are definitely people with the personality type that thinks they know everything and nothing can be learned from others, but for those of us who would much rather learn the hows and whys of things from those with more experience, rather than getting all of those paper cuts ourselves, working with mentors is definitely a much better way to grow.

      • ChrisMarshallNY 7 days ago

        Yup. It was a standard part of their HR policy. They are all about long, long-term employment.

        They are a marquee company, and get the best of the best, direct from top universities.

        Also, no one has less than a Master's, over there.

        We got damn good engineers as interns.

        • FirmwareBurner 7 days ago

          >Also, no one has less than a Master's, over there.

          I feel this is pretty much the norm everywhere in Europe and Asia. No serious engineering company in Germany even looks at your resume if there's no MSc. degree listed, especially since education is mostly free for everyone, so not having a degree is seen as a "you problem". But it also leads to degree inflation, where only PhDs or post-docs get taken seriously for some high-level positions. I don't remember ever seeing a senior manager/CTO without the "Dr." or even "Prof. Dr." title in the top German engineering companies.

          I think mostly the US has the concept of the cowboy self taught engineer who dropped out of college to build a trillion dollar empire in his parents garage.

          • yardie 7 days ago

            Graduate school assistantships in the US pay such shit wages compared to Europe that you would be eligible for food stamps. The opportunity cost is better spent getting your bachelor's degree, finding employment, and then using that salary to pay for grad school or having your employer pay for it. I've worked in Europe with just my bac+3. I also had 3-4 years of applied work experience that a fresh-faced MSc holder was just starting to acquire.

          • fn-mote 7 days ago

            Possibly also because they don’t observe added value of the additional schooling.

            Also because US salaries are sky high compared to their European counterparts, so I could understand if the extra salary wasn’t worth the risk that they might not have that much extra productivity.

            I’ve certainly worked with advanced degree people who didn’t seem to be very far along on the productivity curve, but I assume it’s like that for everything everywhere.

    • geraneum 7 days ago

      > horrific disloyalty of both companies and employees

      There's no such thing as loyalty in employer-employee relationships. There's money, there's work, and there's [collective] leverage. We need to learn a thing or two from blue-collar workers.

      • ChrisMarshallNY 7 days ago

        > We need to learn a thing or two from blue collars.

        A majority of my friends are blue-collar.

        You might be surprised.

        Unions are adversarial, but the relationships can still be quite warm.

        I hear that German and Japanese unions are full-force stakeholders in their corporations, and the relationship is a lot more intricate.

        It's like a marriage. There's always elements of control/power play, but the idea is to maximize the benefits.

        It can be done. It has been done.

        It's just kind of lost, in tech.

        • FirmwareBurner 7 days ago

          >It's just kind of lost, in tech.

          Because you can't offshore your clogged toilet or broken HVAC issue to someone abroad for cheap on a whim like you can with certain cases in tech.

          You're dependent on a trained and licensed local showing up at your door, which gives him actual bargaining power, since he's only competing with the other locals to fix your issue and not with the entire planet in a race to the bottom.

          Unionization only works in favor of the workers in the cases when labor needs to be done on-site (since the government enforces the rules of unions) and can't be easily moved over the internet to another jurisdiction where unions aren't a thing. See the US VFX industry as a brutal example.

          There are articles discussing how LA risks becoming the next Detroit, with many of the successful blockbusters of 2025 now being produced abroad due to the obscene costs of production in California, caused mostly by the unions there. Like $350 per hour for a guy to push a button on a smoke machine, because only a union man is allowed to do it. Or that it costs more to move across a Cali studio parking lot than to film a scene in the UK. Letting unions bleed companies dry is only gonna result in them moving all the jobs that can be moved abroad.

          • yardie 7 days ago

            Almost every Hollywood movie you see that wasn't filmed in LA was basically a taxpayer-backed project. Look at any film with international locations and in the film credits you'll see lots of state-backed loans, grants, and tax credits. A large part of the film crew and cast are flown out to those locations. And if you think LA is expensive, location pay is even more so. So production is flying out the most expensive parts of the crew to save a few dollars on craft services?

          • madaxe_again 7 days ago

            > Because you can't offshore your clogged toilet or broken HVAC issue to someone abroad for cheap on a whim like you can with certain cases in tech.

            Yet. You can’t yet. Humanoids and VR are approaching the point quite rapidly where a teleoperated or even autonomous robot will be a better and cheaper tradesman than Joe down the road. Joe can’t work 24 hours a day. Joe realises that, so he’ll rent a robot and outsource part of his business, and will normalise the idea as quickly as LLMs have become normal. Joe will do very well, until someone comes along with an economy of scale and eats his breakfast.

            • Henchman21 6 days ago

              All the Joes I know would spend serious time hunting these robots.

              IMO, real actual people don’t want to live in the world you described. Hell, they don’t wanna live in this one! The “elites” have failed us. Their vision of the future is a dystopian nightmare. If the only reason to exist is to make 25 people at the top richer than gods? What is the fucking point of living?

              • ChrisMarshallNY 6 days ago

                > If the only reason to exist is to make 25 people at the top richer than gods?

                You just described most medieval societies.

                It's been done before, and those 25 people are hoping to make it happen again.

                • Henchman21 6 days ago

                  Hoping is the wrong word. They’re trying harder than ever.

        • sabarn01 7 days ago

        I have been in union shops before working in tech. In some places they are fine; in others, it's where the worst employee on your team goes to make everyone else less effective.

    • xpe 6 days ago

      I personally care a lot about people, but if I were running a publicly traded for-profit, I would have a lot of constraints on how I could care for them. (A good place to start, by the way, is not bullshitting people about the financial realities.)

      Employees are lucky when incentives align and employers treat them well. This cannot be expected or assumed.

      A lot of people want a different kind of world. If we want it, we’re gonna have to build it. Think about what you can do. Have you considered running for office?

      I don’t think it is helpful for people to play into the victim narrative. It is better to support each other and organize.

  • mechagodzilla 7 days ago

    Interns and new grads have always been a net-negative productivity-wise in my experience, it's just that eventually (after a small number of months/years) they turn into extremely productive more-senior employees. And interns and new grads can use AI too. This feels like asking "Why hire junior programmers now that we have compilers? We don't need people to write boring assembly anymore." If AI was genuinely a big productivity enhancer, we would just convert that into more software/features/optimizations/etc, just like people have been doing with productivity improvements in computers and software for the last 75 years.

    • lokar 7 days ago

      Where I have worked new grads (and interns) were explicitly negative.

      This is part of why some companies have minimum terminal levels (often 5/Sr) before which a failure to improve means getting fired.

    • 0xpgm 7 days ago

      Isn't that every new employee? The first few months you are not expected to be firing on all cylinders as you catch up and adjust to company norms.

      An intern is much more valuable than AI in the sense that everyone makes micro-decisions that contribute to the business. An intern can remember what they heard in a meeting a month ago or some important water-cooler conversation and incorporate that into their work. AI cannot do that.

    • alephnerd 7 days ago

      It's a monetary issue at the end of the day.

      AI/ML and Offshoring/GCCs are both side effects of the fact that American new grad salaries in tech are now in the $110-140k range.

      At $70-80k the math for a new grad works out, but not at almost double that.

      Also, going remote first during COVID for extended periods proved that operations can work in a remote first manner, so at that point the argument was made that you can hire top talent at American new grad salaries abroad, and plenty of employees on visas were given the option to take a pay cut and "remigrate" to help start a GCC in their home country or get fired and try to find a job in 60 days around early-mid 2020.

      The skills aspect also played a role to a certain extent - by the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe, where universities still treat CS as engineering instead of an applied math discipline - I don't care if you can prove Dixon's factorization method using induction if you can't tell me how threading works or the rings in the Linux kernel.

      The Japan example mentioned above only works because Japanese salaries in Japan have remained extremely low and Japanese is not an extremely mainstream language (making it harder for Japanese firms to offshore en masse - though they have done so in plenty of industries where they used to hold a lead like Battery Chemistry).

      • sarchertech 7 days ago

        > by the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe where universities still treat CS as engineering instead of an applied math disciple

        That doesn't fit my experience at all. The applied math vs engineering continuum mostly depends on whether a CS program at a given school came out of the engineering department or the math department. I haven't noticed any shift on that spectrum coming from CS departments, except that people are more likely to start out programming in higher-level languages where they are more insulated from the hardware.

        That’s the same across countries though. I certainly haven’t noticed that Indian or Eastern European CS grads have a better understanding of the OS or the underlying hardware.

        • alephnerd 6 days ago

          > I certainly haven’t noticed that Indian or Eastern European CS grads have a better understanding of the OS or the underlying hardware.

          Absolutely, but that's if they are exposed to these concepts, and that's become less the case beyond maybe a single OS class.

          > except that people are more likely to start out programming in higher level languages where they are more insulated from the hardware

          I feel that's part of the issue, but also, CS programs in the US are increasingly making computer architecture an optional class. And network specific classes have always been optional.

          ---------

          Mind you, I am biased towards Cybersecurity, DevOps, DBs, and HPC because that is the industry I've worked on for over a decade now, and it legitimately has become difficult hiring new grads in the US with a "NAND-to-Tetris" mindset because curriculums have moved away from that aside from a couple top programs.

          • sarchertech 10 hours ago

            ABET still requires computer architecture and organization. And they also require coverage of networking. There are 130 ABET accredited programs in the US and a ton more programs that use it as an aspirational guide.

            Based on your domain, I think a big part of what you’re seeing is that over the last 15 years there was a big shift in CS students away from people who are interested in computers towards people who want to make money.

            The easiest way to make big bucks is in web development, so that’s where most graduates go. They think of DBA, devops, and cybersecurity as low status. The “low status” of those jobs becomes a bit of a self fulfilling prophecy. Few people in the US want to train for them or apply to them.

            I also think that the average foreign worker doing these jobs isn’t equivalent to a new grad in the US. The majority have graduate degrees and work experience.

            You could hire a 30 year old US employee with a graduate degree and work experience too for your entry level job. It would just cost a lot more.

  • brookst 7 days ago

    I just can't agree with this argument at all.

    Today, you hire an intern and they need a lot of hand-holding, are often a net tax on the org, and they deliver a modest benefit.

    Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more. Their total impact will be much higher.

    The whole "entry level is screwed" view only works if you assume that companies want all of the drawbacks of interns and entry level employees AND there is some finite amount of work to be done, so yeah, they can get those drawbacks more cheaply from AI instead.

    But I just don't see it. I would much rather have one entry level employee producing the work of six because they know how to use AI. Everywhere I've worked, from 1-person startup to the biggest tech companies, has had a huge surplus of work to be done. We all talk about ruthless prioritization because of that limit.

    So... why exactly is the entry level screwed?

    • chongli 7 days ago

      > Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more.

      Maybe tomorrow's interns will be "AI experts" who need less hand-holding, but the day after that, it will be kids who used AI throughout elementary school and high school, know nothing at all, defer to AI on every question, and have zero ability to tell right from wrong among the AI responses.

      I tutor a lot of high school students and this is my takeaway over the past few years: AI is absolutely laying waste to human capital. It's completely destroying students' ability to learn on their own. They are not getting an education anymore, they're outsourcing all their homework to the AI.

      • sibeliuss 7 days ago

        It's worth reminding folks that one doesn't _need_ a formal education to get by. I did terrible in school and never went to college and years later have reached a certain expertise (which included many fortunate moments along the way).

        What I had growing up though were interests in things, and that has carried me quite far. I worry much more about the addictive infinite immersive quality of video games and other kinds of scrolling, and by extension the elimination of free time through wasted time.

      • alephnerd 7 days ago

        I mean, a lot of what you mentioned is an issue of critical thinking (and I'm not sure that's something that can be taught), which has always been an issue in any job market; to solve it, deskilling via automation (AI or traditional) was used to remediate the gap.

        But if you deskill processes, it makes it harder to argue in favor of paying the same premium you did before.

    • gerad 7 days ago

      They don't have the experience to tell bad AI responses from good ones.

      • xp84 7 days ago

        True, but this becomes less of an issue as AI improves, right? Which is the 'happier' direction to see a problem moving, as if AI doesn't improve, it threatens the jobs less.

        • hnthrow90348765 7 days ago

          I would be worried about the eventual influence of advertising and profits over correctness

          • brookst 6 days ago

            Why is the company who employs the intern paying for an AI service that corrupts its results with ads?

        • sarchertech 7 days ago

          If AI improves to the point that an intern doesn’t need to check its work, you don’t need the intern.

          You don’t need managers, or CEOs. You don’t even need VCs.

          • brookst 6 days ago

            Too reductionist.

            • sarchertech 5 days ago

              Exactly the right amount of reductionist.

    • einpoklum 7 days ago

      > will need less hand-holding, will be able to leverage AI to deliver more

      Well, maybe it'll be the other way around: Maybe they'll need more hand-holding since they're used to relying on AI instead of doing things themselves, and when faced with tasks they need to do, they will be less able.

      But, eh, what am I even talking about? The _senior_ developers in many companies need a lot of hand-holding that they aren't getting, write bad code with poor practices, and teach the newbies to get used to doing the same. So that's why the entry-level people are screwed, AI or no.

      • brookst 7 days ago

        You’ve eloquently expressed exactly the same disconnect: as long as we think the purpose of internships is to write the same kind of code that interns write today, sure, AI probably makes the whole thing less efficient.

        But if the purpose of an internship is to learn how to work in a company, while producing some benefit for the company, I think everything gets better. Just like we don't measure today's interns by words per minute typed, I don't think we'll measure tomorrow's interns by lines of code written by hand.

        So much of the doom here comes from a thought process that goes "we want the same outcomes as today, but the environment is changing, therefore our precious outcomes are at risk."

  • diogolsq 7 days ago

    You’re right that AI is fast and often more efficient than entry-level humans for certain tasks — but I’d argue that what you’re describing isn’t delegation, it’s just choosing to do the work yourself via a tool. Implementation costs are lower now, so you decide to do it on your own.

    Delegation, properly defined, involves transferring not just the task but the judgment and ownership of its outcome. The perfect delegation is when you delegate to someone because you trust them to make decisions the way you would — or at least in a way you respect and understand.

    You can’t fully delegate to AI — and frankly, you shouldn’t. AI requires prompting, interpretation, and post-processing. That’s still you doing the thinking. The implementation cost is low, sure, but the decision-making cost still sits with you. That’s not delegation; it’s assisted execution.

    Humans, on the other hand, can be delegated to — truly. Because over time, they internalize your goals, adapt to your context, and become accountable in a way AI never can.

    Many reasons why AI can't fill your shoes:

    1. Shallow context – It lacks awareness of organizational norms, unspoken expectations, or domain-specific nuance that’s not in the prompt or is not explicit in the code base.

    2. No skin in the game – AI doesn’t have a career, reputation, or consequences. A junior human, once trained and trusted, becomes not only faster but also independently responsible.

    Juniors and interns can also use AI tools.

    • dasil003 7 days ago

      You said exactly what I came here to say.

      Maybe some day AI will truly be able to think and reason in a way that can approximate a human, but we're still very far from that. And even when we do, the accountability problem means trusting AI is a huge risk.

      It's true that there are white collar jobs that don't require actual thinking, and those are vulnerable, but that's just the latest progression of computerization/automation that's been happening steadily for the last 70 years already.

      It's also true that AI will completely change the nature of software development, meaning that you won't be able to coast just on arcane syntax knowledge the way a lot of programmers have been able to so far. But the fundamental precision of logical thought and mapping it to a desirable human outcome will still be needed, the only change is how you arrive there. This actually benefits young people who are already becoming "AI native" and will be better equipped to leverage AI capabilities to the max.

  • Loughla 7 days ago

    So what happens when you retire and have no replacement because you didn't invest in entry level humans?

    This feels like the ultimate pulling up the ladder after you type of move.

  • mirkodrummer 7 days ago

    IMO comparing entry-level people with AI is very short-sighted. I was smarter than every dumb dinosaur at my first job; I was so eager to learn, proactive, and positive... I probably was very lucky too, but my point is I don't believe this whole idea that a junior is worse than AI - I'd rather say the contrary.

  • phailhaus 7 days ago

    I don't get this because someone has to work with the AI to get the job done. Those are the entry-level roles! The manager who's swamped with work sure as hell isn't going to do it.

  • aloknnikhil 7 days ago

    It's not that entry-level jobs / interns are irrelevant. It's more that entry-level has been redefined, and it requires significant upleveling of the skills necessary to do a job at that level. That's not necessarily a bad thing. As others have said here, I would be more willing to hand off more complex tasks to interns / junior engineers because my expectation is that they leverage AI to tackle them faster and learn in the process.

  • uludag 6 days ago

    I thought the whole idea of automation was to lower the skill requirement. Everyone compares AI to the industrial revolution and the shift from artisan work to factory work. If this analogy were to hold true, then what employers should actually want is more junior devs, maybe even non-devs, hired at a much cheaper wage. A senior dev may be able to outperform a junior by a lot, but assuming the AI is good enough, four juniors or ten non-devs should be able to outperform a senior.

    This obviously not being the case shows that we're not in an AI-driven fundamental paradigm shift, but rather seeing run-of-the-mill cost-cutting measures. Suppose a tech bubble pops and there are mass layoffs (like the Dotcom bubble): obviously people will lose their jobs, and AI hype merchants will almost definitely try to push the narrative that these losses are from AI advancements in an effort to retain funding.

  • pedalpete 7 days ago

    We've been doing the exact opposite for some positions.

    I've been interviewing marketing people for the last few months (I have a marketing background from long ago), and the senior people were either way too expensive for our bootstrapped start-up, or not of the caliber we want in the company.

    At the same time, there are some amazing recent grads and even interns who can't get jobs.

    We've been hiring the younger group, and contracting for a few days a week with the more experienced people.

    Combine that with AI, and you've got a powerful combination. That's our theory anyway.

    It's worked pretty well with our engineers. We are a team of 4 experienced engineers, though as CEO I don't really get to code anymore, and 1 exceptional intern. We've just hired our 2nd intern.

  • einpoklum 7 days ago

    > Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?

    1. Because, generally, they don't.

    2. Because an LLM is not a person, it's a chatbot.

    3. "Hire an intern" is that US thing when people work without getting real wages, right?

    Grrr :-(

    • aianus 7 days ago

      Interns make $75k+ in tech in the US. It's definitely not unpaid. In fact my school would not give course credit for internships if they were unpaid.

  • snowwrestler 7 days ago

    Companies reducing young hires because of AI are doing it backward. Returns on AI will be accelerated by early-career staff because they are already eagerly using AI in daily life, and have the least attachment to how jobs are done now.

    You’re probably not going to transform your company by issuing Claude licenses to comfortable middle-aged career professionals who are emotionally attached to their personal definition of competency.

    Companies should be grabbing the kids who just used AI to cheat their way through senior year, because that sort of opportunistic short-cutting is exactly what companies want to do with AI in their business.

    • sarchertech 7 days ago

      If the AI can write code to a level that doesn’t need an experienced person to check the output, you don’t need tech companies at all.

  • runeks 4 days ago

    > Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person.

    The AI will definitely require handholding. And that hand-holder will be an intern or a recent college-grad.

  • mjburgess 7 days ago

    This has always been the case though. A factor of 50x in productivity between expert and novice is small. Consider how long it would take you to conduct foot surgery vs. a foot surgeon -- close to a decade of medical school + medical experience -- just for a couple hours of work.

    There have never been that many businesses able to hire novices for this reason.

    • pc86 7 days ago

      This is a big part of why a lot of developers' first 1-3 jobs are small mom & pop shops of varying levels of quality, almost none of which have "good" engineering cultures. Market rate for a new grad dev might be X, and it's hard to find an entry-level job at X, but the mom & pop business that needs 0.7 FTE of a developer is willing to pay 0.8X, and even though the owner is batshit insane, it's not a bad deal for the 22- and 23-year-olds willing to do it.

      • mjburgess 7 days ago

        Sure. I mean, perhaps LLMs will accelerate a return to a more medieval culture in tech where you "have to start at 12 to be any good". Personally, I think that's a good (enough) idea. By 22, I'd had at least a decade of experience; my first job at 20 was as a contractor for a major national/multinational.

        Programming is a craft, and just like any other, the best time to learn it is when it's free to learn.

    • InitialLastName 7 days ago

      I think for a surgeon as an example, quality may be a better metric than time. I'll bet I could conduct an attempted foot surgery way faster than a foot surgeon, but they're likely to conduct successful foot surgeries.

      • nradov 7 days ago

        Sure, but no one has found a good metric for actually quantifying quality for surgeons. You can't look at just the rate of positive outcomes because often the best surgeons take on the worst cases that others won't even attempt. And we simply don't have enough reliable data to make proper metric adjustments based on individual patient attributes.

  • dylan604 7 days ago

    Are you honestly trying to tell us that the code you receive from an AI requires none of your time to review and tweak, and is 100% correct every time and ready to deploy into your code base with no changes whatsoever? You, my friend, must be a steely-eyed missile man of prompting.

    • throwuxiytayq 7 days ago

      Consider that there are no humans in existence that fulfill your requirements, not to mention $20/mo ones

      • dylan604 7 days ago

        Why would I consider that when there absolutely are humans who can do that? Your dollar value is just ridiculous. If you're a hot-shit dev who no longer needs junior devs, then spending even 15 minutes refactoring the AI output puts you underwater on that $20/mo value.

  • anshumankmr 6 days ago

    >Not that I'm saying AI is a god-send, but new grads and entry-level roles are kind of screwed.

    A company that I know of is also having an L3 hiring freeze, and some people are being downgraded from L4 to L3 or L5 to L4. Getting more work for less cost.

  • sauercrowd 7 days ago

    "intern" and "entry level" are proxies for complexity with these comparisons, not actual seniority. We'll keep hiring interns and entry level positions, they'll just do other things.

  • baxtr 7 days ago

    I think it’s the other way around.

    If LLMs continue to become more powerful, hiring more juniors who can use them will be a no-brainer.

    • phatfish 7 days ago

      Yup, apart from a few companies at the cutting edge the most difficult problems to solve in a work environment are not technical.

  • necheffa 7 days ago

    > Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?

    AI can barely provide the code for a simple linked list without dropping NULL pointer dereferences every other line...

    Been interviewing new grads all week. I'd take a high performing new grad that can be mentored into the next generation of engineer any day.

    If you don't want to do constant hand holding with a "meh" candidate...why would you want to do constant hand holding with AI?

    > I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now.

    Not sure what you are working on. I would never prioritize speed over quality - but I do work in a public safety context. I'm actually not even sure of the legality of using an AI for design work but we have a company policy that all design analysis must still be signed off on by a human engineer in full as if it were 100% their own.

    I certainly won't be signing my name on a document full of AI slop. Now an analysis done by a real human engineer with the aid of AI - sure, I'd walk through the same verification process I'd walk through for a traditional analysis document before signing my name on the cover sheet. And that is something a jr. can bring to me to verify.

  • jmyeet 7 days ago

    This is basically what happened after 2008. The entry level jobs college grads did basically disappeared and didn't really come back for many years. So we kind of lost half a generation. Those who missed out are the ones who weren't able to buy a house or start a family and are now in their 40s, destined to be permanent renters who can never retire.

    The same thing will happen to Gen Z because of AI.

    In both cases, the net effect of this (and the desired outcome) is to suppress wages. Not only for entry-level jobs but for every job. The tech sector is going to spend the next decade clawing back the high costs of tech people from the last 15-20 years.

    The hubris here is that we've had an unprecedented boom, such that many in the workforce have never experienced a recession - what I'd call "children of summer" (to borrow a George RR Martin-ism). People have fallen into the trap of the myth of meritocracy. Too many people think that those who are living paycheck to paycheck (or are outright unhoused) are somehow at fault, when spiralling housing costs, limited opportunities and stagnant real wages are pretty much responsible for everything.

    All of this is a giant wealth transfer to the richest 0.01% who are already insanely wealthy. I'm convinced we're beyond the point where we can solve the problems of runaway capitalism with electoral politics. This only ends in tyranny of a permanent underclass or revolution.

  • abletonlive 7 days ago

    This is a big issue in the short term but in the long term I actually think AI is going to be a huge democratization of work and company building.

    I spend a lot of time encouraging people not to fight the tide but to spend that time intentionally experimenting and seeing what you can do. LLMs are already useful, and it's interesting to me that anybody is arguing they're just good for toy applications. This is a poisonous mindset and results in a potentially far worse outcome for an individual than over-hyping AI.

    I am wondering if I should actually quit a >500K a year job based around LLM applications and try to build something on my own with it right now.

    I am NOT someone that thinks I can just craft some fancy prompt and let an LLM agent build me a company, but I think it's a very powerful tool when used with great intention.

    The new grads and entry level people are scrappy. That's why startups before LLMs liked to hire them. (besides being cheap, they are just passionate and willing to make a sacrifice to prove their worth)

    The ones with a lot of creativity have an opportunity right now that many of us did not when we were in their shoes.

    In my opinion, it's important to be technically potent in this era, but it's now even more important to be creative - and that's just what so many people lack.

    Sitting in front of a chat prompt and coming up with an idea is hard for the majority of people that would rather be told what to do or what direction to take.

    My message to the entry-level folks in this weird time period: it's tough, and we can all acknowledge that - but don't let cynicism shackle you. Before LLMs, your greatest asset was fresh eyes and the lack of cynicism brought on by years in industry. Don't throw away that advantage just because the job market is tough. You, just like everybody else, have a very powerful tool and opportunity right in front of you.

    The number of people trying to convince you that it's just a sham and hype means that you have less competition to worry about. You're actually lucky there's a huge cohort of experienced people that have completely dismissed LLMs because they were too egotistical to spend meaningful time evaluating and experimenting with them. LLM capabilities are still changing every six months to a year. Anybody that has decided concretely that there is nothing to see here is misleading you.

    Even in the current state of LLMs, if the critics don't see the value and how powerful they are, it's mostly a lack of imagination that's at play. I don't know how else to say it. If I'm already able to eliminate someone's role by using an LLM then it's already powerful enough in its current state. You can argue that those roles were not meaningful or important and I'd agree - but we as a society are spending trillions on those roles right now and would continue to do so if not for LLMs.

    • izabera 7 days ago

      what does "huge democratization of work" even mean? what world do you people live in? the current global unemployment rate on my planet is around 5% so that seems pretty democratised already?

      • tdeck 7 days ago

        I've noticed that when people use the term "democratization" in business speak, it makes sense to replace it with "commodification" 99% of the time.

      • abletonlive 7 days ago
        14 more

        What I mean by that is that you have even more power to start your own company or use LLMs to reduce the friction of doing something yourself instead of hiring someone else to do it for you.

        Just as the internet was a democratization of information, llms are a democratization of output.

        That may be in terms of production or art. There is clearly a lower barrier for achieving both now compared to pre-llm. If you can't see this then you don't just have your head stuck in the sand, you have it severed and blasted into another reality.

        The reason why you reacted in such a way is again, a lack of imagination. To you, "work" means "employment" and a means to a paycheck. But work is more than that. It is the output that matters, and whether that output benefits you or your employer is up to you. You now have more leverage than ever for making it benefit you because you're not paying that much time/money to ask an LLM to do it for you.

        Pre-llm, most for-hire work was only accessible to companies with a much bigger bank account than yours.

        There is an ungodly amount of white collar workers maintaining spreadsheets and doing bullshit jobs that LLMs can do just fine. And that's not to say all of those jobs have completely useless output, it's just that the amount of bodies it takes to produce that output is unreasonable.

        We are just getting started getting rid of them. But the best part of it is that you can do all of those bullshit jobs with an LLM for whatever idea you have in your pocket.

        For example, I don't need an army of junior engineers to write all my boilerplate for me. I might have a protege if I am looking to actually mentor someone and hire them for that reason, but I can easily also just use LLMs to make boilerplate and write unit tests for me at the same time. Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.

        The junior engineer can also do this too, albeit in most cases less effectively.

        That's democratization of work.

        In your "5% unemployment" world you have many more gatekeepers and financial barriers.

        • hn_acc1 7 days ago
          9 more

          Just curious what area you work in? Python or some kind of web service / Jscript? I'm sure the LLMs are reasonably good for that - or for updating .csv files (you mention spreadsheets).

          I write code to drive hardware, in an unusual programming style. The company pays for Augment (which is now based on o4, which is supposed to be really good?!?). It's great at me typing: print_debug( at which point it often guesses right as to which local variables or parameters I want to debug - but not always. And it can often get the loop iteration part correct if I need to, for example, loop through a vector. The couple of times I asked it to write a unit test? Sure, it got the basic function call / lambda setup correct, but the test itself was useless. And a bunch of times, it brings back code I was experimenting with 3 months ago and never kept / committed, just because I'm at the same spot in the same file.

          I do believe that some people are having reasonable outcomes, but it's not "out of the box" - and it's faster for me to write the code I need to write than to try 25 different prompt variations.

          • abletonlive 7 days ago
            8 more

            A lot of python in a monorepo. Monorepos have an advantage right now because the LLM can pretty much look through the entire repo. But I'm also applying LLMs to eliminate a lot of roles that are obsolete, not just using them to code.

            Thanks for sharing your perspective with ACTUAL details unlike most people that have gotten bad results.

            Sadly hardware programming is probably going to lag or never be figured out because there's just not enough info to train on. This might change in the future when/if reasoning models get better but there's no guarantee of that.

            > which is now based on o4

            based on o4 or is o4, those are two different things. augment says this: https://support.augmentcode.com/articles/5949245054-what-mod...

              Augment uses many models, including ones that we train ourselves. Each interaction you have with Augment will touch multiple models. Our perspective is that the choice of models is an implementation detail, and the user does not need to stay current with the latest developments in the world of AI models to fully take advantage of our platform.
            
            Which IMO is....a cop out, a terrible take, and just...slimy. I would not trust a company like this with my money. For all you know they are running your prompts against a shitty open source model running on a 3090 in their closet. The lack of transparency here is concerning.

            You might be getting bad results for a few reasons:

              - your prompts are not specific enough
              - your context is poisoned. how strategically are you providing context to the prompt? a good trick is to give the llm an existing file as an example of how you want it to produce the output and tell it "Do X in the style of Y.file" (see the minimal sketch just below this list). Don't forget with the latest models and huge context windows you could very well provide entire subdirectories into context (although I would recommend being pretty targeted still)
              - the model/tool you're using sucks
              - you work in a problem domain that LLMs are genuinely bad at
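
            To make the "Do X in the style of Y.file" trick concrete, here is a minimal sketch using the openai Python client. The file paths, task, and model name are placeholders I made up, not anyone's actual setup; any chat-style API or agent tool works the same way:

              # assumes the official `openai` package and an OPENAI_API_KEY env var
              from openai import OpenAI

              client = OpenAI()

              # an existing file whose structure and conventions the model should imitate
              # (hypothetical path, used purely as a style example)
              style_example = open("src/services/billing.py").read()

              prompt = (
                  "Write a new `refunds` service module.\n"
                  "Follow the structure, naming, and error handling of the example below.\n\n"
                  "Example (src/services/billing.py):\n" + style_example
              )

              resp = client.chat.completions.create(
                  model="gpt-4o",  # placeholder; use whatever model your tool exposes
                  messages=[{"role": "user", "content": prompt}],
              )
              print(resp.choices[0].message.content)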
            
            Note: your company is paying a subscription to a service that isn't allowing you to bring your own keys. they have an incentive to optimize and make sure you're not costing them a lot of money. This could lead to worse results.

            see here for Cline team's perspective on this topic: https://www.reddit.com/r/ChatGPTCoding/comments/1kymhkt/clin...

            I suggest this as the bare minimum for the HN community when discussing their bad results with LLMs and coding:

              - what is your problem domain
              - show us your favorite prompt
              - what model and tools are you using?
              - are you using it as a chat or an agent? 
              - are you bringing your own keys or using a service?
              - what did you supply in context when you got the bad result? 
              - how did you supply context? copy paste? file locations? attachments?
              - what prompt did you use when you got the bad result?
            
            I'm genuinely surprised when someone complaining about LLM results provides even 2 of those things in their comment.

            Most of the cynics would not provide even half of this because it'd be embarrassing and reveal that they have no idea what they are talking about.

            • rini17 7 days ago
              7 more

              But how is AI supposed to replace anyone when you have either to get lucky or to correctly set up all these things you write about first? Who will do all that and who will pay for it?

              • abletonlive 7 days ago
                6 more

                So your critique of AI is that it can't read your mind and figure out what to do?

                > But how is AI supposed to replace anyone when you have either to get lucky or to correctly set up all these things you write about first? Who will do all that and who will pay for it?

                I mean....i'm doing it and getting paid for it so...

                • rini17 6 days ago
                  5 more

                  Yes, because AGI is advertised (or reviled) as such: that you plug it in and it figures everything else out itself. No need for training and management like for humans.

                  In other words, did the AI actually replace you in this case? Do you expect it to? Because people clearly expect it, and then we have discussions like this one.

                  • abletonlive 6 days ago
                    4 more

                    You are incredibly foolish to get hung up on marketing promises while ignoring LLM capabilities that are real and useful right now.

                    good luck with that

                    • rini17 5 days ago
                      3 more

                      Tell that to all these bloodbathers. I am trying it out myself and staying in touch with reality.

                      • abletonlive 5 days ago
                        2 more

                        You're trying it out with literally the expectation that it can read your mind and do what you want with no effort involved on your part.

                        So basically you're not trying it out. Please just put it down, you have nothing interesting to say here

                        • rini17 4 days ago

                          Maybe. But are you aware that no one, at least in management, wants to hear "you must make the effort"?

        • blibble 7 days ago
          4 more

          > What I mean by that is that you have even more power to start your own company or use LLMs to reduce the friction of doing something yourself instead of hiring someone else to do it for you.

          > Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.

          this sounds like the death of employment and the start of plutocracy

          not what I would call "democratisation"

          • abletonlive 7 days ago
            3 more

            > plutocracy

            Well, I've said enough about cynicism here so not much else I can offer you. Good luck with that! Didn't realize everybody loved being an employee so much

            • blibble 7 days ago
              2 more

              not everyone is capable of starting a business

              so, employee or destitute? tough choice

              • abletonlive 7 days ago

                I spent a lot of time arguing the barrier to entry for starting one is lower than ever. But if your only options are employee or being destitute, I will again point right to -> cynicism.

snowwrestler 7 days ago

Historically, people have been pretty good at predicting the effects of new technologies on existing jobs. But quite bad at predicting the new jobs / careers / industries that are eventually created with those technologies.

This is why free market economies create more wealth over time than centrally planned economies: the free market allows more people to try seemingly crazy ideas, and is faster to recognize good ideas and reallocate resources toward them.

In the absence of reliable prediction, quick reaction is what wins.

Anyway, even if AI does end up “destroying” tons of existing white collar jobs, that does not necessarily imply mass unemployment. But it’s such a common inference that it has its own pejorative: Luddite.

And the flip side of Luddism is what we see from AI boosters now: invoking a massive impact on current jobs as a shorthand to create the impression of massive capability. It's a form of marketing, as the CNN piece says.

  • digdugdirk 7 days ago

    More people need to understand the actual history of the luddites. The real issue was the usage of mechanized equipment to overwhelm an entire sector of the economy of the day - destroying the labor value of a vast swath of craftspeople and knocking them down a peg on the social ladder.

    Those people who were able to get work were now subject to a much more dangerous workplace and forced into a more rigid legalized employer/employee structure, which was a relatively new "corporate innovation" in the grand scheme of things. This, of course, allowed/required the state to be on the hook for enforcement of the workplace contract, and you can bet that both public and private police forces were used to enforce that contract with violence.

    Certainly something to think about for all the users on this message board who are undoubtedly more highly skilled craftspeople than most, and would never be caught up in a mass economic displacement driven by the introduction of a new technological innovation.

    At the very least, it's worth a skim through the Wikipedia article: https://en.wikipedia.org/wiki/Luddite

  • madaxe_again 7 days ago

    When steam engines came along, an awful lot of people argued that being able to pump water from mines faster, while inarguably useful, would not have any broad economic impact. Only madmen saw the Newcomen engine and thought “ah, railways!”. Those madmen became extraordinarily wealthy. Vast categories of work were eliminated, others were created.

    I think this situation is very similar in terms of the underestimation of the scope of application, but differs in the availability of new job categories - though that may be me underestimating new categories which are as yet as unforeseen as stokers and train conductors once were.

  • nopinsight 7 days ago

    My thesis is that this could lead to a booming market for “pink-collar” service jobs. A significant latent demand exists for more and better services in developed countries.

    For instance, upper-middle-class and middle-class individuals in countries like India and Thailand often have access to better services in restaurants, hotels, and households compared to their counterparts in rich nations.

    Elderly care and health services are two particularly important sectors where society could benefit from allocating a larger workforce.

    Many others will have roles to play building, maintaining, and supervising robots. Despite rapid advances, they will not be as dexterous, reliable, and generally capable as adult humans for many years to come. (See: Moravec's paradox).

  • beepbooptheory 7 days ago

    So, we are doomed to work forever, just maybe different jobs?

    • satvikpendem 7 days ago

      Of course. I mean this has never not been the case unless you are independently wealthy. Work always expands, that's why it's a fallacy to think that if we just had more productivity gains that we'd work half the time; no, there are always new things to do tomorrow that were not possible yesterday.

    • absurdo 7 days ago

      Basically yeah. You live in a world of layered servitude and, short of a financial windfall that hoists you up for some time, you’re basically guaranteed to work your entire life, and grow old, frail and poor. This isn’t a joke, it’s reality for many people that’s hidden from us to keep us deluded. Similar to my other mini-rant, I don’t have any valid answers to the problem at hand. Just acknowledging how fucked things are for humanity.

      • aianus 7 days ago
        14 more

        No, it's quite easy to make $1mm in a rich country and move to a poorer country and chill if you so desire.

        • lurk2 7 days ago

          > No, it's quite easy to make $1mm in a rich country and move to a poorer country and chill if you so desire.

          On an aggregate level this is true and contrary to the prevailing sentiment of doomer skepticism, the developed world is usually still the best place to do it. On an individual level, a lot of things can go wrong between here and a million dollars.

        • satvikpendem 7 days ago
          8 more

          It's not that easy, as in, you can make the money but the logistics of moving and living in another country are always harder than expected, both culturally and bureaucratically.

          • Johanx64 7 days ago
            7 more

            >logistics of moving and living in another country are always harder than expected, both culturally and bureaucratically.

            You know what's hard? Moving from a poor "shithole" to a wealthy country, with expensive accommodation, where a month of rent is something you'd save up months for.

            Knowing and displaying (faking really) 'correct' cultural status signifiers to secure a good job. And all the associated stress, etc.

            Moving the other direction to a low-cost-of-living or poor shithole country is extremely easy in comparison with a fat stack of resources.

            You literally don't have to worry about anything in the least.

            • eastbound 7 days ago
              5 more

              Apart from the tax office suing you into oblivion because the startup you’ve founded is now worth 10x its revenue, so you need to pay 40% CGT with only 1/10th the income (at least that’s the French exit tax).

              So basically once you are rich, you have to choose to leave most of it on the table to go to a poor country.

              • bravesoul2 6 days ago
                4 more

                If you have an early startup valued based on vibes, and they tax you on those vibes (do they... source...?) then you are not rich. A xoogler who saved 500k post tax is arguably richer in that scenario.

                • eastbound 6 days ago
                  3 more

                  All companies are worth more than their revenue. It’s not vibes, it’s just how it works.

                  Same goes for employees with stock options in USA: They get taxed on CGT every year until they sell, for money they don’t have yet.

                  Same goes for development costs: a change in the US tax code circa 2016 meant that development costs were treated as an investment spread over 3 years, so if you have $1m in sales and $1m in costs in the first year, the IRS only counts $333k of real costs and you owe tax on the remaining $666k.

                  It’s a classic problem in capital. So yes, a 300k€ revenue means you are valued at a multiple of that and owe tax.

                  • bravesoul2 5 days ago

                    Hard to find sources on what doesn't exist but this article suggests the US doesn't pay tax on unrealised gains. https://www.axios.com/2024/08/23/kamala-harris-unrealized-ca...

                    I think exercising an option may be an event that realizes gains and causes issues for someone who has to pay tax but can't sell the asset as it is not liquid. But I think that isn't what you are talking about?

                    As for revenue. Many companies are priced at X revenue, but those are companies you expect to grow. If a company raises $1m and sells AI tokens for $1m/y (the revenue) in order to "dominate the market", but pays AWS $2m/y to serve them and can't raise any more or increase prices, then that startup is probably worth nothing. For example.

                    Another example is a bar that sells $1m in drinks and food (its revenue), makes $500k gross and $50k net after staff, rent, taxes, etc.

                  • Johanx64 6 days ago

                    You're saying some wild stuff, it sounds like you need a competent accountant more than anything else brother.

            • satvikpendem 6 days ago

              Just because it's easier elsewhere doesn't mean it's per se easy. There are lots of bureaucratic challenges one must face even with lots of money, as my expat FIRE friends have found out over the years.

        • andrekandre 7 days ago

            > make $1mm in a rich country and move to a poorer country and chill if you so desire
          
          i wonder if such trends are good for said poorer country (e.g real estate costs) in the long run?
        • ta12653421 6 days ago
          3 more

          ++1

          Fun fact that most people ignore: there have been only around 7,000 people on Mount Everest - while the US alone has around 300,000-350,000 people earning more than 1 million USD a year.

          So - it's clear: it's easier to become an "income-millionaire" than to climb Mount Everest! :-)

          • bravesoul2 6 days ago
            2 more

            Yeah, not so sure. If you are a mountaineer there is a plan that gets you to the top of Everest. There is no plan that gets you $1m/y. Also consider that many of those high incomes rest on daddy's wealth in one way or another.

            • ta12653421 4 days ago

              In reverse: usually you won't die if your "business-plan-to-get-to-$1m-income" doesn't come true; in Everest climbing, your next step may be your last, regardless of the plan.

              So - pick your opportunity! :-D

    • 77pt77 7 days ago

      Just like the Red Queen.

      You have to always keep on moving just to stay in the same place.

  • csomar 7 days ago

    I think the takeaway is that interest rates have to be maintained relatively high, as the ZIRP era has shown that it breaks the free market. There is a reason why Trump wants to lower the interest rate.

    Sure it is painful but a ZIRP economy doesn't listen to the end consumers. No reason to innovate and create crazy ideas if you have plenty of income.

  • tw04 7 days ago

    But also it potentially means mass unemployment and we have literally no plan in place if that happens beyond complete societal collapse.

    Even if you think all the naysayers are “luddites”, do you really think it’s a great idea to have no backup plan beyond “whupps we all die or just go back to the Stone Age”?

    • snowwrestler 7 days ago

      We actually have many backup plans. The most effective ones will be the new business plans that unlock investment which is what creates new jobs. But behind that are a large set of government policies and services that help people who have lost work. And behind that are private resources like charities, nonprofits, even friends and family.

      People don’t want society to collapse. So if you think it’s something that people can prevent, feel comforted that everyone is trying to prevent it.

      • alluro2 7 days ago

        Compared to 30-40 years ago, I believe many in the US would argue that society has already collapsed to a significant extent, with regards to healthcare, education, housing, cost of life, homelessness levels etc.

        If these mechanisms you mention are in place and functioning, why is there, for example, such large growth of the economic inequality gap?

    • ccorcos 7 days ago

      > do you really think it’s a great idea to have no backup plan

      What makes you think people haven’t made back up plans?

      Or are you saying government needs to do it for us?

      • argomo 7 days ago
        11 more

        Ah yes the old "let's make individuals responsible for solving societal problems" bit. Nevermind that the state is sometimes the only entity capable of addressing the situation at scale.

        • ccorcos 7 days ago
          5 more

          Yes, I believe individuals should take responsibility for themselves and their future prosperity. We all know what happens when you don’t…

          History has shown us quite clearly what happens if governments, and not individuals, are responsible for finding employment.

          • Voloskaya 7 days ago
            4 more

            I fail to understand what it is you are suggesting a 20 something year old is supposed to do to prepare their backup plan.

            They should all just find a way to be set for life within the next 3 years - is this your proposal?

            • ccorcos 6 days ago
              3 more

              You’re supposed to learn skills that others are willing to pay you for.

              • Voloskaya 6 days ago
                2 more

                You are responding in a thread about what to do in the event of AI replacing most humans at skills others are willing to pay for, so clearly, this is a zero-value answer.

                • ccorcos 6 days ago

                  I guess I don’t buy into the premise. Aren’t there some things you’d prefer to pay a human for than a robot?

                  I don’t think this 3 year timeline is realistic, and what we’re going to do in 20 years is unpredictable.

        • Nasrudith 7 days ago
          5 more

          That is literally part of the deal of not living in a literal dictatorship. It is your responsibility to solve societal problems. I mean, geeze, what did they teach in civics classes in your generation?

          • lazyasciiart 7 days ago
            4 more

            So if you believe that it is your individual responsibility to solve societal problems, and assuming you believe in the possibility of human-driven mitigation of climate change: presumably you individually are solving that, by devoting your life to it? Or do you not really mean it's your individual responsibility?

            • ccorcos 6 days ago
              3 more

              People have freedom to choose their responsibilities. Some choose to work on solving society’s problems, others don’t.

              What’s a better alternative?

              • lazyasciiart 4 days ago
                2 more

                You're saying it isn't their responsibility.

                • ccorcos 3 days ago

                  No, I'm saying that we all have our opinions and we need to be careful about who gets the power to tell others what to do.

                  You might think that we are collectively responsible for solving climate change, and someone else might think that we are collectively responsible for ending the murder of unborn children via abortion or any number of other things.

                  So who gets to be the dictator of whom? If we are all going to live together in harmony, we have to be tolerant of diversity.

                  That doesn't take away any freedom from you to take responsibility. And it preserves other's freedom as well.

michaeldoron 7 days ago

Every time an analyst gives the current state of AI-based tools as evidence that AI disruption is just hype, I think of skeptics who dismissed the exponential growth of covid19 cases due to their initial low numbers.

Putting that aside, how is this article called an analysis and not an opinion piece? The only analysis done here is asking a labor economist what conditions would allow this claim to hold, and giving an alternative, already circulated theory that AI company CEOs are creating false hype. The author even uses everyday language like "Yeaaahhh. So, this is kind of Anthropic’s whole ~thing.~ ".

Is this really the level of analysis CNN has to offer on this topic?

They could have sketched the growth in foundation model capabilities vs. finite resources such as data, compute and hardware. They could have written about the current VC market and the need for companies to show results and not promises. They could have even written about the giant biotech industry and its struggle with incorporating novel, exciting drug discovery tools with slow-moving FDA approvals. None of this was done here.

  • Terr_ 7 days ago

    > I think of skeptics who dismissed the exponential growth of covid19 cases due to their initial low numbers.

    Compare: "Whenever I think of skeptics dismissing completely novel and unprecedented outcomes occurring by mechanisms we can't clearly identify or prove (will) exist... I think of skeptics who dismissed an outcome that had literally hundreds of well-studied historical precedents using proven processes."

    You're right that humans don't have a good intuition for non-linear growth, but that common thread doesn't heal over those other differences.

    • actuallyalys 7 days ago

      Yeah, for this analogy to work, we’d have to see AI causing a small but consistently doubling amount of lost jobs.

      • mitthrowaway2 6 days ago
        3 more

        If that were happening right now, how would we know? COVID-19 cases were tracked imperfectly but pretty well; is there any equivalent for AI-related job losses?

        • actuallyalys 6 days ago

          Right, my point is that we don't have the data to make a similar exponential argument. We can't rule out the possibility that we're currently in the early stages of exponential growth based on direct measurement. If it is exponential, once it doubles enough times, it will show up in overall economic data.
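
          A rough sketch of why it couldn't stay hidden for long under that assumption (the starting number and doubling time below are invented, purely to show the compounding):

            # hypothetical: 1,000 AI-attributable job losses this month, doubling monthly
            n0 = 1_000
            for month in range(11):
                print(month, n0 * 2 ** month)
            # month 10 is already ~1.02 million losses/month -- far too large to hide
            # in aggregate employment data, so "too small to measure" and "exponential"
            # can't both stay true for long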

          We can also look at the tools, which have improved relatively quickly but don't appear to be improving exponentially. GPT-4 and GPT-4o came out about a year after their predecessors. Is GPT-4o a bigger leap than GPT-4 was? Are GPT-4.5 or 4.1 a bigger leap than GPT-4 was? I honestly don't know, but the general reception suggests otherwise. The biggest leaps recently seem to be making models that perform roughly as well as past ones but are much smaller. That has advantages from the standpoint of democratization and energy consumption, but those kinds of improvements seem to favor a situation where AI augments workers rather than replaces them.

  • bgwalter 7 days ago

    Why not use the promised exponential growth of home ownership that led to the catastrophic real estate bubble that burst in 2008 as an example?

    We are still dealing with the aftereffects, which led to the elimination of any working class representation in politics and suppression of real protests like Occupy Wall Street.

    When this bubble bursts, the IT industry will collapse for some years like in 2000.

    • michaeldoron 7 days ago

      The growth of home ownership was an indicator of real estate investment, not of real world capabilities - once the value of real estate dropped and the bubble burst, those investments were worth less than before, causing the crisis. In contrast, the growth in this scenario is the capabilities of foundation models (and to a lesser extent, the technologies that stem out of these capabilities). This is not a promise or an investment, it's not an indication of speculative trust in this technology, it is a non-decreasing function indicating a real increase in performance.

  • mjburgess 7 days ago

    You can pick and choose problems from history where folk belief was wrong: WW1 vs. Y2K.

    This isn't very informative. Indeed, engaging in this argument-by-analogy betrays a lack of actual analysis, credible evidence and justification for a position. Arguing "by analogy" in this way, which picks and chooses an analogy, just restates your position -- it doesn't give anyone reasons to believe it.

  • TheOtherHobbes 7 days ago

    I'm not seeing how comparing AI to a virus that killed millions and left tens of millions crippled is an effective way to support your argument.

    • drewcon 7 days ago

      Humans are not familiar with exponential change so they have almost no ability to manage through exponential change.

      It's an apt comparison. The criticisms in the CNN article are already out of date in many instances.

      • bayarearefugee 7 days ago
        2 more

        As a developer that uses LLMs, I haven't seen any evidence that LLMs or "AI" more broadly are improving exponentially, but I see a lot of people applying a near-religious belief that this is happening or will happen because... actually, I don't know? because Moore's Law was a thing, maybe?

        In my experience, for practical usage LLMs aren't even improving linearly at this point as I personally see Claude 3.7 and 4.0 as regressions from 3.5. They might score better on artificial benchmarks but I find them less likely to produce useful work.

        • drewcon 6 days ago

          5 years ago commercial image gen produced hallucinatory dream like blobs.

          2 years ago it was cool but unreliable.

          Today I just did an entire “photo shoot” in Midjourney.

      • const_cast 7 days ago

        Viruses spread and propagate themselves, often changing along the way. AI doesn't, and probably shouldn't. I think we've made a few movies on why that's a bad idea.

      • geraneum 7 days ago
        9 more

        > Humans are not familiar with exponential change

        Humans are. We have tools to measure exponential growth empirically. It was done for COVID (i.e. epidemiologists do that usually) and is done for the economy and other aspects of our lives. If there's to be exponential growth, we should be able to put it in numbers. "Trust me bro" is not a good measure.

        Edit: typo
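
        As a minimal sketch of "putting it in numbers" (the monthly series below is invented; with real counts you fit the log of the series and read off a growth rate and doubling time):

          import numpy as np

          # hypothetical monthly counts of whatever you suspect is growing exponentially
          months = np.arange(8)
          counts = np.array([950, 2100, 3900, 8100, 15800, 33000, 64000, 131000])

          # exponential growth is a straight line in log space
          slope, intercept = np.polyfit(months, np.log(counts), 1)
          print(f"monthly growth ~{np.exp(slope) - 1:.0%}, doubling time ~{np.log(2) / slope:.1f} months")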

        • margalabargala 7 days ago
          8 more

          There's individual persons modelling exponential change just fine, and then there's what happens when you apply to the populace at large.

          "A person is smart. People are dumb, panicky dangerous animals and you know it."

          • geraneum 7 days ago
            7 more

            > when you apply to the populace at large

            What does this mean? What do you apply to populace at large? Do you mean a populace doesn’t model the exponential change right?

            • margalabargala 7 days ago
              6 more

              Yep that's what I meant! Context clues did you well here.

              • geraneum 7 days ago
                5 more

                “A populace modeling exponential change”. Yeah, that’s just word salad.

                • margalabargala 7 days ago
                  4 more

                  We can agree to disagree. After all, even you were able to figure out what I meant :-)

                  • geraneum 6 days ago
                    3 more

                    disagree on what? You have not put forward a coherent statement. I had to fix your sentence. ;)

                    • geraneum 6 days ago

                      Since I can’t reply under your answer for some reason I put it here.

                      We can have a constructive discussion instead. My problem was not actually parsing what you said. I’m questioning whether a populace collectively modeling exponential change is really meaningful. You could, for example, describe what it looks like when a populace can model change exponentially. Is there any relevant literature on this subject that I can look into? Does this phenomenon have a name?

                    • margalabargala 6 days ago

                      I understand that complex sentences can sometimes be difficult to parse for median Americans or non-native speakers, but we disagree on whether what I said was word salad prior to you rewording it by explicitly enumerating the implied indirect object. As you demonstrated, context clues were ample to determine meaning.

      • agarren 7 days ago

        > The criticisms in the cnn article are already out date in many instances.

        Which ones, specifically? I’m genuinely curious. The ones about “[an] unfalsifiable disease-free utopia”? The one from a labor economist basically equating Amodei’s high-unemployment/strong economy claims to pure fantasy? The fact that nothing Amodei said was cited or is substantiated in any meaningful way? Maybe the one where she points out that Amodei is fundamentally a sales guy, and that Anthropic is making the rounds saying scary stuff just after they released a new model - a techbro marketing push?

        I like anthropic. They make a great product. Shame about their CEO - just another techbro pumping his scheme.

    • dingnuts 7 days ago

      especially when the world population is billions and at the beginning we were worried about double digit IFR.

      Yeah. Imagine if COVID had actually killed 10% of the world population. Killing millions sucks, but mosquitos regularly do that too, and so does tuberculosis, and we don't shut down everything. Could've been close to a billion. Or more. Could've been so much worse.

    • IshKebab 7 days ago

      I think you missed the point. AI is dismissed by idiots because they are looking at its state now, not what it will be in future. The same was true in the pandemic.

  • monkeyelite 7 days ago

    > I think of skeptics who dismissed the exponential growth of covid19 cases due to their initial low numbers.

    But that didn’t happen. All of the people like pg who drew these accelerating graphs were wrong.

    In fact, I think just about every commenter on COVID was wrong about what would happen in the early months regardless of political angle.

    • tim333 6 days ago

      I remember scientists, especially epidemiologists being quite accurate. I guess the key is to not even have a political angle but instead some knowledge of what you are talking about.

      • monkeyelite 6 days ago

        I don’t remember that. How do you explain the total collapse of public trust in science and medical institutions?

        Try revisiting their content from spring of 2020 (flatten the curve, wild death predictions, etc).

        > I guess the key is to not even have a political angle

        It’s a fantasy to imagine technical knowledge allows you to transcend the political and 2020 only reinforced that.

  • timr 7 days ago

    > I think of skeptics who dismissed the exponential growth of covid19 cases due to their initial low numbers.

    Uh, not to be petty, but the growth was not exponential — neither in retrospect, nor given what was knowable at any point in time. About the most aggressive, correct thing you could’ve said at the time was “sigmoid growth”, but even that was basically wrong.

    If that’s your example, it’s inadvertently an argument for the other side of the debate: people say lots of silly, unfounded things at Peak Hype that sound superficially correct and/or “smart”, but fail to survive a round of critical reasoning. I have no doubt we’ll look back on this period of time and find something similar.

  • SoftTalker 7 days ago

    Analysis == Opinion when it comes to mainstream news reporting. It's one guy's thinking on something.

  • deadbabe 7 days ago

    It goes both ways. Once the exponential growth of COVID started, I heard wildly outrageous predictions of what was going to happen next, none of which ever really came to fruition.

  • biophysboy 7 days ago

    It's an article reformulated from a daily newsletter. Newsletters take the form of a quick, casual follow-up to current events (e.g. an Amodei interview). It's not intended to be exhaustive analysis.

    Besides the labor economist bit, it also makes the correct point that tech people regularly exaggerate and lie. A great example of this is biotech, a field I work in.

  • qgin 7 days ago

    This is the exact thing I’ve expressed as well.

    This moment feels exactly to me like that moment when we were going to “shut down for two weeks” and the majority of people seemed to think that would be the end of it.

    It was clear where the trend was going, but exponentials always seem ridiculous on an intuitive level.

  • PeterStuer 7 days ago

    "Is this really the level of analysis CNN has to offer on this topic?"

    It's not CNN exclusive. News media that did not evolve towards clicks, riling up people, hatewatching and paid propaganda to the highest bidder went extinct a decade ago. This is what did evolve.

    • biophysboy 7 days ago

      This is outdated. Most of journalism has shifted to subscription models, offering a variety of products under one roof: articles, podcasts, newsletters, games, recipes, product reviews, etc.

  • aaronbaugher 7 days ago

    > Is this really the level of analysis CNN has to offer on this topic?

    Not just this topic.

  • bckr 7 days ago

    That’s not what major news outlets are for. I’m not sure exactly what they’re for.

  • leeroihe 7 days ago

    The best heuristic is what people are realizing happened with unchecked "skilled" immigration in places like Canada (and soon the U.S.). Everyone was sold that we "need these workers" because nobody was willing to work and that they added to GDP. When in reality, there's now significant evidence that all these new arrivals did was put a net drain on welfare, devalue the labor of endemic citizens (regardless of race - in many cases affecting endemic minorities MORE) and, in the end, just reduce costs while degrading the companies that did this.

    We will wake up in 5 yrs to find we replaced people for a dependence on a handful of companies that serve llms and make inference chips. Its beyond dystopian.

    • matteotom 7 days ago

      Can you provide more details about said "significant evidence"? This seems to be a pretty popular belief, despite being contrary to generally accepted economics, and I've yet to see good evidence for it.

darth_avocado 7 days ago

I don’t understand how any business leader can be excited about humans being replaced by AI. If no one has a job, who’s going to buy your stuff? When the unemployment in the country goes up, consumer spending slows down and recession kicks in. How could you be excited for that?

  • ben_w 7 days ago

    Game theory/Nash equilibrium/Prisoner's Dilemma, and the turkey's perspective in the problem of induction.

    So far, for any given automation, each actor gets to cut their own costs to their benefit — and if they do this smarter than anyone else, they win the market for a bit.

    Every day the turkey lives, they get a bit more evidence the farmer is an endless source of free food that only wants the best for them.

    It's easy to fool oneself that the economics are eternal with reference to e.g. Jevons paradox.

    • abracadaniel 7 days ago

      My long term fear with AI is that by replacing entry level jobs, it breaks the path to train senior level employees. It could take a couple of decades to really feel the heat from it, but could lead to massive collapse as no one is left with any understanding of how existing systems work, or how to design replacements.

      • xp84 7 days ago
        2 more

        > It could take a couple of decades to really feel the heat from it, but could lead to massive collapse

        When you consider how this interacts with the population collapse (which is inevitable now everywhere outside of some African countries) this seems even worse. In 20 years, we will have far fewer people under age 60 than we have now, and among that smaller cohort, the percentage of people at any given age who have useful levels of experience will be less because they may not be able to even begin meaningful careers.

        Best case scenario, people who have gotten 5 or more years of experience by now (college grads of 2020) may scrape by indefinitely. They'll be about 47 then and have no one to hire that's more qualified than AI. Not necessarily because AI is so great; rather, how will there be someone with 20 years of experience when we simply don't hire any junior people this year?

        Worst case, AI overtakes the Class of 2020 and moves up the experience-equivalence ladder faster than 1 year per year, so it starts taking out the classes of 2015, 2010, etc.

        • baby_souffle 7 days ago

          > Worst case, AI overtakes the Class of 2020 and moves up the experience-equivalence ladder faster than 1 year per year, so it starts taking out the classes of 2015, 2010, etc.

          This is my bet. Similar to Moores law. Where it plateaus is anybody’s guess…

      • pseudo0 7 days ago
        3 more

        Juniors and offshore teams will probably be the most severely impacted. If a senior dev is already breaking off smaller tightly scoped tasks and fixing up the results, that loop can be accomplished much more quickly by iterating with a LLM. Especially if you have to wait a business day for someone in India to even start on the task when a LLM is spitting out a similar quality PR in minutes.

        Ironically a friend of mine noticed that the team in India they work with is now largely pushing AI-generated code... At that point you just need management to cut out the middleman.

        • teitoklien 7 days ago
          2 more

          lol, what it’s soon going to lead to is unfortunately the very opposite of what you’re thinking.

          Management will cut down your team’s headcount and outsource even more to India, Vietnam and the Philippines.

          A CFO looks at the balance sheet, not operations context; even if your idea is better, the opposite of what you think is likely going to happen very soon.

          • dagw 6 days ago

            > Management will cut down your team’s headcount and outsource even more to India, Vietnam and the Philippines

            Management did all that at companies I've worked for for years before 'AI'. The big change is that the teams in India won't be 200 developers, but 20 developers handholding an AI.

      • Nasrudith 7 days ago

        The worst case for such a cycle is generating new jobs in reverse engineers. Although in practice with what we have seen with machinists it tends to just accelerate existing trends towards outsourcing to countries who haven't had the 'entry level collapse'.

        We've already eliminated certain junior level domains essentially by design. There aren't any 'barber-surgeons' with only two years of training for good reason. Instead we have integrated surgery into a more lengthy and complicated educational path to become what we now would consider a 'proper' surgeon.

        I think the answer is that if the 'junior' is uneconomical or otherwise unacceptable be prepared to pay more for the alternative, one way or another.

      • lurkshark 7 days ago

        I’m actually worried we’ve gotten a kickstart on that process already. Anecdotally it seems like entry level developer jobs are harder to come by today than a decade ago. Without the free-money growth we were seeing for a long time it seems like companies are more incentivized to only hire senior developers at the loss of the greater good that comes with hiring and mentoring junior developers.

        Caveat that this is anecdotal, not sure if there are numbers on this.

      • scarlehoff 7 days ago

        This is what I fear as well: some companies might adopt a "sustainable" approach to AI, but others will dynamite the entry path to their companies. Of course, if your only goal is to sell a unicorn and be out after three years, who cares... but serious companies with lifelong employees that adopt the AI-first strategy are in for a surprise (looking at you, Microsoft).

      • cjs_ac 7 days ago

        This isn't AI-specific, though; businesses decided that it was everyone else's responsibility to train their employees over a decade ago.

      • asah 6 days ago

        I've heard this fear for decades about COBOL programmers, and yet I don't see this bubbling up except higher costs.

        If there's a shortage, in the free market, humans will retrain.

      • socalgal2 7 days ago

        I agree with your worry.

        That said, the first thing that jumps to my mind is cars. Back when they were first introduced you had to be a mechanically inclined person to own one and deal with it. Today, people just buy them and hire the very small number of experts (relative to the population of drivers) to deal with any issues. Same with smartphones. The majority of users have no idea how they really work. If it stop working they seek out an expert.

        ATM, AI just seems like another level of that. JS/Python programmers don't need to know bits and bytes and memory allocation. Vibe coders won't need to know what JS/Python programmers need to know.

        Maybe there won't be enough experts to keep it all going though.

      • BriggyDwiggs42 7 days ago

        If it takes a few decades, they may actually automate all but the most impressive among senior positions though.

      • Traubenfuchs 6 days ago

        As a senior software engineer code monkey this is my greatest hope!

    • spacemadness 7 days ago

      And we as humans figured all this out and still do nothing with this knowledge. We fight as hard as we can against collective wisdom.

    • absurdo 7 days ago

      Basically if anyone has an iota of sensibility you should have never taken sama, Zuckerberg, Gates, or anyone else of that sort at face value. When they tell you they’re doing things for the good of humanity, look at what the other hand is up to.

      • antithesizer 6 days ago

        >or anyone else of that sort

        This category is expansive enough to make fools of almost everyone on hn.

  • anvandare 7 days ago

    A cancerous cell does not care that it is (indirectly) killing the lifeform that it is a part of. It just does what it does without a thought.

    And if it could think, it would probably be very proud of the quarter (hour) figures that it could present. The Number has gone up, time for a reward.

  • thmsths 7 days ago

    Tragedy of the commons: no one being able to buy stuff is a problem for everyone, but being able to save just a bit more by getting rid of your workforce is a huge advantage for your business.

    • bckr 7 days ago

      “tragedy of the commons” is treated as a Theory of Human Nature when it’s really a religious principle underlying how we operate our society.

      • Jensson 7 days ago

        People hunted large mammals to extinction long before modern society, so tragedy of the commons is nature in general. We know other predators do it as well, not just humans.

    • JKCalhoun 7 days ago

      … in the interim, of course.

  • untrust 7 days ago

    Another question: if AI is going to eat up everyone's jobs, how will any business be safe from a new competitor showing up and unseating them from their throne? I don't think the low-level peons would be the only ones at stake; a company could easily be outcompeted as well, since AI could conceivably outperform or replace any existing product anyway.

    I guess funding for processing power and physical machinery to run the AI backing a product would be the biggest barrier to entry?

    • zhobbs 7 days ago

      Yeah this will likely lead to margin compression. The best companies will be fine though, as brand and existing distribution is a huge moat.

      • azemetre 7 days ago
        2 more

        “Best” is carrying a lot of weight. More accurate to say the monopolistic companies that engage in regulatory capture will be fine.

        • jrs235 7 days ago

          Empowering the current US President to demand more bribes.

    • layer8 7 days ago

      Institutional knowledge is key here. Third parties can’t replicate it quickly just by using AI.

      • lubujackson 7 days ago

        Luckily we are firing all those people so they will be available for new roles.

        This feels a lot like the dot boom/dot bust era where a lot of new companies are going to sprout up from the ashes of all this disruption.

      • floatrock 7 days ago

        Also: network effects, inertia, cornering the market enough to make incumbents uneconomical, regulatory capture...

        AI certainly will increase competition in some areas, but there are countless examples where being the best at something doesn't make you the leader.

    • JKCalhoun 7 days ago

      The beginning of the AI Wars?

  • SpicyLemonZest 7 days ago

    Business leaders in AI are _not_ excited and agree with your concerns. That's what the source article is about - the CEO of AI lab Anthropic said he sees major social problems coming soon. The problem is that the information environment is twisted in knots. The author, like many commentators, characterizes your concerns as "optimism" and "hype", because she doesn't think AI will actually have these large impacts.

    • spacemadness 7 days ago

      I think he says this just to hype up how powerful of a force AI is which helps these CEOs bottom line eventually. Cynically “we’ve created something so powerful it will eliminate jobs and cause strife” gets those investors excited for more.

    • geraneum 7 days ago

      They are. The audience of this talk is not normal people. He’s excited and is targeting a specific group in his messaging. The author is a person like the majority of us.

      • SpicyLemonZest 7 days ago
        2 more

        I don't understand what you mean. The audience of this talk is Axios, a large news website targeting the general public.

        • geraneum 6 days ago

          I believe he's talking to money, to investors. He does it through Axios, CNN, BBC, etc. Their company is not sustainable at this rate. None of the LLM service providers are. They need money for now and that's why they talk like this.

          50% of a group of workers losing their jobs to this tech is not a worrisome future for him. It's a pitch!

  • onlyrealcuzzo 7 days ago

    > If no one has a job, who’s going to buy your stuff?

    All the people employed by the government and blue collar workers? All the entrepreneurs, gig workers, black market workers, etc?

    It's easy to imagine a world in which there are way less white collar workers and everything else is pretty much the same.

    It's also easy to imagine a world in which you sell less stuff but your margins increase, and overall you're better off, even if everybody else has less widgets.

    It's also easy to imagine a world in which you're able to cut more workers than everyone else, and on aggregate, barely anyone is impacted, but your margins go up.

    There's tons of other scenarios, including the most cited one - that technology thus far has always led to more jobs, not less.

    They're probably believing any combination of these concepts.

    It's not guaranteed that if there's 5% less white-collar workers per year for a few decades that we're all going to starve to death.

    In the future, if trends continue, there's going to be way less workers - since there's going to be a huge portion of the population that's old and retired.

    You can lose x% of the work force every year and keep unemployment stable...

    A large portion of the population wants a lot more people to be able to not work and get entitlements...

    It's pretty easy to see how a lot of people can think this could lead to something good, even if you think all those things are bad.

    Two people can see the same painting in a museum, one finds it beautiful, and the other finds it completely uninteresting.

    It's almost like asking - how can someone want the Red team to win when I want the Blue team to win?

    • darth_avocado 7 days ago

      > All the people employed by the government and blue collar workers

      If people don’t have jobs, government doesn’t have taxes to employ other people. If CEOs are salivating at the thought of replacing white collar workers, there is no reason to think next step of AI augmented with robotics won’t replace blue collar workers as well.

      • trealira 7 days ago
        12 more

        > If CEOs are salivating at the thought of replacing white collar workers, there is no reason to think next step of AI augmented with robotics won’t replace blue collar workers as well.

        Robotics seems harder, though, and has been around for longer than LLMs. Robotic automation can replace blue collar factory workers, but I struggle to imagine it replacing a plumber who comes to your house and fixes your pipes, or a waiter serving food at a restaurant, or someone who restocks shelves at grocery stores, that kind of thing. Plus, in the case of service work like being a waiter, I imagine some customers will always be willing to pay for a human face.

        • ben_w 7 days ago

          > or a waiter serving food at a restaurant,

          Over the last few years, I've seen a few in use here in Berlin: https://www.alibaba.com/showroom/robot-waiter-for-sale.html

          > or someone who restocks shelves at grocery stores

          For physical retail, or home delivery?

          People are working on this for traditional stores, but I can't tell which news stories are real and which are hype — after around a decade of Musk promising FSD within a year or so, I know not to simply trust press releases even when they have a video of the thing apparently working.

          For home delivery, this is mostly kinda solved: https://www.youtube.com/watch?v=ssZ_8cqfBlE

          > Plus, in the case of service work like being a waiter, I imagine some customers will always be willing to pay for a human face.

          Sure… if they have the money.

          But can we make an economy where all the stuff is free, and we're "working" n-hours a day smiling at bad jokes and manners of people we don't like, so we can earn money to spend to convince someone else who doesn't like us to spend m-hours a day smiling at our bad jokes and manners?

          • trealira 7 days ago

            > Over the last few years, I've seen a few in use here in Berlin: https://www.alibaba.com/showroom/robot-waiter-for-sale.html

            Wow. I genuinely didn't think robotic waiters would exist anytime soon.

            > For physical retail, or home delivery?

            I was thinking for physical retail. Thanks for the video link.

            • ido 6 days ago

              It’s more a dishwasher level of automation than C-3PO: when you order, they enter your table number and the kitchen staff puts the prepared dishes on the shelves in the robot, which then drives to your table. Once it gets there you take the dishes from the robot.

              Tech-wise this could have existed 30 years ago (maybe going around the restaurant would have been more challenging than today but it’s a fixed path and the robots don’t leave the restaurant).

            • pesus 7 days ago

              I've seen robot waiters at one restaurant in SF as well, and I wouldn't be surprised if there were more. They'll most likely be here on a large scale faster than we think.

        • JadeNB 6 days ago

          > Robotics seems harder, though, and has been around for longer than LLMs. Robotic automation can replace blue collar factory workers, but I struggle to imagine it replacing a plumber who comes to your house and fixes your pipes, or a waiter serving food at a restaurant, or someone who restocks shelves at grocery stores, that kind of thing. Plus, in the case of service work like being a waiter, I imagine some customers will always be willing to pay for a human face.

          Wouldn't you have struggled to imagine most of what LLMs can now do 5 years ago?

        • ryandrake 7 days ago

          > I struggle to imagine it replacing a plumber who comes to your house and fixes your pipes, or a waiter serving food at a restaurant, or someone who restocks shelves at grocery stores, that kind of thing.

          These are three totally different jobs requiring different kinds of skills, but they will all be replaced with automation.

          1. Plumber is a skilled trade, but the "skilled" parts will eventually be replaced with 'smart' tools. You'll still need to hire a minimum wage person to actually go into each unique home and find the plumbing, but the tools will do all the work and will not require an expensive tradesman's skills to work.

          2. Waiter serving food, already being replaced with kiosks, and quite a bit of the "back of the house" cooking areas are already automated. It will only take a slow cultural shift towards ordering food through technology-at-the-table, and robots wheeling your food out to you. We've already accepted kiosks in fast food and self-checkout in grocery stores. Waiters are going bye-bye.

          3. Shelf restocking, very easy to imagine automating this with robotics. Picking a product and packing it into a destination will be solved very soon, and there are probably hundreds of companies working on the problem.

          • 9x39 7 days ago

            > 1. Plumber is a skilled trade, but the "skilled" parts will eventually be replaced with 'smart' tools. You'll still need to hire a minimum wage person to actually go into each unique home and find the plumbing, but the tools will do all the work and will not require an expensive tradesman's skills to work.

            But if you have to be trained in the use of a variety of 'smart' tools, that sounds like engineering: knowing which tool to deploy and how.

            It's also incredibly optimistic about future tools - what smart tool fixes leaky faucets, hauls and installs water heaters, unclogs or replaces sewer mains, runs new pipes, does all this work and more to code, etc? There are cool tools and power tools and cool power tools out there, but vibe plumbing by the unskilled just fills someone's house with water or worse...

            > 2. Waiter serving food, already being replaced with kiosks, and quite a bit of the "back of the house" cooking areas are already automated. It will only take a slow cultural shift towards ordering food through technology-at-the-table, and robots wheeling your food out to you. We've already accepted kiosks in fast food and self-checkout in grocery stores. Waiters are going bye-bye.

            Takeout culture is popular among GenZ, and we're more likely to see walk-up orders with online order ahead than a facsimile of table service.

            Why would cheap restaurants buy robots and allow a dining room to go unmanned and risk walkoffs instead of just skipping the whole make-believe service aspect and run it like a pay-at-counter cafeteria? You're probably right that waiters will disappear outside of high-margin fine dining as labor costs squeeze margins until restaurants crack and reorganize.

            >3. Shelf restocking, very easy to imagine automating this with robotics. Picking a product and packing it into a destination will be solved very soon, and there are probably hundreds of companies working on the problem.

            Do-anything-like-a-human robots might crack that, but today it's still sci-fi. Humans are going to haul things from A to B for a bit longer, I think. I bet we see drive-up and delivery groceries win via lights-out warehouses well before "I, Robot" shelf stockers.

          • trealira 7 days ago

            > 1. Plumber is a skilled trade, but the "skilled" parts will eventually be replaced with 'smart' tools. You'll still need to hire a minimum wage person to actually go into each unique home and find the plumbing, but the tools will do all the work and will not require an expensive tradesman's skills to work.

            I'm not a plumber, but my background knowledge was that pipes can be really diverse and it could take different tools and strategies to fix the same problem for different pipes, right? My thought was that "robotic plumber" would be impossible for the same reasons it's hard to make a robot that can make a sandwich in any type of house. But even with a human worker that uses advanced robotic tools, I would think some amount of baseline knowledge of pipes would always be necessary for the reasons I outlined.

            > 2. Waiter serving food, already being replaced with kiosks, and quite a bit of the "back of the house" cooking areas are already automated. It will only take a slow cultural shift towards ordering food through technology-at-the-table, and robots wheeling your food out to you. We've already accepted kiosks in fast food and self-checkout in grocery stores. Waiters are going bye-bye.

            That's true. I forgot about fast-food kiosks. And the other person showed me a link to some robotic waiters, which I didn't know about. Seems kind of depressing, but you're right.

            > 3. Shelf restocking, very easy to imagine automating this with robotics. Picking a product and packing it into a destination will be solved very soon, and there are probably hundreds of companies working on the problem.

            The way I imagine it, to automate it you'd have to have some sort of 3D design software to choose where all the items would go, customize it for those special display stands for certain products, and then choose where in the backroom it should move the products to - and all that doesn't seem to save much labor over just doing it yourself, except the physical labor component. Maybe I just lack imagination.

        • DrillShopper 7 days ago

          > a waiter serving food at a restaurant

          I have already eaten at three restaurants that have replaced the vast majority of their service staff with robots, and they're fine at that. Do I think they're better than a human? No, personally, but they're "good enough".

        • hnthrow90348765 7 days ago

          >or a waiter serving food at a restaurant

          I've seen this already at a pizza place. Order from a QR code menu and a robot shows up 20-25 minutes later at your table with your pizza. Wait staff still watched the thing go around.

      • JKCalhoun 7 days ago

        Yeah, it's as though "middle class" was a brief miracle of our age. Serfs and nobility is the more probable human condition.

        Hey, is there a good board game in there somewhere? Serfs and Nobles™

      • kevin_thibedeau 7 days ago

        ML models don't make fully informed decisions and will not until AGI is created. They can make biased guesses at best and have no means of self-directed inquiry to integrate new information with an understanding of its meaning. People employed in a decision making capacity are safe, whether that's managing people or building a bridge from a collection of parts and construction equipment.

        • JadeNB 6 days ago

          > People employed in a decision making capacity are safe, whether that's managing people or building a bridge from a collection of parts and construction equipment.

          Surely the modern history of decision making has been to move as much of it as possible away from humans and to algorithms, even "dumb" ones?

        • whattheheckheck 7 days ago

          Has anyone made a fully informed decision?

          • madaxe_again 7 days ago

            Look, human cognition is obviously better than machine cognition, and nobody has ever made a poor argument or decision.

            End of conversation.

    • spamizbad 7 days ago

      > All the people employed by the government and blue collar workers? All the entrepreneurs, gig workers, black market workers, etc?

      I can tell you for many of those professions their customers are the same white collar workers. The blue collar economy isn't plumbers simply fixing the toilets of the HVAC guy, while the HVAC guy cools the home of the electrician, while...

      • Jensson 7 days ago

        > The blue collar economy isn't plumbers simply fixing the toilets of the HVAC guy, while the HVAC guy cools the home of the electrician, while...

        That is exactly what the blue collar economy used to be though: people making and fixing stuff for each other. White collar jobs are a new thing.

    • munksbeer 7 days ago

      >It's also easy to imagine a world in which you sell less stuff but your margins increase, and overall you're better off, even if everybody else has less widgets.

      History seems to show this doesn't happen. The trend is not linear, but the trend is that we live better lives each century than the previous century, as our technology increases.

      Maybe it will be different this time though.

      • carlosjobim 7 days ago

        I think that's mostly myth, and a very very deeply ingrained myth. That's why probably hundreds of people already feel the rage boiling up inside of them right now after reading my first sentence.

        But it is myth. It has always been in the interest of the rulers and the old to try to imprint on the serfs and on the young how much better they have it.

        Many of us, maybe even most of us, would be able to have fulfilling lives in a different age. Of course, it depends on what you value in life. But the proof is in the pudding, humanity is rapidly being extinguished in industrial society right now all over the world.

      • ryandrake 7 days ago

        "Technology increases" have not made my life better than my boomer parents' and they will probably not make the next generation's lives better than ours. Big things like housing costs, education costs, healthcare costs are not being driven down by technology, quite the opposite.

        Yes, the lives of "people selling stuff" will likely get better and better in the future, through technology, but the wellbeing of normal people seems to have peaked at around the year 2000 or so.

    • neutronicus 7 days ago

      There are also blue- and pink-collar industries that we all tacitly agree are crazy understaffed right now because of brutal work conditions and low pay (health care, child care, K-12, elder care), with low quality-of-service a concern across the board, and with many job functions that seem very difficult to replace with AI (assuming liability for preventing children and elderly adults from physically injuring themselves and others).

      If you, a CEO, eliminate a bunch of white-collar workers, presumably you drive your former employees into all these jobs they weren't willing to do before, and hey, you make more profits, your kids and aging parents are better-taken-care-of.

      Seems like winning in the fundamental game of society - maneuvering everyone else into being your domestic servants.

      • const_cast 7 days ago

        Right, but the elephant in the room is that despite those industries being constantly understaffed and labor being in extreme demand, they're underpaid. It seems nobody gives a flying fuck about the free market when it comes to the labor market, which is arguably the most important market.

        So, flooding those industries with more warm bodies probably won't help anything. I imagine it would make the already fucked labor relations even more fucked.

        • neutronicus 7 days ago

          It would be bad for compensation in the field(s) but the actual working conditions might improve, just by dint of having enough people to do all the work expected.

    • JKCalhoun 7 days ago

      > All the people employed by the government and blue collar workers?

      You forgot the born-wealthy.

      I feel increasingly like a rube for not having made my little entrepreneurial side-gigs focused strictly on the ultra-wealthy. I used to sell tube amplifier kits, for example, so you and I could have a really high-end audio experience with a very modest outlay of cash (maybe $300). Instead I should have sold the same amps but completed for $10K. (There is no upper bound for audio equipment though — I guess we all know.)

      • ryandrake 7 days ago

        This is the real answer. Eventually, when 95% of us have no jobs because AI and robotics are doing everything, then the rich will just buy and sell from each other. The other 7 billion people are not economically relevant and will just barely participate in the economy. It'll be like the movie Elysium.

        I briefly did a startup that was kind of a side-project of a guy whose main business was building yachts. Why was he OK with a market that just consisted of rich people? "Because rich people have the money!"

        • hnthrow90348765 7 days ago

          >It'll be like the movie Elysium.

          The rich were able to insulate themselves in space which is much harder to get to than some place on Earth. If the rich want to turtle up on some island because that's the only place they're safe, that's probably a better outcome for us all. They lose a lot of ability to influence because they simply can't be somewhere in person.

          It also relies heavily on a security force (or military) being complicit, but they have to give those people a better life than average to make it worth it. Even those dumb MAGA idiots won't settle for moldy bread and leaky roofs. That requires more and more resources, capital, and land to sustain and grow it, which then takes more security to secure it. "Some rich dude controlling everything" has an exponential curve of security requirements and resources. This even comes down to how much land they need to be able to farm and feed their security guys.

          All this assuming your personal detail and larger security force actually likes you enough, because if society has broken down to this point, they can just kill the boss and take over.

          • skydhash 6 days ago

            And this is assuming that they won’t destroy each other. Any society has conflicts, and I fail to see a bunch of ultra-rich not having any.

        • bluefirebrand 7 days ago

          > This is the real answer. Eventually, when 95% of us have no jobs because AI and robotics are doing everything, then the rich will just buy and sell from each other

          My prediction is that the poor will reinvent the guillotine

  • FeteCommuniste 7 days ago

    I guess the idea is that the people left working will be made so productive and wealthy thanks to the miracle of AI that they can more than make up the difference with extravagant consumption.

    • isoprophlex 7 days ago

      I too plan to buy 100.000 liters of yogurt each day once AI has transported me into the socioeconomic strata of the 0.1%

      • FeteCommuniste 7 days ago

        My many robots will be busy building glorious mansions out of yogurt cups.

        • Terr_ 7 days ago

          Or, as per a Love, Death, and Robots short film, the new superintelligence will be inextricable from yogurt...

    • darth_avocado 7 days ago

      If you want to see what that looks like, just look at the economy of India. Do we really want that?

      • FeteCommuniste 7 days ago

        Certainly not what I want, but it looks like we could be headed there. And the "industry leaders" seem cool with it, to judge by their politics.

      • munksbeer 7 days ago

        The economy of India is trending in the opposite direction to this narrative. More and more people lifted out of poverty as they modernise.

        • darth_avocado 7 days ago

          The comment wasn’t about the trend, where things are going, or the historical progress the country has made. The comment was about the current state of the economy, and the fact that wealth concentration creates its own unique challenges. If as many people were unemployed and in poverty (or in the low income bracket) in the US or any other developed nation, living conditions would have drastically deteriorated. The consumer market would have shrunk to the point where most people couldn’t afford to buy chips and soda.

          • munksbeer 7 days ago

            The point is, I don't see that happening. The reverse is happening in the world. The percentage of people in poverty globally is decreasing each year.

            I still fail to see why people think we're going to innovate ourselves into global poverty, it makes no sense.

            • darth_avocado 7 days ago

              Poverty is decreasing because innovation is creating more jobs. Everything hinges on the fact that people can earn a living and spend their money to generate more jobs. If AI replaces those jobs you’re going the other way.

              • const_cast 7 days ago

                Right, every economic system we've thought up relies on the assumption that everyone works. Or, close to everyone. Capitalism is just as much about consumption as it is production.

                • SpicyLemonZest 7 days ago

                  Close to everyone doesn't work today. The labor force participation rate is only about 62%.

                  • const_cast 6 days ago

                    Labor force participation rate has increased pretty drastically since 1950. I'd imagine due to better medicine and treatments that allow people to work when they otherwise wouldn't.

                    But, 62% is very high. Keep in mind that number takes into account not only the elderly and disabled, but also children.

                    Pretty much everyone who can work is working. We don't want children to be working, that's bad. We should all be on the same page about that.

      • JKCalhoun 7 days ago

        I'd been thinking modern day Russia, but I admit to being ignorant of a lot of countries outside the U.S.

    • al_borland 7 days ago

      A single rich person can only order so much DoorDash. Scaling a customer base needs to be done horizontally.

  • tim333 6 days ago

    It's like all the farmers' soil-shoveling jobs were stolen by tractors. People moved on to more interesting things.

    • lowbloodsugar 6 days ago

      In all previous such revolutions, humans were freed to do more productive work while the cost of goods came down. But that doesn’t mean the same is true this time. Now the revolution does not make physical tasks easier (like ploughing or spinning thread) but intellectual labor. This time, there are no jobs to go to, since those jobs are also done by AI.

      • tim333 6 days ago

        There are still jobs that involve interacting with other humans.

        • lowbloodsugar 21 hours ago

          So, prostitution? Bartender? There are still jobs now that require human interaction, but unless interacting with another human is the job itself, then, yes, those jobs are going. McDonald's already has kiosks for ordering. Yo Sushi has beer taps at your table. Young men are walking around having relationships with cute anime girls who listen and validate them (and how are they ever going to deal with an actual human female who has her own needs??). Others on HN are saying AI isn't good enough for this or that, but have you ever tried dealing with a real person in a call center, or a minimum wage employee at the job center, or a sales person at a cheap cosmetics counter? So sure, there are jobs at Chanel, or Audi, or Michelin starred restaurants, for the rich who can not only afford such luxury but enjoy lording it over the rest of us, but for the rest of us there's Johnny Cab if we even have any way of paying for a ride in one.

  • johnbenoe 7 days ago

    You ever thought there’s more to life than work lol. Maybe humans can approach a new standard of living…

    • darth_avocado 7 days ago

      I’m yet to be convinced that if the majority of humans are out of work, the government will be able to take care of them and allow them to “pursue their calling”. The Hunger Games is a more believable outcome to me.

      • johnbenoe 7 days ago

        [flagged]

        • JKCalhoun 7 days ago

          To the degree we feel capable, I suspect many of us are doing "what we can".

          The worse it gets, of course, the more each of us will feel capable of.

    • JKCalhoun 7 days ago

      If someone is going to suggest UBI, I wish they could explain to me how Reservations have failed so hard in the U.S. I think that would be a cautionary tale.

      • duderific 7 days ago

        Decades and decades of mistreatment are not going to be remedied by some modest handouts. That doesn't mean that UBI as a whole could never work.

        • 9x39 7 days ago

          Shouldn't we be able to find at least one pilot or prototype with a lasting success story to build off of before concluding we need to do it on a huge scale?

    • codr7 7 days ago

      Excellent choice of words there: new standard.

      I'm sure we are, but it doesn't look like an improvement for most people.

      • johnbenoe 7 days ago

        Not yet at least, but there’s no stopping this kind of efficiency jump. Anyone who thinks otherwise is in denial.

        • codr7 3 hours ago

          There are also no indications that it's coming. Anyone who assumes so is delusional.

        • myko 7 days ago

          Maybe, but aren't LLM companies burning cash? The efficiency gains I see from LLMs typically come from agents which perform circular prompts on themselves until they reach some desired outcome (or give up until a human can prod them along).

          It seems like we'll need to generate a lot more power to support these efficiency gains at scale, and unless that is coming from renewables (and even if it is) that cost may outweigh the gains for a long time.
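
          Roughly, the kind of self-prompting loop I mean looks like this (a minimal sketch; call_llm and run_tool are hypothetical stand-ins, not any particular framework's API):

              # Minimal sketch of a self-prompting agent loop.
              # call_llm() and run_tool() are hypothetical placeholders.
              def agent_loop(goal, max_steps=10):
                  history = [f"Goal: {goal}"]
                  for _ in range(max_steps):
                      # Ask the model for the next step, given everything so far.
                      action = call_llm("\n".join(history) + "\nNext step, or DONE if finished?")
                      if action.strip() == "DONE":
                          return history  # reached the desired outcome
                      # Execute the proposed step and feed the result back into the prompt.
                      result = run_tool(action)
                      history.append(f"Step: {action}\nResult: {result}")
                  return history  # gave up; a human has to prod it along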

          • johnbenoe 7 days ago

            They’re burning cash at a high rate because of the grand potential, and they are of course keeping some things behind closed doors.

            I also respect the operative analysis, but the strategic, long-term thinking is that this will come, and it will only speed up everything else.

            • codr7 7 days ago

              The grand potential of short sighted profits with no concern for society nor other humans, yes.

        • codr7 7 days ago

          I would say anyone who sees that happening is in denial, because all the proof out there points in the opposite direction.

    • rfrey 7 days ago

      The most powerful nation on earth isn't even willing to extend basic health care to the masses, nevermind freeing them to pursue a higher calling than enriching billionaires.

  • leeroihe 7 days ago

    They want an omnipresent, lobotomized and defeated underclass who only exists to "respond" to the AI to continue to improve it. This is basically what Alexandr Wang from Scale AI explained at a recent talk, which was frankly terrifying.

    Your UBI will be controlled by the government, you will have even less agency than you currently have and a hyper elite will control the thinking machines. But don't worry, the elite and the government are looking out for your best interest!

    • pdfernhout 7 days ago

      We already have that "defeated underclass" courtesy of a century of mainstream schooling (according to NYS Teacher of the Year John Taylor Gatto): "The Underground History of American Education -- A conspiracy against ourselves" https://www.lewrockwell.com/2010/10/john-taylor-gatto/the-cu... "As soon as you break free of the orbit of received wisdom you have little trouble figuring out why, in the nature of things, government schools and those private schools which imitate the government model have to make most children dumb, allowing only a few to escape the trap. The problem stems from the structure of our economy and social organization. When you start with such pyramid-shaped givens and then ask yourself what kind of schooling they would require to maintain themselves, any mystery dissipates — these things are inhuman conspiracies all right, but not conspiracies of people against people, although circumstances make them appear so. School is a conflict pitting the needs of social machinery against the needs of the human spirit. It is a war of mechanism against flesh and blood, self-maintaining social mechanisms that only require human architects to get launched. I’ll bring this down to earth. Try to see that an intricately subordinated industrial/commercial system has only limited use for hundreds of millions of self-reliant, resourceful readers and critical thinkers. In an egalitarian, entrepreneurially based economy of confederated families like the one the Amish have or the Mondragon folk in the Basque region of Spain, any number of self-reliant people can be accommodated usefully, but not in a concentrated command-type economy like our own. Where on earth would they fit? In a great fanfare of moral fervor some years back, the Ford Motor Company opened the world’s most productive auto engine plant in Chihuahua, Mexico. It insisted on hiring employees with 50 percent more school training than the Mexican norm of six years, but as time passed Ford removed its requirements and began to hire school dropouts, training them quite well in four to twelve weeks. The hype that education is essential to robot-like work was quietly abandoned. Our economy has no adequate outlet of expression for its artists, dancers, poets, painters, farmers, filmmakers, wildcat business people, handcraft workers, whiskey makers, intellectuals, or a thousand other useful human enterprises — no outlet except corporate work or fringe slots on the periphery of things. Unless you do "creative" work the company way, you run afoul of a host of laws and regulations put on the books to control the dangerous products of imagination which can never be safely tolerated by a centralized command system...."

      In 2010, I put together a list of alternatives here to address the rise of AI and Robotics and its effect on jobs: https://pdfernhout.net/beyond-a-jobless-recovery-knol.html "This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."

  • keybored 7 days ago

    We have consumer capitalism now. Before we didn’t. There’s no reason it can’t be replaced.

    Sure there can be rich people who are radical enough to push for another phase of capitalism.

    That’s a kind of a capitalism which is worse for workers and consumers. With even more power in the hands of capitalists.

  • bravesoul2 6 days ago

    How to move to post-capitalism is the question. We need something money-like to motivate but something different.

  • carlosjobim 7 days ago

    That's a very pessimistic view. People can borrow money against their property, then later they can borrow money against their diploma and professional certificates (and nobody should be allowed to work without being certified, that's dangerous). Then later I think it's time for banks to start offering consumers the reproductive right of mortgaging their children, either born or unborn.

  • roenxi 7 days ago

    You're being confused by the numbers. We aren't trying to maximise consumer spending, the point is to maximise living standards. If the market equilibrium price of all goods was $0 consumer spending would be $0 and living standards would be off the charts. It'd be a great outcome.

    It just happens that up to this point there have been things that couldn't be done by capital. Now we're entering a world where there isn't such a thing and it is unclear what that implies for the job market. But people not having jobs is hardly a bad thing as long as it isn't forced by stupid policy, ideally nobody has to work.

    • amanaplanacanal 7 days ago

      In theory. In reality, how are the benefits of all this efficiency going to be distributed to the people who aren't working? I sure don't see any calls for higher taxes and more wealth redistribution.

      • ikrenji 7 days ago

        Let's face it ~ almost all work will be automated in the next 50 years. Either capitalism dies or humanity dies

    • alluro2 7 days ago

      Given the current mechanics evident in society - declining education and healthcare, rising cost of living, homelessness, and exploding economic inequality - who is the "we" trying to maximise living standards, and what movement do you see leading towards such an outcome?

CSMastermind 7 days ago

Huge amounts of white collar jobs have been automated since the advent of computers. If you look at the work performed by office workers in the 1960s and compared it to what people today do it'd be almost unrecognizable.

They spent huge amounts of time on things that software either does automatically or makes 1,000x faster. But by and large that actually created more white collar jobs because those capabilities meant more was getting done which meant new tasks needed to be performed.

  • janalsncm 7 days ago

    I don’t like this argument because 1) it doesn’t address the social consequences of rapid onset and large scale unemployment and 2) there is no law of nature that a job lost here creates a new job there.

    On the first point, unemployment during the Great Depression peaked at “only” about 25%. And those people were eventually able to find other jobs. Here, we are talking about permanent unemployment for even larger numbers of people.

    The Luddites were right. Machines did take their jobs. Those individuals who invested significantly in their craft were permanently disadvantaged. And those who fought against it were executed.

    And on point 2, to be precise, a lack of jobs doesn’t mean a lack of problems. There are a ton of things society needs to have accomplished, and in a perfect world the guy who was automated out of packing Amazon boxes could open a daycare for low income parents. We just don’t have economic models to enable most of those things, and that’s only going to get worse.

    • ccorcos 7 days ago

      What makes you so concerned about rapid onset when we haven’t seen any significant change in the (USA) unemployment rate?

      And there are some laws of nature that are relevant such as supply-demand economics. Technology often makes things cheaper which unlocks more demand. For example, I’m sure many small businesses would love to build custom software to help them operate but it’s too expensive.

      • DenisM 6 days ago

        It’s an interesting argument, thanks.

        A good analogy would be the web development transition from C to Java to PHP to WordPress. I feel like it did make website creation for small businesses more accessible. OTOH a parallel trend was also mass-scale production of industry-specific platforms, such as Yahoo Shopping.

        It’s not clear to me which trend won in the end.

        • ccorcos 6 days ago

          It’s possible that both are true. “Why” questions tend to be mathematically overdetermined. There are many correct explanations (equations) and fewer variables than equations.

    • ryukoposting 7 days ago

      I'll preface this by saying I agree with most of what you said.

      It'll be a slow burn, though. The projection of rapid, sustained large-scale unemployment assumes that the technology rapidly ascends to replace a large portion of the population at once. AI is not currently on a path to replacing a generalized workforce. Call center agents, maybe.

      Second, simply "being better at $THING" doesn't mean a technology will be adopted, let alone quickly. If that were the case, we'd all have Dvorak keyboards and commuter rail would be ubiquitous.

      Third, the mass unemployment situation requires economic conditions where not leveraging a presumably exploitable underclass of unemployed persons is somehow the most profitable choice for the captains of industry. They are exploitable because this is not a welfare state, and our economic safety net is tissue-paper thin. We can, therefore, assume their labor can be had at far less than its real worth, and thus someone will find a way to turn a profit off it. Possibly the Silicon Valley douchebags who caused the problem in the first place.

      • t-writescode 7 days ago

        > > it doesn’t address the social consequences of rapid onset and large scale unemployment

        > It'll be a slow burn, though.

        Have you been watching the current developer market?

        It's really, really rough out here for unemployed software developers.

  • PeterHolzwarth 7 days ago

    The classic example is the 50's/60's photograph of an entire floor of a tall office building replaced by a single spreadsheet. This passed without comment.

  • anthomtb 7 days ago

    > Huge amounts of white collar jobs have been automated since the advent of computers

    One of which was the occupation of being a computer!

  • lambdasquirrel 7 days ago

    Anecdotal, but AI was what enabled me to learn French, when I was doing that. Before LLMs, I would've had to pay a lot more money to get the class time I'd need, but the availability of Google Translate and DeepL meant that some meaningful, casual learning was within reach. I could reasonably study, try to figure things out, and have questions for the teachers the two or three times a week I had lessons.

    Nowadays I'm learning my parents' tongue (Cantonese) and Mandarin. It's just comical how badly the LLMs do sometimes. I swear they roll a natural 1 on a d20 and then just randomly drop a phrase. Or at least that's my head canon. They're just playing DnD on the side.

qgin 7 days ago

I often see people say “AI can’t do ALL of my job, so that means my job is safe.”

But what this means at scale, over time, is that if AI can do 80% of your job, AI will do 80% of your job. The remaining 20% human-work part will be consolidated and become the full time job of 20% of the original headcount while the remaining 80% of the people get fired.

AI does not need to do 100% of any job (as that job is defined today) to still result in large scale labor reconfigurations. Jobs will be redefined and generally shrunk down to what still legitimately needs human work to get it done.
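
To make the arithmetic concrete (purely illustrative numbers, not a forecast):

    # Back-of-the-envelope version of the consolidation argument above.
    original_headcount = 100
    ai_share_of_each_job = 0.8   # assume AI ends up doing 80% of each role's tasks

    human_work_left = round(original_headcount * (1 - ai_share_of_each_job))
    print(human_work_left)                       # 20 full-time equivalents of human work remain
    print(original_headcount - human_work_left)  # 80 of the original 100 roles get consolidated away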

As an employee, any efficiency gains you get from AI belong to the company, not you.

  • sram1337 7 days ago

    ...or your job goes from commanding a $200k/yr salary to $60k/yr. Hopefully that's enough to pay your mortgage.

    • noisy_boy 6 days ago

      I am more worried about the resultant breakdown of the social order due to the social pressures and hopelessness, and its ramifications. Which next-bonus-cycle CEOs don't give a shit about. UBI is just talk (and few are even talking about it), and even that has issues.

      • qgin 5 days ago

        Currently the United States is in the process of making Medicaid and SNAP / food stamps contingent on being employed.

        We’re further from UBI than we’ve ever been.

deadbabe 7 days ago

Something I’ve come to realize in the software industry is: if you have more smart engineers than the competition, you win.

If you don’t snatch up the smartest engineers before your competition does: you lose.

Therefore at a certain level of company, hiring is entirely dictated by what the competition is doing. If everyone is suddenly hiring, you better start doing it too. If no one is, you can relax, but you could also pull ahead if you decide to hire rapidly, but this will tip off competitors and they too will begin hiring.

Whether or not you have any use for those engineers is irrelevant. So AI will have little impact on hiring trends in this market. The downturn we’ve seen in the past few years is mostly driven by the interest rate environment, not because AI is suddenly replacing engineers. An engineer using AI gives more advantage than removing an engineer, and hiring an engineer who will use AI is more advantageous than not hiring one at all.

AI is just the new excuse for firing or not hiring people; previously it was RTO, but that hype cycle has been squeezed for all it can be.

spcebar 7 days ago

Something is nagging me about the AI-human replacement conversation that I would love insight from people who know more about startup money than me. It seems like the AI revolution hit as interest rates went insane, and at the same time the AI that could write code was becoming available, the free VC money dried up, or at least changed. I feel like that's not usually a part of the conversation and I'm wondering if we would be having the same conversation if money for startups was thrown around (and more jobs were being created for SWEs) the way it was when interest rates were zero. I know next to nothing about this and would love to hear informed opinions.

  • sfRattan 7 days ago

    > It seems like the AI revolution hit as interest rates went insane...

    > ...I'm wondering if we would be having the same conversation if money for startups was thrown around (and more jobs were being created for SWEs) the way it was when interest rates were zero.

    The end of free money probably has to do with why C-level types are salivating at AI tools as a cheaper potential replacement for some employees, but describing the interest rates returning to nonzero percentages as going insane is really kind of a... wild take?

    The period of interest rates at or near zero was a historical anomaly [1]. And that policy clearly resulted in massive, systemic misallocation of investment at global scale.

    You're describing it as if that was the "normal?"

    [1]: https://www.macrotrends.net/2015/fed-funds-rate-historical-c...

  • swyx 7 days ago

    its not part of the conversation because the influence here is tangential at best (1) and your sense of how much vc money is on the table at any given time is not good (2).

    1a. most seed/A stage investing is acyclical because it is not really about timing for exits, people just always need dry powder

    1b. tech advancement is definitely acyclical - alexnet, transformers, and gpt were all just done by very small teams without a lot of funding. gpt2->3 was funded by microsoft, not vc

    2a. (i have advance knowledge of this bc i've previewed the keynote slides for ai.engineer) free vc money slowed in 2022-2023 but has not at all dried up and in fact reaccelerated in a very dramatic way. up 70% this yr

    2b. "vc" is a tenous term when all biglabs are >>10b valuation and raising from softbank or sovereign wealth. its no longer vc, its about reallocating capital from publics to privates because the only good ai co's are private

    • mjburgess 7 days ago

      I'm not seeing how you're replying to this comment. I'm not sure you've understood their point.

      The point is that there's a correlation between macroeconomic dynamics (ie., the price of credit increasing) and the "rise of AI". In ordinary times, absent AI, the macroeconomic dynamics would fully explain the economic shifts we're seeing.

      So the question is: why do we even need to mention AI in our explanation of recent economic shifts?

      What phenomena, exactly, require positing AI disruption?

      • rglover 6 days ago

        Social media. Especially in SV, the embarrassment of failing publicly after having been given so much money is far too painful psychologically.

        Spinning that to say you're a "visionary" for replacing expensive employees with AI (even when it's clear we're not there yet) is risky, but a good enough smoke screen to distract the average bear from poking holes in your financials.

      • munificent 7 days ago

        > What phenomena, exactly, require positing AI disruption?

        AI company CEOs trying to juice their stock valuations?

bachmeier 7 days ago

> AI is starting to get better than humans at almost all intellectual tasks

"Starting" is doing a hell of lot of work in that sentence. I'm starting to become a billionaire and Nobel Prize winner.

Anyway, I agree with Mark Cuban's statement in the article. The most likely scenario is that we become more productive as AI complements humans. Yesterday I made this comment on another HN story:

"Copilot told me it's there to do the "tedious and repetitive" parts so I can focus my energy on the "interesting" parts. That's great. They do the things every programmer hates having to do. I'm more productive in the best possible way.

But ask it to do too much and it'll return error-ridden garbage filled with hallucinations, or just never finish the task. The economic case for further gains has diminished greatly while the cost of those gains rises."

  • SoftTalker 7 days ago

    Is it sustainable? I know when I program, it's sometimes nice to get to something that's easy, even if it's tedious and repetitive. It's like stopping to walk for a bit when you're on a run. You're still moving, but you can catch your breath and recharge.

    • bachmeier 7 days ago

      Oh, I agree, but I'd say that it's probably easier to do those small things than it is to figure out a prompt to have Copilot do them. If it feels good, there's no reason not to do it yourself. I think we'd all agree that it's a joy to be able to tell Copilot to write out the scaffolding at the start of a new project.

  • JKCalhoun 7 days ago

    > I'm starting to become a billionaire

    Suggests you are accumulating money, not losing it. That I think is the point of the original comment: AI is getting better, not worse. (Or humans are getting worse? Ha ha, not ha ha.)

    • bachmeier 7 days ago

      > That I think is the point of the original comment: AI is getting better, not worse.

      Well, in order to meet the standard of the quote "wipe out half of all entry-level office jobs … sometime soon. Maybe in the next couple of years" we need more than just getting better. We need considerably better technology with a better cost structure to wipe out that many jobs. Saying we're starting on that task when the odds are no better than me becoming a billionaire within two years is what we used to call BS.

monero-xmr 7 days ago

https://en.m.wikipedia.org/wiki/List_of_predictions_for_auto...

It wasn’t just Elon. The hype train on self-driving cars was extreme only a few years ago, pre-LLM. Self-driving cars sort of exist, in a few cities. Quibble all you want, but it appears to me that “uber driver” is still a popular, widespread job, let alone truck driver, bus driver, and “car owner” itself.

I really wish the AI CEOs would actually make my life useful. For example, why am I still doing the dishes, laundry, cleaning my house, paying for landscaping, painters, and on and on? In terms of white collar work I’m paying my fucking lawyers more than ever. Why don’t they solve an actual problem

  • Philpax 7 days ago

    Because textual data is plentiful and easy to model, and physical data is not. This will change - there are now several companies working on humanoid robots and the models to power them - but it is a fundamentally different set of problems with different constraints.

  • MangoToupe 7 days ago

    > I really wish the AI CEOs would actually make my life useful.

    TBH, I do think that AI can deliver on the hype of making tools with genuinely novel functionality. I can think of a dozen ideas off the top of my head just for the most-used apps on my phone (photos, music, messages, email, browsing). It's just going to take a few years to identify how to best integrate them into products without just chucking a text prompt at people and generating stuff.

  • GardenLetter27 7 days ago

    Bureaucracy and regulation are the main issue there, though.

    Like in Europe where you're forced to pay a notary to start a business - it's not really even necessary, never mind something that couldn't be automated; it's just part of the establishment propping up bureaucrats.

    Whereas LLMs and generative models in art and coding for example, help to avoid loads of bureaucracy in having to sort out contracts, or even hire someone full-time with payroll, etc.

    • xxs 7 days ago

      >Like in Europe where you're forced to pay a notary to start a business

      Do you have a specific country in mind? The statement is not true for quite a lot of EU member states... and likely untrue for most European countries.

    • jellicle 7 days ago

      We are going to have an ever-increasing supply of stories along the lines of "used a LLM to write a contract; contract gave away the company to the counterparty; now trying to get a court to dissolve the contract".

      Sure you'll have destroyed the company, but at least you'll have avoided bureaucracy.

    • dosinga 7 days ago

      > Like in Europe

      Like in the US, you have a choice of which jurisdiction to start your company in. Not all require a notary.

  • edent 7 days ago

    Buy a dishwasher - they're cheap, work really well, and don't use much energy / water.

    Same as a washing machine / drier. Chuck the clothes in, press a button, done.

    There are Roomba style lawnmowers for your grass cutting.

    I'll grant you painting a house and plumbing a toilet aren't there yet!

    • al_borland 7 days ago

      With the laundry machine and dishwasher, it still requires effort. A human needs to collect the dirty stuff, put it into the machine properly, decide when it should run, load the soap, select a cycle type, start it, monitor the machine to know when it’s done, empty the machine, and put the stuff away properly, thus starting the human side of the process again.

      It’s less work than it used to be, but remove the human who does all that and the dirty dishes and clothes will still pile up. It’s not like we have Rosie, from The Jetsons, handling all those things (yet). How long before the average person has robot servants at home? Until that day, we are effectively project managers for all the machines in our homes.

      • Kirby64 7 days ago

        > A human needs to collect the dirty stuff, put it into the machine properly, decide when it should run, load the soap, select a cycle type, start it, monitor the machine to know when it’s done, empty the machine, and put the stuff away properly, thus starting the human side of the process again.

        The really modern stuff is pretty much as simple as “load, start, unload” - you can buy combo washing machines that wash and dry your clothes, auto dispense detergent, etc. It’s not folding or putting away your clothes, and you still need to maintain it (clean the filter, add detergent occasionally, etc)… but you’re chipping away at what is left for a human to do. Who cares when it’s done? You unload it when you feel like it, just like every dishwasher.

        • al_borland 7 days ago

          Unload timing on the washer/dryer matters.

          Leave things wet in the washer too long and they smell like mold and you have to run it again. Leave them in the dryer too long and they are all wrinkled, and you have to run it again (at least for a little while).

          I grew up watching everyone in my family do this, sometimes multiple times for the same load. That’s why I set timers and remove stuff promptly.

          The dishwasher I agree, and it’s usually best to leave them in there at least for a little while once it’s done. However, not unloading it means dirty dishes start to stack up on the counter or in the sink, so it still creates a problem.

          As far as “load, start, unload” goes: we covered unload, but loading is also a step where some people have issues. They load the dishwasher wrong and things don't get clean, or they start it wrong and are left with spots all over everything. Washing machines can be overloaded, or unbalanced. Washing machines and dryers can also be started wrong; the settings need to match the garments being washed. Some clothes are forgiving, others are not. There is still human error in the mix.

          • Kirby64 7 days ago

            > Leave things wet in the washer too long and they smell like mold and you have to run it again. Leave them in the dryer too long and they are all wrinkled, and you have to run it again (at least for a little while).

            Not a problem for the two-in-one washer/dryers for the mildew issue, and for the wrinkles, most dryers have a cycle to keep running them intermittently after the cycle finishes for hours to mitigate most of the wrinkling issues. You’ve got a much much longer window before wrinkles are an issue with that setup.

        • ghaff 7 days ago

          My understanding is combo machines aren't ideal. But running a load of laundry in a couple separate machines is pretty low effort.

  • coffeefirst 7 days ago

    You know what I want? An LLM that navigates customer support phone trees for me.

    If you want to waste my time with automated nonsense, we should at least level the playing field.

    This is feasible with today’s technology.

    • darthwalsh 6 days ago

      Sounds like Google Duplex, but I guess they never expanded the tech beyond restaurant reservations.

      But on my Pixel now, on some phone trees it shows a UI with numbers and choices, and even predicts ahead for the other choices so you aren't forced to wait. Very handy!

  • Hilift 7 days ago

    Self-driving cars are required to beep when in reverse. In both San Francisco and San Diego, Waymo charging facilities are a nuisance to the homes near them: the neighbors hate the beeping, the facilities operate late hours, and they use things like shop vac cleaners that are loud. Whoever thought of this hates self-driving cars and people. There is no way this can work in mixed urban areas.

  • DrillShopper 7 days ago

    > In terms of white collar work I’m paying my fucking lawyers more than ever. Why don’t they solve an actual problem

    Rule 0 is that you never put your angel investors out of work if you want to keep riding on the gravy train

golol 7 days ago

> To be clear, Amodei didn’t cite any research or evidence for that 50% estimate.

I truly believe these types of papers don't deserve to be valued so much.

  • righthand 7 days ago

    Yes, we live in a world where no “experts” are required to provide any evidence or truth, but media outlets will gladly publish every false word and idea. For the same reason, these CEOs want to wipe out their workforce for more money, not a functioning society.

    • airstrike 7 days ago

      The attention economy is ruining society.

  • madaxe_again 7 days ago

    And the journalist cited what research or evidence, precisely, in his rebuttal?

hansmayer 6 days ago

It's so good to see the non-expert types finally starting to see the whole hype for what it really is -> the long tail of the last 20 years of incremental ML development, and not some revolutionary tech. We did not need this much hype around web 1.0, which was immediately adopted due to being obviously, well, revolutionary.

  • madaxe_again 6 days ago

    Yes, the dot-com bubble never happened.

    • hansmayer 6 days ago

      We're talking technology adoption here, no need to sidetrack (although I would argue the bubble of a technology sector burning $200B to produce $10B in revenue will be so much more painful). Back in the day everyone was running to do something on the web, commercial or not, be it out of enthusiasm, greed, noble aspirations, art or even criminal intent (warez, anyone?). I won't even mention the iPhone moment here. Compare that to five years into the GenAI hype: where is the massive adoption, the thousands of applications, or at least a single important breakthrough? Where is the AGI some of these "leaders" have been promising to arrive in 2025? Chumming along with Tesla's robotaxis?

      • madaxe_again 6 days ago

        Right - adoption was slower than people expected at the time, but it did happen, and a lot of the stuff that got thrown at the wall back then did eventually stick.

        We are absolutely in a hype and market bubble around AI right now - and like the dot com bubble, the growth came not in 2000, but years later. It turns out it takes time for a new technology to percolate through society, and I use the “mom metric” as a bellwether - if your/my mother is using the tech, you’d better believe it has achieved market penetration.

        Until 2011 my mum was absolutely not interested in the web. Now she does most of her shopping on it, and spends her days boomerposting.

        She recently decided to start paying for ChatGPT.

        Sure, it’s a fuzzy thing, but I think the adoption cycle this time around will be faster, as the access to the tech is already in peoples’ hands, and there are plenty of folks who are already finding useful applications for genai.

        Robotaxis, whether they end up dominated by Tesla or Waymo or someone else entirely, are inarguably here, and the adoption rates (the USA is not the only market in the world) are ramping significantly this year.

        I’m not sure I get your point about smartphones? They’re in practically every pocket on the planet, now, they’re not some niche thing.

        • hansmayer 6 days ago
          3 more

          Well, both Web 1.0 and the smartphone were major inflection points in technological development. I argue that GenAI is not. Steve Jobs did not need to shove the App Store down anybody's throat the way Gemini and other crap are being shoved right now. The growth happened organically and exponentially, because everyone instantly saw value in those products. It happened through early adopters and then the late majority. Here we have neither. Where are the thousands, or even hundreds, of applications that end users actually want to use? Your mum, based on your description, fits more into the category of laggards, and that category never determines anything about a product's or technology's impact.

          • noisy_boy 6 days ago
            2 more

            > the way Gemini and other crap are being shoved right now. The growth happened organically and exponentially, because everyone instantly saw value in those products.

            Nobody shoved Gemini at me - ChatGPT sucked and I was curious whether Sonnet was the best out there for coding stuff, and I found Gemini to be excellent. As a side note, it also generates excellent question papers - ChatGPT is dog shit compared to that.

            • hansmayer 5 days ago

              Well, someone did shove it at me, as well as at millions of other Google Workspace users. So you're not using Google Workspace, good for you! Despite turning Gemini off in the administrative settings, I and millions of other users get daily "nudges" to let Gemini summarise our e-mails or do some other superfluous bullshit. And the alternatives are few and far between, at least if I want a shot at my email actually being delivered and not sorted into spam.

fny 7 days ago

I think everyone is missing the bigger picture.

This is not a matter of whether AI will replace humans wholesale. There are three more predominant effects:

1. You'll need fewer humans to do the same task. In other forms of automation, this has led to a decrease in employment.

2. The supply of capable humans increases dramatically.

3. Expertise is no longer a perfect moat.

I’ve seen 2. My sister nearly flunked a coding class in college, but now she’s writing small apps for her IT company.

And for all of you who poo-poo that as unsustainable: I became proficient in Rust in a week, and I picked up Svelte in a day. I've written a few shaders too! The code I've written is pristine. All those conversations about "should I learn X to be employed" are totally moot. Yes, APL would be harder, but it's definitely doable. This is an example of 3.

Overall, this will surely cause wage growth to slow and maybe decrease. In turn, job opportunities will dry up and unemployment might ensue.

For those who still don’t believe, air traffic controllers are a great thought experiment—they’re paid quite nicely. What happens if you build tools so that you can train and employ 30% of the population instead of just 10%?

  • ironman1478 7 days ago

    "I became proficient in Rust in a week". How did you evaluate that if you weren't an expert in Rust to begin with? What does proficient mean to you? Also, are you advocating we get rid of air traffic controllers with AI? How would we train the AI? What model would you use? If you can't solve a safety critical problem from first principles, there is no way an AI should be in the loop. This makes no sense.

    Cynically, I'm happy we have this AI generated code. It's gonna create so much garbage and they'll have to pay good senior engineers more money to clean it all up.

    • ofjcihen 7 days ago

      To your second point, we're seeing a huge comeback of vulnerabilities that were "mostly gone". Things like very basic RCEs and SQLi. This is a great thing for security workers as well.

  • stefan_ 7 days ago

    I don't understand; no one ever needed an LLM to automate air traffic controllers. 1980s tech could do that just fine. The reason they continue to exist is essentially cultural. The industry fell into a local-maximum trap, and now the entire industry and its governance are incapable of lifting themselves out of it, instead coming up with stuff like "standardized phrases for the voice comms that we have inexplicably made crucial to the entire system" while riding cultural cliches like "the pilot must be in control" as they continue manual flight into big rocks.

  • hooverd 7 days ago

    Can you talk about Rust without your friend computer?

    • fny 7 days ago

      Of course not! But I can definitely ship useful tools, and I could learn to talk the talk in a tenth of the time it would otherwise have taken.

      Which is my point, this is not about replacement, it's about reducing the need and increasing supply.

      • kttjoppl 7 days ago

        How are you going to ship a tool you don't understand? What are you going to do when it breaks? How are you going to debug issues in a language you don't understand? How do you know the code the LLM generated is correct?

        LLMs absolutely help me pick up new skills faster, but if you can't have a discussion about Rust and Svelte, no, you didn't learn them. I'm making a lot of progress learning deep learning and ChatGPT has been critical for me to do so. But I still have to read books, research papers, and my framework's documentation. And it's still taking a long time. If I hadn't read the books, I wouldn't know what question to ask or how to evaluate if ChatGPT is completely off base (which happens all the time).

    • MattSayar 7 days ago

      Can you talk about assembly without the internet?

      I fully understand your point and even agree with it to an extent. LLMs are just another layer of abstraction, like C is an abstraction for asm is an abstraction for binary is an abstraction for transistors... we all stand on the shoulders of giants. We write code to accomplish a task, not the other way around.

      • hooverd 7 days ago
        2 more

        I think friction is important to learning and expertise. LLMs are great tools if you view them as compression. I think calculators are a good example; people like to bring those up as a gotcha, but an alarming number of people are now innumerate on basic receipt math or comprehending orders of magnitude.

        • MattSayar 7 days ago

          It is absolutely essential that we still have experts who know the details. LLMs are just the tide that lifts all ships.

      • bluefirebrand 7 days ago

        > Can you talk about assembly without the internet?

        Yes.

        Can you not?

  • BigJono 7 days ago

    > I became proficient in Rust in a week, and I picked up Svelte in a day. I’ve written a few shaders too! The code I’ve written is pristine. All those conversations about “should I learn X to be employed” are totally moot.

    fucking lmao

    • fny 7 days ago

      My point is you learn X and your time to learn and ship Y is dramatically reduced.

      It would have taken me a month to write the GPU code I needed in Blender, and I had everything working in a week.

      And none of this was "vibed": I understand exactly what each line does.

      • whyowhy3484939 7 days ago
        3 more

        You did not and you are not proficient. LLMs and AI in general cater to your insecurities. An actual good human mentor will wipe the floor with your arrogance and you'll be better for it.

        • fny 6 days ago
          2 more

          I think you're under the impression that I am not a software engineer. I already know C, and I've even shipped a very small, popular, security sensitive open source library in C, so I am certainly proficient enough to rewrite Python into Rust for performance purposes without hiring a Rust engineer or write shaders to help debug models in Blender.

          My point is that LLMs make it 10x easier to adapt and transition to new languages, so whatever moat someone had by being a "Rust developer" is now significantly eroded. Anyone with solid systems programming experience could switch from C/C++ to Rust with the help of an LLM and be proficient in a week or two. By proficient, I mean able to ship valuable features. Sure, they'll have to leverage an LLM to help smooth out understanding new features like borrow checking, but they'll surely be able to deliver given how strict the Rust compiler already is.

          I agree fundamentals matter and good mentorship matters! However, good developers will be able to do a lot more diverse tasks which means more supply of talent across every language ecosystem.

          For example, I don't feel compelled at all to hire a Svelte/Vue/React developer specifically anymore: any decent frontend developer can race forward with the help of an LLM.

          • whyowhy3484939 4 days ago

            I realize I came across as harsh and I surely don't want to judge you personally on your skills as A) that's not necessary for my point to make sense and B) uncalled for. I'm sure you are a capable C developer and I'm sorry for being an asshole - but I am one so it's hard for me to pretend otherwise...

            Being able to program in C is something I can also do, but it sure as heck does not make me a proficient Rust developer if I cobble some shit together from an LLM and call it a day.

            I can appreciate how "businesses" think this is valuable, but - and this is often forgotten by salaried developers - as I am not a business owner, I have neither the position nor the intention of doing any "business". I am in a position to do "engineering". Business is for someone else to worry about. Shipping "valuable features" is not something I care about. Shipping working and correct features is something I worry about. Perhaps modern developers should call themselves business analysts or something if they wish to stop engineering.

            LLMs are souped up Stack Overflows and I can't believe my ears if I hear a fellow developer say someone on Stack Overflow ported some of their code to Rust on request and that this feature of SO now makes them a proficient Rust developer because they can vaguely follow the code and can now "ship" valuable features.

            This is like being able to vaguely follow Kant's Critique of Pure Reason, which is something any amateur can do, compared to being able to engage with it academically and rigorously. I deeply worry about the competence of the next generation - and thus my own safety - if they believe superficial understanding is equivalent to deep mastery.

            Edit: interesting side note: I am writing this as a dyed-in-the-wool generalist. Now ain't that something? I don't care if expertise dies off professionally, because I never was an "expert" in anything. I always like using whatever works, and all systems more or less feel equal to me, yet I can also tell that this approach is deeply flawed. In many important ways deep mastery really matters, and I was hoping the rest of society would keep that up, and now they are all becoming generalists who don't know shit, and it worries me...

      • ofjcihen 7 days ago

        It would have taken you a month and you would have been able to understand it 100x more.

        LLMs are great but what they really excel at is raising the rates of Dunning-Kruger in every industry they touch.

    • whyowhy3484939 7 days ago

      Yes, this is definitely missing a /s, I hope.

      Please for the love of god tell me this is a joke.

  • lexandstuff 7 days ago

    Re the last sentence, is the answer that more people will die in aviation disasters?

chris_armstrong 7 days ago

The wildest claims are those of increased labor productivity and economic growth: if they were true, our energy consumption would be increasing wildly beyond our current capacity to add more (dwarfing the increase from AI itself).

Productivity doesn’t increase on its own; economists struggle to separate it from improved processes or more efficient machinery (the “multi factor productivity fudge”). Increased efficiency in production means both more efficient energy use AND being able to use a lot more of it for the same input of labour.

Lu2025 6 days ago

I don't think the white collar layoffs of the last 3-4 years are due to AI. The tech layoffs of 2022 are explained in part by the impact of the 2017 tax reform. Before 2022, research and development expenses could be deducted in full in the year they were incurred (and on top of that, qualifying R&D earns a separate tax credit, which is a dollar-for-dollar reduction of tax liability). Tech companies classify a lot of their work as R&D, so those overpaid Facebook coders are essentially public charges! Somebody up thread said that programmers are disproportionately highly compensated in the US. They are, because it's not companies who pay for them, it's taxpayers, indirectly.

Starting in 2022, the deal became a bit less sweet: R&D expenses now have to be amortized over 5 years. What happened next was collusion in response to the Great Resignation: several large tech companies conspired to have layoffs at the same time as a salary-compression move. The AI statements are mostly a scare tactic to put pressure on employees. For some industries and applications AI is revolutionary, but for coding it's good at autocomplete and not much else.
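
A rough back-of-envelope of why that amortization change stings, with made-up numbers (this assumes a flat 21% corporate rate and simple straight-line amortization; the actual Section 174 rules use a mid-year convention, so the year-one deduction is smaller still):

    # Illustrative only: the spend figure and tax rate are assumptions.
    rnd_spend = 10_000_000           # R&D salaries paid in the tax year
    tax_rate = 0.21                  # flat corporate rate, for illustration

    old_deduction_year1 = rnd_spend      # pre-2022: deduct it all immediately
    new_deduction_year1 = rnd_spend / 5  # 2022 onward: spread over 5 years

    extra_taxable_income = old_deduction_year1 - new_deduction_year1
    extra_tax_due_year1 = extra_taxable_income * tax_rate
    print(f"Extra year-one tax bill: ${extra_tax_due_year1:,.0f}")
    # -> roughly $1.7M more tax due per $10M of R&D payroll in the first year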

infinitebit 7 days ago

So glad to see an MSM outlet take the words of an AI CEO with even a single grain of salt. I've been really disappointed with the way so many publications have just been breathlessly repeating what is essentially a sales pitch.

(ftr i’m not even taking a side re: is AI going to take all the jobs. regardless of what happens the fact remains that the reporting has been absolute sh*t on this. i guess “the singularity is here” gets more clicks than “sales person makes sales pitch”)

  • absurdo 6 days ago

    HN does the same. We don't really have a platform on the internet for good discussions, so we mostly get regurgitated talking points, and a lot of flags/downvotes if it's deemed a serious enough issue (e.g. the pandemic) that taking a contrary stance is strictly forbidden.

elktown 7 days ago

Tech has a big problem of selective critical thinking due to a perpetual gold rush causing people to adopt a stockbroker mentality of not missing out on the next big thing - be it the next subfield like AI, the next cool tech that you can be an early adopter on etc. But yeah, nothing new under the sun; it's corruption.

  • mjburgess 7 days ago

    In many spheres today "thought leadership" is a kind of marketing and sales activity. It is no wonder then that no one can think and no one can lead: either would be fatal to healthy sales.

K0balt 6 days ago

The “bloodbath” will be slow but is quite likely to be significant.

AI / GP robotic labor will not penetrate the market so much in existing companies, which will have huge inertial buffers, but more in new companies that arise in specific segments where the technology proves most useful.

The layoffs will come not as companies replace workers with AI, but as AI companies displace non-AI companies in the market, followed by panicked restructuring and layoffs in those companies as they try to react, probably mostly unsuccessfully.

Existing companies don’t have the luxury of buying market share with investor money, they have to make a profit. A tech darling AI startup powered by unicorn farts and inference can burn through billions of SoftBank money buying market share.

  • xpe 6 days ago

    I find this plausible. Is the data starting to show this?

    • K0balt 6 days ago

      There might be some, but I think it’s still early.

      For the moment, AI is enabling a bunch of stuff that was too expensive or time-consuming to do before (flooding the commons with shiny garbage and pedantic text to drive "engagement").

      Despite the hype, It’s going to be 2-3 years before AI application really fall into stride, and 3-7 before general purpose robotics really get up to speed.

keybored 7 days ago

> If the CEO of a soda company declared that soda-making technology is getting so good it’s going to ruin the global economy, you’d be forgiven for thinking that person is either lying or fully detached from reality.

Exactly. These people are growth-seekers first, domain experts second.

Yet I saw progressive[1] outlets reacting to this as a neutral reporting. So it apparently takes a “legacy media” outlet to wake people out of their AI stupor.

[1] American news outlets that lean social-democratic

econ 7 days ago

There used to be a cookie factory here that had up to 12 people sitting there all day doing nothing. If the machines broke down, it really took all of them to clean up. This pattern will be rediscovered.

jona777than 6 days ago

There will likely be more jobs because of AI. With more “knowledge”, comes more responsibility. Spam folders only exist because of automated emails. That classification process is more work. We may find there are more needs to meet as AI advances, not less.

The fallacy is in the statement “AI will replace jobs.” This shirks responsibility, which immediately diminishes credibility. If jobs are replaced or removed, that’s a choice we as humans have made, for better or worse.

josefritzishere 7 days ago

I don't think we've seen a technology more over-hyped in the history of industrialized society. Cars, which did fully replace horses, were not even hyped this hard.

joshdavham 7 days ago

This type of hype is pretty perplexing to me.

Supposing that you are trying to increase AI adoption among white-collar workers, why try to scare the shit out of them in the process? Or is he more so trying to sell to the C-suite?

  • taormina 7 days ago

    He’s selling exclusively to the C-suite. Why would he care about the white collar workers? He wouldn’t be trying to put them all out of work if he cared.

  • chr15m 7 days ago

    Because it creates FOMO which creates sales.

veunes 6 days ago

If the product can't speak for itself, scare people into believing it will soon

AnimalMuppet 7 days ago

At least temporarily, it can be somewhat self-fulfilling, though. Companies believe it, think they'd better shed white-collar jobs to stay competitive. If enough companies believe that, white-collar jobs go down, even if AI is useless.

Of course, in the medium term, those companies may find out that they needed those people, and have to hire, and then have to re-train the new people, and suffer all the disruption that causes, and the companies that didn't do that will be ahead of the game. (Or, they find out that they really didn't need all those people, even if AI is useless, and the companies that didn't get rid of them are stuck with a higher expense structure. We'll see.)

HenryBemis 6 days ago

> To be clear, Amodei didn’t cite any research or evidence for that 50%

This reminds me of the "Walter White" meme: "I am the documentation." When the CEO of a company that makes LLMs says something like that, "I perk up and listen" (to quote the article).

When a doctor says "the water in my village is bad quality, it gives diarrhea to 30% of the villagers", I don't need a fancy study from some university. The doctor "is the documentation". So when Anthropic/ChatGPT/LLaMA/etc. (mixing companies and products, it's ok though) say "so-and-so", it's because they see the integrations, enhancements, compliments, companies ordering _more_ subscriptions, etc.

In my current company (high volume, low profit margin) they told us "go all in on AI". They see that (e.g. with Notion-like-tools) if you enable the "AI", that thing can save _a lot_ of time on "Confluence-like" tasks. So, paying $20-$30-$40 per person, per month, and that thing improving the productivity/output of an FTE by 20%-30% is a massive win.

So yes, we keep the people we've got (because mass firings mean trouble with the ministry of 'labour', unions, bad marketing, etc.). Headcount will be reduced organically (retirements, people getting new jobs, etc.), combined with minimizing new hires, and boom! Savings!
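
As a rough sanity check on that math (the fully loaded FTE cost below is an assumed figure, not something from the comment above):

    # Back-of-envelope only; every number here is an assumption for illustration.
    fte_cost_per_year = 100_000      # assumed fully loaded cost of one employee
    ai_cost_per_year = 40 * 12       # $40/user/month, top of the quoted range
    productivity_gain = 0.20         # low end of the claimed 20%-30% boost

    value_of_gain = fte_cost_per_year * productivity_gain
    print(f"Extra output: ${value_of_gain:,.0f} vs tool cost: ${ai_cost_per_year:,.0f}")
    # -> about $20,000 of extra output for $480 of tooling, if the claimed boost holds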

  • theshackleford 6 days ago

    > They see that (e.g. with Notion-like-tools) if you enable the "AI", that thing can save _a lot_ of time on "Confluence-like" tasks.

    If only it worked like this in reality. I used the actual Notion AI feature literally this week and watched it fail so hard it was hilarious. It continually told me there was no documentation on X despite an entire page's worth of documentation existing on it, and had to be told this, at which point it apologised and regurgitated it.

    Wow! What a time saver! I feel more productive already!

WaltPurvis 7 days ago

I plugged those two quotes from Amodei into ChatGPT along with this prompt: "Pretend you are highly skeptical about the potential of AI, both in general and in its potential for replacing human workers the way Amodei predicts. Write a quick 800-word takedown of his predictions."

I won't paste in the result here, since everyone here is capable of running this experiment themselves, but trust me when I say ChatGPT produced (in mere seconds, of course) an article every bit as substantive and well-written as the cited article. FWIW.
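
For anyone who would rather script the experiment than paste into the chat UI, a minimal sketch using the OpenAI Python SDK (the model name is a placeholder and the quotes are left as a stub to fill in; any chat-capable model would do):

    # Sketch of the same experiment via the API; AMODEI_QUOTES is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    AMODEI_QUOTES = "...paste the two quotes here..."
    prompt = (
        "Pretend you are highly skeptical about the potential of AI, both in general "
        "and in its potential for replacing human workers the way Amodei predicts. "
        "Write a quick 800-word takedown of his predictions.\n\n" + AMODEI_QUOTES
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)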

Animats 7 days ago

The real bloodbath will come when coordination between multiple AIs, in a company sense, starts working. Computers have much better I/O than humans. Once a corporate organization can be automated, it will be too fast for humans to participate. There will be no place for slow people.

"Move fast and break things" - Zuckerberg

"A good plan violently executed now is better than a perfect plan executed next week." - George S. Patton

  • catigula 7 days ago

    This doesn't even make sense. What corporations do you think will exist in this world?

    You're not going to sell me your SaaS when I can rent AIs to make faster cheaper IP that I actually own to my exact specifications.

    • ofjcihen 7 days ago

      This is always the indicator I look for whether or not someone actually knows what they’re talking about.

      If you can’t extrapolate on your own thesis you can’t be knowledgeable in the field.

      Good example was a guy on here who was convinced every company would be run by one person because of AI. You’d wake up in the morning and decide which products your AI came up with while you slept would be profitable. The obvious next question is “then why are you even involved?”

      • catigula 7 days ago

        I agree, I was actually leaving the question open-ended because I can't necessarily scale it all the way up, it's too complex. Why would they even rent me AIs when they can just be every company? Who is "they"?

        All that needs to be understood is that the narcissistic grandeur delusion that you will singularly be positioned to benefit from sweeping restructuring of how we understand labor must be forcibly divested from some people's brains.

        Only a very select few are positioned to benefit from this and even their benefit is only just mostly guaranteed rather than perfectly guaranteed.

    • sbierwagen 7 days ago

      https://slatestarcodex.com/2016/05/30/ascended-economy/

      Robot run iron mine that sells iron ore to a robot run steel mill that sells steel plate to a robot run heavy truck manufacturer that sells heavy trucks to robot run iron mines, etc etc.

      The material handling of heavy industry is already heavily automated, almost by definition. You just need to take out the last few people.

      • c0redump 6 days ago

        Except that robotics technology is completely different from LLMs? Comments of this flavor are such a tell that the commenter has absolutely no idea what they’re talking about.

1vuio0pswjnm7 7 days ago

"If the CEO of a soda company declared that soda-making technology is getting so good it's going to ruin the global economy, you'd be forgiven for thinking that person is either lying or fully detached from reality.

Yet when tech CEOs do the same thing, people tend to perk up."

Silicon Valley and Redmond make desperate attempts to argue for their own continued relevance.

For Silicon Valley VC, software running on computers cannot be just a tool. It has to cause "disruption". It has to be "eating the world". It has to be a source of "intelligence" that can replace people.

If software and computers are just boring appliances, like yesterday's typewriters, calculators, radios, TVs, etc., then Silicon Valley VC may need to find a new line of work. Expect the endless media hype to continue.

No doubt soda technology is very interesting. But people working at soda companies are not as self-absorbed, detached from reality and overfunded as people working for so-called "tech" companies.

bayareapsycho 6 days ago

My last company (F50, ass engineering culture, pretends to be a tech company) went and fired all of the juniors at a certain level because "AI"

The funny part is, most of those juniors were hired in 2022-2024, and they were better hires because of the harsher market. There were a bunch of "senior engineers" who were borderline useless and joined some time between 2018-2021

I just think it's kind of funny to fire the useful people and keep the more expensive ones around who try to do more "managerial" work and have more family obligations. Smart companies do the opposite

ghm2180 7 days ago

I wonder whether the investors in the early printing press, the steam engine, or the Excel spreadsheet ever imagined the ways their tech would be used: soul-crushing homework (books), rapid and cruel colonization (steam engines and trains), innovative project management (Excel).

The demand for these products probably wasn't where it was expected at the time. Perhaps the answer to AI's biggest effect lies in how it will free up human potential and time.

If AI can do that — and that is a big if — then how and what would you do with that time? Well ofc, more activity, different ways to spend time, implying new kinds of jobs.

  • HarHarVeryFunny 6 days ago

    The trouble with looking at past examples of new tech and automation is that those were all verticals - the displaced worker could move to a different, maybe newly created, work area left intact by the change.

    Where AI will be different (when we get there - LLMs are not AGI) is that it is a general human-replacement technology, meaning there will be no place to run ... They may change the job landscape, but the new jobs (e.g. supervising AIs) will ALSO be done by AI.

    I don't buy this "AGI by 2027" timeline though - LLMs and LLM-based agents are just missing so many basic capabilities compared to a human (e.g. the ability to learn continually and incrementally). It seems that RL, test-time compute (cf. tree search) and agentic applications have given a temporary second wind to LLMs, which were otherwise topping out in terms of capability, but IMO we are already seeing the limits of this too - superhuman math and coding ability (on smaller-scope tasks) do not translate into GENERAL intelligence, since they are not based on a general mechanism - they are based on vertical pre-training in these (atypical in terms of general use case) areas where there is a clean reward signal for RL to work well.

    It seems that this crazy "we're responsibly warning you that we're going to destroy the job market!" spiel is perhaps because these CEOs realize there is a limited window of opportunity here to try to get widespread AI adoption (and/or more investment) before the limitations become more obvious. Maybe they are just looking for an exit, or perhaps they are hoping that AI adoption will be sticky even if it proves to be a lot less capable than what they are promising.

cadamsdotcom 7 days ago

CEOs’ jobs involve hyping their companies. It’s up to us whether we believe.

I’d love a journalist using Claude to debunk Dario: “but don’t believe me, I’m just a journalist - we asked Dario’s own product if he’s lying through his teeth, and here’s what it said:”

  • geraneum 7 days ago

    I’d love journalists that do their job. For example, when someone like this CEO pulls a number out of their ass, maybe push them on how they arrived at it? Why does it displace 50%? Why not 70? Why not 45?

trhway 7 days ago

Read about PLTR in recent days: all these government layoffs (including by DOGE, which is well connected to PLTR), with the money redirected toward the Grand Unification Project using PLTR's Foundry (with AI) platform.

phendrenad2 7 days ago

These are the moments that make millionaires. A majority of people believe that AI is going to thoroughly disrupt society. They've been primed to worry about an "AI apocalypse" by Hollywood for their entire lives. The prevailing counter-narrative is that AI is going to flop. HARD. You can't get more diametrically opposed than that. If you can correctly guess (or logically determine) which is correct, and bet all of your money on it, you can launch yourself into a whole other echelon of life.

I've been a heavy user of AI ever since ChatGPT was released for free. I've been tracking its progress relative to the work done by humans at large. I've concluded that its improvements over the last few years are not across-the-board changes, but benefit specific areas more than others. And unfortunately for AI hype believers, it happens to be areas such as art, which provide a big flashy "look at this!" demonstration of AI's power to people. But... try letting AI come up with a nuanced character for a novel, or design an amplifier circuit, or pick stocks, or do your taxes.

I'm a bit worried about YCombinator. I like Hacker News. I'm a bit worried that YC has so much riding on AI startups. After machine learning, crypto, the post-Covid 19 healthcare bubble, fintech, NFTs, can they take another blow when the music stops?

  • SoftTalker 7 days ago

    > The prevailing counter-narrative is that AI is going to flop. HARD.

    Why is that the counter-narrative? Doesn't it seem more likely that it will continue to gradually improve, perhaps asymptotically, maybe be more specifically trained in the niches where it works well, and just become another tool that humans use?

    Maybe that's a flop compared to the hype?

    • ls612 7 days ago

      At the rate the hyperscalers are increasing capex, anything less than 1990s internet-era growth rates will not be pretty. So far it's been able to sustain those growth rates at the big-boy AI companies (look at OpenAI revenue over time), but will it continue? Are we near the end of major LLM advances or are we near the beginning? There are compelling arguments both ways (running out of data is IMO the most compelling bear argument).

      • j_w 7 days ago

        Re: running out of data

        LLM bulls will say that they are going to generate synthetic data that is better than the real data.

      • barchar 7 days ago
        3 more

        It's been able to sustain 90s-era revenue growth rates, not 90s-era income growth rates, no?

        • ls612 7 days ago
          2 more

          I think all of the dot com boom companies other than the shovel sellers like MS and Cisco were not profitable in the 90s? Not even future behemoths like Amazon.

          • hollerith 7 days ago

            Amazon would've been profitable if it weren't investing so much in growth. Also, eBay, Yahoo!, AOL, Priceline, Cisco Systems, E*TRADE and DoubleClick became profitable in the 90s according to DeepSeek.

  • ramesh31 7 days ago

    >The prevailing counter-narrative is that AI is going to flop. HARD. You can't get more diametrically opposed than that.

    The answer (as always) lies somewhere in the middle. Expert software developers who embrace the tech wholeheartedly while understanding its limitations are now in an absolute golden era of being able to do things they never could have dreamed of before. I have no doubt we will see the first unicorns made of "single pizza"-size teams here shortly.

  • barchar 7 days ago

    It's not really enough to predict the outcome, you need something concrete to actually bet on, and you need to time things right (particularly for the pessimistic bet).

    For any bet that involves purchasing bits of profits, you could be right and still lose money, because the government generally won't allow the entire economy to implode.

    By the time a bubble pops, literally everyone knows they're in a bubble; knowing something is a bubble doesn't make it irrational to jump on the bandwagon.

  • ryandrake 7 days ago

    I wouldn't worry too much about YCombinator. Although individual investors can get richer or poorer, "investors" as a class effectively have unlimited money. Collectively, they will always be looking for a place to put it so it keeps growing even more, so there will always be work for firms like YCombinator to sprinkle all that investment money around.

  • tokioyoyo 7 days ago

    Not the biggest fan of crypto companies, but YC probably did well because of Coinbase.

johnwheeler 7 days ago

I previously worked at a company called Recharge Payments, directly supporting the CTO, Mike—a genuinely great person, and someone learning to program. Mike would assign me small tasks, essentially making me his personal AI assistant. Now, I approach everything I do from his perspective. It’s clear that over time, he’ll increasingly rely on AI, asking employees less frequently. Eventually, it’ll become so efficient to turn to AI that he’ll rarely need to ask employees anything at all.

  • lexandstuff 7 days ago

    I've never had a job like that. My job has always involved helping my company, not just figure out how to build something, but what to build. We typically collaborate on a few ideas and then go away, let them percolate in our brains, before coming back with some new ideas to try. The whole point of the Agile Manifesto is that we don't know what to build in the first place.

    Sometimes my boss has asked me to do something that in the long run would cost the company dearly. Luckily for him, I am happy to push back, because I can understand what we're trying to achieve and help figure out the best option for the company based on my experience, intuition and the data I have available.

    There's so much more to working with a team than: "Here is a very specific task, please execute it exactly as the spec says". We want ideas, we want opinions, we want bursts of creative inspiration, we want pushback, we want people to share their experiences, their intuition, the vibe they get, etc.

    We don't want AI agents that do exactly what we say; we want teams of people with different skill sets who understand the problem and can interpret the task through the lens of their skill set and experience, because a single person doesn't have all the answers.

    I think your ex-boss Mike will very soon find himself trapped in a local minimum of innovation, with only his own understanding of the world and a sycophantic yes-man AI employee that will always do exactly as he says. The fact that AI mostly doesn't work is only part of the problem.

ck2 7 days ago

LLM is going to be used for oppression by every government, not just dictatorships but USA of course

Think of it as an IQ test of how new technology is used

Let me give you an easier example of such a test

Let's say they suddenly develop nearly-free unlimited power, ie. fusion next year

Do you think the world will become more peaceful or much more war?

If you think peaceful, you fail, of course more war, it's all about oppression

It's always about the few controlling the many

The "freedom" you think you feel on a daily basis is an illusion quickly faded

ArtTimeInvestor 7 days ago

Imagine you had a crystal ball that lets you look 10 years into the future, and you asked it about whether we underestimate or overestimate how many jobs AI will replace in the future.

It flickers for a moment, then it either says

"In 2025, mankind vastly underestimated the amount of jobs AI can do in 2035"

or

"In 2025, mankind vastly overestimated the amount of jobs AI can do in 2035"

How would you use that information to invest in the stock market?

  • elcapitan 7 days ago

    If I had a crystal ball that lets me look 10 years into the future and I wanted to invest in the stock market, I would ask it about the stock market.

  • heldrida 2 days ago

    In that hypothetical case, you’d invest in the robot industry and security. The poor would try to eat the rich, right?

  • JKCalhoun 7 days ago

    I'm already assuming the first answer but nonetheless have absolutely no idea how I would use that to make a guess about the stock market.

    So it's index funds (as always) with me anyway.

  • usersouzana 7 days ago

    Heads or tails, then proceed accordingly. You won't waste any more time analyzing it in hopes of getting it right.

  • dehrmann 7 days ago

    Ah, so a straddle.

topherPedersen 7 days ago

I could be wrong, but I think us software developers are going to become even more powerful, in demand, and valuable.

dottjt 6 days ago

I think a huge tradeoff that people haven't mentioned is that in using AI to replace workers, you're introducing a dependency on AI that you previously didn't have. This poses a terrible long-term risk for companies.

globalnode 7 days ago

I really liked this article; it puts into perspective how great claims require great proof, and so far all we've heard are great claims. I love ML tech, but I just don't trust it to replace a human completely. Sure, it can augment roles, but that's not the vision we're being sold.

randomname4325 7 days ago

The only way to know for sure you're safe from replacement is if your job is a necessary part of something generating revenue and you're not easily replaceable. Otherwise you should assume the company won't hesitate to replace you. It's just business.

  • Voloskaya 6 days ago

    > if your job is a necessary part of something generating revenue and you're not easily replaceable.

    The first part of this statement is clearly false. People on the phone at a tech support company are very much necessary to generate revenue, people tending the fields were very much necessary to extract value from them, draftsmen before CAD were absolutely necessary, etc.

    Yet technology replaced them, or is in the process of doing so.

    So then, your statement simplifies to “if you want to be safe for replacement have a job that’s hard to replace” which isn’t very useful anymore.

  • snackernews 7 days ago

    Anyone who thinks an executive considers them necessary or irreplaceable in the current environment is fooling themselves.

  • Tokkemon 7 days ago

    Yeah I thought that too. Then they laid me off anyway.

ggm 7 days ago

Without well paid middle classes, who is buying all the fancy goods and services?

Money is just rationing. If you devalue the economy implicitly you accept that, and the consequences for society at large.

Lenin's dictum comes to mind: "A capitalist will sell you the rope you hang him with."

  • Hilift 7 days ago

    > Without well paid middle classes, who is buying all the fancy goods and services?

    People charging on their credit cards. Consumers are adding $2 billion in new debt every day.

    "Total household debt increased by $167 billion to reach $18.20 trillion in the first quarter"

    https://www.newyorkfed.org/microeconomics/hhdc

  • ramesh31 7 days ago

    >Without well paid middle classes, who is buying all the fancy goods and services?

    Rich people buying even fancier goods and services. You already see this in the auto industry. Why build a great $20,000 car for the masses when you can make the same revenue selling $80,000 cars to rich people (and at higher margins)? This doesn't work of course when you have a reasonably egalitarian society with reasonable wealth inequality. But the capitalists have figured out how to make 75% of us into willing slaves for the rest. A bonus of this is that a good portion of that 75% can be convinced to go into lifelong debt to "afford" those things they wish they could actually buy, further entrenching the servitude.

indigoabstract 5 days ago

So if I understand correctly, it's basically a choice between:

1. cure cancer

2. fix the economy

3. keep everybody happily employed.

And he's saying we can only pick two, or maybe just one. Except the last one isn't really an option.

  • boshalfoshal 4 days ago

    And as it stands, AI is nowhere close to (1) and (2), but is pretty close to making all of (3) redundant.

    This could be because most work is actually frivolous (very possible), but it's also easy for them to sell, since ostensibly (1) and (2) actually require a lot of out-of-distribution reasoning, thinking, and real agentic research (which current models probably aren't capable of).

    (3) just makes the most money now with the current technology. Curing cancer with LLMs, though altruistic, is more unrealistic and has no clear path to immediate profitability because of that.

    These "AGI" companies aren't doing this out of the goodness of their hearts with humanity in mind, its pretty clearly meant to be a "final company standing" type race where everyone at the {winning AI Company} is super rich and powerful in whatever new world paradigm shows up afterwards.

throwaway48476 6 days ago

The white collar bloodbath is the jobs that could have been automated pre-AI but weren't, due to organizational inertia, corporate fiefdom building and an unwillingness to invest.

leeroihe 7 days ago

I used to be a big proponent of AI tools and LLMs, and even built products around them. But to be honest, with all of the big AI CEOs promising that they're going to "replace all white collar jobs", I can't see that they want what's best for the country or the American people. It's legitimately despicable and ghoulish that they just expect everyone to "adapt" to the downstream effects of their knowledge-machine lock-in.

smeeger 7 days ago

If being redundant would lead to mass layoffs, then half of white collar workers would have been laid off decades ago. And white collar people will fiddle with rules and regulations to make their ever more bloated redundancy even more brazen with the addition of AI... and then later, when AI has the ability to replace blue collar workers, it will do so immediately and swiftly while the white collar people get all the money. It's happened a thousand times before and will happen again.

DrillShopper 7 days ago

I look forward to the day where executive overpromises and engineering underdeliveries bring about another AI winter so the useful techniques can continue without the stench of the "AI" association and so the grifters go bankrupt.

  • sevensor 7 days ago

    The implosion of this AI bubble is going to have a stupendous blast radius. It’s never been harder to distinguish AI from “things people do with computers” more generally. The whole industry is implicated, complicit, and likely to suffer when AI winter arrives. Dotcom bust didn’t just hit people who were working for pets.com.

  • pixl97 7 days ago

    Just like the internet was a fad, right?

    • threeseed 7 days ago

      Internet only became a fad once it was already large and had tens of millions of users.

      I remember the pre-Web days of Usenet and BBS and no one thought those were trendy.

      AI is far more akin to crypto.

      • pixl97 6 days ago
        2 more

        Lots of people talk about crypto yet almost no one uses it.

        Pretty much everyone I know uses AI for something.

        • DrillShopper 4 days ago

          That might be a perspective thing. I can think of three people in my life who have used AI at any point, and two of the three used it for diffusion models right when Stable Diffusion was first released.

rjurney 7 days ago

Workers in denial are like lemmings, headed for the cliff... not putting myself above that. A moderate view indicates great disruption before new jobs replace the current round being lost.

givemeethekeys 6 days ago

There is an AI bloodbath that is adding to the supply of labor in all the low-hanging fields that aren't yet being decimated by AI.

stephc_int13 7 days ago

The main culprit behind the hype of the AI revolution is a lack of understanding of its true nature and capabilities. We should know better, Eliza demonstrated decades ago how easily we can be fooled by language, this is different and more useful but we rely so much on language fluency and knowledge retrieval as a proxy for intelligence that we are fooled again.

I am not saying this is a nothing burger, the tech can be applied to many domains and improve productivity, but it does not think, not even a little, and scaling won’t make that magically happen.

Anyone paying attention should understand this fact by now.

There is no intelligence explosion in sight, what we’ll see during the next few years is a gradual and limited increase in automation, not a paradigm change, but the continuation of a process that started with the industrial revolution.

bawana 7 days ago

When are we going to get AI CEOs as a service?

  • 0x5f3759df-i 7 days ago

    I asked ChatGPT to be a CEO and decide if everyone should work in office 5 days a week:

    “ Final Thought (as a CEO):

    I wouldn’t force a full return unless data showed a clear business case. Culture, performance, and employee sentiment would all guide the decision. I’d rather lead with transparency, flexibility, and trust than mandates that could backfire.

    Would you like a sample policy memo I’d send to employees in this scenario?”

    A better, more reasonable CEO than the one I have. So I’m looking forward to AI taking that white collar job especially.

  • crims0n 7 days ago

    You may be onto something… sell strategic decisions by an AI cohort as a service, insure against the inevitable duds, profit.

givemeethekeys 6 days ago

Many people are unable to find jobs because they are too old.

Even older people prefer to hire younger people.

  • throwaway314155 6 days ago

    Okay? What does that have to do with anything?

    • givemeethekeys 6 days ago

      Losing one’s job is only half of the story. Finding another one when you’re getting old in the AI age is more difficult than before AI.

nova22033 6 days ago

Does anyone have any experience with using AI tools on a massive legacy code base?

notyouraibot 7 days ago

The hype around AI replacing software engineers is truly delusional. Yes, they are very good at solving known problems, writing for loops and boilerplate code, but introduce a little bit of complexity and creativity and it all falls apart. There have been countless tasks that I have given to AI, where it simply concluded the task wasn't possible and suggested I use several external libraries to get it done; after a little bit of manual digging, I was able to achieve the same task without any libraries, and I'm not even a seasoned engineer.

osigurdson 7 days ago

The real value is going to be in areas that neither machines nor humans could do previously.

whynotminot 7 days ago

There’s a hype machine for sure.

But the last few paragraphs of the piece kind of give away the game — the author is an AI skeptic judging only the current products rather than taking in the scope of how far they’ve come in such a short time frame. I don’t have much use for this short sighted analysis. It’s just not very intelligent and shows a stubborn lack of imagination.

It reminds me of that quote “it is difficult to get a man to understand something, when his salary depends on his not understanding it.”

People like this have banked their futures on AI not working out.

  • codr7 7 days ago

    The opposite is more true imo.

    It's the AI hype squad that are banking their future on AI magically turning into AGI; because, you know, it surprised us once.

    • whynotminot 7 days ago

      Not really — even if AGI doesn’t work and these models don’t get any better, there’s still enormous value to be mined just from harnessing the existing state of the art.

      Or these guys pivot and go back to building CRUD apps. They’re either at the front of something revolutionary… or not… and they’ll go back to other lucrative big tech jobs.

      • SoftTalker 7 days ago
        7 more

        Is there enormous value? AI is burning cash at an extraordinary rate on the promise that it will be of enormous value. But if it plateaus, then all the servers, GPUs, data centers, power and cooling and other infrastructure will have to be paid for out of revenue. Will customers be willing to pay the actual costs of running this stuff?

        • whynotminot 7 days ago
          6 more

          I don’t know if what they’ve built and are building in the future will justify the level of investment. I’m not an economist or a VC. It’s hard to fathom the huge sums being so casually thrown around.

          All I can tell you is that for what I use AI for now in both my personal and professional life, I would pay a lot of money (way more than I already am) to keep just the current capabilities I already have access to today.

          • codr7 7 days ago
            3 more

            May I ask what exactly AI provides that's worth so much to you?

            Because I wouldn't miss it at all if it disappeared tomorrow, and I'm pretty sure society would be better off without it.

            • whynotminot 6 days ago
              2 more

              Sure! You asked for it, here’s my speech:

              I’m a software engineer so for work I use it daily. It doesn’t “do my job” but it makes my job vastly more enjoyable. Need unit tests? Done. Want a prototype of an idea that you can refine? Here. Shell script? Boom. Somewhat complicated SQL query? Here ya go. Working with some framework you haven’t used before? Just having a conversation with AI about what I’m trying to do is so much better than sorting through often poorly written documentation. It’s like talking to another engineer who just recently worked on that same kind of problem… except for almost any problem you encounter. My productivity is higher. More than that, I find myself much more willing to take on bigger, harder problems because I know there’s powerful resources to answer just about any question I could have. It just makes me enjoy the job more.

              In my personal life, I use it to cut through the noise that in recent years has begun to overwhelm the signal on the internet. Give me a salmon recipe. This used to be the sort of thing you’d put into Google and get great results. Now the first result is some ad-stuffed website that is 90% fluff piece with a recipe hidden at the bottom. Just give me the fricken recipe! AI does that.

              The other day I was trying to figure out whether a designer-made piece of furniture was authentic despite missing tags. Had a back and forth with ChatGPT, sharing photos, describing the build quality, telling it what the store owner had told me. Incredible depth of knowledge about an obscure piece of furniture.

              I also use the image generation all the time. For instance, for the piece of furniture I talked about, I took a picture of my apartment, and the furniture, and asked it to put the furniture into my space, allowing me to visualize it before purchase.

              It’s a frickin super power! I cannot even begin to understand how people are still skeptical about the transformative power of this stuff. It kind of feels like people are standing outside the library of Alexandria, debating whether it’s providing any value, when they haven’t even properly gone inside.

              Yes, there are flaws. I’m sure there’s people reading this about to tell me it made them put glue on their salad or whatever. But what we have is already so deeply useful to me. Could I have done all of this through old fashioned search? Mastered Photoshop and put the furniture into my apartment on my own? Of course! But the immediacy here is the game changer.

              • codr7 3 hours ago

                But the stuff you're so happy to not do is what software development is all about, telling a computer exactly what to do.

                Why not switch jobs if you don't like it?

          • hatefulmoron 7 days ago
            2 more

            I'm not trying to make a point, just curious -- what's stopping you from spending more money on AI? You could be using more API tokens, more Claude Code and whatever else.

            • whynotminot 6 days ago

              I have a ChatGPT subscription, and work has one of those “all the models” kind of subscriptions. So I have access to pretty much most of the mainline models — don’t feel the need to pay more.

              But if the business model collapsed and they had to raise prices, or work cheaped out and stopped paying for our access, then yeah, I’d step up and spend the money to keep it.

      • asadotzler 7 days ago

        They've so far spent about what the world spent to build out almost all of the broadband internet, the fiber, cable, cellular, etc. If AI companies stop now, about 10 years after they got going, does their effort give us trillions of dollars being added to the economy each year from today forward, like we got for every year after the 10 years of internet build out between 1998 and 2008? I'm not seeing it. If they stop now, that's a trillion dollars in the dumper because no one can afford to operate the existing tech without a continual influx of investor cash that may never pay off.

  • bgwalter 7 days ago

    Using the Upton Sinclair quote in this context is a sign of not understanding the quote. The original quote means that you ignore gross injustices of your employer in order to stay employed.

    It was never used in the sense of denigrating potential competitors in order to stay employed.

    > People like this have banked their futures on AI not working out.

    If "AI" succeeds, which is unlikely, what is your recommendation to journalists? Should they learn how to code? Should they become prostitutes for the 1%?

    Perhaps the only option would be to make arrangements with the Mafia like dock workers to protect their jobs. At least it works: Dock workers have self confidence and do not constantly talk about replacing themselves. /s

    • whynotminot 7 days ago

      I think the quote makes perfect sense in this context, regardless of the prior application.

      As to my recommendation to what they do — I dunno man. I’m a software engineer. I don’t know what I am going to do yet. But I’m sure as shit not burying my head in the sand.

      • bgwalter 7 days ago
        2 more

        Even if you apply the quote in a different sense, which would take away all its pithiness, you are still presupposing that "AI" will turn out to be a success.

        The gross injustices in the original quote were already a fact, which makes the quote so powerful.

        • whynotminot 7 days ago

          AI as is, is already a success, which is why I find it so baffling that people continue to write pieces like this.

          We don’t need AGI for there to be large displacement of human labor. What’s here is already good enough to replace many of us.

franczesko 6 days ago

AI bubble burst will come first.

infinitebit 7 days ago

I am SO thankful to see a news outlet take what tech CEOs say with a grain of salt re: AI. I feel like so many have just been breathlessly repeating anything they say without even an acknowledgement that there might be, you know, some incentive for them to stretch the truth.

(FTR, I'm not even taking a side re: whether AI will take all the jobs. Even if it does, the reporting on this subject by the MSM has been abysmal.)

gcanyon 7 days ago

...everyone here saying "someday AI will <fill in the blank> but not today" while failing to acknowledge that for a lot of things "someday" is 2026, and for an even larger number of things it's 2027, and we can't even predict whether or not in 2028 AI will handle nearly all things...

  • causal 7 days ago

    The problem is that it's hard to pin down any job that's been eliminated by AI even after years of having LLMs. I'm sure it will happen. It just seems like the trajectory of intelligence defies any simple formula.

    • gcanyon 7 days ago

      There's definitely an element of what we saw in the '90s -- software didn't always make people faster; it made the quality of their output better (WYSIWYG page layout, better database tools/validation, spell check in email, etc.).

      But we're going to get to a point where "the quality goes up" means the quality exceeds what I can do in a reasonable time frame, and then what I can do in any time frame...

    • sfblah 6 days ago

      I literally am in the process of firing someone who we no longer need because of efficiencies tied to GenAI. I work at a top-10 tech company. So, there you go. That's one job.

      • causal 5 days ago

        That's really interesting, can you offer any insight on the type of role this efficiency made unnecessary or why firing made more sense than augmenting?

atleastoptimal 7 days ago

Losing jobs is the biggest predictable hazard of AI, but far from the biggest hazard overall.

However, there seems to be a big disconnect on this site and others.

If you believe AGI is possible and that AI can be smarter than humans in all tasks, naturally you can imagine many outcomes far more substantial than job loss.

However, many people don't believe AGI is possible, and thus will never consider those possibilities.

I fear many will deny the probability that AGI could be achieved in the near future, thus leaving themselves and others unprepared for the consequences. There are so many potential bad outcomes that could be avoided merely if more smart people realized the possibility of AGI and ASI, and would thus rationally devote their cognitive abilities to ensuring that the potential emergence of smarter than human intelligences goes well.

Warh00l 5 days ago

pierceday.metalabel.com/aphone

brokegrammer 7 days ago

We don't need AI to wipe out entry-level office jobs. David Graeber wrote about this in Bullshit Jobs. But now that we have AI, it's a good excuse to wipe out those jobs for good, just like Elon did after he acquired Twitter. After that, we can blame AI for the deed.

  • kilroy123 7 days ago

    I've thought a lot about this. I think this is exactly what is happening. I've seen this first hand.

    A lot of the BS jobs are being killed off. Do some non-BS jobs get burned up in the fire along the way? Yes. But it's only the beginning.

  • JanisErdmanis 7 days ago

    The productivity gains in these activities will be countered by the same gains in counter-activities. Everything is going to become more sophisticated, but the bullshit will remain.

paulluuk 7 days ago

Around the time when bitcoin started to get serious public attention, late 2017, I remember feeling super hyped about it, and yet everyone told me that money spent on bitcoin was wasted money. I really believed that bitcoin, or at least cryptocurrency as a whole, would fundamentally change how banking and currencies work. Now, almost 10 years later, I would say that it did not live up to my belief that it would "fundamentally" change currencies and banking. It made some minor changes, sure, but if it weren't for the value of bitcoin, it would still be a nerdy topic about as well known as Perlin noise. I did make quite a lot of money from it, though I sold out way too soon.

As a research engineer in the field of AI, I am again getting this feeling. People keep doubting that AI will have any kind of impact, and I'm absolutely certain that it will. A few years ago people said "AI art is terrible" and "LLMs are just autocomplete" or the famous "AI is just if-else". By now it should be pretty obvious to everyone in the tech community that AI, and LLMs in particular, are extremely useful and already have a huge impact on tech.

Is it going to fulfill all the promises made by billionaire tech CEOs? No, of course not, at least not on the time scale that they're projecting. But they are incredibly useful tools that can enhance the efficiency of almost any job that involves sitting behind a computer. Even just something like Copilot autocomplete, or talking with an LLM about a refactor you're planning, is often incredibly useful. And the amount of "intelligence" that you can get from a model that can actually run on your laptop is also getting much better very quickly.

The way I see it, either the AI hype ends up like cryptocurrency: forever a part of our world, never quite living up to its promises, but I make a lot of money in the meantime. Or the AI hype lives up to its promises, but likely over a much longer period of time, and we'll have to see whether we can live with that. Personally I'm all for a fully automated luxury communism model of government, but I don't see that happening in the "better dead than red" US. It might become reality in Europe though, who knows.

  • jollyllama 7 days ago

    Crypto is a really interesting point, because even the subset of people who have invested in it don't use it on a day to day basis. The entire valuation is based on speculative use cases.

  • kayamon 6 days ago

    Bitcoin was $2,000 each in 2017. Now in 2025 it's $104,000. It's set to keep countering global inflation until 2140.

    It ain't done yet.

  • layer8 7 days ago

    > already have a huge impact on tech

    As a user, I haven’t seen a huge impact yet on the tech I use. I’m curious what the coming years will bring, though.

  • rvz 7 days ago

    > By now it should be pretty obvious to everyone in the tech community that AI, and LLMs in particular, are extremely useful and already have a huge impact on tech.

    Enough to cause the next financial crash, achieving at worst a steady climb to 10% global unemployment over the next decade.

    That is the true definition of AGI.

  • surgical_fire 7 days ago

    Something can be useful and massively overhyped at the same time.

    LLMs are good productivity tools. I've been using them for coding, and they're massively helpful and really speed things up. There are a few asterisks there, though:

    1) It does generate bullshit, and this is an unavoidable part of what LLMs are. The ratio of bullshit seems to come down with reasoning layers above it, but some will always be there.

    2) LLMs, for obvious reasons, tend to be more useful the more mainstream the languages and libraries I am working with are. The more obscure something is, the less useful they get. This may have a chilling effect on technological advancement: new, improved things get used less because LLMs are bad at them for lack of available material, so the new things shrivel and die on the vine without a chance of organic growth.

    3) The economics of it are super unclear. With the massive hype there's a lot of money sloshing around AI, but those models seem obscenely expensive to create and even to run. It is very unclear how things will look when the appetite for losing money on this wanes.

    All that said, AI is multiple breakthroughs away from replacing humans, which does not mean LLMs aren't useful assistants. And an increase in productivity can lead to lower demand for labor, which leads to higher unemployment. Even modest unemployment rates can have grim societal effects.

    The world is always ending anyway.

  • paulluuk 7 days ago

    On a side note, I do worry about the energy consumption of AI. I'll admit that, like the silicon valley tech bros, there is a part of me that hopes that AI will allow researchers to invent a solution to that -- something like fusion or switching to quantum-computing AI models or whatever. But if that doesn't happen, it's probably the biggest problem related to AI. More so even than alignment, perhaps.

theawakened 7 days ago

I've said this before and I'll say it again: the idea that 'AI' will EVER take over any programmer's job is ridiculous. These idiots think they are going to create AGI; it's never going to happen, not with this race of people. There is far too much ignorance in humanity. AI will never be able to be any better than its source, humanity. It's a soon-to-be realization for these billionaire talking heads. Nothing can rise higher than its source. Even if they cover every square foot of land with data centers, it'll never work like they expect it to. The AI bubble will burst so hard the entire world will quake. I give it 5 years max.

jatora 7 days ago

While I agree that the current 'bloodbath' narrative is all hype, I'm honestly confused by a lot of the sentiment I see on here towards AI -- namely the dismissal of continual improvement and the rampant whistling-past-the-graveyard attitude toward what is coming.

It is confusing because many of the dismissals come from programmers, who are unequivocally the prime beneficiaries of genAI capability as it stands.

I work as a marketing engineer at a ~1B company, and the gains I have been able to provide as an individual are absolutely multiplied by genAI.

One theory I have is that maybe it is a failing of prompt ability that is causing the doubt. Prompting, fundamentally, is querying a vector space for a result, and there is a skill to it. There is a gross lack of tooling to assist with this, which I attribute to a lack of awareness of this fact. The vast majority of genAI users don't have any sort of prompt library or methodology to speak of, beyond a set of usual habits that work well for them.

Regardless, the common notion that AI has only marginally improved since GPT-4 is criminally naive. The notion that we have hit a wall has merit, of course, but you cannot ignore the fact that we just got accurate 1M-token context in a SOTA model with Gemini 2.5 Pro. For free. Mere months ago. This is a leap. If you have not experienced it as a leap, then you are using LLMs incorrectly.

You cannot sleep on context. Context (and proper utilization of it) is literally what shores up 90% of the deficiencies I see complained about.

AI forgets libraries and syntax? Load in the current syntax. Deep research it. AI keeps making mistakes? Inform it of those mistakes and keep those stored in your project for use in every prompt.
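
In practice that can be as simple as a small helper that prepends whatever you've stored to every query. A rough sketch of the idea in Python (the project_context/ directory layout and the call_llm function are hypothetical placeholders, not any particular tool's API):

```python
from pathlib import Path

# Hypothetical layout: a project_context/ directory of plain-text notes,
# e.g. the current library syntax you pasted in and a running list of
# mistakes the model has made on this project.
CONTEXT_DIR = Path("project_context")
CONTEXT_DIR.mkdir(exist_ok=True)

def build_prompt(task: str) -> str:
    """Prepend every stored context note to the actual task."""
    sections = [f"## {note.stem}\n{note.read_text()}"
                for note in sorted(CONTEXT_DIR.glob("*.md"))]
    sections.append(f"## Task\n{task}")
    return "\n\n".join(sections)

def log_mistake(description: str) -> None:
    """Record a newly observed mistake so every future prompt includes it."""
    with (CONTEXT_DIR / "known_mistakes.md").open("a") as f:
        f.write(f"- {description}\n")

# call_llm stands in for whatever model/API you actually use:
# answer = call_llm(build_prompt("Refactor the billing module to the new client API"))
```

Nothing fancy, but it's the difference between the model guessing at stale syntax and it working from exactly the context you curated.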

I consistently make 200k+ token queries of code and context and receive highly accurate results.

I build 10-20k loc tools in hours for fun. Are they production ready? No. Do they accomplish highly complex tasks for niche use cases? Yes.

The empowerment of a single developer who is both good at manipulating AI and an experienced dev/engineer is absolutely incredible.

Deep research alone has netted my company tens of millions in pipeline, and I just pretend it's me. Because that's the other part that maybe many aren't realizing - it's right under your nose, constantly.

The efficiency gains in marketing are hilariously large. There are countless ways to avoid 'AI slop', and they all involve, again, leveraging context, good research, and a good eye to steer things.

I post this mostly because I'm sad for all of the developers who have not experienced this. I see it as a failure of effort (based on some variant of emotional bias or arrogance), not a lack of skill or intellect. The writing on the wall is so crystal clear.

  • sfblah 6 days ago

    You're right, of course. Most of this thread is some sort of weird motivated reasoning by people who are terrified of the reality of what lies ahead. I work at a top-10 tech company. We've stopped hiring junior talent, and it's 100% because of AI. I'm something like 2x more productive since using AI. We're now deploying agentic AI systems to further reduce headcount. The actual bloodbath will happen when there's any kind of financial pressure on the company (a recession).

    • Lu2025 6 days ago

      > when there's any kind of financial pressure on the company (a recession)

      When was the last time there was no financial pressure? I've been hearing how hard it is to be a small business owner for as long as I've been an adult, and that's about a quarter century by now.

rayiner 7 days ago

[flagged]

  • jmmcd 7 days ago

    It could certainly replace the author of this article.

  • MangoToupe 7 days ago

    If anything, CNN's people-forward branding is indication that people want a human mediating the news to them.

    That's the most charitable thing I can say, at least.

rule2025 7 days ago

The real "white-collar massacre" is not caused by AI, but by the fact that you are not irreplaceable, or that the value created by hiring you is not higher than the value of using AI. Businesses will not hesitate to use AI; you can't say that companies are ruthless, that's just the pursuit of efficiency. Just as horse-drawn carriages were replaced by cars and coachmen lost their jobs, you can't say it's a problem with cars.

History is always strikingly similar: the AI revolution is the fifth industrial revolution, and it is wise to embrace AI and collaborate with it as soon as possible.

  • HarHarVeryFunny 6 days ago

    There's a popular saying, e.g. used by NVIDIA CEO Jensen Huang, that "AI won't replace you - a human using AI will replace you", which may be temporarily true while AI isn't very capable, but the AI CEOs are claiming AGI will be here in 2 years, and explicitly saying that it will be a "drop-in replacement remote worker". Obviously one of these is wrong - it's either just a tool to be learnt and used, or it is in fact a drop-in replacement for a human.

    One can argue about the timeline and technology (maybe not LLM based), but it does seem that human-level AGI will be here relatively soon - in the next 10 or 20 years, perhaps, if not 2. When this does happen, history is unlikely to be a good predictor of what to expect... AGI may create new jobs as well as destroy old ones, but what's different is that AGI will also be doing those new jobs! AGI isn't automating one industry, or creating a technology like computers that can help automate any industry - AGI is a technology that will replace the need for human workers in any capacity, starting with all jobs that can be conducted without a physical presence.