I think the real white collar bloodbath is that the end of ZIRP was the end of infinite software job postings and the start of layoffs. I think it's easy to point to AI now, but it seems like a canard covering for the huge thing that already happened.
just look at this:
https://fred.stlouisfed.org/graph/?g=1JmOr
In terms of magnitude the effect of this is just enormous and still being felt; postings never recovered to pre-2020 levels, and they may never. (With pre-pandemic job postings indexed to 100, software is at 61.)
Maybe AI is having an effect on IT jobs though, look at the unique inflection near the start of 2025: https://fred.stlouisfed.org/graph/?g=1JmOv
For another point of comparison, construction and nursing job postings are higher than they were pre-pandemic (about 120 and 116 respectively, where pre-pandemic was indexed to 100. Banking jobs still hover around 100.)
I feel like this is almost going to become lost history because the AI hype is so self-insistent. People a decade from now will think Elon slashed Twitter's employee count by 90% because of some AI initiative, and not because he simply thought he could run a lot leaner. We're on year 3-4 of a lot of other companies wondering the same thing. Maybe AI will play into that eventually. But so far companies have needed no such crutch for reducing headcount.
IMO this is dead on. AI is a hell of a scapegoat for companies that want to save face and pretend that their success wasn't because of cheap money being pumped into them. And in a world addicted to status games, that's a gift from the heavens.
ZIRP is an American thing? In that case maybe we could try comparisons with the job markets in other developed Western countries that didn't have this policy. If it was because of ZIRP, then their job markets should show clearly different patterns.
ZIRP was a central banking thing, not just an American phenomenon. At least in the tech industry, the declines we're seeing in job opportunities are a result of capital being more expensive for VCs, meaning fewer investments are made (both in new and existing businesses), meaning there's less cash to hire and expand with. It just felt like the norm because ZIRP ran more or less uninterrupted for 10 years.
You're right that we should see comparisons in other developed countries, but with SV being the epicenter of it all, you'd expect the fallout to at least appear more dramatic in the U.S.
And an overwhelming number of (focusing exclusively on the U.S.) tech "businesses" weren't businesses (i.e., little to no profitability). At best they were failed experiments, and at worst, tax write-offs for VCs.
So, what looked like a booming industry (in the literal, "we have a working, profitable, cash-flowing business here" sense) was actually just companies being flooded with investment cash that they were eager to spend in pursuit of rapid growth. Some found profitability, many did not.
Again, IMO, AI isn't so much the cause as it is the bandage over the wound of unprofitability.
There isn't anything magical about precisely zero percent interest rates; the behavior we see is mostly a smooth extension of slightly higher rates, which is where the EU was.
And of course ZIRP was pioneered in Japan, not the US.
Such an important point. I've seen, and long suspected, the end of ZIRP being a much, much greater influence on white collar work than we give it credit for. AI is going to take all the negative press, but the flow of capital is ultimately what determines how the business works, which determines what software gets built. Conway's law 101. The white collar bloodbath is more of a haircut to shed waste accumulated during the excesses of ZIRP.
AI also happens to be a perfect scapegoat: CEOs who over-hired get to shift the blame to this faceless boogeyman, and (bonus!) new hires are more desperate/willing to accept worse compensation.
ZIRP, and then the final gasp of COVID-bubble overhiring.
At least in my professional circles the number of late 2020-mid 2022 job switchers was immense. Like 10 years of switches condensed into 18-24 months.
Further, lots of anecdotes from talking to people who saw their company/org/team double or triple in size compared to 2019.
Despite some waves of mag7 layoffs, we are still, I think, digesting what was essentially an overhiring bubble.
Is it negative press for AI, or is it convincing some investors that it’s actually causing a tectonic shift in the workforce and economy? It could be positive in some sense. Though ultimately negative, because the outcomes are unlikely to reflect a continuation of the perceived impact or imaginary progress of the technology.
Also section 174’s amortization of software development had a big role.
I agree, the R&D change is what triggered the 2022 tech layoffs. Coders used to be free; all this play with the Metaverse and such was on the public's dime. As soon as a company had to spend real money, it all came crashing down.
This is a weird take. Employees are supposed to be business expenses; that's the core idea of running a business: profit = revenue - expenses, where expenses are personnel and materials, and you pay taxes on profit. Since the R&D change, businesses can't fully expense software employees in the year they're paid, so they end up paying (business) taxes on income that already went out the door as salaries. Employees, of course, still pay personal taxes as well (as was always the case).
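To make the mechanics concrete, here's a rough illustration with made-up round numbers, assuming the post-2022 rule of five-year amortization (with a half-year convention) for domestic software R&D:

    Revenue:                        $1,000k
    Developer salaries (all R&D):     $800k
    Old rule: deduct the full $800k        -> taxable income $200k
    New rule: deduct $800k / 5 / 2 = $80k  -> taxable income $920k in year one

So a company that is barely cash-flow positive is suddenly taxed as if it were highly profitable, even though the cash already left as salaries. The deductions catch up in later years, but the year-one hit is what squeezed cash flow.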
Yeah, free is a bit of an odd take. Still, the end of ZIRP plus section 174 was a huge simultaneous blow to tech.
I would add one more: me too-ism from CEOs following Musk after the twitter reductions. I think many tech CEOs (e.g., Zuck) hate their workforce with a passion and used the layoff culture to unwind things and bring their workforce to heel (you might be less vocal in this sort of environment... think of the activists that used to work at Google).
> me too-ism from CEOs following Musk after the twitter reductions
I see evidence of collusion. My friends at several tech companies (software and hardware) received very similar-sounding emails in a similar time frame. I think the goal was "salary compression". Management was terrified of the turnover and salary growth, so they decided to act. They threw a bunch of people on the labor market at once to cool it down. It would normalize eventually, but you don't need long. Fired H-1B holders have to find a new job within 2 months or self-deport.
Totally agree. They wanted to mess with supply/demand to lower salaries. A lot of very highly paid people were laid off or forced out. RTO is really about shedding people, too, so let's not forget about that.
If a software engineer in an R&D project is using an AI service to develop the software, does the bill count as an ordinary company business expense, or does it fall under section 174?
That's about to get repealed it looks like.
TACO
For those unaware, the "TACO trade" is when Wall Street investors trade based on the principle that "Trump Always Chickens Out". For example, buying in a tariff-induced dip on the principle that he'll probably repeal the tariffs.
Now that someone's said to Trump's face that Wall Street thinks he always chickens out, he may or may not stop doing it.
> Now that someone's said to Trump's face that Wall Street thinks he always chickens out, he may or may not stop doing it
The point is he’s powerless not to. The alternative is allowing a bond rout to trigger a bank collapse, probably in rural America. He didn’t do the prep that produces actual leverage. (Xi did.)
This was the most interesting thing I found during the past few weeks - even “The US President is the most powerful man in the world” can’t win a war against the bond market.
> even “The US President is the most powerful man in the world” can’t win a war against the bond market
"You will not find it difficult to prove that battles, campaigns, and even wars have been won or lost primarily because of logistics" (D. D. Eisenhower).
Trump did zero preparation for this trade war. It's still unclear what the ends are, with opposing and contradictory aims being messaged. We launched the war simultaneously against everyone. The formula used to calculate tariffs doesn't make sense. And Trump decided to blow out the deficit and kneecap U.S. state capacity at the same time he's negotiating against himself on trade.
The U.S. President can take on the bond market. Most simply by taking the budget into surplus, thereby threatening its existence. But Trump didn't do that. He didn't even pretend he was going to do that. Instead, he's strategically put himself in a position where he has to chicken out, and it honestly seems like he's surrounded himself with people who are too high, drunk and/or stupid to see that. He's the poker player who shows up at the table, goes all in, looks at his cards and folds in one move.
There's no end - it's just Trump following his learned or innate behaviour.
Same behaviour that bankrupted every institution he's ever been in charge of before. The definition of insanity is doing the same thing again and expecting different results.
It's possible he'll stop chickening out to win his internal argument against that reporter who said he always chickens out. Feeling like he's winning seems to be important to him and he holds grudges for a long time. In that case the American economy goes bye bye.
We already know he wants to end the dollar reserve currency status, because he said so - trade deficit and reserve currency status are different words for the same thing.
Trump has never been good enough for the financial structures he has been finagled into the top position of.
So many dumpster fires but only a few official bankruptcies, well that's always what's on the table and anything goes.
Back in the 20th century almost everybody knew that Trump was not trustworthy, especially not with money, give me a break, that's what made him such a tragic/comic character.
It's almost like people forget that with any org where he is the ultimate decision-maker, if there is challenging debt with no quick way out, he is more likely than most to declare bankruptcy. Otherwise it would require acumen he has never had to right a faltering ship. Plus he would be bogged down when he wanted to shift his focus to schemes that were more promising to him personally. Like other pie-in-the-sky deals back then, or something like his memecoin today. So many times in different orgs with different/leading personalities it's only a declaration away anyhow. Not normally on the menu for the best of the real decent businessmen, but what do you do when you get one that's far from the best and not even decent?
If there were some deep insight into his personal financial situation over the years, especially recently, there might be a more accurate picture whether he would be inclined to "one day" just decide to declare the whole USA bankrupt and move on to greener pastures himself. Or if the decision has already been made, who knew? Or would believe it yet anyway?
Any President could always have made more money doing something else, the whole time it's only been a matter of integrity, or lack thereof.
Can you expand on "probably in rural America"? Do you just mean that those smaller community banks are more at risk if rates rise? If so, because they issue more variable rate debt? Or is there something else?
edit: grammar
> Do you just mean that those smaller community banks are more at risk if rates rise? If so, because they issue more variable rate debt? Or is there something else?
Current issue is community banks have 3x the commercial real estate exposure of other banks [1]. They're also less liquid and have a lower ROA. So in cases where the shock comes from outside the financial sector, they tend to be the first we worry about.
[1] https://www.fdic.gov/quarterly-banking-profile 33% vs 11% of total assets
Never assume a narcissist will take the sane way out when their game blows up in their face.
Yeah, I thought the same thing. The steel tariff announcement is the first real test. Announcing the US Steel merger? / purchase? at the same time is, I think, part of the plan. I think he is going to stick this one out to prove them wrong. Would be interesting to see if TACO is even real; I could see someone on Wall Street opening a ton of puts, making the story up, and then leaking the fake story to the reporter.
Mission accomplished.
Don't look a gift taco in the mouth
This
And the reason why I said it is because 174 is part of Trump's Cut^3 bill from 2017. DOE 174.
What's happening now is similar to what happened when the dot-com bubble burst in the early 2000s. Having barely survived that time, I saw this one coming, and people told me I was crazy when I told them to hold on to their jobs and quit job-hopping, because the job-hopper is very often the first one to get laid off.
In 2000 I moved cities with a job lined up at a company that was run by my friends. I had about 15 good friends working at the company, including the CEO, and I was guaranteed the software development job. The interview was supposed to be just a formality. So I moved, went in to see the CEO, and he told me he could not hire me: the funding was cut and there was a hiring freeze. I was devastated. Now what? Well, I had to freelance and live on whatever I could scrape together, which was a few hundred bucks a month, if I was lucky. Fortunately the place I moved into was a big house with my friends who worked at said company, and since my rent was so low at the time, they covered me for a couple of years. I did eventually get some freelance work from the company, but things did not really recover until about 2004, when I finally got a full-time programming job, after 4 very difficult years.
So many tech companies over-hired during covid, there was a gigantic bubble happening with FAANG and every other tech company at the time. The crash in tech jobs was inevitable.
I feel bad for people who got left out in the cold this time, I know what they are going through.
Those are some great friends. Aside from job hoppers, I noticed there are a lot of company loyalists getting canned too though (e.g., worked at MSFT 10 years).
It's not exactly the same this time around, the dot-com bubble was a bit different, but both then and now were preceded by huge hiring bubbles and valuations that were stupid. Now it's a little different 25 years later, tech has advanced and AI means cutting the fat out of a lot of companies, even Microsoft.
AI is somewhat creating a similar bubble now, because investors still have money, and the current AI efforts are way over-hyped. $6.5 billion paid to acquihire Jony Ive is a symptom of that.
Keynes suggested that by 2030, we'd be working 15 hour workweeks, with the rest of the time used for leisure. Instead, we chose consumption, and helicopter money gave us bullshit jobs so we could keep buying more bullshit. This is fairly evident from the fact that when the helicopter money runs out, all the bullshit jobs get cut.
AI may give us more efficiency, but it will be filled with more bullshit jobs and consumption, not more leisure.
Keynes lived in a time when the working class was organized and exerting its power over its destiny.
We live in a time that the working class is unbelievably brainwashed and manipulated.
> Keynes lived in a time when the working class ...
Keynes lived in a time when the working class could not buy cheap from China... and complain that everybody else was doing the same!
He was extrapolating, as well. Going from children in the mines to the welfare state in a generation was quite something. Unfortunately, progress slowed down significantly for many reasons but I don’t think we should really blame Keynes for this.
> We live in a time that the working class is unbelievably brainwashed and manipulated.
I think it has always been that way. Looking through history, there are many examples of turkeys voting for Christmas and propaganda is an old invention. I don’t think there is anything special right now. And to be fair to the working class, it’s not hard to see how they could feel abandoned. It’s also broader than the working class. The middle class is getting squeezed as well. The only winners are the oligarchs.
> progress slowed down significantly for many reasons
I think progress (in the sense of economic growth) was roughly in line with what Keynes expected. What he didn't expect is that people, instead of getting 10x the living standard with 1/3 the working hours, rather wanted to have 30x the living standard with the same working hours.
It's not really clear where he got this from.
Throughout human history, starting with the spread of agriculture, increased labor efficiency has always led to people consuming more, not to them working less.
Moreover, throughout the 20th century, we saw several periods in different countries when wages rose very rapidly - and this always led to a temporary average increase in hours worked. Because when a worker is told "I'll pay you 50% more" - the answer is usually not "Cool, I can work 30% less", but "Now I'm willing to work 50% more to get 2x of the pay".
> Throughout human history, starting with the spread of agriculture, increased labor efficiency has always led to people consuming more, not to them working less.
Can you give a single example where that happened?
During the industrial revolution it was definitely not what happened. In the late 1700s laborers typically averaged around 80 hours per week. In the 1880s this had decreased to around 60 hours per week. In the 1920s the average was closer to 48 hours per week. By the time Keynes was writing, the 40 hour work week had become standard. Average workweek bottomed out in the mid 1980s in the US and UK at about 37 hours before starting to increase again.
> 80 hours per week
> 60 hours per week
That never was the case (except for short periods after salary increases).
And this is not a question where there could be any speculation: in those days there were already people collecting such statistics, and we have a bunch of diaries describing the actual state of affairs, both from the workers themselves and from those who organized their labor - and everything shows that few people worked more than 50 hours a week on average.
Most likely, the myth about 80 hours a week stems from the fact that such weeks really were common, but it was work in the format of "work a week or two or a month for 80 hours, then a week or two or a month you don't work, spend money, arrange your life"
There is also agriculture, which employed a significant part of the population in the past. There, productive work usually averaged even less than 40 hours a week; it's just that timing is of great importance, and there are bottlenecks, so when necessary you have to work 20 hours a day, which is compensated by periods when the workload is much less than 6 hours a day.
It most certainly was the case. As you correctly point out, people were collecting such statistics at the time, we know how much they worked and they worked a lot. In London from 1750 to 1800 the average male laborer worked over 4000 hours per year, and the typical year had 307 workdays. We have records of employment that list who worked which days at particular businesses, and court cases where witnesses testified about their work schedules, and we know of complaints from people at this time about the excessive amount of time they worked.
Take the Philadelphia carpenters' strike in 1791, where they were on strike demanding a reduction in hours to a 60 hour work week. The strike was unsuccessful. In the 1820s there was a so called "10 Hour Day" labor movement in New York City (note that at this time people worked 6 days a week). In the 1840s mill workers in Massachusetts attempted to get the state legislature to intervene and reduce their 74 hour workweeks. This was also unsuccessful. Martin Van Buren signed an executive order limiting workdays for federal employees to 10 hours per day. The first enforceable labor law in the US came in 1874, which set a limit of 60 hours in a workweek for women in Massachusetts.
There’s no middle class. You either have to work for a living or you don’t.
> You either have to work for a living or you don't
The words 'have to' are doing a lot of work in that statement. Some people 'have to' work to literally put food on the table; other people 'have to' work to be able to make payments on their new yacht. The world is full of people who could probably live out the rest of their lives without working any more, but doing so would require drastic lifestyle changes they're not willing to make.
I personally think the metric should be something along the lines of how long would it take from losing all your income until you're homeless.
> I personally think the metric should be something along the lines of how long would it take from losing all your income until you're homeless.
What income? Income from a job, or from capital? A huge difference. The latter is also a lot harder to lose (it takes gross incompetence or a revolution), while the former is much easier.
Yea, should have been clearer. Income from work (or unemployment benefits) in this case. Someone who works to essentially supplement their income, but could live off their capital, is in a very different position than someone for whom work is their only source of income or wealth.
The sentence works without those two words. “You either work for a living or you don’t.”
Now what?
Now it comes down to how you define 'for a living'. You still need to differentiate between people who work to survive, people who work to finance their aspirational lifestyle, and people who have all the money they could possibly need and still work because they either see it as a calling or they just seem to like working. Considering all these people in the same 'class' is far too simplistic.
Enh, to me it’s not, either you work or you don’t.
So someone on the edge of poverty, balancing two or three minimum wage jobs just to make ends meet, should be considered part of the same class as the CEO of Microsoft or Google? Hell most people on the Forbes list 'work' in at least some meaning of the word, even if many of them effectively work for themselves.
What about the trust fund kid working part time at an art gallery just because they like the scene and hanging out with artists? Same class?
And on the flip side, are pensioners, the unemployed, and people on permanent disability part of the same class as the dilettante children of billionaires?
Are we talking about the verb or an ideal? Either you work or you don’t.
> Are we talking about the verb or an ideal?
We are talking about class, and if we should be making distinctions between groups of people who work for a living based on their wealth, income, and economic stability. I believe there is a fundamental class difference between people who work, but are rich enough to stop working whenever they want, those who can't quite stop working but are comfortable enough to easily go 6 month without a pay check, and people who are only a couple of missed pay checks away from literal homelessness.
I guess I lost the plot. Same point, either you work or you don’t. I grew up not knowing if I would have dinner that night because we were so poor. I learned I needed to work to eat. I don’t care that rich people are rich, I only care about myself and my family.
Coddling poor people is so severely out of touch with their reality, they most likely resent the hell out of you for it, I know I did.
Nobody's coddling anyone here. Acknowledging the reality of class in society isn't doing anything but analysis.
The original claim was a proposal to coarsen the resolution of class analysis one degree relative to Marx and no longer differentiate between the modern proletariat (working class), bourgeoisie (middle class), and aristocracy (upper class), in this case proposing to lump together the bourgeoisie and proletariat because they both have to work or they'll starve to death.
In this world, being born from the orifice of an aristocrat means you never have to work ('have to' meaning, 'or you'll die of exposure'). That's a frank reality. If your reaction to being born from a non-aristocratic orifice is to shrug your shoulders and accept reality, great, nobody's trying to take that from you.
However you seem to be taking it a step further and suggesting that the people pointing out that this nature of society is unfair are somehow wrong to do so. I disagree. I think it is perfectly valid to be born from whatever orifice, declare the obvious unfairness of the situation, and work to balance things out for people. That's not coddling, it's just ensuring that we all benefit in a just way from the work of your grandfather. Because right now, someone has stolen the value of his work from you, and that's why you (and I) had to work so hard to get where we are today.
If you love that you had to work so hard, fine. I could take it or leave it. Instead of working a double through school, I would have preferred to focus more on my studies and get higher grades, and find better internships instead of slinging sandwiches. Personally I look at the extraordinary wealth of the aristocrat class and I think, "is it more important that they're allowed to own 3 yachts, or that all the children of our society can go to college?" I strongly believe any given country will be much stronger if it has less yachts and more college-educated people. Or people with better access to healthcare. Or people with better transit options to work. Etc.
> I strongly believe any given country will be much stronger if it has less yachts and more college-educated people.
And even if you strongly disagree with that statement, it is important to have a framework within which your opinion of it can be analysed.
There were articles on AI that linked to how it's used at Microsoft.
Satya Nadella doesn't read his emails, and doesn't write responses. He subscribes to podcasts and then gets them summarised by AI.
He turns up to the office and takes home obscene amounts of money for doing nothing except play with toys and pretend he's working.
They are "working", but they are actually just playing. And I think thats the problem with some of these comments, they aren't distinguishing between work and what is basically a hobby.
> What about the trust fund kid working part time at an art gallery just because they like the scene and hanging out with artists?
It's a hobby. They don't have to do it, and if they get fired for gross misconduct then they could find alternative ways to pass the time.
Homeless, or lose your current house? Downsizing and/or moving somewhere cheaper could go a long way. Yet losing their current level of housing is what, I think, most people want to avoid.
Either works, but homeless is more absolute. For some, downsizing means moving into their car, and for others it means moving into a 3000 sq ft house, with a smaller pool, in the third nicest neighbourhood in town. But yea, losing your house and being forced to drastically downsize against your will is no doubt traumatic in both cases.
“from losing all your income until you're homeless.”
I’m willing to bet you haven’t lived long enough to know that’s a more or less a proxy for old age. :) That aside, even homeless people acquire possessions over time. If you have a lot of homeless in your neighborhood, try to observe that. In my area, many homeless have semi functional motor homes. Are they legit homeless, or are they “homeless oligarchs”? I can watch any of the hundreds of YouTube channels devoted to “van life.” Is a 20 year old who skipped college which their family could have afforded, and is instead living in an $80k van and getting money from streaming a “legit homeless”? The world is not so black and white it will turn out in the long run.
Many of those semi-functional motorhomes are actually owned by a particular type of slumlord (vanlord) who rent them out to homeless people.
https://sanjosespotlight.com/san-jose-to-crack-down-on-rv-re...
While you're not wrong about what differentiates those with wealth from those without, I think it ignores a lot of nuance.
Does one have savings? Can they afford to spend time with their children outside of working day to day? Do they have the ability to take reasonable risks without chancing financial ruin in pursuit of better opportunities?
These are things we typically attribute to someone in the middle class. I worry that boiling down these discussions to “you work and they don’t” misses a lot of opportunity for tangible improvement to quality of life for large number of people.
It doesn't - it's a battle cry for the working classes (i.e., anyone who actually works) to realize they are being exploited by those that simply do not.
If you have an actual job and an income constrained by your work output, you could be middle class, but you could also recognize that you are getting absolutely ruined by the billionaire class (no matter what your level of working wealth)
I'm really not convinced that I and my CEO share a common class interest against the billionaires, and I'm not particularly interested in standing together to demand that both of us need to be paid more.
I don't know how to convince you that the two of you are struggling against each other when you should be in common cause, but in my experience, if the CEO thinks, even more than you do, that they are a temporarily embarrassed billionaire, then I can see why you'd resent them. That doesn't change the facts of the matter though.
Traditionally there were the English upper class, who had others work for them, and the working class, who did the work. Doctors and bankers were the middle class, because they owned houses with 6-8 servants running them, so while they worked, they also had plenty of people working for them.
I agree with your point. Now doctors are working class as well.
That's reductive. The middle class in the US commonly describes people who have access to goods and services in moderation. You aren't poor just because you can't retire.
It is very possible that foreign powers use AI to generate social media content en masse for propaganda. If anything, the internet up to 2015 seemed open for discussion and swaying by real people's opinions (and mockery of the elite classes), while manipulation and manufactured consent became the norm after 2017.
> It is very possible that foreign powers use AI to generate social media content en masse for propaganda.
No need for AI. Troll farms are well documented and were in action before transformers could string two sentences together.
The Italian party Lega (in the government coalition) has been using deepfakes for some time now. It's not only ridiculous, it's absolutely offensive to the people they mock - von der Leyen, other Italian politicians...
Queen Ursula deserves to be mocked.
Even from an angle that destabilizes the EU and so directly benefits Russia?
My answer to this would be, i think: "well, if my mocking Ursula Von Der Leyen destabilizes the EU then maybe the EU shouldn't exist."
right?
pact of steel?
anyone?
Yes. She’s not an elected representative. And she’s been utterly ineffective at threatening Russia with her soft stance (Yes, in war, strong words are weak actions). Her place is back in Hunger Games, starving everybody for the greater good of the elite class.
This is also pre-/post-Snowden & Schrems, which challenged the primary economic model of the internet as a surveillance machine.
All the free money dried up and the happy clapping Barney the Dinosaur Internet was no more!
He also lived in a time when the intense importance and function of a moral and cultural framework for society was taken for granted. He would have never imagined the level of social and moral degeneration of today.
I will not go into specifics because the authoritarians still disagree and think everything is fine with degenerative debauchery and try to abuse anyone even just pointing to failing systems, but it all does seem like civilization ending developments regardless of whether it leads to the rise of another civilization, e.g., the Asian Era, i.e., China, India, Russia, Japan, et al.
Ironically, I don’t see the US surviving this transitional phase, especially considering it essentially does not even really exist anymore at its core. Would any of the founders of America approve of any of America today? The forefathers of India, China, Russia, and maybe Japan would clearly approve of their countries and cultures. America is a hollowed out husk with a facade of red, white, and blue pomp and circumstance that is even fading, where America means both everything and nothing as a manipulative slogan to enrich the few, a massive private equity raid on America.
When you think of the Asian countries, you also think of distinct and unique cultures that all have their advantages and disadvantages, the true differences that make them true diversity that makes humanity so wonderful. In America you have none of that. You have a decimated culture that is jumbled with all kinds of muddled and polluted cultures from all over the place, all equally confused and bewildered about what they are and why they feel so lost only chasing dollars and shiny objects to further enrich the ever smaller group of con artist psychopathic narcissists at the top, a kind of worst form of aristocracy that humanity has yet ever produced, lacking any kind of sense of noblesse oblige, which does not even extend to simply not betraying your own people.
That a capitalist society might achieve a 15 hour workweek if it maintained a "non debauched culture" and "culture homogeneity" is an extraordinary claim I've never seen a scrap of evidence for. Can you support this extraordinary claim?
That there's any cultural "degenerative debauchery" is an extraordinary claim. Can you back up this claim with evidence?
"Decimated," "muddled," and "polluted" imply you have an objective analysis framework for culture. Typically people who study culture avoid moralizing like this because one very quickly ends up looking very foolish. What do you know that the anthropologists and sociologists don't, to where you use these terms so freely?
If I seem aggressive, it's because I'm quite tired of vague handwaving around "degeneracy" and identity politics. Too often these conversations are completely presumptive.
> That there's any cultural "degenerative debauchery" is an extraordinary claim. Can you back up this claim with evidence?
What's the sense in asking for examples? If one person sees ubiquitous cultural decay and the other says "this is fine," I think the difference is down to worldview. And for a pessimist and an optimist to cite examples at one another is unlikely to change the other's worldview.
If a pessimist said, "the opioid crisis is deadlier than the crack epidemic and nobody cares," would that change the optimist's mind?
If a pessimist said, "the rate of suicide has increased by 30% since the year 2000," would that change the optimist's mind?
If a pessimist said, "corporate profits, wealth inequality, household debt, and homelessness are all at record highs," ...?
And coming from the other side, all these things can be Steven Pinker'd if you want to feel like "yes there are real problems but actually things are better than ever."
There was a book that said something about "you will recognize them by their fruit." If these problems are the fruit born of our culture, it's worth asking how we got here instead of dismissing it with "What do you know that the anthropologists and sociologists don't?"
Sure some things are subjective but wide-ranging and vague claims are unactionable and therefore imo should simply be ignored. If someone's going to say something like that I think it's worth challenging them to get specific and actionable.
I also wholeheartedly disagree that, vaguely, diversity has something to do with the reduction of material conditions, or gay people, or whatever tf, so I wanted to allow the op the opportunity to be demonstrably wrong. They wouldn't take it of course, because there's no evidence for what they claim, because it's a ridiculous assertion.
The reasons things are the way they are today are identifiable and measurable. Rent is high mostly because housing is an investment vehicle and supply is locked by a functional cartel. Homelessness is high mostly because of a lack of universal healthcare. Crime is continually dropping despite what the media says, and immigrants commit less crime per capita than any other demographic group - but the jails remain full because the USA engages in a demonstrably ineffective retributive justice system.
I'm so tired of conservatives walking around flinging every which way their feelings as facts. Zizek has demonstrated the potential value of a well considered conservative ideology, and unfortunately today all we get from that side is vague (or explicit) bigotry.
The OP didn't just claim that there's cultural degeneracy happening (which, again, they didn't define very well), they blamed real-world outcomes on it. That's a challengeable premise.
Oh the prized Asian magic, more civilized, less mixed, the magical place.
Capitalism arrives for everyone, Asia is just late for the party. Once it eventually financializes everything, the same will happen to it. Capitalism eventually eats itself, doesn't matter the language or how many centuries your people might have.
Keynes didn't anticipate social media
If you work 15 hours/week then presumably someone who chose to work 45 hours/week would make 3x more money.
This creates supply-demand pressure for goods and services. Anything with limited supply such as living in the nice part of town will price out anyone working 15 hours/week.
And so society finds an equilibrium…
Presumably the reduction to a 15 hour workweek would be much the same as the reduction to the 40 hour workweek - everyone takes the same reduction in total hours and increase in hourly compensation encoded in labor laws specifically so there isn't this tragedy of the commons.
Unless the law forbids working more than 15 hours per week, the numbers will shift around but the supply-demand market equilibrium will remain approximately the same.
If minimum wage goes up 40/15 = 267%, then the price of your coffee will go up 267% because the coffeeshop owner needs to pay 267% more to keep the cafe staffed.
The 40 hour work week is something of a cultural equilibrium. But we've all heard of doctors, lawyers, and bankers working 100h weeks, which affords them some of the most desirable real estate in the world...
> Unless the law forbids working more than 15 hours per week, the numbers will shift around but the supply-demand market equilibrium will remain approximately the same.
Require anyone working over 15 hours to be paid time and a half overtime. If you want to hire one person to work 40 hours per week, that is 30% more expensive than hiring 3 people to work the same number of hours. In some select instances sure, having a single person do the job is worth the markup, and some people will be willing to work those hours, just like today you have some people working over 40, but in general the market will demand reduction in working hours.
Similarly, there is a strong incentive to work enough hours to be counted as a full time employee, so the marginal utility of that 35th hour is pretty high currently, whereas if full time benefits and labor protections started at 15 hours, then the marginal utility of that 35th hour would be substantially less.
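Spelling out that overtime arithmetic (a rough sketch using the time-and-a-half threshold above; the hours are illustrative):

    One hire at 40 hrs/week:       15 + 25 x 1.5 = 52.5 paid-hour equivalents
    Three hires at ~13.3 hrs each: 40 x 1.0      = 40.0 paid-hour equivalents
    52.5 / 40 ≈ 1.31, i.e. the single full-time hire costs roughly 30% more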
> If minimum wage goes up 40/15 = 267%, then the price of your coffee will go up 267% because the coffeeshop owner needs to pay 267% more to keep the cafe staffed.
That would be true if 100% of the coffee shop's revenue went to wages. Obviously that's not the case. In reality, the shop is buying ingredients, paying rent for the space, paying off capex for the coffee making equipment, utilizing multiple business services like accounting and marketing, and hopefully at the end of the day making some profit. Realistically, wages for a coffee shop are probably 20-30% of revenue. So to cover the increased cost of labor, prices would have to rise 53%. Note that in this scenario you also have 267% more money to spend on coffee.
Of course there are some more nuances as prices in general inflate. Ultimately though, the equilibrium you reach is that people working minimum wage for a full workweek wind up able to afford 1 minimum-wage workweek worth of goods and services. This holds true in the long term regardless of what level minimum wage is or how long a workweek is. Indeed you could just as easily have everyone's wages stay exactly the same but we are all working less, then we all have less money and there is a deflationary effect but in the long term we wind up at the same situation. Ideally, you'd strike a balance between these two which reaches the same end state with a reasonably steady money supply.
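Rough pass-through numbers for the coffee example (a sketch assuming labor is about 30% of the shop's revenue; push the share closer to a third and you land on the ~53% above):

    Old price: 100 = 30 labor + 70 everything else
    New labor: 30 x 2.67 ≈ 80
    New price: 80 + 70 = 150, i.e. roughly a 50% increase rather than 267%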
> The 40 hour work week is something a cultural equilibrium.
No, it isn't. It is an arbitrary convention, one in a long series which had substantially different values in the past. It has remained constant because it is encoded in law in such a way that it is no longer subject to simple pressures of labor supply and demand.
> But we've all heard of doctors, lawyers, and bankers working 100h weeks which affords them some of the most desirable real estate in the world...
There are a lot more than just doctors and lawyers and bankers working long hours. 37% of Americans work 2 full time jobs, and most of them aren't exactly in a position to afford extremely desirable real estate. If the workweek were in an equilibrium due to supply and demand, wouldn't these people just be working more hours at their regular jobs?
I think something Keynes got wrong there, and much AI job discussion ignores, is that people like working, subject to the job being fun. Look at the richest people with no need to work - Musk, Buffett etc. Still working away, often well past retirement age, with no need for the money. Keynes himself, wealthy and probably with tenure, kept working away on his theories. In the UK you can quite easily do nothing by going on disability allowance, and many do, but they are not happy.
There can be a certain snobbishness with academics, where they're like, "of course I enjoy working away on my theories of employment, but the unwashed masses do crap jobs and would rather sit on their arses watching reality TV." But it isn't really like that. Usually.
The reality for most people is that they need to work to financially sustain themselves. Yes, there are people who just like what they do and work regardless, but I think we shouldn't discount the majority, who would drop their jobs, or at least work fewer hours, were it not for the need for money.
Although in democracies we've largely selected that system. I've been to socialist places - Cuba, and Albania before communism collapsed - where a lot of people didn't do much but were still housed and fed (not very well - ration books), but no one seems to want to vote that stuff in.
The thing about those systems is you'd have to forgo the entire notion about private property and wealth as we currently know it for it to work out. Even then, there would be people who wouldn't want to work/contribute and the majority who would contribute the bare minimum (like you're saying). The percentage of people who'd work because they like it wouldn't be much higher than it is now. Or it might be even lower, as money wouldn't be as much of a factor in one's life.
It seems like a democratic system could both maintain private property and make sure all of its citizens have their basic needs satisfied (food, housing, education, medical). I don't see how these two are mutually exclusive, unless you take a hard line that taxation is theft.
I think more people take a soft line. Taxation isn't theft, but too much taxation is theft.
I don't know that I've ever heard this rationally articulated. I think it's a "gut feel" that at least some people have.
If taxes take 10% of what you make, you aren't happy about it, but most of us are OK with it. If taxes take 90% of what you make, that feels different. It feels like the government thinks it all belongs to them, whereas at 10%, it feels like "the principle is that it all belongs to you, but we have to take some tax to keep everything running".
So I think the way this plays out in practice is, the amount of taxes needed to supply everyone's basic needs is across the threshold in many people's minds. (The threshold of "fairness" or "reasonable" or some such, though it's more of a gut feel than a rational position.)
> food, housing, education, medical
Literally unlimited needs; the term "basic" does not apply to them.
I'm not sure what you mean by "unlimited needs". These things are definitely finite, and can be basic.
While they didn't do much at work and could coast forever, they still had to show up and sit out the hours. And this does seem to correlate highly with ration books. Which are also not Amazon-fulfilled, but require going to a store, waiting in line, worrying that the rations would run out, yada yada.
I'll take capitalism with all its warts over that workers paradise any day.
How did you visit Albania before communism collapsed? I thought it was closed off from the world.
Well, it was in the middle period, when communism had collapsed in some places but Albania was still communist. They did tourist day trips from Corfu to raise some hard currency. It's only about a mile from Albania at the closest point.
Yeah, I’ve been to that part of the world. That’s really cool. I didn’t know it was available to tour at that time.
What percentage of people would you say like working for fun? Would you really claim they make up a significant portion of society?
Even me: I work a job that I enjoy, building things I'm good at, almost stress free, and after 10-15 years I find that I would much rather spend time with my family, or even spend a day doing nothing, than spend another hour doing work for other people. The work never stops coming and the meaninglessness is stronger than ever.
I think a lot of people would work fewer hours and probably retire earlier if money were absolutely not in the equation. That said, it's also true that there are a lot of things you realistically can't do on your own--especially outside of software.
Well - I guess you are maybe typical in quite liking the work but wanting to do fewer hours? I saw some research that hunter-gatherers work about 20 hours a week - maybe that's an optimum.
A lot of people like the work they do, but they also like the things they do when they aren't working - more.
Meanwhile your examples for happy working are all billionaires who do w/e tf they want, and your example of sad non working are disabled people.
Not to undercut your point - because you’re largely correct - but this is my reality. I have a decent-paying job in which I work roughly 15 hrs a week. Sometimes more when work scales up.
That said, I’m not what you’d call a high-earning person (I earn < 100k) I simply live within my means and do my best to curb lifestyle creep. In this way, Keynes’ vision is a reality, but it’s a mindset and we also have to know when enough wealth is enough.
You're lucky. Most companies don't accept that. Frequently, even when they have part time arrangements, the incentives are such that middle managers are incentivized to squeeze you (including squeezing you out), despite company policies and HR mandates.
I am lucky. I work for a very small consultancy (3 people plus occasional contractors) and am paid a fraction of our net income.
The arrangement was arrived at because the irregular income schedule makes an hourly wage or a salary a poor option for everyone involved. I’m grateful to work for a company where the owners value not only my time and worth but also value a similar work routine themselves.
40 hours/week is of course just an established norm for a lot of people and companies. But two 20 hour/week folks tend to cost more than one 40 hour/week person for all sorts of reasons.
source?
Well, for starters people probably want health insurance in the US which often starts at some percentage of full-time. Various other benefits. Then two people are probably just more overhead to manage than one. Though they may offer more flexibility.
Which is a shame, because I bet most knowledge workers aren't putting in more than three or four hours of solid work. The rest of the time they are just keeping a seat warm.
Spoken like middle management. If a knowledge worker is only putting in 4 hours they're either mismanaged or dead weight. Fire their manager and see if they are more effective; if not, then let them go. As a developer I routinely work 9 hour days without lunch and so do the others on my team and most people I've worked with as a developer. Myths like the 10x developer and the lazy 4-hour knowledge worker are like the myth of the welfare queen. We really need to be more aware that when we complain about 5% of people, it becomes 100% to those outside of the field.
> As a developer I routinely work 9 hour days without lunch and so do the others on my team and most people I've worked with as a developer.
I've come across people like you and they don't produce as much value as they think.
I'm working hard on this one. I'm down to a three-day week, and am largely keeping the boundaries around those other four.
It came about late last year when my current employer started getting gently waved off in early funding pitches. That resulted in some thrash, forced marches to show we could ship, and the attendant burnout for me and a good chunk of the team I managed. I took a hard look at where the company was and where I was, and decided I didn't have another big grind in me right now.
Rather than just quit like I probably would have previously, I laid it out to our CEO in terms of what I needed: more time taking care of my family and myself, less pressure to deliver impossible things, and some broad idea of what I could say "no" to. Instead of laughing in my face, he dug in, and we had a frank conversation about what I _was_ willing to sign up for. That in turn resulted in a (slow, still work-in-progress) transition where we hired a new engineering leader and I moved into a customer-facing role with no direct reports.
Now I work a part-time schedule, so I can do random "unproductive" things like repair the dishwasher, chaperone the kid's field trip, or spend the afternoon helping my retired dad make a Costco run. I can reasonably stop and say, "I _could_ pay someone to do that for me, but I actually have time this week and I can just get it done" and sometimes I...actually do, which is kind of amazing?
...and it's still fucking hard to watch the big, interesting decisions and projects flow by with other people tackling them and not jump in and offer to help. B/c no matter what a dopamine ride that path can be, it also leads to late nights and weekends working and traveling and feeling shitty about being an absentee parent and partner.
Most people are leisuring at work (by Keynes-era standards) and also getting paid for it
> Keynes suggested that by 2030, we’d be working 15 hour workweeks, with the rest of the time used for leisure.
I suspect he didn't factor in how many people would be retired and on entitlements.
We're not SUPER far from that now, when you factor in how much more time off the average person has, how much larger a percentage of the population is retired, and how much of a percentage is on entitlements.
The distribution is just very unequal.
I.E. if you're the median worker, you've probably seen almost no benefit, but if you're old or on entitlements, you've seen a lot of benefits.
> Keynes suggested that by 2030, we’d be working 15 hour workweeks
Most people with a modest retirement account could retire in their forties to a 15-hour workweek somewhere in rural America.
The trade is you need to live in VHCOL city to earn enough and have a high savings rate. Avoid spending it all on VHCOL real estate.
And then after living at the center of everything for 15-20 years be mentally prepared to move to “nowhere”, possibly before your kids head off to college.
Most cannot meet all those conditions and end up on the hedonic treadmill.
> you need to live in VHCOL city to earn enough and have a high savings rate
Yes to the latter, no to the former. The states with the highest savings rates are Connecticut, New Jersey, Minnesota, Massachusetts and Maryland [1]. Only Massachusetts is a top-five COL state [2].
> then after living at the center of everything for 15-20 years be mentally prepared to move to “nowhere”
This is the real hurdle. Ultimately, however, it's a choice. One chooses to work harder to access a scarce resource out of preference, not necessity.
[1] https://en.wikipedia.org/wiki/List_of_U.S._states_by_savings...
[2] https://en.wikipedia.org/wiki/List_of_U.S._states_by_savings...
CT & NJ being top of the list points to the great NYC metropolitan wage premium though doesn't it? MA at #4 picks up Boston, MD at #5 picks up DC, etc.
CA is probably nowhere on the list because Silicon Valley is such a small part of the state that its premium gets diluted in the state-level average.
I am not finding a clear definition of this index, but it appears to be $saved/$income (or $saved/$living expenses), right? So 114% in CT dollars is probably way more than 102% in Kansas dollars.
It's also worth noting the point I was making is - if you take a "one years NYC income in savings" amount of money and relocate to say, New Mexico.. the money goes a lot further than trying to do the opposite!
Some countries are still trending in that direction:
https://www.theguardian.com/commentisfree/2024/nov/21/icelan...
Policy matters
Keynes also convinced us that high unemployment and high inflation couldn't happen at the same time. This was proven wrong in the early 1970s.
It's more likely 15% of the workforce will have jobs. They'll be working eighty hour weeks and making just enough to keep them from leaving.
Now one has to work 60 hours to afford housing(rent/mortgage) and insurance (health, home, automotive). Yes, food is cheap if one can cook.
> Keynes suggested that by 2030, we’d be working 15 hour workweeks
Yeah, I'd say I get up to 15 hours of work done in a 40 hour workweek.
It's still not 2030 yet. It could still happen.
> Instead, we chose consumption
instead, corporations chose to consume us
"Bullshit jobs" are the rubbish required to keep the paperwork tidy, assessed and filed. No company pays someone to do -nothing-.
AI isn't going to generate those jobs, it's going to automate them.
ALL our bullshit jobs are going away, and those people will be unemployed.
I foresee programmers replaced by AI, and the people who programmed becoming pointy-haired bosses to the AI.
I foresee that when people only employ AI for programming, it quickly hits the point where the models train on their own (usually wrong) code and it spirals into an implosion.
When kids stop learning to code for real, who writes GCC v38?
This whole LLM thing is just the next bitcoin/NFT. People had a lot of video cards and wanted to find a new use for them. In my small brain it's so obvious.
LLMs maybe but there will be other algorithms.
For sure, same point though.
i dunno, i have gotten tons of real work done with LLM's. i just had o3 edit a contract and swap out pieces of it to make it work with SOW's instead of embedding the terms directly in the contract. i used to have to do that myself and have a lawyer review it. (i've been working with contracts for 30 years, i know enough now to know most basic contract law even though IANAL.) i've vibe coded a whole bunch of little things i would never have done myself or hired someone to do. i have had them extract data in seconds that would have taken forever. there is without question real utility in LLM's and they are also without question getting better very fast.
to compare that to NFT’s is pretty disingenuous. i don’t know anyone who has ever accomplished anything with an NFT. (i’m happy to be wrong about that, and i have yet to find a single example).
There is without question value to LLMs, I absolutely agree.
Trying to make them more than they are is the issue I have. Let them be great at crunching words, I’m all about that.
Pretending that OpenAI is worth billions of dollars is a joke, when I can get 90% of the value they provide for free, on my own mediocre hardware.
Ha-ha, this is very funny :) Say, have you ever tried seriously using AI tools for programming? Because if you have, and still believe this, I may have a bridge/Eiffel Tower/railroad to sell you.
The majority of my code over the last few months has been written by LLMs. Including systems I rely on for my business daily.
Maybe consider it's not all on the AI tools if they work for others but not for you.
Sure man, maybe also share that bit with your clients and see how excited they'll be to learn their vital code or infrastructure may be designed by a stochastic system (*reliable a solid fraction of the time).
My clients are perfectly happy about that, because they care about the results, not FUD. They know the quality of what I deliver from first-hand experience.
Human-written code also needs reviews, and is also frequently broken until subjected to testing, iteration, and reviews, and so our processes are built around proper qa, and proper reviews, and then the original source does not matter much.
It's however a lot easier to force an LLM into a straitjacket of enforced linters, enforced test-suite runs, enforced sanity checks, enforced processes at a level that human developers would quit over, and so as we build out the harness around the AI code generation, we're seeing the quality of that code increase a lot faster than the quality delivered by human developers. It still doesn't beat a good senior developer, but it does often deliver code that handles tasks I could never hand to my juniors.
(In fact, the harness I'm forcing my AI generated code through was written about 95%+ by an LLM, iteratively, with its own code being forced through the verification steps with every new iteration after the first 100 lines of code or so)
So to summarise - the quality of code you generate with LLMs is increasing a lot faster, but somehow never reaching senior level. How is that a lot faster? I mean, if it never reaches the (fairly modest) goal. But that's not the end of it. Your mid-junior LLMs are also enforcing quality gates and harnesses on the rest of your LLM mid-juniors. If only there were some proof for that, like a project demo, so it could at least look believable...
It's a lot faster compared to new developers, who still cost orders of magnitude more from day 1. It's not cost effective to hand every task to someone senior. I still have juniors on teams because in the long term we still need actual people with a path to becoming senior devs, but in financial terms they are now a drain.
You can feel free not to believe it, as I have no plans to open up my tooling anytime soon - though partly because I'm considering turning it into a service. In the meantime these tools are significantly improving the margins for my consulting, and the velocity increases steadily as every time we run into a problem we make the tooling revise its own system prompt or add additional checks to the harness it runs to avoid it next time.
A lot of it is very simple. E.g a lot of these tools can produce broken edits. They'll usually realise and fix them, but adding an edit tool that forces the code through syntax checks / linters for example saved a lot of pain. As does forcing regular test and coverage runs, not just on builds.
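The shape of the loop is roughly this - a simplified sketch, not the actual tooling; request_edit stands in for whatever model call you use, and rubocop/rspec are just the checks I happen to run:

    require "open3"

    # Minimal sketch of the harness loop. request_edit is a stand-in for
    # whatever model/API call actually produces and applies the edit.
    def request_edit(instruction, feedback)
      raise NotImplementedError, "wire this up to your model of choice"
    end

    def run(cmd)
      out, status = Open3.capture2e(*cmd)
      [status.success?, out]
    end

    CHECKS = [
      %w[rubocop],            # syntax + style: catches broken or duplicated edits
      %w[bundle exec rspec],  # the test suite has to stay green
    ].freeze

    def edit_until_green(instruction, max_attempts: 5)
      feedback = nil
      max_attempts.times do
        request_edit(instruction, feedback)
        failures = CHECKS.filter_map do |cmd|
          ok, out = run(cmd)
          out unless ok
        end
        return true if failures.empty?
        feedback = failures.join("\n")  # hand the errors straight back to the model
      end
      false  # still broken after max_attempts, escalate to a human
    end

Nothing clever in there; the value is entirely in making the checks unavoidable rather than optional.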
For one of my projects I now let this tooling edit without asking permission, and just answer yes/no to whether it can commit once it's ready. If no, I'll tell it why and review again when it thinks it's fixed things, but a majority of commit requests are now accepted on the first try.
For the same project I'm now also experimenting with asking the assistant to come up with a todo list of enhancements for it based on a high level goal, then work through it, with me just giving minor comments on the proposed list.
I'm vaguely tempted to let this assistant reload its own modified code when tests pass and leave it to work on itself for a while and see what comes of it. But I'd need to sandbox it first. It's already tried (and was stopped by a permissions check) to figure out how to restart itself to enable new functionality it had written, so it "understands" when it is working on itself.
But, by all means, you can choose to just treat this as fiction if it makes you feel better.
No, I am not disputing whatever productivity gains you seem to be getting. I was just curious whether LLMs feeding data into each other can work that well, knowing how long it took OpenAI to make ChatGPT properly count the number of "R"s in the word "strawberry". There is this effect called "Habsburg AI". I reckon the syntax-check and linting stuff is straightforward, as it adds a deterministic element, but what do you do about the trickier stuff, like dreamt-up functions and code packages? Unsafe practices like exposing sensitive data in cleartext, Linux commands that are downright the opposite of what was prompted, etc.? That comes up a fair amount of the time and I am not sure that LLMs are going to self-correct here without human input.
It doesn't stop them from making stupid mistakes. It does reduce the amount of time I have to spend on the stupid mistakes they know how to fix once the problem is pointed out to them, so that I can focus on smaller diffs of cleaner code.
E.g. a real example: The tooling I mentioned at one point early on made the correct functional change, but it's written in Ruby and Ruby allows defining methods multiple times in the same class - the later version just overrides the former. This would of course be a compilation error in most other languages. It's a weakness of using Ruby with a careless (or mindless) developer...
But Rubocop - a linter - will catch it. So forcing all changes through Rubocop and just returning the errors to the LLM made it recognise the mistake and delete the old method.
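A stripped-down illustration of the failure mode (class and method names invented):

    class Invoice
      def total
        line_items.sum(&:amount)
      end

      # ...further down, the model adds its "fix" as a new method instead of
      # editing the original. Ruby silently lets this later definition win.
      def total
        line_items.sum(&:amount) + tax
      end
    end

Rubocop reports the stale copy as Lint/DuplicateMethods, and returning that warning is enough for the model to clean it up.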
It lowers the cognitive load of the review. Instead of having to wade through and resolve a lot of cruft and make sense of unusually structured code, you can focus on the actual specific changes and subject those to more scrutiny.
And then my plan is to experiment with more semantic checks of the same style as what Rubocop uses, but less prescriptive, of the type "maybe you should pay extra attention here, and explain why this is correct/safe" etc. An example might be to trigger this for any change that involves reading a key or password field or card number whether or not there is a problem with it, and both trigger the LLM to "look twice" and indicate it as an area to pay extra attention to in a human review.
It doesn't need to be perfect, it just needs to provide enough of a harness to make it easier for the humans in the loop to spot the remaining issues.
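For what it's worth, that "pay extra attention" pass can start out as something as dumb as a keyword scan over the added lines of a diff (a sketch; the keyword list and wording are made up):

    # Hypothetical "look twice" pass: not a pass/fail gate, just a prompt
    # amplifier for any added line that touches sensitive-looking data.
    SENSITIVE = /password|secret|api_key|card_number|ssn/i

    def attention_notes(diff)
      diff.each_line.with_index(1).filter_map do |line, n|
        next unless line.start_with?("+") && line.match?(SENSITIVE)
        "Line #{n}: touches sensitive data - explain why this is safe, " \
          "and flag it for extra scrutiny in the human review."
      end
    end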
Right, so you understand that any dev who already uses for example Github Copilot with various code syntax extensions already achieves whatever it is that your new service is delivering? I'd spare myself the effort if I were you.
It didn't start with the intent of being a service; I started on it because there were a number of things that Copilot or tools like Claude Code don't do well enough, which annoyed me, and spending a few hours was sufficient to get to the point where it's now my primary coding assistant, because it works better for my stack and because I can evolve it further to solve the specific problems I need solved.
So, no, I'll keep doing this because doing this is already saving me effort for my other projects.
> written by LLMs
Writing code is often easier than reading it. I suspect that coders soon will face what translators face now: fixing machine output at 2x to 3x less pay.
I tried and they weren't that good. I'm gazing into the future a little.
> "Bullshit jobs" are the rubbish required to keep the paperwork tidy, assessed and filed.
It's also the jobs that involve keeping people happy somehow, which may not be "productive" in the most direct sense.
One class of people that needs to be kept happy are managers. What makes managers happy is not always what is actually most productive. What makes managers happy is their perception of what's most productive, or having their ideas about how to solve some problem addressed.
This does, in fact, result in companies paying people to do nothing useful. People get paid to do things that satisfy a need that managers have perceived.
AI is going to 10x the amount of bullshit, fully automating the process.
NONE of the bullshit jobs are going away, there will simply be bigger, more numerous bullshit.
- [deleted]
Keynes was talking about work in every sense, including household chores. We're well below 15 hours of chores a week by now, so that part came true.
Washing machines created a revolution where we can now expend a tenth of the human labour to wash the same amount of clothes as before. We now have more than 10 times as many clothes to wash.
I don't know if it's induced demand, revealed preference, or the Jevons paradox; maybe all 3.
> We now have more than 10 times as many clothes to wash.
OK, but I doubt we're washing 10 times as many clothes, unless people are wearing them for one hour between washes...
> We now have more than 10 times as many clothes to wash.
Citation needed.
I saw some research once that the hours women spend doing housework haven't changed. I think because of human nature, not anything to do with the tech.
That's nonsense. It used to take women a full workday per week just to wash clothes.
https://robinmarkphillips.com/household-appliances-made-life...
I've done some 3rd world travel without washing machines for a while and my laundry was once a week dunk stuff in the sink for 5 minutes with shampoo + rinse water, wring and hang up. I don't buy the whole day being necessary thing.
Well, now we can own more clothes! And we can wash them more often! And rather than specialist washerwomen, everyone can/must use the laundry-room robots!
[flagged]
We've got 10 whole hours left over for "actual" work!
(Quotes because I personally have a significantly harder time doing bloody housework...)
Clearly you don’t have children!
Life pro tip: teach your children to do chores.
> Life pro tip: teach your children to do chores.
Before teaching your children to do chores: x hours per week for chores
After teaching your children to do chores: y hours per week to have annoying discussions with the child, and X hours per week cautioning the children to do the chores and ensuring that they do the chores properly. Here X > x.
Additional time for you: -((X-x)+y), where X>x and additionally y > 0.
I did a lot of chores growing up. Looking back, X>x was true for the first few months of each new chore; but X died down to zero as time went on.
I was thinking it's a function of the social setting. Single bloke 1h/week. Couple 5h/week. With kids continuous. Or some such.
I imagine standards have also shifted. It just wouldn’t have been possible to wash a child’s clothes after one wear before the invention of the washing machine. People also had far less clothing that they could have even needed to wash.
Source? Keynes was a serious economist, not a charlatan futurist.
John Maynard Keynes (1930) - Economic Possibilities for our Grandchildren
> For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter-to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!
http://www.econ.yale.edu/smith/econ116a/keynes1.pdf
https://www.aspeninstitute.org/wp-content/uploads/files/cont...
As of now yes. But we are still in day 0.1 of GenAI. Do you think this will be the case when o3 models are 10x better and 100x cheaper? There will be a turning point but it’s not happened yet.
Yet we're what? 5 years into "AI will replace programmers in 6 months"?
10 years into "we'll have self driving cars next year"
We're 10 years into "it's just completely obvious that within 5 years deep learning is going to replace radiologists"
Moravec's paradox strikes again and again. But this time it's different and it's completely obvious now, right?
I basically agree with you, and I think the thing that is missing from a bunch of responses that disagree is that it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling. That is, most folks were pretty astounded by the gains you could get from just stuffing more training data into these models, but like someone who argues a 15 year old will be 50 feet tall based on the last 5 years' growth rate, people who are still arguing that past growth rates will continue apace don't seem to be honest (or aware) to me.
I'm not at all saying that it's impossible some improvement will be discovered in the future that allows AI progress to continue at a breakneck speed, but I am saying that the "progress will only accelerate" conclusion, based primarily on the progress since 2017 or so, is faulty reasoning.
> it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling
What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.
I don't know about the rest, but I spoke up because I didn't want to hit a brick wall, I want to keep going! I still want to keep going! But if accurate predictions (with good explanations) aren't a reason to shift resource allocation, then we just keep making the same mistake over and over. We let the conmen come in, and people get so excited by success that they go blind to the pitfalls.
And hey, I'm not saying give me money. This account is (mostly) anonymous. There are plenty of people that made accurate predictions and tried working in other directions but never got funding to test how their methods scale up. We say there are no alternatives, but nothing else has been given a tenth of the effort. Apples and oranges...
> What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.
You need to model the business world and management more like a flock of sheep being herded by forces that mostly don't have to do with what actually is going to happen in future. It makes a lot more sense.
> mostly don't have to do with what actually is going to happen
Yet I'm talking about what did happen.
I'm saying we should have memory. Look at predictions people make. Reward accurate ones, don't reward failures. Right now we reward whoever makes the craziest predictions. It hasn't always been this way, so we should go back to less crazy
Practically no one is herded by what is actually going to happen, hardly even by what is expected to happen. Business pretends that it is driven by expectations, but it is mostly driven by the past, as in financial statements. What is the bonus we can get this year? There is of course strategic thinking, and I don't want to discount that part of business, but it is not what drives most of these "AI as a cost-saving measure" decisions. This is the unimaginative part of AI application, and as such it is relegated to the unimaginative managers.
> It is difficult to get a man to understand something, when his salary depends on his not understanding it.
It's all a big hype bubble, and not only is no one in the industry willing to pop it, they actively defend against popping a bubble that is clearly rupturing on its own. It's symptomatic of how modern businesses no longer care about a proper 10-year portfolio and care more about how to make the next quarter look good.
There's just no skin in the game, and everyone's ransacking before the inevitable fire instead of figuring out how to prevent the fire to begin with.
> What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.
Those people always do that. Shouting about cryptocurrencies and NFTs from the rooftops 3-4 years ago, now completely gone.
I suspect they're the same people, basically get rich quick schemers.
Sure, you were right.
But if you had been wrong and we would now have had superintelligence, the upside for its owners would presumably be great.
... Or at least that's the hypothesis. As a matter of fact intelligence is only somewhat useful in the real world :-)
I am not sure the owners would keep being that in case of real superintelligence, though.
I don't see any wall. Gemini 2.5 and o3/o4 are incredible improvements. Gen AI is miles ahead of where it was a year ago, which was miles ahead of where it was two years ago.
The actual LLM part isn't much better than a year ago. What's better is that they've added additional logic and made it possible to intertwine traditional, expert-system style AI plus the power of the internet to augment LLMs so that they're actually useful.
This is an improvement for sure, but LLMs themselves are definitely hitting a wall. It was predicted that scaling alone would allow them to reach AGI level.
> It was predicted that scaling alone would allow them to reach AGI level.
This is a genuine attempt to inform myself. Could you point to those sorts of claims from experts at the top?
There were definitely people "at the top" who were essentially arguing that more scale would get you to AGI - Ilya Sutskever of OpenAI comes to mind (e.g. "next-token prediction is enough for AGI").
There were definitely many other prominent researchers who vehemently disagreed, e.g. Yann LeCun. But it's very hard for a layperson (or, for that matter, another expert) to determine who is or would be "right" in this situation - most of these people have strong personalities to put it mildly, and they often have vested interests in pushing their preferred approach and view of how AI does/should work.
The improvements have less to do with scaling than adding new techniques like better fine tuning and reinforcement learning. The infinite scaling we were promised, that only required more content and more compute to reach god tier has indeed hit a wall.
I probably wasn't paying enough attention, but I don't remember that being the dominating claim that you're suggesting. Infinite scaling?
People were originally very surprised that you could get so much functionality by just pumping more data and adding more parameters to models. What made OpenAI initially so successful is that they were the first company willing to make big bets on these huge training runs.
After their success, I definitely saw a ton of blog posts and general "AI chatter" claiming that to get to AGI all you really needed to do (obviously I'm simplifying a bit here) was get more data and add more parameters, more "experts", etc. Heck, OpenAI had to scale back its pronouncements (GPT-5 essentially became 4.5) when they found they weren't getting the performance/functionality advances they expected after massively scaling up their model.
I basically agree with you also, but I have a somewhat contrarian view of scaling -> brick wall. I feel like applications of powerful local models is stagnating, perhaps because Apple has not done a good job so far with Apple Intelligence.
A year ago I expected a golden age of local model intelligence integrated into most software tools, and more powerful commercial tools like Google Jules to be something used perhaps 2 or 3 times a week for specific difficult tasks.
That said, my view of the future is probably now wrong, I am just saying what I expected.
> Yet we're what? 5 years into "AI will replace programmers in 6 months"?
Realistically, we're 2.5 years into it at most.
No, the hype cycle started around 2019, slowly at first. The technology this is built with is more like 20 years old, so no, we are not 2.5 years at most really.
If you can quote anyone well-known saying we'd be replacing programmers in 6 months back in 2019, I'd be interested to read it.
we're 2.5 years into the current hype trend, no way was this mainstream until at least 2022
GPT3 dropped in 2020. That's when it hit mainstream
GPT3 wasn't that impressive. GPT 3.5 is when it became "oh wow, this could really change things," and that was 2022.
It certainly made a heavy impression on this tiny new outlet called NYT in 2020 : https://archive.is/QtpMT
GPT-3 shook the research world but it was by no means mainstream until the ChatGPT release in Nov 2022.
I feel like no one was really talking about this stuff until midjourney and dalle, but I can agree to disagree
Four years into people mocking "we'll have self driving cars next year" while they are on the street daily driving around SF.
They are self-driving the same way a tram or subway can be self-driving. They operate within a tightly bounded, designated area. They're not competing with human drivers. Still a marvel of human engineering, just quite expensive compared with other forms of public transport. It just doesn't compete in the same space and likely never will.
They are literally competing with human uber drivers in the area they operate and also having a much lower crash and injury rate.
I admit they don't operate everywhere - only certain routes. Still they are undoubtedly cars that drive themselves.
I imagine it'll be the same with AGI. We'll have robots / AIs that are much smarter than the average human and people will be saying they don't count because humans win X Factor or something.
Self-driving vehicles can only exist in cities of extreme wealth like SF. Try running them in Philadelphia and see what happens.
How are they competing, if their routes are limited?
The cotton gin processed short fiber cotton, but not long fiber cotton.
Did the cotton gin therefore not compete with human labor?
They're driving, but not well in my (limited) interactions with them. I had a waymo run me completely out of my lane a couple months ago as it interpreted 2 lanes of left turn as an extra wide lane instead (or, worse, changed lanes during the turn without a blinker or checking its sensors, though that seems unlikely).
Yes, but ...
The argument that self-driving cars should be allowed on public roads as long as they are statistically as safe as human drivers (on average) seems valid, but of course none of these cars have AGI... they perform well in the anticipated simulator conditions in which they were trained (as long as they have the necessary sensors, e.g. Waymo's lidar, to read the environment in reliable fashion), but will not perform well in emergency/unanticipated conditions they were not trained on. Even outside of emergencies, Waymos still sometimes need to "phone home" for remote assistance in knowing what to do.
So, yes, they are out there, perhaps as safe on average as a human (I'd be interested to see a breakdown of the stats), but I'd not personally be comfortable riding in one since I'm not senile, drunk, teenager, hothead, distracted (using phone while driving), etc - not part of the class that are dragging the human safety stats down. I'd also not trust a Tesla where penny pinching, or just arrogant stupidity, has resulted in a sensor-poor design liable to failure modes like running into parked trucks.
> I'd not personally be comfortable riding in one since I'm not senile, drunk, teenager, hothead, distracted (using phone while driving), etc - not part of the class that are dragging the human safety stats down.
The challenge is that most people think they’re better than average drivers.
I'm not sure what the "challenge" is there, but certainly true in terms of human psychology.
My point was that if you are part of one of these accident-prone groups, you are certainly worse than average, and are probably safer (both for yourself, and everyone around you) in a Waymo. However, if you are an intelligent non-impaired experienced driver, then maybe not, and almost certainly not if we're talking about emergency and dangerous situations which is where it really matters.
How can you know if you're a good driver in an emergency situation? We don't exactly get a lot of practice.
Sure, you don't know how well any specific driver is going to react in an emergency situation, and some are going to be far worse than others (e.g. panicking, or not thinking quickly enough), but the human has the advantage of general intelligence and therefore NOT having to rely on having had practice at the specific circumstance they find themselves in.
A recent example - a few weeks ago I was following another car in making a turn down a side road, when suddenly that car stops dead (for no externally apparent reason), and starts backing up fast about to hit me. I immediately hit my horn and prepare to back up myself to get out of the way, since it was obvious to me - as a human - that they didn't realize I was there, and without intervention would hit me.
Driving away I watch the car in my rear view mirror and see it pull a U-turn to get back out of the side road, making it apparent why they had stopped before. I learned something, but of course the driverless car is incapable of learning, and certainly has no theory of mind, and would behave same as last time - good or bad - if something similar happened again.
In my view, as long as companies don't want to be held liable for an accident, their cars shouldn't be on roads. They need to be extremely confident, to the point of putting their money where their mouths are. That's true "safety".
That's the main difference with a human driver. If I take an Uber and we crash, that driver is liable. Waymo would fight tooth and nail to blame anything else.
Mercedes is doing this for specific places and conditions.
Well, it depends on the details. I'd trust a Waymo as much as an Uber but I'm pretty skeptical of the Tesla stuff they are launching in Austin.
I'm quoting Elon.
I don't care about SF. I care about what I can buy as a typical American. Not as an enthusiast in one of the most technologically advanced cities on the planet.
They’re in other cities too…
Other cities still aren't available to the average American.
You read the words but missed their meaning
[dead]
- [deleted]
As far as I've seen, we appear to already have self-driving vehicles; the main barriers are legal and regulatory concerns rather than the tech. If a company wanted to put a car on the road that beetles around by itself, there aren't any crazy technical challenges to doing that - the issue is that even if it were safer than a human driver, the company would have a lot of liability problems.
This is just not true. Waymo, Mobileye, Tesla and the Chinese companies are not bottlenecked by regulations but by high failure rates and/or economics.
They are only self-driving in the very controlled environments of a few very well-mapped cities, with good roads and in good weather.
And it took what, two decades, to get there. So no, we don't have self-driving, not even close. Those examples look more like hard-coded solutions for custom test cases.
What? If that stuff works, no liability will ever need to be invoked. How can you claim that it works and claim liability problems at the same time?
> the main barriers are legal and regulatory concerns rather than the tech
they have failed in SF, Phoenix, and other cities that rolled out the red carpet for them
Pretty solid evidence that self driving cars already exist though.
As prototypes, yes. But that's like pointing to Japanese robots in the 80's and expecting robot butlers any day now. Or maybe Boston dynamics 10 years ago. Or when OpenAI was into robotics.
There's a big gap between seeing something work in the lab and being ready for real world use. I know we do this in software, but that's a very abnormal thing (and honestly, maybe not the best)
Waymo is doing 250k paid rides/week.
When people say “we'll have self-driving cars next year”, I understand that self-driving cars will be widespread in the developed world and accessible to those who pay a premium. Given the status quo, I find it pointless to discuss the semantics of whether they exist or not.
Especially considering it would be weird to say "we'll have <something> next year" when we've technically had it for decades.
And more specifically, I'm referencing Elon, where the context is that it's going to be a software push into Teslas that people already own.
You're confusing "exist" with "viable".
When someone talks about "having" self-driving cars next year, they're not talking about what are essentially pilot programs.
I don't think that is a reasonable generalisation. A lot of people would have been talking about the first person to take a real trip in a car that drives itself. A record that is in the past.
Not to mention that HN gets really tetchy about achieving specifically SAE Level 5, when in practice some pretty basic driver-assist tools are probably closer to what people meant. It reminds me of a gentleman I ran into who was convinced that the OpenAI Dota bot with a >99% win rate couldn't really be said to be playing the game. If someone can take their hands off the wheel for 10 minutes we're there in a common-language sense; the human in the car isn't actively in control.
Good point. On the "exist" interpretation, we've "had" flying cars for several decades.
I remember one reason Phoenix was chosen as a trial location was that it was supposed to be one of the easiest places to drive.
It's pretty damning that it failed there.
Yeah, it’s a big grid with wide streets. Did it fail there? If so I imagine it’s just due to lack of business—there are almost no taxis in Phoenix. Mostly just from the airport.
100% this. I always argue that groundbreaking technologies are clearly groundbreaking from the start. It's a bit like a film: if you have to struggle to get into it in the first few minutes, you may as well spare yourself the rest.
[flagged]
I consulted a radiologist more than 5 years after Hinton said that it was completely obvious that radiologists would be replaced by AI in 5 years. I strongly suspect they were not an AI.
Why do I think this?
1) They smelled slightly funny. 2) They got the diagnosis wrong.
OK maybe #2 is a red herring. But I stand by the other reason.
I know a radiologist and talk a decent bit about AI usage in the field. Every radiologist today is making heavy use of AI. They pre-screen everything, and from what I understand it has led to massive productivity gains. It hasn't led to job losses yet, but there's so much money on the line that it really feels to me like we're just waiting for the straw that breaks the camel's back. No one wants to be the first to fully get rid of radiologists, but once one hospital does, the rest will quickly follow suit.
One word - liability.
The quote appears to be “We should stop training radiologists now, it’s just completely obvious within five years deep learning is going to do better than radiologists.”
So there's some room for interpretation, the weaker interpretation is less radical (that AI could beat humans in radiology tasks in 5 years).
I named 3 things...
You're going to have to specify which 2 you think happened
I have a fusion reactor to sell to you.
Some people are ahead of you by 3.5 years [0]:
> Helion has a clear path to net electricity by 2024, and has a long-term goal of delivering electricity for 1 cent per kilowatt-hour. (!)
You're missing the big picture. Helion can still make their goal. Once they have a working fusion reactor they can use the energy to build a time machine.
Of course, silly me. I should put more practice time into 4D chess.
We're halfway into 2025 and you're citing a goal they should have reached by 2024. Did they reach that goal?
They didn't reach that goal. Why would they bother reaching an easier goal when they could shoot for a bigger one? /s Their new goal is to build a fusion plant by 2028 [0].
[0] https://observer.com/2025/01/sam-altman-nuclear-fusion-start...
Where did it happen?
They try it, but it’s not reliable
did you by any chance send money to a Nigerian prince?
Over ten years for the "we'll have self-driving cars" spiel.
We're already heading toward the sigmoid plateau. The GPT-3 to 4 shift was massive. Nothing since has touched that. I could easily go back to the models I was using 1-2 years ago with little impact on my work.
I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved. But the base model powering the whole operation seems stuck.
> I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved
It really hasn't.
The problem is that a GenAI system needs to not only understand the large codebase but also the latest stable version of every transitive dependency it depends on. Which is typically in the order of hundreds or thousands.
Having it build a component with 10 year old, deprecated, CVE-riddled libraries is of limited use especially when libraries tend to be upgraded in interconnected waves. And so that component will likely not even work anyway.
I was assured that MCP was going to solve all of this but nope.
How did you think MCP was going to solve the issue of a large number of outdated dependencies?
That large number of outdated dependencies is in the LLM "index", which can't be rapidly refreshed because of the training costs.
MCP would allow it to instead get this information at run-time from language servers, dependency repositories etc. But it hasn't proven to be effective.
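To be clear, the lookup itself is the easy part; something like this (hitting what I believe is rubygems.org's latest-version endpoint) gives the model today's answer rather than its training-era one. The hard part is getting the model to actually use it consistently:

    require "json"
    require "net/http"

    # Ask the registry, not the model, what "latest" means today.
    def latest_gem_version(name)
      uri = URI("https://rubygems.org/api/v1/versions/#{name}/latest.json")
      JSON.parse(Net::HTTP.get(uri)).fetch("version")
    end

    puts latest_gem_version("rails")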
> I could easily go back to the models I was using 1-2 years ago with little impact on my work.
I can't. GPT-4 was useless for me for software development. Claude 4 is not.
Interesting, what type of dev work do you do? Performance does vary widely across languages and domains.
Embedded software for robotics.
I use LLMs daily and love them, but at the current rate of progress it's just not really something worth worrying about. Those who are hysterical about AI seem to think LLMs are getting exponentially better when in fact diminishing returns are hitting hard. Could some new innovation change that? It's possible, but it's not inevitable, or at least not necessarily imminent.
I agree that the core models are only going to see slow progression from here on out, until something revolutionary happens... which might be a year from now, or maybe twenty years. Who knows.
But we are going to see a huge explosion in how those models are integrated into the rest of the tech ecosystem. Things that a current model could do right now, if only your car/watch/videogame/heart monitor/stuffed animal had a good working interface into an AI.
Not necessarily looking forward to that, but that's where the growth will come.
How are we in 0.1 of GenAI ? It's been developed for nearly a decade now.
And each successive model that has been released has done nothing to fundamentally change the use cases that the technology can be applied to i.e. those which are tolerant of a large percentage of incoherent mistakes. Which isn't all that many.
So you can keep your 10x better and 100x cheaper models because they are of limited usefulness let alone being a turning point for anything.
A decade?
The explosion of funding, awareness etc only happened after gpt-3 launch
Funding is behind the curve. Social networks existed in 2003 and Facebook became a billion dollar company a decade later. AI horror fantasies from the 90’s still haven’t come true. There is no god, there is no Skynet.
AlphaGo beating the top human player was in 2016. To my memory, that was one of the first public breakthroughs of the new era of machine learning.
Around 2010 when I was at university, a friend did their undergraduate thesis on neural networks. Among our cohort it was seen as a weird choice and a bit of a dead-end from the last AI winter.
That was five years ago not yesterday.
I didn't say yesterday.
Nonetheless it took OpenAI until Nov 2022 to reach 1 million users.
The overall awareness and breakthrough probably didn't come in 2020.
I think they will be 10-100x cheaper; I'd be really surprised if we even doubled the quality, though.
How does it work if they get 10x better in 10 years ? Everything else will have already moved on and the actual technology shift will come from elsewhere.
Basically, what if GenAI is the Minitel and what we want is the internet.
10× better by what metric? Progress on LLMs has been amazing but already appears to be slowing down.
All these folks are once again seeing the first 1/4 of a sigmoid curve and extrapolating to infinity.
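For reference, the early stretch of a logistic curve really is indistinguishable from an exponential, which is why the extrapolation feels so convincing:

    f(t) = \frac{L}{1 + e^{-k(t - t_0)}}, \qquad
    f(t) \approx L\, e^{k(t - t_0)} \quad \text{for } t \ll t_0

Near t_0 the growth looks roughly linear, past it the curve flattens toward L, and nothing in the early data tells you where t_0 or L sit.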
No doubt from me that it’s a sigmoid, but how high is the plateau? That’s also hard to know from early in the process, but it would be surprising if there’s not a fair bit of progress left to go.
Human brains seem like an existence proof for what’s possible, but it would be surprising if humans also represent the farthest physical limits of what’s technologically possible without the constraints of biology (hip size, energy budget etc).
Biological muscles are proof that you can make incredibly small and forceful actuators. But the state of robotics is nowhere near them, because the fundamental construction of every robotic actuator is completely different.
We’ve been building actuators for 100s of years and we still haven’t got anything comparable to a muscle. And even if you build a better hydraulic ram or brushless motor driven linear actuator you will still never achieve the same kind of behaviour, because the technologies are fundamentally different.
I don’t know where the ceiling of LLM performance will be, but as the building blocks are fundamentally different to those of biological computers, it seems unlikely that the limits will be in any way linked to those of the human brain. In much the same way the best hydraulic ram has completely different qualities to a human arm. In some dimensions it’s many orders of magnitudes better, but in others it’s much much worse.
Biological muscles come with a lot of baggage, very constrained operating environments, and limited endurance.
It's not just that 'we don't know how to build them', it's that the actuators aren't a standalone part - and we don't know how to build (or maintain/run in industrial environments!) the 'other stuff' economically either.
I don't think it's hard to know. We're already seeing several signs of being near the plateau in terms of capabilities. Most big breakthroughs these days seem to be in areas where we haven't spent the effort on training and model engineering, like the recent improvements in video generation. So of course we could get improvements in areas where we haven't tried to use ML yet.
For text generation, it seems like the fast progress was mainly due to feeding the models exponentially more data and exponentially more compute power. But we know that the growth in data is over. The growth in compute has shifted from a steep curve (just buy more chips) to a slow curve (you have to build exponentially more factories if you want exponentially more chips).
I'm sure we will have big improvements in efficiency. I'm sure nearly everyone will use good LLMs to support them in their work, and they may even be able to do all they need on-device. But that doesn't make the models significantly smarter.
The wonderful thing about a sigmoid is that, just as it seems like it's going exponential, it goes back to linear. So I'd guess we're not going to see 1000x from here - I could be wrong, but I think the low hanging fruit has been picked. I would be surprised in 10 years if AI were 100x better than it is now (per watt, maybe, since energy devoted to computing is essentially the limiting factor)
The thing about the latter 1/3rd of a sigmoid curve is, you're still making good progress, it's just not easy any more. The returns have begun to diminish, and I do think you could argue that's already happening for LLMs.
Progress so far has been half and half technique and brute force. Overall technique has now settled for a few years, so that's mostly in the tweaking phase. Brute force doesn't scale by itself, and semiconductors have been running into a wall for the last few years. Those (plus stagnating outcomes) seem like decent reasons to suspect the plateau is nigh.
Human brains are easy to do, just run evolution for neural networks.
with autonomous vehicles, the narrative of imperceptibly slow incremental change, of chasing 9's, is still the zeitgeist despite an actual 10x improvement over human fatality rates already existing.
There is a lag in how humans are reacting to AI which is probably a reflexive aspect of human nature. There are so many strategies being employed to minimize progress in a technology which 3 years ago did not exist and now represents a frontier of countless individual disciplines.
This is my favorite thing to point out from the day we started talking about autonomous vehicles on tech sites.
If you took a Tesla or a Waymo and dropped it into a tier-2 city in India, it would stop moving.
Driving data is cultural data, not data about pure physics.
You will never get to full self driving, even with more processing power, because the underlying assumptions are incorrect. Doing more of the same thing, will not achieve the stated goal of full self driving.
You would need to have something like networked driving, or government supported networks of driving information, to deal with the cultural factor.
Same with GenAI - the tooling factor will not magically solve the people, process, power and economic factors.
> You would need to have something like networked driving, or government supported networks of driving information, to deal with the cultural factor.
Or actual intelligence. That observes its surroundings and learns what's going on. That can solve generic problems. Which is the definition of intelligence. One of the obvious proofs that what everybody is calling "AI" is fundamentally not intelligent, so it's a blatant misnomer.
One of my favorite things to question about autonomous driving is the goalposts. What do you mean the “stated goal of full self driving”, which is unachievable? Any vehicle, anywhere in the world, in any conditions? That seems an absurd goal that ignores the very real value in having vehicles that do not require drivers and are safer than humans but are limited to certain regions.
Absolutely driving is cultural (all things people do are cultural) but given 10’s of millions of miles driven by Waymo, clearly it has managed the cultural factor in the places they have been deployed. Modern autonomous driving is about how people drive far more than the rules of the road, even on the highly regulated streets of western countries. Absolutely the constraints of driving in Chennai are different, but what is fundamentally different? What leads to an impossible leap in processing power to operate there?
> What do you mean the “stated goal of full self driving”, which is unachievable? Any vehicle, anywhere in the world, in any conditions? That seems an absurd goal that ignores the very real value in having vehicles that do not require drivers and are safer than humans but are limited to certain regions.
I definitely recall reading some thinkpieces along the lines of "In the year 203X, there will be no more human drivers in America!" which was and still is clearly absurd. Just about any stupidly high goalpost you can think of has been uttered by someone in the world early on.
Anyway, I'd be interested in a breakdown on reliability figures in urban vs. suburban vs. rural environments, if there is such a thing, and not just the shallow take of "everything outside cities is trivial!" I sometimes see. Waymo is very heavily skewed toward (a short list of) cities, so I'd question whether that's just a matter of policy, or whether there are distinct challenges outside of them. Self-driving cars that only work in cities would be useful to people living there, but they wouldn't displace the majority of human driving-miles like some want them to.
I mean, even assuming the technical challenges to self-driving can be solved, it is obvious that there will still be human drivers because some humans enjoy driving, just as there are still people who enjoy riding horses even after cars replaced horses for normal transport purposes. Although as with horses, it is possible that human driving will be seen as secondary and limited to minor roads in the future.
- [deleted]
I'd appreciate it if we didn't hurry past the acknowledgement that self-driving will be a cultural artifact. It's been championed as a purely technical one, and pointing this out has been unpopular since day 1, because it didn't gel with the zeitgeist.
As others will attest, when adherence to driving rules is spotty, behavior is highly variable and unpredictable. You need a degree of straight-up aggression if you want to be able to handle an auto driver who is cheating the laws of physics.
Another example of something that's obvious based on crimes in India: people can and will come up to your car during a traffic jam, tap your chassis to make it sound like there was an impact, and then snatch your phone from the dashboard when you roll your window down to find out what happened.
This is simply to illustrate and contrast how pared down technical intuitions of "driving" are, when it comes to self driving discussions.
This is why I think level 5 is simply not happening, unless we redefine what self-driving is, or the approach to achieving it. I feel there's more to be had from a centralized traffic orchestration network that supplements autonomous traffic, rather than trying to solve it onboard the vehicle.
Why couldn’t an autonomous vehicle adapt to different cultures? American driving culture has specific qualities and elements to learn, same with India or any other country.
Do you really think Waymos in SF operate solely on physics? There are volumes of data on driver behavior, when to pass, change lanes, react to aggressive drivers, etc.
Yeah exactly. It’s kind of absurd to take the position that it’s impossible to have “full self driving” because Indian driving is different than American driving. You can just change the model you’re using. You can have the model learn on the fly. There are so many possibilities.
Because this statement, unfortunately, ends up moving the underlying goal posts about what self driving IS.
And the point that I am making, is that this view was never baked into the original vision of self driving, resulting in predictions of a velocity that was simply impossible.
Physical reality does not have vibes, and is more amenable to prediction, than human behavior. Or Cow behavior, or wildlife if I were to include some other places.
Marketers gonna market. But if we ignore the semantics of what full self driving actually means for a minute, there is still a lot of possibilities for self driving in the future. It takes longer than we perceive initially because we don’t have insight into the nuances needed to achieve these things. It’s like when you plan a software project, you think it’s going to take less time than it does because you don’t have a detailed view until you’re already in the weeds.
To quote someone else, if my grandmother had wheels, she would be a bicycle.
This is a semantic discussion, because it is about what people mean when they talk about self driving.
Just ditching the meaning is unfair, because goddamit, the self driving dream was awesome. I am hoping to be proved wrong, but not because we moved our definition.
Carve a separate category out, which articulates the updated assumptions. Redefining it is a cop out and dare I say it, unbecoming of the original ambition.
Networked Autonomous vehicles?
"If you took a Tesla or a Waymo and dropped into into a tier 2 city in India, it will stop moving."
Lol. If you dropped the average westerner into Chennai, they would either: a) stop moving b) kill someone
> a technology which 3 years ago did not exist
Decades of machine learning research would like to have a word.
Frankly, we don't know. That "turning point" that seemed so close for many tech, never came for some of them. Think 3D-printing that was supposed to take over manufacturing. Or self-driving, that is "just around the corner" for a decade now. And still is probably a decade away. Only time will tell if GenAI/LLMs are color TV or 3D TV.
> Think 3D-printing that was supposed to take over manufacturing.
3D printing is making huge progress in heavy industries. It’s not sexy and does not make headlines but it absolutely is happening. It won’t replace traditional manufacturing at huge scales (either large pieces or very high throughput). But it’s bringing costs way down for fiddly parts or replacements. It is also affecting designs, which can be made simpler by using complex pieces that cannot be produced otherwise. It is not taking over, because it is not a silver bullet, but it is now indispensable in several industries.
You're misunderstanding the parent's complaint, and frankly the complaints about AI. Certainly 3D printing is powerful and has changed things. But you're forgetting that 30 years ago people were saying there would be one in every house, because a printer can print a printer, and that this would revolutionize everything because you could just print anything at home.
The same thing with AI. You'd be blind or lying if you said it hasn't advanced a lot. People aren't denying that. But people are fed up with constantly being promised the moon and getting a cheap plastic replica instead.
The tech is rapidly advancing and doing good. But it just can't keep up with the bubble of hype. That's the problem. The hype, not the tech.
Frankly, the hype harms the tech too. We can't solve problems with the tech if we're just throwing most of our money at vaporware. I'm upset with the hype BECAUSE I like the tech.
So don't conflate the two. Make sure you understand what you're arguing against. Because it sounds like we should be on the same team, not arguing against one another. That just helps the people selling vaporware.
>Think 3D-printing that was supposed to take over manufacturing
This was never the case, and this is obvious to anyone who has ever been to a factory doing mass-produced plastics.
>Or self-driving, that is "just around the corner" for a decade now.
But it really is around the corner; all that remains is to accept it. That is, to start building and modifying the road infrastructure and changing the traffic rules to enable effective integration of self-driving cars into road traffic.
What modifications to infrastructure are you anticipating needing?
> 5 years into "AI will replace programmers in 6 months"?
Programmers that don't use AI will get replaced by those that do (not just by mandate, but by performance).
> 10 years into "we'll have self driving cars next year"
They're here now. Waymo does 250K paid rides/week.
How have you measured this performance boost?
There's a lot of "when" people are betting on, and not a lot of action to back it. If "when" is 20 years, then I still got plenty career ahead of me before I need to worry about that.
Remember when RPA was going to replace everyone?
Or low-code / no-code?
If not when.
> Do you think this will be the case when o3 models are 10x better and 100x cheaper?
why don't you bring it up then.
> There will be a turning point but it’s not happened yet.
do you know something that rest of us don't ?
ZIRP had little to do with it. Tech is less levered than any other major industry. What happened is that growth expectations for large tech companies were way out of line with reality and finally came back down to earth when the market finally realized that the big tech cos are actually mature profitable companies and not just big startups. The fact that this happened at the same time ZIRP ended is a coincidence.
Saw something similar the other day. X was awash with stories that IBM was letting go several thousand people in their HR dept. due to AI. Then over the course of the day the story shifted to IBM outsourcing them all to India. Was a very interesting transition; seemed intentional.
IBM seems to have outsourced recruiting to Indian firms too, and it's awful. The accounts who contact me on LinkedIn are grossly unprofessional and downright nasty.
- [deleted]
> because he simply thought he could run a lot leaner
Because he suddenly had to pay interest on that gigantic loan he (and his business associates) took to buy Twitter.
It may not be the only reason for everything that happened, but it sure is simple and has some very good explanatory powers.
Other companies have different reasons to cut costs, but the incentive is still there.
Stocks are valued against the risk free interest, or so the saying goes.
Doubling the interest rate from .1% to .2% already does a lot to your DCF models, and in this case we went from zero (or in some cases negative) to several percentage points. Of course stock prices tanked. That's what any schoolbook will tell you, and that's what any investor will expect.
Companies thus have to start turning dials and adjust parameters to make number go up again.
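Toy numbers, purely illustrative:

    # Toy DCF: present value of $100/year for 10 years at two discount rates.
    def present_value(cash_flow, rate, years)
      (1..years).sum { |t| cash_flow / (1.0 + rate)**t }
    end

    [0.001, 0.05].each do |r|
      puts format("rate %.1f%% -> PV $%.0f", r * 100, present_value(100, r, 10))
    end
    # ~$995 at 0.1% vs ~$772 at 5%, and the gap widens the further out the
    # cash flows are, which is exactly where growth-stock value lives.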
Why would you interpret data cut off at 2020 so that you're just looking at a covid phenomenon? The buttons don't seem to do anything on that site, but why not consider 2010-2025?
That said, the vibe has definitely shifted. I started working in software in uni ~2009 and every job I've had, I'd applied for <10 positions and got a couple offers. Now, I barely get responses despite 10x the skills and experience I had back then.
Though I don't think AI has anything to do with it, probably more the explosion of cheap software labor on the global market, and you have to compete with the whole world for a job in your own city.
Kinda feels like some major part of the gravy train is up.
It looks like that specific graph only starts in 2020...
Why not just find one that starts in 2022 then. It would look even more dire.
FRED continues to amaze me with the kind of data they have available.
That's from Indeed. And, Indeed has fewer job postings overall [https://fred.stlouisfed.org/series/IHLIDXUS]. Should we normalize the software jobs with the total number of Indeed postings? Is Indeed getting less popular or more popular over this time period? Data is complicated
Look at that graph again. It's indexed to 100 in Feb 1, 2020. It's now at 106. In other words, after all the pandemic madness, the total number of job postings on indeed is slightly larger than it was before, not smaller.
But for software, it's a lot smaller.
This website has its own graph which looks different.
https://www.trueup.io/job-trend
I have never gone to Indeed to apply for a job.
> People a decade from now will think Elon slashed Twitter's employee count by 90% because of some AI initiative, and not because he simply thought he could run a lot leaner.
That part is so overblown. Twitter was still trying to hit moonshots. X is basically in "keep the lights on" mode, as Musk doesn't need more. Yeah, if Google decides it doesn't want to grow anymore, it can probably cut its workforce by 90%. And it will be as irrelevant as IBM within at most 10 years.
What moonshots has Twitter gone for in the last decade? Feature velocity is also higher since the acquisition.
"Moonshots" was probably a bad term. Twitter devs used to be very active in open source, in Scala, actors, etc in particular. Fairly sure that's all dead. From most reports the majority of current Twitter devs are basically visa-shackled to the company.
What happened to X, the payment app?
ZIRP jobs, n., jobs the compensation for which is derived from zero interest loans, often in the form of venture capital, instead of reserves, profits or other sources
"When interest rates return to normal levels, the ZIRP jobs will disappear." -- Wall Street analyst
Macroeconomic policy always changes, recessions come and go, but it's not a permanent change in the way e-commerce or AI is.
Honestly, if anything I think AI is going to reverse the trend. Someone is going to have to be hired to clean up after it.
I think they said that about outsourcing software dev jobs. The reality is somewhere in the middle. extreme cases will need cleanup but overall it's here to stay, maybe with more babysitting.
I think the reality is Lemon Market Economics. We'll sacrifice quality for price. People want better quality but the truth is that it's a very information asymmetric game and it's really hard to tell quality. If it wasn't, we could all just rely on Amazon reviews and tech reviewers. But without informed consumers, price is all that matters even if it creates a market nobody wants.
If anyone will actually bother with cleaning up.
That's the impression I got. Things overall just get worse in quality because people rely too much on low wages and on copy-pasting LLM answers.
I think that's true in software development. A lot of the focus is on coding because that's really the domain of the people interested in AI, because ultimately they ARE software. But the killer app isn't software, it's anything where the operation is formulaic, but the formula can be tedious to figure out, but once you know it you can confirm that it's correct by working backwards. Software has far too many variables, not least of which is the end user. On the other hand things like accounting, finance, and engineering are far more suitable for trained models and back testing for conformity.
Get worse for who? The ruling class will simply never care how bad things get for working people if things are getting better for the ruling class.
The central problem with this statement is that we expect others to care, but we do not expect this from ourselves.
We have agency, whether we are brainwashed or not. If we cared about ourselves, then we wouldn't need another class, or race, or whatever other grouping, to do this for us.
I meant regular products, as an example: if I log in to Bitpanda in the browser, the parts that should hold the translated text hold the translation keys instead. There are countless examples like that, and many security issues as well.
Regarding class struggle, I think class division has always existed, but we, the masses, have all the tools to improve our situation.
The flaw with the ZIRP narrative is that companies managed to raise more money than ever before the moment they had a somewhat believable narrative instead of the crypto/web3/metaverse nonsense.
Yes. Tech is clearly a beneficiary of the Cantillon Effect.
Always disheartening how much people forget and tolerate the underlying deliberate human absurdity that created these events.
Almost no one has seen a world where the price of money wasn't centrally planned, with a committee of experts deciding it based on gut feel, like they did in command economies such as the Soviet Union.
And then thousands of people's lives are disrupted as the interest rate swings wildly due purely to government action (corona lockdowns and the Fed's ZIRP response), and it all somehow just ends up with people talking about AI instead.
The true wrongdoers get absolutely no consequences, and we all just carry on like there's no problem. Often because our taxes go to paying hordes of academics and economists to produce layers and layers of sophisticated propaganda that of course this system is the best one.
Absurd and shitty world.
It's simply the old Capital vs Labor struggle. CEOs and VCs all sing in the same choir, and for the past 3 years the tune is "be leaner".
p.s.: I'm a big fan of yours on Twitter.
Except Labor in Tech is unique in that it has zero class consciousness and often actively roots for their exploiters.
If we were to unionize, we could force this machine to a halt and shift the balance of power back in our favor.
But we don't, because many of us have been brainwashed to believe we're on the same side as the ones trying to squeeze us.
>If we were to unionize
Last time it was tried the union coerced everyone to root for their exploiters. People that unionize aren't magically different.
What “last time” are you referring to specifically?
I am also curious.
I think the issues at play here are the quickly changing job descriptions, RSUs, and the higher-paid bunch benefiting from very unequal pay across a job category.
> the tune is "be leaner"

Seems like they're happy to start cutting limbs to lose weight. It's hard to keep cutting fat if you've been aggressively cutting fat for so long. If the last CEO did their job, there shouldn't be much fat left.
> If the last CEO did their job there shouldn't be much fat left
funny how that fat analogy works...because the head (brain) has a lot more fat content than muscles/limbs.
I never thought to extend the analogy like that, but I like it. It's showing. I mean, look at how people think my comments imply I don't know what triage is. Not knowing that would run counter to everything I'm saying, which is that a lot of these value numbers are poor guesstimates at best. It happens every time I bring this up. It's absurd to think we could measure everything in terms of money; even economists will tell you that's silly.
Yet this will continue until it grinds to a halt.
The level of parroting performed by executives is amazing and cringeworthy. Independent thought is very rare amongst business "leaders".
Let's make the laptops thinner. This way we can clean the oil off of the keyboard, putting it on the screen.
At this point I'm not sure it's a lack of independent thought so much as a lack of thought. I'm even beginning to question whether people even use the products they work on. Shouldn't there be more pressure from engineers at this point? Is it yes-men from top to bottom? Even CEOs seem to be yes-men in response to shareholders, but that's like being a yes-man to the wind.
When I bring this stuff up I'm called negative, a perfectionist, or told I'm out of touch with customers and/or don't understand "value". Idk, maybe they're right. But I'm an engineer. My job is to find problems and fix them. I'm not negative, I'm trying to make the product better. And they're right, I don't understand value. I'm an engineer, it's not my job to make up a number about how valuable some bug fix is or isn't. What is this, "Whose Line Is It Anyway?" If you want made-up dollar values, go ask the business monkeys; I'm a code monkey.
> I'm an engineer, it's not my job to make up a number about how valuable some bug fix is or isn't.
So you think all bugs are equally important to fix?
No, of course not. That would be laughably absurd. So do you think I'm trolling or you're misunderstanding? Because who isn't familiar with triage?
Do you think every bug's monetary value is perfectly aligned with user impact? Certainly that isn't true. If it were we'd be much better at security and would be more concerned with data privacy. There's no perfect metric for anything, and it would similarly be naïve to think you could place a dollar value on everything, let alone accurately. That's what I'm talking about.
My main concern as an engineer is making the best product I can.
The main concern of the manager is to make the best business.
Don't get confused and think those are the same things. Hopefully they align, but they don't always.
That inflection point seems to more specifically start at the day of the new administration's inauguration.
It’s a shame that this is the top comment because it’s backward looking (“here’s why white-collar workers lost their jobs in the last year”) instead of looking forward and noticing that even if interest rates are reduced back to zero these jobs will not be performed by humans ever again. THAT is the message here. These workers need to retrain and move on.
> even if interest rates are reduced back to zero these jobs will not be performed by humans ever again
It's not like companies laid off whole functions. These jobs will continue to be performed by humans - ZIRP just changes the number of humans and how much they get paid.
> These workers need to retrain and move on.
They only need to "retrain" insofar as they keep up with the current standards and practices. Software engineers are not going anywhere.
This is so cool. Had no idea FRED had data like this. They have everything.
Give Trump a few more years, and that probably will change.
> unique inflection near the start of 2025
I wonder what happened in January 2025...
Inflation & mismanagement.
Bingo!
For those outside of the US:
https://en.m.wikipedia.org/wiki/Second_inauguration_of_Donal...
Trump didn't kick off the layoffs.
It was the war with Russia that drove the Fed to raise interest rates in 2022 - a measure that was intended to curb inflation triggered by spikes in the prices of economic inputs (gas, oil, fertilizer, etc.).
The tech layoffs started later that year.
Widespread job cuts are an intended effect of raising interest rates - more unemployed = less spending = keeps a lid on inflation.
AI is just cashing in on the trend.
"War with Russia" sounds like someone willingly started that war, and Russia was the target.
Of course, nothing is further from the truth. "Russian invasion of Ukraine" is what should be written there.
Fully agreed, but I suspect that was written that way because the Fed was rather more worried about the fact Russia was at war and under western sanctions, than that Ukraine was busy defending itself.
Perhaps "Russia's war" would have been a better phrasing that captures both spirits (but it's not a phrase you hear said much).
You think "Russia's war" captures more of the global relevance than "Russia's invasion of Ukraine"?
For example, Ukraine was a very important food supplier -- one of the top grain suppliers in the World -- and the invasion caused shortages of some foods. Another example is that Ukraine provided a good source of iron ore for EU-based manufacture. If nothing else that would be important to USAmericans as indicating a market opportunity.
Without that invasion and Putin's inspiration, would Trump have threatened invasion of USA's neighbours? That's got to be vital to USA finances too.
I don't see how that follows at all. "War with x" is a factual statement with no implications of moral culpability in either direction
Yeah, no idea what's going on in this thread. As far as I can tell, this connotation was just invented for the purposes of- well, I shouldn't guess motivations, but I can't think of any good ones.
Here's the BBC using it[1], CNN[2], The AP [3], The Conversation [4]
[1]https://www.bbc.com/news/articles/c0l0k4389g2o
[2]https://www.cnn.com/2024/07/21/europe/europe-conscription-wa...
[3]https://apnews.com/article/russia-ukraine-war-zelenskyy-star...
[4]https://theconversation.com/why-russias-armed-forces-have-pr...
"It was the war with Russia that drove the fed to raise interest rates in 2022" sounds like the fed or the US was at war with Russia. Your links 1, 3, 4 mention Ukraine in the same sentence as "war with Russia", which makes it clear that the US and the fed are not at war with Russia. Link 2 talks about a threat of war, not an actual war.
GP's complaint is that it implies that someone other than Russia started the war. I don't think mentioning another party who wasn't responsible should change that.
It's not factual if "war" is a verb.
In the upthread usage: “it was the war with Russia”, war is a noun.
- [deleted]
Your demands of an absolute commitment to maintaining the domestic establishment's war narrative while making a technical point have been noted. Slava Ukraini.
Who, precisely, do you consider to be the domestic establishment? Neither the President of the United States nor his Secretary of Defense subscribe to this narrative.
The deep state, most European leaders + states plus a huge chunk of Congress.
Also even some of Trump's team.
The establishment doesn't suddenly get swapped out because a new President gets in (even if the tides are shifting).
Truth is the best narrative, and it's better than – perhaps unconsciously – downplaying the culpability of the Russian Federation for the war.
Heroyam Slava.
It's curious how the "true narrative" people fight with such a passion for so frequently coincides with the dominant war narrative currently pushed by the imperial power center they live under.
That is, until 10 years later when they have a new narrative about a different military rival. They quietly stop pushing the old narrative and everyone quietly admits the old one was kinda bullshit all along.
E.g. my views haven't changed one iota since 2003, but at some point these views magically stopped conferring a "Saddam sympathizer" moniker from people who demanded unthinking ideological commitment.
It works the same way with people who live under and unthinkingly consume Russian imperialist propaganda too. The more passionate ones make routine demands for ideological purity similar to the one above.
What annoys me is that war is war. Of course there's an aggressor and a defender; that's how war works. A bunch of people die. War happens because there is no peaceful answer to "who is right". There is no global true narrative. Does it really matter which side is justified? No. The side that wins writes history. But that doesn't stop a bunch of people so far removed from the war that it will never affect them even one tiny bit from burning cycles arguing about which narrative is the nice "true" one. Likely so they can feel good and comfy and morally superior when they close their eyes at night.
How would you characterize the conflict?
Proxy war between two rival imperial power centers dueling for influence over a strategic chunk of land and sea.
so "war with Russia" becomes "US proxy war with Russia over Ukraine" - fair enough
From earlier:
> That is, until 10 years later when they have a new narrative about a different military rival. They quietly stop pushing the old narrative and everyone quietly admits the old one was kinda bullshit all along. /---/ It works the same way with people who live under and unthinkingly consume Russian imperialist propaganda too.
It certainly does. The Russian war against Ukraine began with unmarked soldiers, nicknamed "little green men," and Russia denying any involvement, claiming instead that Ukraine was in the midst of a civil war. When the latest Russian weapons appeared in Ukraine, Russia claimed that tourists must've bought them from military surplus stores.
Then we went through a lot of bullshit - that Ukrainian nationalists were committing genocide in Donbas, or that Ukraine was secretly developing nuclear and biological weapons.
Now, 10 years later, the narrative has shifted to how this has always been a major confrontation with the USA and NATO, a "proxy war". No doubt, it will shift many more times. Looking forward to when the current "supreme commander" Putin will be regarded as a failure, much like Gorbachev, and blamed for causing the difficult 2030s.
It began with a false flag terrorist attack on civilians in Maidan square which was used to usurp a democratically elected president who was extremely popular in the east and south.
Nobody has ever been jailed for this terrorist attack and all the evidence points to Ukrainian fascists being culpable, including:
* The Berkut who were there being tried and the trial falling through because all of them were too obviously very far away from the protestor-controlled hotel where the sniper's nest was set up.
* A Ukrainian war hero who had no reason to lie who was there telling people who was responsible (before being thrown in jail).
* A group of the snipers (mercenaries who were there who never got paid) went public.
It was as much a proxy war back then, it was just fought under the surface with NGO agitators instead of weapons deliveries.
Has there ever been a terrorist attack that was not a "false flag" according to internet loonies? :D
The fact that you have to make something like this up within the first ten words of your narrative really shows just how detached from reality it is.
I wonder what narratives will dominate after the war, when reality sets in: hundreds of thousands dead and never returning home; several times as many disabled, many of them severely; the might and pride of the Russian military sunk or blown up; returned soldiers running massive criminal rings like in the 1990s; state budget empty from massive military spending, leaving people to survive on their own as safety nets crumble. Some conspiracy story about snipers 10+ years ago in another country doesn't really cut it, and getting beaten in an imagined confrontation with the "collective West" sounds really pathetic too, especially when the other side didn't even step into the boxing ring. The USAF hasn't flown a single sortie against Russia, yet strategic bombers are already burning on airfields like in the opening hours of Operation Barbarossa.
>Has there ever been a terrorist attack that was not a "false flag" according to internet loonies? :D
Reichstag fire. All the nutter conspiracy theorists think Hitler did it. Obviously you know better.
>The fact that you have to make something like this up
Evidence doesn't mean much to some people. They will follow the narrative of their leaders, whether it is dictated via Moscow blabbing about biolabs or via Washington claiming it allied with freedom-loving democrats in Ukraine rather than Nazi goons.
Excellent example. In case you're not aware (as the snark suggests), the broad consensus among historians since the 1960s holds that the Reichstag was indeed not set on fire by the Nazis.
2 trillion in unnecessary Covid-related spending, when Covid's impact was already winding down, was the key reason for inflation. "$2000 checks!" was the campaign slogan.
People sometimes conveniently forget that inflation historically has taken some 12-24 months to trickle through the economic system. That was the case this time, too. And the first inflationary impulses, famous for being "transitory", actually came before the Russian invasion of Eastern Europe.
We're blaming the 1000 dollar stimulus checks to the people and not the massive PPP loans that the government never bothered to collect on? It's amazing how well billionaires trained us to fight amongst one another as they ransack in broad daylight.
No. The checks were mostly meaningless.
The near zero interest rates, pause on student loan payments, pause on rent payments, doubling of unemployment pay, and then the dustings of stimulus checks and bonus childcare checks, all while most white collar workers just continued working like nothing happened, created an incredibly cash rich environment that most people have never seen before.
And the PPP loan handouts to business owners just threw more gas on the fire.
Don't forget the Fed's quantitative easing, which injected a ton as well.
A lot of things were going on in the early 2020s that at least anecdotally seem to have disproportionately affected software jobs so I’m skeptical it’s purely an interest rate phenomenon. But the consensus does seem to be that software has gone from being a ridiculously easy job market by professional job standards to at least a moderately challenging one.
Software in the US has (aside from maybe finance) been an almost uniquely well-compensated field. That will probably adjust over time especially given the inflow of grads primarily in it for the money.
>A lot of things were going on in the early 2020s that at least anecdotally seem to have disproportionately affected software jobs so I’m skeptical it’s purely an interest rate phenomenon.
The software industry had probably been over-hiring ever since the dot-com bubble: after the bubble burst, revenue and profits grew rapidly and never really stopped. I would rather blame the managers who constantly pushed for more workers instead of increasing the productivity of the existing workforce.
Add the Inflation Reduction Act which did the exact opposite of its title (increased government spending when the labor market was extremely tight)
It's all the same money printing. The issue is that people generally believe that emergency measures were justified in early 2020 when the crisis hit and there were so many unknowns, but not justified a year later when the virus was already endemic and the vaccine was out.
You have it backwards: inflation causes an increase in the money supply. When prices rise, people are forced to take on more debt, which expands the money supply. Those $2000 checks actually probably dampened inflation for a short while. Most people used those checks to pay down debt (which destroys money).
That's one of the factors. In Europe at least the other factor is high energy prices after the broken turbine theater and subsequent destruction of Nord Stream.
Prices and unemployment really started to rise after that. The EU buys overpriced LNG from the US, so the US is somewhat isolated from that. But the US is not isolated against the general economic downturn worldwide.
Politicians do not care. Merz, with barely 25% approval of the German population, continues the policies outlined by Hegseth during his visit to the EU. Trump still plays theater to appease his MAGA base, but Senators Rubio and Graham increasingly start holding the reins.
I don't believe the war specifically drove the Fed to raise interest rates. Inflation and asset prices had risen sharply in the year prior to the war.
There was a specific and particular expectation (and even patience) for inflation to drop naturally as the supply chains again reached equilibrium after Covid.
Russia's invasion of Ukraine however caused a whole bunch of economic inputs like energy and fertilizer to spike, and central banks world wide didn't want economies to "get used to" constant high inflation rates, causing a perpetual problem.
Work from home was the wrench in the government's plan. If the pandemic had happened in 2000, the stimulus would have been needed, as the tools for remote work were way too poor back then.
But instead all the productivity workers just switched to their home office and things just kept working. The stimulus should have been shut off in early-mid 2021 when this was abundantly clear. But the government let it run because people were so jubilant in the money shower.
Biden credited the inflation to Putin, claiming that 70% was due to Putin’s price hikes.
That was not entirely true.
Trump’s pandemic spending (lockdowns, vaccines…), and subsequently Biden’s, but most importantly the curiously named Inflation Reduction Act were obvious drivers. You can’t stimulate an already overheated economy to the tune of 2 trillion without getting Larry Summers a bit worked up.
Raising interest rates has nothing to do with the 2022 war. If it did, rates would have come back down. Interest rate increases don't help with supply/demand driven price spikes. They do help with money supply and aggregate demand driven inflation, which was the cause of our recent inflation (that started way before Russia invaded Ukraine). The war was a convenient excuse because it deflects responsibility.
And remember when they first said inflation was "transitory" and caused by supply chain issues from the economy reopening after covid? They didn't raise interest rates then because, like I mentioned above, interest rates don't help with supply shocks. If they did, the Fed would have raised rates then.
Anecdotally, I detected a cooling starting in March of 2022.
I had been actively looking for months prior to that, and it went from a few recruiters a day reaching out to a few a week.
You are wrong: Trump's 2017 tax cut bill had a provision that kicked in and caused the layoffs. Engineers became more expensive because companies now had to amortize their costs over 5 years instead of deducting them immediately.
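For anyone who hasn't followed this: the provision usually pointed to is the Section 174 change, which (as I understand the claim) forced domestic R&D spending, including developer salaries, to be amortized over five years starting with the 2022 tax year instead of being deducted in full. A simplified back-of-the-envelope sketch with made-up numbers; the real rules use a mid-year convention and other adjustments, so treat this as illustrative only:

    # Illustrative sketch of the amortization change (not real tax math).
    salary_cost = 1_000_000   # hypothetical annual engineering spend
    tax_rate = 0.21           # U.S. corporate rate

    deduction_before = salary_cost        # old rule: deduct the full cost immediately
    deduction_after = salary_cost / 5     # new rule (simplified): straight-line over 5 years

    extra_taxable_income_year1 = deduction_before - deduction_after
    extra_tax_year1 = extra_taxable_income_year1 * tax_rate
    print(f"Extra year-1 tax per $1M of engineering spend: ${extra_tax_year1:,.0f}")
    # ~$168,000 more tax in year one per $1M of spend, recovered in later years
    # as the amortization catches up -- a cash-flow hit rather than a permanent
    # cost increase, but one that lands hardest on cash-constrained companies.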
There is no proof that higher interest rates lead to greater unemployment. In fact, macro employment kind of boomed during the referenced period. I'd posit that higher rates actually boosted macro employment stats. Why? Because higher rates = higher income to rich people via the interest income channel = higher federal budget deficits (the government is a net payer of interest) = higher GDP = lower unemployment, ceteris paribus.
This is completely backwards. When interest rates are high, the expected returns of equity investments have to be even higher to justify the risk over risk-free fixed income assets.
And that's only the indirect effect on equity funding; debt funding just directly becomes more expensive.
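The size of the effect is easy to see with a toy discounted-cash-flow calculation; the cash flows and rates below are invented purely for illustration:

    # Toy DCF: identical future cash flows are worth less at a higher discount rate.
    def present_value(cash_flows, rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

    # A hypothetical company expected to throw off $10M a year for 10 years.
    cash_flows = [10_000_000] * 10

    pv_low = present_value(cash_flows, 0.05)    # ZIRP-era-ish discount rate
    pv_high = present_value(cash_flows, 0.12)   # higher risk-free rate plus risk premium

    print(f"PV at 5%:  ${pv_low:,.0f}")    # about $77M
    print(f"PV at 12%: ${pv_high:,.0f}")   # about $57M

The identical business is worth roughly a quarter less to investors, which translates directly into less money available for funding rounds and hiring.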
Why would it have anything to do with AI? Generative AI has been widely used for two years and the drop is exactly around January 20. What happened in AI around that time?
The Elon Musk experiment is the worst anchor that can be used for comparison, since the dude destabilized Twitter (re-branding, random layoffs, etc...). I'd be more interested in companies that went leaner but did it in a sane manner. The Internet user base grew between 2022 and now, but Twitter might have lost users in that time period, and it certainly didn't make any new innovations beyond trying to charge its users more and confusing them.