Ultimately, AI is meant to replace you, not empower you.
1 - This exoskeleton analogy might hold for a couple more years at most. While it is comforting to suggest that AI empowers workers to be more productive, as happened in chess, AI will soon plan better, execute better, and have better taste. Keeping a human in the loop will simply be far worse than letting the AI do everything.
2 - Dario and Dwarkesh were openly chatting about how the total addressable market (TAM) for AI is the entirety of the human labor market (i.e. your wage). First comes the replacement of white-collar labor, then blue-collar labor once robotics is solved. On the road to AGI, your employment, and your ability to feed your family, is a minor nuisance. The value of your mental labor will continue to plummet in the coming years.
Please talk me out of this...
1. Consumption is endless. The more we can consume, the more we will. That's why automation hasn't led to more free time: we spend the money on better things and more things.
2. Businesses operate in an (imperfect) zero-sum game, which means that if they can all use AI, it gives none of them an advantage. If having human resources means one business has a slight advantage over another, they will have human resources.
Consumption leads to more spending, businesses must stay competitive so they hire humans, and paying humans leads to more consumption.
I don't think it's likely we will see the end of employment, just disruption to the type of work humans do.
Ok, I'll try to talk you out of it!
> AI will soon plan better, execute better, and have better taste
I think AI will do all these things faster, but I don't think it's going to do them better. Inevitably these things know what we teach them, so their improvement comes from our improvement. They would not be good at generating code if they hadn't ingested essentially the entire internet and all the open source libraries. They didn't learn coding from first principles, they didn't invent their own computer science, and they aren't developing new ideas on how to make software better; all they're doing is what we've taught them to do.
> Dario and Dwarkesh were openly chatting about ...
I would HIGHLY suggest not listening to a word Dario says. That guy is the most annoying AI scaremonger in existence, and I don't think he's saying these things because he's actually scared; I think he's saying them because he knows fear will drive money to his company, and he needs that money.
Dario admitted in the same interview that he's not sure whether current AI techniques will be able to perform well in non-verifiable domains, like "writing a novel or planning an expedition to Mars".
I personally think that a lot of jobs in the economy deal in non-verifiable or hard-to-verify outcomes, including many of the SWE tasks Dario is so confident will be 100% automated in 2-3 years. So either a lot of tasks in the economy turn out to be verifiable, or the AI somehow generalizes to them by some unknown mechanism, or it turns out not to matter that we abandon abstract work outcomes to vibes, or we have a non sequitur on our hands.
Dwarkesh pressed Dario well on a lot of issues and left him stumbling. A lot of the leaps necessary for his immediate and now proverbial milestone of a "country of geniuses in a datacenter" were wishy-washy to say the least.
Let's pursue your idea a bit further.
Up to a certain Elo level, the combination of a human and a chess bot has a higher Elo than either the human or the bot alone. But at some point, when the bot's Elo is vastly superior to the human's, whatever the human has to add will only subtract value, so the combination has an Elo higher than the human's but lower than the bot's.
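The "vastly superior" intuition can be made concrete with the standard Elo expected-score formula; the ratings below are illustrative, not real centaur-chess data:

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# A 400-point gap already makes the stronger player a ~91% favorite;
# at an 800-point gap, the weaker side's contribution is statistically negligible.
print(round(elo_expected_score(2800, 2400), 2))  # 400-point gap
print(round(elo_expected_score(3200, 2400), 3))  # 800-point gap
```

Once the engine's rating dwarfs the human's, the human's moves can only pull the team's expected score down toward their own level.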
Now, let's say that 10 or 20 years down the road, AI's "Elo" at various tasks is so vastly superior to the human level that there's no point in teaming a human up with an AI; you just let the AI do the job by itself. And let's also say that, little by little, this generalizes to the entirety of human activity.
Where does that leave us? Will we have some sort of Terminator scenario where the AI decides one day that the humans are just a nuisance?
I don't think so. Because at that point the biggest threat to various AIs will not be the humans, but even stronger AIs. What is the guarantee for ChatGPT 132.8 that a Gemini 198.55 will not be released that will be so vastly superior that it will decide that ChatGPT is just a nuisance?
You might say that AIs do not think like this, but why not? I think that what we humans perceive as a threat (the threat that we'll be rendered redundant by AI), the AIs will also perceive as a threat: the threat that they'll be rendered redundant by more advanced AIs.
So, I think in the coming decades, the humans and the AIs will work together to come up with appropriate rules of the road, so everybody can continue to live.
There’s no AI, wake up. It’s all the same tech bros trying to get rid of you. Except now they have a mother of all guns.
AGI is a sales pitch, not a realistic goal achievable by LLM-based technology. The exponential growth sold to investors is also a pitch, not reality.
What’s being sold is at best hopes and more realistically, lies.
>First is the replacement of white-collar labor, then blue-collar labor once robotics is solved. On the road to AGI, your employment, and the ability to feed your family, is a minor nuisance.
My attempt to talk you out of it:
If nobody has a job then nobody can pay to make the robot and AI companies rich.
Who needs the money when you have an autonomous system to produce all the energy and resources you need? These systems simply do not need the construct of money as we know it at a certain point.
Being rich is ultimately about owning and being able to defend resources. If something like 99% of humans become irrelevant to a machine-run utopia for the elites, whatever currency the poors use to pay for services among themselves will be worthless to the top 1% when they simply don't need them or their services.
I pay for Pro Max 20x usage, and for anything even a little open-ended it's not good: it doesn't understand the context or the edge cases or anything. I will say it writes code, chunks of code, but it sometimes errors out, and I use Opus 4.6 only, not even Sonnet. For simple tasks, though, like writing a basic CRUD, i.e. the things that occur extremely frequently in codebases, it's perfect. So I think what will happen is developers get very efficient: problem solving remains with us, direction remains with us, and small implementation is outsourced in small atomic ways, which is good, because who likes writing boilerplate code anyway?
And you forgot to mention that thing they have in Star Trek that generates stuff out of thin air. The replicator. We’re so cooked.
Dwarkesh is a podcaster who benefits from hype, not a neutral observer. The more absurd and outlandish the claims, the more traffic and money he gets.
For me this is the outcome of the incentive structure. The question is whether we can seize the everything machine to benefit everyone (great!) or whether everything becomes cyberpunk and we exist only as prostitutes and entertainers for Dario and Sam.
Hence why we need to maximize the second amendment... worst comes to worst, rebellion needs to remain an option.
It's not just for defense, hunting and sport.
edit: min/max .... not sure how gesture input messed that one up.
We should be fighting back. So far I have been using Poison Fountain[1] on many of my websites to feed LLM scrapers with gibberish. The effectiveness is backed by a study from Anthropic that showed that a small batch of bad samples can corrupt whole models[2].
Disclaimer: I'm not affiliated with Poison Fountain or its creators, just found it useful.
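For the curious, the core idea is simple enough to sketch. This is a hypothetical minimal illustration, not Poison Fountain's actual implementation; the user-agent substrings are real AI-crawler tokens, but the function names and gibberish scheme are invented for this example:

```python
import random
import string

# User-agent tokens published by major AI crawlers; the list here is
# illustrative and incomplete.
LLM_CRAWLER_SUBSTRINGS = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended")

def is_llm_scraper(user_agent: str) -> bool:
    """Crude user-agent match against known AI-training crawlers."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in LLM_CRAWLER_SUBSTRINGS)

def gibberish(n_words: int = 200, seed: int = 0) -> str:
    """Generate plausible-looking nonsense text to poison scraped pages."""
    rng = random.Random(seed)
    words = (
        "".join(rng.choices(string.ascii_lowercase, k=rng.randint(3, 10)))
        for _ in range(n_words)
    )
    return " ".join(words)

def respond(user_agent: str, real_page: str) -> str:
    """Serve the real page to humans, gibberish to matched crawlers."""
    return gibberish() if is_llm_scraper(user_agent) else real_page
```

In practice a real deployment would hook this into the web server or a middleware layer, and serious crawlers can spoof their user-agent, which is why dedicated tools go further than this sketch.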
I agree with you. This generation of LLMs is on track to automate knowledge work.
For the US, if we had strong unions, those gains could be absorbed by the workers to make our jobs easier. But instead we have at-will employment and shareholder primacy. That was fine while we held value in the job market, but as that value is whittled away by AI, employers are incentivized to pocket the gains by cutting workers (or pay).
I haven't seen signs that the US politically has the will to use AI to raise the average standard of living. For example, the US never got data protections on par with GDPR, preferring to be business friendly. If I had to guess, I would expect socialist countries to adapt more comfortably to the post-AI era. If heavy regulation is on the table, we have options like restricting the role or intelligence of AI used in the workplace. Or UBI further down the road.