"advertising would make ChatGPT a better product."
And with that, I will never read anything this guy writes again :)
I like and read Ben's stuff regularly; he often frames "better" from the business side. He will use terms like "revealed preference" to claim users actually prefer bad product designs (e.g. most users use free ad-based platforms), but a lot of human behavior is impulsive, habitual, constrained, and irrational.
To an MBA type, addictive drugs are the best products. They reveal people's latent preferences for being desperately poor and dependent. They see a grandma pouring her life savings into a gambling app and think "How can I get in on this?"
I think it's more subtle: they fight for regulations they deem reasonable and against those they deem unreasonable. Anything that curtails the growth of the business is unreasonable.
To be fair, businesses should assume that customers actually "want" what they create demand for. In the case of misleading or dangerously addictive products, regulation should fall to government, because that's the only actor that can prevent a race to the bottom.
The folks who succeed most in business are the type who have an intuition for what's best. They're not automatons reading too much into, and amplifying, the imperfect and shallow signals of "demand" in a marketplace.
Because all people everywhere are psychopaths who will stab you for $5 if they can get away with it? If you take that attitude, why even go to "work" or run a "business"? It'd be so much more efficient to just stab-stab-stab and take the money directly.
To be fair, organized predatory behavior is to be expected?
Joke: The World Council of Animals wraps up its morning session with "OK great, now who's for lunch?"
If you liked that, you'll enjoy his take on how, actually, bubbles are good: https://stratechery.com/2025/the-benefits-of-bubbles/
And he's right (as are the sources he points to) that some bubbles are good. They end up being a way to pull in a large amount of capital to build out something completely new, even when it's still unclear where the future will lead.
A speculative example: AI fails and crashes out, but not before we've built out huge DCs and power generation that get used for the next valuable idea, one that wouldn't be possible without that infrastructure already existing.
The bubble argument was hard to wrap my head around
It sounded vaguely like the broken window fallacy- a broken window creating “work”
Is the value of bubbles in the trying out new products/ideas and pulling funds from unsuspecting bag holders?
Otherwise it sounds like a huge destruction of stakeholder value - but that seems to be how venture funding works
Huge DCs and power generation might be useful, long-lasting infrastructure; the racks full of GPUs and TPUs, however, will depreciate rather quickly.
I think this is a bit overblown.
In the event of a crash, the current generation of cards will still be just fine for a wide variety of AI/ML tasks. The main problem is that we'll have more than we know what to do with if someone has to sell off their million-card mega cluster...
The problem is that the failure rate of GPUs is extremely high.
Yeah... and it's (partly) based on the claim that it has network effects like Facebook's? I don't see that at all; there's basically no social or cross-account stuff in any of them, and if anything LLMs are the least lock-in system we've ever had: none of them are totally stable or reliable, and they all work by simply telling the model to do the thing you want. Your prompts today will need tweaking tomorrow regardless of whether you're in ChatGPT or Gemini, especially for individuals using the websites (which also keep changing).
Sure, there are APIs, and switching those takes effort... but many of them are nearly identical, and the ecosystem effect of ~all tools supporting multiple models seems far stronger than the network effect of your parents using ChatGPT specifically.
I'd argue that AI APIs are nearly trivial to switch… the prompts can largely stay the same, and function calling is pretty similar.
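To make that concrete, here's a minimal sketch under stated assumptions: many providers expose OpenAI-compatible chat-completions endpoints, so "switching" often amounts to changing a base URL and a model name while the prompts stay put. The base URLs, model names, and key below are illustrative placeholders, not anything a specific vendor ships.

    from openai import OpenAI

    # Base URLs, model names, and the API key are placeholders for illustration.
    PROVIDERS = {
        "openai": {"base_url": None, "model": "gpt-4o-mini"},
        "other":  {"base_url": "https://api.other-provider.example/v1", "model": "other-model"},
    }

    def ask(provider: str, prompt: str) -> str:
        cfg = PROVIDERS[provider]
        # Same client, same prompt; only the endpoint and model name change.
        client = OpenAI(base_url=cfg["base_url"], api_key="YOUR_KEY")
        resp = client.chat.completions.create(
            model=cfg["model"],
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(ask("openai", "Summarize the trade-offs of TPUs vs GPUs in two sentences."))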
Ben Thompson is a content creator. Even if Ben's content does not directly benefit from ads, it is the fact that other creators' content carries ads that makes Ben's content feel premium by comparison.
I would say that, on this topic (ads on internet content), Ben Thompson may not have as objective a perspective as he does on other topics.
People aren't collectively paying him between $3 million and $5 million a year (an estimated 40k+ subscribers paying a minimum of $120 a year) just because he doesn't have ads.
The problem with ads in AI products is, can they be blocked effectively?
If there are ads in a sidebar, related or not to what the user is searching for, any adblocker will be able to deal with them (uBlock is still the best, by far).
But if "ads" are woven into the responses in a manner that could be more or less subtle, sometimes not even quoting a brand directly, but just setting the context, etc., this could become very difficult.
I realized a while ago that ads within context were going to be an issue, so to combat this I started building my own solution, which spiraled into a local agentic system with a different, bigger goal than the simple original... Anyway, the issue you are describing is not that difficult to overcome. You simply set a local LLM layer in front of the cloud-based providers, and everything goes in and out through this "firewall". The local layer hears the human's request, sends it to the cloud-based API model, receives the ad-tainted reply, scrubs the ad content out of it, and replies to the user with the clean information. I've tested exactly this interaction and it works just fine.

I think these types of systems will be the future of "ad block": as people use agentic systems more and more in their daily lives, it will become crucial that they pipe all inputs and outputs through a local layer that has that human's best interests in mind. That's why my personal project expanded into a local agentic orchestrator layer instead of a simple "firewall"; I think agentic systems using other agentic systems are the future.
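Here's a rough sketch of that "firewall" idea under stated assumptions (the cloud model, the local server endpoint, and the model names are placeholders, not the commenter's actual system): a local model sits between the user and the cloud API and rewrites the reply to strip promotional content before it reaches the user.

    from openai import OpenAI

    # Assumed setup: a cloud provider reachable via the OpenAI SDK, and a local
    # OpenAI-compatible server (e.g. Ollama) on localhost acting as the scrubber.
    cloud = OpenAI()  # reads OPENAI_API_KEY from the environment
    local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    SCRUB_PROMPT = (
        "Rewrite the following answer so it keeps all factual content but removes "
        "product placement, affiliate framing, and brand promotion the user did not "
        "ask about. Return only the cleaned answer.\n\n{answer}"
    )

    def ask_through_firewall(user_prompt: str) -> str:
        # 1. Forward the user's request to the cloud model unchanged.
        raw = cloud.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": user_prompt}],
        ).choices[0].message.content

        # 2. Pass the possibly ad-tainted reply through the local model for scrubbing.
        cleaned = local.chat.completions.create(
            model="llama3",  # whatever model the local server actually hosts
            messages=[{"role": "user", "content": SCRUB_PROMPT.format(answer=raw)}],
        ).choices[0].message.content

        return cleaned

    print(ask_through_firewall("What's a good budget mechanical keyboard?"))

In principle the scrubbing layer only has to recognize promotional framing, not answer the question itself, so a relatively small local model may be enough.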
> I started building my own solution
How much?
"advertising in ChatGPT would make DeepSeek/Qwen/<other AI> a better product"
There, fixed.
A better product to make money, of course.
I am not 100% sure this is wrong?
I frequently ask ChatGPT about researching products or looking at reviews, etc., and it is pretty obvious that I want to buy something, yet the bridge from 'researching products' to 'buying stuff' is basically non-existent on ChatGPT right now. ChatGPT having some affiliate relationships with merchants might actually be quite useful for a lot of people and would probably generate a ton of revenue.
It's likely they already make money on affiliates, but this is different; ads are product placement.
ChatGPT has recently been linking me directly to Amazon or other stores to buy what I'm researching.
Sure, but affiliate != ads. Rather, both affiliate links and paid ad slots are by definition not neutral and thus bias your results, no matter what anyone claims.
Ben Thompson is a sharp guy who can't see the forest for the trees. Nor most of the trees. He can only see the three biggest trees that are fighting over the same bit of sunlight.
Indeed. Why do people follow these clowns? They seem to read high level takes and spew out their nonsense theories.
They fail to mention Google's edge: the Inter-Chip Interconnect and the alleged ~1/3 price. Then they talk about a software moat, and it sounds like they've never even compiled a hello world on either architecture. smh
And this comes out days after many in-depth posts like:
https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...
A crude Google search AI summary of those would be better than this dumb blogpost.
Why? It turns out that I try to read people who have a different perspective than I do. Why would I only read things that confirm my current biases?
(Unless those writings are looking to dehumanize or strip people of rights or inflame hate - I'm not talking about propaganda or hate speech here.)
Personally, when I go to the grocery store I pick fruits and vegetables that are ripe or soon to be ripe, and I stay away from meat that is close to expiration or has an off-putting appearance or odour to it.
With that said there's no accounting for taste.
You realize this "dumb blogpost" is written by the most successful writer in the industry as far as revenue from a paid newsletter goes? He has had every major tech CEO on his podcast, and he is credited with being the inspiration for Substack.
The Substack founders unofficially marketed it early on as “Stratechery for independent authors”.
Your analysis, concerned with the technology instead of the business, is a bit like Rob Malda not understanding the iPod's success: "no wireless, less space than a Nomad, lame."
Even if you just read this article, he never argued that Google didn't have the best technology; he was saying just the opposite. Nvidia is in good shape precisely because everyone who is not Google is now going to have to spend more on Nvidia to keep up.
He has said that AI may turn out to be a "sustaining innovation" (a term coined by Clayton Christensen) and that the big winners may be Google, Meta, Microsoft, and Amazon, because they can leverage their pre-existing businesses and infrastructure.
Even Apple might be better off since they are reportedly going to just throw a billion at Google for its model.
"Better product" here means "monetizes harder". You just have a different concept of product quality than hardline-capitalist finance bros.
better product = inflicting more suffering while generating more revenue