They know that LLMs as a product are racing towards commoditization. Bye bye profit margins. The only way to win is regulation allowing a few approved providers.
They are more likely trying to race towards wildly overinflated government contracts because they aren't going to profit how they're currently operating without some of that funny money.
Isn’t that a bit like saying: storage is a commodity and thus profit margins will be/should be low?
All major cloud providers have high profit margins in the range of 30-40%.
Storage doesn't require the same capex/upfront investment to get that margin.
How much does it cost to train a cutting edge LLM? Those costs need to be factored into the margin from inferencing.
Buying hard drives and slotting them in also has capex associated with it, but far less in total, I'd guess.
They don't, though! I can buy hardware off the shelf, host open source models on it, and then charge for inference.

>How much does it cost to train a cutting edge LLM? Those costs need to be factored into the margin from inferencing.

Yes, which is why the companies that develop the models aren't cost viable. (Google and others who can subsidize it at a loss are obviously excepted.)
Where is the return on the model development costs if anybody can host a roughly equivalent model for the same price and completely bypass the model development cost?
Your point is in line with the entire bear thesis on these companies.
For any use cases which are analytical/backend oriented and don't scale 1:1 with the number of users (of which there are a lot), you can already run a close-to-cutting-edge model on a few thousand dollars of hardware. I do this at home already.
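To make "host open source models and charge for inference" concrete, here is a minimal sketch of the serving side, assuming a llama.cpp `llama-server` instance you've started yourself on local hardware (the model file, port, and prompt are all illustrative placeholders):

    # Querying a self-hosted open source model through llama.cpp's
    # OpenAI-compatible HTTP server. Assumes the server was started
    # separately, e.g.: llama-server -m some-open-model.gguf --port 8080
    # (model file and port are placeholders, not recommendations).
    import requests

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "messages": [
                {"role": "user", "content": "Summarize last quarter's churn numbers."}
            ],
            "max_tokens": 256,
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])

Wrapping that endpoint in metering and billing is ordinary web plumbing, and none of the resulting margin flows back to whoever paid to train the model.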
Open source models are still a year or so behind the SotA models released in the last few months. Price-to-performance is definitely in favor of open source models, however.
DeepMind is actively using Google’s LLMs on groundbreaking research. Anthropic is focused on security for businesses.
For consumers, a subscription is still a better deal than investing a few grand in a personal LLM machine. There will be a time when diminishing returns narrow this gap significantly, but I’m sure top LLM researchers are planning for this and will do whatever they can to keep their firms alive beyond the cost of scaling.
Definitely
I am not suggesting these companies can't pivot or monetize elsewhere, but the return on developing a marginally better model in-house does not really justify the cost at this stage.
But to your point, research, drugs, security audits, or services of any kind are all monetization of the application of the model, not monetization of the development of new models.
Put more simply, say you develop the best LLM in the world, that's 15% better than peers on release at the cost of $5B. What is that same model/asset worth 1 year later when it performs at 85% of the latest LLM?
Any 2023, and perhaps even 2024, vintage model is already dead in the water and close to zero value.
What is a best in class model built in 2025 going to be worth in 2026?
The asset is effectively 100% depreciated within a single year.
(Though I'm open to the idea that the results from past training runs can be reused for future models. This would certainly change the math)
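A back-of-envelope sketch of that depreciation math (every input here is an assumed, illustrative number, not a reported figure):

    # How many tokens a $5B model must serve before it goes stale.
    # All inputs are illustrative assumptions from this thread.
    training_cost = 5e9          # the hypothetical $5B training run above
    useful_life_years = 1.0      # "effectively 100% depreciated within a single year"
    margin_per_m_tokens = 2.0    # assumed gross margin, $ per million tokens

    m_tokens_needed = training_cost / margin_per_m_tokens    # millions of tokens
    m_tokens_per_day = m_tokens_needed / (useful_life_years * 365)
    print(f"{m_tokens_needed:,.0f}M tokens of margin-bearing traffic")
    print(f"~{m_tokens_per_day:,.0f}M tokens/day before the model is obsolete")

Under those assumptions that's roughly 6.8 trillion tokens a day of paid traffic, every day for a year, just to pay back the training run before something newer eclipses it.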
For sure, all these companies are racing to have the strongest model, and as time goes on we quickly start reaching diminishing returns. DeepSeek came out at the beginning of this year, blew everyone's minds, and now look at how far the industry has progressed beyond it.
It doesn't even seem like these companies are in a battle of attrition to not be the first to go bankrupt. Watching this would be a lot more exciting if that were the case! I think if there were less competition between LLM developers, they could slow down, maybe.
Looking at the prices of inference for open source models, I would bet proprietary models are making a nice margin on API fees, but there is no way OpenAI will make their investors whole on a few dollars of revenue per million tokens. I am terrified of the world we will live in if OpenAI manages to turn its balance sheet around. I think there's nowhere else investors want to put their money.
The other nightmare for these companies is that any competitor can use their state of the art model to train another model, as some Chinese models are suspected to do. I personally think it's only fair, since those companies trained on a ton of data in the first place and nobody agreed to it. But it shows that training frontier models has really low returns on investment.
Yes you’re right. Capex spend is definitely higher.
In the end it all comes down to the value provided, as you see in the storage example.
This is slightly more nuanced, since the AI portion is not making money. It's their side hustle.
What profit margins?
It is unclear. Every day I seem to read contradictory headlines about whether or not inference is profitable.
If inference has significant profitability and you're the only game in town, you could do really well.
But without regulation, as a commodity, the margin on inference approaches zero.
None of this even speaks to recouping the R&D costs it takes to stay competitive. If they're not able to pull up the ladder, these frontier model companies could have a really bad time.
Probably it's "operationally profitable" when ignoring capex, depreciation, dilution and other required expenses to stay current.
Of course that means it's unprofitable in practice/GAAP terms.
You'd have to have a pretty big margin on inference to make up for the model development costs alone.
A 30% margin on inference for a GPU that will last ~7 years will not cut it.
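Rough unit economics behind that claim, with every number an assumption picked for illustration:

    # Per-GPU inference profit vs. model development cost. Illustrative only.
    gpu_life_years = 7       # the ~7 year lifespan from the comment above
    annual_revenue = 40_000  # assumed inference revenue per GPU per year
    margin = 0.30            # the 30% inference margin in question

    lifetime_profit = annual_revenue * margin * gpu_life_years   # $84,000
    model_dev_cost = 5e9     # the $5B hypothetical from earlier in the thread
    gpus_needed = model_dev_cost / lifetime_profit
    print(f"~{gpus_needed:,.0f} GPUs, fully utilized for 7 years, to recoup one run")

That's roughly 60,000 GPUs running flat out for seven years to pay back a single $5B training run, while (per the depreciation point above) the model itself is stale within one.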
It's still technically a profit margin if it's less than zero...
There are profit margins on inference from what I understand. However the hefty training costs obviously make it a money losing operation.
The ones they hoped for.
Perhaps P/E ratios?
Yeah, but we can self-host them. At this point it's more about infrastructure and compute power to meet demand, and Google won because it has many business models, massive cash flow, TPUs, and existing infrastructure to expand on. It would take new companies ~25 years to map out compute and data centers and build a viable, tangible infrastructure, all while trying to figure out profits.
I'm not sure how the regulation of things would work, but prompt injections and whatever other attacks we haven't seen yet, where agents can be hijacked and made to do things, sound pretty scary.
It's a race towards AGI at this point. Not sure if that can be achieved as language != consciousness IMO
>Yeah, but we can self-host them
Who is "we", and what are the actual capabilities of the self-hosted models? Do they do the things that people want/are willing to pay money for? Can they integrate with my documents in O365/Google Drive or my calendar/email in hosted platforms? Can most users without a CS degree and a decade of Linux experience actually get them installed or interact with them? Can they integrate with the tools those users rely on?
Statistically close to "everyone" cannot run great models locally. GPUs are expensive and niche, especially with large amounts of VRAM.
Correct. And glad you're aware of the challenges with running them.
I'm not saying the options are favorable for everybody, I'm saying the options are there if the market becomes locked in to 1-3 companies.
>It's a race towards AGI at this point. Not sure if that can be achieved as language != consciousness IMO
However, it is arguable that thought is related to consciousness. I’m aware non-linguistic thought exists and is vital to any definition of consciousness, but LLMs technically don’t think in words, they think in tokens, so I could imagine this getting closer.
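A quick way to see the words-vs-tokens distinction, using OpenAI's open source tiktoken library (the encoding name is one real example; other models use other tokenizers):

    # Show that the model's unit is the token, not the word.
    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    text = "non-linguistic thought exists"
    ids = enc.encode(text)
    print(ids)                              # integer token IDs
    print([enc.decode([i]) for i in ids])   # the underlying subword pieces
    # Common words may be a single token, rarer ones split into fragments,
    # and whitespace/punctuation attach to neighboring pieces either way.

Whether that gets closer to "thinking" is the open question, but the substrate really is token sequences, not words.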
'think' is one of those words that used to mean something but is now hopelessly vague; in discussions like these it becomes a blunt instrument. IMO LLMs don't 'think' at all: they predict what their model is most likely to say based on previously observed patterns. There is no world model or novelty. They are exceptionally useful idea-adjacency lookup tools. They compress and organize data in a way that makes it shockingly easy to access, but they only 'think' in the way the Dewey decimal system thinks.
If we were having this conversation in 2023 I would agree with you, but LLMs have advanced so much that saying they are essentially efficient lookup tables is an oversimplification so dramatic I know you don't understand what you're talking about.
No one accuses the Dewey decimal system of thinking.
If I am so ignorant maybe you'd like to expand on exactly why I'm wrong. It should be easy since the oversimplification is dramatic enough that it made you this aggressive.
No, I don't want to waste my time trying to change the view of someone so close-minded they can't accept that LLMs do anything close to "thinking".
Sorry.
That's what I thought. Big talk, no substance.
I'm not the other poster but he's probably referring to how your comment seems to only be talking about "pure" LLMs and seems pretty out of date, whereas most tools people are using in 2025 use LLMs as glue to stitch together other powerful systems.
The bottleneck for commoditization is hardware. The manufacture of the required hardware is led by TSMC, with Samsung a close second. The tooling required for manufacture is centralized with ASML and a few smaller players like Zeiss, and product design centers on Nvidia, though players like AMD are attempting to catch up.
It is a complex supply chain but each section of the chain is held by only a few companies. Hopefully this is enough competition to accelerate the development of computational technologies that can run and train these LLMs at home. I give it a decade or more.
Another way to win is through exclusive access to high quality training data. Training data quality and quantity represent an upper bound on LLM performance. That's why the frontier model developers are investing some of their "war chests" in purchasing exclusive rights to data locked up behind corporate firewalls, and even hiring human subject matter experts in order to create custom proprietary training data in certain strategic domains.
The "few approved providers" model is what they have been fighting against since the Biden admin
The only way to win is commoditize your complement (IMO).
That's a good line but it only works if market forces don't commoditize you first. Blithely saying "commoditize your complement" is a bit like saying "draw the rest of the owl."
Free models will be given away by social media companies (because they want people to generate content) and hardware companies (because they want people to buy GPUs, or whatever replaces them). Can the current subscription models compete with free? It's just a prediction - it could well be wrong.