> OpenAI is projecting that its total revenue for 2030 will be more than $280 billion
For context, that is more than the annual revenue of all but 3 tech companies in the world (Nvidia, Apple, Google), and about the same as Microsoft.
OpenAI meanwhile is projected to make $20 billion in 2026. So a casual 1300% revenue growth in under 4 years for a company that is already valued in the hundreds of billions.
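A quick sanity check on that growth figure (a rough sketch; the $20B and $280B numbers are just the projections quoted above, not audited figures):

```python
# Rough sanity check of the revenue-growth claim quoted above.
revenue_2026 = 20e9    # projected 2026 revenue, USD
revenue_2030 = 280e9   # projected 2030 revenue, USD

growth_multiple = revenue_2030 / revenue_2026   # 14x
growth_percent = (growth_multiple - 1) * 100    # 1300% increase

print(f"{growth_multiple:.0f}x, i.e. {growth_percent:.0f}% growth")
```

So the "1300%" framing checks out: a 14x multiple over four years.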
Must be nice to pull numbers out of one's ass with zero consequence.
> a casual 1300% revenue growth in under 4 years for a company that is already valued in the hundreds of billions.
Such a weird sentence. The correct causality should be: It's valued in the hundreds of billions because the investors expect a 1300% revenue growth.
And if we all buy umbrellas, then it will start to rain??
The metaphor for the original post was more like "You're already wearing a raincoat and umbrella, and you're forecasting a flood warning?" So, the flood warning (projected revenue) may be completely incorrect, but it's not incongruous with the fact that I'm wearing a raincoat and umbrella (current investor valuation). :-)
I mean, if you go outside and everyone else is carrying an umbrella, it's probably going to rain.
Or the town has been hoodwinked by a smooth talking umbrella salesman.
Again!?!!
Mono! Doh!
If you go outside and they are burning witches, it's best to go along with it.
This greatly overestimates the rationality of markets.
perhaps tis not the rain but the sun they fear.
That would be compatible with them carrying umbrellas; https://www.etymonline.com/word/umbrella
If you go outside and see people buying tulips, it doesn't mean that tulips are great investments.
Another example is how Isaac Newton lost money in another bubble as well: https://www.smithsonianmag.com/smart-news/market-crash-cost-... [The market crash which cost Newton a fortune]
So even NEWTON, the legendary ISAAC NEWTON, could lose money in a bubble and be left holding umbrellas when there was no rain.
From the book The Intelligent Investor, here's a quote (I opened the book from my shelf; it's on page 13):
The great physicist muttered that he "could calculate the motions of the heavenly bodies, but not the madness of the people".
This quote seems so applicable in today's world, I am gonna create a parent comment about it as well.
Also, for the rest of Newton's life, he forbade anyone to speak the words "South Sea" in his presence.
Newton lost more than $3 million in today's money because of the South Sea Company bubble.
People often use that example, but Newton, for all he was unquestionably a giant of physics, was a bit of a weird dude and not 100% rationalist[1]. Additionally, just because he was a great physicist doesn't mean he knew anything at all about investment. You can be an expert in one field and pretty dumb in others. Linus Pauling (a giant in chemistry) had beliefs in terms of medicine that were basically pseudoscience.
The Intelligent Investor is a great book though.
[1] eg he wrote more than a million words on alchemy during his lifetime https://webapp1.dlib.indiana.edu/newton/project/about.do
> ...was a bit of a weird dude and not 100% rationalist...
That covers everyone. Especially and including the rationalists. Part of being highly intelligent is being a bit weird because the habits and beliefs of ordinary people are those you'd expect of people with ordinary intelligence.
Anyone involved in small-time investing should be considering that they aren't rational when setting their strategy. Larger investment houses do what they can but even then every so often will suffer from group-think episodes.
Investors are valuing it at ~$500B, which already projects massive revenue growth. OpenAI is saying "actually we are going to grow 10x faster than that". And all of this is without bringing up the “profit” word.
Oracle said something very similar a short while ago. Besides a short-lived peak, it didn't do their stock, and thus their market valuation, any good.
How much money was WeWork supposed to bring in when they were valued at $50 billion and it dropped to $10b when they put out their S-1 and faced some public scrutiny for the first time? This happened before covid and the switch to WFH. Were their investors unaware of their actual finances?
They said casual, not causal.
I didn't read it wrong. And the illogical part isn't 'casual.' It's the whole sentence, especially 'already.'
I like the little blurb at the end which said that Codex had 1.5 million users. So, if you can get them to pony up a mere $186k apiece, they can hit those revenue numbers.
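The per-user figure is easy to reproduce (assuming the 1.5 million Codex users and the $280B revenue target quoted in the thread):

```python
# How much each Codex user would need to pay for the revenue target to pencil out.
target_revenue = 280e9   # projected 2030 revenue, USD
codex_users = 1.5e6      # Codex user count cited above

per_user = target_revenue / codex_users
print(f"${per_user:,.0f} per user")   # roughly $186,667
```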
> Codex had 1.5 million users
I'm three of them, and I never spent a cent on any LLMs. I doubt I'm the only one.
Don't forget that Codex is free until March so the numbers are heavily inflated.
I, too, can make $280B in revenue by 2030 (by selling $10 bills for $5 (as long as I bamboozle enough investors into giving me sufficient capital, of course)).
OpenAI is a bet on LLMs replacing a large chunk of the labour force in whatever sector it’s best at replacing. It’s essentially looking to get companies to pay $5k-$10k a month to have coding agents replace the output of a single software engineer.
If the S-curve levels off below that level OpenAI will be an unsuccessful company.
I have used AI a bit, like it for a bunch of use cases. But god damn, these numbers are so big. Gotta wonder, are the returns even worth it? RAM prices up, electricity prices up, hard disk prices up… Maybe this is the price to pay for “progress”, but it sure is wild
Simple: the returns are not worth it. :-)
I honestly don't think that sounds terribly outrageous.
OpenAI and Anthropic aren't building companies that aim to be API endpoints or chatbots forever, their vision is clearly: you will do everything through them.
The gamble is that this change is going to reach deeper into every business it touches than Microsoft Office ever did, and that this will happen extremely quickly. The way things are headed I increasingly think that's not a terrible bet.
Consequences come later friendo.
I think he meant for Anthropic?
It's a circular economy... He is talking about the money moving from Nvidia to OpenAI and back to Nvidia. You've got to go with the flow...
He is counting on hundreds of husbands: https://xkcd.com/605/
$1.4T was the estimate by GPT-4/5, $600B by GPT-5.3?
they'll probably fix it just like they did fix strawberry
their estimates will drop by ~20x which will be their max
as underdog in the race they'll grab fraction of even that
where are they planning to get that much money from? by showing adverts for 14h before you can prompt?
> It's a circular economy
Garbage in, garbage out, same as before.
How will Nvidia give revenue to OpenAI?
Nvidia gives money to OpenAI so they can buy GPUs that don't exist yet with memory that doesn't exist yet so they can plug them into their datacenters that don't exist yet powered by infrastructure that doesn't exist yet so they can all make profit that is mathematically impossible at this point - Stolen from someone else.
There are other forms of money transfer than revenue.
> and about the same as Microsoft
> Must be nice to pull numbers out of one's ass with zero consequence.
Seems accurate?
What they are saying is that if Microsoft ends up buying the rest of their shares, then Microsoft's total revenue by 2030 will be more than $280 billion.
I was a paying customer ($20 a month) until AI prompted a layoff in my dying field, which is web design and front-end coding. Now every time ChatGPT yells at me about memory I tell it, fine, I'm just gonna use Gemini! I bet a lot of people are doing the same thing, as both sit at the top of the iPhone charts.
Today I got a feature request from another team in a call. I typed into our slack channel as a note. Someone typed @cursor and moments later the feature was implemented (correctly) and ready to merge.
The tools are good! The main bottleneck right now is better scaffolding so that they can be thoroughly adopted and so that the agents can QA their own work.
I see no particular reason not to think that software engineering as we know it will be massively disrupted in the next few years, and probably other industries close behind.
The anecdote is compelling, but there's an interesting measurement gap. METR ran a randomized controlled trial with experienced open-source developers — they were actually 19% slower with AI assistance, but self-reported being 24% faster. A ~40 point perception gap.
Doesn't mean the tools aren't useful — it means we're probably measuring the wrong thing. "Prompt engineering" was always a dead end that obscured the deeper question: the structure an AI operates within — persistent context, feedback loops, behavioral constraints — matters more than the model or the prompts you feed it. The real intelligence might be in the harness, not the horse.
It really doesn't matter how "good" these tools feel, or whatever vague metric you want - they hemorrhage cash at a rate perhaps not seen in human history. In other words, that usage you like is costing them tons of money - the bet is that energy/compute will become vastly cheaper in a matter of a couple of years (extremely unlikely), or that they find other ways to monetize that don't absolutely destroy the utility of their product (ads, an area we have seen Google flop in spectacularly).
And even say the latter strategy works - ads are driven by consumption. If you believe 100% openAI's vision of these tools replacing huge swaths of the workforce reasonably quickly, who will be left to consume? It's all nonsense, and the numbers are nonsense if you spend any real time considering it. The fact SoftBank is a major investor should be a dead giveaway.
> In other words, that usage you like is costing them tons of money
Evidence? I’m sure someone will argue, but I think it’s generally accepted that inference can be done profitably at this point. The cost for equivalent capability is also plummeting.
I didn't think there would need to be more evidence than the fact they are saying they need to spend $600 billion in 4 years on $13bn revenue currently, but here we are.
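To make that comparison concrete (using only the two figures cited in the comment above, $600B of planned spend over 4 years against $13B of current annual revenue):

```python
# Ratio of stated 4-year spending plans to current annual revenue.
planned_spend = 600e9     # stated 4-year capital spend, USD
current_revenue = 13e9    # current annual revenue, USD

ratio = planned_spend / current_revenue
print(f"Planned spend is about {ratio:.0f}x current annual revenue")
```

Roughly 46x current annual revenue, which is the scale of the gap being argued about here.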
Here you go: https://www.wsj.com/livecoverage/stock-market-today-dow-sp-5...
Right, but if OpenAI wanted to stop doing research and just monetize its current models, all indications are that it would be profitable. If not, various adjustments to pricing/ads/ etc could get it there. However, it has no reason to do this, and like all the other labs is going insanely into debt to develop more models. I'm not saying that it's necessarily going to work out, but they're far from the first company to prioritize growth over profitability
Nope. The only "all indications" are that they say so. They may be making a profit on API usage, but even that is very suspect - compare against how much it actually costs to rent a rack of B200s from Microsoft. But for the millions of people using Codex/Claude Code/Copilot, the costs of $20-$30-$200 clearly don't compare to the actual cost of inference.
What was the feature and what was the note?
It was a modest update to a UX ... certainly nothing world-changing. (It's also had success with some backend performance refactors, but this particular change was all frontend.) The note was basically just a transcription of what I was asked to do, and did not provide any technical hints as to how to go about the work. The agent figured out what codebase, application, and file to modify and made the correct edit.