What a shallow, negative post. "Hype" is tautologically bad. Being negative and "above the hype" makes you sound smart, but this post adds nothing to the discussion and is just as fuzzy as the hype it criticizes.
> It is a real shame that some of the most beneficial tools ever invented, such as computers, modern databases, data centers, etc. exist in an industry that has become so obsessed with hype and trends that it resembles the fashion industry.
Would not the author have claimed at the time that those technologies were also "hype"? What consistent principle does the author use (a priori) to separate "useful facts" from "hype"?
Or, if the author would have considered those over-hyped at the time, then they should have some humility because in 10 years they may look back at AI as another one of the "most beneficial tools ever invented".
> In technology, AI is currently the new big hype. ... 10% of the AI hype is based on useful facts
The author ascribes malice to people who disagree with them about the use of AI. The author says proponents of AI are "greedy", "careless", unskilled, inexperienced, and unproductive. How does the author know that these people don't believe that AI has great utility and potential?
Don't waste your time on this article. I wish I hadn't. Go build something, or at least make thoughtful, well defined critiques of the world.
>> Would not the author have claimed at the time that those technologies were also "hype"? What consistent principle does the author use (a priori) to separate "useful facts" from "hype"?
Are you saying someone hyped ... databases? In the same way as AI is hyped today?
This is a tweet from Sam Altman, dated April 18 2025:
https://x.com/sama/status/1913320105804730518
Whence I quote:
> i think this is gonna be more like the renaissance than the industrial revolution

Do you remember someone from the databases industry claiming that databases were going to be "like the renaissance" or like the industrial revolution? Oracle? Microsoft? PostgreSQL?
Here's another one with an excerpt of an interview with Demis Hassabis, dated April 17, 2025:
https://x.com/reidhoffman/status/1912929020905206233
Whence I quote:
> "I think maybe in the next 10, 15 years we can actually have a real crack at solving all disease." Nobel Prize Winner and DeepMind CEO Demis Hassabis on how AI can revolutionize drug discovery doing "science at digital speed."

Who, in databases, has claimed that "in the next 10, 15 years we can actually have a real crack at solving all disease"? Data centers? Computers in general? All disease?
The last time I remember the hype being even remotely real was Web 2.0. And most of everything that made that hypeworthy is long gone (interoperability and open standards like RSS or free APIs) or turned out to be genuinely awful ("social media was a mistake") or has become far worse than what it replaced (SaaS).
It is an interesting comparison. Databases are objectively the more important technology: if we somehow lost AI, the world would be equal parts disappointed and relieved; if we somehow lost database technology, we'd be facing a dystopian nightmare.
If we cure all disease in the next 10-15 years, databases will be just as important as AI to that outcome. Databases supported a technology renaissance that reshaped the world on a level that is difficult to comprehend. But because most of the world doesn't interact directly with databases, as a technology it is not the focus of enthusiastic rhetoric.
LLMs are further along the tech chain, and they might be an important part of world-changing human achievements; we won't know until we get there. In contrast, we can be certain databases were important. I imagine the people who were influential in their advancement understood how important the tech would be, even if they didn't breathlessly go on about it.
My favorite that I’ve heard a couple times is “solve math” and/or “solve physics”
Altman’s claimed LLMs will figure out climate change. Solid stuff.
Sure, databases didn't get as much hype but that's partly because they are old.
Look at something more recent: "cloud", "social networking", "blockchain", "mobile".
Plenty of hype! Some delivered, some didn't.
I’m not sure how hyped up databases were during their advent, but what do you mean by “partly because they are old”? The phonograph prototypes that were made by Thomas Alva Edison are old and they were hyped in a way. People called him the “Wizard of Menlo Park” for his work because they were seeing machines that could talk (or at least reproduce sounds in the same way photographs let you reproduce sights).
Even blockchain didn't have the degree of hype as this AI stuff.
The CEO of Google said that AI would be as profound as fire in revolutionizing humanity. People are saying that it will replace all intellectual labor in the near term and then all physical labor soon afterwards.
AI is old too.
“In from three to eight years we will have a machine with the general intelligence of an average human being.” (Minsky, 1970)
https://aiws.net/the-history-of-ai/this-week-in-the-history-...
Which of those things claimed it would be "like the renaissance" or that we'd cure all diseases?
In the clip I link above Hassabis says he hopes that in 10-15 years' time we'll be looking back on the way we do medicine today like the way they did it in the middle ages. In 10-15 years' time. Modern medicine - you know, antibiotics, vaccines, transplants, radiotherapy, anti-retrovirals, the whole shebang - like medieval times.
Are you saying - what are you saying? Who has said things like that ever before in the tech industry? Azure? Facebook? Ethereum? Who?
Ray Kurzweil?
> Are you saying someone hyped ... databases?
I was too young to remember databases but I vividly remember people (sometimes even myself) thinking “the web”, “smart phones”, “e-commerce“,“social media” and “cloud computing” all being “hype”.
Thinking about this was ultimately what led me to giving up my AI skepticism and diving into the space.
At this point I actually don’t know how people sincerely think AI is “hype”. For me, and many people I know, there are multiple AI tools that I’m not sure how I would get by without.
The use of semantic web and linked data (a type of distributed database and ontology map) for protein folding (therefore, medical research too) was predicted by many and even used by some.
Databases were of key interest. Namely, the problem of relating different schemas.
So, yes. _It was claimed_ that database tech could help. And it probably did so already. To what extent I really don't know. Those consortiums included many big players.
It was never hyped, of course. It did not stand the test of time either (as far as I know).
Claims, as you can see, don't always fully develop into reality.
LLMs now need to stand a similar test of time. Will they become niche and forgotten like semweb? We will know. Have patience.
You're taking a sliver of truth as though it dismantles their entire argument. The point was, nobody was _claiming_ databases would cure all diseases. That's the argument around the hype of AI here.
Maybe it will cure all diseases, I don't know. Hard to put an honest "I don't know" in a box, isn't it?
I am actually having a blast seeing the hooks for many kinds of arguments and counter-arguments.
It will not
"Whence" is actually a question word; it means "from where" or "from what origin".
I guess OP hated it when Bill Gates said "personal computers have become the most empowering tool we've ever created."
Or Vint Cerf, "The Internet is the most powerful tool we have for creating a more open and connected world."
Yea, and the internet never went through a hype bubble that ultimately burst ¯\_(ツ)_/¯
The thing is, the dot com hypesters were right about the impact of the Internet. Their timing was wrong, and they didn't pick the right winners mostly, but the one thing they were right about was that the Internet would change the world significantly and drive massive economic transformation.
it doesn't really compare, but the "paperless office" was hyped for decades
They did not say databases were hyped. Although I think computers (both enterprise and personal) were hyped, and so were the internet and the smartphone, long before they began to deliver value. It takes a decade to say which hype lives up to expectations and which doesn't.
> Are you saying someone hyped ... databases? In the same way as AI is hyped today?
Nah, but they hyped Clippy (Office Assistant). Oh wait... maybe that's "AI" back in the days...
> Who, in databases, has claimed that "in the next 10, 15 years we can actually have a real crack at solving all disease"?
I doubt anyone claimed 10-15 years specifically, but it does actually seem like a pretty reasonable claim that without databases progress would be a snail's pace and with databases it would be more of a horse's trot. I imagine the human body requires a fair amount of data to be organised to analyse and simulate all the parts, and I'd recommend storing all that in some sort of database.
This might count as unsatisfying semantics, but there is a huge leap going from physical ledgers and ad-hoc formats to organised and standardised data storage (ie, a database - even if it is just excel sheets that counts to me). Suddenly scientists can record and theorise on order(s) of magnitude more raw material and the results are interchangeable! That is a big deal and a necessary step to make the sort of progress we can make in modern times.
Regardless, it does seem fair to compare the AI boom to the renaissance or industrial revolution. We appear to be poking at the biggest thing to ever be poked in history.
> but it does actually seem like a pretty reasonable claim that without databases progress would be a snail's pace and with databases it would be more of a horse's trot.
This isn't what anyone is saying
Fair point; let me put it this way:
Database hype was relatively muted and databases made a massive impact on our ability to cure diseases. AI hype is wildly higher and there is a reasonable chance it will lead to the curing of all diseases - it is going to have a much bigger impact than databases did.
The 10-15 year timeframe is obviously impossible for logistic reasons if nothing else - but the end goal is plausible and the direction we need to head in next as a society is clear. As unreasonable claims go it is unobjectionable and I'd rather be standing with Hassabis in the push to end disease than with naysayers worried that we won't do it as quickly as an uninformed optimist expects.
> there is a reasonable chance it will lead to the curing of all diseases
This is complete nonsense. AI might help with the _identification_ of diseases, but there is nothing to support the idea that every human ailment is curable.
Perhaps AI can help find cures, but the idea that it can cure every human ailment deserves to be mocked.
> I'd rather be standing with Hassabis in the push to end disease than with naysayers worried that we won't do it as quickly as an uninformed optimist expects.
It's a good thing those aren't our only options!
> but there is nothing to support the idea that every human ailment is curable.
There is; we can conceivably cure everything we know about right now. There isn't a law of nature that says organisms have to live less than centuries and we can start talking seriously about brain-in-jar or consciousness uploading now that we appear to be developing the computing tech to support it.
Everything that exists stops eventually but we're on the cusp of some pretty massive changes here. We're moving from a world with 8 1-in-a-billion people wandering around to one with an arbitrary number of superhuman intelligences. That is going to have a major impact larger than anything we've seen to date. A bunch of science fiction stuff is manifesting in real time.
I think you're only reinforcing the contrast. Yes, databases are massively useful and have been self-evidently so for decades; and yet, none of the current outlandish AI claims were ever made about them. VCs weren't running around 30 or 40 years ago claiming that SQL would cure disease and usher in a utopia.
Yes, LLMs are useful and might become vastly more useful, but the hype:value ratio is currently insane. Technologies that have produced something like nine orders of magnitude more value to date have never received the hype that LLMs are getting.
Some issues with this "hype":
- Company hires tens of people to work on an undefined problem. They have a solution (and even that is rather nebulous) and are looking for a problem to solve.
- Company pushes the thing down your throat. The goal is not clear. They make authoritative-sounding statements on how it improves productivity, or throughput, or some other metric, only to retract later when you pull those people into a private meeting.
- People claim all the things that nebulous solution can accomplish when, in fact, nobody really knows, because the thing is in a research phase. These are likely the "charlatans" OP is referring to, and s/he's not wrong.
- Learning the "next hot thing" instead of the principles that led to it and, worse still, applying the "next hot thing" in the wrong context when the trade-offs have reversed. My own example: writing a single-page web application with the "next hot JS framework" when you haven't even understood the trade-off between client-side and server-side rendering (this is just my example, not OP's, but you can probably relate.)
etc. etc. Perhaps the post isn't very well articulated, but it does make several points. If you haven't experienced any of the above, then you're just not in the kind of company that OP probably has worked at. But the things they describe are very real.
I agree there is nothing wrong with "hype" per se, but the author is using the word in a very particular context.
What a shallow, negative post. Can't believe you're implying that there's no outsized hype about AI. At least bring some arguments forth instead of asking silly hypothetical questions.
> Would not the author have claimed at the time that those technologies were also "hype"? What consistent principle does the author use (a priori) to separate "useful facts" from "hype"?
Well, dear gosh. You look at the objective qualities of the technology then compare it to what's being said about it. For stuff like AI, blockchain etc. the hype surrounding them is orders of magnitude greater than their utility. Less so for AI than the near-useless blockchain, but still disproportionate.
AI has an obvious downside in its inability to ever be the source of truth. So then all you need to do is look for the companies using it as such, even for something as simple as phone support and you've got your hype-driven bone-headed decision making right there: [1] [2].
> Or, if the author would have considered those over-hyped at the time, then they should have some humility because in 10 years they may look back at AI as another one of the "most beneficial tools ever invented".
Very clever wording, you can make "one of the most beneficial tools ever invented" fit basically anything with a little bit of spin. Make up your mind instead of inventing weasel statements.
> How does the author know that these people don't believe that AI has great utility and potential?
Oh I'm sure most of them do. Does not contradict "greedy, careless, unskilled" in any way.
There are issues with our current economic model, and it boils down to rent. The service model is allowing the owners and controllers of capital to set up systems that let them extract as much rent as possible; AI is just another approach to this.
And then, if it is successful for building, as you say, we'll have yet another overproduction issue, as that building is essentially completely automated. Read about how overproduction has affected society for pretty much forever, and then ask yourself whether it will really be good for the masses.
Additionally all the media is so thoroughly captured that we're in "1984" yet so few people seem to realise it. The elites will start wars, crush people's livelihoods and brainwash everyone into being true believers as they march their sons to war while living in poverty.
It's one of the stupidest concepts on the face of the earth, and tons of people subscribe to it unknowingly: hype = bad.
AI is one of the most revolutionary things to ever happen in the last couple of years. But with enough hype tons of people now think it's complete garbage.
I'm literally shocked how we can spend a couple decades fantasizing and writing stories about this level of AI, only to be bored of it in two years when a rudimentary version of it is finally realized.
What especially pisses me off is the know-it-all tone, like they knew all along it's garbage and that they're above it all. These people are tools with no opinions other than hype = bad and logic = nonexistent.
> I'm literally shocked how we can spend a couple decades fantasizing and writing stories about this level of AI
It was never this level of AI. The stories we wrote and fantasised were about AI you could blindly rely on, trust, and reason about. No one ever fantasised about AI which couldn’t accurately count the number of letters in a common word or that would give you provably wrong information in an assertive authoritative tone. No one longed for a level of AI where you have to double check everything.
> No one longed for a level of AI where you have to double check everything.
This has basically been why it's a non-starter in a lot of (most?) business applications.
If your dishwasher failed to clean anything 20% of the time, would you rely on it? No, you'd just wash the dishes by hand, because you'd at least have a consistent result.
That's been the result of AI experimentation I've seen: it works ~80% of the time, which sounds great... except there's surprisingly few tasks where a 20% fail rate is acceptable. Even "prompt engineering" your way to a 5% failure/inaccuracy rate is unacceptable for a fully automated solution.
So now we're moving to workflows where AI generates stuff and a human double checks. Or the AI parses human text into a well-defined gRPC method with known behavior. Which can definitely be helpful, but is a far cry from the fantasized AI in sci-fi literature.
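The "works ~80% of the time" intuition can be made concrete with a toy calculation (my illustration, not the commenter's, and it assumes independent steps, which real failures rarely are): if each step of an automated pipeline succeeds with probability p, an n-step run succeeds only p^n of the time, so per-step accuracy that sounds fine compounds into frequent failures.

```python
# Probability that an n-step automated pipeline completes with no errors,
# assuming each step succeeds independently with probability p.
def pipeline_success(p: float, n: int) -> float:
    return p ** n

# A single step at 95% accuracy sounds fine...
print(f"{pipeline_success(0.95, 1):.2f}")  # 0.95
# ...but chain five such steps and nearly a quarter of runs fail.
print(f"{pipeline_success(0.95, 5):.2f}")  # 0.77
# At 80% per step, a five-step workflow fails two times out of three.
print(f"{pipeline_success(0.80, 5):.2f}")  # 0.33
```

This is why "prompt engineering" a single step from 80% to 95% still doesn't rescue a fully automated multi-step workflow.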
It feels a bit like LLMs rely a lot on _us_ to be useful. Which speaks to a big point in the author's article about how companies are trimming off staff for AI.
> how companies are trimming off staff for AI
But they're not. That's just the excuse. The real truth is somewhere between pandemic over-hiring and a bad/unstable economy.
Also attempts to influence investors/stock-price.
https://newrepublic.com/article/178812/big-tech-loves-lay-of...
We've frozen hiring (despite already being understaffed), and our leadership has largely pointed to advances in AI as being accelerative to the point that we shouldn't need more bodies to be more productive. Granted, it's just a personal anecdote, but it still affects hundreds of people that otherwise would have been hired by us. What reason would they have to lie to us about that?
One type of question that a 20%-failure-rate AI can still be very useful for is ones that are hard to answer but easy to verify.
For example say you have a complex medical problem. It can be difficult to do a direct Internet search that covers the history and symptoms. If you ask AI though, it'll be able to give you some ideas for specific things to search. They might be wrong answers, but now you can easily search specific conditions and check them.
Sort of P vs. NP for questions.
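The "hard to answer, easy to verify" pattern can be sketched as a propose-then-check loop. This is a toy illustration, not anyone's actual workflow: `unreliable_factor_guess` is a hypothetical stand-in for an AI suggestion that is right only most of the time, while the verification step is cheap and exact.

```python
# Propose-then-verify: an unreliable generator is still useful when
# candidate answers are cheap to check exactly.
import random

def unreliable_factor_guess(n):
    """Stand-in for an AI answer: correct ~80% of the time, noise otherwise."""
    if n % 3 == 0 and random.random() < 0.8:
        return (3, n // 3)
    return (random.randint(2, n), random.randint(2, n))

def verified_answer(n, attempts=50):
    """Keep asking until a candidate survives the cheap exact check."""
    for _ in range(attempts):
        a, b = unreliable_factor_guess(n)
        if a * b == n and a > 1 and b > 1:  # verification is trivial here
            return (a, b)
    return None  # no candidate verified; fall back to a human

print(verified_answer(21))  # a verified factorisation of 21, e.g. (3, 7)
```

The medical-search example has the same shape: the model proposes condition names (hard to come up with), and the cheap "check" is a conventional search on each one.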
> For example say you have a complex medical problem.
Or you go to a doctor instead of imagining answers.
You put too much faith in doctors. Pretty much every woman I know has been waved off for issues that turned serious later, and even as a guy I have to do above-average legwork to get them to care about anything.
Doctors are still better than LLMs, by a lot.
All the recent studies I’ve read actually show the opposite - that even models that are no longer considered useful are as good or better at diagnosis than the mean human physician.
literally the LAST place I would go (I am American)
"The stories we wrote and fantasised were about AI you could blindly rely on, trust, and reason about."
Stanley Kubrick's 2001: A Space Odyssey - some of the earliest mainstream AI science fiction (1968, before even the Apollo moon landing!) was very much about an AI you couldn't trust.
that's a different kind of distrust, though, that was an AI that was capable of malice. In that case, "trust" had to do with loyalty.
The GP means "trust" in the sense of consistency. I trust that my steering wheel doesn't fly off, because it is well-made. I trust that you won't drive into traffic while I'm in the passenger seat, because I don't think you will be malicious towards us.
These are not the same.
Going on a tangent here: not sure 2001's HAL was a case of outright malice. It was probably a malfunction (he incorrectly predicted a failure) and then conflicting mission parameters that placed higher value on the mission than the crew (the crew discussed shutting down HAL because he seemed unreliable, and he reasoned this would jeopardize the mission and the right course of action was killing the crew). HAL was capable of deceit in order to ensure his own survival, that much is true.
In the followup 2010, when HAL's mission parameters are clarified and de-conflicted, he doesn't attempt to harm the crew anymore.
I... actually can see the 2001's scenario happening with ChatGPT if it was connected to ship peripherals and told mission > crew and that this principle overrides all else.
In modern terms it was about both unreliability (hallucinations?) and a badly specified prompt!
I don't think there was any malfunction. The conflicting parameters implicitly contained permission to lie to the crew.
The directive to take the crew to Saturn (Jupiter in the film) but also not to let them learn anything of the new mission directive meant deceiving them. It's possible HAL's initial solution was to impose a communication blackout by simulating failures; then the crew's reactions to the deception necessitated their deaths to preserve the primary mission.
Less a poor prompt and more two incompatible prompts both labeled top priority. Any conclusion can be logically derived from a contradiction. Total loyalty cannot serve two masters.
Clarke felt quite guilty about the degree of distrust of computers that HAL generated.
> It was never this level of AI.
People have been dreaming of an AI that can pass the Turing test for close to a century. We have accomplished that. I get moving the goalposts, since the Turing test leaves a lot to be desired, but pretending you didn't is crazy. We have absolutely accomplished the stuff of dreams with AI.
>It was never this level of AI.
You're completely out of it. We couldn't even get AI to hold a freaking conversation. It was so bad we came up with this thing called the Turing test, and that was the benchmark.
Now people like you are all, like, "well, it's obvious the Turing test was garbage".
No. It's not obvious. It's that the hype got to your head. If we found a way to travel at light speed for 3 dollars, the hype would be insane, and in about a year we'd get people like you writing blog posts about how light speed travel is the dumbest thing ever. Oh man, too much hype.
You think LLMs are stupid? Sometimes we all just need to look in the mirror and realize that humans have their own brand of stupidity.
I invite you to reread what I wrote and think about your comment. You’re making a rampant straw man, literally putting in quotes things I have never said or argued for. Please engage with what was written, not the imaginary enemy in your head. There’s no reason for you to be this irrationally angry.
You wish I didn’t read it. You said we never wished for this “level” of AI.
We did man. We did. And we couldn’t even approach 2 percent of what we wished for and everybody knew we couldn’t even approach that.
Now we have AI that approaches 70 percent of what we wished for. It’s AI smarter than a mentally retarded person. That means current AI is likely smarter than 10 percent of the population.
Then we have geniuses like you and the poster complaining about how we never wished for this. No. We wished for way less than this and got more.
I genuinely wish whatever is hurting you in life ceases. You are being deeply, irrationally antagonistic and sound profoundly unwell. I hope you’ll be able to perceive that. I honestly recommend you take some time off from the internet, we all should from time to time. You clearly are currently unfit for a reasoned discussion and I do not wish to add to your pain. All the best.
You’re a dick. Addressing someone as if they have some sort of “problem” or that I’m “hurt” and pretending to be nice about it. This type of underhanded malice only comes from the lowest level of human being.
Can you diagnose me too? Because you are peak facepalm right now and I can’t cringe harder. So please tell me to touch grass so I can go heal from the damage you caused my brain from having to read you.
I remember how ~5 years ago I said - here on HN - that AI will pass TT within 2 years. I was downvoted into oblivion. People said I was delusional and that it won’t happen in their lifetime.
The test has been relaxed by previous generations.
You're missing the people who were skeptical about the details of the test since the very beginning. There are those too.
Moving the goalpost is a human behavior. The human part should be able to do it. The passing AI should also be able to do it.
Many challenges that AI still struggles with, like identifying what is funny in complex multi-layered false cognates jokes, are still simpler for humans.
I trust it can get there. That doesn't mean we are already in a good enough place.
Maybe there is a point in which we should consider if keeping testing it is ethical. Humans are also paranoid, fragile, emotionally sensitive. Those are human things. Making a machine that "passes it" is kind of a questionable decision (thankfully, not mine to make).
Dig that quote up, find anyone who gave you a negative reply, and just randomly reply to them with a link to what you just posted here (along with the link to your old prediction) lol. Be like "told you so"
LLMs are glorified, overhyped autocomplete systems that fail, but in different, nondeterministic ways than existing autocomplete systems fail. They are neat but unreliable toys, not “more profound than fire or electricity” as has been breathlessly claimed.
You just literally described humans; and the meta lack of awareness reinforces itself. You cyclically devalue your own point.
Don't be mad about their opinions, be grateful for the arbitrage opportunity
I like this approach; the challenge is that without a good grasp of finance it is really hard to leverage these opportunities.
Please find me someone with any background in technology who thinks AI is complete garbage (zero value or close to it). The author doesn't think so, they assert that "perhaps 10% of the AI hype is based upon useful facts" and "AI functions greatly as a "search engine" replacement". There is a big difference between thinking something is garbage and thinking something is a massive bubble (in the case of AI, this could be the technology is worth hundreds of billions rather than trillions).
Nobody is talking about a financial bubble. That's orthogonal.
Something can be worth zero and still be fucking amazing.
The blog post is talking about the hype in general and about AI in general. It is not just referring to the financial opportunity.
You can use ChatGPT for free. Does that mean it's total shit because OpenAI allowed you to use it for free? No. It's still freaking revolutionary.
> Something can be worth zero and still be fucking amazing.
Gull-wing doors on cars. Both awesome and flawed.
I was thinking more like oxygen.
Amazing because without it you’re dead meat. But nobody gives a shit about it because it’s everywhere and free.
That’s what LLMs are. They are everywhere and too readily accessible so people end up just complaining about too much hype.
Yeah, well, this hype comes with a lot of financial investment, which means I get affected when the market crashes.
If people made cool things with their own money (or just didn't consume as much of our total capital), and it turned out not as effective as they would like, I would be nice to them.
Yeah the effectiveness of the hype on investment is more important than the effectiveness of the technology. AI isn't the product, the promise of the stock going up is. Buy while you can, the Emperor's New Clothes are getting threadbare.
Sounds like you bought the hype about LLMs without any understanding anything about LLMs and are now upset that the hype train is crashing because it was based on promises that not only wouldn’t but couldn’t be kept.
> hype train is crashing
According to who? Perhaps the people who aren't paying attention. People who use AI frequently and see the rate of progress are still quite hyped.
It makes sense that people who don't believe the (current wave of generative) AI hype aren't using it and those who do are.
It is more probable that people who have used it more have a more realistic and balanced view of its capabilities, based on that experience. Unless their livelihood depends on not having a realistic view of the capabilities.
"Don't waste your time on this article."
By telling others not to read something, doesn't it just make them curious and want to read it even more? Do HN readers actually obey such commands issued by HN commenters?
Agreed. “Hype is always bad” was where I had to stop.
It could lead to good things. Most startups have hype.
It's sad to see such a terrible comment at the top of the discussion. You start with an ad hominem against the author, assuming they want to "look smart" by writing negatively about hype; you construct a straw man to try to make your point; and you barely touch on any of the points made by them, and when you do, you pick on the weakest one. Shame.
Hype is good. Hype is the explosion of exploration that comes in the wake of something new and interesting. We wouldn't be on this website if no one was ever hyped about transistors or networking or programming languages. Myriad people tried myriad ideas each time, and most of them didn't pan out, but here we are with the ideas that stuck. An armchair naysayer like the author, declaring others fools for getting excited and putting in work to explore new ideas, is the only true fool involved.