My anxiety about falling behind with AI plummeted after I realized many of these tweets are overblown like this. I use AI every day, so how is everyone getting more spectacular results than me? Turns out: they exaggerate.
Here are several real stories I dug into:
"My brick-and-mortar business wouldn't even exist without AI" --> meant they used Claude to help them search for lawyers in their local area and summarize permits they needed
"I'm now doing the work of 10 product managers" --> actually meant they create draft PRDs. Did not mention firing 10 PMs
"I launched an entire product line this weekend" --> meant they created a website with a sign-up form that serves a single JavaScript page, no customers
"I wrote a novel while I made coffee this morning" --> used a ChatGPT agent to make a messy mediocre PDF
Going viral on X is the current replacement for selling courses for {daytrading, Amazon FBA, crypto}.
The content of the tweets isn't the thing; bull-posting or invoking Cunningham's Law is. X is the destination for formula posting, and some of those blue checkmarks are getting "reach" rev share kickbacks.
Same with LinkedIn. I've seen a lot of posts telling you to comment something to get a secret guide on how to do Y.
If it were successful, they wouldn't be telling everyone about it
Yeah, if you get enough impressions, you get some revenue, so you don't need to sell any courses, just viral content. Which is why some (not ALL) exaggerate as suggested.
It's a bit insane how much reach you need before you'd earn anything impactful, though.
I average 1-2M impressions/month, and have some video clips on X/Twitter that have gotten 100K+ views, and average earnings of around $42/month (over the past year).
I imagine you'd need hundreds of millions of impressions/views on Twitter to earn a living with their current rates.
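A quick back-of-envelope check on those figures supports this. The $42/month and 1-2M impressions are from the comment above; the $5K/month "living" target is an assumption for illustration:

```python
# Payout math from the figures above: ~$42/month on ~1-2M impressions.
monthly_earnings = 42.0
monthly_impressions = 1.5e6          # midpoint of the stated 1-2M range
rate_per_million = monthly_earnings / (monthly_impressions / 1e6)
print(f"~${rate_per_million:.0f} per million impressions")

target_income = 5_000.0              # assumed monthly living income, not from the post
needed = target_income / rate_per_million * 1e6
print(f"~{needed / 1e6:.0f}M impressions/month needed")
```

At roughly $28 per million impressions, a modest income really does require impressions in the hundreds of millions per month.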
Thanks a lot for your transparency, Jeff! Much needed in this area. And your content is quality, unlike much of what else is being discussed here.
It is really hard to actually make anything substantial on social media exposure. Unfortunately this does not stop many from exaggerating claims in order to (maybe) become internet famous, or to see high click numbers, etc. So it is bad business for creators and poisons the discourse for readers - the only real winners are the social media companies and the product companies that get hyped up.
> Unfortunately this does not stop many from exaggerating claims in order to (maybe) become internet famous
I've been thinking about this a lot lately in another context -- viral priests being anti-vax -- and realized it's the other way around: their motivation doesn't matter; the viewers don't want to see moderate content, they want highly polarized and controversial topics.
The same with the claims about AI. Nobody wants to hear that AI boosts productivity in a nuanced way; people want to hear about either 10X or -10X, so the market dictates the content/meme.
I'm not as familiar with your content but how often do you post? I have a friend who posts 'meme' type of content (all original) and he makes a decent amount, but he has it all queued up.
The worst is Reddit these days.
I pretty much never even went there for technical topics at all, just funny memes and such, but recently I started seeing crazy AI hype stories getting posted. Sadly I made the huge mistake of clicking on one, and now it's all I get.
Endless posts from subs like r/agi, r/singularity, as well as the various product-specific subs (for Claude, OpenAI, etc). These aren't even links to external articles; these are supposedly personal accounts of someone being blown away by what the latest release of this or that model or tool can do.

Every single one of these posts boils down to some irritating "game over for software engineers" hype fest, sometimes with skeptical comments calling out the clearly AI-generated text and overblown claims, sometimes not. Usually comments pointing out flaws in whatever's being hyped are just dismissed with a hand wave about how the flaw may have been true at one time, but the latest and greatest version has no such flaws and is truly miraculous, even if it's just a minor update for that week. It's always the same pattern.
There’s clearly a lot of astroturfing going on.
> There’s clearly a lot of astroturfing going on.
Yeah, I think so too. I even see it here on HN.
I'm just tuning it all out. The big test is just installing the damn thing and seeing what it can do. There's 0 barrier to trying it
Lo and behold, here’s a concrete example I stumbled across just a few seconds after opening Reddit again (really gotta stop doing that):
Completely in the same boat as you, the constant bombardment on Reddit is getting really detrimental to my wellbeing at this point lol
>There’s clearly a lot of astroturfing going on.
Reddit is like 90% astroturfing, trolls, and bots.
I feel the same. I understand some of the excitement. When I use it I feel more productive, as it seems I get more code done. But I never finish anything earlier, because it never fails to introduce a bizarre bug or behaviour that no sane person doing the task by hand would.
> "I wrote a novel while I made coffee this morning" --> used a ChatGPT agent to make a messy mediocre PDF
There was a story years ago about someone who published hundreds of novels on Amazon; in aggregate they pulled in a pretty penny. I wonder if someone's doing the same but with ChatGPT instead.
Pretty sure there was a whole era where people were doing this with public domain works, as well as works generated by Markov chains spitting out barely-plausible-at-first-glance spaghetti. I think that well started to dry up before LLMs even hit the scene.
"AI helped me make money by evading anti-spam controls" doesn't have quite the same ring to it. :p
"Adding the abbreviation 'AI' to my marketing for online courses for making millions making marketing for online courses made me millions!"
It has happened in Japan. There was one author who was updating 30+ series simultaneously on Kakuyomi, one of the largest Japanese web novel sites. A few of them got top-ranked.
Afaik, the way people are making money in this space is selling courses that teach you how to sell mass-produced AI slop on Amazon, rather than actually doing it themselves.
People say outrageous things when they’re follower farming.
At the end of the day, it doesn't really get you that much if you get 70% of the way there on your initial prompt (which you probably spent some time discussing, thinking through, clarifying requirements on). Paid, deliverable work is expected to involve validation, accountability, security, reliability, etc.
Taking that 70% solution and adding these things is harder than if a human got you 70% there, because the mistakes LLMs make are designed to look right, while being wrong in ways a sane human would never be. This makes their mistakes easy to overlook, requiring more careful line-by-line review in any domain where people are paying you. They also duplicate code and are super verbose, so they produce a ton of tech debt -> more tokens for future agents to clog their contexts with.
I like using them, they have real value when used correctly, but I'm skeptical that this value is going to translate to massive real business value in the next few years, especially when you weigh that with the risk and tech debt that comes along with it.
> and are super verbose...
Since I don't code for money any more, my main daily LLM use is for some web searches, especially those where multiple semantic meanings would be difficult to specify with a traditional search or even compound logical operators. It's good for this, but the answers tend to be too verbose, and in ways no reasonably competent human would be. There's a weird mismatch between the raw capability and the need to explicitly prompt "in one sentence" when it would be contextually obvious to a human.
Imo getting 70% of the way is very valuable for quickly creating throwaway prototypes, exploring approaches and learning new stuff.
However getting the AI to build production quality code is sometimes quite frustrating, and requires a very hands-on approach.
Yep - no doubt that LLMs are useful. I use them every day, for lots of stuff. It's a lot better than Google search was in its prime. Will it translate to massively increased output for the typical engineer (esp. senior/staff+)? I don't think it will without a radical change to the architecture. But that is an opinion.
I completely agree. I found it very funny that I have been transitioning from an "LLM sceptic" to an "LLM advocate" without changing my viewpoint. I have long said that LLMs won't be replacing swathes of the workforce any time soon, and that LLMs are of course useful for specific tasks, especially prototyping and drafting.
I have gone from being challenged on the first point, to the second. The hype is not what it has been.
"I used AI to make a super profitable stock trading bot" --> using fake money with historical data
"I used AI to make an entire NES emulator in an afternoon!" --> a project that has been done hundreds of times and posted all over github with plenty of references
> "I used AI to make a super profitable stock trading bot" --> using fake money with historical data
Stocks are another matter. There were wonder "algorithms" even before "AI". I helped some friends tweak some. They had the enthusiasm and I had the programming expertise and I was curious.
That was a couple years ago. None of them is rich and retired now - which was what the test runs were showing - and I think most aren't even trading any more.
I vibe coded a few ideas I'd had in mind for a while. My basic stack is HTML, single page, localStorage and lightweight JS.
It is really good at this.
Those ideas are UI experiments or small tools that help me get stuff done.
It's also super great at ELI5'ing anything.
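For context, the kind of single-page localStorage pattern described above can be sketched in a few lines. The names (`loadState`/`saveState`, the `"notes"` key) and the in-memory fallback are illustrative assumptions, not from the comment:

```javascript
// All state lives in one object, persisted to localStorage when available.
// In-memory Map fallback lets the sketch also run outside a browser.
const storage = typeof localStorage !== "undefined"
  ? localStorage
  : (() => {
      const m = new Map();
      return {
        getItem: (k) => (m.has(k) ? m.get(k) : null),
        setItem: (k, v) => m.set(k, String(v)),
      };
    })();

// Read state under a key, falling back to a default when nothing is stored.
function loadState(key, fallback) {
  const raw = storage.getItem(key);
  return raw === null ? fallback : JSON.parse(raw);
}

// Serialize and persist state under a key.
function saveState(key, state) {
  storage.setItem(key, JSON.stringify(state));
}

// Usage: a tiny notes tool.
const notes = loadState("notes", []);
notes.push("try the UI experiment");
saveState("notes", notes);
```

The appeal for quick experiments is that there's no backend, no build step, and the whole tool survives a page reload.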
Same result if you copied and pasted from a couple passionate blogs.
Not in the same timeframe. My experiments take an hour.
I actually read through the logs and the code in the rare instances someone actually posts their prompts and the generated output. If I'm being overly cynical about the tech, I want to know.
The last one I did it on was breathlessly touted as "I used [LLM] to do some advanced digital forensics!"
Dawg. The LLM grepped for a single keyword you gave it and then faffed about putting it into json several times before throwing it away and generating some markdown instead. When you told it the result was bad, it grepped for a second word and did the process again.
It looks impressive with all these json files and bash scripts flying by, but what it actually did was turn a single word grep into blog post markdown and you still had to help it.
Some of you have never been on enterprise software sales calls and it shows.
> Some of you have never been on enterprise software sales calls and it shows.
Hah—I'm struggling to decide whether everyone experiencing it would be a good thing in terms of inoculating people's minds, or a terrible thing in terms of what it says about a society where it happens.
"I used AI to write a GPU-only MoE forward and backward pass to supplement the manual implementation in PyTorch that only supported a few specific GPUs" -> https://github.com/lostmsu/grouped_mm_bf16 100% vibe coded.
Pretty much every X non-political/celeb account with 5K+ followers is a paid influencer shill lol.
Welcome to the internet
One of my favorite stories from the dotcom bust is when people, after the bust, said something along the lines of: "Take Pets.com. Who the hell would buy 40lb dogfood bags over the internet? And what business would offer that?? It doesn't make sense at all economically! No wonder they went out of business."
Yet here we are, 20 years later, routinely ordering FURNITURE on the internet, often with "free" delivery.
My point being, sure, there is a lot of hype around AI but that doesn't mean that there aren't nuggets of very useful projects happening.
True, but I think the point of that story is it’s really hard to predict what’s crap and what’s just too early.
It doesn’t guarantee the skeptics are wrong all the time.
"Look at what happened with the internet!" also doesn't mean the same will happen with AI
Neither argument works
There's 0 requirement that [new technology] must follow the path of the internet though. So it's kind of an irrelevant non sequitur.
Pets.com was both selling everything at a loss and spending millions on advertising. It wasn't the concept that was the issue.