It’s truly strange that people keep citing the quality of Claude code’s leaked source as if it’s proof vibe coding doesn’t work.
If anything, it’s the exact opposite. It shows that you can build a crazy popular & successful product while violating all the traditional rules about “good” code.
TBH Claude Code is surprisingly shit to use given the technical resources and the amount of money behind it. Looking past the bugs and missing features, it's so obvious it's not built by people who care about the product from a developer/craftsman perspective. It's missing all the signs of polish/care, it feels like someone shipped an internal PoC to prod and kept hacking on it. And now they are just tacking on features to sell more buzzwords and internal prototypes. Classic user facing/commercial software story.
But we (the dev community) are kind of spoiled, because we have a lot of great developer tools that come from people who are passionate about their work, skilled at what they do, and take pride in what they put out. I don't count myself among those people, but I have benefited from their work throughout my career and have gotten used to it in my tooling.
All that being said Opus is hands down the best coding model for me (and I'm actively trying all of them) and I'll tolerate it as long as I can get it to do what I need, even with the warts and annoyances.
> TBH Claude Code is surprisingly shit to use given the technical resources and the amount of money behind it.
What harness would you recommend instead?
pi, oh-my-pi, opencode - none of them have subsidized Claude though
Opencode can't lazy load skills, mcps, or agents and has limitations on context. It's a total nonstarter from my experience using it at work.
The most obvious sign to me from the start that somebody wasn't really paying attention to how the Claude app(s) work is that on iOS, you have to leave the app active the entire time a response is streaming or it will error out.
> It shows that you can build a crazy popular & successful product while violating all the traditional rules about “good” code.
We already knew that. This is a matter of people who didn't know that, or didn't want to acknowledge it, thinking they now have proof that it doesn't matter for creating a crazy popular & successful product, as if it's a gotcha on those who advocate for good practices. When your goal is to create something successful that you can cash out, good practices and quality are/were never a concern. This is the basis for YAGNI, move-fast-and-break-things, and worse-is-better. We've known this since at least Betamax-vs-VHS (although maybe the WiB VHS cultural knowledge is forgotten these days).
WiB is different from Move Fast and Break Things and again different from YAGNI though.
WiB doesn't mean the thing is worse, it means it does less. Claude Code interestingly does WAY more than something like Pi which is genuinely WiB.
Move Fast and Break Things comes from the assumption that if you capture a market quick enough you will then have time to fix things.
YAGNI is simply a reminder that not preparing for contingencies can result in a simpler code base since you're unlikely to use the contingencies.
The spaghetti that people are making fun of in Claude Code is none of these things except maybe Move Fast and Break Things.
VHS was not worse is better. It’s better is better.
Specifically, VHS had both longer recording times and cheaper VCRs (due to Matsushita’s liberal licensing) than Betamax did. Beta only had slightly better picture quality if you were willing to sacrifice recording length per tape. Most Betamax users adopted the βII format which lowered picture quality to VHS levels in order to squeeze more recording time onto the tape. At that point Betamax’s only advantage was a slightly more compact cassette.
Also to correct another common myth, porn was widely available on both formats and was not the cause of VHS’s success over Betamax.
Betamax was arguably better.
It depends which definition of "better" you use. VHS won the adoption race, so it was better there. While Betamax may have been technologically superior, in hindsight we can say it apparently failed to address other key aspects of the technology adoption lifecycle.
Not in ways that the market cared about.
Arguably better quality, but at the cost of being shorter. In the great trade off of time, size, and quality, I think VHS chose a better combination.
Bad code works fine until it doesn't. In my experience, with humans, doing the right thing is worth it over doing the bad thing if your time horizon is a few months. Once you're in years, absolutely do the right thing, you're actually throwing time away if you don't. And I don't mean "big refactor", I mean at-change-time, when you think "this change feels like an icky hack."
For LLMs, I don't really know. I only have a couple years experience at that.
If you write working, functional bad code and put it in maintenance mode, it can keep churning for decades with no major issues.
Everything depends on context. Most code written by humans is indeed, garbage.
The fix time horizon changes too, don't discard that.
I suspect if people saw the handwritten code of many, many, many products that they used every day they would be shocked. I've worked at BigCos and startups, and a lot of the terrible code that makes it to production was shocking when I first started.
This isn't a dig at anyone, I've certainly shipped my share of bad code as well. Deadlines, despite my wishes sometimes, continue to exist. Sometimes you have to ship a hack to make a customer or manager happy, and then replacing those hacks with better code just never happens.
For that matter, the first draft of nearly anything I write is usually not great. I might just be stupid, but I doubt I'm unique; when I've written nice, beautiful, optimized code, it's usually a second or third draft, because ultimately I don't think I fully understand the problem and the assumptions I am allowed to make until I've finished the first draft. Usually for my personal projects, my first dozen or so commits will be pretty messy, and then I'll have cleanup branches that I merge to make the code less terrible.
This isn't inherently bad, but a lot of the time I am simply not given time to do a second or third draft of the code, because, again, deadlines, so my initial "just get it working" draft is what ships to production. I don't love it, and I kind of dread some of the code with my name attached to it at BigCo ever getting leaked, but that's just how it is in the corporate world sometimes.
This is the product that's claiming "coding is a solved problem" though.
I get a junior developer or a team of developers with varying levels of experience and a lot of pressure to deliver producing crummy code, but not the very tool that's supposed to be the state-of-the-art coder.
> and then replacing those hacks with better code just never happens
Yeah, we even have an idiom for this - "Temporary is always permanent"
> you can build a crazy popular & successful product while violating all the traditional rules about “good” code
which has always been true
Yes, and to add, in case it's not obvious: in my experience, the maintenance and mental (and emotional, call me sensitive) costs of bad code compound exponentially the more hacks you throw at it.
Sure, for humans. Not sure they'll be the primary readers of code going forward
I'm pretty sure that will be true with AI as well.
No accounting for taste, but part of what makes code hard for me to reason about is combinatorial complexity: when the number of states the program can be in makes it difficult to know all the possible good and bad states. Combinatorial complexity is something that is objectively expensive for any form of computer, be it a human brain or silicon. If the code is written in such a way that the set of correct and incorrect states is impossible to know, then the problem becomes undecidable.
I do think there is code that is "objectively" difficult to work with.
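To make the state-explosion point concrete, here's a minimal TypeScript sketch (the types and names are illustrative, not from any real codebase): three independent fields permit 2×2×2 combinations, most of them nonsensical, while a discriminated union leaves only the valid states representable.

```typescript
// Modeling a request with independent fields allows 2 * 2 * 2 = 8
// combinations, of which only 3 are meaningful (e.g. `loading: true`
// with `error` set is nonsense a reader still has to think about).
type LooseRequest = { loading: boolean; error: string | null; data: string | null };

// A discriminated union makes the invalid combinations unrepresentable,
// so the compiler, not the reader, rules them out.
type Request =
  | { status: "loading" }
  | { status: "error"; error: string }
  | { status: "done"; data: string };

function render(r: Request): string {
  switch (r.status) {
    case "loading": return "spinner";
    case "error":   return `failed: ${r.error}`;
    case "done":    return `show: ${r.data}`;
  }
}
```

The same program logic fits both shapes; the difference is how many states a reader (human or model) has to hold in their head to verify it.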
All the good practices about strong typing, typically in Scala or Rust, also work great for AI.
If you make sure the compiler catches most issues, AI will run it, see it doesn't build and fix what needs to be fixed.
So I agree that a lot of things that make code good, including comments and documentation, is beneficial for AI.
There are a number of things that make code hard to reason about for humans, and combinatorial complexity is just one of them. Another one is, say, size of working memory, or having to navigate across a large number of files to understand a piece of logic. These two examples are not necessarily expensive for computers.
I don't entirely disagree that there is code that's objectively difficult to work with, but I suspect that the Venn diagram of "code that's hard for humans" and "code that's hard for computers" has much less overlap than you're suggesting.
Certainly with current models I have found that the Venn diagram of "code that's hard for humans" and "code that's hard for computers" has actually been remarkably similar, I suspect because it's trained on a lot of terrible code on Github.
I'm sure that these models will get better, and I agree that the overlap will be lower at that point, but I still think what I said will be true.
What do you think about the argument that we are entering a world where code is so cheap to write, you can throw the old one away and build a new one after you've validated the business model, found a niche, whatever?
I mean, it seems like that has always been true to an extent, but now it may be even more true? Once you know you're sitting on a lode of gold, it's a lot easier to know how much to invest in the mine.
It hasn't always been true, it started with rapid development tools in the late 90's I believe.
And some people thought they were building "disposable" code, only to see their hacks being used for decades. I'm thinking about VB but also behemoth Excel files.
I actually think that might be a good path forward.
I hate self-promotion but I posted my opinions on this last night https://blog.tombert.com/Posts/Technical/2026/04-April/Stop-...
The tl;dr of this is that I don't think the code itself is what needs to be preserved; the prompt and chat are the actual important and useful things here. At some point I think it makes more sense to fine-tune the prompts to get increasingly more specific, just regenerate the code based on that spec, and store that in Git.
This is actually a pretty good callout.
Observability into how a foundation-model-generated product arrived at that state is significantly more important than the underlying codebase, as it's the prompt context that is the architecture.
Yeah, I'm just a little tired of seeing these multi-thousand-line pull requests where no one has actually looked at the code.
The solution people are coming up with now is using AI for code reviews and I have to ask "why involve Git at all then?". If AI is writing the code, testing the code, reviewing the code, and merging the code, then it seems to me that we can just remove these steps and simply PR the prompts themselves.
Yep.
Also, the approach you described is what a number of AI for Code Review products are using under-the-hood, but human-in-the-loop is still recognized as critical.
It's the same way how written design docs and comments are significantly more valuable than uncommented and undocumented source.
AIs struggle with tech debt as much if not more than humans.
I've noticed that they're often quite bad at refactoring, too.
Because LLMs are designed as emulators of actual human reasoning, it wouldn't surprise me if we discover that the things that make software easy for humans to reason about also make it easier for LLMs to reason about.
Now with AI, you're not only dealing with maintenance and mental overhead, but also the overhead of the Anthropic subscription (or whatever AI company) to deal with this spaghetti. Some may decide that's an okay tradeoff, but personally it seems insane to delegate a majority of development work to a blackbox, cloud-hosted LLM that can be rug pulled from underneath of you at any moment (and you're unable to hold it accountable if it screws up)
It’s also possible to sell chairs that are uncomfortable and food that tastes terrible. Yet somehow we still have carpenters and chefs; Herman Miller and The French Laundry.
Some business models will require “good” code, and some won’t. That’s how it is right now as well. But pretending that all business models will no longer require “good” code is like pretending that Michelin should’ve retired its list after the microwave was invented.
Those high-end restaurants are more like art and an exploration of food than something practical like code. The only similarity is maybe research in academia. There are no real industry uses of code that's like art.
I used the extreme of the spectrum, I can’t imagine you’re arguing that food is binary good / bad? There’s a litany of food options and quality, matching different business models of convenience and experience.
Research in academia seems less appropriate because that’s famously not really a business model, except maybe in the extractive sense
Not only true but I would guess it's the normal case. Most software is a huge pile of tech debt held together by zip-ties. Even greenfield projects quickly trend this way, as "just make it work" pressure overrides any posturing about a clean codebase.
long ago, wordpress plugins were often a proper mess
Still, talk about "good" code exists for a reason. When the code is really bad, you end up paying the price by having to spend more and more time to develop new features, with greater risk of introducing bugs. I've seen that at companies in the past, where bad code meant less stability and more time to ship the features we needed to retain customers or get new ones.
Now whether this is still true with AI, or if vibe coding means bad code no longer has this long-term stability and velocity cost because AIs are better than humans at working with bad code... we don't know yet.
It depends on the urgency. Not every product is urgent. CC arguably was very urgent; even a day of delay meant the competitors could come out with something slightly more appealing.
Not according to some on HN. They consider it impossible to create a successful business with imperfect code. Lol
A cornerstone of this community is "if you're not embarrassed by the first release, you've waited too long to release", which is a recognition that perfect code is not needed to create a successful business. That's why Show HN exists.
See also Salesforce, Oracle, SAP
Wordpress hides behind a cabinet
One truism about coding agents is that they struggle to work with bad code. Code quality matters as much as always, the experts say, and AI agents (left unfettered) produce bad code at an unprecedented rate. That's why good practices matter so much! If you use specs and test it like so and blah blah blah, that makes it all sustainable. And if anyone knows how to do it right, presumably it's Anthropic.
This codebase has existed for maybe 18 months, written by THE experts on agentic coding. If it is already unintelligible, that bodes poorly for how much it is possible to "accelerate" coding without taking on substantial technical debt.
Still, it's probably true that Claude Code (etc) will be more successful working on clean, well-structured code, just like human coders are. So short-term, maybe not such a big deal, but long-term I think it's still an unresolved issue.
I imagine it is way more affordable in terms of tokens to implement a feature in a well organized code base, rather than a hacky mess of a codebase that is the result of 30 band-aid fixes stacked on top of each other.
This product rides a hype wave. This is why it is crazy popular and successful.
The situation there is akin to Viaweb - Viaweb also rode hype wave and code situation was awful as well (see PG's stories about fixing bugs during customer's issue reproduction theater).
What did Viaweb's buyer do? They rewrote thing in C++.
If history rhymes, then buyer of Anthropic would do something close to "rewrite it in C++" to the current Claude Code implementation.
This is also why they had to release it quickly. They got the first mover advantage but if they delayed to make its code better, a competitor could have taken the wave instead of them.
I don't disagree with your general premise that eventually it'll just be rewritten, but I have to push back on the idea that Anthropic will be acquired. Their most recent valuation was $380B, and even if they wanted to be acquired (which I doubt) essentially no company has the necessary capital.
Any company worth more could (in principle) acquire it with a share swap [1]. Even a smaller company could buy it with an LBO [2].
Not AI, but a perfect example is Cloudflare. They have implemented the public suffix list (to check whether a domain is valid) 10 different times in 10 different ways. In one place, they have even embedded the list in the frontend (Pages custom domains). You report issues, they fix that one service, and their own stuff isn't even aware that it exists in other places.
Meta has four different implementations of the same page to create a “page” for your business… which is required to be able to advertise on any of their services.
Each one is broken, doesn’t have working error handling, and prevents you from giving them money. They all exist to insert the same record somewhere. Lost revenue, and they seem to have no idea.
Amazon's flagship iOS app has had at least three highly visible bugs for years. They're like thorns in my eye sockets every time I use it. They don't care.
These companies are working with BILLIONS of dollars in engineering resources, unlimited AI resources, and with massive revenue effects for small changes.
Sometimes the world just doesn’t make sense.
It's just lazy engineering. They get assigned a task, and they must implement it or fix it to keep their job. Proper implementation takes more knowledge, more research, and more brain pressure.
AI could play a big role here. Husky (git hooks) but AI: it would score lazy engineering. You lazily implement enough times, you lose your job.
"Wildly successful but unpolished product first-to-market with a new technology gets dethroned by a competitor with superior execution" is a story as old as tech.
1. Vibe coding is a spectrum of just how much human supervision (and/or scaffolding in the form of human-written tests/specs) is involved.
2. The problem with "bad code" has nothing to do with the short-term success of the product but with the ability to evolve it successfully over time. In other words, it's about long-term success, not short-term success.
3. Perhaps most importantly, Claude Code is a fairly simple product at its core, and most of its value comes from the model, not from its own code (and the same is true on the cost side). Claude Code is a relatively low-stakes product. This means that the problems caused by bad code matter less in this instance, and they're managed further by Claude Code not being at the extreme "vibey" end of the spectrum.
1 is definitely false right now. I gave an LLM specs, tests, full datasets, and reference code to translate, and it still produced garbage code and fell flat on its face. I just spent one week translating a codebase from Go to C++ and had to throw the whole thing out because it introduced some horrible bugs that it could not fix, even after burning $500 worth of tokens with me babysitting it. As I said, it had everything at its disposal: tests, a reference impl, lots of data to work with. I finally got my lazy ass to implement it myself, and lo and behold, I did it in 2 days with no bugs (that I know of), and the code quality is miles better than that undigested vomit. The codebase was a protocol library for decoding network traffic that involved a lot of bit twiddling, flow control, Huffman table compression, mildly complicated stuff. So no, if you want working non-trivial code that you can rely on, definitely don't use an LLM to do it. Use it for autocomplete and small bits of code, but never let the damn thing do the thinking for you.
The very definition of "vibe coding" is using AI to write software and not even look at the code it produces.
People use two definitions.
There's this definition of LLM generation + "no thorough review or testing"
And there's the more normative one: just LLM generation.[1][2][3]
"Not even looking at it" is very difficult as part of a definition. What if you look at it once? Or just glance at it? Is it now no longer vibe coding? What if I read a diff every ten commits? Or look at the code when something breaks?
At which point is it no longer vibe coding according to this narrower definition?
[1] https://www.collinsdictionary.com/dictionary/english/vibe-co...
[2] https://www.merriam-webster.com/dictionary/vibe%20coding
What I'm missing so far is how they produced such awful code with the same product I'm using, which definitely would have called out some of those issues.
Perhaps the problem is getting multiple vibe-coders synced up when working on a large repo.
I suspect a lot of it is just older, before Opus 4.5+ got good at calling out issues.
You can, but:
- Good code is what enables you to be able to build very complex software without an unreasonable number of bugs.
- Good code is what enables you to be responsive to changing customer needs and times. Whether you view that as valuable is another matter though. I guess it is a business decision. There have been plenty of business that have gone bust though by neglecting that.
Good code is for your own sanity, the machine does not care.
This, 100x.
I do M&As at my company, as a CTO. I have seen lots of successful companies' codebases, and literally none of them is elegant. Including very profitable companies with good, loved products.
The only good code I know is in the open source domain and in the demoscene. The commercial code is mostly crap - and still makes money.
This kinda puts it into words: most of us naturally expected 2025-era LLMs to be able to generate OSS / demo / high-craft code. Not messy commercial code.
It kind of reminds me of grammar-police type personalities. They are so hung up on the fact that it reads "ugly" that they can't see the message: this code powers a rapidly growing $400B company. They admit refactoring is easy, but fail to realize Anthropic probably knows that too and it's just not a priority yet.
They won’t stay 400B for long, and Claude Code will have no effect on that.
The underlying model powers the valuation.
Not the front end
You can send a submarine down to crushing depths while violating all the traditional rules about "good" engineering, too.
Right, and often the tested depth isn't the maximum. So you slowly acclimate to worse and worse code practices if the effort needed to undo them is the same as the effort of doing them right.
Yes, that is how Facebook, Yahoo, and many other companies started out. But they rewrote their code when it became too big to be maintainable. The problem with shoddy code is not necessarily that it doesn't work but that it becomes impossible to change.
Makes me stare at mid nineties After Effects’ core rendering engine
devaluing craftsmanship is fundamentally insulting.
There's also a business incentive for code produced by LLM companies to be hard to maintain. So you keep needing them in the future.
It's basically shifting work to future people. This mess will stop working and will introduce unsolvable, obscure bugs one day, and someone will actually have to look at it.
It has already cost many developers months and hundreds of dollars' worth of tokens because of a bug. There will be more.
I'd imagine the AI engineers on million dollar TC are not vibe coding the models though, which is the actual sauce.
Yes, that plus having tens of billions in Gulf money certainly helps you subsidize your moronic failures with money that isn't yours while you continue to fail to achieve profitability on any time horizon within a single lifespan.
Also Claude owes its popularity mostly to the excellent model running behind the scenes.
The tooling can be hacky and of questionable quality yet, with such a model, things can still work out pretty well.
The moat is their training and fine-tuning for common programming languages.
>> Also Claude owes its popularity mostly to the excellent model running behind the scenes.
It's a bit of both. Claude Code was the tool that made Anthropic's developer mindshare explode. Yes, the models are good, but before CC they were mostly just available via multiplexers like Cursor and Copilot, via the relatively expensive API.
Huh what moronic failure did Anthropic do? Every Claude Code user I know loves it.
I don’t know about moronic, but:
I don't know if the comment was referring to this, but recently people have been posting about them requiring their new hire Jarred Sumner, author of the Bun runtime, to first and foremost fix memory leaks that caused very high memory consumption in Claude's CLI. The original source was them posting about the matter on X, I think.
And at first glance, none of it was about complex runtime optimizations not present in Node, it was all "standard" closure-related JS/TS memory leak debugging (which can be a nightmare).
I don't have a link at hand because threads about it were mostly on Xitter. But I'm sure there are also more accessible retros about the posts on regular websites (HN threads, too).
Ah I believe codex has similar issues. Terrible code quality but goes to show it doesn't really matter in the end.
> it doesn't really matter in the end
if you have one of the top models in a disruptive new product category where everyone else is sprinting also, sure..
Code quality never really mattered to users of the software. You can have the most <whatever metric you care about> code and still have zero users or have high user frustration from users that you do have.
Code quality only matters in maintainability to developers. IMO it's a very subjective metric
Yes that was pretty much my own takeaway, too.
After some experience, it feels to me (currently primarily a JS/TS developer) like most SPAs are riddled with memory leaks and insane memory usage. And, while it doesn't run in the browser, the same thing seems to apply to the Claude CLI.
Lexical closures used in long-living abstractions, especially when leveraging reactivity and similar ideas, seem to be a recipe for memory-devouring apps, regardless of whether browser rendering is involved.
The problems metastasize because most apps never run into scenarios where it matters; a page reload or exit is always close enough on the horizon to deprioritize memory usage issues.
But as soon as there are large allocations, such as the strings involved in LLM agent orchestration, or in other non-trivial scenarios, the "just ship it" approach requires careful revision.
Refactoring shit that used to "just work" with memory leaks is not always easy, no matter whose shit it is.
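For what it's worth, the leak pattern being described usually looks something like the following TypeScript sketch (the `Emitter`, `leaky`, and `frugal` names are hypothetical, not from Claude's actual source): a long-lived subscription registry holds listener closures, and any closure that captures a whole response object pins its possibly huge body in memory.

```typescript
type Listener = () => string;

// A long-lived subscription registry, as in any reactive/event-driven app.
class Emitter {
  private listeners: Listener[] = [];
  on(fn: Listener) { this.listeners.push(fn); }
  emit(): string[] { return this.listeners.map(fn => fn()); }
}

const emitter = new Emitter();

function leaky(response: { id: string; body: string }) {
  // The closure captures all of `response`, so the (possibly huge)
  // `body` stays reachable for the emitter's entire lifetime.
  emitter.on(() => response.id);
}

function frugal(response: { id: string; body: string }) {
  const id = response.id; // copy only the small field the closure needs
  emitter.on(() => id);   // `response` (and its body) can now be collected
}

frugal({ id: "req-1", body: "x".repeat(1_000_000) });
```

The fix is mechanical once spotted; the hard part is that nothing visibly breaks until the captured allocations get large.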
The people who don’t love it probably stopped using it.
You don’t have to go far on this site to find someone that doesn’t like Claude code.
If you want an example of something moronic, look at the RAM usage of Claude Code. It can use gigabytes of memory to work with a few megabytes of text.
Recently there was a bug where CC would consume day/week/month quota in just a few hours, or hundreds of dollars in API costs in a few prompts.
I've used and hate it, it's garbage.
There is right now another HN thread where a lot of users hate Claude Code.
To be fair, their complaints are about very recent changes that break their workflow, while previously they were quite content with it.
There’s a sample group issue here beyond the obvious limitations of your personal experience. If they didn’t love it, they likely left it for another LLM. If they have issues with LLMs writ large, they’re going to dislike and avoid all of them regardless.
In the current market, most people using one LLM are likely going to have a positive view of it. Very little is forcing you to stick with one you dislike aside from corporate mandates.
There have certainly been periods of irrational exuberance in the tech industry, but there are also many companies that were criticized for being unprofitable which are now, as far as I can tell, quite profitable. Amazon, Uber, I'm sure many more. I'm curious what the basis is to say that Anthropic could never achieve profitability? Are the numbers that bad?
your prediction is going to be wrong, even with all those caveats
99.999999% of products can't get away with what Anthropic is able to - this is a one in a billion disruptive product with minimal competition, and its success so far is mostly due to Claude the model, not the agent harness
Also, many of the complaints seem more like giddy joy than anything.
The negative emotion regex, for example, is only used for a log/telemetry metric. Sampling "wtf?" along would probably be enough. Why would you use an agent for that?
I don't see how a vibe-coded app is freed from the same trade-offs that apply to a fast-moving human-coded one.
Especially since a human is still driving it, thus they will take the same shortcuts they did before: instead of a formal planning phase, they'll just yolo it with the agent. Instead of cleaning up technical debt, they want to fix specific issues that are easy to review, not touch 10 files to do a refactor that's hard to review. The highest priority issues are bugs and new integrations, not tech debt, just like it always was.
This is really just a reminder of how little upside there is to coding in the open.
I think the thing is that people expect one of the largest companies in the world to have well written code.
Claude Code’s source code is fine for a 1-3 person team. It’s atrocious for a flagship product from a company valued at over $380 BILLION.
Like if that’s the best ai coding can do given infinite money? Yeah, the emperor has no clothes. If it’s not the best that can be done, then what kinda clowns are running the show over there?
The difference here is that everyone else in this product category are also sprinting full steam ahead trying to get as many users as they can
If they DIDN'T heavily vibe-code it they might fall behind. Speed of implementation short term might beat out long-term maintenance and iteration they'd get from quality code
They're just taking on massive tech debt
I just think this is the nature of all software, and it was wrong to assume AI fundamentally changes it.
Seems like you're also under the impression that privately developed software should be immaculate if the company is worth enough billions, but you'd be wrong about that too.
Yes, you would expect that a company paying millions in TC to the best software developers on the planet could produce a product that is best in class, and that you would get code quality for free. Except it's regularly beaten in benchmarks and user validation by open-source agents, some built by a single person (pi), while its horrible code quality leads to all sorts of bad UX and buggy behaviour.
Either they're massively overpaying some scrubs to underperform with the new paradigm, or they are squeezing every last drop out of vibe coding and this is the result.
Do we know if the original code was vibe-coded? It's like a chicken-and-egg dilemma.
It's not a chicken and egg dilemma, the model can be used independently of Claude to write code, the heavy lifting is still done on their servers.
I read these posts and I wonder how many people are this delusional or dishonest. I have been a programmer for 40 years, and in most companies 90% of coders are so-called Stack Overflow coders or Google coders. Every coder who is honest will admit it, and AI is already better than those 90%. FAR better. At least most influencer coders are starting to admit that the code is actually awesome, if you know what you are doing. I am more of a code reviewer now, and I plan the implementation, which is far more exciting than writing the code itself. I have the feeling most feel the way I do, but there are still those Stack Overflow coders who are afraid to lose their jobs. And they will.
Honestly, for such a powerful tool, it's pretty damn janky. Permissions don't always work, hitting escape doesn't always register correctly, and the formatting breaks on its own, to name a few of the issues I've had. It's popular and successful, but it's got lots of thorns.
This is a really wrong perspective on software. Short term monkey style coding does not produce products. You might get money but that is not what it is about.
This is similar to reckless builders in Turkey saying “wow, I can make the same building, sell it for the same price, but spend way less” and then millions of people becoming victims when there is an earthquake.
This is not how responsible people should think about things in society
> This is a really wrong perspective on software. Short term monkey style coding does not produce products. You might get money but that is not what it is about.
Getting money is 100% what it is about, and Claude Code is a great product.
Nobody rewards responsibility though. It's all about making number go up.
...go up as fast as possible.
> This is a really wrong perspective on software. Short term monkey style coding does not produce products. You might get money but that is not what it is about
You're not alone in thinking that, but unfortunately I think it's a minority opinion. The only thing most people and most businesses care about is money. And frankly not even longterm, sustainable money. Most companies seem happy to extract short term profits, pay out the executives with big bonuses, then rot until they collapse
I found that to be true years ago when I spooled the source of the Twitch leaks.
To me it said, clearly: nobody cares about your code quality other than your ability to ship interesting features.
It was incredibly eye-opening to me, I went in expecting different lessons honestly.
The model is the product.
It shows that you can have a garbage front end if people perceive value in your back end.
It also means that any competitor that improves on this part of the experience is going to eat your lunch.
It's a buggy POS though; "popular and successful" have never been indicators of quality in any sense.
I think this is a pretty interesting comment because it gets to the heart of differing views on what quality means.
For you, non-buggy software is important. You could also reasonably take a more business centered approach, where having some number of paying customers is an indicator of quality (you've built something people are willing to pay for!) Personally I lean towards the second camp, the bugs are annoying but there is a good sprinkling of magic in the product which overall makes it something I really enjoy using.
All that is to say, I don't think there is a straightforward definition of quality that everyone is going to agree on.
OK, well, if you'd like to trade in $14 billion of revenue for better quality, feel free.
Value to customer. Literally the only thing that matters.
Value isn't a one-shot, though. Value sustained over time is what matters.
Well, if unmaintainable code gets in the way of the "sustained over time" part, then that is still a real problem.
Hardly. Claude Code is basically just a wrapper around an LLM with a CLI.
Obviously it does some fairly smart stuff under the hood, but it's not exactly comparable to a large software project.
But to your point, that doesn't mean you can't vibe code some poorly built product and sell it. But people have always been able to sell poorly built software projects. They can just do it a bit quicker now.
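To make the "wrapper" point concrete, here's a toy sketch of the kind of agent loop such a harness runs: send the conversation to the model, execute any tool call it asks for, feed the result back, repeat until it answers. Everything here (`fake_model`, `run_tool`, the message shapes) is a hypothetical stand-in, not Claude Code's actual internals; a real harness would call a provider API and gate tools behind permission prompts.

```python
import subprocess

def fake_model(messages):
    """Stand-in for a real LLM API call. Hypothetical: asks for one
    shell command, then answers once it sees a tool result."""
    last = messages[-1]["content"]
    if "tool_result" in last:
        return {"type": "answer", "text": "Done, output is above."}
    return {"type": "tool_call", "tool": "shell", "args": {"cmd": "echo hello"}}

def run_tool(tool, args):
    """Execute a tool request from the model. Real harnesses sandbox
    this and ask the user for permission first."""
    if tool == "shell":
        out = subprocess.run(args["cmd"], shell=True,
                             capture_output=True, text=True)
        return out.stdout.strip()
    raise ValueError(f"unknown tool: {tool}")

def agent_loop(prompt, max_steps=5):
    """Core loop: model -> tool -> model, until a final answer."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = fake_model(messages)
        if reply["type"] == "answer":
            return reply["text"]
        result = run_tool(reply["tool"], reply["args"])
        messages.append({"role": "user", "content": f"tool_result: {result}"})
    return "step limit reached"
```

The loop itself really is that small; the value (and the differentiation between harnesses) lives in prompting, context management, and tool design around it.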
>Hardly. Claude Code is basically just a wrapper around an LLM with a CLI.
I don't know why people keep acting like harnesses are all the same, but we know they aren't, because people have swapped them out with the same models and gotten vastly different results in code quality and token use.
I think it is crazy popular for the model and not the crappy vibe code.
> It shows that you can build a crazy popular & successful product while violating all the traditional rules about “good” code.
That was always the case. Landlords still want rent, the IRS still has figurative guns. Shipping shit code to please these folks and keep the company alive will always win over code quality, unless the system can be edited to financially incentivize code quality. The current loss function on society is literally "ship shit now and pay your taxes and rent".
> It shows that you can build a crazy popular & successful product while violating all the traditional rules about “good” code.
The product is also a bit wonky and doesn't always provide the benefits it's hyped for. It often doesn't even produce any result for me, just keeps me waiting and waiting... and nothing happens, which is what I expect from a vibe coded app.
Yes, just get hundreds of billions of dollars in investments to build a leading product, and then use your massive legal team to force the usage of your highly subsidised and marketed subscription plan through your vibe coded software. This is excellent evidence that code doesn't matter.
> Yes, just get hundreds of billions of dollars in investments to build a leading product, and then use your massive legal team to force the usage of your highly subsidised and marketed subscription plan through your vibe coded software.
What? Your comment makes absolutely zero sense. Legal team forces people to use Claude Code?
I know this isn't your point, but Anthropic has raised about $70 billion, not "hundreds of billions".
And they don't need a massive legal team to declare that you can't use their software subscription with other people's software.
I don't think anyone who used Claude Code in the terminal had anything good to say about it. It was people using it through VS Code that had a good time.
I have used Claude Code in the terminal to the tune of ~20m tokens in the last month and I have very little to complain about. There are definitely quirks that are annoying (as all software has, including VS Code or JetBrains IDEs), but broadly speaking it does what it says on the tin, in my experience.
I prefer using it via the terminal. Might be anchoring bias, but I have had issues with slash commands not registering and hooks not working in the plugin.