Sorry for the LLM flavor in the article. It's valid criticism and I will rewrite it when I get the chance. I just wanted to share the story and didn't have time to write it completely from scratch, plus I'm not that great of a writer. I thought filtering my thoughts through an LLM editor would eliminate the distraction of my poor writing abilities, which I think worked for most people. For others, it created another distraction, ragebait in fact, which was not my intention. So between working 80 hours a week at the prompt factory and raising two kids I will find some time to de-ragebaitify the article, although it seems to have unintentionally propelled it to the front page, for which I am admittedly thankful.
I'm sorry that the focus is on whether this article was written by an LLM rather than on the fact that you spent years on a labor of love. It's an excellent effort, and I don't care whether the article about it was written by an LLM or not; I enjoyed it.
If this comment wasn’t from an LLM, you write well enough to not need one butchering your text.
I'm sensitive to detecting LLM writing and it wasn't distracting at all reading this. It read well! Awesome work.
I agree with this completely.
I would say you shouldn't apologize; that's what AI tools are for, to help us humans. Instead of rewriting manually, try a tool specialized for writing, such as bookswriter.xyz or sudowrite.
There certainly are arenas where LLM writing provokes adverse reactions, and in general I think people are becoming less tolerant of it.
Personally, I often find the smell of AI annoying, but I don't mind the way you used it in this article, and after all, there are some good use cases for AI writing.
I assume it will become easier and commonplace to configure LLMs to produce 'de-ragebaitified' writing styles.
Pretty soon I reckon we'll be so inundated with AI content in all media that it simply won't be possible or rational to be offended by it; it will just become our new reality, our new world.
It's definitely a time/energy vs quality tradeoff. You say you're not that great of a writer, and I'll give you the benefit of the doubt on that. But I can tell that your natural narration style is much higher quality and more enjoyable to read than what an LLM can generate (even if it's trying to copy your personal style).
It's a tough tradeoff for me both as a consumer and as a do-er. I am very sensitive to LLM-isms. Like many other millennials (even if perhaps not quite most), I grew up online from a young age, when text was the only viable communication medium, so I learned to notice incredibly small nuances in how someone writes and use those nuances to infer/personify the narrator. LLMs not only stick out like a sore thumb, their language actually "jams up" my 'text-personifier' neurological circuits. It's like my brain is saying "WTF? Why can't I synthesize any reasonable model of the person who wrote this?" the entire time, even if I know it's AI. That's frustrating, exhausting, and alienating.
So yeah, as I said: it's a tough tradeoff for me both as a consumer and as a do-er. I'm glad you used an LLM to do the write-up so that it shows up here and I can enjoy the work you did. I often use LLMs to write documentation at my startup, both for my own reference and for my cofounders. I don't like it, but it's better than not doing it, and sometimes it's better to spend that time on other things, especially when the thing I'm documenting is subject to change very shortly.
I think the sweet spot, for me, is this:
If you're going to write it with an LLM, do so unapologetically. Put a disclaimer at the top. Understand that what you are delivering to your audience is not the LLM output, but whatever output was generated from your own input (work, vision, ingenuity, perseverance). Keep the LLM-generated content concise, sharing only the narrative and information consumers need to understand the actual work product. The less slop I have to struggle through, the better; LLMs are absolutely awful at narration. And then make it easy to explore your actual work product!
I realize my strategy might cause posts to never reach the front page. I hope that our audiences can understand that this might be the best compromise and come to accept it in some cases. I will continue to point out when HN posts show strong signs of being LLM-generated (as judged by my own tuned sense of nuance, empathy, and theory of mind, not whether they use em-dashes), but the intent isn't to tell people "this isn't worth reading". The intent of disclosing LLM generation is to inform people that the best way to consume the content is to switch into their personal "I'm reading LLM-generated content" mode and experience it through that lens.
Interestingly, my startup seems to have taken a somewhat similar strategy with vibe-coding. We're all aware that vibe-generated codebases are objectively worse, harder to read, and harder to maintain than our best hand-written code. It tends to fail on dumb edge cases and just doesn't have the "vision" that hand-written code would, because it glosses over decisions that we'd have paused and thought about for a while before adjusting our vision and proceeding. But doing the initial proof-of-concept or prototype with LLMs greatly speeds up the period of exploration where we go "We're pretty sure there's a good way to do this, and we're pretty sure we know what that way is, but there are a few unknowns that need to be proven." With hand-coding, those "unknowns" can take a long time to work through. With vibe-coding, we can try several different strategies, learn about the reality of implementing those strategies, and then go back and hand-write something more maintainable from scratch once we're pretty sure we've landed on the approach we judge will be "stable". The timeline/priority for converting vibe-code to hand-code depends on how long we expect that code to last, how central it is to the system, and how important it is for humans to be able to debug, maintain, and interface with it.