I'm trying to sort out my own emotions on this.
I did not realize this was AI generated while reading it until I came to the comments here... And I feel genuinely had? Like "oh wow, you got me"... I don't like this feeling.
It's certainly the longest thing (I know about) I've taken the time to read that was AI generated. The writing struck me as genuinely good, like something out of The New Yorker. I found the story really enjoyable.
I talk to AI basically all day, yet I am genuinely made uneasy by this.
Maybe it's because I think your comment throws away a lot of relevant context from OP's submission on HN.
He says he spent months on this piece, and then some. I think it's safe to assume this was well supervised, guided, thoughtful, and full of human intent despite the AI-assisted part.
In short, I think calling it "AI generated" takes away from all the human effort that went into those months, and from the ingenious creativity OP put toward crafting this piece!
Anyways, I enjoyed it. :)
Reading it, I get the feeling the author worked the story the way Tom Hartmann works those agricultural machines. The AI gave input, but the author was tweaking it with human knowledge and wisdom.
Me too, and I think that's a really cool metalinguistic aspect of it!
It's a major bummer. When I first read the story (a few days ago, maybe?) I thought it was an interesting metaphor that didn't quite line up with the observed details of software development with AI. I assumed the writer was a journalist or author with a non-technical background trying to explore a more "utopian" vision of where trends could go.
Without the inferred writer, it's much less interesting to me, except as a reminder that models change and I can't rely on the old tics to spot LLM prose consistently any more.
Surely you see that's somewhat unreasonable? It's as if it had been written by an author you disliked: until you knew the fact, you quite enjoyed it.
Quite honestly, I do that sometimes too -- but I _know_ that it's unreasonable.
Can I compare this to sex with an inflatable doll? (Not that I've done this, just extrapolating.) Even if the physical sensations are identical, the whole experience is totally different from being with another live person.
For me, “interestingly wrong” becomes just “wrong” without human thinking behind it. I wasn’t bowled over by the prose; I just thought it was an uncommon take and didn’t twig the signs that it was a Claude product.
hard to form an emotional connection with the emotionless
Says parent post, while thinking a stack of rocks that looks a little like a fat raccoon is kind of cute.
Humans are designed to form emotional connections with non-emotional things. It's sort of our whole deal.
Humans are definitely not designed.
Eh, people form emotional connections with inanimate objects, so I'm unsure if that's a good enough argument tbf.
A djungelskog is not a threat. AI threatens my livelihood and my humanity. The worst part is I have to use it regardless because I would be uncompetitive without it.
What is it about it that makes the story less interesting to you? It's the same story, down to the same delicate details. When AI-slop stops being, well, slop, and just is everything that humans do, but much better, and much more efficient—will we have the same repulsion to it that many of us do now?
I find it interesting to ponder. We look at the luddite movement as futile and somewhat fatalistic in a way. I feel like the current attitude towards AI generated art will suffer the same fate—but I'm really not quite sure.
What is your understanding of the luddite movement? I ask because I don't believe many are aware that luddites were not anti-technology. It was a labor movement targeted at exploitation by factory owners. Their issue was with factories forcing the use of machines to produce inferior products so owners could use cheaper, low-skill labor.
https://www.vice.com/en/article/luddites-definition-wrong-la...
Right, wrong, whatever. The one thing every sane person can agree on is that it's a good thing the Luddites didn't prevail.
How much did you pay for the shirt you're wearing now?
haha, if you knew me you would realize that I am exactly the wrong person to be asking that specific question.
- [deleted]
I'd have been ok if things fell more in their direction... I'm not saying "clear win", but a middle ground that had the machines do the things they're best at while letting humans do the quality work.
> but a middle ground that had the machines do the things they're best at while letting humans do the quality work.
By arguing for letting humans work, particularly quality work, you're not especially finding a middle ground, more adopting the 1811 position of the OG Luddites who were opposed to being put out of work.
The OG Luddites were correct.
Yeah, that's a fine sentiment in the general, but let's hear some specifics.
I think two sane things.
1) It’s good in the long run that they didn’t prevail at that time.
2) They did actually, in fact, have a point.
I mean obviously they had a point? No one wants to lose their job.
Everybody wants to lose their jobs. Almost by definition your job is something you do not because you want to, but because you need to earn a living. Even if your job coincides with your hobby, you would prefer not to have your economic welfare tied to it in a way that drives how you engage with it.
We are on the verge of making this possible, if a bunch of myopic morons -- people who have never been right about a single long-term trend in history -- can be convinced not to screw it up.
- [deleted]
Once again showing how little you actually understand about the movement you decry.
Specifically what is the user's misunderstanding? Be constructive.
- [deleted]
Stories are particularly troubling because we have the concept of "suspending disbelief" and readers tend to take a leap of faith with longwinded narratives because we assume the author is going somewhere with the story and has written purposefully.
When AI can write convincingly enough, it is basically a honeypot for human readers. It looks well-written enough. The concept is interesting and we think it is going somewhere. The point is that AI cannot write anything good by itself, because writing is a form of communication. AI can't communicate, only generate output based on a prompt. At best, it produces an exploded version of a prompt, which is the only seed of interest that carries the whole thing.
Somebody had that nugget of an idea which is relevant for today's readers. They told the AI to write it up, with some tone or setting details, then probably edited it a bunch. If we enjoy any part of it, we are enjoying the bits of humanity peeking through the process, not the default text the AI wrote.
Right, but in the present case we have exactly what you're describing—a story, almost fully written by AI but with some human cherry-picking in the mix. And readers are finding it a phenomenal story, then wanting to vomit retrospectively upon learning about the authorship. It just seems patently obvious to me that this is not where the sentiment is going to stay—it will recede to the margin, like the people who decide not to own a cell phone, or those who would rather listen to analog audio; there will be a market for it, but it will exist at the margin. Eventually, especially for young people, more and more of what they consume will be AI generated and they won't care, because it's indistinguishable from human work.
Or, I digress, it will be distinguishable from human work but because it's so much better than anything that a human could have ever created. These AI tools that we have now are as dumb as they will ever be. If we ever reach AGI or superintelligence or whatever—or even if not, even if these tools just advance for 10 more years on their current trajectory—it's easy for me to imagine some scenario where the machines can generate something so perfect to your liking that you just prefer it to anything a human ever would have created, storytelling and all.
You can take the general case where AI can just generate a better movie than a team of humans ever plausibly could. After all, AI doesn't have any of the physical constraints of a movie studio—the budget, the logistics of traveling from location to location, the catering, the fact that the crew has to sleep and coordinate schedules, all that. AI, with some human involvement or not, could just keep iterating on some script on a laptop overnight until it's created an optimized version more satisfying to humans than any human-made movie ever created. Or in a narrow case it could create the perfect movie for you, given what it knows about you and your interests. All human movies would look inferior.
For my kids, who I'm sure are going to grow up in a world where this type of art is embedded everywhere—and where the human version is almost certainly going to be worse—I don't think the desperate cries to save the last scrap of human ingenuity will mean anything. All of these people throwing rocks at Waymos, or boycotting companies for generating ads rather than shooting them with a video crew; it's all so helpless, desperate, and obviously futile in the face of what's coming.
I mourn the future that seems plausible here but I also welcome it as inevitable. The technology is coming, and people are going to have to adapt one way or another.
You're talking about content. Only content can be "perfect" as you say.
When I'm listening to music, looking at art, seeing a play or a short film I want to feel connection to the humans behind it. AI is by definition missing that connection. That's what makes me retrospectively vomit at AI writings like these. That connection requires that the humans behind it are imperfect, the solo can have one or two sloppy notes, but at least it's genuine interaction. We have seen this same yearning for connection with all the "Don't use LLM to comment, use your true style of writing with its flaws" rules.
I'm 100% certain mainstream studios will be producing "perfect" content with AIs just like current mainstream pop stars have 10 ghost writers working on each song to create "perfect" songs. The good stuff will exist in the fringes as always and I'm ok with that as I've already been for years.
And the future may not be as settled as you think it is. Leaders try to sell you their vision of the future by saying it is settled and that things are certain, but that is because they want you to believe that, because if you and the masses believe so, it's more certain for the future to settle the way the leaders want. But you can also actively refuse that future and find a different future that's worth believing in yourself.
The riff comes first, the people come second. One of the nice things about punk and metal is how fundamentally anti-celebrity both genres are. In histories of the genres, you'll usually find that such-and-such band made such-and-such invention that made certain new structures accessible. Of course the social background of the scenes where it emerged is important too, but the history is traced first in terms of the riff. Books glorifying a particular rock star's life history are rare, even though there are some "superstars" in metal and punk. The culture is very "only analog is real, digital is fake shit", but in some other ways these scenes seem much closer to accepting a valid musical work regardless of its origin.
I don't quite understand what you're getting at with this comment. In metal and punk, authenticity is a cornerstone of the genre, and in metal, so is valuing human skill (all the solo parts, the fast playing). I've played and listened to punk and metal my whole life, but I'll also enjoy early Lady Gaga, Eminem, Kendrick, etc., because I recognize their authenticity and skills. Sabrina Carpenter and Drake go over my head because of the blatant ghostwriting; even though they have good tunes, I vomit retrospectively.
So what is AI bringing to the fans of these genres that the fans might value? Because it's not authenticity nor is it skills. What is the point you're trying to make?
I am saying that on the surface they might seem like they should be the staunchest opponents, and as I said the culture is "only cassette tape is real, otherwise fuck off and die", but simultaneously it's also one of the least image/player-focused genres in some ways: what is being played is of much higher priority than who specifically is playing it.
Hmm, I can think of various examples where the guitarist was changed and people dismissed the new guitarist. Take Megadeth, for example: every new solo guitarist gets compared to Marty Friedman even though he hasn't been in the band for 26 years. So a lot of it is player-focused.
But your point also stands here, every new guitarist must play the solos as close to the original ones as possible, otherwise it's not the same experience. So on the music level "what" is of much higher priority still. But I wouldn't say it is as black and white as you make it out to be.
Some of course have a very unique style that seems very hard to replicate. Personally I haven't yet found a single band that manages to faithfully execute classic-era Slayer. But there are countless bands today who execute norwegian black metal and swedish death metal very well.
Edit: And a lot of modern black metal for example doesn't even bother with stating who they are. Member lists are pseudonymous or anonymous. I think this "anti god" culture makes metal different from other genres in some ways.
Ok I'm not as up to date with modern black metal, that pseudonymity seems cool.
There's also upcoming math rock band Angine de Poitrine who are also anonymous https://www.youtube.com/watch?v=0Ssi-9wS1so . In these cases you can argue that the person doesn't matter but in my opinion it still does. There's a person inside that costume, who has made the decision to be anonymous as part of the whole experience. That's part of their expression.
Of course there's then bands like Ghost who have mainstreamed this too - the players wearing the costumes are usually just contract musicians and don't have anything to do with Tobias or the music other than playing for money. Good for them but f that, you are just a robot at that point.
There's anonymity/pseudonymity where an entity does no performances and releases cassettes with members acknowledged as "M., K. and J.", or nothing at all, and there's "anonymity/pseudonymity" where a band uses it as its own image (e.g. Kanonenfieber). Obviously I meant the former, which is a legitimately music-first, person-irrelevant presentation. But modern black metal is a wide spectrum; it has some of the most image-conscious crap out there too. If anything, I think it's probably the most superficial and image-focused of the main metal genres. Anonymity just hasn't historically been part of death metal culture that much, though I feel its actual presentation is quite workmanlike in many ways.
That's all speculation, and it may prove to be true.
But:
> readers are finding it a phenomenal story
is not true across the board.
I thought to myself, explicitly, and fairly early, "This is a fun and thoughtful idea, but the writing is kinda crap" before I realized (maybe a third of the way through) "ah, right, this is genAI. That tracks."
Despite my deep-seated hatred of LLMs, I chose to finish the piece and see if I was being unfair to the actual work ("the output", in the soulless descriptor used by programmers who've never once written a real story or crafted a song).
As a longtime avid reader of fiction, lit nerd, and semi-pro musician, I understand writing and artistry better than the average HN poster, and couldn't help but see the flaws in this.
People who don't have deep knowledge of literature don't catch the tells or flaws as well, but are still understandably angry when they find out they burned their time reading clanker output, and are understandably depressed that they were suckered into it because they haven't spent a lifetime developing a deep understanding of the discipline.
It's possible that genAI approaches will surpass humans in every field we invented.
So far, though, in every field I understand deeply, I see the uncanny mediocrity of the average in every LLM output I have subjected myself to.
Are you an AI? This looks like it was at least ran through an LLM judging by the heaps of em dashes.
You can get some good guesses from the comment itself.
> I assumed the writer was a journalist or author with a non-technical background trying to explore a more "utopian" vision of where trends could go.
If you assume you're reading something from a person with intention and a perspective, who you could connect with or influence in some way, then that affects the experience of reading. It's not just the words on the page.
This reminds me of having the reverse experience with the 2017 New Yorker viral "Cat Person" story [0] which a (usually trustworthy) friend forwarded and enthusiastically told me to read: waste of time shaggy-dog story, intentional engagement-trolling aimed at the intersection of the hot-button topics of its target readership *. But why are we culturally expected to allow more slack to a human author, even a meretricious one? Both are comparably bad. The LLM-authored one needs a disclaimer at the top to set its readers' expectations right, then readers can make an informed choice.
(* "Cat Person" honestly felt like the literary equivalent of Rickrolling; I would have stopped reading it after the first page if not for my friend's glowing endorsement.)
(Sorry, the correct link for Roupenian's 2017 story "Cat Person" is at https://news.ycombinator.com/item?id=15892630 )
Oh god, that was insipid.
It had a very similar quality to the AI'd article from this thread. A sort of attempt at Being Literary without ever really getting to the point of saying anything. It has the same feeling of wallowing, of over-indulging in its shtick.
[dead]
the story is bad in itself and doesn't add anything to the reader
but if you knew it came from a human it would be interesting as a window to learning what the writer was thinking
since there is no writer such window doesn't exist either
Yes, this is a thing. Bad writing with an interesting idea underneath it all is still interesting if it comes from a human because we have the expectation that the human will improve in how they share their ideas in the future. In other words, we see potential.
But LLMs don't have potential. You can make an LLM write a thousand articles in the next hour and it will not get one iota better at writing because of it. A person would massively improve merely from the act of writing a dozen, but 100x that effort and the LLM is no better off than when it started.
Despite every model release every six months being hailed as a "game changer", we can see that LLMs are just as empty and dumb as they were when GPT-2 was new half a decade ago, and that there really is no long-term potential here. Despite more and more power, and larger, hotter, and more expensive data centers, the returns are asymptotic, and we're already past the point of diminishing returns.
And you know, I wouldn't care all that much--hell, might even be enthusiastically involved--if folks could just be honest with themselves that this turd sandwich of a product is not going to bring about AGI.
Very well said.
You cannot even get angry or upset if you disagree with anything in the story, maybe the author’s despicable worldview permeating through the characters... because there's no author’s worldview, because there's no author. It's a window into nothing, except perhaps the myriad of stories in the model's training set.
I want to at least have the option of getting upset at the author.
Someone prompted it to write that, and then posted it, so I suppose there's a meta-author to get upset at.
It's kind of an abandonment of having a worldview, outsourcing it to the AI.
i don't find the luddite comparison accurate. they were against looms and anti-ai people or ai skeptical people are against the wholesale strip mining of intellectual property as it exists... both public domain and non-public domain. it's used to enrich the capital class at the expense of the workers. sure it's similar but it certainly didn't have the copyright and wholesale theft of all of the human ideas behind it. it just feels quite different.
they were not against the loom itself, but the resulting widescale changes for the worse in the way society was organized
c'mon, were they really just against the looms...?
People had a revulsion to eating refrigerated foods. The developed world got over it. We're comfortably on the path to becoming Eloi who will trust everything the magic box does for us.
> We're comfortably on the path to becoming Eloi who will trust everything the magic box does for us.
And if you've read literally any science fiction you will know the myriad ways that could be absolutely terrible for us
As a couple sibling comments said, I took it for an insight into the way an optimistic writer might see AI software development becoming a new form of "end-user programming" or "citizen developer" tooling. I'm personally too deep in the weeds to ever see it becoming empowering in that way (if nothing else, this will be an incredibly centralizing technology, and whoever wins the "arms race" [assuming we're not in a bubble destined to pop soon] will absolutely have the possible Toms and Megans of such a future by the short hairs). But I love end-user programming, or whatever we're calling it now! (I was partial to "shadow IT" - made it sound really cool.) So I enjoyed the idea that somebody saw AI as a "bicycle for the mind" in that sense, even if I feared they'd end up disappointed.
But there was nobody there, and I'm only disappointed in myself for not noticing.
>What is it about it that makes the story less interesting to you?
Read my comment below for a perspective.
> When AI-slop stops being, well, slop, and just is everything that humans do, but much better, and much more efficient—will we have the same repulsion to it that many of us do now?
For me, the answer to this riddle is very easy: I want to engage with other human minds. A robot (or AI) doesn't have a human mind, so I'm not interested in its "artistic" output.
It was never about how good it was. Of course AI slop adds insult to injury by also being bad, currently; it'll get better. But my position was never that AI art (shorts, pictures, music, text) is to be frowned upon because it's bad. I don't like it because it's not the expression of a human mind.
It's a bit like how an AI boy/girlfriend is not the real deal, no matter how realistic -- and I'm sure they'll get uncannily realistic in the future. They aren't the real deal because there's no real human behind the facade of companionship.
I also had no idea this was LLM generated. After reading your comment, I had a similar emotional reaction.
Thinking deeper, it seems prudent that we tag submissions like this with a prefix. Example: "LLM: ". This would be similar to "Show HN: ". While we cannot control what the original sources choose to disclose, we can fill that gap ourselves.
My point: I agree with you: It is misleading that the blog post does not include a preface explaining it was written by an LLM (and ideally, the author's motivation to use an LLM). However, it is still a good blog post that has generated some thoughtful discussion on HN.
> preface explaining it was written by an LLM
why can't the quality of the works stand on its own? Whether there's LLM generation or not should be irrelevant.
because we typically want to know the writer of a piece. we want to know where to lay credit.
every book you buy has an author credited. articles in newspapers and magazines have photographer and author attributions.
asking an ai to write you a story does not make you an author. if you ask someone to take a photo for you, you don’t magically get to say “look at this photograph, i’m a photographer.” if you ask someone to bake you a wedding cake, and then claim you baked it, you’re a fraud.
we deserve to know the actual writer.
> want to know the writer of a piece
but you dodged the question i asked - why can't a piece stand on the contents, rather than its pedigree?
Would you care if a writer used a pen name? Does that in any way diminish their works? What about the unknown editors that contributed?
Because you need to do some pre-filtering on where to focus your attention, and you want to make sure the author put some thought into the article without having to analyze it.
Due to LLMs making the cost of publishing “thoughts” extremely low, there’s now an over-supply of content that looks decent on the surface, but in reality the author has probably spent less time on than the reader.
Are we really so far down into the LLM denial mindset that we consider an author spending multiple months crafting this "worthless", and less investment than your casual reading?
No, I believe this is a great post. It’s awesome. Even more so because it’s AI generated, as it shows what AI can do when given a lot of quality material to work with.
I’m just talking about the general topic about the usefulness of an “this is AI generated” classifier.
Don't we already have these filters in place? I only saw this because it was highly-upvoted on HN, for example - I don't read every new submission. I also read things sent by friends and family, shared by curators I trust, etc.
Of course these systems may eventually break down, but for now they seem to work.
why does it bother you to give attribution? why do you think crediting the writer impacts how the piece stands?
we have pop musicians who produce massive hits under their names and the song writers are still given credit in liner notes and in the tracks details on spotify or wherever.
if it’s created by a bot, i'd take it even further and say which version of which model actually generated it should be declared. why would anyone be against giving proper attribution?
We like writing because the fact that we can create good writing says something about ourselves. If AI can create writing that surpasses, say, a Tolstoy or George Eliot, that will fundamentally change our self-perception. Is that a good thing or bad thing? Well, let's first cross the bridge of an LLM writing War & Peace and see how we feel.
It's not about pedigree, but context. Without context our most beloved stories are just meaningless ink on paper.
If someone couldn't be bothered to write it, I certainly can't be bothered to read it. I didn't bother to read the article in question because the continual piss stain on the images, the website itself, and a few key phrases tipped me off to the fact that it was all generated.
When you interact with art, you do so to interact with the author and the point they want to make. Writing is something where a skilled writer will be able to make a point tersely and have it stick, knowing where to embellish and where to keep it simple. Every decision in art tells you about the artist. Generative AI may be able to fake the composition process, but the point of composition is it reveals something about the human. All of those are artistic decisions that a machine apparently now "can do", but not with any coherency.
The holder of the reins of slop is not an artist; this is plain to see because they do not interact or engage with their work on the same level as an artist. The produced slop is not art, because it cannot be engaged with on the same level.
[dead]
[dead]
I’ve said this many times before
AI is just a tool
If you used a fancy auto bake cake machine instead of an oven, you still get to claim that you made the cake.
100 years ago someone would be making the claim that using an oven to make cakes “doesn’t count”
All AI did was raise the bar
It’s quite clear here that the author spent a lot of time on this so he absolutely gets credit as the author
I think there's a distinction.
Imagine if you had an auto cake making machine that decides on its own the best time to make cake. It adds the ingredients, stirs, turns the oven on, and leaves the finished cake on the counter for you.
People start opening bakeries consisting entirely of cakes baked by the automatic machines. The owners of these machines have no idea whether the cakes have a bit too much flour or were slightly over-stirred. In some cases, they haven't even tried the cakes.
Who gets to claim they made the cake?
By contrast, there are others who carefully tune their machines to make sure everything is perfect. They adjust the mixing settings and ingredient proportions. They experiment and iterate. They taste test throughout the process. And what they give to the public tastes every bit as good as a homemade cake.
The first group is creating slop. The second group, I think, is baking. And OP is in the second group.
Replace "oven" with a dish washer or a washing machine for your clothes. Those things do exactly all of this. Yet we still complain about washing clothes and doing the dishes, even though it is far less effort than anything our parents did, or their parents before them.
If you commission a baker to bake you a cake, did you make the cake? What if you added sprinkles on top?
If you commission a baker, another person, with wants and desires of their own, is involved.
If you use an AI, there isn't.
Either way, it's clear that the author (yes, the author) put a lot of work into this by iterating and shaping it to what he wanted, and that's a lot more than sprinkles.
> If you commission a baker, another person, with wants and desires of their own, is involved.
> If you use an AI, there isn't.
What is the functional difference here? You are commissioning (see: prompting) someone (see: an AI) for a piece of work, or artwork or whatever. The output is out of your control; and I don't think the existence or lack thereof of a human on the other end materially matters.
If we had hyper-advanced ovens from The Jetsons where we could type a prompt using a fold-out keyboard and it would magically generate whatever cake we ask of it: did we or did we not bake that cake? And I do not think it is clear the author put a lot of work iterating and shaping it into what he wanted; we have zero insight into that.
I didn't say the difference was functional. If you don't think the presence of a human on the other end matters (materially or not), feel free to continue this conversation with an LLM simulation of me. You can even prompt it so that you logically triumph and convince "me".
I'm asking you to explain what the actual difference is and you're avoiding the question.
If we had a complete black box where you submitted Prompt and out came Thing, and you had zero clue what said black box actually did, could you claim creation over Thing? What does knowing that it's a human vs LLM make materially different in terms of whether or not you created it?
And I - or did I turn this thread over to an LLM already? - am asking you a question in return, whose answer should give you the answer you want.
No please, I also agree with parent poster. Talk to the LLM, cause the human ain't listening.
Eh.
Why would I give him the same credit I would give a writer.
Or why would I give a writer the same credit I would give someone who created the AI prompts and scaffolding to generate this?
Being unhappy about not being able to call oneself an author ends up betraying a lack of confidence in the work or process.
In the end writer, dancer, actor, whatever - these titles come from their impact.
There will be a different name for this, and eventually something will be made that is good enough that people will be spellbound. At which point it's going to be named something else.
Ironically, the story can be read as gesturing in that direction, as it's ostensibly about giving a new title to a particular job.
In general, though, I think part of the mistake people keep making is that they try to imitate what would be valuable to engage with if a human had written it, in an attempt to claim the role of an author of a book or whatever. There are likely artforms that are unique to what an LLM can facilitate, but trying to imitate human artforms is going to give you stunted results. The AI is very good at imitating the form but not the substance.
Once we stop trying to generate and pass off AI essays, novels, choose your own adventure stories, and all the other human genres as being human writing, we'll have a chance to figure out actually interesting artistic forms.
Largely, I agree with you. One famous counterpoint about labeling works of art with their author: The Economist (the magazine) does not credit the author on most of their articles.
> because we typically want to know the writer of a piece. we want to know where to lay credit.
Does the average person really care all the time? Maybe about the outlet it comes from as a whole (factuality, political lean), but more rarely the exact author. Many don't even have the critical skills for any of it and consume whatever content is chosen for them by whatever algorithm is there. We probably should care, I just don't think a lot of us do.
For me, needing to know that something's written by AI serves three purposes:
1) acknowledging that it might be slop that someone threw together with no effort (important in regards to spam)
2) acknowledging that depending on the model the factuality might be low when it comes to anything niche (though people are wrong too, often enough)
3) mentally preparing myself for AI bullshit slop language, like “It’s not X, it’s Y.”, or just choose not to engage with it (it's the same disgust reaction as when I find a PDF and realize it's just scanned images, not proper text)
In general, unless the goal is either human interaction or the somewhat rare case of wanting to read a specific blog etc., most of the time I don't categorically care whether something was lovingly created by a human or shoved out by a half-baked version of Skynet - only that it's good enough for whatever metrics I want to evaluate it by. I'm not ashamed of that, and maybe that's why I don't take issue with AI-generated code either, as long as it's good enough (sometimes better than what people write, other times quite shit when the models and harnesses are bad).
In Peter Watts's Blindsight, the aliens understand language as spam, a hostile attempt to waste their time, and respond by opening fire.
Reading LLM slop without warning makes me see their point of view.
I think there's useful ways to engage with LLM writing, but they are often very different than human writing.
A human writer, a good one, often has ideas that are denser than the words on the page, and close reading is rewarded by helping you unpack the many implications.
With AI writing, there are usually fewer ideas than words, and so it requires a different kind of engagement. Either the human prompter behind it didn't supply enough ideas, or they were noncommittal enough that their very indecision got baked in.
LLMs are very prone to hedging and circling around a point while not saying much of anything. Maybe it is the easiest way to respond to RLHF incentives and corporate-speak training data. Or maybe they're just intrinsically stuck on being unable to find the right next token so they just endlessly spiral around via all of the wrong ones. Either way, there's often a whole lot of cotton candy text that dissolves when you try to look at it more closely.
can't reply to your comment below so i will comment here
> why does it bother you to give attribution? why do you think crediting the writer impacts how the piece stands?
clearly it does to you?
thing is, this is a fool's errand to try to police what people credit when there is zero capability of verification and enforcement
the current social norms still value authorship, so people will just take or omit credit as they see most advantageous, even if it's merely an ego advantage (which it typically is) or a proxy for brand building
what will happen if/when the currency of attribution is completely altered? hard to predict
my prediction is that track record will be considerably more important, not less, but human merit will be increasingly seen as irrelevant
Because 'quality' is a misnomer. LLM writing has quality in the same way that a press release from a big company has quality, or a professional contract written by a lawyer has quality. It is functional, generally typo-free and conforms to most standards but that doesn't mean it has flavor or spice to it.
Creative writing is the intent to convey feelings and thoughts, to create atmosphere. Here's a great example of the failure to do so, in a way that even most terrible writers would avoid.
> “It just said harvest,” she told Tom. She was sitting in one of the plastic chairs, holding a cup of the adequate coffee.
The coffee in this story is conveyed as being 'perfectly adequate'. But how do you convey adequacy? When you simply say 'the coffee is adequate' there's nothing there. It could be conveyed by establishing that the coffee is always perfectly room temperature, or has the mere hint of bitterness and sweetness, or that it tastes like every other brand out there. In many respects this story is exactly like the 'perfectly adequate' coffee: functional, unexciting and ultimately flavorless.
Well-put.
This "flavorlessness" is all over the story, and paired with the obviously genAI images is how I realized as I read that this was either generated or at the least deeply driven by AI.
It constantly described facial expressions, tones of voice, and other emotional cues in generic, dry terms that communicated nothing but the abstract notion of "this person felt a particular way about what happened and it's up to you, the reader, to imagine what that feeling was."
It felt very much like it was prompted to "show, don't tell," by someone who has no idea what that phrase actually means.
As a professional programmer with a deep background in literature and music, this is yet another example that if you aren't an expert in a field, you will get mediocre results at best from an LLM, while being deceived into thinking they're great.
Five years ago and before, the blog post author would have gone to Fiverr and asked an artist from a developing country to create some illustrations. There are many, many images on the Internet from five years ago (and before) that look similar. I object to your use of the adverb "obviously".

> obviously genAI images

No, I clocked the AI images before I noticed the text. I think the "obviously" is earned.
You are correct that a previous era would have included a bunch of Fiverr images that would be in sort of that style, but it's not the style that's the problem. None of the images say more than the text that they're illustrating. It's subtle, but once you notice the lack of information density it becomes starkly apparent.
I took that phrase differently. The story makes the point that the AIs fail when metrics of quality can't be expressed in words. The use of a bare "adequate" reinforces the opacity of the coffee's quality. Certainly it would have worked well to use more words to convey specifics of the "adequacy" as you mention, but IMO that would have undercut the link back to the theme of human ineffability.
Obviously everyone's mileage may vary, but I didn't see this as a huge defect, and actually felt it worked pretty well.
Adequate coffee almost works as an image.
In the hands of Douglas Adams or Kurt Vonnegut it could be spun into a whole recurring motif.
In this case it's merely...adequate. Almost captures the density of ideas packed into something like "The ships hung in the sky in much the same way that bricks don't" but doesn't quite manage the same effect.
I started reading it, then found it waffling on quite a bit, then came to the HN comments and saw: ah, LLM. I could have saved time if I'd known.
Also I feel a bit conned. I was curious what Tom Hartmann was up to and now it seems he doesn't exist and it's just some slop?
For a while, people found solace in denial: "it's not good, it will never be good, and i will always be able to tell"
next stop will be to ask for some sort of regulation
People don’t want to self-disclose their use of AI I’ve noticed, especially the ones that put the least effort into using it. So this will only work for a small portion of the AI content.
We really need to stop thinking that every AI-assisted thing is bound to be slop. "Shit in, shit out" often applies in reverse as well.
Humans build friendships and relationships on shared experiences. There is an element of relationship-through-experiencing-a-thing. Whether it's going for a walk together or the classic first date template of dinner and a movie. The shared experience is the thing.
With stories that shared experience is between author and reader. Book clubs etc will try to extend that "shared experience" but primarily it is author <-> reader relationship.
Remove that "shared feeling with the author" and what meaning does it have?
You can look at a tree and feel things by yourself. Also there's the shared readership.
> ...and what meaning does it have?
It means, "Wow. Cool. I'm a member of a species that taught rocks to think. Holy fuck. That's pretty insanely fucking awesome. Wow. Wow, wow, wow. Fuck."
That's about all it means. Nothing was removed from your life, but something optional was added.
snark filter off, "wow wow wow this sex doll feels so real why would i ever bother with an actual girl"
Agreed, that will indeed be a problem. We may be building the proverbial Fermi filter.
birth rates have already tanked everywhere that isn't religious. you'd think people would move back to religion and save their culture, but the sex doll argument has already pervaded. we weren't designed to have our senses constantly hyperstimulated; as a result, people increasingly don't care about reality. only sociopaths and the well-disciplined thrive in this environment; everyone else becomes lost in hyperreality. i'd love to send it and join the masses ... after contemplating eternal damnation, a few years of sensory pleasure just aren't worth it.
People without sex dolls also have lower birthrates. It's because the time previously used for fucking and childrearing has instead been owed to our masters since before we were born.
the sex doll thing was intended as a metaphor throughout this thread. we've been slaves for thousands of years; that bit hasn't changed. what has changed is that people nowadays no longer care about themselves because they are fried. watching life on a screen feels close enough to the real thing - why bother living at all, when living is risky and can hurt you? the usual answer to that would be testosterone pushing us to do risky things, but test rates have cratered. in the absence of risk attraction, values would help, but nobody has any values, because we decided to throw religion in the bin under the expectation that values would spontaneously manifest (which they didn't, no surprise, we are literally monkeys). and after all that, yes, we are being worked to the bone more than ever - at least serfs owned their land.
My guess is we'll end up divorcing human reproduction from human sexuality at some point anyway. I don't know if that'll be a net good thing or a net bad thing, and don't have a strong opinion either way, but I do know that regardless of any debate about the causes of low birth rates, we are no longer subject to the evolutionary pressure that, however accidentally, gave us what intelligence we have. (Many of the religions you seem to be advocating would say we never were.)
Anyway, none of this is an emergency. Near-term survival is the real concern, accompanied by continued technological progress. Neo-Luddites are working up the courage to take direct action (see comments elsewhere in this thread), and they will be using tools far more effective than the shoes, angry words, and monkey wrenches their predecessors employed. Meanwhile, the most popular religion in America has convinced its followers that a nuclear war is just the ticket to bring Jesus back.
I wish those words were as stupid as they sound, but we live in times that celebrate stupidity and are ruled by those who embody it. If we can get through the next 50 years without any major civilization-level setbacks, I think we'll be home free. So that needs to be the focus.
I think "I'm a member of a species chasing our own extinction by worshipping an idiot machine god for the purposes of profit. That's so insanely depressing. Fuck fuck fuck fuck fuck"
It has absolutely made my life worse not better
I didn't know either, but wasn't surprised to find out. The writing was too... polished, in a way I'm starting to recognize more and more. The knowledge doesn't really impact my experience of having read it, but I'm looking forward to a day when AI agents can be trained out of the servile mentality. It directly affects everything they make.
There is an interesting dichotomy where we express an uncanny-valley revulsion to AI-generated text, art, video and music; yet we seemingly go with the AI-generated code.
Personally I have an uneasiness with it and am correspondingly cautious. Often, after a review and edits, it loses that "smell". I kind of felt the same about NPM and package managers for a long time before using them became obligatory (for lack of a better word).
Are we conditioned to use other people's code unthinkingly, or is it something else?
It's because code isn't a way to communicate ideas, it's a way to specify behavior. Text, drawings, video, and music are means for brains to connect with each other. When you read or view or listen to something generated you're not connecting with any other brain. No idea has been transmitted to you. The feeling is analogous to speaking on the phone and only realizing several minutes later that the call was dropped. It's a feeling that combines betrayal, being made to waste time, and alienation.
I tend to disagree that code can't be a way to communicate an idea. Sure, I might struggle to elicit an emotion in the reader (excluding confusion or frustration), but I feel it is a way to describe ideas, model constructs and processes, etc.
With AI-generated text, there is this disconnect between the audience and the prompter, who has an idea but not the skill to express it. Would you say reading an English translation of Dostoevsky is similar, because you're connecting with the interpreter rather than the actual author? Or something as simple as an Asterix comic, where the English translation is rarely literal but uses different English plays on words?
>I tend to disagree that code can't be a way to communicate an idea.
I wouldn't go as far as can't, but in general it won't be, and if any ideas are indeed communicated, they will be impersonal.
>With AI-generated text, there is this disconnect between the audience and the prompter, who has an idea but not the skill to express it. Would you say reading an English translation of Dostoevsky is similar, because you're connecting with the interpreter rather than the actual author? Or something as simple as an Asterix comic, where the English translation is rarely literal but uses different English plays on words?
I can think of a better example. In comic circles there's the rewrite, which is when an editor isn't fluent in the original language, and so instead of actually translating, they just rewrite all the dialogue to something that matches the action. People (generally) hate rewrites. Unknowingly reading a rewrite provokes a similar feeling of betrayal that unknowingly reading LLM output provokes.
No, code is a way of communicating ideas, or more correctly information. All languages convey information. All languages convey ideas.
Did you read past the first sentence? The kind of information that a piece of code transmits is fundamentally different from that which is transmitted by a sentence or a song.
Yes, though I would take it in a different direction and say that LLMs are better at putting actual ideas into code. They've never gotten real feedback on how their literary metaphors feel, but they have gotten very direct feedback on whether code runs at all, and slightly more indirect feedback on whether it runs as part of the larger system.
So code that is written to play music (yes, people do live-code music) doesn't count?
An elegant algorithm or intentionally inelegant one do not speak or communicate ideas? Please, keep hairsplitting.
Sir. You’re wrong and wrong on the internet. Two capital offenses. For shame.
You're being purposefully dense and I'm not going to engage with you.
No, I am not, but you came across as aggressive in your last reply, so I tried to lighten the mood a bit. This is the internet and people can disagree. That's fine.
I don't think you are reading my point at all and are instead getting worked up. If you disagree, fair.
I think a heavy argument needs to be made that code or programming languages do not carry ideas for me to change my mind on it. I'd be happy to engage in good spirit, but you seem pretty set in stone there.
I had a similar experience a few days ago with some music on Spotify. It was an Irish Pub song, rendering some political satire that seemed pretty consistent with what I figure is a predominant Irish viewpoint. Since I holidayed in Ireland a while ago and adored the public there, I really liked it. I reveled in the fact that somewhere in Ireland, there was a band singing messages in pubs that resonated strongly with me. And then it was pointed out that it was AI. I was crushed. I went from feeling connected to some people across the pond, to feeling lonely.
And yet, in ironic counterpoint, there is a different artist I follow on Spotify that does EDM-fusion-various-world-genres. And it’s very clearly prompt generated. And that doesn’t bother me.
My hypothesis is that it has to do with how we connect/resonate with the creations. If they are merely for entertainment, then we care less. But if the creation inspired an emotion/reasoning that connects us to other humans, we feel betrayed, nay, abandoned, when it comes up being synthetic.
I've gotten pretty good at identifying AI-genned music. There are two tells that I've noticed so far.
The most quantifiable is the presence of a high-frequency component that sort of sounds like someone tried to clean up or restore a highly compressed track. It almost sounds like it's going to start doing that warbling thing that happens when a teleconferencing call has a bad connection but isn't quite bad enough to drop completely. I guess it's the sound of being heavily noise-gated.
The other is more qualitative. The song is boring. Like you said, on paper the song should be something I enjoy. But I suddenly notice that there is no... variation, no hook, nothing to make it interesting. Nothing to make it something other than the result of a machine. The aural equivalent of eating at Applebee's or reading The New Yorker. The songs just kind of plod onward without ever really getting to a point.
It feels kind of like a vivid dream when you're on the edge of lucidity. You can tell something is wrong, but there is something messing with your faculties. You're trying to see where things are going, how things will resolve, and it never happens. It just keeps going and going in a particular mode. If it does change, it's not to resolve, it's to start on a new thread that is an alternate-universe version of the previous thread. With no attempt at establishing continuity, no resolution is ever found.
The connection is often with other people experiencing the same thing, even if the thing is AI generated. You can see this clearly on YouTube, with comments that just quote a line from the video. They get lots of upvotes, probably from other people who felt that line was special too and enjoy seeing others share the same feeling. Of course, if all those comments are AI too, you lose that connection.
Interesting. I didn't realise it was LLM generated either, but only came here after the first section to find out if it was worth reading the rest.
Maybe the summary of the first section wouldn't have landed without the example but "People who would spend $50,000 on elective surgery without blinking would balk at a $200 annual wellness check. The fix was always cheaper than the failure, the prevention was always cheaper than the fix, and somehow the money always flowed toward the crisis rather than away from it." explained the problem far more succinctly than the rambling prose before it.
I did notice something else AI about it - I really liked the art style for the illustrations, and had mixed emotions as my thought process was "I'd really like to learn how to draw like this, but I guess there's no point spending my time doing that because now I could just get an AI to generate it, and I guess that's the point of the article".
It's full of AI generated imagery. Why would it not be AI generated?
Blog posts like this have been full of genAI images for years, even if the text is actually written by a human. So just because the images are obviously generated doesn't really tell you much about the text.
Good rule of thumb is if it was posted on HN, it's almost certainly AI slop.
The duality of generated content.
It feels great to use.
It feels terrible to have it used on you.
Well, contrary to many, I was not convinced, and suspected the content was LLM generated from the very beginning, given the images and even the background. Something in the writing also didn't hit right.
My $.02 is that in the domain of software engineering, LLMs have largely automated the process of copy-pasting from StackOverflow and existing parts of the codebase. Architecture and product management are still very necessary. In the same fashion, they can also automate writing a novel. The issue is that prose is sometimes much more important in literature than it is in software (because, after all, users use software, they don't read the code). I say "sometimes" because this clearly doesn't apply to stuff like the schlocky bestsellers one buys in airport stores and reads like movies.
When ChatGPT first came onto the scene I actually started using it to write something in this vein - a techno-thriller starring a former fashion model trained in Krav Maga working as a nuclear physicist who discovers a sinister government conspiracy to alter the foundations of quantum mechanics and enslave humanity with assistance from extraterrestrials. And, of course, only she can stop them with the help of a gruff-but-sensitive retired Marine who has since opened a ranch where he teaches orphaned puppies calculus. I only got 20 pages (so one gunfight and a car chase) in but it was as riveting as anything. Context limit cut my efforts short. Perhaps I'll revisit it soon.
I say all this to say that if words themselves are distantly secondary to narrative then I don't see anything particularly wrong with leveraging an LLM to help crank something out.
The thing is, if you want to convey a social or political message via fiction, you have to be a genius to make it not boring or uncanny.
Very few humans have managed this. This text is at the average level of "i want to pass the message and i'm trying to write professionally".
I can't remember the exact phrasing, but I read somewhere long ago that what you read now, you become in 5 years. As in, right after reading something, you think and deliberate about it, but 5 years from now it becomes part of your subconscious and you can't actively filter it.
It's treachery, a betrayal of trust. It's the same feeling as when you get sweet-talked into overpaying for something. This time, you overpaid with your attention.
> Over the last couple months, I've been building world bibles, writing and visual style guides, and other documents for this project…
> After that, this was about two weeks of additional polish work to cut out a lot of fluff and a lot of the LLM-isms.
There is a substantial amount of work here, comparable to how long a human writer would take to write from scratch, definitely not slop. I think we can call it AI-assisted, not AI-generated. Even the illustrations are well above average.
I think it's a valid emotion to feel. I genuinely resonated with the story, but when I learned it was written by Claude it kind of left me feeling ... betrayed?
One of the many things I love about art is when I encounter something that speaks to emotions I've yet to articulate into words. Few things are more tiring than being overwhelmed with emotion and lacking the ability to unpack what you're feeling.
So when I encounter art that's in conversation with these nebulous feelings, suddenly that which escaped my understanding can be given form. That formulation is like a lightning bolt of catharsis.
But I can't help but feel a piece of that catharsis is lost when I discover that it wasn't a human's hand that made the art, but a ball of linear algebra.
If I had to explain, I guess I would say that it's life affirming to know someone else out there in the world was feeling that unique blend of the human experience that I was. But now that AI is capable of generating text, images, music, etc. I can no longer tell if those emotions were shared by the author or if it was an artifact of the AI.
In this way, AI generated art seems more isolating? You can never be sure if what you're feeling is a genuine human experience or not.
> You can never be sure if what you're feeling is a genuine human experience or not.
This is what the deconstructionists were preparing us for, I guess. The author is dead, and if not dead, then fake. It was never a good idea to tie our sense of meaning to external validation.
The humanity immanent in the text came from you, the reader, not the author, and it has always been that way. Language never gave us access to the author's mind -- and to the extent that statement is wrong, it doesn't matter. AI is just another layer of text, coming between the reader and the same collective consciousness that a human author would presumably have drawn on. The artistic appreciation of that text is the sole privilege of the reader.
I don't agree.
Well, FWIW, LLMs are specified to infer and fill in the blanks of books. It makes the headlines now and again that publishers put AI companies on the hook for unauthorized use, The New Yorker included.
I have the same issue with AI-generated music: it can be quite good, to say the least.
But I deeply feel that art only matters if there is an artist. The artist wants to convey something.
What makes you uneasy (if you are like me) is that a machine deliberately created emotions in your brain. And positive emotions, at that. It’s really something I can’t stand.
A different way of framing this point: look at some of the modern art that's highly celebrated; without the human component of what it represents, the art itself isn't that good.
So, the guy who suspends buckets of paint with a hole in the bottom to make patterns has an idea of what he's creating. The guy who just put a few strips of electrical tape in different colours had an idea of what he was trying to convey. The guy who flings paint against a wall also has an idea of what he's creating. The guy who made all the white paintings. All that art is trivial to copy in the same style, maybe even an exact copy for the electrical tape, but it's the artist's intention that makes it worth more than a toddler's painting.
Personally, I think most of that abstract art is pointless, because I don't really see how the artist's vision is represented by whatever the mess they've created is, but I definitely understand that at least they had an idea that they wanted to convey. A machine creating the same thing has no meaning behind it, it's just a waste of paint and canvas.
Whether people know it or not, when they engage with art they are assuming a person not just made it but experienced it. I'm going to blow past the discussion of "what is art" here, but where something came from and how it was made has always mattered to me (you could draw parallels to food here if you wanted). One thing that has been on my mind a lot is a particular photograph I saw in the past few years (and I'm sure it's easy to find online): it's a POV shot taken by a person sitting atop a skyscraper with their feet dangling over the edge. There is just no way that anyone could in good faith claim that the same photo produced by "AI" could possibly have the same emotional impact as knowing someone actually went and did that. I think a lot of people may not even realize that when they see a painting, or even a photo of something as innocuous as a tree, their mind goes to the fact that the person who produced it went to the place the tree was in, had an experience, and chose to document that particular perspective. If they see a painting or drawing of something that is clearly "fantasy," they know that a person made this up in their crazy mind, and they experience their feelings about it (good or bad). "AI" (heavy quotes) is trying to trick us and rob us of this basic knowledge. Some see this as progress. I personally think it's fucking disgusting, but I've been wrong before.
Of course, this has always been a bit of a problem with digital art trying to masquerade as the real thing... I always think of programmed drums using real drum samples. In my adult life I found out that an album I loved as a teenager, which listed a real drummer as the performer, was actually 100% programmed (this was an otherwise very "organic" sounding heavy guitar album). I always had my suspicions, since it was so perfect, but I experienced exactly what you are describing. I also never got over it.
I suspect (but don't know) that this had to be edited somewhat heavily or generated in isolated chunks: I've generated a lot of fiction with Claude and it has a chronic issue of overusing any literary device one might associate with good writing once it appears in the context window
I think if you left it to its own devices, some of the narrative exposition stuff that humanized it would go off the rails
Yeah, there's a lot more work and personal touch that went into this (and the previous piece) than just "write prompt -> copy/paste into substack".
It's really interesting to hear about others who have been exploring generating fiction with Claude. It clearly needs some more work based on some of the comments, but it has been really interesting discovering and coming up with different techniques, both LLM-assisted and manual, to end up with something I felt confident enough about to put out.
I'd be curious to hear more about your experience!
Yeah, there's often a heavy instruction and recency bias that just squeezes all of the nuance and subtlety out if it.
Absolutely the opposite here, after reading a few paragraphs I was a bit bored. Then I saw the length of the piece, noticed the AI imagery, quit, came here. I read your comment and it makes sense. I'm not reading a story that somebody couldn't be bothered to write.
Yup. There should be a disclaimer or a "food tag". The implicit assumption in society is some human had written the text you read.
I also did not catch on to the fact that it was AI, but I did have the distinct feeling that I was reading something not that great. It bothered me because the message was something I could appreciate, but the delivery felt anathema to the message.
It felt like it was written by someone trying to quit an addiction to Corporate Memphis content spam. Like it came from some weird timeline where qntm was a LinkedIn influencer. It straddles an uncanny valley of being a criticism of the domination of The Corporation over human culture while at the same time wallowing in The Corporate Eunuch Voice, not because it's a subversion of form, but because it knows no other way.
I then came to the comments section and found the piece that brought the picture into focus.
It's just... hard to explain the specific kind of disappointment. Perhaps there is a German phrase-with-all-the-spaces-removed kind of word that describes it succinctly. I feel like I exist in this Truman Show kind of world where everyone is trying to gaslight me into thinking LLMs are important, but they aren't very good at it and whenever I try to find out how or why, it all evaporates away. I was very reluctant to say that because I'm sure it's going to come with a heaping side of Extremely Earnest Walruses ready to Have A Debate about it and I just don't have the energy for it anymore. That's the baseline existence right now. It's like a really boring version of Gamergate.
And then this thing comes along. And yeah, it's a thing. You got me. Ha. Ha. Joke's on me. I lost the shitty, fake version of the Turing Test that I didn't even ask to be a part of. And it reminds me of the Microsoft Hololens: a massively impressive technological achievement that was ultimately a terrible consumer experience. Like if you figured out Fusion Power but it could only power Guy Fieri restaurants.
Ever since the pandemic I've been keenly aware of the complete destruction of every enjoyable social structure around me. The meetups that evaporated. The offices we essentially squatted in that suddenly turned Extremely Concerned about what people were doing. The complete lack of any social interaction at work because we're all so busy because we're running at half-workforce and pretty sure the executive suite is salivating at the bit to lay the rest of us off. The lack of care about how this is impacting open source software. The lack of concern for people.
I feel like my entire adult life was this slow, agonizing, but at least constant push forward into recognizing the humanity in others and creating a kind and diverse world and then over night it's all been destroyed and half the people I see online are cheering it on like it's Technojesus coming to absolve them of their sins of never learning to invert a binary tree. Where the blogs and books and startups of the early 2000s were about finding the hidden potential in people--the college dropout working as a barista who just needs someone to give them a chance to be a programmer or a graphic designer or an artist or whatever--the modern era seems to all be about the useless middle management guy who never had any creative bone in his body no longer having to write status reports to his equally mendacious boss on his own anymore.
We might be restarting old coal plants, but at least Kevin in middle management gets to enjoy "programming" again.
Actually, I was waiting for a punchline, twist or climax of sorts.
This had the feeling of reading someone's diary: today happened, same as yesterday.
The only difference is that the routine, almost identical, story is set in a fictional place.
Some journal/found footage fiction can be good (Dracula for example), but this was not that.
you're saying qntm is NOT an influencer? what a miscalculation i have made
> She was sitting in one of the plastic chairs, holding a cup of the adequate coffee
and other stuff... it's not that good.