I write documentation for a living. Although my output is writing, my job is observing, listening and understanding. I can only write well because I have an intimate understanding of my readers' problems, anxieties and confusion. This decides what I write about, and how I write about it. This sort of curation can only come from a thinking, feeling human being.
I revise my local public transit guide every time I experience a foreign public transit system. I improve my writing by walking in my readers' shoes and experiencing their confusion. Empathy is the engine that powers my work.
Most of my information is carefully collected from a network of people I have a good relationship with, and from a large and trusting audience. It took me years to build the infrastructure to surface useful information. AI can only report what someone bothered to write down, but I actually go out in the real world and ask questions.
I have built tools to collect people's experiences at the immigration office. I have had many conversations with lawyers and other experts. I have interviewed hundreds of my readers. I have put a lot of information on the internet for the first time. AI writing is only as good as the data it feeds on. I hunt for my own data.
People who think that AI can do these things have an almost insulting understanding of the jobs they are trying to replace.
The problem is that so many things have been monopolized or oligopolized by equally mediocre actors that quality ultimately no longer matters; it's not like people have any options.
You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer, however, has an immediate and quantifiable effect on the budget.
Apply the same to software (have you seen how bad tech is lately?) or basically any vertical with a nontrivial barrier to entry, where someone can't just say "this sucks and I'm gonna build a better one in a weekend".
You are right. We are seeing a transition from the user as a customer to the user as a resource. It's almost like a cartel of shitty treatment.
I don't work for the public transit company; I introduce immigrants to Berlin's public transit. To answer the broader question: good documentation is one of the many little things that affect how you feel about a company. The BVG clearly cares about that, because their marketing department is famously competent. Good documentation also means that fewer people will queue at their service centre and waste an employee's time. Documentation is the cheaper form of customer service.
Besides, how people feel about the public transit company does matter, because their funding is partly a political question. No one will come to defend a much-hated, customer-hostile service.
Counterpoint - I think it’s going to become much easier for hobbyists and motivated small companies to make bigger projects. I expect to see more OSS, more competition, and eventually better quality-per-price (probably even better absolute quality at the “$0 / sell your data” tier).
Sure, the megacorps may start rotting from the inside out, but we already see a retrenchment to smaller private communities, and if more of the benefits of the big platforms trickle down, why wouldn’t that continue?
Nicbou, do you see AI as increasing your personal output? If it lets enthusiastic individuals get more leverage on good causes then I still have hope.
When it became cheaper to publish text did the quality go up?
When it became cheaper to make games did the quality go up?
When it became cheaper to mass produce X (sneakers, tshirts, anything really) did the quality go up?
It's a world made of an abundance of trash. The volume of low-quality production saturates the market and drowns out whatever high-quality things still remain. In such a world you're just better off reallocating your resources from production quality towards the shouting match of marketing, and trying to win by finding ways to be more visible than the others (SEO hacking and similar shenanigans).
When you drive the cost of doing something down to zero, you also effectively destroy the economy based around that thing. Take online publishing: basically nobody can make a living by focusing on publishing news or articles; alternative revenue streams (ads) are needed. Same for games too.
> When it became cheaper to … did the quality go up?
No, but the availability (more people can afford it) and diversity (different needs are met) increased. I would say that's a positive. Some of the expensive "legacy" things still exist, and people pay for them (e.g. newspapers / professional journalism).
Of course, low-quality stuff increased by a lot, and you're right, that leads to problems.
Newspapers and professional journalism are indeed doing well right now, nothing to worry about.
Well yeah, more people can afford shitty things that end up in the landfill two weeks later. To me this is the essence of "consumerism".
Rather than think in terms of making things cheaper for people to afford, we should think about how to produce wealthier people who could afford better than the cheapest of cheapest crap.
But in the context of software, the landfill argument doesn't quite fit. (Sure, someone can argue that storage on, say, GitHub takes more drives, but that's much cheaper at scale than a landfill filled with physical things.)
> Rather than think in terms of making things cheaper for people to afford, we should think about how to produce wealthier people who could afford better than the cheapest of cheapest crap.
This problem actually runs deep and is systemic. I am genuinely not sure how one can do it, because what exactly would that wealth derive from? The growth of stock markets that people call bubbles, or the US debt that has ballooned in recent years basically to fuel the consumerism spree itself? I am not sure.
If you were to make people wealthy, they might still buy the cheapest of cheapest crap, just at 10x the magnitude in many cases (or at least that's what I've observed in the US, with how many people buy and sell usually very simple SaaS tools).
Re software and landfill: true to some extent, but there are still ramifications, as you pointed out: electricity demand and the hardware infrastructure to support it. Also, in the '80s, when the computer games market crashed, they literally dumped game cartridges in a hole in the desert!
Maybe my opinion is just biased and I'm in a comfortable position to pass judgment, but I'd like to believe that more people would be ethical and conscious about their material needs if things had more value and were better quality, and if, instead of focusing on "price" as the primary value proposition, people could actually afford something other than the cheapest of things.
Wouldn't the economy also be in much better shape if more people could buy things such as handmade shoes or suits?
> Re software and landfill: true to some extent, but there are still ramifications, as you pointed out: electricity demand and the hardware infrastructure to support it. Also, in the '80s, when the computer games market crashed, they literally dumped game cartridges in a hole in the desert!
I hear ya, but I wonder how that reflects on open source software created by an LLM, say, which was the GP's point. Yes, I know it can have bugs, but it's free of cost, and you can own it, modify it (the source is available), and run it on your own hardware.
There really isn't much of a difference in terms of hardware/electricity just because of these open source projects.
There probably is some for LLMs, so it's a little tricky, but I feel like open source projects and running far with ideas get incentivized.
At least I feel it's one of the more acceptable uses of LLMs so far. It's better because you are open sourcing it for others to run. If someone doesn't want to use it, that's their freedom, but you built it for yourself, or ran with an idea which couldn't have existed if you didn't know the implementation details, or which would have taken months or years for zero gain, when now you can do it in less time.
It makes it significantly easier to see which ideas would be beneficial or not, and I feel like, if AI is so worrying, then when an idea is good and can be tested, it can always be rewritten or documented heavily by a human. In fact, there are even job posts for "slop janitors" on LinkedIn, lol.
> Wouldn't the economy also be in much better shape if more people could buy things such as handmade shoes or suits?
Yes, but it's also far from happening; it would require a real shake-up of everything, and it's just a dream right now. I agree with ya, but it's not gonna happen, or at least it's not something one person can change. Trust me, I tried.
This requires system-wide change that one person is very unlikely to bring about, but I wish you the best in your endeavour.
But what I can do, at the level of individual freedom, is use LLMs to create open source projects around concepts I don't know, and then open source them for the general public. If even one or two people find them useful, it's all good, and I am always experimenting.
> Rather than think in terms of making things cheaper for people to afford, we should think about how to produce wealthier people who could afford better than the cheapest of cheapest crap.
I'm not trying to be snarky, but, if the principle is broadly applied, then what is the difference between these two? (I agree that, if it can only be applied to a limited population, making a few poor people wealthier might be better than making a few products cheaper.)
I think you found, but possibly didn't recognize, the problem. When availability goes up, but the quality of that which is widely available goes down, you get class stratification where the haves get quality, reliable journalism / software / games / etc. while the not-haves get slop. This becomes generational when education becomes part of this scenario.
I agree with you, however:
One of the qualia of a product is cost. Another is contemporaneity.
If we put these together, we see a wide array of products which, rather than just being trash, hit a sweet spot for "up-to-date yet didn't break the wallet" and you end up with https://shein.com/
These are not thought of as the same people who subscribe to the Buy It For Life subreddit, but some may use Shein for a club shirt and BIFL for an espresso machine. They make a choice.
What's more, a “Technivorm Moccamaster” costs 10x a “Mr. Coffee” because of the build and repairability, not because of the coffee. (Amazon Basics cost ½ that again.)
Maybe Fashion was the original SEO hack. Whoever came up with the phrase "gone out of style" wrought much of this.
When it became cheaper to publish text, for example with the invention of the printing press, the quality of what the average person had in his possession went up: you went from very few having hand-copied texts to Erasmus describing himself running into some border guard reading one of his books (in Latin). The absolute quality of texts published might have decreased a bit, but the quality per capita of what individuals owned went up.
When it became cheaper to mass produce sneakers, tshirts, and anything, the quality of the individual product probably did go down, but more people around the world were able to afford the product, which raised the standard of living for people in the aggregate. Now, if these products were absolute trash, life wouldn't make much sense, but there's a friction point in there between high quality and trash, where things are acceptable and affordable to the many. Making things cheaper isn't a net negative for human progress: hitting that friction point of acceptable affordability helps spread progress more democratically and raise the standard of living.
The question at hand is whether AI can more affordably produce acceptable technical writing, or if it's trash. My own experiences with AI make me think that it won't produce acceptable results, because you never know when AI is lying: catching those errors requires someone who might as well just write the documentation. But, if it could produce truthful technical writing affordably, that would not be a bad thing for humanity.
> When it became cheaper to publish text, for example with the invention of the printing press, the quality of what the average person had in his possession went up: you went from very few having hand-copied texts to Erasmus describing himself running into some border guard reading one of his books (in Latin). The absolute quality of texts published might have decreased a bit, but the quality per capita of what individuals owned went up.
Today the situation is very different, and I'm not quite sure why you compare a time in history when the average person was illiterate and (printed) books were limited to a very small audience who could afford them, with the current era, where everybody is exposed to the written word all the time and even depends on it, in many cases even on its accuracy (think public services). The quality of AI writing in some cases is so subpar it resembles word salad. Example from Goodreads: the blurb of this book https://www.goodreads.com/book/show/237615295-of-venom-and-v... was so surreal I wrote to the author to correct it (see the comments on the author's own review). It's better now, but it still has mistakes. This is in no way comparable with the past's "quality goes down a bit". This is destroying trust more than anything else, because if this becomes the norm for official documents, people are going to be hurt.
> When it became cheaper to X did the quality go up?

...yes?
It introduces a lower barrier to entry, so more low-quality things are also created, but it also increases the quality of the higher-tier as well. It's important to note that in FOSS, we (or at least... I) don't generally care who wrote the code, as long as it compiles and isn't malicious. This maps onto the original discussion: if I was paying you to read your posts, I'd expect them to be hand-written. If I'm paying for software, it had better not be AI slop. If you're offering me something for free, I'm not really in a position to complain about the quality.
It's undeniable that, especially in software, cheaper costs and a lower barrier to getting started will bring more great FOSS software. This is one of the pillars of FOSS, right? That's how we got Let's Encrypt, OpenDNS, etc. It will also 100% bring more slop. Both can be true at the same time.
I'd say that those high-quality things that still exist do so despite the higher volume of junk, and they mostly exist for other reasons/unique circumstances (individual pride, craftsmanship, people doing things as a hobby/without financial constraints, etc.)
In a landscape where the market is mostly filled with junk, any commercial product that spends anything on "quality" is essentially losing money.
>people doing things as a hobby/without financial constraints
Isn't this the exact point I was making...? I get that you're arguing it's only a single factor, but I feel like the point still stands. More hobbyists, fewer financial constraints.
The problem is that with the amount of low-quality stuff we're seeing, and with the expansion of the low-quality frenzy into the realm of information dissemination, it can become prohibitively difficult to distinguish the high-quality stuff. What matters is not the "total quality" but sort of like the expected value of the quality you can access in practice, and I feel like in at least some areas that has gone down.
> but it also increases the quality of the higher-tier
I truly don't see this happening anymore. Maybe it did before?
If there's real competition, maybe this does happen. We don't have it and it'll never last in capitalism since one or a few companies will always win at some point.
If you're a higher tier X, cheaper processes means you'll just enjoy bigger profit margins and eventually decide to start the enshittification phase since you're a monopoly/oligopoly, so why not?
As for FOSS, well, we'll have more crappy AI-generated apps that are full of vulnerabilities and will become unmaintainable. We already have hordes of garbage "contributions" to FOSS generated by these AI systems, worsening the lives of maintainers.
Is that really higher quality? I reckon it's only higher quantity, with more potential to lower the quality of even higher-tier software.
> When it became cheaper to publish text did the quality go up?
Obviously, yes? Maybe not the median or even mean, but peak quality for sure. If you know where to look there are more high-quality takes available now than ever before. (And perhaps more meaningfully, peak quality within your niche subgenre is better than ever).
> When it became cheaper to make games did the quality go up?
Yes? The quality and variety of indie games is amazing these days.
> When it became cheaper to mass produce X (sneakers, tshirts, anything really) did the quality go up?
This is the case where I don’t see a win, and I think it bears further thought; I don’t have a clear explanation. But I note this is the one case where production is not actually democratized. So it kinda doesn’t fit with the digital goods we are discussing.
> basically nobody can make a living with focusing on publishing news or articles
Is this actually true? Substack enables more independent career bloggers than ever before. I would love to see the numbers on professional indie devs. I agree these are very competitive fields, and an individual’s chances of winning are slim, but I suspect there are more professional indie creators than ever before.
I don't think peak quality is a very meaningful measure. As you say, it turns everything into "if you know where to look".
I think for 'technical' writing, there is going to be some end-state crash.
What happens when the engineers who are left can't figure something out, so they start opening up manuals, and those are all wrong and trash too? The whole world grinds to a halt because nobody knows anything.
When was the last time that speed of development was the limiting factor? 15-20 years ago?
Nowadays the problem is that both technical and legal means are used to prevent adversarial interoperability. It doesn't matter if you (or AI) can write software faster if said software is unable to interface with the thing everyone else uses.
I suggest that you read my comment again. It will answer your question.
> Documentation is the cheaper form of customer service.
Thank you so much for saying this. Trying to convince anyone of the importance of documentation feels like an uphill battle. Glad to see that I'm not completely crazy.
> We are seeing a transition from the user as a customer to the user as a resource.
I'd argue that this started 30 years ago when automated phone trees started replacing the first line of workers and making users figure out how to navigate where they needed to in order to get the service they needed.
I can't remember if chat bots or "knowledge bases" came first, but that was the next step in the "figure it out yourself" attitude corporations adopted (under the guise of empowering users to "self help").
Then we started letting corporations use the "we're just too big to actually have humans deal with things" excuse (eg online moderation, or paid services with basically no support).
And all these companies look at each other to see who can lower the bar next and jump on the bandwagon.
It's one of my "favorite" rants, I guess.
The way I see this next era going is that it's basically going to become exclusively the users' responsibility to figure out how to talk to the bots to solve any issue they have.
“It's almost like a cartel of shitty treatment.”
Thank you. I love it when someone poetically captures a feeling I’ve been having so succinctly.
> Thank you. I love it when someone poetically captures a feeling I’ve been having so succinctly.
It’s almost like they’re a professional writer…
Just giving a human a compliment.
Enshittificartelization?
It’s almost like they are a professional writer
Word for word, 53 minutes later? Why?
I have exactly 1 guess but am waiting to say it.
His other comment in this thread is also a clone of someone else's comment.
And it happened after I wrote that comment.
Which means I replied to a bot.
I am officially retiring from social media.
I'm not a bot lmao. And don't retire because of this. Sorry about that, but it was a genuine thought, and I didn't mean to "copy" you or anyone.
And I'm new to Hacker News lol.
His or its?
Thanks for reinforcing the point. Repetition is always the clearest form of insight.
Sarcasm goes whooosh.
> You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer, however, has an immediate and quantifiable effect on the budget.
Exactly. If the AI-made documentation is only 50% of the quality but can be produced for 10% of the price, well, we all know what the "smart" business move is.
> If the AI-made documentation is only 50% of the quality
AI-made documentation has 0% of the quality.
As the OP pointed out, AI can only document things that somebody already wrote down. That's no documentation at all.
AI can often synthesize information out of, for example, code and screenshots, and it can navigate a website. It could effectively document the current state of a given web application, whereas most companies have zero documentation whatsoever.
Most documentation is documenting things that somebody already wrote down in a different form.
The quality of AI-made documentation may be poor, but calling it 0% is just silly.
I'd take AI generated slop reviewed by the person who created the system over tech writer babble any day of the week.
I'm sure I'm not the only one who was reading about some interesting but flawed system only to discover later that they were talking about MY OWN SOFTWARE!? (only half-joking here)
Also consider that while the OP looks like a skilled, experienced individual, all too often the documentation is written not by someone with that context, but rather by someone unskilled and without real empathy. Quality is quite often very poor, to the point where, as shitty as genAI can be, it is still an improvement. Bad UX and writing outnumber the good. The successes of big companies and the best-known government services are the exception.
"well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it."
First, I understand what you're saying and generally agree with it, in the sense that that is how the organization will "experience" it.
However, the answer to "will it lead to a noticeable drop in revenue" is actually yes. The problem is that it won't lead to a traceable drop in revenue. You may see the numbers go down. But the numbers don't come with labels why. You may go out and ask users why they are using your service less, but people are generally very terrible at explaining why they do anything, and few of them will be able to tell you "your documentation is just terrible and everything confuses me". They'll tell you a variety of cognitively available stories, like the place is dirty or crowded or loud or the vending machines are always broken, but they're terrible at identifying the real root causes.
This sort of thing is why not only is everything enshittifying, but even as the entire world enshittifies, everybody's metrics are going up up up. It takes leadership willing to go against the numbers a bit to say, yes, we will be better off in the long term if we provide quality documentation, yes, we will be better off in the long term if we use screws that don't rust after six months, yes, we will be better off in the long term if we don't take the cheapest bidder every single time for every single thing in our product but put a bit of extra money in the right place. Otherwise you just get enshittification-by-numbers until you eventually go under and get outcompeted and can't figure out why because all your numbers just kept going up.
Just restating: traceable errors get corrected, untraceable errors don't, and so over time the errors affecting you inevitably consist almost entirely of accumulated untraceable issues.
It means you need judgement-based management that is able, at times, to override metric-based decisions.
>it's not like people have any options.
That's one way to frame it. Another is that sometimes people are stuck in a situation where all the options that come to mind have repulsive consequences.
As always, some consequences are deemed more immediate, while others seem more remote. And the incentives are often quite at odds between short-term and long-term expectations.
>this sucks and I'm gonna build a better one in a weekend
Hey, this is me looking at the world this morning. Bear with me, the bright new harmonious world should be there on Monday. ;)
And that's exactly the same for coding!
Coding is like writing documentation for the computer to read. It is common to say that you should write documentation any idiot can understand, and compared to people, computers really are idiots that do exactly as you say with a complete lack of common sense. Computers understand nothing, so all the understanding has to come from the programmer; that is the programmer's actual job.
Just because LLMs can produce grammatically correct sentences doesn't mean they can write proper documentation. In the same way, just because they are able to produce code that compiles doesn't mean they can write the program the user needs.
I like to think of coding as gathering knowledge about some problem domain. Everything a team learns about the problem becomes encoded in the changes to the program source. The program is only a manifestation of human minds. Now, if programmers are largely replaced with LLMs, the team is no longer gathering that knowledge; there is no intelligent entity whose understanding of the problem increases with time, who can help drive future changes and make good business decisions.
Well said. I try to capture and express this same sentiment to others through the following expression:
“Technology needs soul”
I suppose this can be generalized to “__ needs soul”. Eg. Technical writing needs soul, User interfaces need soul, etc. We are seriously discounting the value we receive from embedding a level of humanity into the things we choose (or are forced) to experience.
Your ability to articulate yourself cleanly comes across in this post in a way that I feel AI is always striving for and never quite reaching.
I completely agree that the ambition of AI proponents to replace workers is insulting. You hit the nail on the head by pointing out that we simply don't write everything down. And the more common-sense or well-known something is, the less likely it is to be written down, yet the more likely it might be needed by an AI to align itself properly.
Thanks so much for this!
Nicely written (which, I guess, is sort of the point).
I like the cut o' your jib. The local public transit guide you write, is that for work or for your own knowledge base? I'm curious how you're organizing this while keeping the human touch.
I'm exploring ways to organize my Obsidian vault so that it can be shared with friends, but not the whole Internet (and its bots). I'm extracting value out of the curation I've done, but I'd like to share it with others.
Why shouldn't AI be able to sufficiently model all of this in the not-so-far future? Why shouldn't it, or at least the system that feeds it, have sufficient access to new data and sensors to collect information on its own?
Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might align soon, too.
IMO there is an entirely different problem, one that's not going to go away just about ever, but that could be solved easily right now. And whatever AI company does so first instantly wipes out all competition:
Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
You know, just like the human it'd replace.
> Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
That's not sufficient, at least from the likes of OpenAI, because, realistically, that's a liability that would go away in bankruptcy. Companies aren't going to want to depend on it. People _might_ take, say, _Microsoft_ up on that, but Microsoft wouldn't offer it.
> Why shouldn't AI be able to sufficiently model all of this
I call it the banana bread problem.
To curate a list of the best cafés in your city, someone must eventually go out and try a few of them. A human being with taste honed by years of sensory experiences will have to order a coffee, sit down, appreciate the vibe, and taste the banana bread.
At some point, you need someone to go out in the world and feel things. A machine that cannot feel will never be a good curator of human experiences.
I hear you, but counterpoint: if you had an AI that monitored social media for mentions, used vision and audio capture in cafes to see what people ordered and how they reacted to it, had access to customer purchase data to see if people kept coming back to particular cafes and what they ordered over and over again...
Granted, there's lots that's dystopian about that picture, I'm not advocating for it, but it does start to feel like the main value of the "curator" is actually just data capture. Then they put their own subjective take on that data, but I'm not totally convinced that's better than something that could tell me a data-driven story of: "Here are the top three banana breads in the city that customers keep coming back to have a taste orgasm for".
I don't know though, it's a brave new world and I'm skeptical of anyone who thinks they know how all this will play out.
See also: librarians, archivists, historians, film critics, doctors, lawyers, docents. The déformation professionnelle of our industry is to see the world in terms of information storage, processing, and retrieval. For these fields and many others, this is like confusing a nailgun for a roofer. It misses the essence of the work.
The hard part is the slow, human work of noticing confusion, earning trust, asking the right follow-up questions, and realizing that what users say they need and what they actually struggle with are often different things.
Replacement will be 80% worse, that's fine. As long as it's 90% cheaper.
See Duolingo :)
Your philosophy reminds me of my friend Caroline Rose. One of Caroline's claims to fame was writing the original Inside Macintosh.
You may enjoy this story about her work:
https://www.folklore.org/Inside_Macintosh.html
As a counterpoint, the very worst "documentation" (scare quotes intended) I've ever seen was when I worked at IBM. We were all required to participate in a corporate training about IBM's Watson coding assistant. (We weren't allowed to use external AIs in our work.)
As an exercise, one of my colleagues asked the coding assistant to write documentation for a Python source file I'd written for the QA team. This code implemented a concept of a "test suite", which was a CSV file listing a collection of "test sets". Each test set was a CSV file listing any number of individual tests.
The code was straightforward, easy to read and well-commented. There was an outer loop to read each line of the test suite and get the filename of a test set, and an inner loop to read each line of the test set and run the test.
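For readers who want to picture it, here's a minimal sketch of the structure being described; the names are hypothetical stand-ins, not the original IBM code:

    import csv

    def run_test(test_row: list[str]) -> None:
        """Hypothetical stand-in for the real per-test runner."""
        print("running:", test_row)

    def run_test_suite(suite_path: str) -> None:
        """Run every test in every test set listed in a suite CSV."""
        with open(suite_path, newline="") as suite_file:
            # Outer loop: each line of the suite names a test-set CSV file.
            for suite_row in csv.reader(suite_file):
                test_set_path = suite_row[0]
                with open(test_set_path, newline="") as set_file:
                    # Inner loop: each line of the test set is one test to run.
                    for test_row in csv.reader(set_file):
                        run_test(test_row)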
The coding assistant hallucinated away the nested loop and just described the outer loop as going through a test suite and running each test.
There were a number of small helper functions with docstrings and comments and type hints. (We type hinted everything and used mypy and other tools to enforce this.)
The assistant wrote its own "documentation" for each of these functions in this form:
"The 'foo' function takes a 'bar' parameter as input and returns a 'baz'"
Dude, anyone reading the code could have told you that!
All of this "documentation" was lumped together in a massive wall of text at the top of the source file. So:
When you're reading the docs, you're not reading the code.
When you're reading the code, you're not reading the docs.
Even worse, whenever someone updates the actual code and its internal documentation, they are unlikely to update the generated "documentation". So it started out bad and would get worse over time.
Note that this Python source file didn't implement an API where an external user might want a concise summary of each API function. It was an internal module where anyone working on it would go to the actual code to understand it.
The map is not the territory! Documentation is a helpful, curated simplification of the real thing. What to include and what to leave out depends on the audience.
But if you treat "write documentation" as a box-ticking exercise, a line that needs to turn green on your compliance report, then it can just be whatever.
Are you working in the legal field or is that separate? How big is your company?
I work alone. My website is https://allaboutberlin.com
In every single discussion AI-sceptics claim "but AI cannot make a Michelin-star five-course gourmet culinary experience" while completely ignoring the fact that most people are perfectly happy with McDonald's, as evidenced by its tremendous economic and cultural success, and the loudest complaint with the latter is the price, not the quality.
I think you fundamentally misunderstand how the technology can be used well.
If you are in charge of a herd of bots following a prompt scaffold to automate a work product that meets 90% of the quality of the pure human output you produce, that gives you a starting point with only 10% of the work left to be done. I'd hazard a guess that if you spent 6 months crafting a prompt scaffold you could reach 99% of your own quality, with the odd outlier here and there.
The first person or company to do that well then has an automation framework, and they can suddenly achieve 10x or 100x the output at a nominal cost of operating the AI. They can ensure that each and every work product is lovingly finished and artisanally handcrafted, go the extra mile, and maybe reach 8x to 80x output after QA losses.
To produce 8-80x one expert's output, you might need to hire a bunch of people to do segmented tasks - some to do interviews, build relationships, and handle the other things that require in-person socialization. Or maybe AI can identify commonalities and do a good enough job of predicting a plausible model that anyone paying for what you do will be satisfied with the 90%-as-good AI product without that personal touch; and as soon as an AI-centric firm decides to eat your lunch, your human-oriented edge is gone. If it comes down to beancounting, AI is going to win.
I don't think anything that doesn't require physically interacting with the world is safe from significant disruption, from augmentation to outright replacement, depending on the cost of tailoring a model to the task.
For valuable enough work, companies will pay the millions to fine-tune frontier models, either through OpenAI or open source options like Kimi or DeepSeek, and those models will give those companies an edge over the competition.
I love human customer service, especially when it's someone who's competent, enjoys what they do, and actually gives a shit. Those people are awesome - but they're not necessary, and the cost of not having them is less than the cost of maintaining a big team of customer service agents. If a vendor tells a big company that it can replace 40k service agents being paid ~$3.2 billion a year with a few datacenters, custom AI models, AI IT and support staff, and a totally automated customer service system for $100 million a year, the savings might well be worth the reputation hit. None of the AI will be able to match the top 20% of human service agents in the edge cases, and there will be a new set of problems that come from customer-AI conflict, etc.
Even so. If your job depends on processing information - even information in a deeply human, emotional, psychologically nuanced and complex context - it's susceptible to automation, because the ones with the money are happy with "good enough." AI just has to be good enough to make more money than the human work it supplants, and frontier models are far past that threshold.
Spot on! I think LLMs can help greatly in quickly putting that knowledge in writing, including reviewing written materials for hidden prerequisite assumptions that readers might not be aware of. They can also help newer hires learn to write more clearly. LLMs are clearly useful for increasing productivity, but management that thinks they are even close to ready to replace large sections of practically any workforce is delusional.
I don't write for a living, but I do consider communication / communicating a hobby of sorts. My observations - that perhaps you can confirm or refute - are:
- Most people don't communicate as thoroughly and completely - in writing and verbally - as they think they do. Very often there is what I call "assumptive communication": the sender's ambiguity is resolved by the receiver making assumptions about what was REALLY meant. Often, filling in the blanks is easy to do - it's done all the time - but not always. The resolution doesn't change the fact that there was ambiguity at the root.
Next time you're communicating, listen carefully. Make note of how often the other person sends something that could be interpreted differently, and how often you assume, defaulting to "what they likely meant was..."
- That said, AI might not replace people like you. Or me? But it's an improvement for the majority of people. AI isn't perfect, hardly. But most people don't have the skills and/or willingness to communicate at the level AI can simulate. Improved communication is not easy. People generally want ease and comfort. AI is their answer. They believe you are replaceable because it replaces them, and they assume they're good communicators. Classic Dunning-Kruger.
p.s. One of my fave comms' heuristics is from Frank Luntz*:
"It's not what you say, it's what they hear." (<< edit was changing to "say" from "said".)
One of the keys to improved comms is to embrace that clarity and completeness are the sole responsibility of the sender, not the receiver. Some people don't want to hear that and be accountable, especially when assumptive communication is a viable shortcut.
* Note: I'm not a fan of his politics, and perhaps he's not The Source of this heuristic, but read it first in his "Words That Work". The first chapter of "WTW" is evergreen comms gold.
LLMs are good at writing long pages of meaningless words. If you have a number of pages to turn in for your writing assignment and you've only written 3 sentences, they will help you produce a low-quality result that passes the requirements.
Low quality is relative. LLMs' low quality is most people's above-average. The fact that the copy - either way - is likely to go through some sort of copy-by-committee process makes the case for LLMs even stronger (i.e., why waste your time). Not always, but quite often.
No, it's not. It's low quality because it's extremely verbose, and that wastes time.
That's a function of the prompt. The tool only performs as well as you're able to instruct it.
You expect people who cannot write to become skilled writers in order to instruct LLMs?
Sounds like a bunch of agents can do a good amount of this. A high horse isn't necessary.
I wonder how you have reached this conclusion without having the faintest idea of what I write about.
Nonetheless, I make my living from that work. If you are correct, there's a fair bit of money on the table for you.
A good amount != this. AI being able to do the easy parts of something doesn't replace the hard ones.
>insulting
As a writer, you know this makes it seem emotional rather than factual?
Anyway, I agree with what you are saying. I run a scientific blog that gets 250k-1M users per year, and AI has been terrible for article writing. I use AI for brainstorming and for title ideas (which end up being inspiration rather than copy-paste).
My whole comment was about the need for a thinking, feeling human being. Is it surprising that I am emotional about it?
Emotion takes away from the idea. Instead of thinking: "Oh this is a great point. There is immense economic value here."
It becomes: this person is fearful for their job and used feelings to justify their belief.
Writing about the importance of empathy in terms of economic value would probably take away a lot more from the idea.
Funnily enough, in all of your comment, the only word I objected to was the one right before "insulting": "almost". Thinking that LLMs can replace humans outright expresses hubris and disdain in a way that I find particularly aggravating.
…says every charlatan who wanted to keep their position. I'm not saying you're a charlatan, but you are likely overestimating your own contributions at work. About your comment on feeding on data: AI can read faster than you can by orders of magnitude. You cannot compete.
"you are likely overestimating your own contributions at work"
Based on what? Your own zero-evidence speculation? How is this anything other than arrogant punting? For sure we know that the point was something other than how fast the author reads compared to an AI, so what are we left with here?
>you are likely overestimating your own contributions at work
That's the logical fallacy anyone will be pushed toward as soon as their individual worth is judged within an intrinsically collective endeavor.
People with the lowest incomes, who would not be able to integrate into society without direct social funds, will be seen as parasites by some who are wealthier, just as the ultra-rich will be considered parasites by less wealthy people.
> People with the lowest incomes, who would not be able to integrate into society without direct social funds, will be seen as parasites by some who are wealthier, just as the ultra-rich will be considered parasites by less wealthy people.
Your use of the word parasite, especially in the context of TFA, reminds me of the article James Michener wrote for Reader’s Digest in 1972 recounting President Nixon’s trip to China that year. In an anecdote from the end of the trip, Michener explained that Chinese officials gave parting gifts to the American journalists and their coordinating staffs covering the presidential trip. In the case of the radio/TV journalists, those staffs included various audio and video technicians.
As Michener told it, the officials’ gifts to the technicians were unexpectedly valuable and carefully chosen; but, when the newspaper and magazine writers in the group got their official gifts, they turned out to be relatively cheap trinkets. When one writer was bold enough to complain about this apparent disparity, a translator replied that the Chinese highly valued those who held technical skills (especially in view of the radical changes then going on in China’s attempt to rebuild itself).
“So what do you think about writers?” the complainer responded.
To that, the translator said darkly, “We consider writers to be parasites.”
That's a trope that's easy for any human to fall into, probably.
All the more so when part of the underlying representation actually starts from a structuralist analysis. We try to clarify the situation through classes of issues, but then midway we spot what looks like an easy shortcut, where mapping an ontological assessment onto social forces in interaction is always one step off to the side. Scapegoating is nothing new.
So we quickly jump from "what social structures and forces lead to these awful results" to "who can be blamed", while we continue to let the underlying anthropological issue rule over everyone.
This kind of low-effort, little-thought comment is what AI is competing with at scale, not the OP.
I think the article is clear enough in defeating every one of your arguments.
AI doesn't read; it guesses.