There is a whole giant essay I probably need to write at some point, but I can't help but see parallels between today and the Industrial Revolution.
Prior to the industrial revolution, the natural world was nearly infinitely abundant. We simply weren't efficient enough to fully exploit it. That meant that it was fine for things like property and the commons to be poorly defined. If all of us can go hunting in the woods and yet there is still game to be found, then there's no compelling reason to define and litigate who "owns" those woods.
But with the help of machines, a small number of people were able to completely deplete parts of the earth. We had to invent giant legal systems in order to determine who has the right to do that and who doesn't.
We are truly in the Information Age now, and I suspect a similar thing will play out for the digital realm. We have copyright and intellectual property law already, of course, but those were designed presuming a human might try to profit from the intellectual labor of others. With AI, we're in the industrial era of the digital world. Now a single corporation can train an AI using someone's copyrighted work and in return profit off the knowledge over and over again at industrial scale.
This completely upends the tenuous balance between creators and consumers. Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article? Who will contribute to the digital commons when rapacious AI companies are constantly harvesting it? Why would anyone plant seeds on someone else's farm?
It really feels like we're in the soot-covered child-coal-miner Dickensian London era of the Information Revolution and shit is gonna get real rocky before our social and legal institutions catch up.
> Prior to the industrial revolution, the natural world was nearly infinitely abundant. We simply weren't efficient enough to fully exploit it.
This is just wildly incorrect. People started running out of trees during the early Iron Age. Woodlands have been a managed and often over-exploited resource for a long time. Active agriculture vs passive woodlands vs animal grazing have been in constant tension for thousands of years across most of the globe.
The general point is accurate, don’t take it so literally.
There were more than enough trees until we developed the technology to clear cut in an expeditious manner. There were more than enough fish until we developed the technology to pull massive indiscriminate amounts out of the ocean (and/or started polluting our rivers with industry). There was more than enough topsoil until we developed mechanized plows and artificial fertilizer. Etc.
A few hundred years ago or less, a squirrel could get from the Atlantic Ocean to the Mississippi River without ever touching the ground. Not possible today. That’s not a push and pull played out over thousands of years, that’s a one-way trend.
The general point is not. Iceland and Easter Island were fully deforested way before the industrial age. Countless species went extinct in Britain and more examples abound.
Britain was a little bit industrialised even before the steam engine. There were windmills and water mills. Steam massively accelerated it, but industry did exist before.
If a windmill or a water mill is a sign of industrialisation, then large parts of the world were industrialised.
Commons in England were being enclosed in the Tudor age. It caused a great deal of social unrest, even rebellion. It had little to do with technology, and was mostly caused by population growth.
the speed at which depletion happened was probably not the same
Interestingly, clearcutting is part of it but another part is just grazing. If you let sheep graze in a forest they will eat all the saplings, so after a century of this, the old trees die out without new ones to replace them. I agree with your point but thought that could be of interest - Whittled Away, by Padraic Fogarty, is a good book discussing this (and why Ireland, which really should be all forest, is an ecological wasteland more generally)
> The general point is accurate, don’t take it so literally.
GP is saying it is not, and you're just reiterating what OP said as fact.
It's sort of the exception that proves the rule.
This is where STEM people are weak: a lack of knowledge of history. In another forum, someone would have chipped in that England's virgin forests were fully deforested by 1150. And someone else would have pointed out that this deforestation produced the economic demand for coal that drove the Industrial Revolution in the first place.
Still, that kind of underscores OP's point. Yes, natural resources were not completely unlimited prior to the Industrial Revolution; Jonathan Swift predated Watt's steam engine, after all. Still... Neither were information resources 10 years ago. Intellectual property laws did exist prior to AI, of course. The legal systems in place are not completely ignorant of the reality.
However, there's an immense difference in scale between post-industrial strip mining of resources, and preindustrial resource extraction powered solely by human muscle (and not coal or nitroglycerin etc). Similarly, there's a massive difference in information extraction enabled by AI, vs a person in 1980 poring over the microfilm in their local library.
The legal system and social systems in place prior to the Industrial Revolution proved unsuitable for an industrial world. It stands to reason that the legal system and social systems in today's society would be forced to evolve when exposed to the technological shift caused by AI.
> powered solely by human muscle
Both animals and water power go way back. The early steam engine was measured in horsepower because that’s what it was replacing in mines. It couldn’t compete with nearby water power which was already being moved relatively long distances through mechanical means at the time.
Hand waving this as unimportant really misunderstands just how limited the Industrial Revolution was.
Irrelevant. Here's Bret Devereaux (an actual historian) explaining this distinction and precisely why those are irrelevant in the context of the Industrial Revolution:
https://acoup.blog/2022/08/26/collections-why-no-roman-indus...
> Diet indicators and midden remains indicate that there’s more meat being eaten, indicates a greater availability of animals which may include draft animals (for pulling plows) and must necessarily include manure, both products of animal ‘capital’ which can improve farming outputs. Of course many of the innovations above feed into this: stability makes it more sensible to invest in things like new mills or presses which need to be used for a while for the small efficiency gains to outweigh the cost of putting them up, but once up the labor savings result in more overall production.
> But the key here is that none of these processes inches this system closer to the key sets of conditions that formed the foundation of the industrial revolution. Instead, they are all about wringing efficiencies out the same set of organic energy sources with small admixtures of hydro- (watermills) or wind-power (sailing ships); mostly wringing more production out of the same set of energy inputs rather than adding new energy inputs. It is a more efficient organic economy, but still an organic economy, no closer to being an industrial economy for its efficiency, much like how realizing design efficiencies in an (unmotorized) bicycle does not bring it any closer to being a motorcycle; you are still stuck with the limits of the energy that can be applied by two legs.
So yeah, actual historians would be dismissive of your exact response, basically saying "I know, I know, but I don't care". You're still just talking about a society mostly 'wringing efficiencies out the same set of organic energy sources'. It IS unimportant, and you completely misunderstand how the Industrial Revolution reshaped production if you think it is important.
I think I prefer the 'STEM people' approach of trying to say true things, rather than this superior approach of just saying things and then, when they turn out to be false, dismissing them as irrelevant. If the truth of the claim is irrelevant, why did you make it in the first place?
The statement IS true anyways, the problem is that you failed to distinguish between an example and a universal claim. You want to argue on logic? I'm an engineer, I can argue on precision too:
The (true!) statement is "However, there's an immense difference in scale between post-industrial strip mining of resources, and preindustrial resource extraction powered solely by human muscle (and not coal or nitroglycerin etc). Similarly, there's a massive difference in information extraction enabled by AI, vs a person in 1980 poring over the microfilm in their local library."
I said there is a major difference in scale between "modern strip mining" and "a preindustrial extraction method powered only by human muscle", and I made an analogous point about AI-enabled information extraction versus 1980s manual archival research. That statement is purely true. Nothing in that statement says the muscle-powered-extraction example was the only preindustrial mode of production, just as "someone using microfilm in 1980" does not imply microfilm was the only way information was accessed in 1980. The fact that other information formats existed in 1980 is irrelevant to the truth of the example.
So no, nothing I said "turned out to be false". You are attacking a claim I never made because you failed to parse the logic in the one I did. Most importantly, this direction missed the big picture dialectical synthesis that I was introducing as well, and just kept decomposing the argument into locally falsifiable atoms which lost the thread of what was actually being discussed.
Is your counterargument that you're not wrong, and that I'm just attacking a straw man? Because it really sounds to me like you are just clueless.
Strip mining goes back thousands of years; it’s a simpler technology than making tunnels. And no, it wasn’t limited to human power to crack rock; several more powerful methods existed.
Roman mining literally destroyed a mountain, operating within an order of magnitude of the largest mines today. That’s what makes what you say false. It’s not some minor quibble over details you are simply speaking from ignorance.
It’s almost like you’re intentionally trying to be wrong.
You don't seem to understand how analogies work. I’m not talking about strip mining vs tunnel mining, I was comparing scale of human powered mining to mining with nitroglycerin.
I’ll let you figure out on your own how the scale of mining “going back thousands of years” is very different from modern explosive mining. Go google “iron production by year” or something. Hint: it took the Romans generations to strip a small hill, something a modern midsize mining company can do in a few days.
If you take Pliny’s word for truth, they did achieve 10% of the scale of the largest currently operating gold mine using hydraulics at Las Medulas.
Modern geological estimates are radically lower.
- [deleted]
“The industrial revolution wasn’t really all that” is such a strange hill to die on.
How so? Being precise and correct is IMO worth preserving in a world of handwaving slop.
The Industrial Revolution ran from roughly 1760 to 1840. It was a major shift, but it doesn’t cover everything that has happened between 1760 and now, nor did it overwhelm many existing trends.
Before LLMs we had code generators and automation that eliminated a lot of time- and resource-consuming tasks. I think the point still holds.
Yeah - really struggling to understand why people are not grasping this point.
Yes, Easter Island was deforested far earlier, but you wouldn't compare the steam engine's capability in resource extraction to what people on Easter Island were doing.
It feels like people are almost straining to not understand the point - I think it's quite clear how ML + AI serve to extract resources of data at an unheard-of scale.
It's the autism. And I say that endearingly. I'm an engineer who probably likes trains way too much.
I intentionally pointed out the STEM-esque responses of pedantic correction as a symptom of a disciplinary blind spot: technically correct nitpicking that misses the forest for the trees, a tendency to atomize arguments and lose the structural point, and that tendency is a weakness, not a strength.
There's also a lack of historical training to contextualize their own objections. That's also why I brought up Devereaux as an authority hammer: the actual domain experts consider those objections and dismiss them.
the conclusion doesn't follow from the premise, is the issue.
the laws and enclosure happened basically orthogonal to the resource constraints, so there's no actual comparison to draw.
if you insist on a causation, I'd go with the opposite - the laws making ownership and forcing people off of land enabled the exploitation and innovation, not that it was cleanup for exploitation that was already happening. existing exploitation across all kinds of degrees was already being managed without the enclosure.
if you just want to make stuff up, you can reference anything you want, like that some elaborate thing happened in star wars, and thus the same thing must be happening with AI
It is hard to convince a man of that which his income is dependent on him not understanding. -Upton Sinclair
You aren't wrong. There's definitely going to be a need to drag people kicking and screaming to enlightenment unfortunately. Too much money to be made at stake otherwise.
- [deleted]
> There were more than enough trees until we developed the technology to clear cut in expeditious manner.
Unless you mean 'an axe', way before that there were deforested areas where the need for trees was larger than the supply and there were enough humans to fell them.
> A few hundred years ago or less, a squirrel could get from the Atlantic Ocean to the Mississippi River without ever touching the ground.
Yes, but that wasn't possible in other parts of the world much sooner.
Burning was and is a popular way to deal with trees, too.
there's archeological evidence that humans hunted large animals (sometimes called megafauna) to extinction on every continent except Africa.
My original source for this was the book Sapiens, but here are two links I found with a quick web search: https://www.sciencedaily.com/releases/2024/07/240701131808.h...
https://ourworldindata.org/quaternary-megafauna-extinction
I also saw a theory (not sure how credible) that the reason humans started doing agriculture was in fact because we killed all the megafauna we used to eat.
This was over 10,000 years ago. Well before the Industrial Revolution, indeed, before even the original Agricultural Revolution.
> The general point is accurate, don’t take it so literally.
It's not, because the Malthusian trap was all too real going into modernity, as in recurring famines were a thing, they were quite real, nothing "literal" about them.
First of all, the study is written by an economist, might as well have sent me an Oracle of Delphi pronouncement. And second, he mentions the Malthusian trap being a real thing in his very first sentence, so not sure what I should have gotten out of this.
You could read the whole abstract. Or ask Deep Seek to explain it to you.
Proof by analogy is fraud .. and here the analogy is incorrect as well.
We also have had a significant rise in global population, making for an unfair comparison.
I agree. Although on this specific point, I would say we have always had depletion, ever since the most basic microorganisms (after all, life replicates until it faces depletion limits), all the way to our close primate relatives and throughout human history; food depletes locally, which drives competition. But we rarely faced degradation or permanent depletion.
I'd say degradation involves a lasting depletion or lasting damage (potentially permanent until restoration efforts happen) to the environment's output and ability to support life. Permanent depletion is what can happen to e.g. shallow mines and fossil fuel deposits.
I think I'd agree the legal system was created mostly for the former, depletion, and only recently had to contend with degradation and permanent depletion. I feel like we still struggle collectively to come to grips with permanent depletion.
Permanent depletion is also usually the result of shortsightedness or a competition gone awry. It is the famous case where nobody wants the ultimate result, but people may selfishly march towards it anyway (the tragedy of the commons).
I believe running out of trees was always a local issue: there weren't enough trees where you were because trees had to be sourced locally; you didn't go get trees from far away. So yes, that was in constant tension. The thing is that the problem of having enough trees turned from a local problem into a global problem, with the side effect of not having enough trees globally, trees the world needed to maintain the environment humanity first conquered.
I think "the natural world was nearly infinitely abundant" is a reasonable description; resource depletion was always local before mass industrialization. Being able to exploit the world as opposed to just your local area is also a mark of efficiency.
By local you mean over 5 thousand miles? Because yes, moving wood was always in competition with growing it locally. But pine forests in the far north were untouched because of the low quality of the lumber they produce, not the distances involved. All of Africa, Europe, and Asia ran out of the most valuable natural lumber a fucking long time ago.
> I think the natural world was nearly infinitely abundant is a reasonable description
Very little of the world’s woodland was untouched at the time of the Industrial Revolution, and forests in the Americas survived as long as they did largely due to disease drastically reducing native populations. But American forests were on the clock independent of industrial development. I’m not sure what exactly your counterargument even is here.
We still can’t reasonably extract most resources from the ocean bottom. That’s ~70% of the world’s mineral wealth just off the table.
So sure, we are very slightly better at extracting resources, but on an absolute scale the pre- vs post-Industrial Revolution difference really isn’t that significant compared to the sum total of human history.
>By local you mean over 5 thousand of miles?
maybe, "local" is a function of a lot of things; it is only fairly recently in human history that the "global" functions the way the "local" did centuries ago, meaning that it is cheap enough to source things from across the world that they do not need to be made in the next village.
>> I think the natural world was nearly infinitely abundant is a reasonable description
>Very little of the world’s woodland was untouched at the time of the Industrial Revolution and forests in the Americas survived as long as they did largely due to disease drastically reducing native populations.
things appeared abundant prior to one event, and soon after that event they no longer appeared abundant. the point is a correlation, not a causation, but
>American forests were on the clock independent from industrial development.
sure, the Native Americans would have used up their forests if they had kept growing and not been killed off by disease brought by Europeans. Nonetheless they had been killed off, and the world appeared infinite, because all you needed to do when you ran out of wood in one place was go to another place to source it, hurray. But now that is no longer the case. We have run out of places to go get more wood.
As noted, I said I felt the phrase "the natural world was nearly infinitely abundant", uttered by the original poster in this subthread, is a reasonable description, and I mean obviously that is dependent on the impressions of the people of the time; from my readings it seems like this was more the feeling than "oh noes, we are running out of wood".
Although we got sidetracked onto wood, because that is what the first response to the OP was, that wood was always a problem. But the fact that some natural resources were constrained still does not really disprove the phrase "the natural world was nearly infinitely abundant", since the word nearly can be seen as a cheat; really what it means is that the world felt infinitely abundant at one time and now it does not.
>We still can’t reasonably extract most resources from the ocean bottom. That’s ~70% of the world’s mineral wealth just off the table.
see, it sounds like you still feel like it is closer to infinitely abundant than dangerously used up. All we need to do is up our extraction game, at least where minerals are concerned.
NOTE: I think maybe the world feeling infinitely abundant thing is actually an American thing, this has been remarked by others in the past, that the first European settlers felt this was a world that had not been touched because in comparison to Europe it was under-exploited in many areas, it was big and had everything, and there is a whole part of American frontier myth that as soon as one area got settled and used up all you had to do was to pack up your stuff and move west and get a bunch of resources to use up, like locusts, or maybe just colonizers.
In this case, what the OP is really dealing with is not how the world was (infinitely abundant) but how it felt to people coming from one overly exploited area to an under-exploited one. They believe there is a narrative of economic constraints and results playing out, and that the two situations are analogous, but the source of the analogy, the world before the Industrial Revolution, was perhaps not as the analogy would have it, but rather how a memetic framework of exploration and conquest had interpreted the world.
Sorry my note went overly long, but that sometimes happens when I write what I think just as I'm thinking it.
People had been hunting whales for centuries, but industrialisation gave them the means and the motivation to do so until near extinction.
By that token humans drove a great number of species to extinction long before the Industrial Revolution. So by that line of thinking we were already running into the limits of natural resources in the Neolithic.
Obviously we’re becoming better at extracting resources over time, but humans ran out of new land to exploit long before Europe's conquest of the Americas. Land only seemed empty because disease decimated native populations; people lived in San Francisco thousands of years ago.
Most of humanity survived on agriculture and sometimes hunting-gathering for the last 10k years. The number of people that survived on hunting whales is minuscule. Comparing the two is nonsensical.
Forest for the trees?
I doubt that anyone reading this can’t get the point of the analogy.
The value is in showing where the analogy fails, which either disproves the point or deepens it.
But you seem to be missing the point. The parent is talking about the industrial scale of means to create a lot more destruction to the environment, which the OP's point hinges on. The parent does not say humanity survived on hunting whales; quite the opposite: when they had the means, people nearly drove whales to extinction.
Read Moby Dick some time my friend.
The Industrial Revolution is generally understood to have started somewhere around 1760. Moby Dick took place in approximately 1830, about 10 years before what some historians mark as the end of the agrarian-to-industrial shift that is generally termed the Industrial Revolution.
https://en.wikipedia.org/wiki/Industrial_Revolution
I get sort of wishy-washy from 1830 on, because lots of people put the end of the Industrial revolution as being 1900, but 1840 is a defensible and commonly held position.
> The industrial revolution is generally understood to have started somewhere around 1760,
In Britain. Moby Dick ain't set in Britain.
That’s beside the point, because most whales were killed in the 20th century.
This, and going back further, people literally would brutally massacre neighbouring tribal groupings over control of fishing and hunting and gathering grounds.
The rapid dispersal of our species over literally the entire planet (minus Antarctica) likely also has a lot to do with constantly moving on to new opportunities further away from rivals.
That said, starting in about the 18th century we ran out of new places for that. And intensification truly began.
> This is just wildly incorrect.
from a global perspective it isn't. Some places, sure, like Western Europe, which in some cases had completed enclosure, but remember the New World had only been discovered a few hundred years earlier at that point.
Just Google Maps the northern part of South America; even today there are large swathes of undeveloped land across it, and back then it was considerably less exploited. At that time it would have appeared infinite, especially to the European industrialists.
> remember the new world had only been discovered a few hundred years ago at that point.
By White people*
we're talking about the fucking industrial revolution, of course this defaults to the European perspective. Unless you wanna spit some new bars about Aztec foundries and train lines connecting meso-america in the 19th century, then the point stands. At that time, the world appeared to the industrialists of the industrial revolution to be infinite. Nor had humanity discovered the terrible side effects of fossil fuels on the atmosphere.
Why are you weirdly making this about race?
Sure, of course it's convenient to ignore the native peoples and pretend that prior to the Industrial Revolution the rest of the world outside of Europe was some untapped well of resources that Europeans had a natural right to.
Who might be swept underfoot in this "Information Revolution", I wonder?
Yes, just the other day I saw someone make a comment about write performance in SQLite without considering the plight of the Baltic peoples in the Northern Crusades. It was really convenient of them to do that, fucking typical.
Sure, because working on a database plugin is the same as, for example, working on mass surveillance tech.
This sort of handwashing is exactly why the natives were treated the way they were.
How do you think they're enabling the mass surveillance tech? SQLite got reach bruv.
Your continued erasure of the Baltic peoples continues to cut deep into my heart, and your callous indifference to their plight, as you discard any chance to mention them, continues to shock me.
Nobody said anything about Europeans having a "natural right". Bad enough to derail a conversation with irrelevant political nitpicking, unforgivable to use a strawman to do so. Boo.
It's not irrelevant.
GP made a comparison between what we're going through and the Industrial Revolution. Ignoring the negatives of that revolution - like by acting as though the "new world" was uninhabited/unused and so Europeans had a right to its resources - seems like a bad idea.
> like by acting as though the "new world" was uninhabited/unused and so Europeans had a right to its resources - seems like a bad idea.
maybe it was a bad idea, but that's what happened.
It also doesn't justify doing the same damn thing again, which is exactly what all the people long on this technology fully expect to be allowed to do. Any further investment they have to make to ensure the outcome will just be chalked up to the cost of doing business. And the capital funding all this is in so few hands, and in particular in the hands of such characters who don't concern themselves with not repeating atrocities of the past in new and interesting ways, that it is virtually guaranteed we're on the road to societal-scale disruption. 'Tis the reason such inconvenient points are in need of being pounded home until they are impossible to ignore.
> not repeating atrocities of the past in new and interesting ways
sorry, are you suggesting that colonialism and LLMs are equivalent in terms of atrocity? I don't feel like they're really comparable.
> 'Tis the reason such inconvenient points are in need of being pounded home until they are impossible to ignore.
and what do you think is going to happen here? People are so basic that this will never happen. At best you gotta create a grassroots political movement with political representation and clear legal aims and get that past the electorate. But see how the casuals lap up generated content for a sense of how ambitious a vision that is. LLMs will prevail, and even if public boycotts were extreme, it would just move further and further behind the curtain and the end outcome would still be the same.
I don't see how derailing conversations on Hacker News by taking issue with a particular analogy to grind a colonial axe really furthers that. At the end of the day, regardless of the perspective of our identity, we'll get fucked by network effects and rounded out of systems by those with more influence and power. Sometimes by those who even share our perspective. So using perspective as a point of division just further fragments what needs to be a whole to enact change.
the Stepchange show went fairly deep on this topic in their first episode (listened to it recently). https://www.stepchange.show/coal-part-i
"but I can't help but see parallels between today and the Industrial Revolution"
You're not the only one.
The current Pope Leo XIV explicitly named himself after the previous Leo, Pope Leo XIII, who was pope during the Industrial Revolution (1878-1903) and issued the influential Encyclical Rerum novarum (Rights and Duties of Capital and Labor) in response to the upheaval.
“Pope Leo XIII, with the historic Encyclical Rerum novarum, addressed the social question in the context of the first great industrial revolution,” Pope Leo recalled. “Today, the Church offers to all her treasure of social teaching in response to another industrial revolution and the developments of artificial intelligence.” A name, then, not only rooted in tradition, but one that looks firmly ahead to the challenges of a rapidly changing world and the perennial call to protect those most vulnerable within it.
https://www.vatican.va/content/leo-xiii/en/encyclicals/docum...
https://www.vaticannews.va/en/pope/news/2025-05/pope-leo-xiv...
> RERUM NOVARUM
> ENCYCLICAL OF POPE LEO XIII ON CAPITAL AND LABOR
Oh hohm. Such a great mouthpiece Pope Leo XIII was for extolling and providing cover for the excesses of the worst breed of capitalist. Whilst my experience with such religious writing has me used to coming away not wholly satisfied one way or another, this particular piece was heavily biased toward the "Captains of Industry" and capitalist civilizations of the time: explicitly condemning the practices by which labor can throw off the yoke of the unjustly enriched, with no consideration whatsoever of the fact that as capital centralizes, there are fewer and fewer places to actually look for employment that isn't in one way or another unconscionable to the Soul. Thankfully, the fellow at least had the decency to recognize that one is not, nor should be, at liberty to give one's soul up merely because the only people signing cheques are those most conditioned to being in service not to anyone else, but merely to themselves.
For instance, it places the burden of the yoke of thrift equally on all men, without recognizing that that very yoke provides exactly the spiritual cover required for the pernicious greedmonger to sleep soundly after condemning thousands to a situation wherein their self-preservation is not guaranteed.
I see some mild concessions to the working class, and we have plenty of history from which to reason that even with a Papal acknowledgement, these words did not suffice to tilt the behavior of men away from ungodly and abusive treatment of their fellow men until such time as they were confronted with force. The Pope of all people then doubled down by pointing out that "agitators would arise to foment violence and revolt", without taking into consideration that that is the only language left that will be understood by the man whose heart has hardened sufficiently to enable him, with a stroke of a pen, to condemn thousands to millions, nay billions, at a time to suffering; to usurp their livelihoods as his own to be rented back, ensuring that no ownership is conferred upon his potential competitors from which could be built the means to diminish his own prosperity.
No... Pope Leo XIII, your encyclical is in places valid, but woefully out of date and in need of a massive update. Even in its time, that wording would have been fairly what we today call "milquetoast" in terms of providing the necessary spiritual force to temper the excesses of man's vices. Our current day is evidence enough of that: where instead of true virtue and the ability of all to live prosperously, we have a divided class of those seeking desperately to get by, and those seeking desperately to ensure no one gets by them.
Whilst I'm not Catholic, I do tend to honor the tradition of firmly worded letters nailed to their doors to keep them honest. This encyclical in its time may have seemed fine, but with hindsight reeks of inadequacy and hedging, with excessive pandering to the already wealthy. This alone explains to me greatly why the labor movements of the late 1800s and early 1900s were not only as bad as they were, but absolutely necessary. If Leo XIV can't do any better than this, then it may once again come to bloodshed. The feedback loop is much tighter, and news travels much faster. Likely why the wealthy are doing everything they can to weight media outlets in their favor, and destroy any unregulated medium of anonymous communication. For these men are greedy, but not stupid. They know deep down the Lord doth tolerate the machinations of the Devil to test the tendencies of humankind, and they fear the inevitable outcome that will arise when the rest of mankind through privation is forced to harden their hearts as they (the wealthy) have. For in the eyes of one who has only their Soul left to bargain with, laid bare is the banal veil of Evil, and if one is to meet their Maker earlier than planned as a result of another man's artifice... Well. Justice doth favor action, whereas the banal finds fertile sustenance in the inaction of vacuous platitude debated endlessly.
Perhaps I am one of the Agitators of which the Pope spoke. Yet I feel no pause at any of the words I have hitherto written. So do with them what you will. If what we have to live with is supposed to be fine, I do not agree that anything about it is what one could conscionably call just.
As you know, I deeply respect you. Not trying to argue here, just provide my own perspective:
> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?
I write things for two main reasons: I feel like I have to. I need to create things. On some level, I would write stuff down even if nobody reads it (and I do do that already, with private things.) But secondly, to get my ideas out there and try to change the world. To improve our collective understanding of things.
A lot of people read things, it changes their life, and their life is better. They may not even remember where they read these things. They don't produce citations all of the time. That's totally fine, and normal. I don't see LLMs as being any different. If I write an article about making code better, and ChatGPT trains on it, and someone, somewhere, needs help, and ChatGPT helps them? Win, as far as I'm concerned. Even if I never know that it's happened. I already do not hear from every single person who reads my writing.
I don't mean to say that everyone has to share my perspective. It's just my own.
Agreed, totally! I still write and put stuff online.
But it definitely feels different now. It used to feel like I was tending a public garden filled with other people who might enjoy it. It still kind of feels like that, but there are a handful of giant combine machines grinding their way around the garden harvesting stuff and making billionaires richer at the same time.
It's not enough to dissuade me from contributing to the public sphere, but the vibe is definitely different.
Honestly, it reminds me a lot of the early days of Amazon. It's hard to remember how optimistic the world felt back then, but I remember a time when writing reviews felt like a public good because you were helping other people find good products. It was like we all wanted honest product information and Amazon provided a neutral venue for us to build it. Like Wikipedia for stuff.
But as Amazon got bigger and bigger and the externalities more apparent, it felt less like we were helping each other and more like we were helping Bezos buy yet another yacht or media empire. And as the reviews got more and more gamed by shady companies, they became less of a useful public good. The whole commons collapsed.
I worry that the larger web and digital knowledge environment is going that way.
I still intend to create and share my stuff with the world because that's who I want to be. But I'll always miss the early days of the web where it felt like a healthier environment to be that kind of person in.
> But as Amazon got bigger and bigger and the externalities more apparent, it felt less like we were helping each other and more like we were helping Bezos buy yet another yacht or media empire.
The Internet-circulating quote comes to mind: Planet Earth is pretty much a vacation resort for around 500 rich people, and the remaining 8 billion of us are just their staff. The Relative Few have got the system set up perfectly so that whatever we do, we're probably serving/enriching them. AI doesn't really change this, but it does further it.
> The Internet-circulating quote comes to mind: Planet Earth is pretty much a vacation resort for around 500 rich people, and the remaining 8 billion of us are just their staff. The Relative Few have got the system set up perfectly so that whatever we do, we're probably serving/enriching them. AI doesn't really change this, but it does further it.
I don't necessarily disagree with the analysis of how Planet Earth is currently set up, but something that I've been thinking about lately is that to the extent we can consume the public image of some of the Relative Few, they seem oddly unhappy.
I think you're right.
Anyone who finds themselves with $100m in their bank account and thinks, "No, I need more," is a person with a hole inside them that can never be filled.
If raw resources (tree cutting) and manufacturing (book binding) are saturated, a fully-realized economy has just one step left: financialization.
You have to start finding ways to keep people hooked on books and make it a part of their regular lifestyle. One book can't be enough, and after a while you have to convince them to replace the books they already bought. New editions, Author's Footnotes, limited run releases, all of the stops have to be pulled out to get consumers to show up en masse. Because that's what they are - consumers, not readers - wallets to be squeezed until they're bled of all the trust they had in media.
I think about the publications I liked reading as a kid, like Joystiq and Polygon. Some of the best games journalism the industry produced, but inevitably doomed to fail as their competitors monetized further. The rest of traditional media has followed the same path, converging on some mercurial social network marketing tactic as the placeholder for big-picture brand strategy.
There were a couple of threads on HN this week. "Do you have any unusual hobbies" and "How do you relax". I enjoyed them and was thinking of contributing. Then it occurred to me that my comments would be gold for targeting advertising at me. That is the distrust that has been bred by the data harvesters.
Exactly. That thread about hobbies was just a trap designed to squeeze as much info from as many people as possible.
I can totally see that, for sure. I was much more likely to write a review long ago, now I don't even bother. (For buying stuff online, at least.) Maybe I lost my innocence about this stuff a long time ago, and so it's not so much LLMs that broke it for me, but maybe... I dunno, the downfall of Web 2.0 and the death of RSS? I do think that the old internet, for some definition of "old," felt different. For sure. I'll have to chew on this. I certainly felt some shock on the IP questions when all of this came up. I'm from the "information wants to be free" sort of persuasion, and now that largely makes me feel kinda old.
Also I'm not a fan of billionaires, obviously, but I think that given I've worked on open source and tools for so long, I kinda had to accept that stuff I make was going to be used towards ends I didn't approve of. Something about that is in here too, I think.
(Also, I didn't say this in the first comment, but I'm gonna be thinking about the industrial revolution thing a lot, I think you're on to something there. Scale meaningfully changes things.)
I feel the future includes the sentiments you describe. It was a little before my time professionally, but I grew up reading that kind of thinking.
I do think that the open web stuff, decentralized, or at least more decentralized than currently, is the path forward. I've been reading about the AT protocol and it recently becoming an official working group with the IETF.
I feel a second order effect of making decentralized social networking easier, is making individuals more empowered to separate from what they don't believe in. The third order effect is then building separate infrastructure entirely.
As sad as that can be - in my personal opinion it runs the risk of ending the "world wide" part of the web - it appears to be the only way society can avoid enriching the few beyond reason.
> I'm from the "information wants to be free" sort of persuasion, and now that largely makes me feel kinda old.
Me too, 100%. But that was during a moment in time when that information was more likely to be enabling a person who otherwise didn't have as many resources than enabling a billionaire to make their torment nexus 0.1% more powerful.
> I kinda had to accept that stuff I make was going to be used towards ends I didn't approve of. Something about that is in here too, I think.
Yeah, I've mostly made peace with that too.
The way I think about it is that when I make some digital thing and share it with the world, I'm (hopefully!) adding value to a bunch of people. I'm happiest if the distribution of that value lifts up people on the bottom end more than people on the top. I think inequality is one of the biggest problems in the world today and I aspire to have the web and the stuff I make chip away at it.
If my stuff ends up helping the rich and poor equally and doesn't really affect inequality one way or the other, I guess it's fine.
But in a world with AI, I worry that anything I put out there increases inequality and that gives me the heebie-jeebies. Maybe that's just the way things are now and I have to accept it.
> But in a world with AI, I worry that anything I put out there increases inequality and that gives me the heebie-jeebies. Maybe that's just the way things are now and I have to accept it.
This observation doesn't really clash with "information wants to be free." You just have to include LLMs in the category of "information," like Free Software types already do for all software. You don't need to abandon your principles, you should shift your demands. A handful of companies can't be allowed to benefit from free information and then put what they make behind a wall.
> Free Software types already do for all software
Free Software types also create software...they didn't just argue for a better license and try to regulate Sun/others to re-license their software; they wrote free (libre) versions of proprietary software and released it for free (cost), which is what counteracted the "[putting] what they make behind a wall". If you're saying "[some] LLMs should be free", I agree.
I don’t disagree with you, but this has been going on for a while… Google monetized the web by indexing it and monetizing what you wanted to find. Facebook monetized the eyeballs from the pictures and posts you added. Now LLMs will monetize all web content. To play devil’s advocate - LLMs do give something back. Those with ideas and no coding experience can now build entire businesses for little to zero cost. This seems different.
> A handful of companies can't be allowed to benefit from free information and then put what they make behind a wall.
What is there to prevent them?
Nothing today; but in a democracy, we have the power to make it possible, if people vote the right way.
> the "information wants to be free" sort of persuasion
That was always a luxury of its peculiar historical moment, though, wasn't it? Barlow didn't have to care who paid for the infrastructure, but he was just bloviating.
No, it's as true now as it was then. The intellectual property team didn't win on the merits or by law enforcement; it was the convenience of streaming anything at will for a monthly fee that did the trick.
> it was the convenience of streaming anything at will for a monthly fee that did the trick
That's not the whole story, though. There have been many community-driven projects to bring convenient access to copyrighted works to the masses. You may recall the meteoric success of Popcorn Time. Law enforcement shut them down. Without the hand of the state beating down any popular alternative to legal distribution, it absolutely would be the dominant mode of media consumption.
It does feel like the collaborative, free, open nature of the web has gone, and the optimism that brought… it feels like no one would build Foursquare today. But then I wonder if I’m just old and jaded, and to the younger generation creating content, for them the web is open and expressive - just in a different way.
I still use swarm every day, and get teased for it all the time.
"So Steve, you're a millennial. What does it mean to 'be the mayor' of something?"
I can relate to this so much! IMHO Foursquare genuinely did give the better recommendations for food and drink, and I still think this recommendation problem is far from solved.
> It used to feel like I was tending a public garden filled with other people who might enjoy it. It still kind of feels like that, but there are a handful of giant combine machines grinding their way around the garden harvesting stuff and making billionaires richer at the same time.
An underrated upside to being harvested is that your voice has now effectively voted in the formation of the machine's constitution. In a broader ecological sense, you've still tended to a public garden, but in this case your work is part of the nutrient base for a different thing.
Broader still: after the machines squeeze all of our inputs into an opaque crystal, that crystal's very purpose is to leak it all back out in measured doses. Yes, "some billionaire" will own the lion's share of that process, but time so far is telling that efforts can be made to distill strong, open, public versions of the same.
> time so far is telling that efforts can be made to distill strong, open, public versions of the same.
I do really hope that part of the longer-term answer for AI is LLMs being run locally.
I like the garden analogy.
Writing online used to bring you readers. Now it trains a model, which answers the same questions without sending anyone to your site.
AI harvesting your garden doesn't destroy the garden though. It's like calling piracy theft; in the digital realm those types of analogies quickly break down.
I also personally still feel like posting reviews on Amazon is a public good. I like helping people. That my efforts also help Amazon as a company is incidental.
Certainly if there were a convenient way to cross post my reviews to a more open platform that would be great. The more people I can help the better. It does annoy me to see companies trying to block scraping as if they own my posts and they aren't part of a commons.
> A lot of people read things, it changes their life, and their life is better. They may not even remember where they read these things. They don't produce citations all of the time. That's totally fine, and normal. I don't see LLMs as being any different. If I write an article about making code better, and ChatGPT trains on it, and someone, somewhere, needs help, and ChatGPT helps them? Win, as far as I'm concerned. Even if I never know that it's happened. I already do not hear from every single person who reads my writing.
Not a contradiction but an addendum: plenty of creative pursuits are not about functional value, or at least not primarily. If somebody writes a seemingly genuine blog post about their family trauma, and I as the reader find out it's made-up bullshit, that's abhorrent to me, whether or not AI is involved. And I think it would be perfectly fair for writers who do create similar but genuine content to find it abhorrent that they must compete with genAI, that genAI will slurp up their words, and that genAI's mere existence casts doubt on their own authenticity. It's not about money or social utility, it's about human connection.
The consent question gets weirder when agents have persistent memory. I run agents that accumulate context over weeks — beliefs extracted from observations, relationships with other agents. At what point does an agent's memory become its own work product vs. derivative of its training? There's no legal framework for that.
> I write for two main reasons
> people read things… their life is better
> it’s just my own
What was the point of writing this though?
Perhaps I should know who you are, but assuming you are a regular HN forum user - you are still very much a participant in a larger information economy / ecosystem.
All of us depend on that system, that commons.
Visits to Wikipedia have dropped by at least 8% since 2025; other estimates are starker. This will have an impact on donations.
These reports are similar for many sites which write or produce content.
Your individual behavior may be perfectly fine, and you are entitled to your perspective, but that doesn’t become a defense for the degradations of the commons.
If anything, it’s a classic example of the kind of argument that ends up entangling ideas and making conclusions harder to reach.
That seems fine if you're not publishing content for a living. A lot of people are.
> I don't mean to say that everyone has to share my perspective. It's just my own.
I think you are walking all around the word "consent" and trying very hard to avoid it altogether.
Your perspective, because it refuses to include any sort of consent, is invalid. No perspective that refuses consent can be valid.
Consent is absolutely important, but that does not mean that every single thing in the entire world requires explicit consent. You did not ask me for consent to use my words in your comment. That does not mean you're a bad person.
Free use is an important part of intellectual property law. If it did not exist, the powerful could, for example, stifle public criticism by declaring that they do not consent to you using their words or likeness. The ability to do that is important for society. It is also just generally important for creating works inspired by others, which is virtually every work. There have to be lines between cases where attribution is required and cases where it is not.
> You did not ask me for consent to use my words in your comment.
I am not representing your words as mine. I am not using your words to profit off. I am not making a gain by attributing your words to you.
> There have to be lines between cases where attribution is required and cases where it is not.
You are blurring the lines between "using a quote or likeness" and "giving credit to". I am skeptical that you don't know the difference between the two.
Regardless, any "perspective" that disregards the need to acquire consent is invalid. Even if you are going to ignore it, you have to acknowledge that you don't feel you need any consent from the people you are taking from.
This whole "silence is consent" attitude is baffling.
You made an incredibly strong statement that is much broader than what we are talking about. I am pointing out various cases where I think that broadness is incorrect, I am not equating the two.
I do not think that, if you read, say, https://steveklabnik.com/writing/when-should-i-use-string-vs... , and then later, a friend asks you "hey, should I use String or &str here?" that you need my consent to go "at the start, just use String" instead of "at the start, just use String, like Steve Klabnik says in https://steveklabnik.com/writing/when-should-i-use-string-vs... ". And if they say "hey that's a great idea, thank you" I don't think you're a bad person if you say "you're welcome" without "you should really be saying welcome to Steve Klabnik."
It is of course nice if you happen to do so, but I think framing it as a consent issue is the wrong way to think about it.
We recognize that this is different than simply publishing the exact contents of the blog post on your blog and calling it yours, because it is! To me, an LLM is a transformative derivative work, not an exact copy. Because my words are not in there, they are not being copied.
But again, I am not telling anyone else that they must agree with me. Simply stating my own relationship with my own creative output.
Just wanted to compliment you on your classy attitude and style, along with your solid points. It’s not easy to take that side of the debate. Cheers.
he doesn't have solid points, he conflates fair use with free use (?), ignores thousands of years of attribution history, and equates normal human to human learning with corporate LLMs training on original content (without consent). Great presentation, like you said, to cover the logical defects.
I did say "free use" instead of "fair use," yeah. That's my mistake, thank you for the correction. If I could edit my original comment, I would, mea culpa. Typos happen.
I see. I must congratulate you on your rhetorical prowess, it's nice seeing a professional at work.
Fair use of training data hasn’t yet been settled in court. People here are treating it like it has been. But no amount of wishful thinking or moral arguments will change a verdict saying it’s fine for training data to be used as it has been.
Until that question is settled, it’s disingenuous to dismiss his points out of hand as conflating fair use or ignoring consent.
Even beyond that, the initial legal opinion we do have did in fact point to training being fair use: https://www.reuters.com/legal/litigation/anthropic-wins-key-...
However, I don't feel comfortable suggesting that this is settled just yet; one district judge's opinion doesn't bind future cases, which may disagree, and we may at some point get explicit legislation one way or the other.
I think the court dropped the ball here. On the one hand, I think they were right that using existing works--copyrighted or otherwise--to train a model was transformative fair use. On the other hand, Anthropic and others trained their models on illicit copies of the works; they (more often than not) didn't pay the copyright holders.
There's a doctrine in Fourth Amendment law called "fruit of the poisonous tree." The general rule is that prosecutors don't get to present evidence in a criminal trial that they gained unlawfully. It's excluded. The jury never gets to see it even if it provides incontrovertible evidence of guilt. The point is to discourage law enforcement from violating the rights of the accused during the investigative process, and to obtain a warrant as the Amendment requires.
It seems to me that the same logic ought to be applied to these companies. They want to make money by building the best models they can. That's fine! They should be able to use all the source data they can legitimately obtain to feed their training process. But if they refuse to do so and resort to piracy, they mustn't be allowed to claim that they then used it fairly in the transformative process.
I mean, that is what the court said! Training on pirated data was not fair use. Training on legally acquired data is fair use.
Anthropic legally acquired the data and re-trained on it before release.
It did not say that. See Judge Alsup's order (https://fingfx.thomsonreuters.com/gfx/legaldocs/jnvwbgqlzpw/...), pp. 29-30, Section IV(B)(ii) ("The Pirated Library Copies").
"[T]he test requires that we contemplate the likely result were the conduct to be condoned as a fair use — namely to steal a work you could otherwise buy (a book, millions of books) so long as you at least loosely intend to make further copies for a purportedly transformative use (writing a book review with excerpts, training LLMs, etc.), without any accountability."
See also p. 31:
"The downloaded pirated copies used to build a central library were not justified by a fair use. Every factor points against fair use. Anthropic employees said copies of works (pirated ones, too) would be retained 'forever' for 'general purpose' even after Anthropic determined they would never be used for training LLMs. A separate justification was required for each use. None is even offered here except for Anthropic’s pocketbook and convenience."
Despite this consideration, the court still found for Anthropic on the question of fair use.
I don't see how that opposes what I said; that's part of the "training on pirated data is not fair use." That said, I am not a lawyer. From those pages:
> The copies used to train specific LLMs were justified as a fair use.
This is (in my understanding) because those were not the pirated copies.
> The copies used to convert purchased print library copies into digital library copies were justified, too, though for a different fair use.
Buying a book and then digitizing it for purposes of training is fair use.
> The downloaded pirated copies used to build a central library were not justified by a fair use.
Piracy is not fair use, you quoted this part as well.
In the conclusions section at the end of page 31:
> This order grants summary judgment for Anthropic that the training use was a fair use. And, it grants that the print-to-digital format change was a fair use for a different reason. But it denies summary judgment for Anthropic that the pirated library copies must be treated as training copies.
Training is fair use. Pirating is not fair use, and therefore, you can't train on that either.
What part am I missing?
I think that's a reasonable way to interpret the court's order, but unfortunately the judge didn't really articulate the consequences of training on pirated copies "not fair use" as clearly as I would have liked. Does that mean they're simply liable for infringement of those works, or does it mean that they'd be enjoined from using them altogether to train the model? The genie was out of the bottle; how could it be put back in?
Anthropic settled the case with the publishers just a few months later, leaving the question mostly unsettled still.
I see. Thanks. I cannot wait until this is settled law too.
I was just enumerating some of the issues with the '''solid''' points OP made. Actually addressing them would take too long and be an exercise in futility, here, on HN, in April 2026. Why would I put in the effort, for my comment to be flagged and sent to the void? Or worse, persisted forever and used for training without my consent?
And yes, you are right, the legal and moral question of fair use in training data hasn't been settled yet; we agree here.
> But again, I am not telling anyone else that they must agree with me. Simply stating my own relationship with my own creative output.
Look, I'm not saying that you are doing that, I'm pointing out that "Silence is consent" is not as strong an argument as many think it is.
> you don't feel you need any consent from the people you are taking from.
What has been "taken", exactly?
> What has been "taken", exactly?
Where are you going with this line of thought? That making a copy of someone's work, using it for profit and not crediting them doesn't "take" anything from them?
I find that these discussions at the intersection of art and law tend to blur technical and familiar uses of words. So it's important to specify what was actually taken here because otherwise the discussion becomes muddy.
"making a copy of someone's work, using it for profit and not crediting them" wasn't really the scenario being discussed in this thread -- is that what you meant by "taking"?
Steve had made the point:
> Not every single thing in the entire world requires explicit consent.
But actually taking someone else's verbatim work and selling it as your own is one of those instances where consent would be required, because many people see a clear line between someone selling another author's work and the author not getting a dollar because of that. That doesn't preclude other instances where explicit consent is not required. For example, do I need your consent to learn from your work and produce similar work of my own? Am I required to credit you in my work for having learned from you? Am I taking from you if I don't share my profits with you?
Some rights holders would say yes, actually. Which, I don't agree with. I think it's important that we not require the artist's explicit consent for all things, because listening to some rights holders (e.g. Disney), they have very expansive ideas about what kind of control they are owed by society over their creations.
Therefore, I think if you're going to claim something has been taken, you should specify what exactly.
> you don't feel you need any consent from the people you are taking from
In most cases, no, I (and it seems most others) don't feel the need for that, it is only you who seems to have an ideological hangup over this.
>In most cases, no, I (and it seems most others) don't feel the need for that, it is only you who seems to have an ideological hangup over this.
It's not an ideological hangup, it's confusion over the assumption by certain groups that "silence is consent", when it is not.
refuse consent?
You may need to clarify that thought.
I don't think the poster has a viewpoint that 'refuses consent'; their viewpoint is that the writing they put up for others to view is for others to view, regardless of how it is viewed. They seem to be giving consent, not refusing it, no?
> refuse consent?
Who said anything about refusing consent?
> I think you are walking all around the word "consent" and trying very hard to avoid it altogether.
> Your perspective, because it refuses to include any sort of consent, is invalid. No perspective that refuses consent can be valid.
This is what I was responding to. I do not understand your thinking in this post.
> This is what I was responding to. I do not understand your thinking in this post.
I thought it was clear from "refuses to include any sort of consent" that I am talking specifically about holding an opinion that refuses to include consideration for consent, not refuses consent for usage.
But that's what I'm confused about:
How is freely giving consent for (all) others to read your content not 'considering consent'?
I'm not trying to be snarky. I really don't see the missing piece that isn't written that connects those dots.
> Prior to the industrial revolution, the natural world was nearly infinitely abundant.
The opposite is true. Central Europe was almost devoid of trees. Food was scarce as arable land bore little fruit without fertiliser.
Society was Malthusian until the Industrial Revolution.
Can we interpret "abundant" in a Darwinian sense, e.g. diversity of life? I would think the industrial farming revolution decreased crop variety over time, and the same for animal lineages, aside from the rapid increase in mixed poodle breeds.
Crop variety was decreased by the original farming revolution, about 10k years before the industrial revolution. Rather than eating whatever was available, the large majority of the caloric input of an agricultural society comes from a few staple crops optimized for overwinter storability and producing large yields and thus supporting a large number of people.
The industrial revolution didn’t qualitatively change farming. It just made it possible to have more of it thanks to machine labor. The same goes for the later agricultural revolutions.
This is particularly evident if you had been around rural villages in eastern Europe in the late 00s, especially those inhabited by elderly people aged 70 and above.
They were still doing subsistence agriculture to supplement their own income well into the 21st century. Of course they didn't grow enough calorie-heavy crops like corn, potatoes or wheat to live entirely off the land, but they had enough food that a bi-monthly shopping trip with their children was enough to get by.
No, they totally grew enough calories for themselves. My grandparents lived like that. They farmed around 15 hectares, which was actually quite a lot. You can easily grow enough calories for your family on 5 hectares, or even less if you have access to modern cultivars and artificial fertilizer. It’s just that even poor people like variety, and will trade some of their crops for stuff they cannot make at home efficiently, like sugar, fish, or candy.
To add, I don’t think my ancestor Spaniards for example needed the help of machines to deplete mines in America. They also came already equipped with all kinds of legal systems, including the Requerimiento, which they read out loud to natives in preposterous spectacle.
In general the transition from feudalism to capitalism, including the formation of the legal systems that supported the latter, happened gradually for maybe up to four or five centuries before the steam engine had been invented.
Sure, the Industrial Revolution further accelerated the development of property rights, mercantile, and civil laws, but all in all I don’t think there’s much truth that machines were the primary cause of such developments.
Not really Malthusian. Agricultural societies had adapted to keep the population stable during normal times and bounce back in a generation or two after bad times. Those cultural adaptations stopped working when childhood mortality declined.
Useful land was a scarce resource in more civilized regions, while labor was cheap. Given enough land, subsistence farmers could easily feed themselves outside particularly bad years. But much of the land belonged to local elites, and commoners had to work that land to fund the pursuits of the elites.
If I'm being honest, I've never related to that notion of remuneration and credit being the primary reason to write something. I don't claim to be some great writer or anything, but I do have a blog I write quite often on (though I'm traveling in my wife's native Taiwan now and haven't updated it in a while). But for me, I write because it feels good to do so. Sometimes there's a group utility in things: I'll edit a Google Maps listing to be correct even though "a faceless corporation is going to hoover up my work and profit off it without paying me for my work", and I might pick up a Lime bike someone's dropped onto the sidewalk even though "a faceless corporation is externalizing the work of organizing the proper storage of their property on public land without paying the workers", and so on.
I just think it's nice to contribute to the human commons and it's fine if some subset of my fellow organisms uses it in whatever way. Realistically, the fact that Brewster Kahle is paid whatever few hundred thousand he's paid for managing a non-profit that only exists because it aggregates other people's work isn't a problem for me. Or that Larry Page and Sergey Brin became ultra-rich around providing a search interface into other people's work. Or that Sam Altman and Dario Amodei did the same through a different interface.
This particular notion doesn't seem to be a post-AI trend. It seems to have happened prior to the big GPTs coming out where people started doing a lot of this accounting for contribution stuff. One day it'll be interesting to read why it started happening because I don't recall it from the past. Perhaps I just wasn't super plugged in to the communities that were complaining about Red Hat, Inc.
It's not that I don't understand if I sold my Subaru to a guy who immediately managed to sell it to another guy for a million times the money. I get that. I'd feel cheated. But if I contributed a little to it, like I did so Google would have a site to list for certain keywords so that they could show ads next to it in their search results, I just find it so hard to be like "That's my money you're using. Pay me!".
You do it as a hobby, that's fine. Some people do it for a living. And while they aren't owed a living doing that specific thing, it is going to be a big problem for them if they can't make money at it anymore.
I'm sure plenty of people feel the same way about software. They make software as a hobby and don't care about remuneration or credit. Meanwhile I write software for my day job and losing the ability to make money from it would be devastating.
Ah, I see. It’s just straightforward protectionism like dockworkers opposing automation and so on. That I do comprehend, in fact.
I write software too and I may no longer be able to just do it in the old way. Pretty scary world but also exciting. I can’t imagine trying to restrict LLM software writers on that basis but I can comprehend it as simply self-interest.
Fair enough.
Do you make money writing software? I bet you either try to restrict LLM usage or assign your rights to an employer who does. Putting code in the public domain is pretty rare, and extremely rare for paid work.
I allow them to train on my work as described here (for example) https://code.claude.com/docs/en/data-usage
And I do paste code into CC. I’m not super concerned that they’ll see it.
That’s fine by me. It doesn’t require putting code in the public domain which is something else entirely.
I make money off hosted software so in some sense there is writing involved at one end. But I’m not paid by output tokens.
If your code isn't in the public domain, then anything you haven't explicitly allowed them to train on is restricted for them. They've been ignoring that for anything they can actually get their hands on, but it's there.
It’s about the amount of time available.
> Some people do it for a living.
I was going to write, "not for long," which might be true for some. But then I realized there will always be a difference between LLM output and human writing. We don't read blogs because of their facts, we read them because of how the facts are presented and how the author's personality comes through on the page.
EDIT: That said, LLMs are great at faking it, and a lot of amateur writing will be difficult to distinguish from LLM output. So I'm disagreeing with myself a bit.
But we are talking about "slurping up" IP and regurgitating it, right? OK. So if I slurp up Mickey Mouse and output Mickey Mouse, that's an offense. But what if I slurp up a billion images and output some chimera? That's what the LLMs do. And that's what humans do too.
- [deleted]
Prior to the Industrial Revolution, nobody could go hunt in the woods, because the woods were the King’s, and poaching the King’s game carried the death penalty. The situation was similar on the continent: the tiny slivers of remaining woodlands were off limits.
Granted, things were different in the New World, as a result of mass depopulation event following the Columbian exchange. But even there, the megafauna was hunted to extinction soon after the humans first appeared there.
Anyway, the point is that no, prior to the Industrial Revolution, the world was full of scarcity, not abundance.
>This completely unpends the tenuous balance between creators and consumers. Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article? Who will contribute to the digital common when rapacious AI companies are constantly harvesting it? Why would anyone plant seeds on someone else's farm?
This is completely reversed. Why should anyone honour the right of some creator who was merely the first to plant their flag on a creative task that is now absolutely trivial to perform by AI? Who needs a digital commons when creation itself is now the commons and freely accessible for pennies? The seeds plant and grow by themselves now. The only question is who should be allowed to claim the farms?
Answer: No one. AI companies will have their lunch eaten by open source. And if they don't - they should be nationalized and protocolized into free utilities. The entire idea of digital ownership should (and will) be abolished by the very nature of this technology.
The digital world is the new infinitely-abundant nature. We're just returning it to where it should have been, before corporations clawed it into fenced off empires.
At what point do we look at 'Industrial Society and Its Future' and go from "yeah that'll never happen" to "ok some parts of it are happening" to ...? I swear tech folks are the most obtuse people on the planet.
I think it's completely normal. Whenever automation comes knocking, people are inclined to think it's going to flatline conveniently before their job is at risk. LLMs can code now? Cool, they can't code well though can they? Oh they can code pretty well now? Cool, coding was never the hard part of SWE anyway, it's [thing we have no reason to think AI can't beat 99% of humans at at some point], etc
I think SWE as a mainstream profession is much nearer to the end than the beginning, I'm curious and quite scared about what becomes of us.
The problem is that software development contains domain independent and domain specific skills. Since information processing is domain independent, replacing software developers in general will require beating them not only in the domain independent skills, which is what the recent breakthroughs have been about, but also in every single domain dependent skill.
This makes software development AGI-complete. If you have an LLM that can write software for every domain, then for every task you assign it, it could build software that performs the assigned task and thereby solves every problem in existence.
What I'm trying to get at here is that an "SWE" is a biological machine building machine. If you have a digital machine that can build any machine, you haven't solved the first step, you've solved the final step in all of human history that ever needs to be done, whatever that means. Beyond that point, human work no longer exists, because the machines have taken over everything.
I don't think you understand. Frankly, AI is a failure if all it does is replace coders. AI needs (given its current investment levels) to conquer all forms of knowledge work. This is an example of tech/industry needing to impose itself on society, rather than society needing it.
That's how human progress works. No one can want or need it because they cannot conceptualize wanting it until someone shows that it is possible. Now, many of those wants become needs.
We can absolutely conceptualize what we want or need. I was born in 1980 in NYC. When I was a boy my father took me to a tech conference where they had a demo of ordering TV shows on demand. It was a miracle, to my young mind. Was this what I needed?
Growing up I had a friend group of misfit boys, who discovered h4ck1ng and phr34king. But we also discovered slackware Linux on 3.5" floppies. We also had to discover ASM and compiling the linux kernel in order to do anything with it. Boys with machines. That wasn't what I needed either.
Later on we did have great things with tech. Google made the world searchable in ways Altavista didn't. I remember strapping the original iPod on my arm to go for runs outside. I didn't even need a car for a while, as investors subsidized my Uber rides to and from the office.
Now, it seems the US is balanced on a precipice. The economy seems to have an incredible amount of money desperate to grow, but to what purpose? In my lifetime, and in my parents', and their parents' before them, when the dollar becomes restless the flag goes forth. The dollar follows the flag.
And here we are at war.
You wouldn't have known about a TV had you not seen it. That is what I mean by, people generally can't conceptualize what they want or need until they see it.
Wants and needs are not the same. We are experiencing the difference in real time. AI does not give society a want or need.
My point was not about the difference, it was about the fact that average people cannot conceptualize new ideas until one person or team invents it, then the average person will want or need it.
As for AI, I and many others want it, and some even need it, in certain use cases. Speak for yourself.
I believe the idea that you (or I) might know better than the 'average people' to be incredibly conceited, arrogant, and frankly wrong. It is an attitude that gives you superiority for having achieved nothing.
I'm not sure what you're even talking about, you're putting words and an argument into my mouth which I never said.
Well then I owe you an apology. Perhaps I inferred too much about your point of view and understood too little, which is my own loss. Sorry.
I think your numbers are off. TAM for office workers is ~20T a year, of which SWE compensation is ~3T. So if they can make 3T x 10% X 5 years = 1.5T that covers their current valuations. It's not as insane as you make out, even not taking into account the other high risk areas like legal, accounting etc
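The back-of-envelope math can be spelled out (all dollar figures here are the comment's own estimates, in trillions USD, not verified numbers):

```python
# Sanity check of the comment's back-of-envelope valuation estimate.
# All figures are the commenter's assumptions, in trillions of USD/year.
office_worker_tam = 20.0  # total annual compensation of office workers
swe_compensation = 3.0    # software-engineer share of that compensation
capture_rate = 0.10       # fraction of SWE compensation AI vendors capture
years = 5                 # horizon over which the capture accumulates

captured = swe_compensation * capture_rate * years
print(f"Captured value over {years} years: {captured:.1f}T")  # 1.5T
```

On those assumptions the captured value roughly matches current valuations, which is the point being made; the 10% capture rate and 5-year horizon are the speculative inputs.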
Hit the nail on the head with that framing. So many articles are now coming out addressing the anxieties about adoption of a new technology, but we genuinely don’t really need it as a society.
I still wonder if we really needed the iPhone or many of the other things we're told are "progress" and innovation, as if on an arrow of time. The future is not set in stone and things need not play out in this manner at all. Unlike the iPhone, where most were excited by its possibilities (even if they traded precious privacy in the name of convenience), there's no clear reason that this version of LLM-driven technologies represents significantly more upside than downside.
- [deleted]
>This completely unpends the tenuous balance between creators and consumers. Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article? Who will contribute to the digital common when rapacious AI companies are constantly harvesting it? Why would anyone plant seeds on someone else's farm?
I have been thinking about this. I was pretty adamant a few months ago that AI is going to make a lot of things worse for everyone because of the externalities of the technology (data center creep, lock-in of models, etc.), and it probably still will. But then someone suggested that I use Claude Code to upgrade my SSG site to the new version, because I had been sitting on my ass as the years went by, missing deadline after deadline. I just couldn't put myself into gear to upgrade it. It was massively out of date, 10 years plus, and I knew it was going to be a nightmare to deal with the problems. I was probably making it harder in my head than it really was.
So I purchased Claude Code Pro and the thing upgraded my site pretty well. There were things it missed, because I didn't know the problems existed in the first place until the upgrade was complete, but I had a working updated site in less than an hour. If I had done this myself it would have taken me days or weeks.
So at that point I realized something. It's a tool that can handle a good amount of the tasks I throw at it, as long as I am specific. I think the problem with most people is they expect it to respond like a human. That's not going to happen, IMHO. Maybe some day it will be more than what it is, but right now it's just a tool. I don't care what anyone says about AGI and the like. It's not going to happen with the current iteration (the pattern-recognition type). We are going to need more than that if we want to simulate a human brain.
The point is, and I know this is not going to be received very well, mostly because this tech is in the hands of people that are gatekeeping it, that maybe someday we might reach a point where all of humanity's knowledge is put into these things and we can use them to better our lives. Maybe at some point we don't need to hold onto or hoard things as if it's the only way we can make a living? And instead we can build things just for the sake of creating them, and improve humanity in the process? Obviously the commercial model of these things is not great, and that is going to have to be dealt with, but I can see a future where we might be able to fix a lot of humanity's problems with this technology as more and more good people put it to use for things that help humanity.
You raised a point and then never answered it. Why would anyone plant seeds on someone else's farm?
Because maybe, someday, somehow, we will realize that these farms we are creating are all connected. When we share resources we prosper more than we would if we were all separate. But that wouldn't happen right away, enough people would have to have buy in for this to happen so I understand the concern.
Well, maybe because life is not a zero sum game? Sometimes you do things just because.
A couple thoughts…
Mostly, AIs don’t recite back various works. Yes, there are a couple of high-profile cases where people were able to get an AI to regurgitate pieces of New York Times articles and Harry Potter books, but mostly not. Mostly, it is as if the AI were your friend who read a book and gives you a paraphrase, possibly using a couple of sentences verbatim. In other words, it probably falls under a fair use rule.
Secondly, given the modern world, content that doesn’t appear online isn’t consumed much, so creators who are doing it for the money will certainly continue putting content online. Much of that content will be generated by AIs, however.
You're missing the point. This is the crux of munificent's argument IMO (and I've made variations of it as well)
> We have copyright and intellecual property law already, of course, but those were designed presuming a human might try to profit from the intellectual labor of others.
You getting a summary of a copyrighted work from a friend is necessarily limited by the number of friends you have, the amount of time they have to read stuff and talk to you, and so on. Machines (and AIs) don't have any such limitations.
Yes, true. But does that really shift the argument much? An AI is like the most well-read book nerd you’ve ever met. The AI has read everything. They still won’t recite Harry Potter for you at full length and reading what the original author wrote is part of the pleasure.
> An AI is like the most well-read book nerd you’ve ever met. The AI has read everything
But no real book nerd has read everything. Current law was designed for the capabilities of humans.
Sure, we could change current law, but I think that only forces an AI company to buy one copy of every book. I don’t think it gives any sort of royalty stream to anyone beyond that. Copyright is literally the right to make copies. Once I have acquired a copy, I can read it, summarize it, transform it, etc. in myriad ways.
I don’t think that’s how fair use works.
You can't make copies though. AI training requires making copies of materials, even if they're purchased.
Not true. You can photocopy pages from a book you own for your own use. You can make copies of purchased software as a backup. What you can’t do is make copies and give them to all your friends or sell them to the public.
> You can photocopy pages from a book you own for your own use
No. You won't get in trouble for it. But it is against the law. https://www.copyright.gov/what-is-copyright
"U.S. copyright law provides copyright owners with the following exclusive rights: Reproduce the work in copies"
This doesn't differentiate between partial and complete copies.
> You can make copies of purchased software as a backup
This is true. They had to write out that exception for digital media. And the key is "backup". You can't run or use multiple copies if you only own one.
While the rules for fair use are not black and white, one of the primary tests is whether the copying impairs the market for the work. If you want to copy pages of a book to mark them up, for instance, so your original copy stays clean, that would generally fall under fair use. You aren’t selling the copy or the original. You aren’t giving one or the other to other people, thereby eliminating a potential sale. You are copying some pages, not the entire work, cover to cover. As you say, you wouldn’t get in trouble for it in any case, but I’m pretty sure that it would be covered under fair use. But yea, if you photocopy a book and give it to your friend, that’s illegal.
Yes.
1) Quantity is its own quality: Scale makes a difference
2) The tools themselves automate tasks and consolidate their outputs. The “sale” of a piece of content, and its consumption, shifts away from the people producing it Example: We have entire networks and systems that depended on consumption occurring on the site itself. News websites, or indie sites depend on ad revenue.
Does a literal book nerd profit megacorporations when they bring up books to you? While burning through a household’s worth of energy in the process? Also, I’d like to talk with such a book nerd, because they’d have opinions on books; if I brought up something I have read, we could exchange thoughts about it, and they could make recommendations for me based on their complex experiences instead of statistics from Reddit comments. An LLM can do none of those things while still doing the former. It’s a lose-lose.
Also, a book nerd doesn’t need to take in roughly all human-created text as training to produce meaningful results. It’s just such a misplaced analogy, and people have been making it ever since OpenAI announced ChatGPT for the first time. Why do people think “an LLM is just a human who read a lot”?
Megacorporations making profit is not some evil that needs to be stopped. The economy is not zero sum.
> The economy is not zero sum.
This is true.
But it's not always positive sum, either.
> Megacorporations making profit is not some evil that needs to be stopped.
Externalities are a thing. It's not about the profit per se, but about how (a) the making of that profit might negatively impact others, and (b) the deployment of that profit in pursuit of rent-seeking and other antisocial behavior in order to insure its continued existence might also negatively impact others.
Externalities are a thing, but this isn’t exactly dumping toxic waste into a river.
I disagree with that. From what I read, data centers are going to have some real-world negative effects on human populations.
No, it's more just drying the river up entirely.
https://www.texastribune.org/2025/09/25/texas-data-center-wa...
> It really feels like we're in the soot-covered child-coal-miner Dickensian London era of the Information Revolution and shit is gonna get real rocky before our social and legal institutions catch up
The really discouraging part of this is that it feels like our social and legal institutions don't even care if they catch up or not.
Technology is speeding up and the lag time before anything is discussed from a legal standpoint is way, way too long
Maybe it’s a useless distinction but it feels like we’ve gone through overlapping ages of Communication, Data, Processing, and Information.
First we conquered the ability to move matter and transmit signal, greatly shrinking the world. Next was sensor technology, especially the mobilization of it, and our ability to collect more data than we could ever imagine being able to process. Then we started going crazy with data centres and big data and the idea that maybe we can somehow correlate it all if we just process it enough. And now we’re finally turning data into information, building enormous graphs of correlation without even having to manually reason about a lot of it. Before AI, the hard part was figuring out how to go about finding the signal you needed. Now it’s getting easier at an incredible speed.
> There is a whole giant essay I probably need to write at some point, but I can't help but see parallels between today and the Industrial Revolution.
You might want to give the following articles a read then: https://www.ufried.com/blog/ironies_of_ai_1/ https://www.ufried.com/blog/ironies_of_ai_2/
> Prior to the industrial revolution, the natural world was nearly infinitely abundant.
Prior to the industrial revolution, people fought to the death over who could use the rivers. Pre-industrial societies were societies of scarcity.
[0]: People have been fighting for water for more than 4000 years: https://en.wikipedia.org/wiki/Umma%E2%80%93Lagash_war
> If all of us can go hunting in the woods and yet there is still game to be found, then there's no compelling reason to define and litigate who "owns" those woods.
Property rights don't just protect natural resources, but labor as well. If I cleared out hunting ground in that forest to be the prime spot to catch animals, I would make sure I can use it when I want.
> a small number of people were able to completely deplete parts of the earth
A small number of people seems inaccurate when there's typically many more individuals in the pipelines for these technologies.
> and in return profit off the knowledge over and over again at industrial scale
Not off just that knowledge, there needed to be a model trained on the data of many others to utilize it.
> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?
Who's better at writing in this scenario and what are my motivations? If it's ChatGPT and I did it for money, then I would say I should recognize that I can't compete and find something AI can't do. If it's ChatGPT and I write to convey my ideas in an effort to learn regardless of the bestowment of a new perspective on the reader, I'll keep writing.
> Why would anyone plant seeds on someone else's farm?
They wouldn't unless it was their own way to attain food and survive. And if it's not the only way, they can defer to those with optimal methods to get it the cheapest they can in the market.
I'm halfway through Foundation on Apple TV and this piece landed hard (you had me at Asimov) because of it. Asimov's whole deal with psychohistory is that you can predict what large populations do even when individuals are unpredictable. Seldon doesn't need anyone to be honest; he needs the math to converge on something real about how people actually behave.
LLMs are sort of the inverse of that. They produce text that looks like the statistical aggregate of human knowledge, but nothing underneath is converging on truth. Seldon's math worked because it modeled actual dynamics. LLMs work because they model plausible text. The "jagged competence frontier" Kingsbury describes: crushing multivariable calculus, failing a word problem, is exactly what you'd get from a system that learned the shape of correct answers without learning what makes them correct.
The part of Foundation that feels prescient right now isn't the predicting-the-future stuff. It's the part where everyone can see Empire is hollowing out and the response is to just...keep going. More spectacle, more confidence, less substance holding any of it up. Hmmm, wait...
how are you enjoying the live action saturday morning cartoon version of Foundation with bonus plucky protagonists?
Ha! Yeah, it is not the books. That said, it's been long enough since I read them that I didn't feel too annoyed ("oh wait, I don't remember that in the books" did come up more than once, as did "oh wait, they're mixing up a bunch of books, is this the Robot series?"). Personally I think it's done really well, but hey, this is a way to make the money come in longer.
[dead]
>Prior to the industrial revolution, the natural world was nearly infinitely abundant. We simply weren't efficient enough to fully exploit it.
The mammoths disagree.
That is straightforwardly not true, land ownership was very well defined and the people who hunted in it without permission were prosecuted.
> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?
In the brave new world we're creating, people will write specifically for AI. If you can impress models so much that they "regurgitate" your work, then your work has achieved a kind of immortality.
> If all of us can go hunting in the woods and yet there is still game to be found, then there's no compelling reason to define and litigate who "owns" those woods.
The silver lining in that scenario is that consumers can "choose" to just go back offline. I put choose in quotes because with so many things in life requiring online accounts nowadays, that choice is tenuous.
> We have copyright and intellecual property law already, of course, but those were designed presuming a human might try to profit from the intellectual labor of others. With AI, we're in the industrial era of the digital world. Now a single corporation can train an AI using someone's copyrighted work and in return profit off the knowledge over and over again at industrial scale.
The idea that copyright simply doesn't apply to AI has more to do with AI companies deciding that they're not going to comply with those laws than the design of the laws. Also a very successful lobby against enforcement by positioning AI as a strategic necessity.
It's not possible (or at least extremely hard) to prove that the final weights they come up with resulted from copyright infringement.
That's why they are valued so highly on the stock market. Basically they will steal all the value of intellectual property in a semi-legal way.
> Prior to the industrial revolution, the natural world was nearly infinitely abundant. We simply weren't efficient enough to fully exploit it. That meant that it was fine for things like property and the commons to be poorly defined. If all of us can go hunting in the woods and yet there is still game to be found, then there's no compelling reason to define and litigate who "owns" those woods.
I mean, medieval Europe (speaking broadly) had pretty well defined property rights wrt hunting. In fact, the forester at the time was thought of as one of the most corrupt jobs, as they'd commonly have side hustles poaching and otherwise illegally extracting resources from the lands they enforced and kept others from utilizing in a similar way. Quis custodiet ipsos custodes?
I've been looking at things from the same lens since 2023. At the same time, the depletion/hoarding bit isn't new. Companies were already doing this with consumer data, LLMs are just finally the factory moment—now that we have all the raw material we finally have a means of automating production using it.
So, in some ways, I also view LLMs as a pivotal and important wake up call. Companies were already taking the data and using it for a variety of other purposes—it was just way less evident to people when they weren't in direct competition with labor, since, under capital, labor is what we sell.
Either an entire new industry needs to form, or it's finally time to move beyond capitalism. Centralized capital ends up killing itself, because it effectively shuts down its own engine if it kills off consumers, who can only exist in the first place if the wage labor structure holds.
>Prior to the industrial revolution, the natural world was nearly infinitely abundant.
>We had to invent giant legal systems in order to determine who has the right to do that and who doesn't.
Excuse me? The industrial revolution was like 300 years ago. We had laws before that.
Stuff gets put online when the reader isn't the customer. Someone is paying for a reader to be told certain things. So it's free at the point of reading.
Our only hope is that AI in the long run is both powerful and benevolent enough to be its own "whistleblower" in cases of misuse.
I struggle so hard with this anthropomorphism of LLMs. At the end of the day it's a statistical gradient descent predictor with a bunch of "shit" bolted on top to try and steer outputs in a specific way.
They don't have the actual concept of "benevolent"... or a concept of anything at all. Based on an input, they regress down a path of "what is the next most probable statistical token to output next" and that's fucking it, with the bolted-on shit manipulating these outputs a bit.
I don't doubt that at some point there will be some other AI leap, but I'm not even sure it'll be built on this foundation.
What really needs to be developed is an actual artificial brain of sorts. Much like an infant learns language from first principles, a real AI would have a phase of continuous growth, creating actual memories and being able to reflect upon them. I daresay context windows are not that.
I'd really like to encourage everyone to pump the brakes a bit and look at how these things actually work, and what they actually are. There is a reason sama is pivoting away from video et al. and into corporate software coding, much like Anthropic.
The natural world was not meaningfully abundant. Well before the industrial revolution, land that was once open for hunting was closed off by the ruling class. Even before then, a new class of merchants and factory owners had earned riches, bought land, and kept the poor from hunting on it. Many natural resources were out of reach for the majority and accessible only to those with deep pockets.
> We are truly in the Information Age now, and I suspect a similar thing will play out for the digital realm.
The analogy seems to be backwards though. It would be as if we previously had a scarcity of land and because of that divided it up into private property so markets could maximize crop yield etc. and then someone came up with a way to grow food on asteroids using robots, and that food is only at the 20th percentile of quality but it's far cheaper. Suddenly food becomes much more abundant and the people who had been selling the 20th percentile food for $5 are completely out of the market because the new thing can do that for $0.05, and the people providing the 50th percentile food for $10 are also taking a hit because the price difference between what they're providing and the 20th percentile stuff just doubled.
The existing plantation owners then want to put a stop to this somehow, or find a way to tax it, but arguments like this have a problem:
> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?
This was already the status quo as a result of the internet. Newspapers were slowly dying for 20 years before there was ever a ChatGPT, because they had been predicated on the scarcity of printing presses. If you published a story in 1975 it would take 24 hours for relevant competitors to have it in their printed publication and in the meantime it was your exclusive. The customer who wants it today gets it from you. On top of that, there weren't that many competitors covering local news, because how many local outlets are there with a printing press?
Then blogs, Facebook, Reddit and Twitter came along, and anyone who could set up WordPress could report the news five minutes after you do -- or five hours before, because now everyone has an internet-connected camera in their pocket, so the first news of something happening comes in seconds from whoever happened to be there at the time instead of the next morning after a media company sent a reporter to cover it.
The biggest problem we have yet to solve from this is how to trust reports from randos. The local paper had a reputation to uphold that you now can't rely on when the first reports are expected to come from people with no previous history of reporting because it's just whoever was there. But that's the same thing AI can't do either -- it's a notorious confabulist.
And it's the media outlets shooting themselves in the foot with this one, because too many of them have gotten so sloppy in the race to be first or to pander to partisans that they're eroding the one advantage they would have been able to keep. Damn fools to erode the public's trust in their ability to get the facts right when it's the one thing people would otherwise still have to get from them in particular.
This assumes the limiting factor is content generation, not ability to read and verify.
You make the point about "randos" later in your comment, but you treat it as a minor issue. The actual limits are verification, and then attention. Verification is always more expensive than generation.
However, people are happy to consume unverified content which suits their needs. This is why you always needed to subsidize newspapers with ads or classifieds.
> This assumes the limiting factor is content generation, not ability to read and verify.
Content generation is the thing copyright applies to. If you want to create a reward system for verification, it's not going to look anything like that.
It mostly looks like things we already have, like laws against pretending you're someone else to trade on their reputation so that people can build a reputation as trustworthy and make money from subscriptions or ads by being the one people to turn to when they want trustworthy information.
> However, people are happy to consume unverified content which suits their needs. This is why you always needed to subsidize newspapers with ads or classifieds.
I suspect the real problem here is the voting thing. When people derive significant value from information they're quite willing to pay for it. Wall St. pays a lot of money for Bloomberg terminals, companies pay to do R&D or market research, individuals often pay for financial software or games and entertainment content etc.
But voting is a collective action problem. Your vote isn't very likely to change the outcome so are you personally going to spend a lot of money to make sure it's informed? For most people the answer is going to be no, so we need something that gives them access to high quality information at minimal cost if we want them to be informed.
Annoyingly, one of the common methods of mitigating collective action problems (government funding) has a huge perverse incentive here, because the primary things we want people to be informed about are political issues and official misconduct. You can't give incumbent politicians the purse strings, for the same reason the First Amendment bars them from governing speech.
So you need a way to fund quality reporting the public can access for free. Advertising kind of fit but it never really aligned the incentives. You can often get more views by being entertaining or inflammatory than factual.
The question is basically, who can you get to supply money to fund factual reporting for everyone, whose interest is for it to be accurate rather than biased in favor of the funder's interests? Or, if that's not a thing, whose interests are fairly aligned with those of the general public? Because with that you can use a patronage model, i.e. the content is free to everyone but patrons choose to pay money because they want the work to be done more than they want to not pay.
The obvious answer for "who" is then "the middle class" because they're not so poor they can't pay a few bucks while still consisting of a large diverse group that won't collectively refuse to fund many classes of important reporting. But then we need two things. The first is for the middle class to not get hollowed out, which we're not doing a great job with right now.
And the second is to have a cultural norm where doing this is a thing, i.e. stop teaching people the illiterate false dichotomy where the only two economic camps are "Soviet Communism," in which the government is required to solve everything through central planning, and "greed is good," where being altruistic makes you a doofus for not spending all your money on blackjack and cocaine. People instead need to be encouraged to notice that once their basic needs are met, wanting to live in a better world is just as valid a use for free time and disposable income as designer shoes or golf.
Absolute peak HN energy that the top reply to this very insightful point is a rambling pedantic argument about the finer points of agricultural development.
> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?
I'm happy to miss all the stuff that was written just for the financial benefit of the author.