We already have open-access publications: just put it on arXiv. Most researchers I work with do this already.
The problem isn't access, it's citations. arXiv is not considered a credible citation source, since anyone can publish anything. TPCs (technical program committees) don't use it in their lists of citations, and neither do grant funding agencies or government institutions.
The current academic enterprise relies heavily on third-party gatekeeping. We rely on others to do the vetting for us. The first thing an academic does is check where a paper is published, before even reading it. It's a crutch.
Any gatekeeper will naturally tend toward charging for access over time: it's a captive market, and the economics demand it. Unless we eliminate that dependency, we cannot change the system.
I just checked in case it had changed, but arXiv is nowhere near as free-for-all as you imply.
A crank who has learned to use LaTeX can't just post articles willy-nilly. You need endorsements in the field.
Check out the "collective action problem" described in this article. It explains why "just publish on arXiv" isn't a practical solution: it doesn't lead to the problem being fixed, because of the inertia against any individual breaking out of the system.
I've long wished that "journals" and academic societies would transition from a publishing model to a cultivation model. If everything is available on arXiv, that's great, but it also means the best of the best is mixed in with all the rest.
Journals (in the sense of whoever is on the editorial board) don't need to cease to exist; they just need to transition to "here's our list this month of what the best new articles are on X topic". The paper's already there on arXiv, you could already read it before. But having a group of editors that cultivate a list of good articles (as well as the peer review process that can, in an ideal world, serve to improve a paper) can serve to make sifting through arXiv less overwhelming, and draw attention to papers in particular subfields, subject matter, or whatever other criteria might be relevant.
This is quite similar to how eLife does publishing. You still have to submit to them, but they basically just add reviewer comments and an "eLife Assessment" that serves as the quality/curation signal rather than a binary publish/reject decision.
I don't see any reason why we shouldn't make this transitive. Working professionals track the literature. If there were a standard way to publish an "I think this paper is interesting" signal, then we could roll that all up. There are certainly practitioners I really do trust to be in the game for the right reasons; if they think a paper represents a contribution, that's a strong signal for me.
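A rough sketch of what such a rollup could look like, assuming a hypothetical feed of (paper ID, endorser) records and a personal table of trust weights. All names, IDs, and the record format here are made up for illustration; a real system would presumably want signed statements and a standard publication format.

```python
from collections import defaultdict

# Hypothetical endorsement records: (paper_id, endorser).
# In practice these might be signed statements published in a standard feed.
endorsements = [
    ("arXiv:2501.00001", "alice"),
    ("arXiv:2501.00001", "bob"),
    ("arXiv:2501.00002", "alice"),
]

# Personal trust weights: how much I trust each practitioner's judgement.
# Unknown endorsers contribute nothing.
trust = {"alice": 1.0, "bob": 0.5}

def rolled_up_scores(endorsements, trust):
    """Sum trust-weighted endorsements per paper."""
    scores = defaultdict(float)
    for paper_id, endorser in endorsements:
        scores[paper_id] += trust.get(endorser, 0.0)
    return dict(scores)

print(rolled_up_scores(endorsements, trust))
# {'arXiv:2501.00001': 1.5, 'arXiv:2501.00002': 1.0}
```

The transitive part would come from deriving the trust table itself from other people's published trust (PageRank-style) rather than maintaining it by hand.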
In the publishing world, there is this thing called the slush pile: the collection of unsolicited submissions, essentially the only way a person without an agent can break into the field. You can find quite a few editors' experiences with the slush pile in various blog posts and articles online. The general arc goes from naïve wonder at the idea of finding the diamond in the rough, to frustration with the quality of the submissions, to a realization that the actual game is figuring out how to reject submissions with as little reading as possible (because they don't have the time to do any reading!). This was before LLMs came about, which have made the slush pile problem much worse: they don't improve the quality of the submissions, but they do increase the amount of reading that needs to be done to reject them.
Academia has the same fundamental problem. We don't actually have the time to read every possible paper someone has for us, because keeping up with literature takes time that we don't have. And while relying on the quality of the journal or conference as a metric for "is this paper worth reading?" has issues, to be honest, it is more effective than other proposed solutions. When I have done the literature searches that delved into the unknown, low-quality tiers of journals... no, those results were not worth the time I spent reading them.
There's also a middle ground, i.e., renowned publishers who aren't free but still publish everything as OA. One example is Dagstuhl Publishing for CS research papers.
Why isn't a citation just a citation? It's a pointer to a source, that's all. If it implies some standards have been applied, or editorial or scientific review has been done, then that's going to have to be paid for by someone. TFA implies that doesn't happen: [and then] we stop doing all that stuff and then the cash just pours out. So a citation to an article in Nature isn't any better than one on arXiv.
> So a citation to an article in Nature isn't any better than one on arXiv.
The real problem is that nobody can grade and compare articles across different topics, so there are proxies like the number of articles in "serious" journals (whatever that means [1]) and the number of citations in "serious" journals (whatever that means [1]).
Do we also count citations on X/Twitter, Facebook, WordPress [2], StackOverflow, ...?
If links in HN also count as citations, there are 3 additional citations for my last paper:
http://www.example.com/gus_massa/very_good_paper_2026.pdf
http://www.example.com/gus_massa/very_good_paper_2026.pdf
http://www.example.com/gus_massa/very_good_paper_2026.pdf
[1] Which journals are serious and which are paper mills? At the extremes the difference is clear, but in the middle there is a gray zone.
[2] A citation in Tao's WordPress blog should be worth at least half an official citation, or perhaps a whole point.
Unfortunately, I think charging money is a necessary signal that this particular gatekeeper is doing a good job. We should recognise that money is a necessary part of this process, or else there is no gate to keep. But we should reverse the economics by having people pay to get their stuff peer reviewed. Imagine if reviewing research papers was something you could get paid to do; the incentive then isn't to rubber-stamp things, and your rating as a reviewer would come down to the quality of your reviews.
> I think charging money is a necessary signal that this particular gatekeeper is doing a good job.
I’ve never seen the slightest relationship between the charge to read a paper and the quality of review.
Because there isn't such a relation. It's a thing people believe when they don't have actual experience with peer review. If anything, predatory journals and low-quality pubs can charge more, since publication is more guaranteed (and researchers reaching for these pay-to-publish journals are more desperate).
It's a reputation economy. Like review sites. They start off truthful, and then as time goes on incentives shift to bad actors to subvert it. Or they just sell out their reputation.
Yelp, TripAdvisor, Wirecutter, hell, even Google results themselves.
Once you start poisoning that well, it's difficult if not impossible to claw it back.
I tend to agree, but keep in mind that most likely you just don't even bother reading the shittiest of the shittiest papers just based on title and abstract. And for every good article there are like 10 unindexed shitty ones.
Yeah review takes time and time is money. This needs to be priced in somehow. Bonus side effect: Frauds get discovered and filtered out (in theory).
But who watches the watchers? I guess review fraud will need to be considered as well.
Scientific publishers do not pay for peer review. Reviews are done by researchers as part of their jobs which are paid for by their research grants.
> But we should reverse the economics by having people pay to get their stuff peer reviewed.
Not really. There would be perverse incentives where the publisher benefits from accepting more articles. For good journals that would be a conflict of interest at best, where they would optimise the marketing-to-acceptance ratio. I can't believe I am writing something good about scientific publishers, but at least when the reader pays, they are incentivised to publish things that have an audience. Otherwise, they are going to cut corners, and I mean more than they currently do. And it's not hypothetical: there are already terrible publishers doing this.
The problem is that this becomes a race to the bottom of actual quality and turns into advertising.
Sponsored product reviews are basically this. If you are paying a reviewer for a stamp of approval and the reviewer sets the bar too high, why would you want to pay that reviewer? On the reviewer's end, it's easy to get more money by providing that stamp of approval to more people, not fewer, so they're incentivized to make it fairly easy to achieve.
Exactly. The solution already exists. However, another problem is that arXiv is creeping towards the old model ...
>The first thing an academic does is check where a paper is published, before even reading it. It's a crutch.
This is actually what ruined my respect for Academia.
My science PhD buddy looked at the journal title and the claim, then said: "It's true!"
I looked at him in horror. Who cares about the journal? I want to know the data and methodology.
I've basically never forgiven Academia since then. I've seen even Ivies put out bad research, and journals will publish bad research (the replication crisis, the faked Ivy League psychology studies).
For outsiders, there is a prestige to being a PhD or working as a professor. Now that I'm mid-career and have lived through the events I mentioned, plus seeing who stuck with academia... these are your C-grade performers. They didn't get hired by industry, so they stayed in school. They are so protective of their artificial rank because they cannot compete in industry. It's like being the cool person on the tennis team: locally cool, but not globally cool.
> This is actually what ruined my respect for Academia.
Spoken like someone who never went through grad school at a competitive R1 program
It was already a grueling 60-80 hour grind every week with frequent all nighters, high-pressure deadlines, absolute minimal pay, thankless duties, and plenty of politics. It's about the same for professors too.
We already paid our dues by helping peer review (for free) a half dozen papers for each one we submitted. Why should we be expected to review random papers on arXiv too...?
I went to an R1 university. Most students did not have a 60-80 hour grind. If they did, it was because of an overbearing advisor. Years later, those students are not ahead of those who had a more relaxed advisor.
And chances are: Those overbearing advisors are very invested in the current system.
It varies enormously by field.
In CS you will have intense grind weeks around conference deadlines, and a more manageable but still challenging pace of life otherwise.
In wet-lab science you live by the schedule set by your experiments, which often involves intense hours.
> Why should we be expected to review random papers on arXiv too...?
The GP is not saying to review each paper you read or cite. They're complaining that a colleague accepted a claim after just reading the title and where the paper was published. Between that and doing a full review there's surely a world of options.
The problem is not that he was not willing to review it. It was that he was willing to conclude it was true. If he had said "that is interesting" or "that is plausible" or whatever, that is fine. It is concluding it is true that is the problem.
I don’t think folks in academia have come to terms with how much the above attitude has completely and nearly entirely undermined the credibility of the entire scientific and academic community in the eyes of the general public.
You don’t need a degree to understand how much utter junk science is being published by those who think they are superior to you. Just read a few actual papers end to end and look at the data vs conclusions and it becomes totally obvious very rapidly that you cannot “trust the science” since it’s rarely actual science being done any longer.
The academic community has utterly failed at understanding they needed to cull this behavior early and mercilessly. They did not, and it will be generations at best to rebuild the trust they once had. If they ever figure out they need to.
Things are going to get much worse before they get better. You can’t take any published paper at face value any longer without going direct to primary sources and bouncing it off an expert in the space you still trust to give you the actual truth.
On the whole you should rarely read papers, you want to read a whole literature in an area. Academics embedded in the field can do this easily. Academics outside of an area know to do this, and to bounce things off an expert to make sure you have the context and aren't over-indexing on a flashy result. Everybody learns the painful lesson in grad school to not just read a paper and believe everything will work as it says.
Somehow the general public and policymakers got the idea that if a paper gets published in any non-fake journal, this is an official endorsement that it's 100% correct, everything in it can be read in isolation, and it's safe to use all claims in the paper to direct policy immediately.
I think academia is partially to blame for encouraging people to believe this rather than insisting on explaining the nuances of how to interpret published research. On the other hand, nobody wants to hear a message that things are nuanced, and they will have to do costly hard work to get at the truth.
I think a world where "you can take any published paper at face value...without going direct to primary sources and bouncing it off an expert in the space" would be great, but it never existed, and it's just fundamentally impossible.
I wouldn't be surprised if the parent, in complaining about his academic buddy who declared the findings true without reading the paper's methods, had misunderstood why his friend did so... it could well have been due to additional knowledge about similar past findings and studies.
I fear you are right here, and that the problem is far more dire than much of academia realizes. I know enough highly intelligent people (some even with family / spouses in academia, surprisingly) that are otherwise very e.g. left / liberal / progressive and open, that are still basically saying academia needs to be gutted / burned down.
I've no idea what the actual stats are on faith in academia overall today, but I don't think it is looking good.
Go read /r/LawyerTalk and enjoy the horror of the dawning realization that this is a lot of professionals. I think it's an issue that stems from getting too deep into the minutiae of the technical and cultural matters of one's field; you become a really good scientist, or lawyer, or SWE (by the standards of scientists and lawyers and SWEs), and end up coming to conclusions that everyone outside the bubble looks at and says, "That's absolutely asinine." Well, laymen just don't understand the details, you know? (Even though the whole point of these professions is to provide services to laymen, fix problems laymen come to them with, and guide laymen to make practical and logical decisions when a $500/hr appointment isn't called for.)
These people take themselves too seriously, and other people only take them seriously when there are material ramifications for not doing so. Otherwise, they're viewed as pompous busy-bodies and don't do themselves any favors by playing to the role.
>It was already a grueling 60-80 hour grind every week with frequent all nighters, high-pressure deadlines, absolute minimal pay, thankless duties, and plenty of politics.
You know what else works really hard? A washing machine. Hard work alone doesn't create value. I could give you a spoon and tell you to dig a hole, or I could teach you how to use a digger.
Some things are hard because you overcomplicate them. Some things are hard by their very nature.
Unless you are a Claude Shannon type, adding fundamental new knowledge to humanity's corpus is generally actually hard - at least in science & engineering. If you feel differently, I look forward to reading your groundbreaking papers!
Weirdly, I do have my contributions to science. I run a pretty popular blog, with 250k-1M users per year.
Academia will refer to my stuff. Various levels of the US government use my data.
To be honest, I think I got lucky + I was a (hardcore) Stoic for a decade + my hobby was scientific.
> You know what else works really hard? A washing machine. Hard work alone doesnt create value.
My washing machine creates a lot of value for me. The time it saves me is incredibly valuable.
Most machines that work really hard are valuable because they free up time.
This wasn’t the clever burn you thought it was.
It's a line from National Lampoon's Christmas Vacation.
Value is what you're willing to pay for something.
Laundromats aren't particularly profitable businesses.
Laundromats are among the best businesses there are: they're extremely profitable and seldom if ever go out of business. You should look this up; it's fairly fascinating.
Complete hogwash of a comment, based almost entirely on your limited experiences, to denigrate academic scientists.
If you even knew these people, you'd know that most that remain in academia never considered industry in the first place. These people were not rejected by industry. In fact, it is the other way around. *They rejected industry*. They did so, despite knowing they'd make more money, but chose to remain in academia because they wanted to spend their life pursuing research topics that interested them with independence. Sometimes they feel the fool when money is tight and the hours are relentlessly long, but never have I seen it happen because they were rejected by industry.
> The first thing an academic does is check where a paper is published, before even reading it. It's a crutch
IMO, academics that do this are not very competent, because we have plenty of research suggesting that higher-profile journals are in fact less trustworthy in many ways, or that there is no correlation at all between reputation and quality (see my other post here in this thread).
Yes, some trash journals publish all trash, but, beyond that, competent researchers scan the abstract, look at sample sizes and basic stats, and if those check out, you skip to the methods and look for red flags there. Also, most early publications will be on an arXiv-like place anyway so you can't look to reputation yet.
Likewise, serious analytic reviews like meta-analyses don't factor in e.g. impact factor or paper citations, since that would be nonsense. They focus on methodology and stats.
I really think we ought to shame academics that are filtering papers based on journal alone, it is almost always the wrong way to make a quick judgement.
I have seen more than one PI at an R1 university with multiple Nature publications use this heuristic. I would not call them incompetent.
Do you not notice the circularity of your reasoning here?
Also, I didn't say incompetent, I said "not very competent". More competent researchers make journal reputation only a very small factor, and not via the "high rep = more trustworthy" direction (which is the bad heuristic) but via "pay-to-publish journal = not trustworthy" (a better heuristic).
Once you have ruled out a publication being in a trash journal, reputation is only a very minor factor in consideration, and methodological and substantive issues are what matter.
> IMO, academics that do this are not very competent, because ...
Where's the cry-laugh emoji when I need it.
Of course academics check where stuff is published. Please...
There are still real journals out there, although you might not know which is which.
Ah, look, another smug sneer that ignores the evidence I presented, and makes another circular argument (i.e. that because academics look at rep, this is justified, even though I provided evidence disputing this).
I know what journals are better / not. But reputation only is helpful in letting you ignore trash journals, once you are out of trash land, rep is just not a very meaningful factor, and you have to focus on methodology and substance.
Where's the evidence you presented?
What are some higher-profile journals that are in fact less trustworthy in many ways?
I literally said it was posted in this thread, and a quick Ctrl+F of my username on this page would have found you it in a half second: https://news.ycombinator.com/item?id=47249236
> The problem isn't access, it's citations. arXiv is not considered a credible citation source since anyone can publish anything
I do some due diligence work from time to time. Uploading to arXiv is becoming a favorite tactic of companies trying to look impressive to investors. I've read a lot of "papers" submitted by startup founders that are obviously ChatGPT-written slop uploaded to arXiv. They then go to investors and show their record of "published research". Smart investors are catching on, but there are plenty of investors who associate journals with quality and filtering, and assume having a paper on there means something.
The filtering and curation problem is real. It seems like academic pettiness or laziness from the outside, until you see the volume of bad “papers” that everyone is trying to publish to chase the incentives.
Maybe studies could be dual published in open access publications and private.
Then you get the private branded badge social proof and access can continue.
Also, TIL that anyone can publish to arxiv.org?
We have a gatekeeper already in the funding source - they do the work of vetting researchers prior to funding the work.
Piggyback on this system so that the funding source publishes the papers itself, and researchers can only publish papers that are directly funded.
This system requires the cooperation of an organization to build the publishing infrastructure, but this could go to the lowest capable bidder, with less drag on the system overall.
Just putting it on arXiv does not automatically make it OA. It needs a permissive license.
I think people in this post are using arXiv as sort of metonymy / stand-in for OA here, but, yes.