In my experience, the publication pressure in today's science is to a large extent inhibiting innovation. How can you innovate when you need to have X papers every year, or else you won't get that position or funding? To fulfill the quota, the only rational strategy is to focus on simple iterative papers that are very similar to what everybody else is doing. There is simply no time to innovate or be brave; you have to conform. There is also barely time to make sure that what you are doing is actually methodologically correct. If you spend too much time, you will get scooped and forgotten.
Case in point: everybody is doing AI research nowadays, and NIPS has something like 15k submitted papers. But the innovation rate in AI is actually not that much higher than 10 years ago; I would even argue that it is lower. What are all these papers for? They help people build their careers as proofs of work.
I completely agree that "publish or perish" harms innovation. Funding and research positions have become so predicated on rapid and consistent publication that it incentivizes researchers to focus on incremental and generally low-risk ideas that they can propose, develop, and publish quickly and predictably. Nobody has the time or energy anymore to focus on bigger and braver (your word) ideas that are less incremental and cannot be developed in predictable time frames.
I agree that many fields essentially have papers as "proof of work", but not all fields are like that. When I worked as a mechanical engineer, publication was "the icing on the cake" and not "the cake itself". It was a nice capstone you did after you had completed a project, interacted with your customers, built a prototype, filed a patent application, etc. The "proof of work" was the product, basically, and you could build your career by making good products.
Now that I am working as a scientist, I see that many scientists have a different view of what their "product" is. I have always focused on the product being the science itself --- the theories I develop, the experiments and simulations I conduct, etc. But for many scientists, the product is the papers, because that is what people use to evaluate your career. It does not have to be this way, but we would have to shift towards a better definition of what it means to be a productive scientist.
AI is a special case of a special case. First you have the weird CS publication culture with conference papers and a heavy focus on selecting a (small) subset of winners. And then you have a subfield with giant conferences, a lot of money, and a lot of people doing similar things.
A typical approach to science is finding your niche and becoming the person known for that thing. You pick something you are interested in, something you are good at, something underexplored, and something close enough to what other people are doing that they can appreciate your work. Then you work on that topic for a number of years and see where you end up. But you can't do that in AI, because the field is overcrowded.
I agree that AI is an extreme example, but similar pressures exist in other popular fields and subfields, especially in STEM. Peter Higgs famously said that he probably wouldn't be considered productive enough to survive in today's academic system.
> finding your niche
Exactly. It used to be that way in AI a decade ago. Different subfields used bespoke methods you could specialize in, and you could take a fairly undisturbed 3-5 years to work on a topic without constantly worrying about being scooped and therefore having to rush to publish something half-baked just to plant flags. Nowadays methods are converging, and it's comparatively less useful to be an expert in some narrow application area, since standard ML methods work quite well across such a broad range of uses (see the bitter lesson). This also means that a broader range of publications is relevant to everyone: you're supposed to be aware of the NLP frontier even if you are a vision researcher, you should know about RL developments, etc. Due to more streamlined GitHub and Hugging Face releases, research results are also more available for others to build on, so publishing an incremental iteration on top of a popular method is much easier today than 15 years ago, when you first had to implement the paper yourself and needed expertise to avoid traps that weren't mentioned in any paper but were assumed to be common knowledge.
It may not be a big problem for overall progress, but it makes people much more anxious. I see it in PhD students: many are quite scared of opening arXiv and academic social media, fearing that someone was faster and scooped them.
Lots of labs are working on very similar things, and labs are less focused on narrow areas; everyone tries to claim broad areas. Meanwhile, people have less and less energy to peer review this flood of papers, and there's little incentive to do a good job there instead of working on the next paper.
This definitely can't go on forever, and there will be a massive reality check in (AI/ML) academia.
Certainly other fields are competitive, but the current AI boom has been ridiculous for a while now. As an outside observer, I get the impression that the competition is for the final money, prestige, or whatever the top papers win, rather than competition at the level of paper acceptance...
The competition ratchet keeps turning and the inflation continues. It used to be publications. Then it was top-conference publications. Now it's going viral on social media and being popularized by big AI aggregators like AK.
It's crazy: most Master's students applying for a PhD position already come with multiple top-conference papers, which a few years ago would have gotten you like 2/3 of the way through a PhD, and now it just gets you a foot in the door when applying to start one. And Bachelor's students are already expected to publish to get a good spot in a lab for their Master's thesis or an internship. NeurIPS even has a track for high school students to write papers, which - I assume - will boost their applications to start university. This type of hustle has long been common in many East Asian countries and is getting globalized.
That whole thing feels like a crypto coin, as in, it's a currency that's worth something only to that particular group. The industry obviously doesn't care about all these papers, so the question is: what is the social structure in which these papers provide status and respect (who values their currency)?
Science is prestigious, and quick, quantifiable ways to measure it are used as heuristic proxies. There are many angles to answering your question. Are you interested in the industry connection, how it translates to money, the political aspects, etc.? People generally have little time for evaluation, there is an oversupply of applicants, and being able to point to metrics can cover your ass against accusations of bias. It offloads quality assurance to the peer review system: this person's work has been assessed by expert peers in 5 instances and accepted at a venue with a 20% acceptance rate where the top experts regularly publish. It's a real signal. It shows they can persist through projects, communicate their work and defend it against reviewers, present it to crowds, etc.
It's a prestige economy. There are other things too, like having worked with someone famous or having interned at a top company.
Prestige economy is what I suspected. I recently read an AI paper whose idea I had mostly come up with on a random walk, but a Stanford student had already written the research paper (not exactly the same, but more or less). In terms of "true" signal, I'd imagine that student being reviewed as credible signals that we're in bad shape, because I can promise you I came up with the exact same thesis and implementation, and it was truly just common-sense stuff - not research-worthy.
Makes me wonder, have I turned brilliant or is it quite unimpressive out there?
I'm even inclined to suggest to you that the prestige economy started with truly prestigious research work, and the institutions then "ordered" as many more of those as they could, hence the industrial levels of output. Not unlike VCs funding anything and everything on the chance that a few turn out to be true businesses.
The reality is that innovation is hard to plan. It's like outperforming the market. Scientific breakthroughs are about figuring out where the gaps in our knowledge are that would be fruitful to fill, or where our current understanding is wrong. But if we already knew which of our beliefs were wrong, we wouldn't believe them anymore. You can't produce breakthroughs like clockwork, and the more thorough your work is, the less opportunity there is to find out later that you were wrong!
The problem is that, of course, everyone wants the glory of making some new groundbreaking, innovative, disruptive scientific discovery. And so excellence is equated with such discoveries, so everything has to be marketed as such. Nobody wants to accept that science is mostly boring: it keeps the flame alive and passes the torch on to the next generation, but there's far less new disruption than is pretended. But again, a funding agency wants sexy new findings that look flashy in the press, with bonus points if they support its political agenda. The more careful and humble an individual scientist is, the less successful they will seem. Constantly second-guessing your own hypotheses, playing devil's advocate hard, doing double and triple checks, running more detailed experiments, etc. take more time and have a better chance of discovering that the sexy effect doesn't really exist.
> Makes me wonder, have I turned brilliant or is it quite unimpressive out there?
Obviously, it's impossible to say without seeing their work and your work. But for context, there are on the order of tens of thousands of top-tier AI-related papers appearing each year. The majority of these are not super impressive.
But I also have to say: what seems like "just common sense" may only look that way in hindsight, or you may be overlooking something because you don't know the related history of methods, or maybe you're glossing over something that someone more experienced in the field would highlight as the main "selling point" of that paper. Also, if common sense works well but nobody has done it before, it's still obviously important to know how well it works quantitatively, including a detailed analysis.