This shouldn't be flagged. This is a new type of spam that will have serious consequences for open source.
LLMs have made it possible to effortlessly produce plausible-looking garbage at scale, and open source maintainers will soon have to deal with a high volume of these PRs.
Just look at how many spammers it attracted when Digital Ocean offered free T-shirts to open source contributors [1]. Now imagine what will happen when job prospects are involved and anyone can mass-produce plausible-looking garbage PRs in a single click.
LLMs will accelerate maintainer burnout in the open source world, and there's no good solution for that right now.
There is actually a really simple solution to this: auto-reject PRs from people you don't know.
If someone is new to the project, ask them to write an issue explaining the bug/feature and how they plan to address/implement it. Make them demonstrate a human understanding of the code first.
This is not a purely technical problem but a social one too.
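For what it's worth, a gate like that is easy to automate. Below is a minimal sketch of the "issue first" rule, assuming a bot running with a GitHub token; the repo name, comment text, and helper names are made up for illustration, but the REST endpoints are GitHub's public API. It leaves PRs from anyone who has previously opened an issue or had a PR merged alone, and otherwise posts a comment and closes the PR.

    # Sketch only: screen PRs from first-time submitters and ask for an issue first.
    # GITHUB_TOKEN, the repo name, and the comment text are placeholder assumptions.
    import os
    import requests

    API = "https://api.github.com"
    REPO = "example-org/example-repo"  # hypothetical repository
    HEADERS = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }

    def has_prior_engagement(author: str) -> bool:
        """True if the author has opened an issue here or had a PR merged."""
        issues = requests.get(
            f"{API}/repos/{REPO}/issues",
            params={"creator": author, "state": "all"},
            headers=HEADERS,
        ).json()
        # The issues endpoint also returns PRs; real issues lack "pull_request".
        if any("pull_request" not in item for item in issues):
            return True
        merged = requests.get(
            f"{API}/search/issues",
            params={"q": f"repo:{REPO} type:pr author:{author} is:merged"},
            headers=HEADERS,
        ).json()
        return merged.get("total_count", 0) > 0

    def screen_pull_request(number: int) -> None:
        """Comment on and close PRs from authors with no prior engagement."""
        pr = requests.get(f"{API}/repos/{REPO}/pulls/{number}", headers=HEADERS).json()
        if has_prior_engagement(pr["user"]["login"]):
            return  # known contributor, leave it for a human reviewer
        requests.post(
            f"{API}/repos/{REPO}/issues/{number}/comments",
            json={"body": "Thanks! Please open an issue describing the bug/feature "
                          "and your planned approach before sending code."},
            headers=HEADERS,
        )
        requests.patch(
            f"{API}/repos/{REPO}/pulls/{number}",
            json={"state": "closed"},
            headers=HEADERS,
        )

Whether you actually auto-close or just label the PR for triage is a policy choice; the point is that the "show you understand the code first" rule can be enforced cheaply.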
How will that help? LLMs are more coherent and better at communicating than a lot of developers.
Making people jump through hoops will just discourage legitimate potential contributors and won't stop AI slop. LLMs are good at generating legitimate-sounding walls of text. Without actual code, it'll be harder to distinguish legitimate contributors from spammers.
You could ask the submitter to show a quick video recording of the new feature being used, or, if it's a bugfix, the failure scenario followed by the fixed, non-buggy scenario. If they can't be bothered to show a basic before/after demo of whatever they're working on, then you probably don't want to work with them or accept their code changes anyway.
People already jump through hoops and live with it just fine. I don't claim to have the best solution, but fundamentally it's a social problem and therefore solvable. Perhaps some form of chain of trust would help.
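To make the chain-of-trust idea concrete, here's a rough, purely hypothetical sketch (not any existing tool): maintainers vouch for contributors they trust, those contributors can vouch in turn, and a PR from anyone more than a couple of vouch-hops away gets routed to an "ask for an issue first" queue. The names, graph, and hop limit below are all assumptions.

    # Illustration of a vouch-based trust graph for PR triage; data is made up.
    from collections import deque

    VOUCHES = {
        "maintainer_a": {"regular_1", "regular_2"},
        "regular_1": {"newcomer_x"},
    }
    MAINTAINERS = {"maintainer_a"}

    def trust_distance(author: str, max_hops: int = 2) -> int | None:
        """BFS from maintainers along vouch edges; None if unreachable within max_hops."""
        frontier = deque((m, 0) for m in MAINTAINERS)
        seen = set(MAINTAINERS)
        while frontier:
            user, hops = frontier.popleft()
            if user == author:
                return hops
            if hops == max_hops:
                continue
            for vouched in VOUCHES.get(user, set()):
                if vouched not in seen:
                    seen.add(vouched)
                    frontier.append((vouched, hops + 1))
        return None

    def triage(author: str) -> str:
        return "review normally" if trust_distance(author) is not None else "ask for an issue first"

    print(triage("newcomer_x"))  # review normally (vouched for by regular_1)
    print(triage("stranger"))    # ask for an issue first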