This policy change makes sense to me; I'm also sympathetic to the P0 team's struggle in getting vendors to take patching seriously.
At the same time, I think publicly announcing that a vulnerability has been discovered can itself be valuable information to attackers, particularly in the context of disclosure on open source projects: in my experience, maintaining a completely hermetic embargo on an OSS component is extremely difficult, both because of the number of people involved and because fixing the vulnerability sometimes requires changes to other public components in advance.
I'm not sure there's a great solution to this.
On the contrary: if Project Zero finds a 0-day in a product I know I use, and I know that product is internet-facing, I can immediately take action and firewall it off. They won't always find something like that, but an early warning signal can be really beneficial.
It also gives customers leverage to contact vendors and politely ask for news on the patch.
Maybe I don't understand the threat model here: what kind of public-facing services are you running that are simultaneously (1) not already access-limited, and (2) not so load-bearing that they need to stay public-facing?
(And to be clear: I see the benefit here. But I'm talking principally about open source projects, not the vendors you're presumably paying.)
Some companies might be willing to compromise functionality to avoid compromise of their networks.
There's always a usability/functionality vs. security tradeoff.
Unfortunately I think most of the products you use have 0-days in them, it's just that Project Zero hasn't found them yet.
Unless the 0day is in your firewall.
Fortinet strikes again...
There really isn't a great solution here. The notice that a vulnerability has been discovered puts even more pressure on the fix to be deployed as close to instantly as possible, throughout the entire supply chain.
Why is this? Especially for smaller or more stable open-source projects, the number of commits in a 90-day window that could plausibly be security-relevant is likely to be quite low, perhaps in the single digits. So the specific commit that fixes the reported security issue is highly likely to be identified immediately, and now there's a race to develop and use an exploit.
As one example, a stable project that's been the target of significant security hardening and analysis is the libpng decoder. Over the past three months (May 1 to Jul 29), its main branch has seen 41 commits. Of those, at least 25 were non-code changes: documentation updates, release engineering, and build system / cross-platform support. If Project Zero had announced a vulnerability in this project on May 1 with disclosure scheduled for today, there would be at most 16 commits to inspect over those three months to find the bug. That's not a lot of work for a dedicated team.
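To make that concrete, here's a rough sketch of the triage an attacker could automate, assuming a local clone of the repo and treating "touches a .c or .h file" as a proxy for "possibly the fix" (the repo path and date range below are illustrative, not taken from the actual libpng history):

    # Sketch: list commits in an embargo window and keep only those touching C code.
    # Assumes a local clone at REPO; filenames with spaces aren't handled.
    import subprocess

    REPO = "libpng"                      # hypothetical path to a local clone
    SINCE, UNTIL = "2024-05-01", "2024-07-29"
    CODE_SUFFIXES = (".c", ".h")

    def candidate_commits(repo, since, until):
        # Commit hashes in the window, newest first.
        shas = subprocess.run(
            ["git", "-C", repo, "log", f"--since={since}", f"--until={until}",
             "--pretty=format:%H"],
            capture_output=True, text=True, check=True,
        ).stdout.split()

        candidates = []
        for sha in shas:
            # Files touched by this commit.
            files = subprocess.run(
                ["git", "-C", repo, "show", "--name-only", "--pretty=format:", sha],
                capture_output=True, text=True, check=True,
            ).stdout.split()
            if any(f.endswith(CODE_SUFFIXES) for f in files):
                candidates.append(sha)
        return candidates

    if __name__ == "__main__":
        for sha in candidate_commits(REPO, SINCE, UNTIL):
            print(sha)

Whatever survives that filter is a small enough set to diff by hand.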
So now, do we delay publishing security fixes to public repos and try to maintain private infrastructure and testing for all of this? And then try to get a release made, propagated through multiple layers of downstream vendors, have them make releases, etc., all within a day or two? That's pretty hard, just organizationally. No great answers here.