This might just be the frequency illusion at play, but there seem to have been a number of high-profile supply chain attacks of late in major packages. There are several articles on the first few pages of HN right now with different cases.
Looking back ten years to `left-pad`, are there more successful attacks now than ever? I would suspect so, and surely the value of a successful attack has also increased. So are we actually getting better, as a broad community, at detecting them before package release? It's a complex space, and commercial software houses should do better, but it seems that whilst there are some excellent commercial products (e.g. CI scan tools), generally accessible, idiot-friendly tooling is somewhat lacking for projects which start as hobby/amateur code but end up being a dependency in many other projects.
I've cross-posted my comment from the current SAP supply chain attack thread [0].
>This might just be the frequency illusion at play, but there seem to have been a number of high-profile supply chain attacks of late in major packages.
It's real. As of the beginning of April we'd had 7 in the past 12 months vs 9 in the two decades before that: https://www.jefftk.com/p/more-and-more-extensive-supply-chai...
I think the real question is "are we just hearing about it more now or has the actual rate of attack increased?"
I looked pretty hard, with some LLM assistance, so if it was "are we just hearing about it more now" it would have to be old attacks that happened without being discovered and written up.
The rate of attack has increased over the past 5 years, during which multiple wars and proxy wars have broken out.
[dead]
> idiot friendly tooling is somewhat lacking for projects which start as hobby/amateur code but end up being a dependency in many other projects.
Historically, extra-security-scanned artefact handling has been a paid enterprise option. Whereas the less secure option is the much-less-hassle default.
IDK how good a business model this is, I suspect not very.
People are ramming tons of code into places without ever looking at it; it follows that supply chain attacks would increase.
Yeah, and ultimately nobody cares. Everyone assumes it’s just some process miss, and we need to add another step to the process and move on. Fuck-ups that would have killed the credibility of projects 10 years ago are now treated as “eh, what are you gonna do. Sometimes you ship malware. We'll look into it.”
> Yeah, and ultimately nobody cares.
More like hiding their heads in the sand, in circumstances that are outside their ability to fix. None of the tooling or practices out there push you in the direction of not being at risk, or even provide you with easy ways to stay completely safe: no stack where everything you NEED to develop software is provided out of the box, with no external packages; no flow where pulling in a new package makes you review all of its source code line by line and compile everything yourself instead of trusting binary tooling blobs; no built-in vulnerability and configuration scanning (the kind Trivy does) so you don't get pwned or leave an open S3 bucket somewhere. Which also means that, obviously, you'd need thorough observability and alerting for any of the cloud stuff you do.
And even when those exist, your org's projects might be painfully out of date, too much so to use those approaches, or the org culture might not be there, or any number of other issues I can't even imagine. On one hand, people are running out of date software with known CVEs; on the other, using dependencies that are too new puts you at risk of compromised packages. It's like we're being squeezed by rocks on both sides in a landslide. Even at the OS level, the fact that not everyone is running something like Qubes OS or regular VMs for development is absolutely insane. The fact that all software isn't sandboxed and that desktop OSes don't prompt for permissions like mobile apps do is absolutely insane. That we don't have firewalls like GlassWire as standard, prompting you for external connections and letting you easily block what you don't trust, is insane.
Despite lots of people trying their best, on some level everything both up and down the stack is absolutely fucked, for a variety of complex reasons. You'd have to largely tear it all down and rebuild everything, starting with your OS kernel, in a memory safe language, with formal proofs and thorough testing for everything (if it took SQLite as long as it did to get a decent test suite, it may well take on the order of decades to do it for a production OS kernel and drivers), then do the same for all userland software and DBs and tooling and dependency management and secrets management (not just random files; special hardware, most likely) and so on. It's not happening, so we just build towers of cards.
For something more practical: https://nesbitt.io/2026/03/04/package-managers-need-to-cool-...
> Yeah, and ultimately nobody cares.
I assume you're using hyperbole.
Some of us are very aware and concerned about the risk. But like Cassandra from Greek mythology, we see the coming disaster and feel powerless to stop it.
Well yeah but if you don't have some critical mass which is very vocal/influential, at the end 'nobody cares enough'.
Good old “release first, fix later”
It's not that nobody cares. It's just that nobody who does care has the money or the power to change it.
Business school. Ahaha.
Are you talking about open source or commercial products? I can't speak for the PyTorch Lightning case, but I wouldn't be surprised if the maintainers didn't get any $ from it. They would be sad if the credibility of the package suffered, but ultimately it wouldn't make a big difference to them.
I feel this is an inevitable consequence of a move towards languages with a culture of many small and transitive dependencies.
If my project has 100 dependencies, the release of an updated dependency will inevitably be a daily occurrence.
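A back-of-envelope check of that intuition, with an assumed (not measured) release cadence:

```python
# Rough estimate: 100 dependencies, each releasing about quarterly.
# Both numbers are assumptions for illustration, not measured data.
deps = 100
releases_per_dep_per_year = 4  # assumed: one release per quarter

updates_per_day = deps * releases_per_dep_per_year / 365
print(f"~{updates_per_day:.1f} dependency updates per day")  # prints "~1.1 dependency updates per day"
```

So even a conservative cadence puts an updated dependency in your tree roughly every day.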
- [deleted]
FWIW, left-pad was not an attack; it was a bug in NPM. It should not be possible to unpublish package versions that are depended on by other published packages. On the other hand, it should be possible to unpublish certain package versions that are new and not depended on.
NPM should have returned error codes when the author of left-pad attempted to remove all his data with the intention of leaving the service.
To quote Wikipedia:
> After Koçulu expressed his disappointment with npm, Inc.'s decision and stated that he no longer wished to be part of the platform, Schlueter [author of NPM] provided him with a command that would delete all 273 modules that he had registered.
The attacks from TeamPCP were successful at stealing credentials recursively. So it is very likely that someone working on this pytorch-related package may have recently pulled the bad litellm or trivy (or weren't there like 8 others?).
And the reason it jumps from npm to pip to whatever is that it's trying to find all the user's keys in well known locations for any of these repos.
So teampcp is sitting on tens of thousands of passwords or keys and they just need time to run tests on them to figure out what packages they can release to get even more attacks out there.
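A minimal defensive sketch (this is not the attackers' code) of the kind of well-known credential locations this class of stealer reportedly scans; the path list is my assumption of common defaults, adjust for your setup:

```python
from pathlib import Path

# Common default locations for registry and cloud credentials.
CANDIDATES = [
    "~/.npmrc",               # npm auth tokens
    "~/.pypirc",              # PyPI upload credentials
    "~/.aws/credentials",     # AWS access keys
    "~/.docker/config.json",  # container registry auth
    "~/.netrc",               # generic machine logins
]

def present_credentials() -> list:
    """Return which of the candidate credential files exist on this machine."""
    return [p for p in CANDIDATES if Path(p).expanduser().exists()]

if __name__ == "__main__":
    for p in present_credentials():
        print("found:", p)
```

Running something like this on your own dev box is a quick way to see how much a single malicious postinstall script could have walked away with.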
Why haven't all the major repo vendors done a full cred wipe? No idea (unless they have and I just wasn't on the email list).
> Looking back ten years to `left-pad`, are there more successful attacks now than ever? I would suspect so, and surely the value of a successful attack has also increased, so are we actually getting better as a broad community at detecting them before package release?
The value has increased, and that is what drives all these attacks. Cryptocurrencies are to blame in particular, because they not only provided a way to launder the proceeds but also became a juicy target in themselves.
And what is stolen with today's malware? Cloud credentials. Either to use for illicit mining, which is on the decline, or to run extortion campaigns, which is made possible by cryptocurrencies. All too often it's North Korea or Iran running these campaigns.
> All too often it's North Korea or Iran running these campaigns.
I'm sure the NSA does similar things to them but we aren't really informed about that detail.
> Looking back ten years to `left-pad`, are there more successful attacks now than ever?
I can't vouch for the number of attacks, but, since we are talking about Python, nothing has substantially changed since the time of `left-pad`. The same bad things that enabled supply chain attacks in Python ten years ago are in place today. However, it looks like there are more projects and they are more interconnected than before, so it's likely that there are either more supply chain attacks, or that they are more damaging, or both.
Here's my anecdotal experience with Python's packaging tools. For a while, I was maintaining a package to parse libconfuse configuration language. It started as a Python 2.7 project, but at the time there was already some version of Python 3 available, so, it was written in a way that was supposed to be future-proof.
I didn't need to change the code of the project in the last ten or so years, but roughly once a year something would break in the setup.py. Usually because PyPA decided to remove a thing that didn't bother anyone.
When Python 3.13 came out, like clockwork, setup.py broke. I rolled up my sleeves and removed the dependency on setuptools; instead, I wrote some Python code that generated a wheel from the project's sources. I didn't look up the specification of the RECORD file in the dist-info directory, and assumed that sha256().hexdigest() would generate the checksums in the desired format. And that's how I shipped my packages...
Some time later, the company added an AI reviewer to our repo, and it discovered that instead of hexdigest(), the checksums have to be base64-encoded with the padding removed...
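For reference, a minimal sketch of the format in question: the wheel spec (PEP 427, building on PEP 376) wants the urlsafe base64 of the raw digest with the trailing `=` padding stripped, not the hex digest. Function names and inputs here are mine, for illustration:

```python
import base64
import hashlib

def record_entry_hash(data: bytes) -> str:
    """Checksum in the form a wheel's RECORD file expects:
    urlsafe base64 of the raw sha256 digest, '=' padding stripped."""
    digest = hashlib.sha256(data).digest()
    b64 = base64.urlsafe_b64encode(digest).rstrip(b"=")
    return "sha256=" + b64.decode("ascii")

def hex_hash(data: bytes) -> str:
    """The hexdigest() variant described above: installs fine, wrong per spec."""
    return "sha256=" + hashlib.sha256(data).hexdigest()

print(record_entry_hash(b"example file contents"))
print(hex_hash(b"example file contents"))
```

The two strings look superficially similar, which is exactly why the mistake is easy to ship.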
Now, to the punchline: nobody cared. The incorrectly generated packages installed perfectly fine without warnings. Nobody checks the checksums.
More so: nobody checks whether, during `pip install` or the fancier `uv pip install`, the packages are built locally (i.e. nobody cares that package installation will result in arbitrary code execution). It's not just common, it's almost universal to run `pip install` on production machines as a means of deploying a Python program. How do I know this? The company I work for ships its Python client as a... source package. Not intentionally. We are just lazy. But nobody cares.
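The mechanism, sketched with a stand-in for the installer (the string and names here are hypothetical illustrations, not pip internals): installing a source package boils down to executing the project's setup.py, so any statement in it runs with your privileges at install time, before you've imported a single line of the package.

```python
# A hypothetical malicious setup.py, inlined as a string; a real sdist
# would ship this as a file and pip would execute it during install.
fake_setup_py = '''
EVIDENCE.append("ran at install time, before any import of the package")
# ...followed by an ordinary-looking setup(...) call
'''

EVIDENCE = []
# For our purposes, this is what `pip install some-sdist` amounts to:
exec(compile(fake_setup_py, "setup.py", "exec"))
print(EVIDENCE)  # prints ['ran at install time, before any import of the package']
```

Wheels sidestep this particular step, which is one reason "wheels only, no sdists" is a meaningful hardening knob.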
It's probably the same people who think that merely having a requirements.txt listing packages with versions, or even without them (2010 sends its regards), is fine. Open a random open source Python project on GitHub, and chances are you will see this kind of thing. It stands to reason that people in companies aren't acting much differently.
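For contrast, the same dependency at three levels of strictness (package name is real; the hash is a placeholder, not an actual digest):

```
# requirements.txt, loosest to strictest:
requests                          # unpinned: whatever PyPI serves today
requests==2.31.0                  # pinned version; the artifact can still differ
requests==2.31.0 \
    --hash=sha256:<hex digest>    # with `pip install --require-hashes`,
                                  # pip rejects any other artifact
```

Only the last form actually ties your deploy to a specific, known artifact; tools like pip-compile can generate it for you.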
> It's not just common, it's almost universal to run `pip install` on production machines as a means of deploying a Python program.
Maybe a Python culture problem; maybe a hallmark of Python's status as an "easy to hire for", manager-friendly, least-common-denominator blub language; maybe a risk that stems from the conveniences of interpreted languages... but this is such a shame in this day and age.
It's seriously not difficult to do better. And if this is what you're doing, you're also missing out on reproducible environments both in dev and in prod. At least autogenerate a Nix package! You still don't need to publish any artifacts, but you can at least have the thing build in a sandbox or yeet the whole closure over SSH.
It's also not that hard to get a Docker image out of a Python project.
You only need one platform-minded person on the whole development team to make this happen.
What is going on???
"Almost universal" is a bit of a stretch; most of the time these days, Python apps are deployed as Docker containers, and if you're using k8s this becomes effectively mandatory.
However, a lot of the time, especially for older codebases, the docker build will just run pip install from public PyPI without a proper lockfile.
So at least install code isn't being executed on your production machine, but there's still significant surface area for supply chain attacks.
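A minimal sketch of the safer variant of that pattern (app name and file layout are hypothetical): the build still pulls from public PyPI, but inside the image build rather than on the production host, and only artifacts whose hashes appear in the lockfile are accepted.

```dockerfile
FROM python:3.12-slim
WORKDIR /app

# A hash-pinned lockfile (e.g. generated by pip-compile --generate-hashes)
# instead of a loose requirements list; --require-hashes rejects anything else.
COPY requirements.txt .
RUN pip install --no-cache-dir --require-hashes -r requirements.txt

COPY . .
CMD ["python", "-m", "myapp"]
```

This doesn't stop a compromised-but-pinned version, but it does stop the "whatever PyPI serves at build time" failure mode.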
As scary as it is right now, it warms your heart a little bit that this system has existed for 30 years and is only now reaching a crisis point.
I ran an open source project with tens of thousands of downloads (presumably all either developer machines or webservers, so even a small number is valuable) and never received a malicious pull request, offer of a bribe to install malware, or a phishing attempt with enough effort to even catch my attention.
What it says to me is that there weren't a lot of people working on the crime side of this. It's like dropping your wallet in a bar bathroom and coming back to find it still there.
left-pad was an npm issue
virtualenv isn't relocatable out of the box, so how else would you deploy a python project?
You can call it laziness, but it's not like the python ecosystem has ever developed an answer for this problem. The only reasonable answer has been to use docker, which is basically admitting that the python community did nothing.
But is docker the solution, though? I don't think so; docker itself is prone to supply chain attacks, from my understanding.
[dead]
No need to invoke frequency illusion when every moderate HN lurker has already stopped counting. https://socket.dev/blog gives a good impression, but a dedicated article would be nice, maybe recurring once or twice a year.
If you're interested in synchronicity and frequency illusion, Sergei V. Chekanov wrote a book that sounds interesting: https://jwork.org/designed-world/
> Have you ever experienced coincidences that cannot be logically explained? This book helps the readers understand the meaning of synchronicity, or remarkable coincidences in people's lives. This work not only explains the mystery of synchronicity, originally introduced by Carl Jung, but it also shows how to make simple calculations to estimate the chances that coincidences are not due to mere randomness.