About a month ago I had a rather annoying task to perform, and I found an NPM package that handled it. I threw “brew install NPM” or whatever onto the terminal and watched a veritable deluge of dependencies download and install. Then I typed in ‘npm ’ and my hand hovered on the keyboard after the space as I suddenly thought long and hard about where I was on the risk/benefit curve, and then I backspaced and typed “brew uninstall npm” instead, and eventually strung together an old-school Unix utilities pipeline with some awk thrown in. Probably the best decision of my life, in retrospect.
This is why you want containerisation or, even better, full virtualisation. Running programs built on node, python or any other ecosystem that makes installing tons of dependencies easy (and thus frustratingly common) on your main system where you keep any unrelated data is a surefire way to get compromised by the supply chain eventually. I don't even have the interpreters for python and js on my base system anymore - just so I don't accidentally run something in the host terminal that shouldn't run there.
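A minimal sketch of what that looks like in practice, assuming podman and the official node image (image tag and mount paths are illustrative, not a prescription):

    # Run npm inside a throwaway container that can only see the current
    # project directory, instead of on the host.
    podman run --rm -it \
      -v "$PWD":/work -w /work \
      docker.io/library/node:22 \
      npm install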
No, that's not what I want; it's what I need when I use something like npm.
Which can't be the right way.
Why not? Make a bash alias for `npm` that runs it with `bwrap` to isolate it to the current directory, and you don't have to think about it again. Distributions could have a package that does this by default. With nix, you don't even need npm in your default profile, and can create a sandboxed nix-shell on the fly so that's the only way for the command to even be available.
Most of your programs are trusted, don't need isolation by default, and are more useful when they have access to your home data. npm is different. It doesn't need your documents, and it runs untrusted code. So add the 1 line you need to your profile to sandbox it.
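For the curious, a minimal sketch of such a wrapper with bubblewrap, assuming a usr-merged layout; the exact bind mounts will need tuning per distro, and npm gets a throwaway home for its cache:

    # Sandboxed npm: read-only system dirs, a throwaway $HOME, and only the
    # current project directory writable. Adjust paths for your distro.
    npm() {
      bwrap \
        --ro-bind /usr /usr \
        --ro-bind /etc /etc \
        --symlink usr/bin /bin \
        --symlink usr/lib /lib \
        --symlink usr/lib64 /lib64 \
        --proc /proc --dev /dev --tmpfs /tmp \
        --tmpfs "$HOME" \
        --bind "$PWD" "$PWD" --chdir "$PWD" \
        --unshare-all --share-net \
        --die-with-parent \
        /usr/bin/npm "$@"
    }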
I wrote myself a handy and generalized bwrap-wrapping script: https://github.com/sandbox-utils/sandbox-run
The right way (technically) and the commercially viable way are often diametrically opposed. Ship first and ask questions later, or "move fast and break things", wins.
Here I go again: Plan9 had per-process namespaces in 1995. The namespace for any process could be manipulated to see (or not see) any parts of the machine that you wanted or needed.
I really wish people had paid more attention to that operating system.
The tooling for that exists today in Linux, and it is fairly easy to use with podman etc.
K8s's choices cloud that a little, but as an example, for VS Code completions I have a pod that systemd launches on request.
I have nginx receive the socket from systemd, and it communicates with llama.cpp through a socket on a shared volume. Since nginx inherits the socket from systemd, it doesn't have internet access either.
If I need a new model I just download it to a shared volume.
Llama.cpp has no internet access at all, and is usable on an old 7700k + 1080ti.
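A rough, simplified sketch of the no-network part with plain podman (the image name and server flag are assumptions, not my exact setup, and the unix-socket plumbing to nginx is omitted):

    # llama.cpp server with no network namespace at all; models come from a
    # shared volume. Substitute the image and flags for whatever you run.
    podman volume create models
    podman run -d --name llama --network none \
      -v models:/models:ro \
      ghcr.io/ggml-org/llama.cpp:server \
      -m /models/model.gguf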
People thinking that the k8s concept of a pod, with shared UTS, net, and IPC namespaces, is all a pod can be confuses the issue.
The same unshare command that runc uses is very similar to how clone() drops the parent’s IPC etc…
I should probably spin up a blog on how to do this as I think it is the way forward even for long lived services.
The information is out there but scattered.
If it is something people would find useful please leave a comment.
You are missing my point, maybe.
Plan9 had this by default in 1995, no third party tools required. You launch a program, it gets its own namespace, by default it is a child namespace of whatever namespace launched the program.
I should not have to read anything to have this. Operating systems should provide it by default. That is my point. We have settled for shitty operating systems because it’s easier (at first glance) to add stuff on top than it is to have an OS provide these things. It turns out this isn’t easier, and we’re just piling shit on top of shit because it seems like the easiest path forward.
Look how many lines of code are in Plan9 then look at how many lines of code are in Docker or Kubernetes. It is probably easier to write operating systems with features you desire than it is to write an application-level operating system like Kubernetes which provide those features on top of the operating system. And that is likely due to application-scope operating systems like Kubernetes needing to comply with the existing reality of the operating system they are running on, while an actual operating system which runs on hardware gets to define the reality that it provides to applications which run atop it.
You seem to have a misunderstanding of what namespaces accomplished on plan9, and of the fact that it was extending Unix concepts and assembling them in another way.
As someone who actually ran plan9 over 30 years ago, I assure you that if you go back and look at it, the namespaces were intended to abstract away the hardware limitations of the time, to build distributed execution contexts out of a large assembly of limited resources.
And if you have an issue with Unix sockets you would have hated it, as it didn't even have sockets and everything was about files.
Today we have a different problem, where machines are so large that we have to abstract them into smaller chunks.
Plan9 was exactly the opposite, when your local system CPU is limited you would run the cpu command and use another host, and guess what, it handed your file descriptors to that other machine.
The goals of plan9 are dramatically different than isolation.
But the OSes you seem to hate so much implemented many of the plan9 ideas, like /proc, union file systems, message passing etc.
Also note I am not talking about k8s in the above, I am talking about containers and namespaces.
K8s is an orchestrator; the kernel functionality may be abstracted by it, but K8s is just a user of those plan9-inspired ideas.
Netns, pidns, etc. could be used directly, and you can call unshare(2)[0] directly, or use an OCI runtime like crun, or use podman.
Heck you could call the ip() command and run your app in an isolated namespace with a single command if you wanted to.
You don’t need an api or K8s at all.
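To make that concrete, two hedged one-liners using only iproute2 and util-linux, with no container runtime or orchestrator involved:

    # Run a command in a freshly created network namespace; it sees nothing
    # but a down loopback, so it has no network access at all.
    sudo ip netns add demo
    sudo ip netns exec demo curl https://example.com   # fails: no interfaces, no routes
    sudo ip netns delete demo

    # Or drop a single process into several fresh namespaces at once:
    unshare --user --map-root-user --net --pid --fork --mount-proc \
      sh -c 'ip addr; ps aux'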
Kubernetes is an operating system on top of an operating system. Its complexity is insane.
The base OS should be providing a lot/all of these features by default.
Plan9 is as you describe out of the box, but what I want is what plan9 might be if it were designed today, and it could be with a little work. Isolation would not be terribly difficult to add to it. The namespace a process gets by default could limit it to its own configuration directory, its own data directory, and standard in and out. And imagine every instance of that application getting its own distinct copy of that namespace, with none of them able to talk to each other or scan any disk. They would only do work sent to them via stdin, as dictated in the srv configuration for that software.
Everything doesn’t HAVE to be a file, but that is a very elegant abstraction when it all works.
> call the ip() command and run your app in an isolated namespace with a single command if you wanted to.
I should not have to opt in to that. Processes should be isolated by default. Their view of the computer should be heavily restricted; look at all these goofy NPM packages running malware, capturing credentials stored on disk. Why can an NPM package see any of that stuff by default? Why can it see anything else on disk at all? Why is everything wide fucking open all the time?
Why am I the goofy one for wanting isolation?
Because containers on Linux will never be able to provide this; they are fundamentally insecure from the kernel layer up, and adding another OS stack on top (k8s) will never address the underlying mess that Linux containers fundamentally are.
Thank you.
OS-level isolation needs to be a thing. And it needs to be on by default.
The fact that tools like docker, podman and bubblewrap exist and work points out that the OS supports it, but using the OS APIs directly sucks. Otherwise the only "safe" implementations of such features would need a full software VM.
If using software securely was really a priority, everyone would be rustifying everything, and running everything on separate physical machines with restrictive AppArmor, SELinux, TOMOYO and Landlock profiles, with mTLS everywhere.
It turns out that in Security, "availability" is a very important requirement, and "can't run your insecure-by-design system" is a failing grade.
> The fact that tools like docker, podman and bubblewrap exist and work points out that the OS supports it
Only via virtualization in the case of macOS. Somehow, even Windows has native container support these days.
A much more secure system can be made I assure you. Availability is important, but an NPM package being able to scan every attached disk in its post-installation script and capture any clear text credentials it finds is crossing the line. This isn’t going to stop with NPM, either.
One can have availability and sensible isolation by default. Why we haven’t chosen to do this is beyond me. How many people need to get ransomwared because the OS lets some crappy piece of junk encrypt files it should not even be able to see without prompting the user?
This sounds very interesting to me. I'd read through that blog post, as I'm working on expanding my K8s skills - as you say knowledge is very scattered!
> If it is something people would find useful please leave a comment.
I would love to know.
That can only go so far. Assuming there is no container/VM escape, most software is built to get used. You can protect yourself from malicious dependencies in the build step, but at some point you are going to do a production build that needs to run on a production system, with access to production data. If you do not trust your supply chain, you need to fix that.
If you excuse me, I have a list of 1000 artifacts I need to audit before importing into our dependency store.
Absolutely, good old VMs can really provide the needed isolation while still having good UX. I just published a post on setting up dev VMs with Lima: https://www.metachris.dev/2025/11/sandbox-your-ai-dev-tools-...
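The short version, assuming Lima's default (Ubuntu) template:

    # Create and boot a Lima VM, then run the untrusted tooling inside it
    # rather than on the host.
    limactl start default
    limactl shell default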
Which distro do you run? Python is part of the OS in many cases.
It's a fair angle you're taking here, but I would only expect to see it on hardened servers.
Why think about the consequences of your actions when you can use docker?
Ok, but if you distrust the library so much it needs to go in a VM, what the hell are you doing shipping it to your customers?
...but the github runners already are virtualized; you'd need to virtualize the secrets they have access to instead.
Containers don't help much when you deploy malware into your systems; containers are not and never will be security tools on Linux, as they lack many primitives needed to pull off that type of functionality.
nixos solves this without vms
It's funny because techies love to tell people that common sense is the best antivirus, don't click suspicious links, etc. only to download and execute a laundry list of unvetted dependencies with a keystroke.
But.. but... we're all friends here!
The lesson, surely, is "don't use web tech, aimed at solving browser incompatibility issues, for local scripting".
When you're running NPM tooling you're running libraries primarily built for those problems, hence the torrent of otherwise unnecessary complexity, like polyfills, that happens to be running on a JS engine without a browser attached to it.
Very few packages published on npm include polyfills, especially packages you'd use when doing local scripting.
I'm sorry, but this is just incorrect. Have you ever heard of ljharb[0]? The NPM ecosystem is rife with polyfills[1]. I don't know how you can make a distinction on which libraries would be used for "local scripting" as I don't think many library authors make that distinction.
[0] - TC39 member who is self-described as "obsessed with backwards compatibility": https://github.com/ljharb
[1] - Here's one of many articles describing the situation: https://marvinh.dev/blog/speeding-up-javascript-ecosystem-pa...
Yes. I'm on TC39 as well, and I've talked to Jordan about this topic.
It's true that there are a few people who publish packages on npm including polyfills, Jordan among them. But these are a very small fraction of all packages on npm, and none of the compromised packages were polyfills. Also, he cares about backwards compatibility _with old versions of node_; the fact that JavaScript was originally a web language, as the grandparent comment says, is completely irrelevant to the inclusion of those specific polyfills.
Polyfills are just completely irrelevant to this discussion.
Fair enough. Thank you for the clarification, and I apologize for not recognizing your status as a TC39 member.
Same story from a month ago. The moment I saw the sheer number of dependencies artillery wanted to pull I gave up.
I used to run npm only inside docker containers, and I've been regularly laughed at on these forums. I eventually gave up…
“Whenever you find yourself on the side of the majority, it is time to pause and reflect." — Mark Twain (supposedly)