The ability to run Docker containers as WSL instances looks nifty. A bit more overhead, since they now run as a VM instead of a container, but you get the ability to start a shell with Windows Terminal or access files with Explorer (and thus with every program running in Windows).
Perhaps I misunderstand your comment but when you run docker / podman in Windows, you are using WSL / HyperV.
Running Docker on Windows runs Docker in WSL (which is a Hyper-V guest). The project offers running Docker images as WSL instances (which are Hyper-V guests), no Docker involved.
You said - "A bit more overhead since they now run as a VM instead of a container"
To which osigurdson seemed to be noting that WSL2 itself is a VM (meaning whether you launch 1 or 100 WSL2 instances, a single Linux VM is spun up), and when you run Docker, it runs using that exact same VM (optionally, though using the WSL2 backend is now the default).
Can you clarify what you meant by "A bit more overhead"? Running a container via docker or directly via WSL2 will use the same underlying VM, and there will only be that one VM regardless of the number of WSL2 or docker instances.
For a single container the difference depends on your exact setup. I typically run Docker in the same WSL instance as my other WSL stuff, so starting a single Docker container adds 0 additional VMs, while starting the container as a WSL instance adds one VM. If you use the "Docker Desktop for Windows" package you may be adding a VM just for running Docker, depending on your setup.
Once you start the second container the difference becomes more obvious: running $N containers in Docker uses one VM with one Linux kernel, no matter how many containers you add. Running $N containers as separate WSL instances runs $N VMs and $N Linux kernels. That's the "bit more overhead" I was referring to.
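A quick sketch of the first half of that claim, assuming Docker is available (the snippet is guarded so it's a no-op elsewhere): every container reports the same kernel release, because containers share the host kernel rather than getting one each.

```shell
# Start three throwaway containers; each prints the same `uname -r`,
# because all containers share the single host (here: WSL2 utility VM) kernel.
# Guarded so this is a no-op on machines without Docker.
if command -v docker >/dev/null 2>&1; then
  for i in 1 2 3; do
    docker run --rm alpine uname -r
  done
fi
```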
"Running $N containers as separate WSL instances runs $N VMs and $N linux kernels."
But it doesn't, and this is what I'm disagreeing with.
If you instantiate WSL2, it launches a Linux VM. A single Linux VM. If you then run Docker with WSL2 integration (the default and hugely recommended), it uses that Linux VM as its VM as well, so you're still at 1 VM.
If you run 100 WSL2 instances, they will all use that single Linux VM, each doing namespacing for isolation with their own filesystems. If you run 100 Docker instances, they will all use that single Linux VM.
If you run 100 WSL instances, and 100 Docker instances (assuming, again, WSL2 integration which is the default), they will all be using that single Linux VM.
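One way to sanity-check the shared-kernel claim yourself (a sketch; "Ubuntu" and "Debian" are example names, substitute whatever `wsl -l -q` lists on your machine):

```shell
# Compare the kernel release reported by two different WSL2 distros.
# Guarded so this is a no-op where wsl.exe isn't on the PATH.
if command -v wsl >/dev/null 2>&1; then
  k1=$(wsl -d Ubuntu -- uname -r)
  k2=$(wsl -d Debian -- uname -r)
  # Both come back identical, because every distro runs on the
  # single utility VM's kernel rather than its own.
  [ "$k1" = "$k2" ] && echo "same kernel: $k1"
fi
```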
Every time you run `wsl --install <distro>` you are creating a new VM. Every time you run `wsl --unregister <instance id or name>` you are removing a VM.
These two operations are at the heart of OP's app.
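For reference, the two operations look like this (the distro name is an example; guarded so the snippet is a no-op where wsl.exe is absent):

```shell
# Register, inspect, and unregister a WSL2 distro.
if command -v wsl >/dev/null 2>&1; then
  wsl --install -d Ubuntu-22.04    # register a new distro (downloads a rootfs, creates its VHD)
  wsl -l -v                        # list registered distros and their running state
  wsl --unregister Ubuntu-22.04    # remove the distro and delete its VHD
fi
```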
Okay, are you guys using the wrong terminology? Are you talking about a filesystem image as a "VM"? Because that would not be correct. A "Virtual HD" (VHD) is not a VM. Do you think a WSL2 distro instance is a "VM" because it's persistent? Did you know Docker containers can be persistent as well? People just choose, as a best practice, not to use them like that. Does that make them magical VMs?
WSL2 has a single VM running as the process vmmem (or vmmemWSL in the newest versions). This is the single Linux kernel that will be shared by every WSL2 instance. It is also the instance that hosts Docker containers. I mean, I'm just echoing myself and apologies for the repetition, but this is easily confirmed, utter factual truth.
A rootfs is not a VM. WSL2 and containers have different nuances around their filesystems and isolation (WSL2 instances are aware of other instances by design, and WSL2 persists rootfs changes while Docker as a best practice does not), but they're more alike than different. And they share little in common with something like VirtualBox or VMware, which actually isolate each such system as a VM.
Again, sigh, if you install 10 WSL2 "distros", you will have 10 rootfs overlays on your drive, and 10 persistence VHD files. When you run them, the VM they all run under is the shared vmmem instance, which is the single utility Linux VM.
Is there a way to visualize this on a running system or some documentation that describes it? I'm not familiar with the plumbing here but did try to find some documentation.
I've done a lot of in-depth WSL2 work, so I'm just aware of its particulars, but to cite some documentation-
https://learn.microsoft.com/en-us/windows/wsl/about
"WSL 2 uses virtualization technology to run a Linux kernel inside of a lightweight utility virtual machine (VM). Linux distributions run as isolated containers inside of the WSL 2 managed VM. Linux distributions running via WSL 2 will share the same network namespace, device tree (other than /dev/pts), CPU/Kernel/Memory/Swap, /init binary, but have their own PID namespace, Mount namespace, User namespace, Cgroup namespace, and init process."
(under "What is WSL2"?)
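The shared-namespace part of that quote is easy to poke at: since all distros share one network namespace, they report the same network interfaces and addresses. A sketch (distro names are examples; guarded to no-op where wsl.exe is absent):

```shell
# Both distros print the same eth0 interface and IP address,
# because WSL2 distros share a single network namespace.
if command -v wsl >/dev/null 2>&1; then
  wsl -d Ubuntu -- ip addr show eth0
  wsl -d Debian -- ip addr show eth0
fi
```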
https://www.docker.com/blog/new-docker-desktop-wsl2-backend/
"WSL 2 runs all distros in the same utility VM, sharing the same Kernel."
If you run multiple distros, take a look at Task Manager and find the single vmmem or vmmemWSL process (newer versions use the latter name). That one process hosts all of the instances, and all of the Docker containers you might be running as well, each with namespace isolation (and with WSL2 intentionally bridging between them for convenience). Visualise it by doing something intensive in any of them and watching that single process react, because it's the single utility VM responsible for all of them. Further, while starting the first WSL2 instance or Docker container is expensive, requiring initialisation of all of the resources for the utility VM and the memory to support it, subsequent instances are much cheaper.
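Concretely, a sketch of that check from a shell on Windows (distro names are examples; guarded to no-op on machines without the Windows tools):

```shell
# Start two distros, then confirm there is still only one utility-VM process,
# rather than one per distro.
if command -v wsl >/dev/null 2>&1 && command -v tasklist >/dev/null 2>&1; then
  wsl -d Ubuntu -- true
  wsl -d Debian -- true
  tasklist | findstr /i "vmmem"   # one vmmem (or vmmemWSL) line total
fi
```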
Thanks, it wasn't out of doubt that I asked, but it seemed having a reference to point at would help resolve the contention. The Docker blog post covered a lot more detail, even about WSL2, which was really informative and I hadn't seen.
I wonder exactly how much work "container" is doing in that Microsoft blog post's description, because it doesn't seem like it's the same kind of environment as a runc or containerd container?
I also wasn't quite sure how much detail to infer from the behavior of vmmemWSL or vmcompute.exe, because my casual understanding is that there's some adaptation layer that handles mapping Linux calls to Windows calls. It seems reasonable to allow for process mapping or accounting shenanigans for any number of good reasons.
>there's some adaptation layer that handles mapping Linux calls to Windows calls
This was how WSL1 functioned. It used a shim layer, and honestly it was pretty neat for a lot of the basic stuff. It fell apart if you were doing more complex/advanced stuff, however, as there were many missing cases and exceptions.
WSL2 instead uses that utility VM, with a couple of special Microsoft kernel drivers to interact with the host system.
The linked project provides a very different way to launch Docker containers.
How does this compare to running the containers in one of the WSL VMs? Can't you do all the same things via the host VM?