I'm not really understanding what this does specifically. It looks like it creates the filesystem on the host machine using chroot and then tarballs it?
Is there an advantage to that over combining layers and using cache mounts to avoid those redundant downloads?
A side-by-side comparison of dive's output would be helpful (https://github.com/wagoodman/dive).
Also can you clarify what you mean by "requiring full rebuilds of all previous steps"?
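For context, the cache-mount approach I mean is roughly this BuildKit pattern (a sketch; the package name is just an example, and on debian-slim you may also need to disable the docker-clean apt hook for the cache to persist):

```dockerfile
# syntax=docker/dockerfile:1
FROM debian:bookworm-slim
# Cache apt downloads across builds so rebuilds skip redundant downloads
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends openjdk-17-jre-headless
```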
It’s basically just a fancy bash script (mkimage.sh) plus Makefiles for calling the scripts with different sets of parameters. The process is the same exact process used to create base Docker images: chroot, then a package manager (apt or yum) installing packages into the chroot jail. That is how the ubi9 or debian slim base images are made. With this tool you can extend the process: install dependencies, run security checks, and sign it all in one go. It’s easy to extend, so you can create base images for Kafka with different Java distributions, for example, which is very useful for testing and performance tuning.
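To make that concrete, the core of the classic chroot flow can be sketched as a shell function. This is my paraphrase, not the actual mkimage.sh; actually running it requires root plus debootstrap and docker on the host:

```shell
# Hypothetical sketch of the chroot-based base-image flow (not the real mkimage.sh).
# Needs root, debootstrap, and docker to actually run.

build_base_image() {
    suite=$1      # e.g. bookworm
    rootfs=$2     # e.g. /tmp/rootfs
    pkgs=$3      # e.g. openjdk-17-jre-headless
    tag=$4        # e.g. my-java-base:latest

    # 1. Bootstrap a minimal Debian filesystem tree on the host
    debootstrap "$suite" "$rootfs"

    # 2. Install extra packages inside the chroot jail
    chroot "$rootfs" apt-get install -y --no-install-recommends $pkgs

    # 3. Tarball the tree and import it as a single-layer image
    tar -C "$rootfs" -c . | docker import - "$tag"
}

# Example (as root):
#   build_base_image bookworm /tmp/rootfs openjdk-17-jre-headless my-base:latest
```

Because the whole tree lands in `docker import` as one tarball, the result is a single-layer image, which is part of why the dive output looks different from a multi-layer Dockerfile build.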
Imagine you work at a large org and you want to control all the images used for CI/CD workers. Instead of scattering that across different Dockerfiles and scripts (Java, NodeJS, Python, etc.), you can use a single tool. At least that’s why I built it in the first place.
I'm similarly curious why not just use Alpine or Void rootfs if container size is important?
For the same reason hyperscalers build and maintain their own distros and base images: to have complete control over the supply chain.