When I was a beginner I was surprised (on university machines and clusters) that I couldn't install packages for myself, but if I downloaded the source I could compile and use almost anything by putting it in my home dir. I still don't quite get why we have to do this dance and can't just install from apt straight into home, but whatever. (Downloading the .deb file and unpacking it as an archive is also an option, though still quite a bit of manual effort.)
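For reference, the "unpack the .deb as an archive" route can be sketched like this. The package filename and the `~/local` prefix are placeholders of my own, not anything standard:

```shell
# A .deb is an "ar" archive whose data.tar.* member holds the file tree.
# "some-tool_1.0_amd64.deb" and ~/local are placeholder names.
mkdir -p ~/local
tmp=$(mktemp -d) && cd "$tmp"
ar x ~/Downloads/some-tool_1.0_amd64.deb   # yields debian-binary, control.tar.*, data.tar.*
tar -xf data.tar.* -C ~/local              # files land under ~/local/usr/bin etc.
export PATH="$HOME/local/usr/bin:$PATH"
```

Note that nothing resolves dependencies for you here, which is a big part of the manual effort.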
A great tool for managing these home-based installations is GNU Stow. In fact I've written scripts that take a tarball, compile it with the typical workflow (autotools or CMake), set the prefix and DESTDIR as needed, and then use Stow to symlink it into place. If I want to "uninstall" something, I run `stow --delete`. Works well enough for most use cases, like installing a newer GCC or CMake than is available on a cluster.
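The workflow looks roughly like this (autotools flavor; the package name and the `~/local` layout are my own assumptions, adapt to taste):

```shell
# Each package gets its own tree under the stow dir, then Stow
# symlinks it into ~/local/{bin,lib,share,...}. Names are illustrative.
PKG=cmake-3.28.1
STOW_DIR="$HOME/local/stow"

tar -xf "$PKG.tar.gz" && cd "$PKG"
./configure --prefix="$STOW_DIR/$PKG"
make -j"$(nproc)"
make install

# Symlink the package into the shared ~/local hierarchy:
stow --dir="$STOW_DIR" --target="$HOME/local" "$PKG"

# Later, to "uninstall" (removes only the symlinks):
stow --dir="$STOW_DIR" --target="$HOME/local" --delete "$PKG"
```

`--delete` leaves the package tree itself under `$STOW_DIR`, so you can re-stow it later or `rm -rf` it when you're sure.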
That's actually how the Nix package manager works: normal users can 'install', or even build, packages. It works because installation doesn't really have any side effects beyond consuming some resources (disk space, network, CPU), which, as you point out, you could use as a normal user anyway.
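Concretely, on a multi-user Nix install a non-root user can pull a package into their own profile; `hello` is just a stand-in package here, and the channel name can differ between setups:

```shell
# Install into the per-user profile; no sudo involved.
# (Channel attribute path is an assumption; yours may differ.)
nix-env -iA nixpkgs.hello

# The build products live in the world-readable /nix/store
# (e.g. /nix/store/<hash>-hello-.../bin/hello); the profile just
# symlinks into it, so no system directories are modified.
hello
```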
Yeah, it used to be painful, but now that we have conda / uv, things are in much better shape.
That's a much narrower set of software, though, compared to the breadth of distro package managers.
Well how else would we gradually drive people to madness if not by asking them their sudo password every five seconds?