Writing this from my Debian system, it's a great distro that has been excellent to me as a daily driver. I switched to Debian 6 after Ubuntu went way downhill and haven't had cause to regret it.
I like Debian's measured pragmatism with ideology, how it's a distro of free software by default but it also makes it easy to install non-free software or firmware blobs. I like Debian's package guidelines, I like dpkg, I like the Debian documentation even if Arch remains the best on that front. I like the stable/testing package streams, which make it easy to choose old but rock-stable vs just a bit old and almost as stable.
And one of the best parts is, I've never had a Debian system break without it being my fault in some way. Every case I've had of Debian being outright unbootable or having other serious problems, it's been due to me trying to add things from third-party repositories, or messing up the configuration or something else, but not a fault of the Debian system itself.
>I've never had a Debian system break without it being my fault in some way.
Debian is great, but I can't say this is a shared experience. In particular, I've been bitten by Debian's heavy patching of the kernel in Debian stable (specifically, backport regressions in the fast-moving DRM subsystem leading to hard-to-debug crashes), despite Debian releases technically having the "same" kernel for the duration of a release. In contrast, Ubuntu just uses newer kernels, and -hwe avoids a lot of patch friction. So I still use Debian VMs but Ubuntu on bare metal. I haven't tried the kernel from the debian-backports repo, though.
> Debian's heavy patching of kernel in Debian stable
Needs citation.
Debian stable uses upstream LTS kernels and I'm not aware of any heavy patching they do on top of that.
Upstream -stable trees are very relaxed about the patches they accept, and unfortunately they don't get serious testing before being released either (you can see there's a new release in every -stable tree roughly every week), so that's probably what you've been bitten by.
LTS has had major breaking changes in various areas in recent times too; virtio was badly broken at one point this year, as was a commonly used netlink interface. Hat tip to the Arch kernel contributors who helped track this down and chase upstream, as we had mutually affected users. The Debian and Ubuntu bug trackers were a wasteland of silence and user contributions throughout the situation, and frustratingly continued to be so as AWS, GCP and others copied their kernel patch trees, blindly shipped the same problems to users, and refused to respond to bugs and emails.
You're right stability comes from testing, not enough testing happens around Linux period, regardless of which branch is being discussed.
It's not easy testing kernels, but the bar is pretty low.
One of the unsung praises of Arch is that it's turned thousands of users into testers. Before someone says "that shouldn't be the user's responsibility" I'm going to say I'm not so sure. We're all in this together. I'd rather deal with a bug or two on my desktop at home if it means it gets fixed before appearing in a distro that gets used for servers at work and causes issues there where the consequences are much higher.
> One of the unsung praises of Arch is that it's turned thousands of users into testers.
You can do that well enough with Debian's "testing" and "unstable" release channels. Aside from the few months leading up to a new "stable" release, which usually isn't a big deal (and fixing regressions in "stable" should then be a higher priority anyway). Just don't install it on systems that you actually depend on to keep working. But running it on your desktop at home that you only use to play and experiment with is just fine.
I have a similar experience. My not-so-tech-savvy brother also has the same laptop setup I do (arch+XFCE). He knows to run yay -Syyu and it's almost never a problem. A recent upgrade hit the vlc package split problem, so I told him to hold off upgrading and that I'd come and do it. While I needed to sit and filter and install the optional dependencies myself for my upgrade, a week later it had already been figured out (based on user feedback, I assume) and the usual yay -Syyu installed just the right optional dependencies.
I don't consider myself particularly adept with linux. I've only been running it daily on the desktop for the last few years and, aside from mucking around with TWMs, I've not done much poking about with the internals.
Despite the reputations, I've had far fewer issues on Arch-based desktop distros than back when I was rolling Ubuntu and Debian.
That said, Debian on a server every time.
Yeah, same. I think the release cycle actually doesn't matter at all. The reason is that the majority of breakage is caused by components/extensions of GNOME and KDE, and by non-DE-yet-complex software, breaking backwards compatibility every other week in distros that ship a lot of those out of the box, like Manjaro.
When people switch to arch they typically set things up from scratch, end up choosing simple tools and avoid most of the unstable stuff distros push onto you.
The virtio bug bit me. I have one hostnode (debian) with one nic that gets passed through to a virtualized opnsense. Storage is two consumer nvme in raid0 as system disk. I was expecting downtime with this setup, but kernel bug was not on my list.
Bear in mind, LTS and ELTS are not Debian maintained.
The wiki has more info on this.
The folks behind Debian LTS and Freexian ELTS are all Debian members/contributors, and the Debian LTS changes end up in the Debian archive, while the Freexian ELTS ones are publicly available, just in an external archive.
https://wiki.debian.org/LTS https://wiki.debian.org/LTS/Team https://wiki.debian.org/LTS/Funding https://wiki.debian.org/LTS/Extended
I think they mean the LTS kernels, not Debian's LTS.
Yep, I mean longterm trees from https://www.kernel.org/, to be clear.
- [deleted]
Yes, seems so, thanks.
AFAICT, the patches are here: https://salsa.debian.org/kernel-team/linux/-/tree/debian/lat...
Whether that qualifies as "heavy" or not is of course a matter of opinion, but it's not nothing.
IMO, considering the size and scale of the kernel (millions of lines of code, the variety of architectures supported, the number of subsystems, and the ridiculous number of device drivers), these patches might as well be counted as nothing. I'd say they're basically shipping a pristine kernel :D
Classic Linux user response. Jeez…
These days all of my “Debian” bare metal systems are technically running Proxmox, which I think is a relatively happy medium as far as the base Debian system goes — the Proxmox kernel is basically the Ubuntu kernel, but otherwise it’s a pretty standard Debian system.
I’ve thought about (ab)using a Proxmox repository on an otherwise stock Debian system before just for the kernel…
Same at $work: all physical servers run Proxmox VE (by policy), 90% of VMs are Debian (cloud-init genericcloud), the rest misc Linux and various Windows.
The upstream kernel already backports enough regressions on its own to its stable releases, Debian's kernel team does not help them too much with that.
Which GPU, display server and compositor stack are you using?
Integrated Intel GPU and no graphical system, just KMS VT (text console). That's what made it so frustrating - only displaying a console should not result in kernel panics under CPU load! Admittedly, the experience was anecdotal and years ago and I heard Debian is doing less of a RHEL-style "frankenkernel" now.
drm/i915 was a pretty miserable experience for me on one machine. The Intel drivers for that chipset around the 5.3 kernel era weren't good, I recall lots of bug reports at the time. Below is one of the several issues that I was affected by
Intel's integrated GPU driver team (actually, all their driver teams) had a period of frequent screw-ups a while back (five years ago? Time flies). They also borked the e1000e driver in the same period.
On the other hand, I had and still have many Debian installations, some with Intel integrated graphics. None of them has created any problems for a very, very long time. To be honest, I don't remember any of my Intel iGPU systems ever crashing.
...and I've used Debian for almost two decades, and I have seen tons of GPU problems. I used to write my Xorg.conf files without using man, heh. :)
Maybe you can give Debian another chance.
Yeah, I ditched Ubuntu Server after too many upgrade headaches. I manage 75+ VPS instances for app hosting, and it's nerve-wracking doing maintenance updates knowing there's a chance one won't boot after. That's easily an extra 1-2 hours per VPS just to get it back. Switched to Debian back in the 8.x days in 2015 and it's been smooth sailing. Never had it break unless I was the one who messed it up.
Me too. All the server software (postgres, caddy, bun, etc) I'm using runs just fine on Debian, and I never have had updates break something on my Debian servers.
The only thing I can say against Debian is that it tends to start new server software immediately after install, before I have a chance to configure it properly. Defaults are sane for most packages, but it still scares me a little. In that respect, I like the Red Hat approach of installing software and leaving it off until I decide to turn it on.
It is a well-known issue with probably less well-known solutions, cf. <https://unix.stackexchange.com/questions/723675/debian-ubunt...>
I think this is the recommended way to avoid autostarting services on Debian:

  echo exit 101 > /usr/sbin/policy-rc.d
  chmod +x /usr/sbin/policy-rc.d
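A runnable sketch of that policy-rc.d mechanism (using a scratch path here so it can be tried without root; on a real system the file is /usr/sbin/policy-rc.d, and a proper shebang line makes the script executable by any caller):

```shell
# Demo of the policy-rc.d mechanism. DEST is a scratch path for illustration;
# on a real system the target is /usr/sbin/policy-rc.d.
dest="${DEST:-/tmp/policy-rc.d-demo}"
cat > "$dest" <<'EOF'
#!/bin/sh
# invoke-rc.d consults this script before acting on a service.
# Exit status 101 means "action forbidden by policy", so packages
# being installed will not auto-start their services.
exit 101
EOF
chmod +x "$dest"
"$dest" || echo "policy-rc.d answered: $?"
```

Remember to remove the file (or make it exit 0) once you've finished configuring, or manual `service`/`invoke-rc.d` actions will stay blocked too.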
Good pointer. I remember learning it, and then forgetting it. Probably more than once.
Still should be the default behavior.
Just have sane firewall rules and you are good. E.g. if I install openssh-server and it auto starts, it doesn't make it out of my machine because my nftables does not allow inbound on port 22. It's just knowing the default behaviour and adjusting your practices for it.
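The kind of default-deny inbound ruleset being described could look roughly like this (a sketch, not a complete config; with inbound policy drop and no rule for port 22, an auto-started sshd is unreachable from outside):

```
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    iif "lo" accept
    ct state established,related accept
    # no "tcp dport 22 accept" rule, so a freshly installed
    # openssh-server cannot be reached until you add one
  }
}
```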
That is a workaround for a ridiculous issue.
A sane firewall won't protect you from privilege escalation by a local attacker. While unlikely, this is one more opening that could be exploited.
Debian bundles AppArmor profiles for most services. This will prevent an attacker from accessing outside the perimeter drawn by the AppArmor profile.
This is the "you're holding it wrong" response to a clear design issue.
Aren't firewall rules part of the "configuration" the OP talked about?
No, because you can install and configure the firewall before you install package X. (without knowing anything about X, your firewall defaults can just prevent X from doing anything)
But you can't (easily) configure package X itself before you install it; and after you install it, it runs immediately so you only get to configure it after the first run.
> And one of the best parts is, I've never had a Debian system break without it being my fault in some way. Every case I've had of Debian being outright unbootable or having other serious problems, it's been due to me trying to add things from third-party repositories, or messing up the configuration or something else, but not a fault of the Debian system itself.
You're not trying hard enough ;-)
I have Debian on an old MacBook Pro and had it on an even older iMac, and I've had a few problems over the years. Always with proprietary drivers - WiFi, graphics, webcams, etc. - Apple really don't want people using free software on their hardware. There's always been a fix, but there have been a few stressful moments and hoops to jump through.
But it's definitely my favorite distro, and I run it everywhere I can. Pretty much always "just works" anywhere but Apple.
I’m not trying hard enough. Feel the same as you and GP for two decades and counting.
> after Ubuntu went way downhill and haven't had cause to regret it.
In what way did Ubuntu go downhill?
For me, it was a combination of Ubuntu breaking upstream and introducing its own unnecessary systems.
I had a few issues caused by Ubuntu that weren't upstream. One was Tracker somehow eating up lots of CPU power and slowing the system down. Another was with input methods, I need to type in a pretty rare language and that was just broken on Ubuntu one day. Not upstream.
The bigger problem was Ubuntu adding stuff before it was ready. The Unity desktop, which is now fine, was initially missing lots of basic features and wasn't a good experience. Then there was the short-lived but pretty disastrous attempt to replace Xorg with Mir.
My non-tech parents are still on Ubuntu, have been for some twenty years, and it's mostly fine there. I wouldn't recommend it if you know your way around a Linux system but for non-tech, Ubuntu works well. Still, just a few months ago I was astonished by another Ubuntu change. My mom's most important program is Thunderbird, with her long-running email archive. The Thunderbird profile has effortlessly moved across several PCs as it's just a copy of the folder. Suddenly, Ubuntu migrated to the snap version of Thunderbird, so after a software update she found herself with a new version and an empty profile. Because of course the new profile is somewhere under ~/snap and the update didn't in any way try to link to the old profile.
Then there were stupid things like Amazon search results in the Unity dash search when looking for your files or programs. Nah. Ubuntu isn't terrible by any means but for a number of years now, I'd recommend Linux Mint as the friendly Debian derivative.
Snaps? Proprietary package managers are never great.
As I understand it, snap the package format is not proprietary. Its as open source as say flatpak. What is proprietary is Canonical official snap store, and they patch their version of snap to only use that store. It'd be the same as flatpak being tied to only flathub.
Of course that goes against the spirit of FOSS, but there's a bit more nuance there than simply saying "snaps are proprietary".
Snaps don't just suck from an ideological perspective but also a practical one, as described for Thunderbird. Firefox on Ubuntu also has serious out-of-the-box permission issues with webcam support that even experts struggle with (involving AppArmor, PipeWire, snap, and Firefox device config), and has become unusable for things like browser-only MS Teams on mainstream notebooks.
Containers, popular as they may be on servers, can only add breakage and overhead to desktops, especially for an established and already much better organized system like Debian's apt. There just haven't been any new desktop apps for way over a decade that would warrant yet another level of indirection.
In addition, applications under snap are much slower to start. That's just not acceptable.
I've tried making snap packages, but I discovered they're very tightly tied to Ubuntu's base packages. They're not portable at all. In essence, they're just a secondary Ubuntu-specific package format for user-level applications.
For example, with flatpak you select a base runtime for your package that contains mostly system-agnostic libraries. With snap, you specify an Ubuntu version as a base runtime and additional dependencies that are Ubuntu packages.
My understanding is that the base layer (similar to what FlatPak provides) is shared and downloaded by the snap manager so it is portable as long as you want to download it.
The end result should be similar to FlatPak where you have practically no dependencies as it should package almost everything.
See, that's the issue. I want my distribution to distribute the dependencies I need to run applications outside of containers. That's, like, its main job, man.
Did they release the server components for hosting your own snap repositories, yet?
I can't seem to find it. Any pointers would be helpful, so at least I can know the latest state of this thing.
Snapd still hardcodes Canonical's snap store signing key and provides no mechanism to add your own keys. Any other snap repos will be treated as second class citizens.
No, but it's trivial to implement since Snap is open source so you know exactly what sort of payload it wants.
> If I try to manipulate what you are talking about, I can attempt to frame something as open source which isn't.
I didn't say "the snap format".
The server isn't, and the client is hostile to using an alternative server. Snaps are a solution, and picking out one piece is deceptive.
I don't really understand why this is such a big problem. You don't have to use snaps.
Defaults matter a lot, and snaps are the default in Ubuntu.
The topic is not whether snaps are avoidable, but whether Ubuntu is going downhill. And snaps are purported to be part of that decline, which would be Ubuntu's NIH syndrome. As far as I know, Ubuntu's only successful development is Ubuntu itself - the other projects have all failed over the years, and snap, while ongoing, isn't winning any popularity contests either.
Snaps per se are no better or worse than flatpak. Canonical's mistake, IMO, was to make their store the only place snaps can be hosted. That is the "proprietary" bit everyone keeps talking about.
But in practice even for flatpak the only realistic place you can publish your flatpak if you want any traction at all would be flathub, so both formats have only one store right now. But flatpak allows a custom store while for some strange reason Canonical decided not to allow snap that freedom.
Another problem is, Canonical promised to release server components and enable alternative stores, and just forgot that they made that pledge.
Also, rugpulling users and migrating things to snaps without asking their users in order to "create a positive pressure on snap team to keep their quality high" didn't sit well with the users.
> But in practice even for flatpak the only realistic place you can publish your flatpak if you want any traction at all would be flathub
But, for any size of fleet from homelab to an enterprise client farm, I can host my local flathub and install my personal special-purpose flatpaks without paying anyone and thinking whether my packages will be there next morning.
Freedom matters, especially if that's the norm in that ecosystem.
I was neutral-ish about Ubuntu, but I flat out avoid it now, and migrate any remaining Ubuntu server to Debian in the shortest way possible.
I've been using Debian for the last 20 years or so, BTW.
Yes, same. I started with Ubuntu back in the day, because the server I inherited ran Ubuntu, and it was just natural after that for me to run it on the desktop as well. I grew to dislike their NIH over the years, tried distro hopping, and settled on Debian.
Yes, I agree. Snaps or Flatpak, not much of a practical, technological difference. What sets them apart is the way the distribution is handled, including the open source availability of the backend, which enabled for example Red Hat and Elementary to run their own stores.
If you are making your own distro, creating your own flatpak store is trivial; that's all that matters. Linux Mint doesn't use snap exactly because Canonical forces everyone to use their snap store.
Canonical doesn't force anyone to use anything. Snap is open source, just modify it to use a different store if you want. Mint literally forked a zombie DE, but changing a few lines of code in snap is an issue...
Defaults matter a lot, snap is not open source (client is, backend isn't), you cannot "just modify it (Ubuntu)" to use a different store, because Ubuntu installs snaps even with apt. Mint is not part of the discussion.
> Mint is not part of the discussion.
Read the parent comment I responded to
Mea culpa, I glossed over that!
> which would be Ubuntu's NIH syndrome
Red Hat do the same. They reinvented the wheel on multiple occasions (systemd and its whole ecosystem like systemd-resolved and timed and the whole kitchen sink; podman, buildah, dnf, etc. etc.)
They just have more success on getting their NIH babies accepted as the standard by everyone else. Canonical just fail at that (often for good reasons, Unity was downright crap for some time) and abandon stuff, which doesn't help their future causes.
Canonical did their own NIH init daemon called Upstart, which failed due to the fundamental design and the implementation being plain bad. Red Hat builds better software, which is why their NIH gets more adoption.
ChromeOS still uses upstart.
Upstart came before systemd; much of the reason for systemd’s creation was fixing what was considered fundamental design mistakes in Upstart: <http://0pointer.net/blog/projects/systemd#:~:text=On%20Upsta...> (under the heading “On Upstart”).
> systemd
https://bbs.archlinux.org/viewtopic.php?pid=1149530#p1149530
> like systemd-resolved and timed
They're not forced on anybody, they're not required by systemd, and many distributions use more feature-rich alternatives (including, afaik, RHEL — last time I looked at it, they used dnsmasq and chrony). They're also often shipped as separate optional packages:
  $ apt search 'systemd-timesyncd|systemd-resolved'
  systemd-resolved/testing,now 257.7-1 amd64
  systemd-timesyncd/testing 257.7-1 amd64

> podman, buildah
Still not anywhere near as popular as Docker. Although technically they're far better than Docker, and if anyone is using them, it's for that reason.
> dnf
Only used by RHEL and its upstream Fedora?
---
All of this makes very little sense.
>> podman, buildah
> Still not anywhere near as popular as Docker. Although technically they're far better than Docker, and if anyone is using them, it's for that reason.
NIH packages are generally expected to be less popular, yes. They have some technical merit, though in my opinion that's mostly trade-offs rather than one being strictly better than the other. I would be surprised if everybody using them is using them because of technical merit as opposed to it being pushed by the distro.
Red Hat builds really good stuff. NIH is sometimes right because nobody invented the stuff at all. Standard Unix tools are great but they don't solve everything, so we've ended up with most distros having "the Debian way" or "the Red Hat way", the main difference of course being deb/apt/dpkg vs rpm/yum/dnf. When building an embedded system with Yocto, the basic choices are also Debian or Red Hat style, though you can of course do anything.
Special mention goes to NetworkManager, which has become the de facto standard way to configure networking because it's good. And with nmcli I can even remember how to connect to wifi from single user mode.
>They just have more success on getting their NIH babies accepted as the standard by everyone else.
This depends on the phrasing. We could also say that Red Hat produces actually useful software, in contrast with Canonical, whose developments don't seem to provide value over existing solutions.
We could also say that Canonical tries really hard to do exactly what Red Hat does, but in a slightly different space, and not very successfully.
A major difference is that Canonical projects have copyright assignment policies, while Red Hat projects don't - this probably explains a lot of the difference in adoption dynamics.
You're right, you don't have to use snaps. Ubuntu slowly migrates packages on your behalf.
Using apt to install some packages installs snap plumbing and downloads the package as a snap automatically. You don't have to install it manually.
There's no malicious intent though, it's made to "impose a positive pressure on the snap team to produce better work and keep their quality high" (paraphrased, but this was the official answer).
Installing the inferior snap packages when you apt get is one of the worst cases of a Linux distro refusing to respect the user's intent that I've experienced.
Preach.
This has got to be the most user-blind self-imposed preference in a modern operating system outside of Microsoft's BS.
If you're going to use an OSS operating system, the control of what is placed on the system should be inherently with the user. If the developer has a question if a new package should be added or is required, throw a prompt and ask -- with a default to not use application containers and the default packaging system.
Really not hard.
And one of these migrations broke my workflow substantially enough that a dist-upgrade turned into a complete system reformat to Debian and cost hours that I couldn’t afford.
Debian has been a safe haven since.
You really have to work to avoid them; ex. `apt install firefox` will install the snap
You sort of do. It's really hard to avoid them, because they've modified "apt" to install snaps by default without asking.
I'm with the neighboring comments. How do you use Ubuntu without snaps? The base Ubuntu install already comes with several snaps, and installing random things through apt leads to snaps. I personally do not know how to avoid snaps on Ubuntu.
I made this snap "alternative" to solve this exact problem: https://github.com/justinclift
Packaged as: https://github.com/justinclift/snapd-empty/releases/download...
It's just an empty package that tells the system snap is installed, to stop the broken dependency chains you otherwise get from force uninstalling snap.
It's been working fine on a handful of Ubuntu 24.04 systems I've been handed and can't change the OS of, for about half a year now.
Ugh, I somehow managed to paste an incorrect repo url in my comment above.
It should be this: https://github.com/justinclift/snapd-empty/
You migrate to Debian. Everything else is a bandaid that can be rug pulled any time Canonical feels like doing so.
I've done that some years ago and couldn't be happier.
You uninstall all snaps, uninstall snapd, and block it from being installed via APT.
Then you add e.g. the mozilla PPA such that its firefox package gets installed instead.
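In apt terms, both steps are usually done with pin files (a sketch; the paths and priorities follow the commonly documented approach, e.g. Mint's nosnap.pref and the Mozilla team PPA instructions, so treat the exact values as assumptions to verify on your release):

```
# /etc/apt/preferences.d/nosnap.pref - keep snapd from being pulled back in
Package: snapd
Pin: release a=*
Pin-Priority: -10

# /etc/apt/preferences.d/mozilla-firefox - prefer the PPA's deb over the
# Ubuntu archive's transition package that installs the snap
Package: *
Pin: release o=LP-PPA-mozillateam
Pin-Priority: 1001
```

A negative priority forbids installation entirely; a priority above 1000 lets the PPA's firefox win even over a newer version in the Ubuntu archive.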
Yes you do. Some packages aren't available anymore in apt
If you use Ubuntu, yes you do. It’s why I ditched Ubuntu
I forget the exact context but I recall an Ubuntu dev stating they have more users of the Firefox snap alone than trendy distros have entire users.
I think it’s worth keeping that in mind with all the hate Ubuntu gets. Most users are just silently getting their work done on an LTS they update every two years.
Well, I don't know which trendy distro the dev is referring to, but Debian is the complete opposite of trendy. It's a bedrock distro silently running almost everywhere in some form or shape.
Most of the Linux-based (enterprise and/or embedded) appliances are built upon Debian, for example.
P.S.: The total number of Debian installations and their derivatives is unknown, BTW. Debian installations and infra do not collect such information. You can install "popularity-contest", but the question defaults to "no" during installation, so most people do not send in package selection lists, unlike Canonical's tracking of snap installations.
They have also had more malware on the snap store than all other distros have had in their official repos combined.
IMHO, it also became too complex, with too many things installed by default and too much upstream patching.
This made it more fragile. It was really nice in the late 2000s, but gradually became worse.
all the weird proprietary Canonical stuff they try to put into vanilla Debian and have it replace common stuff.
snap, lxd (not lxc!), mir, upstart, ufw.
It's neverending, and it's always failing.
LXD was forked as Incus, and it’s an absolute delight.
Seamless LXC and virtual machine management with clustering, a clean API, YAML templates and a built-in load balancer, it's like Kubernetes for stateful workloads.
Incus is fantastic. I think Proxmox is where everyone is migrating to after the VMWare/Broadcom fiasco, but people should seriously consider Incus as well.
Snaps, and ads in the motd
Plus reduced support duration to increase adoption of Ubuntu Pro, and changing some packages ever so slightly so they behave a little differently.
Switch to sudo-rs, uu-coreutils (rust based stuff), etc., etc.
It's not a Debian derivative anymore. It's something else.
Was not my cup of tea before, it's even more not my cup of tea now.
Switching to rust-based system software is very different from the clearly profit seeking (or control seeking which is just long term profit seeking) changes like ads and snap (with massive friction to not using snap).
Yes, but I prefer glibc + GNU Coreutils based systems in my installations. They're additional nails on top of the (fatal) ones like snap, Ubuntu Pro and MOTD ads.
The alternative question to ask is: in what way has it gone uphill versus just using Debian?
In the early days it had a differing and usually better aligned release schedule for the critical graphics stack.
As a function of time, you are increasingly likely to get rug pulled once Shuttleworth decides to collect his next ransom.
> in what way has it gone uphill versus just using Debian?
Their lawyers' willingness to risk shipping pre-built zfs kernel modules (that are always in sync with the kernel). Pretty important if you're into that sort of thing, it's easier to remove cruft once post-install than to keep an eye on DKMS for years (making sure that it hasn't disassembled itself and continues working).
Minutes to start Firefox on one of my machines.
Amazon ads in the Unity application menu (what was it called, 'lenses' or something?).
I'm an old-school user so I'm not exactly Ubuntu's target audience, but Ubuntu was bad about a lot of the older, lesser-used bits of Linux.
The two things I can remember were problems with NFS out of the box (outside having to install nfs-common, which I'm fine with) and apt-cache not displaying descriptions of packages. There were lots of other, minor annoyances that affected people like me but wouldn't affect someone who got into Linux desktops after, say, 2010. My memory sucks though so those are the two I remember. Yes, there were bug reports filed and yes, they sat in the tracker for years with no attention from Ubuntu.
I wound up back on Debian once I got old enough that I didn't care about being behind the times a couple years.
Oh....snap.
> I like dpkg, I like the Debian documentation even if Arch remains the best on that front.
That's curious, because when I was learning to make Debian packages, I found the official documentation to be far better than anything I had seen from any other distro. The Policy Manual in particular is very detailed, continually improving, and even documents incremental changes from each version to the next. (That last bit makes it easy for package maintainers to keep up with current best practices.)
Does Arch have something better in this department?
Are you perhaps comparing the Arch wiki to Debian's wiki? On that front I would agree with you.
You weren't around for when they broke the OpenSSL random number generator for no good reason. That was back in 2008 and it created vulnerabilities that persist to this day. https://16years.secvuln.info/
I still use Debian but it's hard to forget stuff like that even after all these years.
Though I agree with you, if that's the bar you're setting then Debian comes out far ahead of any other OS that I've ever used - Linux based or not. I can recall dozens of worse Windows bugs, most of which did not even affect me because I was not using Windows at the time. Mac has its share too.
What do you expect Debian to do today about this 17 years old incident?
> I like Debian's measured pragmatism with ideology
There is plenty that could be said of Debian but as far as I’m concerned that’s not part of it.
Debian patches software for purely ideological reasons because they think it is not free enough. That's not pragmatism; it's the reverse of pragmatism. It certainly is a real drag on the teams developing the software Debian ships.
> I've never had a Debian system break without it being my fault in some way.
My experience has been contrary to that. I'm a Linux user of 25+ years with various distros, but about half of that time with Debian as my main desktop. I broke up with Debian about ten years ago thinking we could still be friends, but every time I've tried to put it on a new box since then, something weird has happened: most recently about a month ago on a completely new Intel N150, when it gave me some stick about video modes. Today my laptop got hosed by an attempted upgrade from bookworm to trixie, as in tons of error messages and then no more docker and no more virtualbox. No harm done, because Debian taught me long ago to store a copy of the whole root filesystem on external media before an upgrade, but now the clock is ticking until I have to migrate off it or get stuck with something too old to be compatible with anything.
> And one of the best parts is, I've never had a Debian system break without it being my fault in some way.
https://blog.kronis.dev/blog/debian-updates-are-broken
https://blog.kronis.dev/blog/debian-and-grub-are-broken
Then again, I’ve had most software occasionally break, I’m thankful that Debian exists.
Debian is my foundation. I keep servers on Old Stable and test new release features on an ephemeral system.
I learned nftables with Bookworm and labwc with Trixie.
labwc supports Wayland with Openbox configuration.
Why do you remain on old stable instead of stable?
I've been always a Fedora person, still am. But my PC moved to Proxmox (debian) in 2023. Now a Fedora Atomic sits in a VM running flatpaks and podman containers :D
I think this is all true, but the "being my fault" part has gotten better for me with nixos. Broke it? just reboot into the previous version and get configuration.nix back from git. I had to reinstall exactly once in 2016 shortly after the first install, but I don't know what I did wrong. the third time I installed nixos was last week when I bought a new computer that came with Windows.
You don't say what you like specifically about Debian; most of what you wrote could be said of a lot of distributions.
So here is what I _don't_ like about Debian :-)
- I don't like Debian's package tooling (dpkg, debootstrap, debuild...). Actually, I hate everything about the experience of Debian packaging. Every time I package for Debian, I end up with a messed-up setup of chroots and have to make triple sure nothing leaked from my environment.
- Debian has a habit of repackaging everything with its own sauce, disregarding upstream philosophy. Debian packages will have their own microcosm of configuration directories, defaults, paths, etc., orthogonal to what a pristine installation looks like.
- Debian has the annoying habit of starting services by default upon install. So you always have to dance around your configuration management to disable services, install them, configure them, then restart them.
Do you usually update in place or do a fresh install whenever a new major version comes out?
I always update in place. And I follow all the upgrade procedure advice in the release notes.
[dead]
[flagged]