26 years of FreeBSD and counting...
IIRC, around '99 I got sick of Mandrake and Red Hat RPM dependency hell and found a FreeBSD 3 CD in a Walnut Creek book. Ports and BSD packages were a revelation, to say nothing of the documentation, which still sets it apart from the more haphazard Linux world.
The comment about using a good SERVER mobo like Supermicro is on point --- I managed many Supermicro FreeBSD colo rack servers for almost 15 years and those boards worked well with it.
Currently I run FreeBSD on several home machines, including old Mac minis repurposed as media machines throughout the house.
They run Kodi plus the Linux build of Brave, and with that I can stream just about anything, including live sports.
There is also OpenBSD on one firewall and pfSense (FreeBSD) on another.
> The comment about using a good SERVER mobo like Supermicro is on point --- I managed many Supermicro FreeBSD colo rack servers for almost 15 years and those boards worked well with it.
I completely agree.
Supermicro mobos with server-grade components, combined with aggressive cooling fans/heat sinks and FreeBSD, in an AAA data center resulted in two prod servers with uptimes of 3000+ days. That included dozens of app/jail/ports updates (pretty much everything other than the kernel).
Back when I was a sysadmin (roughly 2007-2010), the preference of a colleague (RIP AJG...) who ran a lot of things before my time at the org was FreeBSD, and I quickly understood why. We ran Postgres on 6.x as the db for a large Jira instance, while Jira itself ran on Linux, IIRC because I went with JRockit, which ran circles around any other JVM at the time. Those Postgres boxes had many years of uptime, locked away in a small colo facility; they never failed and outlived the org, which got merged and chopped up. FreeBSD was just so snappy, and it just kept going. At the same time I ran ZFS on FreeBSD as our main file store for NFS and whatnot, with snapshots, send/recv replication and all.
And it was all indeed on Supermicro server hardware.
And in parallel, while our routing kit was mostly Cisco, I put a transparent bridging firewall in front of the network running pfSense 1.2 or 1.3. It was one of those embedded boxes running a Via C3/Nehemiah, that had the Via Padlock crypto engine that pfSense supported. Its AES256 performance blew away our Xeons and crypto accelerator cards in our midrange Cisco ISRs - cards costing more than that C3 box. It had a failsafe Ethernet passthrough for when power went down and it ran FreeBSD. I've been using pfSense ever since, commercialisation / Netgate aside, force of habit.
And although for some things I lean towards OpenBSD today, FreeBSD delivers, and it has for nearly 20 years for me. And, as they say, it should for you, too.
> uptimes of over 3000+ days
Oof, that sounds scary. I’ve come to view high uptime as dangerous… it’s a sign you haven’t rebooted the thing enough to know what even happens on reboot (will everything come back up? Is the system currently relying on a process that only happens to be running because someone started it manually? Etc)
Servers need to be rebooted regularly in order to know that rebooting won’t break things, IMO.
Depends on how they're built. There are many embedded/real-time systems that expect this sort of reliability too, of course.
I worked on systems that were allowed 8 hours of downtime per year -- but otherwise would have run forever unless a nuclear bomb went off or the power was lost... Tandem. You could pull CPUs out while it was running.
So if we are talking about garbage Windows servers, sure. It's just a question of what is accepted by the customers/users.
Yep. I once did some contracting work for a place that had servers with 1200+ day uptimes. People were afraid to reboot anything. There was also tons of turnover.
I still remember AJG vividly to this day. He also once told me he was a FreeBSD contributor.
My journey with FreeBSD began with version 4.5 or 4.6, running in VMware on Windows and using XDMCP for the desktop. It was super fast and ran at almost native speed. I tried Red Hat 9, and it was slow as a snail by comparison. For me, the choice was obvious. Later on I was running FreeBSD on my ThinkPad, and I still remember the days of coding on it with my professor's linear/non-linear optimisation library, sorting out the wlan driver and firmware to use the library's wifi, and compiling Mozilla on my way home while the laptop was in my backpack. My personal record: I never messed up a single FreeBSD install, even when completely drunk.
Even later, I needed to monitor the CPU and memory usage of our performance/latency critical code. The POSIX API worked out of the box on FreeBSD and Solaris exactly as documented. Linux? Nope. I had to resort to parsing /proc myself, and what a mess it was. The structure was inconsistent, and even within the same kernel minor version the behaviour could change. Sometimes a process's CPU time included all its threads, and sometimes it didn't.
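(For reference, a minimal sketch of the kind of call I assume is meant here: getrusage(2), the obvious portable way to read a process's CPU time, with peak RSS available as a widespread extension. Whether thread time is included and what units ru_maxrss uses are exactly the details that differed between systems.)

    /* Sketch, not the poster's code: read the process's CPU time via
       getrusage(2). ru_utime/ru_stime are POSIX; ru_maxrss is a BSD
       extension (reported in kilobytes on FreeBSD and Linux). */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int main(void) {
        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) != 0) {
            perror("getrusage");
            return 1;
        }
        double cpu = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6
                   + ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
        printf("cpu seconds : %.3f\n", cpu);
        printf("max rss (kB): %ld\n", ru.ru_maxrss);
        return 0;
    }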
To this day, I still tell people that FreeBSD (and the other BSDs) feels like a proper operating system, and GNU/Linux feels like a toy.
All hail the mighty Wombats!
The "completely drunk" comment made me chuckle, too familiar... poor choices, but good times!
This is more about OpenBSD, but worth mentioning that nicm of tmux fame also worked with us in the same little office, in a strange little town.
AJG also made some contributions to Postgres, and wrote a beautiful, full-featured web editor for BIND DNS records which, sadly, faded with him and was eventually lost to time along with his domain, tcpd.net, which has since expired and been taken over.
Lovely stuff. The industry would be so much better off if the family of BSDs had more attention and use.
I run some EVE Online services for friends. They have manual install steps for those of us not using containers. It took me half a day to get the stack going on FreeBSD, and that was mostly me making typos and mistakes. So pleased I was able to dodge the “docker compose up” trap.
As one of the guys who develops an EVE Online service: while you were able to get by with manual install steps that perhaps change with the OS, for a decent number of people it is the first time they have done anything on the CLI of a unixoid system. Docker reduces the support workload in our help channels drastically because it is easier to get going.
I can sympathize. It makes sense.
But...
As a veteran admin I am tired of reading through Dockerfiles to guess how to do a native setup. You can never suss out the intent from those files - only make haphazard guesses.
It smells too much like "the code is the documentation".
I am fine that the manual install steps are hidden deep in the dungeons away from the casual users.
But please do not replace POSIX compliance with Docker compliance.
Look at Immich for an unfortunate example. They have some nice high-level architecture documentation, but the "whys" of the Dockerfile are nowhere to be found. That makes it harder to contribute, as it caters to the Docker crowd only and leaves a lot of guesswork for the POSIX crowd.
Veteran sysadmin of 30 years... UNIX sysadmin and developer...
I've been using docker + compose for my dev projects for about the past 12 years. Very tough to beat the speed of development with multi-tier applications.
To me, Dockerfiles seem like the perfect amount of DSL, yet still flexible, because you can literally run any command as a RUN line and produce anything you want for a layer. Dockerfiles seem to get it right. Maybe the 'anything' seems like a mis-feature, but if you use it well it's a game changer.
Dockerfiles are also an excellent way to distribute FOSS to people who, unlike you or I, cannot really manage a system, install software, etc. without eventually making a mess or getting lost (i.e. jr developers?).
Are there supply chain risks? Sure -- like with many package systems. I build my important images from scratch all the time just to mitigate this. There's also Podman with Containerfiles if you want something more FOSS-friendly but less polished.
All that said, I generally containerize production workloads, but not with Docker. If a dev project is ready for primetime, I port it to Kubernetes. It used to be BSD jails.
Can you explain why "Docker compose" is a trap?
For my two cents, it discourages standardization.
If you run bare-metal, and instructions to build a project say "you need to install libfoo-dev, libbar-dev, libbaz-dev", you're still sourcing it from your known supply chain, with its known lifecycles and processes. If there's a CVE in libbaz, you'll likely get the patch and news from the same mailing lists you got your kernel and Apache updates from.
Conversely, if you pull in a ready-made Docker container, it might be running an entire Alpine or Ubuntu distribution atop your preferred Debian or FreeBSD. Any process you had to keep those packages up to date and monitor vulnerabilities now has to be extended to cover additional distributions.
You said it better at first: Standardization.
POSIX is the standard.
Docker is a tool on top of that layer. Absolutely nothing wrong with it!
But you need to document towards the lower layers. What libraries are used and how they're interconnected.
POSIX gives you that common ground.
I will never ask people not to supply Dockerfiles. But to me it feels the same as if a project just released an apt package and nothing else.
The manual steps need to be documented. Not for regular users but for those porting to other systems.
I do not like black boxes.
The reason I moved away from docker for self-hosted stuff was the lack of documentation and very complicated Dockerfiles with various shell scripts and service configs. Sometimes it feels like reading autoconf-generated files. I much prefer to learn whatever packaging method the OS uses and build the thing myself.
Something like Harbor easily integrates to serve as both a pull-through cache and a CVE scanner. You can actually block pulls based on vulnerability type or CVSS rating.
You /should/ be scanning your containers just like you /should/ be scanning the rest of your platform surface.
I wonder how it would work with the new-ish podman/oci container support?
You've put that command in quotation marks in three comments on this topic. I don't think it's as prevalent as you're making out.
It really is amazing how much success Linux has achieved given its relatively haphazard nature.
FreeBSD always has been, and always will be, my favorite OS.
It is so much more coherent and considered, as the post author points out. It is cohesive; whole.
> It really is amazing how much success Linux has achieved given its relatively haphazard nature.
That haphazard nature is probably part of the reason for its success, since it allowed many alternative ways of doing things to be tried out in parallel.
That was my impression from diving into The Design & Implementation of the FreeBSD Operating System. I really need to devote time to running it long term.
Really great book. Among other things, I think it's the best explanation of ZFS I've seen in print.
Linux has turned haphazardry into a strength. This is impressive.
I prefer FreeBSD.
I like the haphazardry but I think systemd veered too far into dadaism.
THIS. As bad as launchctl on Macs. A solution looking for a problem, so it causes more problems -- like IPv6.
> Solution looking for a problem
Two clear problems with the init system (https://en.wikipedia.org/wiki/Init) are
- it doesn’t handle parallel startup of services (sysadmins can tweak their init scripts to speed up booting, but init doesn’t provide any assistance)
- it does not work in a world where devices get attached to and detached from computers all the time (think of USB and Bluetooth devices, WiFi networks).
The second problem was solved in an evolutionary way in init-based systems by having multiple daemons doing basically the same thing: listening for device attachments/detachments and handling them. Unifying that in a single daemon is, IMO, a good thing. If you accept that, making that single daemon the init process makes sense too, as it also gives you a solution to the first problem.
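(To make the second point concrete, here is a minimal sketch -- my illustration, not anything from this thread -- of the job each of those device-event daemons does on Linux, using libudev, which itself was later folded into the systemd project.)

    /* Sketch: subscribe to kernel device add/remove events via libudev
       and print them. Build with: cc monitor.c -ludev */
    #include <stdio.h>
    #include <poll.h>
    #include <libudev.h>

    int main(void) {
        struct udev *udev = udev_new();
        struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");

        udev_monitor_filter_add_match_subsystem_devtype(mon, "usb", NULL);
        udev_monitor_enable_receiving(mon);

        struct pollfd pfd = { .fd = udev_monitor_get_fd(mon), .events = POLLIN };
        for (;;) {
            if (poll(&pfd, 1, -1) <= 0)        /* wait for an event */
                continue;
            struct udev_device *dev = udev_monitor_receive_device(mon);
            if (!dev)
                continue;
            printf("%s: %s\n",
                   udev_device_get_action(dev),   /* "add" / "remove" */
                   udev_device_get_syspath(dev));
            udev_device_unref(dev);
        }
    }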
Yes, “a solution”. We need a thing. Systemd is a thing. Therefore, we need systemd.
Not to get into a flame war, but 99% of my issues with systemd are that they didn't just replace init, but also NTP, DHCP, logging (that one is arguably necessary, but they made it complicated, especially if you want to send logs to a centralized remote location or use another utility to view them), etc. It broke the fundamental historical concept of Unix: do one thing very well.
To make things worse, the opinionated nature of systemd's founder (Lennart Poettering) has meant many a sysadmin has had to fight with it in real-world usage (e.g. systemd-timesyncd's SNTP client not handling drift very well, or systemd-networkd not handling real-world DHCP fields). Responses like "Don't use a computer with a clock that drifts" or "we're not supporting a non-standard field that the majority of DHCP servers use" just don't hold up in the real world. The result was going to be ugly. It's not surprising that most distros ended up bundling chrony, etc.
You can't be serious thinking that IPv4 doesn't have problems
Of course not.
But IPv6 is not the solution to IPv4's issues at all.
IPv6 is something completely different, justified post facto with EMOTIONAL arguments, i.e. "You are stealing the last IPv4 address from the children!"
- Dual stack -- unnecessary and bloated
- Performance = 4x worse or more
- No NAT or private networks -- not in the same sense. People love to hate on NAT, but I do not want my toaster on the internet with a unique hardware serial number.
- Hardware tracking built into the protocol -- the mitigations offered are BS.
- Addresses are a cognitive block
- Forces people to use DNS (central), which acts as a censorship choke point
All we needed was an extra prefix octet to select WHICH address space - i.e. '0' is the old internet, as in 0.0.0.0.10 --- backwards compatible, not dual stack, no privacy nightmare, etc.
I actually wrote a code project that implements this network as an overlay -- but it's not ready to share yet. Works though.
If I were to imagine myself in the room deciding on the IPv6 requirements, I expect the key one was 'track every person and every device, everywhere, all the time', because if you are just trying to expand the address space then IPv6 is way, way overkill -- it's overkill even as future-proofing for the next 1000 years of all that privacy invading.
> All we needed was an extra prefix octet to select WHICH address space - i.e. '0' is the old internet, as in 0.0.0.0.10 --- backwards compatible, not dual stack, no privacy nightmare, etc.
That is what we have in IPv6. What you write sounds good/easy on paper, but when you look at how networks are really implemented, you realize it is impossible to do that. Network packets have to obey the laws of bits and bytes, and there isn't any place to put that extra '0' in IPv4: no matter what, you have to create a new IPv6. They did write a standard for how to carry IPv4 addresses in IPv6, but anyone who doesn't have IPv6 themselves can't use it, and so we must dual stack until everyone transitions.
Actually there is a place to put it... I didn't want to get into this but since you asked:
My prototype/thought experiment is called IPv40, a 40-bit extension to IPv4.
IPv40 addresses are carried over legacy networks using the IPv4 Options field (type 35).
Legacy routers ignore option 35 and route based on the 32-bit destination (effectively forcing traffic to "Space 0"), while IPv40-aware routers parse option 35 to switch universes.
This works right now, but as a software overlay, not in hardware.
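(Purely illustrative -- the project hasn't been shared, so the exact layout here is my guess at the described scheme: one extra "space" octet carried in an IPv4 option of type 35, padded to the 4-byte boundary IPv4 options require.)

    /* Hypothetical sketch, not the actual IPv40 code: append the extra
       address octet as an IPv4 option (type 35). Per the description
       above, legacy routers ignore it and route on the usual 32-bit
       destination, while IPv40-aware routers read it to pick the "space". */
    #include <stddef.h>
    #include <stdint.h>

    #define IPV40_OPT_TYPE 35

    /* Writes the option into buf (at least 4 bytes) and returns the number
       of bytes written, padded to the required 32-bit boundary. */
    size_t ipv40_write_option(uint8_t *buf, uint8_t space_octet) {
        buf[0] = IPV40_OPT_TYPE;  /* option type                             */
        buf[1] = 3;               /* option length: type + len + 1 data byte */
        buf[2] = space_octet;     /* the extra "which address space" octet   */
        buf[3] = 0;               /* End-of-Options padding                  */
        return 4;
    }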
Just my programming/thought experiment which was pretty fun.
When solutions are pushed top-down like IPv6, my spider sense tingles -- what problem is it solving? The answer is NOT 'to address the address-space limitations of IPv4'; that is the marketing, and if you challenge it you will be met with ad hominem attacks and emotional manipulation.
So either the new octet is in the least-significant place in an ipv40 address, in which case it does a terrible job of alleviating the IP shortage (everyone who already has IP blocks just gets 256x as many addresses),
Or, it’s in the most-significant place, meaning every new ipv40 IP is in a block that will be a black hole to any old routers, or they just forward it to the (wrong) address that you get from dropping the first octet.
Not to mention it’s still not software-compatible (it doesn’t fit in 32 bits, all system calls would have to change, etc.)
That all seems significantly worse than IPv6 which already works just fine today.
So you have to update every router to actually route the "non-legacy" addresses correctly. How is this different from IPv6?
That is the easy part - most of the core routers have supported IPv6 for decades - IIRC many are IPv6-only on the backbone. The hard part is that if there is even one client that doesn't have the update, you can't use the new non-legacy addresses, as it can't talk to you.
Just like today, it is likely that most clients would support your new address, but ISPs won't route it for you.
You didn't save anything as everyone needs to know the new extension before anyone can use it.
Hardware is important - fast routers can't do the work in the CPU (and it was even worse in the mid-90s when this started); they need special hardware assistance.
All good points guys -- but my point was to see what is possible. And it was. And it was fun! Of course I know it will perform poorly and it's not hardware.
I almost completely agree with you, but IPv6 isn't going anywhere - it's our only real alternative. Any other new standard would take decades to implement even if a new standard is agreed on. Core routers would need to be replaced with new devices with ASICs to do hardware routing, etc. It's just far too late.
I still shake my head at IPv6's committee-driven development, though. My god, the original RFCs had IPSEC support as mandatory, and the auto-configuration had no support for added fields (DNS servers, etc.). It's like the committee was made up only of network engineers. The whole SLAAC vs DHCPv6 drama was painful to see play out.
That being said, most modern IPv6 implementations no longer derive the link-local portion from the hardware MAC addresses (and even then, many modern devices such as phones randomize their hardware addresses for wifi/bluetooth to prevent tracking). So the privacy portions aren't as much of a concern anymore. Javascript fingerprinting is far more of an issue there.
> I still shake my head at IPv6's committee-driven development, though. My god, the original RFCs had IPSEC support as mandatory, and the auto-configuration had no support for added fields (DNS servers, etc.). It's like the committee was made up only of network engineers. The whole SLAAC vs DHCPv6 drama was painful to see play out.
So true.
> That being said, most modern IPv6 implementations no longer derive the link-local portion from the hardware MAC addresses (and even then, many modern devices such as phones randomize their hardware addresses for wifi/bluetooth to prevent tracking). So the privacy portions aren't as much of a concern anymore. Javascript fingerprinting is far more of an issue there
JS Fingerprinting is a huge issue.
Honestly, if IPv6 were just for the internet of things I'd ignore it. Since it's pushed onto every machine and you are essentially forced to use it -- with no direct benefit to the end user -- I have a big problem with it.
So it's not strictly needed for YOU, but it solves some problems that are not problems for YOU, and it also happens to expand the address space. I do not think the 'fixes' to IPv6 do enough to address my privacy concerns, particularly against a well-resourced adversary. It seems like they just raised the bar a little. Why even bother? Tell me why I must use it without resorting to 'you will be unable to access IPv6-hosted services!' or 'think of the children!?' -- both emotional manipulations.
Browser / JS fingerprinting applies to IPv4, too. And your entire IPv4 home network is likely NAT'd behind an ISP DHCP-provided address that rarely changes, so it would be easy to track your household across sites. Do you feel this is a privacy concern, and if not, why not?
> Tell me why I must use it without resorting to 'you will be unable to access IPv6 hosted services!' or 'think of the children!?' -- both emotional manipulations.
You probably don't see it directly, but IPv4 IP addresses are getting expensive - AWS recently started to charge for their use. Cloud providers are sucking them up. If you're in the developed world, you may not see it, but many ISPs, especially in Asia and Africa, are relying on multiple levels of NAT to serve customers - you often literally can't connect to home if you need or want to. It also breaks some protocols in ways you can't get around depending on how said ISPs deal with NAT (eg you pretty much can't use IPSEC VPNs and some other protocols when you're getting NAT'd 2+ times; BitTorrent had issues in this environment, too). Because ISPs doing NAT requires state-tracking, this can cause performance issues in some cases. Some ISPs also use this as an excuse to force you to use their DNS infra that they can then sell onwards (though this can now be mitigated by DNS over HTTPS).
There are some benefits here, though. CGNAT means my phone isn't exposed directly to the big bad internet and I won't be bankrupted by a DDOS attack, but there are other, better ways to deal with that.
Again, I do get where you're coming from. But we do need to move on from IPv4; IPv6 is the only real alternative, warts and all.
C'mon, that's just rude to Dada.
Linux is haphazard because it's really only the kernel. The analog of "FreeBSD" would be a Linux distro like Red Hat or Debian, etc. In fact, systemd's real goal was to get rid of Linux's haphazard nature... but it's, ahhh, really divisive, as everyone knows.
I trace early Linux's initial success back to the license. It's the first real decision you have to make once you start putting your code out there.
Yes and no. There were also some intellectual property shenanigans around FreeBSD 4.3, and then the really rough FreeBSD 5 series with its initial experiments with M:N threading in the kernel and troubles with SMP.
Just another instance of Worse is Better?