This post lists inexpensive home servers, Tailscale, and Claude Code as the big unlocks.
I actually think Tailscale may be an even bigger deal here than sysadmin help from Claude Code et al.
The biggest reason I had not to run a home server was security: I worried that I might fall behind on updates and end up compromised.
Tailscale dramatically reduces this risk, because I can easily configure it so my own devices can talk to my home server from anywhere in the world without exposing any ports on it directly to the internet.
Being able to hit my home server directly from my iPhone via a tailnet no matter where in the world my iPhone might be is really cool.
I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.
I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that. I can't rule out a vulnerability somewhere but services are containerized and/or run as separate UNIX users. It's the way the Internet is meant to work.
> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.
Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.
> I am not sure why people are so afraid of exposing ports
It's simple, you increase your attack surface, and the effort and expertise needed to mitigate that.
> It's the way the Internet is meant to work.
Along with no passwords or security. There's no prescribed way for how to use the internet. If you're serving one person or household rather than the whole internet, then why expose more than you need out of some misguided principle about the internet? Principle of least privilege, it's how security is meant to work.
> It's simple, you increase your attack surface, and the effort and expertise needed to mitigate that.
Sure, but opening up one port is a much smaller surface than exposing yourself to a whole cloud hosting company.
Ah… I really could not disagree more with that statement. I know we don’t want to trust BigCorp and whatnot, but a single exposed port and an incomplete understanding of what you’re doing is really all it takes to be compromised.
Same applies to Tailscale. A Tailscale client, coordination plane vulnerability, or incomplete understanding of their trust model is also all it takes. You are adding attack surface, not removing it.
If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.
If you are exposing a handful of hardened services on infrastructure you control, Tailscale adds complexity for no gain. If you are connecting machines across networks you do not control, or want zero-config access to internal services, then I can see its appeal.
There was a time when people were allowed to drive cars unlicensed.
These days, that seems insane.
As the traffic grew, as speeds increased, licensing became necessary.
I think, these days, we're almost into that category. I don't say this happily. But having unrestricted access seems like an era coming to an end.
I realise this seems unworkable. But so was the idea of a driver's license. Sometimes society and safety comes first.
I'm willing to bet that in under a decade, something akin to this will happen.
I'll take this to mean that you think arbitrary access to a computer's capabilities will require licensure, in which case I think this is a bad metaphor.
The point of a driver's license is that driving a ton of steel around at >50mph presents risk of harm to others.
Not knowing how to use a computer - driving it "poorly" - does not risk harm to others. Why does it merit restriction, based on the topic of this post?
Your unpatched Wordpress install is someone else’s botnet host, forming part of the “distributed” in DDoS, which harms others.
It’s why Cloudflare exists, which in itself is another form of harm, in centralising a decentralised network.
The argument is self-defeating:
1. "Unpatched servers become botnet hosts" - true, but Tailscale does not prevent this. A compromised machine on your tailnet is still compromised. The botnet argument applies regardless of how you access your server.
2. Following this logic, you would need to license all internet-connected devices: phones, smart TVs, IoT. They get pwned and join botnets constantly. Are we licensing grandma's router?
3. The Cloudflare point undermines the argument: "botnets cause centralization (Cloudflare), which is harm", so the solution is... licensing, which would centralize infrastructure further? That is the same outcome being called harmful.
4. Corporate servers get compromised constantly. Should only "licensed" corporations run services? They already are, and they are not doing better.
Back to the topic: I have no clue what you think Tailscale is, but it does not increase security, only convenience.
The comment I was replying to was claiming that using your computer 'poorly' does not harm others. I was simply refuting that. Having spent the last two decades null routing customer servers when they decide to join an attack, this isn't theoretical.
As an aside, I dislike tailscale, and use wireguard directly.
Back to the topic: Your connected device can harm others if used poorly. I am not proposing licensing requirements.
I would detest living in a world where regulators assign liability in this way, it sounds completely ridiculous. On a level with "speech is violence".
If I threw my license away tomorrow, what would be insane about me driving without a license?
Are you saying "unlicensed" where you mean "untrained?"
The point of massive fines, and in some cases jailtime for driving without a license is control.
If someone breaks regs, you want to be able to levy fines or jail. If they do it a lot, you want an inability to drive at all.
It's about regulating poor drivers. And yes, initially vetting a driver too.
I don't really know any adults who don't drive, and nobody ever told me they weren't capable.
I don't think it's about driving ability, besides the initial vetting.
Most inadequate drivers don't think they're inadequate, which is part of the problem. Unless your acquaintances are exclusively PMC, you most likely know several adults who've lost their licenses because they are not adequately safe drivers. And if your acquaintances are exclusively PMC, you most likely know several adults who are not adequately safe drivers and should have lost their licenses, but knew the legal tricks to avoid it.
From the perspective of those writing the regs, speeding, running lights, and driving carelessly or dangerously (all fines or crimes here) are indeed indicators of whether someone drives safely.
Understand, I am not advocating this. I said I did not like it. Neither of those statements has anything to do with whether I think it will come to pass, or not.
I am ~30 years old and I do not drive. In fact, I cannot drive.
Can you be more concrete about what you predict?
This felt like it didn’t do your aim justice, “$X and an incomplete understanding of what you’re doing is all it takes to be compromised” applies to many $X, including Tailscale.
Even if you understand what you are doing, you are still exposed to every single security bug in all of the services you host. Most of these self hosted tools have not been through 1% of the security testing big tech services have.
Now you are exposed to every security bug in Tailscale's client, DERP relays, and coordination plane, plus you have added a trust dependency on infrastructure you do not control. The attack surface did not shrink, it shifted.
I run the tailscale client in its own LXC on Proxmox, which connects to nginx proxy manager, also in its own LXC, which then connects to Nextcloud configured with all the normal features (passkeys, HTTPS, etc). The Nextcloud VM uses full disk encryption as well.
Any one of those components might be exploitable, but to get my data you'd have to exploit all of them.
You do not need to exploit each layer because you traverse them. Tailnet access (compromised device, account, Tailscale itself) gets you to nginx. Then you only need to exploit Nextcloud.
LXC isolation protects Proxmox from container escapes, not services from each other over the network. Full disk encryption protects against physical theft, not network attacks while running.
And if Nextcloud has passkeys, HTTPS, and proper auth, what is Tailscale adding exactly? What is the point of this setup over the alternative? What threat does this stop that "hardened Nextcloud, exposed directly" does not? It is complexity theater. Looks like defense in depth, but the "layers" are network hops, not security boundaries.
And Proxmox makes it worse in this case, as most people won't know or understand that Proxmox's networking is fundamentally wrong: it's configured with consistent interface naming set the wrong way.
For every remote exploit and cloud-wide outage that has happened over the past 20 years, my sshd exposed to the internet on port 22 has had zero of either. There were a couple of major OpenSSH bugs, but my auto-updater took care of those before I saw them on the news.
You can trust BigCorp all you want, but there are more sshd processes out there than tailnets, and the scrutiny is on OpenSSH. We are not comparing sshd to, say, WordPress here. Maybe when you don't over-engineer a solution you don't need to spend 100x the resources auditing it…
If you only expose SSH then you're fine, but if you're deploying a bunch of WebApps you might not want them accessible on the internet.
The few things I self host I keep out in the open: etcd, Kubernetes, Postgres, pgAdmin, Grafana and Keycloak. But I can see why someone would want to hide inside a private network.
Yeah any web app that is meant to be private is not something I allow to be accessible from the outside world. Easy enough to do this with ssh tunnels OR Wireguard, both of which I trust a lot more than anything that got VC funding. Plus that way any downtime is my own doing and in my control to fix.
How would another service be impacted by an open UDP port on a server that the service is not using?
Using a BigCorp service also has risks. You are exposed to many of their vulnerabilities, that’s why our information ends up in data leaks.
Someone would need your 256-bit key to do anything to an exposed Wireguard port.
In theory.
In the same theory, someone would need your EC SSH key to do anything with an exposed SSH port.
Practice is a separate question.
SSH is TCP though and the outside world can initiate a handshake, the point being that wireguard silently discards unauthenticated traffic - there's no way they can know the port is open for listening.
Uh, you know you can scan UDP ports just fine, right? Hosts reply with an ICMP destination unreachable / port unreachable (3/3 in IPv4, 1/4 in IPv6) if the port is closed. Discarding packets won't send that ICMP error.
It's slow to scan due to ICMP ratelimiting, but you can parallelize.
(Sure, you can disable / firewall drop that ICMP error… but then you can do the same thing with TCP RSTs.)
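If you want to see this distinction yourself, nmap's UDP scan surfaces it directly (a sketch; port and host are placeholders):

    # "closed" = ICMP port unreachable came back
    # "open|filtered" = nothing came back (dropped, or something is listening)
    sudo nmap -sU -p 51820 host.example.com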
That's why you discard ICMP errors.
If anything, that's why you discard ICMP port unreachable, which I assume you meant.
If you're blanket dropping all ICMP errors, you're breaking PMTUD. There's a special place reserved in hell for that.
(And if you're firewalling your ICMP, why aren't you firewalling TCP?)
Not even remotely comparable.
Wireguard is explicitly designed to not allow unauthenticated users to do anything, whereas SSH is explicitly designed to allow unauthenticated users to do a whole lot of things.
> SSH is explicitly designed to allow unauthenticated users to do a whole lot of things
I'm sorry, what?
You could also use ZeroTier and get similar capabilities without a third-party being a blocker.
or netbird
Interesting product here, thanks, although I prefer the p2p transport layer (VL1) plus an Ethernet emulation layer (VL2) for bridging and multicast support.
Headscale is a thing
Headscale is only really useful if you need to manage multiple users and/or networks. If you only have one network you want access to and a small number of users/devices, it only increases the attack surface over having a single WireGuard instance listening, because it has more moving parts.
I set it up to open the port for a few seconds via port knocking. Plus another script that runs on the server and allows connections from my home IP address, found by resolving a domain my router keeps updated via dyndns, so devices at my home don't need to port knock to connect.
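A minimal sketch of the knocking part with knockd, assuming WireGuard on UDP 51820 and an ESTABLISHED/RELATED conntrack rule that keeps the tunnel alive after the window closes (knock sequence and ports are placeholders):

    # /etc/knockd.conf
    [options]
        logfile = /var/log/knockd.log

    [opencloseWG]
        sequence      = 7000,8000,9000
        seq_timeout   = 5
        tcpflags      = syn
        start_command = /sbin/iptables -I INPUT -s %IP% -p udp --dport 51820 -j ACCEPT
        cmd_timeout   = 10
        stop_command  = /sbin/iptables -D INPUT -s %IP% -p udp --dport 51820 -j ACCEPT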
I think the most important thing about Tailscale is how accessible it is. Is there a GUI for Wireguard that lets me configure my whole private network as easily as Tailscale does?
This is where using frontier models can help - you can have them assist with configuring and operating WireGuard nearly as easily as you can have them walk you through Tailscale, eliminating the need for a middleman.
The mid-level and free tiers aren't necessarily going to help, but the Pro/Max/Heavy tier can absolutely make setting up and using wireguard and having a reasonably secure environment practical and easy.
You can also have the high tier models help with things like operating a FreePBX server and VOIP, manage a private domain, and all sorts of things that require domain expertise to do well, but are often out of reach for people who haven't gotten the requisite hands on experience and training.
I'd say go through the process of setting up your self-hosting environment, then after the fact ask the language model: "This is my environment: blah, a, b, c, x, y, z, blah, blah. What simple things can I do to make it more secure?"
And then repeat that exercise - create a chatgpt project, or codex repo, or claude or grok project, wherein you have the model do a thorough interrogation of you to lay out and document your environment. With that done, you condense it into a prompt, and operate within the context where your network is documented. Then you can easily iterate and improve.
Something like this isn't going to take more than a few 15 minute weekend sessions each month after initially setting it up, and it's going to be a lot more secure than the average, completely unattended, default settings consumer network.
You could try to yolo it with Operator or an elevated MCP interface to your system, but the point is, those high-tier models are good enough to make significant self-hosting easily achievable.
> Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.
Wireguard is distributed by distros in official packages. You don't need time, money, or expertise to set up unattended upgrades with auto-reboot on a Debian or Red Hat based distro. At least it is not more complicated than setting up an AI agent.
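On Debian/Ubuntu that amounts to roughly this (a sketch; the reboot time is arbitrary):

    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # then in /etc/apt/apt.conf.d/50unattended-upgrades:
    Unattended-Upgrade::Automatic-Reboot "true";
    Unattended-Upgrade::Automatic-Reboot-Time "04:00";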
What about SMTP, IMAP(S), HTTP(S), and the various game servers the parent mentioned having open ports for?
Having a single port open for VPN access seems okay to me. That's what I did, but I don't want an "etc" involved in what has direct access to hardware/services in my house from outside.
How does wireguard interfere with email?
> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.
This is what I do. You can do Tailscale like access using things like Pangolin[0].
You can also use a bastion host, or block all ports and set up Tor or i2p, and then anyone that even wants to talk to your server will need to know cryptographic keys to route traffic to it at all, on top of your SSH/WG/etc keys.
> I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that.
This is what I don't do. Anything that needs real internet access like mail, raw web access, etc gets its own VPS where an attack will stay isolated, which is important as more self-hosted services are implemented using things like React and Next[1].
Is a container not enough isolation? I do SSH to the host (alt-port) and then services in containers (mail, http)
Depends on your risk tolerance.
I personally wouldn't trust a machine if a container was exploited on it, you don't know if there were any successful container escapes, kernel exploits, etc. Even if they escaped with user permissions, that can fill your box with boobytraps if they have container-granted capabilities.
I'd just prefer to nuke the VPS entirely and start over than worry if the server and the rest of my services are okay.
Yea I feel that too.
There are some well-respected compute providers you can use, and for a very low amount you can sort of offload this worry to someone else.
That being said, VMs themselves are a good enough security box too. I consider running VMs, even on your home server with public-facing services, usually allowable.
Yeah, I only run very little on VPS, so this is practically free to me. Everything else I host at home behind Wireguard w/ Pangolin.
I understand where you are coming from but no, containers aren't enough isolation.
If you are running some public service, it might have bugs (we do see RCE issues as well), or there can be some misconfiguration, and containers by default don't provide enough security if a hacker tries to break in. Containers aren't secure in that sense.
Virtual machines are the intended tool for that, but they can be full of friction at times.
If you want something of a middle compromise, I can't recommend incus enough. https://linuxcontainers.org/incus/
It allows you to set up VMs as easily as containers, provides a web UI, and gives an amount of isolation that you can (usually) trust everything on.
I'd say don't take chances with your home server, because that server may sit inside your firewall and, in the worst case, infect other devices. Virtualization with tools like incus or Proxmox (another well-respected tool) is the safest route and provides isolation you can trust. I highly recommend taking a look at it if you deploy public-facing services.
It's the way the internet was meant to work, but that doesn't make it any easier. Even when everything is in containers/VMs/users, if you don't put a decent amount of additional effort into automatic updates and keeping that context hardened as you tinker with it, it's quite annoying when it gets pwned.
There was a popular post about this less than a month ago: https://news.ycombinator.com/item?id=46305585
I agree maintaining wireguard is a good compromise. It may not be "the way the internet was intended to work" but it lets you keep something which feels very close without relying on a 3rd party or exposing everything directly. On top of that, it's really not any more work than Tailscale to maintain.
> There was a popular post about this less than a month ago: https://news.ycombinator.com/item?id=46305585
This incident precisely shows that containerization worked as intended and protected the host.
It protected the host itself, but it did not protect the server from being compromised and running malware that mined cryptocurrency.
Containerizing your publicly exposed service will also not protect your HTTP server from hosting malware or your SMTP server from sending SPAM, it only means you've protected your SMTP server from your compromised HTTP server (assuming you've even locked it down accurately, which is exactly the kind of thing people don't want to be worried about).
Tailscale hands the protection of the public-facing portion to a company dedicated to keeping that portion secure. Wireguard (or similar) limits the exposure to a single service with low churn and minimal attack surface. That's a very different discussion from preventing lateral movement alone. And that all goes without mentioning that not everyone wants to deal with containers in the first place (though many do in either scenario).
I just run an SSH server and forward local ports through that as needed. Simple (at least to me).
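e.g. something like (hostname and ports illustrative):

    # make a home service (say, Grafana on 3000) available at localhost:3000
    ssh -L 3000:127.0.0.1:3000 user@home.example.com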
I do that as well, along with using sshd as a SOCKS proxy for web-based stuff via Firefox, but it can be a bit of a pain to forward each service on each host individually if you have more than a few things going on - especially if you have things trying to use the same port and need to keep track of how you mapped them locally. It can also be a lot harder to manage on mobile devices; e.g., say you have some media or home automation services - they won't be as easy to access through a single public SSH host via port forwarding (if at all) as through a VPN, and wireguard is about as easy a personal VPN as there is.
That's where wg/Tailscale come in - it's just a traditional IP network at that point. Also less to do to shut up bad login attempts from spam bots and such. I once forgot to configure the log settings on sshd and ended up with GBs of logs in a week.
The other big upside (besides not having a 3rd party) of putting in the slightly greater effort to run wg/ssh/another personal VPN is that the latency and bandwidth to your home services will be better.
> and wireguard is about as easy a personal VPN as there is.
I would argue OpenVPN is easier. I currently run both (there are some networks I can’t use UDP on, and I haven’t bothered figuring out how to get wireguard to work with TCP), and the OpenVPN initial configuration was easier, as is adding clients (DHCP, pre-shared cert+username/password).
This isn’t to say wireguard is hard. But imo OpenVPN is still easier - and it works everywhere out of the box. (The exception is networks that only let you talk on 80 and 443, but you can solve that by hosting OpenVPN on 443, in my experience.)
This is all based on my experience with opnsense as the vpn host (+router/firewall/DNS/DHCP). Maybe it would be a different story if I was trying to run the VPN server on a machine behind my router, but I have no reason to do so - I get at least 500Mbps symmetrical through OpenVPN, and that’s just the fastest network I’ve tested a client on. And even if that is the limit, that’s good enough for me, I don’t need faster throughput on my VPN since I’m almost always going to be latency limited.
How many random people do you have hitting port 22 on a given day?
Dozens. Maybe hundreds. But they can't get in as they don't have the key.
change port.
After years of cargo-culting this advice—"run ssh on a nonstandard port"—I gave up and reverted to 22 because ssh being on nonstandard ports didn't change the volume of access attempts in the slightest. It was thousands per day on port 22, and thousands per day on port anything-else-i-changed-it-to.
It's worth an assessment of what you _think_ running ssh on a nonstandard port protects you against, and what it's actually doing. It won't stop anything other than the lightest and most casual script-based shotgun attacks, and it won't help you if someone is attempting to exploit an actual-for-real vuln in the ssh authentication or login process. And although I'm aware the plural of "anecdote" isn't "data," it sure as hell didn't reduce the volume of login attempts.
Public key-only auth + strict allowlists will do a lot more for your security posture. If you feel like ssh is using enough CPU rejecting bad login attempts to actually make you notice, stick it behind wireguard or set up port-knocking.
And sure, put it on a nonstandard port, if it makes you feel better. But it doesn't really do much, and anyone hitting your host up with censys.io or any other assessment tool will see your nonstandard ssh port instantly.
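For reference, the key-only setup is a few lines in sshd_config (a sketch; the username is a placeholder):

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    KbdInteractiveAuthentication no   # ChallengeResponseAuthentication on older OpenSSH
    PermitRootLogin no
    AllowUsers alice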
Conversely, what do you gain by using a standard port?
Now, I do agree a non-standard port is not a security tool, but it doesn't hurt running a random high-number port.
> Conversely, what do you gain by using a standard port?
One less setup step in the runbook, one less thing to remember. But I agree, it doesn't hurt! It just doesn't really help, either.
it did for me.
I've tried using a nonstandard port but I still see a bunch of IPs getting banned, with the added downside that if I'm on the go, I sometimes don't remember the port.
Underrated reply - I randomize the default ports everywhere I can, really cuts down on brute force/credential stuffing attempts.
or keep the port and move to IPv6 only.
Also to Simon: I am not sure how the iPhone works, but on Android you could probably use Mosh and Termux to connect to the server and get the same end result without relying on a third party (in this case Tailscale).
I am sure there must be an iPhone app which allows something like this too. I highly recommend more people take a look into such a workflow; I might look into it more myself.
Tmate is a wonderful service if your home network is behind NAT.
I personally like using the hosted instance of tmate (tmate.io) itself, but it can be self-hosted and is open source.
Once again it has the third-party issue, but luckily it can be self-hosted, so you can even get a mini VPS on hetzner/upcloud/ovh and route traffic through that by hosting tmate there, so ymmv.
>It's the way the internet was meant to work, but that doesn't make it any easier. Even when everything is in containers/VMs/users, if you don't put a decent amount of additional effort into automatic updates and keeping that context hardened as you tinker with it, it's quite annoying when it gets pwned.
As someone who spent decades implementing and securing networks and internet-facing services for corporations large and small as well as self-hosting my own services for much of that time, the primary lesson I've learned and tried to pass on to clients, colleagues and family is:
If you expose it to the Internet, assume it will be pwned at some point.
No, that's not universally true. But it's a smart assumption to make for several reasons:
1. No software is completely bug-free, and those bugs can expose your service(s) to compromise;
2. Humans (and their creations) are imperfect and will make mistakes -- possibly exposing your service(s) to compromise;
3. Bad actors, ranging from marginally competent script kiddies to master crackers with big salaries and big budgets from governments and criminal organizations are out there 24x7 trying to break into whatever systems they can reach.
The above applies just as much to tailscale or wireguard as it does to ssh/http(s)/imap/smtp/etc.
I'll say it again as it's possibly the most important concept related to exposing anything:
If you expose it to the Internet, assume that, at some point, it will be compromised and plan accordingly.
If you're lucky (and good), it may not happen while you're responsible for it, but assuming it will and having a plan to mitigate/control an "inevitable" compromise will save your bacon much better than just relying on someone else's code to never break or have bugs which put you at risk.
Want to expose ports? Use Wireguard? Tailscale? HAProxy? Go for it.
And do so in ways that meet your requirements/use cases. But don't forget to at least think about (better yet, script/document) what you will do if your services are compromised.
Because odds are that one day they will.
Every time I put anything anywhere on the open net, it gets bombarded 24/7 by every script kiddie, botnet group, and, these days, AI company out there. No matter what I'm hosting, it's a lot more convenient not to have to worry about that even for a second.
> Every time I put anything anywhere on the open net, it gets bombarded 24/7 by every script kiddie, botnet group, and, these days, AI company out there
Are you sure that it isn't just port scanners? I get perhaps hundreds of connections to my SMTP server every day, but they are just innocuous connections (hello, then disconnect). I wouldn't worry about that unless you see repeated login attempts, in which case you may want to deploy Fail2Ban.
Port scanners don't try to ssh into my server with various username/password combinations.
I prefer to hide my port instead of using F2B for a few reasons.
1. Log spam. Looking in my audit logs for anything suspicious is horrendous when there's just megs of login attempts for days.
2. F2B has banned me in the past due to various oopsies on my part. Which is not good when I'm out of town and really need to get into my server.
3. Zero days may be incredibly rare in ssh, but maybe not so much in Immich or any other relatively new software stack being exposed. I'd prefer not to risk it when simple alternatives exist.
Besides the above, using Tailscale gives me other options, such as locking down cloud servers (or other devices I may not have hardware control over) so that they can only be connected to, but not out of.
You can tweak rate thresholds for F2B, so that it blocks the 100-attempts-per-second attackers, but doesn't block your three-attempts-per-minute manual fumbling.
I know this. But I don't like that they still get to try at least once, and there's still the rest of my list.
This is a good reason not to expose random services, but a wireguard endpoint simply won't respond at all if someone hits it with the wrong key. It is better even than key based ssh.
I've managed wireguard in the past, and would never do it again. Generating keys, distributing them, configuring it all... bleh!
Never again, it takes too much time and is too painful.
Certs from Tailscale are reason enough to switch, in my opinion!
The key to successful self-hosting is to make it easy and fast, IMHO.
> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.
I’m working on a (free) service that lets you have it both ways. It’s a thin layer on top of vanilla WireGuard that handles NAT traversal and endpoint updates so you don’t need to expose any ports, while leaving you in full control of your own keys and network topology.
Apparently I'm ignorant about Tailscale, because your service description is exactly what I thought Tailscale was.
The main issue people have with Tailscale is that it's a centralised service that isn't self-hostable. The Tailscale server manages authentication and keeps track of your devices' IPs.
Your eventual connection is direct to your device, but all the management before that runs on Tailscale's servers.
Isn't this what headscale is for?
This is very cool!
But I also think it's worth a mention that for basic "I want to access my home LAN" use cases you don't need P2P, you just need a single public IP to your lan and perhaps dynamic dns.
Where will you host the wg endpoint to open up?
- Each device? This means setting up many peers on each of your devices
- Router/central server? That's a single point of failure, and often a performance bottleneck if you're on LAN. If that's a router, the router may be compromised and eavesdrop on your connections, which you probably didn't secure as hard because it's on a VPN.
Not to mention DDNS can create significant downtime.
Tailscale fails over basically instantly, and is E2EE, unlike the hub setup.
To establish a wg connection, only one node needs a public IP/port.
> Router/central server? That's a single point of failure
Your router is a SPOF regardless. If your router goes down you can't reach any nodes on your LAN, Tailscale or otherwise. So what is your point?
> If that's a router, the router may be compromised and eavesdrop on your connections, which you probably didn't secure as hard because it's on a VPN.
Secure your router. This is HN, not advice for your mom.
> Not to mention DDNS can create significant downtime.
Set your DNS ttl correctly and you should experience no more than a minute of downtime whenever your public IP changes.
> one node needs a public IP/port
A lot of people are behind CGNAT or behind a non-configurable router, which is an abomination.
> Secure your router
A typical router cannot be secured against physical access, unlike your servers which can have disk encryption.
> Your router is a SPOF regardless
Tailscale will keep your connection over a downstream switch, for example. It will not go through the router if it doesn't have to. If you use it for other use cases, like KDE Connect synchronizing the clipboard between phone and laptop, that will also stay up independent of your home router.
A public IP and DDNS can be impossible behind CGNAT. A VPN link to a VPS eliminates that problem.
The VPS (using wg-easy or similar solutions) will be able to decrypt traffic as it has all the keys. I think most people self-hosting are not fine with big cloud eavesdropping on their data.
Tailscale really is superior here if you use tailnet lock. Everything always stays encrypted, and fails over to their encrypted relays if direct connection is not possible for various reasons.
When I said "you just need a single public IP" I figured it was clear that I wasn't claiming this works for people who don't have a public IP.
My biggest source of paranoia is my open Home Assistant port. While it requires a strong password and is TLS-encrypted, I'm sure that one day someone will find an exploit letting them in, and then the attacker will rapidly turn my smart home devices on and off until they break or overheat the power components, start a fire, and burn down my house.
That seems like a very irrational fear. Attackers don't go around trying to break into Home Assistant to turn the lights on at some stranger's house.
There's also no particular reason to think Home Assistant's authentication has a weak point.
And your devices are also unlikely to start a fire just by being turned on and off. If that's your fear, you should replace them at once, because if they can catch fire it doesn't matter whether it's an attacker or you turning them on and off.
People are putting their whole infrastructure onto HA - cars, Apple/Google/other accounts, integrations to grid companies, managing ESP software, etc.
I think that has more potential for problems than turning lights on and off and warrants strong security.
Why expose HA to the internet? I’m genuinely curious.
I don't have a static IP, so tailscale is convenient. And less likely to fail when I really need it, as opposed to trying to deal with dynamic DNS.
Speaking of Wireguard, my current topology has all peers talking to a single peer that forwards traffic between peers (for hole punching / peers with dynamic ips).
But some peers are sometimes on the same LAN (e.g. my phone is sometimes on the same LAN as my PC). Is there a way to avoid forwarding traffic through the server peer in this case?
I guess I'm looking for wireguard's version of STUN. And now that I know what to google for, finally found some promising leads.
https://github.com/jwhited/wgsd
https://www.jordanwhited.com/posts/wireguard-endpoint-discov...
Have your network-managing software set up a default route with a lower metric than the wireguard default route, based on the wifi SSID. This can be done easily with systemd-networkd, because you can match .network file configurations on SSID. You're probably out of luck with this approach on network-setup-challenged devices like so-called smart phones.
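A sketch of what that looks like, assuming a reasonably recent systemd (interface name, SSID, and metric are placeholders):

    # /etc/systemd/network/25-home-wifi.network
    [Match]
    Name=wlan0
    SSID=MyHomeWifi

    [Network]
    DHCP=yes

    [DHCPv4]
    # lower metric than the wireguard default route, so the LAN route wins at home
    RouteMetric=100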
I don't fully understand your topology/use case. You have different peers that are "road warriors" and that sometimes happen to be on the same LAN, which is not your home LAN, and need to talk to each other? And I guess you are connecting to the other peer via DNS, so your DNS record always points to the Wireguard-provided IP?
The way I do it is to have two different first level domains. Let's say:
- w for the wireguard network.
- h for the home network.
Nothing fancy, just populate the /etc/hosts on every machine with these names.
Now, it's up to me to connect to my server1.h or server1.w depending on whether I am at home or somewhere else.
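e.g. on each machine (addresses hypothetical):

    # /etc/hosts
    10.8.0.10     server1.w
    192.168.1.10  server1.h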
Two separate WG profiles on the phone; one acting as a Proxy (which forwards everything), and one acting just as a regular VPN without forwarding.
A mesh-type wireguard network is rather annoying to set up if you have more than a few devices, and a hub-type network (on a low powered router) tends to be so slow that it necessitates falling back to alternate interfaces when you're at home. Tailscale does away with all this and always uses direct connections. In principle it is more secure than hosting it on some router without disk encryption (as the keys can be extracted via a physical attack, and a pwned router can also eavesdrop on traffic).
"Back in the day"(just few years ago) I used to expose a port for RDP on my router, on a non-standard port. Typically it would be fine and quiet for a few weeks, then I assume some automatic scanner would find it and from that point onwards I could see windows event log reporting a log in attempt every second, with random login/password combinations, clearly just looking for something that would work. I would change the port and the whole dance would repeat all over again. Tens of thousands of login attempts every day, all year round. I used to just ignore it, since clearly they weren't going to log in with those random attempts, but eventually just switched to OpenVPN.
So yeah, the lesson there is that if you have a port open to the internet, someone will scan it and try to attack it. Maybe not if it's a random game server, but any popular service will get under attack.
> someone will scan it and try to attack it. Maybe not if it's a random game server, but any popular service will get under attack.
That's fine, it's only people knocking on a closed door. You cannot host things such as email or HTTP without open ports, your service needs to be publicly accessible by definition.
Of course. A port is a door. If the service listening on a port is secure and properly configured (e.g. ssh), the whole Internet can bang on it all day, every day; they won't get through without the proper key. Same for imap, xmpp, or any other service.
But what can you expect from people who provide services but won't even try to understand how they work and how they are configured because it's "not fun enough", expecting Claude Code to do it right for them.
Asking AI to do a thing you've done 100 times before is OK, I guess. Asking AI to do a thing you've never done and have no idea how to do properly - not so much, I'd say. But this guy obviously isn't signalling his sysadmin skills but his AI skills. I hope it brings him the result he aimed for.
> I am not sure why people are so afraid of exposing ports.
Similar here, I only build & run services that I trust myself enough to run in a secure manner by themselves. I still have a VPN for some things, but everything is built to be secure on its own.
It's quite a few services on my list at this point, and I really don't want a break in one thing to lead to a break in everything. It's always possible to leave a hole in one or two things by accident.
On the other side this also means I have a Postgres instance with TCP/5432 open to the internet - with no ill effects so far, and quite a bit of trust it'll remain that way, because I understand its security properties and config now.
It's worth considering: run the PiVPN script on an Ubuntu/Debian based VM. Set it to use a non-standard random port. That will be your only port exposed to the internet.
Add the generated Wireguard key to any device (laptops, phones, etc) and access your home LAN as if it was local from anywhere in the world for free.
Works well, super easy to set up, secure, and fast.
People are not full-time maintainers of their infra though; that's very different from companies.
In many cases they want something that works, not something that requires a complex setup that needs to be well researched and understood.
Wireguard is _really_ simple in that sense though. If you're not doing anything complicated it's very easy to set up & maintain, and basically just works.
You can also buy quite a few routers now that have it built in, so you literally just tick a checkbox, then scan a QR code/copy a file to each client device, done.
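For the simple case, the whole thing is two small config files (a sketch; keys, addresses, and hostname are placeholders):

    # home server: /etc/wireguard/wg0.conf
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    PublicKey = <phone-public-key>
    AllowedIPs = 10.8.0.2/32

    # roaming device: only the server side needs a public IP/port
    [Interface]
    Address = 10.8.0.2/24
    PrivateKey = <phone-private-key>

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = home.example.com:51820
    AllowedIPs = 10.8.0.0/24
    PersistentKeepalive = 25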
This may come with its own limitations, though.
My ISP-provided router (Free, in France) has WG built-in. But other than performance being abysmal, its main pain point is not supporting subnet routing.
So if all you want is to connect your phone / laptop while away to the local home network, it's fine. If you want to run a tunnel between two locations with multiple IPs on the remote side, you're SoL.
Defence in depth. You have a layer of security even before a packet reaches your port. I might have a zero day for your service, but now I also need to breach your reverse proxy to get to it.
Tailscale works behind NAT, wireguard does not unless you also have a publicly reachable relay server which introduces its own maintenance headaches and cost.
The answer is people who don't truly understand the way it works being in charge of others who also don't in different ways. In the best case, there's an under resourced and over leveraged security team issuing overzealous edicts with the desperate hope of avoiding some disaster. When the sample size is one, it's easy to look at it and come to your conclusion.
In every case where a third party is involved, someone is either providing a service, plugging a knowledge gap, or both.
I tried wireguard and ended up giving up on it. Too many ISPs here just block it or use some kind of tech that fucks with it, and I have no idea why; I couldn't connect to my home network because it was blocked on whatever random wifi I was on.
The new problem is that my ISP now uses CGNAT and there's no easy way around it.
Tailscale avoids all that. If I wanted more control I'd probably use headscale rather than bother with raw wireguard.
And there's nothing wrong with it. That is what wireguard is meant to be - a rock-solid secure tunneling implementation that's easy to build higher-level solutions on.
Which router OS are you using? I have OpenWrt with daily auto-updates configured, plus a couple of blacklisted packages that I manually update now and then.
If you expose ports, literally everything you are hosting and every plugin is attack surface. Most of this stuff is built by single hobbyist devs on the weekend. You are also exposed to any security mistakes you make in your configuration. On my first attempt at self-hosting, I had redis compromised because I didn't realise I had exposed it to the internet with no password.
Behind a VPN, your only attack surface is the VPN, which is generally very well secured.
You exposed your redis publicly? Why?
Edit: This is the kind of service that you should only expose to your intranet, i.e. a network that is protected through wireguard. NEVER expose this publicly, even if you don't have admin:admin credentials.
I actually didn't know I had. At the time I didn't properly understand how Docker networking worked, and I exposed redis to the host so my other containers could access it. And since this was on a VPS with a dedicated IP, that exposed it to the whole internet.
I now know better, but there are still a million other pitfalls to fall into if you are not a full-time system admin. So I prefer to just put it all behind a VPN and know that it's safe.
> but there are still a million other pitfalls to fall into if you are not a full-time system admin.
Pro tip: After you configure a new service, review the output of ss -tulpn. This will tell you what ports are open. You should know exactly what each line represents, especially those that bind on 0.0.0.0 or [::] or other public addresses.
The pitfall you mentioned (Docker automatically punching a hole in the firewall for the services it manages when an interface isn't specified) is discoverable this way.
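The usual fix for that particular pitfall is binding published ports to loopback, or not publishing them at all and letting containers talk over the compose network (a docker-compose sketch):

    services:
      redis:
        image: redis:7
        ports:
          - "127.0.0.1:6379:6379"   # reachable from the host only, not 0.0.0.0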
Thanks, didn't know about this one.
Isn't GP's point about inadvertently exposing stuff? Just mention Docker networking on HN and you'll get threadfuls of comments on how it helpfully messes with your networking without telling you. Maybe redis does the same?
I mitigate this by having a dedicated machine on the border that only does routing and firewalling, with no random services installed. So anything that helpfully opens ports on internal vms won't automatically be reachable from the outside.
I have a VPS with OVH. I put Tailscale on it, and it's pretty cool to be able to install and access local (to the server) services like Prometheus and Grafana without having to expose them through the public net firewall or mess with more apache/nginx reverse proxies. (Same for individual services' /metrics endpoints that are served on a different port.)
Yggdrasil Network is probably the future. At Hoppy Network we're about to release private Yggdrasil relays as a service so you don't get spammed with "WAN" traffic. With Yggdrasil, IP addresses aren't allocated by an authority - they are owned and proven by public-key cryptography.
I did it and I was just hacked because of a CVE in my Pangolin reverse proxy! Sadly, I didn't learn of the CVE soon enough, and I only noticed when a crypto miner ran my fan at 100% all day long...
> introduce a third party like Tailscale.
Well just use headscale and you'll have control over everything.
That just moves the problem, since headscale will require a server you manage with an open port.
Sure, tailscale is nice, but from an open-port-on-the-net perspective it's probably a bit below just opening wireguard.
With ports you have dozens or hundreds of applications and systems to attack.
With tailscale / zerotier / etc the connection is initiated from inside to facilitate NAT hole punching and work over CGNAT.
With wireguard, that removes a lot of attack surface, but it wouldn't work behind CGNAT without a relay box.
This is the truth. I've been exposing 22 and 80 for decades, and nothing has happened. The ones I know who had something bad happen to them exposed proprietary services or security nightmares like wordpress.
I used to do that, but Tailscale with your own headscale server is pretty snazzy. The other thing is that with cloudflared running, your server doesn't have to be Internet-routable. Everything is tunneled.
"I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale."
It's always perplexing to me how HN commenters replying to a comment with a statement like this, e.g., something like "I prefer [choice with some degree of DIY]", will try to "argue" against it
The "arguments" are rarely, "I think that is a poor choice because [list of valid reasons]"
Instead the responses are something like, "Most people...". In other words, a nonsensical reference to other computer users
It might make sense for a commercial third party to care about what other computer users do, but why should any individual computer user care what others do (besides genuine curiosity or commercial motive)
For example, telling family, friends, colleagues how you think they should use their computers usually isn't very effective. They usually do not care about your choices or preferences. They make their own
Would telling strangers how to use their computers be any more effective
Forum commenters often try to tell strangers what to do, or what not to do
But every computer user is free to make their own choices and pursue their own preferences
NB. I am not commenting on the open ports statement
Skill issue. Not to mention the ongoing effort required to maintain and secure the service. But even before that, a lot of people are behind CGNAT. Tailscale makes punching a hole through that very easy. Otherwise you have to run your own relay server somewhere in the cloud.
put simply and fairly bluntly: because they do not know how things work.
but actually it's worse. this is HN - supposedly, most commenters are curious by nature and well versed in most basic computer stuff. in practice, it's slowly less and less the case.
worse: what is learned and expected is different from what you'd think.
for example, separating service users sure is better than nothing, but the OS attack surface as a local user is still huge, hence why we use sandboxes, which really are just OS level firewalls to reduce the attack surface.
the open port attack surface isn't terrible though: you get a bit more of the very well tested tcp/ip stack, and up to 65k ports all doing the exact same thing. not terrible at all.
Now, add to it "AI" which can automatically regurgitate and implement whatever reddit and stack overflow says.. it makes for a fun future problem - such forums will end up with mostly non-new AI content (new problem being solved will be a needle in the haystack) - and - users will have learned that AI is always right no matter what it decides (because they don't know any better and they're being trained to blindly trust it).
Heck, I predict there will be a chat where a bunch of humans argue very strongly that an AI is right while it's blatantly wrong, and some will likely put their life on the line to defend it.
Fun times ahead. As for my take: humans _need_ learning to live, but are lazy. Nature fixes itself.
Honestly the managed PKI is the main value-add from Tailscale over plain wireguard.
I’ve been meaning to give this a try this winter: https://github.com/juanfont/headscale
Tailscale does not solve the "falling behind on updates" problem, it just moves the perimeter. Your services are still vulnerable if unpatched: the attacker now needs tailnet access first (compromised device, account, or Tailscale itself).
You have also added attack surface: Tailscale client, coordination plane, DERP relays. If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.
WireGuard gives you the same "no exposed ports except VPN" model without the third-party dependency.
The tradeoff is convenience, not security.
BTW, why are people acting like accessing a server from a phone is a 2025 innovation?
SSH clients on Android/iOS have existed for 15 years. Termux, Prompt, Blink, JuiceSSH, pick one. Port N, key auth, done. You can run Mosh if you want session persistence across network changes. The "unlock" here is NAT traversal with a nice UI, not a new capability.
> BTW, why are people acting like accessing a server from a phone is a 2025 innovation?
> SSH clients on Android/iOS have existed for 15 years
That is not the point. Tailscale is not just about having a network connection; it's everything that goes with it. I used to have OpenVPN, and there's a world of difference.
- The tailscale client is much nicer and more convenient to use on Android than anything I have seen.
- The auth plane is simpler, especially for non-tech users (parents, wife) whom I want to be able to access my photo album. They are basically independent with tailscale.
- The simplicity also allows me to recommend it to friends, and we can link between our tailnets, e.g. to cross-backup our NASes.
- Tailscale can terminate SSH publicly, so I can selectively expose services on the internet (e.g. VaultWarden) without exposing my server and hosting a reverse proxy.
- ACLs are simple and user friendly.
You are listing conveniences, which is fair. I said the tradeoff is convenience, not security.
> "Tailscale can terminate SSH publicly"
You are now exposing services via Tailscale's infrastructure instead of your own reverse proxy. The attack surface moved, it did not shrink.
> Tailscale does not solve the "falling behind on updates" problem, it just moves the perimeter.
nothing 100% fixes zero days either, you are just adding layers that all have to fail at the same time
> You have also added attack surface: Tailscale client, coordination plane, DERP relays. If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.
you still have to have a vulnerable service after that. in your scenario you'd need an exploitable attack on wireguard or one of tailscale's modifications to it and an exploitable service on your network
that's extra difficulty not less
The "layers" argument applies equally to WireGuard without Tailscale. Attacker still needs VPN exploit + vulnerable service.
The difference: Tailscale adds attack vectors that do not exist with self-hosted WireGuard: account compromise, coordination plane, client supply chain, other devices on your tailnet. Those are not layers to bypass, they are additional entry points.
Regardless, it is still for convenience, not security.
I agree! Before Tailscale I was completely skeptical of self hosting.
Now I have tailscale on an old Kindle downloading epubs from a server running Copyparty. It's great!
Maybe I'm dumb, but I still don't quite understand the value-add of Tailscale over what Wireguard or some other VPN already provides. HN has tried to explain it to me but it just seems like sugar on top of a plain old VPN. Kind of like how "pi-hole" is just sugar on top of dnsmasq, and Plex is just sugar on top of file sharing.
I think you answered the question. Sugar. It's easier than managing your own Wireguard connections. Adding a device just means logging into the Tailscale client, no need to distribute information to or from other devices. Get a new phone while traveling because yours was stolen? You can set up Tailscale and be back on your private network in a couple minutes.
Why did people use Dropbox instead of setting up their own FTP servers? Because it was easier.
Yeah, but "people" here are alleged software engieners. It is quite disheartening.
First and foremost they are humans, with a limited time on Earth.
Being a software engineer doesn't mean you want to spend your free time tinkering with your self-hosting setup and doing support for your users.
With Tailscale, not only do you not have to care about most things since _it just works_, but on-boarding of casual users is straightforward.
Same goes for Plex. I want to watch movies/shows, I don't want to spend time tinkering with my setup. And Plex provides exactly that. Ditto for my family/friends that can access my library with the same simple experience as Netflix or whatever.
Meanwhile, I have a coworker who wants to own/manage everything. So they don't want to use Tailscale, and they dropped Plex when it forced them to use the third-party login system. Now they watch less than a third of what they used to, and they share their setup with nobody since it's too complicated to do.
To each their own, but my goal is to enjoy my setup and share it with others. Tailscale and Plex give me that.
There is a difference between "I choose not to" and "I cannot". The thread is full of people saying Tailscale "unlocked" self-hosting, implying capability, not time savings or time preference.
Choosing convenience is fine. But if basic port forwarding or WireGuard is beyond someone's skill set, "software engineer" is doing a lot of heavy lifting.
I am not saying they are, but if it really is the case, then yeah.
As for file sharing... I remember when non-SWEs knew how to torrent movies, used DC++ and so on. These days even SWEs have no idea how to do it. It is mind-boggling.
To me the "unlocked" is just another hyperbole used by some people, partly because they lack initial knowledge, partly because its click-bait.
The way I understand it is more like "without the ease of use provided by X, even though I could have done it, I wouldn't have done it because it would require time and energy that I'm not willing to put in".
Since we're talking about self-hosting, to me the main focus is not skill set but time and energy.
There's the same debate around NAS products like Synology that are sold with a high markup, meanwhile "every SWE should be able to make their own NAS using recycled hardware".
Sure. And I did all of this:
- homemade NAS setup
- homemade network setup
- homemade media player setup
It was fun and I learned a lot.
But I moved to some more convenient tools so that I can just use them as reliable services, and focus on other experimentations/tinkering.
To be honest, the fact that you insist that Plex is just "file sharing" that can be replaced by torrents makes me think you either don't know what Plex actually is, or you are acting in bad faith.
I did not say Plex is "just file sharing that can be replaced by torrents". Those were two separate points:
1. The "unlocked" framing implies capability, not time preference
2. General technical literacy has declined: non-SWEs used to torrent, use DC++ extensively, etc.
I was not comparing Plex to torrenting. I was observing that basic file-sharing knowledge used to be common and now is not (see Netflix et al).
> time and energy being the focus
Sure, that is fair. But that is a different claim than "Tailscale unlocked self-hosting for me" which is how it is often framed.
Okay, maybe I misunderstood what you were saying then.
But still, I insist that it's important to understand that, even if we share some similarities based on our interests/skills/work, we come from different backgrounds and have different priorities.
And part of the issue here is probably how people frame things when they write about their experience. In tech, some of us come from a world of nerds where the norm is to be matter-of-fact, while others are more extroverted and tend to put emphasis on random boring things.
Regarding this post in particular, I was more concerned about how the author was amazed by the fact that a 2025 computer could run 10 services in parallel... or that relying on a proprietary service (Claude) to manage all their setup was giving them "a strong feeling of independence".
Time savings and time preference are most definitely "unlocking." I have limited time, I have limited money, I have limited interest. Could I reinvent wheels instead of using existing software? Sure! But having that existing software definitely unlocks possibilities that would not be open to me if I were required to build, debug, test, and maintain everything I use day-to-day.
Software engineering is a broad spectrum where we can move up and down its abstraction ladder. Using off-the-shelf tools and even third-party providers is fine. I don't have to do everything from scratch - after all, I didn't write my own text editor. I'm also happy to download prepackaged and preconfigured software on my Linux distro instead of compiling and adding it to PATH manually.
I could, I just choose not to and direct my interests elsewhere. Those interests can change over time too. One day someone with Tailscale can decide to explore Wireguard. Similarly, someone who runs their own mail server might decide to move to a hosted solution and do something else. That's perfectly fine.
To me, this freedom of choice in software engineering is not disheartening. It's liberating and exciting.
That is a strawman though, and I am not sure why all replies assume extremes all the time.
Nobody said do everything from scratch. The point is: basic networking (port forwarding, WireGuard) should not be beyond someone's capability as a software engineer.
"I use apt instead of compiling" is a time tradeoff. "I can't configure a VPN" is a skill gap. These are not equivalent.
If you choose convenience for whatever reasons, that is completely fine.
"I can't configure a VPN" and "I don't want to configure a VPN" are 2 entirely different things. Mind you I have no idea how complex tailscale setup is in comparison.
I'm in the middle of setting up my own home server. I'm still deciding what, if anything, I want to expose to the internet rather than just the local network, and while setting everything up and tinkering is part of the fun for me, I get that some people just want results they can rely on. Tailscale, while not a perfect option, is still an option, and if they're fine with the risk profile I can understand sacrificing some security for it.
It seems like we do agree. :)
For a homeserver:
- SSH with key-only auth, exposed directly. This has worked for decades. Consider a non-standard port to reduce log noise (not security, just quieter logs), and fail2ban if you want (sketch at the end of this comment)
- Access internal services via SSH tunnels or just work on the box directly
- If exposing HTTP(S): reverse proxy (nginx/caddy) with TLS, rate limiting
- Databases, admin panels, monitoring - access via SSH, not public (ideally)
You do not need a VPN layer if you are comfortable with SSH. It has been battle-tested longer than most alternatives.
The fun part of tinkering is also learning what is actually necessary vs. cargo-culted advice. You will find most "security hardening" guides are overkill for a homeserver with sensible defaults.
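A sketch of that key-only setup, which is just a handful of sshd_config lines (the port is only an example, and directive names vary slightly with OpenSSH version):

    # /etc/ssh/sshd_config.d/hardening.conf
    Port 2222                          # non-standard: quieter logs, not security
    PasswordAuthentication no          # keys only
    KbdInteractiveAuthentication no    # ChallengeResponseAuthentication on older OpenSSH
    PermitRootLogin prohibit-password

    # reload, and test from a second session before closing the first
    sudo systemctl reload sshd         # the unit is "ssh" on Debian/Ubuntu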
I'd argue that no, managing your own VPN is not a basic skill - certainly not in the realms of software engineering (more like network engineering).
WireGuard is ~10 lines of config and wg genkey. Calling that "network engineering" is a stretch.
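For the skeptical, a minimal point-to-point sketch (keys, addresses, and the hostname are placeholders):

    # generate a keypair on each peer
    wg genkey | tee private.key | wg pubkey > public.key

    # /etc/wireguard/wg0.conf on the client
    [Interface]
    PrivateKey = <client private key>
    Address = 10.0.0.2/24

    [Peer]
    PublicKey = <server public key>
    Endpoint = home.example.com:51820  # your home IP or dynamic DNS name
    AllowedIPs = 10.0.0.0/24           # route only the tunnel subnet
    PersistentKeepalive = 25           # keeps NAT mappings alive

    # bring the tunnel up
    sudo wg-quick up wg0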
The siloing of basic infrastructure knowledge into "not my discipline" is part of the problem. Software gets deployed somewhere: understanding ports, keys, and routing at a basic level is not specialized knowledge.
Honestly, if 10 lines of config is "network engineering", then the bar for software engineering has dropped considerably.
I am probably in the camp of those who have found themselves overwhelmed by the amount of information about networks, and I'm an alleged software engineer (albeit without formal training in CS).
The 10 LOC is not a valid measure.
`sudo rm -rf /` is one line of code. It's not the lines that are hard to wrap your brain around; it's the implications of the lines that we are really talking about.
The rm -rf comparison is a bit dramatic. WireGuard's config is conceptually simple: your key, peer's key, endpoint, what IPs route through the tunnel. The "implications" are minimal. It is a point-to-point encrypted tunnel.
Being overwhelmed by networking basics is worth addressing regardless. It comes up constantly: debugging connectivity, deployments, understanding why your app cannot reach a database. 30 minutes with the WireGuard docs would demystify it; the concepts are genuinely simple and apply far beyond VPNs.
I have become pragmatic too. I do not tinker for the sake of it anymore. But there is a difference between choosing convenience and lacking foundational knowledge. One is a time tradeoff, the other is a gap that will bite you eventually.
And with LLMs, learning the basics is easier than ever. You can ask questions, get explanations, work through examples interactively. There is less excuse now to outsource or postpone foundational knowledge, not more[1].
At some point it is just wanting the benefits without the investment. That is not pragmatism, it is hoping the gaps never matter. They usually do.
[1] You can ask an LLM to do all of that for you and have it help you understand in under 10 minutes!
I do agree that using LLMs to demystify, learn, and explore is better advice than handing things off for them to go rogue on. That's how I used one last weekend, and that's the usage I would advocate, instead of just letting YourFavouriteAI be the sysadmin.
My problem is not just networking knowledge. I have genuinely faced issues with open source tools. Troubleshooting in the days of terrible search is also a major annoyance. Sometimes the tools have simply evolved, and the commands no longer work the way they did for someone in 2020 on some obscure forum. I remember those days of tinkering with Linux and open source where you'd rely on a Samaritan (bless their soul) who said they'd go home, check, and update you.
Claude suggested Tailscale to me too, but I'm glad we're having this conversation (thanks for the tips btw), so that we don't follow hallucinations or bad advice from similarly trained agents. I'm cautiously positive, but I think there's still a case for going self-hosted with AI assistance. I found myself looking at possibilities rather than fearing dead ends and time black holes.
Thank you for your reply!
I am glad that it is useful to you! The "terrible search + outdated forum posts" problem is real for sure. LLMs genuinely help there by synthesizing across versions and explaining what changed.
I would say that self-hosting with AI assistance is the right approach. Use it to understand, not to blindly execute. Trust me, it is not that big a deal, and you will be happy to have gone this route!
Good luck with the setup. If you have any questions, let me know, I am always happy to help.
(I have very briefly mentioned some stuff here: https://news.ycombinator.com/item?id=46586406 but I can expand and be a bit more detailed as needed.)
Can you talk a computer-illiterate relative through installing WireGuard on their device (laptop, tablet, phone) over the phone, so that they can connect to your network?
I have done that with Tailscale, most of the time was spent waiting for it to download.
Oh boy... If you've been an infra engineer, you know pretty quickly that the average software engineer can be great at writing code but not so good at managing a complex environment reliably.
Full stack is for startups and small projects.
If you're confident that you know how to securely configure and use Wireguard across multiple devices then great, you probably don't need Tailscale for a home lab.
Tailscale gives me an app I can install on my iPhone and my Mac and a service I can install on pretty much any Linux device imaginable. I sign into each of those apps once and I'm done.
The first time I set it up that took less than five minutes from idea to now-my-devices-are-securely-networked.
It’s a bit more than sugar.
1. 1-command (or step) to have a new device join your network. Wireguard configs and interfaces managed on your behalf.
2. ACLs that allow you to have fine grained control over connectivity. For example, server A should never be able to talk to server B.
3. NAT is handled completely transparently.
4. SSO and other niceties.
For me, (1) and (2) in particular make it a huge value add over managing Wireguard setup, configs, and firewall rules manually.
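For example, (2) is only a few lines in the policy file in the admin console (tag names here are made up; Tailscale ACLs are default-deny, so having no rule between server A and server B means they simply cannot talk):

    {
      "acls": [
        // laptops may reach server A over SSH; nothing else is allowed
        {"action": "accept", "src": ["tag:laptop"], "dst": ["tag:server-a:22"]}
      ]
    }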
> Plex is just sugar on top of file sharing.
right, like browsers are just sugar on top of curl
curl is just sugar on sockets ;)
SSH is just sugar on top of telnet and running your own encryption algorithms by hand on paper and typing in the results.
At least postman is :P
Tailscale is WireGuard, but it automatically sets everything up for you, handles DDNS, can punch through NAT and CGNAT, etc. It also runs WireGuard on every device, so rather than having a hub server in the LAN, it connects every device directly to every other. That's particularly helpful when it's not just one LAN you're trying to connect but lots of devices in different places.
> Kind of like how "pi-hole" is just sugar on top of dnsmasq, and Plex is just sugar on top of file sharing.
Speaking of that, I have always preferred a plain Unbound instance and a Samba server over fancier alternatives. I guess I like my setups extremely barebone.
Yea, my philosophy for self-hosting is "use the smallest amount of software you can in order to do what you really need." So for me, sugar X on top of fundamental functionality Y is always rejected in favor of just configuring Y.
Managing the wg.conf is a colossal PITA, especially if I'm trying to provision a new client and don't have access to my main laptop. It's crying out for a CRUD app on top of it, and I think Tailscale is basically that plus a little. The value add seems obvious.
Also plex is way more than sugar on top of file sharing; it's like filesharing, media management, and a CDN rolled into one product. Soulseek isn't going to handle transcoding for you.
I use Tailscale for exactly those reasons, plus the easy SSL certificates and clients for Android and iOS.
From this thread, I've learned about Pangolin:
https://github.com/fosrl/pangolin
Which seems very compelling to me too. If it has apps that let various devices connect to the VPN, it might be worth trialing it instead of Tailscale...
If Plex is "just file sharing" then I guarantee you'd find Tailscale "just WireGuard".
I enjoy that relative "normies" can depend on it/integrate it without me having to go through annoying bits. I like that it "just works" without requiring loads of annoying networking.
For example, my aging mother just got a replacement computer and I am able to make it easy to access and remotely administer by just putting Tailscale on it, and have that work seamlessly with my other devices and connections. If one day I want to fully self-host, then I can run Headscale.
I always assumed it was because a lot of ISPs use CGNAT and using tailscale servers for hole punching is (slightly) easier than renting and configuring a VPS.
It's plug and play.
And some people may not value that but a lot of people do. It’s part of why Plex has become so popular and fewer people know about Jellyfin. One is turnkey, the other isn’t.
I could send a one page bullet point list of instructions to people with very modest computer literacy and they would be up and running in under an hour on all of their devices with Plex in and outside of their network. From that point forward it’s basically like having your own Netflix.
You don't have to run the control plane, and you don't have to manage DNS & SSL certs for the DNS entries. Additionally, the RBAC is pretty easy.
All these are manageable through other tools, but it's a more complicated stack to keep up.
Tailscale is able to punch holes through CGNAT, which vanilla WireGuard cannot.
Setting up wireguard manually can be a pain in the butt sometimes. Tailscale makes it super easy but then your info flows through their nodes.
Yes, that is really all it is.
Is Tailscale still recording metadata about all your connections? https://github.com/tailscale/tailscale/issues/16165
Just be sure to run it with --accept-dns=false, otherwise you won't have any outbound Internet on your server if you ever get logged out. That was annoying to find out (but easy to debug with Claude!)
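Concretely, that's:

    # keep the system resolver instead of Tailscale's MagicDNS
    sudo tailscale up --accept-dns=false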
It's especially important in the CGNAT world that has been created, given the enormous slog that the IPv6 rollout has ultimately become.
Yeah, same story for me. I did not trust random self-hosting apps with no real security team with my sensitive data. But now I can put the entire server on the local network only and split-tunnel a VPN from my devices, and it just works.
LLMs are also a huge upgrade here since they are actually quite competent at helping you set up servers.
> The biggest reason I had not to run a home server was security: I'm worried that I might fall behind on updates and end up compromised.
In my experience this is much less of an issue depending on your configuration and what you actually expose to the public internet.
OS-side, as long as you pick a good server OS (for me that's Rocky Linux), you can safely update once every six months.
Application-wise, I try to expose as little as possible to the public internet, and everything exposed runs in an unprivileged podman container. Random test stuff is only exposed within the VPN.
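As a sketch, run as a regular user and bound to localhost only (the image, name, and ports here are placeholders):

    # rootless container; reachable only from the box itself (and the VPN via the host)
    podman run -d --name myapp -p 127.0.0.1:8080:80 docker.io/library/nginx:stable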
Also, Tailscale is not even a hard requirement: I run OpenVPN and that works as well, on my iPhone too.
The truly differentiating factor is methodological, not technological.
People are way too worried about security, imo. Statistically, no one is targeting you specifically to be hacked. By the time you are important and valuable enough for your home equipment to be a target, you would have hired someone else to manage this for you.
Oh, sure, no one is targeting me specifically.
It's only swarms of bots and scripts going through the entire internet, me included.
iptables and fail2ban should be installed pretty early, and then just watch the logs.
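A minimal fail2ban jail for sshd is only a few lines (the numbers are just a starting point):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    maxretry = 5       # failed attempts before a ban
    findtime = 600     # ...within this window (seconds)
    bantime  = 3600    # ban duration (seconds)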
I think this is a very dangerous perspective. A lot of attacks on infra are automated; just expose a Windows XP machine to the internet for a day and see how much malware you end up with. If you leave your security unchecked, you will end up attacked, not by someone targeting you specifically, but having all your data encrypted for ransom will still create a problem for you (even if the attacker doesn't care about YOUR data specifically).
Once, when I was young and inexperienced, I left a server exposed to the Internet by accident (I accidentally exposed a user with username postgres, password postgres). In hours the machine had been hacked to run a botnet. Was I stupid? Yes. But I absolutely wasn't a high-profile enough person to "be a target" - clearly someone was just scanning IP addresses.
Crying inside after a crypto miner took over my VM this past week.
Now I wish there was some kind of global, single-network version of Tailscale...
TS is cool if you have a well-defined security boundary. This is you / your company / your family, they should have access. That is the rest of the world, they should not.
My use case is different. I do occasionally want to share access to otherwise personal machines around. Tailscale machine sharing sort of does what I want, but it's really inconvenient to use. I wish there was something like a Google Docs flow, where any Tailscale user could attempt to dial into my machine, but they were only allowed to do so after my approval.
You have more or less described OpenZiti. Just mint a new identity/JWT for the user, create a service, and voilà: only that user has access to your machine. Fully open source and self-hostable.
Take a look at Zrok it might be what you want: https://zrok.io
These are two very separate issues. Tailscale or other reverse proxies will give you access from the WAN.
Claude Code or other assistants will give you conversational management.
I already do the former (using Pangolin). I'm building towards the latter, but first I need to be 100% sure I can have perfect rollback and containment across the full stack CC could influence.
I've started experimenting with Claude Code, and I've decided that it never touches anything that isn't under version control.
The way I've put this into practice is that instead of letting Claude loose on production files and services, I keep a local repo containing copies of all my service config files, with a CLAUDE.md file explaining what each is for, the actual host each file/service lives on, and other important details. If I want to experiment with something ("Let's finally get around to planning out and setting up kea-dhcp6!"), Claude makes its suggestions and changes in my local repo, and then I manually copy the config files to the right places, restart services, and watch to see if anything explodes.
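The copy/restart step stays deliberately manual and boring, something like this (the hostname and paths are hypothetical):

    # after reviewing Claude's diff in the local repo
    scp nginx/nginx.conf homeserver:/tmp/nginx.conf
    ssh homeserver 'sudo install -m 644 /tmp/nginx.conf /etc/nginx/nginx.conf \
        && sudo nginx -t && sudo systemctl reload nginx'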
Not sure I'd ever be at the point of trusting agentic AI to directly modify in-place config files on prod systems (even for homelab values of "prod").
I just have a VPN server on my fiber modem/router (EdgeRouter 4) and use VPN clients on my devices. I actually have two VPN networks: one that can see the rest of my home network (and server), and one that is completely isolated, can't see anything else, and only does routing. No need to use a third party, and I have more flexibility.
Tailscale is a good first step, but it's best to configure WireGuard directly on your router. You can try Headscale, but it seems to be more of a hobby project, so native WireGuard is the only viable path. Most router OSes support WireGuard these days too. You can ask Claude to sanity-check your configuration.
Besides the company that operates it, what is the big difference between Tailscale and Cloudflare tunnels? I've seen Tailscale mentioned frequently but I'm not quite sure what it gets for me. If it's more like a VPN, is it possible to use on an arbitrary device like a library kiosk?
I don't use Cloudflare tunnels for anything.
But Tailscale is just a VPN (and by VPN, I mean something more like "connect to the office network" than "NordVPN"). It provides a private network on top of the public network, so that member devices of that VPN can interact together privately.
Which is pretty great: It's a simple and free/cheap way for me to use my pocket supercomputer to access my stuff at home from anywhere, with reasonable security.
But because it happens at the network level, you (generally) need to own the machines that it is configured on. That tends to exclude using it in meaningful ways with things like library kiosks.
You can self-host a Tailscale network entirely on your own, without making a single call to Tailscale Inc.
Your Cloudflare tunnel availability depends on Cloudflare's mood of the day.
There are also Cloudflare tunnels for stuff that you want to be available to the internet without opening ports and dealing with that. You can add an auth policy that only works with your email and GitHub/whatever SSO.
With subpath routing and fail2ban, I'm very comfortable exposing my home setup to the world.
The only thing served on / is a hello-world nginx page. For everything else, you need to know the randomly generated subpath.
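In nginx terms it's roughly this, inside the server block (the subpath here is obviously made up):

    # decoy on /, real service behind a random path
    location / {
        return 200 "hello world\n";
    }
    location /9f3a2b7c-myapp/ {
        proxy_pass http://127.0.0.1:8080/;
    }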
Why not Cloudflare tunnels?
Even behind a tunnel, if you happen to be running an older version of a service (like Immich) with a known exploit, you are still vulnerable to attacks. Tailscale sidesteps this by keeping the service completely "invisible" to the outside world, so the two don't quite compare in my view.
CF tunnels are game changers for me!
Definitely, but to be fair, beyond that it's just Linux. Most people would need Claude Code to get whatever they want to use Linux for running reliably (systemd services, etc.).
I'm still waiting for ECC mini PCs; then I'll go all in on local DBs too.
Supermicro has some low-power options, such as https://www.supermicro.com/en/products/system/Mini-ITX/SYS-E...