> We rebuilt key AWS features ourselves
At what cost? People usually exclude the cost of DIY-style hosting, which is usually the most expensive part. Providing 24x7 support for the stuff you've home-grown alone is probably going to make a large dent in any savings you get by not outsourcing that to Amazon.
> $24,000 annual bill felt disproportionate
That's around 1-2 months of time for a decent devops freelancer. If you underpay your devs, about 1/3rd of an FTE per year. And you are not going to get 24x7 support with such a budget.
This still could make sense. But you aren't telling the full story here. And I bet it's a lot less glamorous when you factor in development time for this.
Don't get me wrong; I'm actually considering making a similar move, but more for business reasons (some of our German customers really don't like US hosting companies) than for cost savings. But this will raise cost and hassle for us, and I will probably need some reinforcements on my team. As the CTO, my time is a very scarce commodity, so the absolute worst use of my time would be doing this myself. My focus should be making our company and product better. Your tech stack is fine. Been there, done that. IMHO Terraform is overkill for small setups like this; it fits solidly in the YAGNI category. But I like Ansible.
> Providing 24x7 support for the stuff that you've home grown alone is probably going to make large dent into any savings you got by not outsourcing that to amazon.
I don’t understand why people keep propagating this myth, which is mostly pushed by the marketing departments of Azure, AWS and GCP.
The truth is that cloud providers don’t actually provide 24/7 support for your app. They only ensure that their infrastructure is mostly running, for a very loose definition of 24/7.
You still need an expert on board to ensure you are using them correctly and are not going to be billed a ton of money. You still need people to ensure that your integration with them doesn’t break on you and that’s the part which contains your logic and is more likely to break anyway.
The idea that your cloud bill is your TCO is a complete fabrication and that’s despite said bill often being extremely costly for what it is.
I think both things are true - people overestimate the level of support provided by AWS, but also re-building the laundry list of stuff OP did in-house to save $24k/year seems onerous.
But the idea that AWS provides some sort of white glove 24/7 support is laughable for anyone that's ever run into issues with one of their products...
The only incident where we needed AWS support, we had an engineer on a call with us for several hours (including a shift change on their end). Might’ve been a one-off, but their support has seemed pretty phenomenal (I also talked with someone who worked there and they thought it was good).
If you pay for enterprise support they will absolutely stay on a call with you during a production outage. Best support I've seen from any of the vendors I've used.
I love the fact that AWS was willing to make kernel updates to support our use cases within a week of flagging it
It’s mostly stuff they would have done on AWS anyway, and probably with crappy tools/reproducibility.
Why would cloud providers support anything more than their infrastructure?
The core question is what is the value they bring compared to what they cost.
You will definitely get support reasonably fast if something breaks because of them but that’s not where breakage happens most of the time. The issue will nearly always be with how you use the services. To fix that, you need someone who understands both the tech you use and how it’s offered by your cloud provider. At which point, you have an expert on board anyway so what’s the point of the huge bill?
A hosting provider will cost you less for most of the benefits. They already offer most of the building blocks required for easy scalability.
The hefty bill is for things like RDS, IAM, Systems Manager and all the other tools they have. Rebuilding and supporting these is a non-trivial exercise.
It is more trivial than it seems. How did people manage a Postgres instance prior to RDS? Of the entire feature list, what parts of RDS do you use?
1. Dumping a backup every so often?
2. Exporting its performance via Prometheus, and displaying in a dashboard?
3. Machine disk usage via Prometheus?
4. An Ansible playbook for recovery? Maybe kicking that into effect with an alert triggered from bullets 2 and 3.
5. Restoring the database that you backed up into your staging env, so you get a recurring, frequent check of its integrity.
This would be around 100 to 500 lines of code, which an LLM can write for you (a rough sketch follows below).
What am I missing?
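For reference, here is a minimal sketch of items 1, 4 and 5 from that list, assuming a plain Postgres box and any S3-compatible object store (every hostname, bucket name and DSN below is a made-up placeholder); a cron job or systemd timer would run it nightly:

    #!/usr/bin/env python3
    # Sketch only: dump Postgres, ship the dump to S3-compatible object storage,
    # then restore it into staging as a recurring integrity check.
    # All hostnames, bucket names and DSNs are placeholders.
    import datetime
    import subprocess

    import boto3  # speaks to any S3-compatible endpoint, not just AWS

    ENDPOINT = "https://objectstorage.example.com"             # placeholder endpoint
    BUCKET = "db-backups"                                       # placeholder bucket
    PROD_DSN = "postgresql://backup_user@db-prod/app"           # placeholder
    STAGING_DSN = "postgresql://admin@db-staging/postgres"      # placeholder

    def dump_and_upload() -> str:
        """Item 1: dump the production database and upload the dump."""
        key = f"app-{datetime.date.today().isoformat()}.dump"
        subprocess.run(
            ["pg_dump", "--format=custom", f"--dbname={PROD_DSN}", f"--file=/tmp/{key}"],
            check=True,
        )
        boto3.client("s3", endpoint_url=ENDPOINT).upload_file(f"/tmp/{key}", BUCKET, key)
        return key

    def restore_into_staging(key: str) -> None:
        """Item 5: pull the dump back down and restore it into staging."""
        boto3.client("s3", endpoint_url=ENDPOINT).download_file(BUCKET, key, f"/tmp/{key}")
        subprocess.run(
            ["pg_restore", "--clean", "--if-exists", f"--dbname={STAGING_DSN}", f"/tmp/{key}"],
            check=True,
        )

    if __name__ == "__main__":
        restore_into_staging(dump_and_upload())

Roughly speaking, items 2 and 3 are node_exporter/postgres_exporter scrape configs, and the alert-driven recovery in item 4 is an alert rule plus the playbook, which is where the rest of the 100-500 lines would go.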
There is a lot more:
- Aurora to handle our spiky workload (can grow 100x from normal levels at times)
- Zero-ETL into Redshift
- Slow query monitoring, not just metrics but actual query source
- Snapshots to move production data into staging to test queries
Besides this we also use:
- ECS to autoscale the app layer
- S3 + Athena to store and query logs
- Systems Manager to avoid managing SSH keys
- IAM and SSO to control access to the cloud
- IoT to control our fleet of devices
I’ve never seen how people operate complex infrastructures outside of a cloud. I imagine that using VPSs I would have a dedicated DevOps engineer acting as a gatekeeper to the infrastructure, or I’d get a poorly integrated and insecure mess. With the cloud I have teams rapidly iterating on the infrastructure without waiting on any approvals or reviews. Real-life scenario:
1. Let's use DMS + PG with partitioned tables + Athena
2. A few months later: let's just use Aurora read replicas
3. A few months later: let's use DMS + Redshift
4. A few months later: Zero-ETL + Redshift
I imagine a DevOps engineer would be quite annoyed by such back and forth. Plus, they're busy keeping all the software up to date.
> I’ve never seen how people operate complex infrastructures outside of a cloud
That’s your issue. If all you have is a hammer, everything looks like a nail.
I have the same issue with the juniors we hire nowadays. They have been so brainwashed with the idea that the cloud is the solution and that they can't manage without it that they have no idea what to do other than reach for it.
> I imagine that using VPS I would have a dedicated dev. ops acting as a gatekeeper to the infrastructure or I’ll get a poorly integrated and insecure mess.
You just described having a real mess after this.
> I imagine a dev. ops would be quite annoyed by such back and forth.
I would be quite annoyed by such back and forth even on the cloud. I don’t even want to think about the costs of changing so often.
>That’s your issue. If all you have is a hammer, everything looks like a nail.
While I admit a lack of experience at scale, I have had my share of Linux admin experience and understand how it could be done. My point is that building a comparable environment without the cloud would be much more than just 500 LoC. If you have relevant experience, please share.
>I would be quite annoyed by such back and forth even on the cloud. I don’t even want to think about the costs of changing so often.
In the cloud it took 1-2 weeks per iteration, with several months in between during which we used the solution. One person did it all; nobody on the team even noticed. Being able to iterate like this is valuable.
I wanted to comment on this but mistakenly put the answer here. Sorry.
>What you see as “rapid iteration” looks a lot like redoing the same work every few months because of shifting cloud-native limitations.
This is not the case. The reason for the iteration is searching for a solution in a space we don't know well enough. In this particular case, the cloud made iteration cheap enough to be practical.
I asked you to think about what it would take to build a well-integrated suite of tools (PG + backups + snapshots + Prometheus + logs + autoscaling for DB and API + SSH key management + SSO into everything). It is a good exercise; if you have ever built and maintained such a suite with uptime and ease of use comparable to AWS, I genuinely would like to hear about it.
In the case of AWS: customer obsession.
It's their first "leadership principle" (their sort of corporate religion, straight out of the lips of Jeff himself)
You’re describing exactly the kind of vendor lock-in treadmill I was trying to avoid. What you see as “rapid iteration” looks a lot like redoing the same work every few months because of shifting cloud-native limitations.
Also, the idea that using VPS or non-hyperscaler clouds means “poorly integrated and insecure mess” feels like AWS marketing talking. Good ops doesn’t mean gatekeepers — it means understanding your system so you don’t need to swap out components every quarter because the last choice didn’t scale as promised.
I’d rather spend time building something stable that aligns with my compliance and revenue goals, than chasing the latest AWS feature set. And by the way, someone still has to keep all that AWS software up to date — you’ve just outsourced it and locked yourself into their way of doing it.
It makes sense if you consider there is a risk you might get kicked out by AWS because the US government forces Amazon to close your account. The US is also hinting at going to war against Europe (Greenland), which makes it a bad idea to have any connection to the US.
... and the US just made the EU very unhappy by killing the ICC's Microsoft subscription. Which by the way was hosted on Azure in Europe (meaning local or "sovereign cloud" or whatever they call it provides exactly zero protection against US sanctions).
So no more Microsoft software then?
The EU isn't willing to pay for that. They'll just throw the ICC under the bus, just like they'll throw any EU company that the US sanctions under the bus. That costs less. The EU has a nice name for throwing people under the bus like this: it's called "the peace dividend".
AWS features may be expensive to replicate 100%, but what if one only needs 80%? One also needs to consider the effort involved in configuring AWS and maintaining the skills for that. Then there are the opportunity costs of using e.g. AWS dashboards vs. better ones with Grafana etc.
I guess a lot depends on size, diversity and dynamics of the demand. Not every nail benefits from contact with the biggest hammer in the toolbox.
> AWS features may be expensive to replicate 100% but what if one only needs 80%.
You are correct, but I think you're missing the point: my 80% and your 80% don't overlap completely.
> $24,000 annual bill felt disproportionate
>> That's around 1-2 months of time for a decent devops freelancer. If you underpay your devs, about 1/3rd of an FTE per year. And you are not going to get 24x7 support with such a budget.
In terms of absolute savings, we’re talking about 90% of 24k, that’s about 21.6k saved per year. A good amount, but you cannot hire an SRE/DevOps Engineer for that price; even in Europe, such engineers are paid north of 70k per year.
I personally think the TCO (total cost of ownership) will be higher in the long run, because now every little bit of the software stack has to be managed by their infra team/person, and things are getting more and more complex over time, with updates and breaking changes to come. But I wish them well.
In mid-sized companies, creating/using/maintaining AWS resources nevertheless requires one or more teams of DevOps/SRE engineers.
In my experience, in the long run, this "managed AWS saved us because we didn't need people" line always feels like the typical argument made by SaaS salespeople. In reality, many services/SaaS products are really expensive, and you will probably only need a few features, which you can sometimes roll out yourself.
The initial investment might be higher, but in the long run I think it's worth it. It's a lot like Heroku vs AWS: super expensive, but it allows you, with little knowledge, to push a POC into production. In this case, it's AWS vs self-hosted or whatever.
Finally, can we quantify the cost of data/information? This company seems to be really "using" this strategy (= everything home-made, you're safe with us) for sales purposes. And it might work, although the final consumer might pay a higher price for it, which ultimately pays for the additional DevOps work to maintain the system. So who cares?
How important is it for companies not to be subject to the CLOUD Act or funny stuff like that?
70k? Just hire in Poland/Czechia/Slovakia for 50% off!
Unless by Europe you mean the Apple feature availability special of UK/Germany/France/Spain/Italy
Spain and Italy are closer to the Poland bracket than the UK/Germany one, possibly even lower for some roles.
When I got hired by a very big conglomerate here in Sweden, he said Sweden and Poland are amongst the cheapest in Europe for developer salaries, and I would think devops will be close.
You can easily find a decent devops here (not Google-level, no) for much less than 70k I would say, especially if they're under 30 or so.
My colleagues were talking about salaries in the range of $40-60k... About 8-10 years ago. And I don't think it got any cheaper
Still, it’s highly location-dependent, and mileage varies drastically between countries.
I’m an SWE with a background in maths and CS in Croatia, and my annual comp is less than what you claim here. Not drastically, but comparing my comp to the rest of the EU it’s disappointing, although I am very well paid compared to my fellow citizens. My SRE/devops friends are in a similar situation.
I am always surprised to see such a lack of understanding of economic differences between countries. Looking through Indeed, a McDonald’s manager in the US makes noticeably more than anyone in software in southeast Europe.
As I wrote elsewhere in this thread:
Being able to stay compliant and protect revenue is worth far more than quibbling over which cloud costs a little less, or how much a monthly salary for an employee is in various countries.
The real ratio to look at is cloud spend vs. revenue.
For me, switching from AWS to European providers wasn’t just about saving on cloud bills (though that was a nice bonus). It was about reducing risk and enabling revenue. Relying on U.S. hyperscalers in Europe is becoming too risky — what happens if Safe Harbor doesn’t get renewed? Or if Schrems III (or whatever comes next) finally forces regulators to act?
If you want to win big enterprise and governmental deals, then you've got to do whatever it takes, and being compliant and in charge is a huge part of that.
Only if you want to hire students. Experienced senior engineers have pretty much the same 70k+ price tag in Poland.
You won’t find anyone competent for that kind of money there.
I think many European countries have SREs for lower than 70k. How good they are is hard to judge. Our DevOps engineer likely earns less, but she is just decent, not Google-level.
Isn’t $24k also a naive accounting of the annual cost of AWS in this case? What FTE-equivalent was required to set up the services they use at AWS? What FTE-equivalent is required to keep the annual bill down to $24k from say $48k or $100k?
Before migration (AWS): we had about 0.1 FTE on infra — most of the time went into deployment pipelines and occasional fine-tuning (the usual AWS dance).
After migration (Hetzner + OVHcloud + DIY stack): after stabilizing it is still 0.1 FTE (though I was at 0.5 FTE for 3-4 months), but now it rests with one person. We didn't hire a dedicated ops person.
I am curious why you think AWS services are more hands-off than a series of VPSs configured with Ansible and Terraform? Especially if you are under ISO 27001 and need to document upgrades anyway.
I was emphasizing that if the new Hetzner expenses are naive, then it was also naive to consider that AWS only costs $24k per year.
My point was that AWS is not hands-off. You still have to set it up, you have to keep a close eye on expenses, and Amazon holds your hand less than many people seem to expect.
> That's around 1-2 months of time for a decent
Presumably they are in Europe? So labour is a few times cheaper here.
> Providing 24x7 support
They are not maintaining the hardware itself, and it's not like Amazon is providing devops for free. Unless you are mainly using serverless stuff, the difference might not be that significant.
Amazon’s effort in making sure things _actually are up_ is fundamentally different than budget clouds.
The systems you design when you have reliable queues, durable storage, etc. are fundamentally different. When you go this path you’re choosing to solve problems that are “solved” for 99.99% of business problems and own those solutions.
Still, things fail. A-tier clouds also fail, and you may still have to design for it. Rule of thumb, if you are capable of rolling out your own version, you'll be far more competent planning for & handling downtime, and will often have full ownership of the solution.
Also, any company with strict uptime requirements will have proper risk analysis in place, outlining the costs of the chosen strategy in case of downtime; these decisions require proper TCO evaluation and risk analysis, they aren't made in a vacuum.
This is a strangely limited view. Cloud providers have done the work of building fault-tolerant distributed systems for many of the _primitives_ with large blast radius on failure.
For example, you'd be hard pressed to find a team building AWS services who is not using SQS and S3 extensively.
Everyone is capable of rolling their own version of SQS. Spin up an API, write a message to an in memory queue, read the message. The hard part is making this system immediately interpretable and getting "put a message in, get a message out" while making the complexities opaque to the consumer.
There's nothing about rolling your own version that will make you better able to plan this out -- many of these lessons are things you only pick up at scale. If you want your time to be spent learning these, that's great. I want my time to be spent building features my customers want and robust systems.
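For concreteness, the "naive" version mentioned above really is about this small. A toy in stdlib Python, with everything that makes real SQS hard (durability, redelivery, visibility timeouts, multiple nodes) deliberately missing:

    import queue
    from http.server import BaseHTTPRequestHandler, HTTPServer

    MESSAGES: "queue.Queue[bytes]" = queue.Queue()  # in-memory, gone on restart

    class QueueHandler(BaseHTTPRequestHandler):
        def do_POST(self):  # put a message in
            length = int(self.headers.get("Content-Length", 0))
            MESSAGES.put(self.rfile.read(length))
            self.send_response(201)
            self.end_headers()

        def do_GET(self):  # get a message out, or 204 if the queue is empty
            try:
                body = MESSAGES.get_nowait()
            except queue.Empty:
                self.send_response(204)
                self.end_headers()
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), QueueHandler).serve_forever()

The gap between this and something you would bet production on is exactly the hard part being described.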
I see where you’re coming from — no doubt, services like SQS and S3 make it easier to build reliable, distributed systems without reinventing the wheel. But for me, the decision to shift to European cloud providers wasn’t about wanting to build my own primitives or take on unnecessary complexity. It was about mitigating regulatory risk and protecting revenue.
When you rely heavily on U.S. hyperscalers in Europe, you’re exposed to potential disruptions — what if data transfer agreements break down or new rulings force major changes? The value of cloud spend, in my view, isn’t just in engineering convenience, but in how it helps sustain the business and unlock growth. That’s why I prioritized compliance and risk reduction — even if that means stepping a little outside the comfort of the big providers’ managed services.
> For example, you'd be hard pressed to find a team building AWS services who is not using SQS and S3 extensively
I design and develop products that rely on queuing systems and object storage; whether it's SQS or S3 is an implementation detail (although S3 is also a de-facto standard). Some of those products may rely on millions of very small objects/messages; some of them may rely on fixed-size multi-MB blocks. Knowing the workload, you can often optimize it in a non-trivial way, instead of just using what the provider has.
> The hard part is making this system immediately interpretable and getting "put a message in, get a message out" while making the complexities opaque to the consumer.
Not really, no. As you said, it is already a solved problem. AWS obviously has different scale requirements than my reality, but by having ownership I also have only a fraction of the problems.
> There's nothing about rolling your own version that will make you better able to plan this out -- many of these lessons are things you only pick up at scale.
I cannot agree with you on this. As an example, can you tell me which isolation level is guaranteed on an Aurora instance? And what if it is a multi-zone cluster? (If you can, kudos!) The next question is: are developers aware of this?
If you have done even cursory solution design, you will know how important mastering the above questions is for the development workflow.
Fred Brooks, the author of The Mythical Man-Month said:
> “Software is ten times easier to write than it was ten years ago, and ten times as hard to write as it will be ten years from now.”
Ansible, Hetzner, Prometheus and object storage will give you RDS if you prompt an LLM, or at least give you the parts of RDS that you need for your use case for a fraction of the cost.
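As a toy illustration of the "alert kicks off the recovery playbook" glue from the list up-thread: poll Prometheus's HTTP query API for free disk space on the DB host and hand off to an Ansible playbook. Metric labels, hostnames and the playbook path are placeholders, and a real setup would use Alertmanager webhooks rather than a polling script:

    import json
    import subprocess
    import urllib.parse
    import urllib.request

    PROM = "http://prometheus.internal:9090"  # placeholder Prometheus address
    QUERY = (
        'node_filesystem_avail_bytes{instance="db1",mountpoint="/var/lib/postgresql"}'
        ' / node_filesystem_size_bytes{instance="db1",mountpoint="/var/lib/postgresql"}'
    )
    THRESHOLD = 0.10  # act when less than 10% of the DB volume is free

    def free_ratio() -> float:
        url = PROM + "/api/v1/query?" + urllib.parse.urlencode({"query": QUERY})
        with urllib.request.urlopen(url) as resp:
            result = json.load(resp)["data"]["result"]
        return float(result[0]["value"][1]) if result else 1.0

    if __name__ == "__main__":
        if free_ratio() < THRESHOLD:
            # hand off to the recovery playbook (placeholder path)
            subprocess.run(["ansible-playbook", "playbooks/db-recover.yml"], check=True)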
Hetzner is also working on their own managed database (RDS-like) offering. Their own S3-compatible offering is also relatively new. A while back, they also had job openings for DB experts.
Your implicit assumption that AWS requires less (expensive) labour is just not true.
Exactly our insight, having maintained the same app in both places.
I have helped hundreds of people migrate to AWS and never had a single person spend more effort, unless they went for an apples-to-apples disaster. I have only seen this when people take a high-overhead tool they don't understand (e.g. k8s) and move to cloud services they don't understand.
> Don't get me wrong; I'm actually considering making a similar move but more for business reasons (some of our German customers really don't like US hosting companies) than for cost savings
There will be a new AWS European Sovereign Cloud[1] with the goal of being completely US independent and 100% compliant with EU law and regulations.
[1]: https://www.aboutamazon.eu/news/aws/aws-plans-to-invest-7-8-...
> There will be a new AWS European Sovereign Cloud[1] with the goal of being completely US independent
The idea that anything branded AWS can possibly be US independent when push comes to shove is of course pure fantasy.
It's not really the brand that's the problem.
If Amazon partnered with an actually independent European company, provided the software and training, and the independent company set it up and ran it; in case of dispute, Amazon could pull the branding and future software updates, but they wouldn't be able to access customer data without consent and assistance of the other company and the other company would be unlikely to provide that for requests that were contrary to European law. It would still be branded AWS for Europe, and nobody would doubt its independence.
This way, where it's all subsidiaries of Amazon, can't be trusted though.
I can guarantee that if you read your comment with Amazon substituted for Huawei, you'd object against the okayness of that arrangement. Same thing.
But it will check boxes on compliance checklist.
Not on the “political concerns” checklist which is getting more and more important
I don't know, with that argument you can argue that everything is dependent on everything, for instance, the EU automobile industry is hugely dependent on materials and chips from all over the world including US and thus real independence is a pipe dream.
This is one of the reasons we were wondering if the US can switch off our fighter jets. The ones we own, bought from the US.
The US clearly state that extraterritoriality is fine with them. Depending on the company, one gag order is enough to sabotage a whole company.
> I don't know, with that argument you can argue that everything is dependent on everything
It is. And China has been the only one intelligent enough to have understood this a very long time ago. They also show that while full independence at their scale may be a pipe dream, getting close to it is feasible.
Of course, you know the US government can use many methods to enforce their demands. It makes no sense to use an Amazon alternative to Amazon; it's nonsense to join a conversation about migrating away from Amazon and suggest that.
Our customers across the EU (hospitals) are not impressed or interested (n=175). Such a delusional project.
The ICC move by MS made hospitals go into an even higher gear to prepare off-ramp plans. From private Azure cloud to "let's get out".
There are still US AWS people the US gov can apply pressure to. Sovereignty requires having nothing on US soil: no US people, infrastructure, entities, etc. What Microsoft and AWS are doing is performance art around "EU sovereignty."
Even if you are cloud native, it makes sense to have scaffolding to allow for vendor mitigation, unless you want to tie your entire company's future to the whims of a single company.
Monitoring and persistence layers are cross-cutting and already an abstraction with an impedance mismatch.
You don't need full-blown SOA2 systems, just minimal scaffolding to build on later.
Even if you stick to AWS for the remainder of time, that scaffolding will help when you grow, AWS services change, or you need a multi cloud strategy.
As a CTO, you need to also de-risk in the medium and longer term, and keeping options open is a part of that.
Building tightly coupled systems with lots of leakage is stepping over dollars to pick up pennies unless selling and exiting is your plan for the organization.
The author doesn't mention what they had to write, but typically it is cloud provider implementation details leaking into your code.
Just organizing ansible files in a different way can often help with this.
If I was a CTO who thought this option was completely impossible for my org, I would start on a strategic initiative to address it ASAP.
Once again, you don't need to be able to jump tomorrow, but to me the belief that a vendor has you locked in would be a serious issue.
90% sounds good but the real dollar amount feels low.
Two reasons for this stick out:
- Are the multi-million dollar SV seed rounds distorting what real business costs are? Counting dev salaries etc. (if there is at least one employee) it doesn't seem worth the effort to save $20k - i.e., 1/5 of a dev salary? But for a bootstrapped business $20k could definitely be existential.
- The important number would be the savings as percent of net revenue. Is the business suddenly 50% more profitable? Then it's definitely worth it. But in terms of thinking about positively growing ARR doing cost/benefit on dropping AWS vs. building a new (profitable) feature I could see why it might not make sense.
Edit to add: it's easy to offhand say "oh yeah easy, just get to $2M ARR instead of saving $20k- not a big deal" but of course in the real world it's not so simple and $20k is $20k. The prevalent SV mindset of just spending without thinking too much about profitability is totally delusional except for like 1 out of 10000 startups.
From the blog post: "We are a Danish workforce management company doing employee scheduling." Definitely not a VC-funded SV startup. Probably bootstrapped.
Yes, bootstrapped with our own money. It makes a difference.
If I generalize, I see two kinds of groups for whom this reduction of cost does not matter. The first group is VC-funded, and the second group is in charge of a million-plus AWS bill. We do not have anything in common with these companies, but we have something in common with 80% of the readers on this forum and 80% of AWS clients.
It was cool reading your article.
We're also bootstrapped and use Hetzner, not AWS (except for the occasional test), for very much the same reasons as you.
And we are also fully infrastructure as code using Ansible.
We used to be a pure software vendor, but are bringing out a devtool where the free tier runs on Hetzner. But with traction, as we build out higher tier services, it's an open question on what infrastructure to host it on.
There are a kazillion things to consider, not the least of which is where the user wants us to be.
My last contact with AWS support (100€/month tier) was someone feeding me LLM generated slop that contained hallucinations about nonexistent features and configuration options.
This is what I'm wondering too. 90% is a lovely number to throw around but what is the opportunity cost?
> Cost of DIY and support: You’re absolutely right that 24x7 ops could eat up any savings if you built everything from scratch without automation or if you needed dedicated staff watching dashboards all night. In our case:
• We heavily invested upfront in infrastructure-as-code (Terraform + Ansible) so that infra is deterministic, repeatable, and self-healing where possible (e.g. auto-provisioning, automated backup/restore, rolling updates).
• Monitoring + alerting (Prometheus + Alertmanager) means we don’t need to watch screens — we get woken up only if there’s truly a critical issue.
• We don’t try to match AWS’s service level (e.g. RTO of minutes for every scenario) — we sized our setup to our risk profile and customers’ SLAs.
> True cost comparison:
• The migration was done as part of my CTO role, so no external consulting costs. The time investment paid back within months because the ongoing cost to operate the infra is low (we’re not constantly firefighting).
• I agree that if you had to hire more people just to manage this, it could negate the savings. That’s why for some teams, AWS is still a better fit.
> Business vs. cost drivers: Honestly, our primary driver was sovereignty and compliance — cost savings just made the business case easier to sell internally. Like you, our European customers were increasingly skeptical of US cloud providers, so this aligned with both compliance and go-to-market.
> Terraform / YAGNI: Fair point! Terraform probably is more than we need for the current scale. I went with it partly because it fits our team’s skillset and lets us keep options open as we grow (multi-provider, DR regions, etc).
And, finally, because of this, I am posting about it. I am sharing as much as I can and just spreading the word about it. I am simply sharing my experience and knowledge. If you have any questions or want to discuss further, feel free to reach out at jk@datapult.dk!
I think it's indeed the opportunity cost and the commoditization of infrastructure and operational expertise that drive startups to AWS. But over time, as you scale, they can easily become the biggest component of your marginal cost, without an easy exit, because they've locked you in.
I feel there is a lot of FUD spread whenever someone moves off the cloud, with the inane comparison to the annual wage of a dedicated sysadmin, trying to discourage you from doing a “reckless” migration which will bite you in the ass, your servers will catch fire every day and that it is better to stay within the golden handcuffs of AWS and GCP.
I wonder if it's both Stockholm syndrome and the learned helplessness of developers who cannot imagine having to spend a little more effort and save, like OP, 90% off their monthly bill.
Yeah sure for some use cases AWS is the market leader, but let’s not kid ourselves, 9/10 companies on AWS don’t require more than a few servers and a database.
Well said. It reminds me of a story I heard in a podcast once.
A database administrator for a drug cartel became an informant for the police.
His cartel boss called him in on a weekend due to server errors. He said in the podcast, "I knew I'd been found out because a database running Linux never crashes."
Makes you wonder what everyone is telling themselves about the need for RDS..
Good enough is good enough for most folks. In most cases, downtime is cheaper than higher reliability.
I kind of cringed reading this article because there is also the cost in downtime which doesn't seem to be considered along with the RTO timelines.
Hetzner has had issues where they just suddenly bring servers down with no notice, sometimes every server attached to an account because they got a bogus complaint, and in some cases the servers appear to still be up but all your health checks fail, and you are scurrying around trying to find the cause with no visibility or lifeline. All this costs money, a lot of money, and it's unmanageable risk.
For all the risks and talk of compliance, what about the counterparty risk where a competitor (or whoever) sends a complaint from a nonexistent email address which gets your services taken down? Sure, after support gets involved and does their due diligence, they see it's falsified and bring things back up, but this may take quite a while.
It takes their support at least 24 hours just to get back to you.
DIY hosting is riddled with so many unmanageable costs that I don't see how OP can actually consider this a net plus. You are basically playing with fire in a gasoline refinery; once it starts burning, who knows when the fire will go out so people can get back to work.
Totally valid concerns — I don’t disagree that DIY hosting comes with real risks that managed platforms abstract away (but AWS could close your account too).
We didn’t go into this blind though — we spent a lot of time testing scenarios (including Hetzner/OVH support delays) and designing mitigation strategies.
Some of what we do:
• Our infra is spread across multiple providers (Hetzner, OVH) + Cloudflare for traffic management. If Hetzner blackholes us, we can redirect within minutes (a sketch of that DNS flip follows below).
• DB backups are encrypted and replicated nightly to various regions/providers (incl. one outside the primary vendors), with tested restore playbooks.
The key point: no platform is free of counterparty risk — whether that’s AWS pulling a region for legal reasons, or Hetzner taking a server offline. Our approach tries to make the blast radius smaller and the recovery faster, while also achieving compliance and cutting costs substantially (~90% as noted).
DIY is definitely not for everyone — it is more work, but for our particular constraints (cost, sovereignty, compliance) we found it a net win. Happy to share more details if helpful!
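For instance, the "redirect within minutes" step referenced above is essentially one DNS update. A rough sketch against Cloudflare's v4 API (zone ID, record ID, token, hostname and IP are placeholders, and a real failover would also check replica freshness and drain connections first):

    import json
    import urllib.request

    API = "https://api.cloudflare.com/client/v4"
    ZONE_ID = "zone-id-placeholder"
    RECORD_ID = "dns-record-id-placeholder"
    TOKEN = "api-token-placeholder"
    STANDBY_IP = "203.0.113.10"  # documentation-range IP, i.e. a placeholder

    def point_app_at(ip: str) -> None:
        """Overwrite the A record so traffic goes to the standby provider."""
        body = json.dumps(
            {"type": "A", "name": "app.example.com", "content": ip, "ttl": 60, "proxied": True}
        ).encode()
        req = urllib.request.Request(
            f"{API}/zones/{ZONE_ID}/dns_records/{RECORD_ID}",
            data=body,
            method="PUT",
            headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            assert json.load(resp)["success"]

    if __name__ == "__main__":
        point_app_at(STANDBY_IP)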
Oh, and imagine being kicked out of AWS when you used Aurora... My certified multi-cloud setup with standard components should not make you cringe.
With respect, there's a big difference between "could close your account" and have "closed people's accounts" temporarily based on unlawful complaints.
I probably won't be responding after this or in the future on HN because I took a significant blast off my karma for keeping it real and providing valuable feedback. You have a lot of people brigading accounts that punish those that provide constructive criticism.
Generally speaking AWS is incentivized to keep your account up so long as there is no legitimate reason for them taking it down. They generally vet claims with a level of appropriate due diligence before imposing action because that means they can keep billing for that time. Spurious unlawful requests cost them money and they want that money and are at a scale where they can do this.
I'm sure you've spent a lot of time and effort on your rollout. You sound competent, but what makes me cringe is the approach you are taking that this is just a technical problem when it isn't.
If you've done your research you would have ran across more than a few incidents where people running production systems had Hetzner either shut them down outright, or worse often in response to invalid legal claims which Hetzner failed to properly vet. There have also been some strange non-deterministic issues that may be related to hardware failing, but maybe not.
Their support is often one response every 24 hours; what happens when the first couple of responses are boilerplate because the tech didn't read or understand what was written? 24 hours, plus a chance of skipping the next 24 hours at each step, and no phone support, which is entirely unmanageable. While I realize they do have a customer support line, it is for most an international call and the hours are banker's hours. If you're in Europe you'll have a much easier time lining up those calls, but anywhere else you are dealing with international calls, with the first chance of the day being midnight.
Having a separate platform for both servers is sound practice, but what happens when the DAG running your logging/notification system is on the platform that fails, but doesn't trigger a failover? The issues are particularly difficult when half your stack fails on one provider, stale data is replicated over to your good side, and you get nonsensical or invisible failures; and it's not enough to force an automatic failover with traffic management, which is often not granular enough.
It's been a while since I've had to work with Cloudflare traffic management, so this may have gotten better, but I'm reasonably skeptical. I've personally seen incidents where the RTO for support on direct outages was exceptional, but the RTO for anything above a simple HTTP 200 was nonexistent, with finger-pointing, which was pointless because the raw network captures showed the failure in L2/L3 traffic on the provider side, and the provider ignored it. They still argued, and the downtime/outage was extended as a result. Vendor management issues are the worst when contracts don't properly scope and enforce timely action.
Quite a lot of the issues I've seen with various hosting providers OVH and Hetzner included, are related to failing hardware, or transparent stopgaps they've put in place which break the upper service layers.
For example, at one point we were getting what appeared to be stale cache issues coming in traffic between one of a two backend node set on different providers. There was no cache between them, and it was breaking sequential flows in the API while still fulfilling other flows which were atomic. HTTP 200 was fine, AAA was not, and a few others. It appeared there was a squid transparent proxy placed in-line which promptly disappeared upon us reaching out to the platform, without them confirming it happened; concerning to say the least when your intended use of the app you are deploying is knowledge management software with proprietary and confidential information related to that business. Needless to say this project didn't move forward on any cloud platform after that (and it was populated with test data so nothing lost). It is why many of our cloud migrations were suspended, and changed to cloud repatriation projects. Counter-party risk is untenable.
Younger professionals, I've found, view these and related issues solely as technical problems, and they weigh those technical problems higher than the problems they can't weigh, because of a lack of experience and something called the streetlamp effect, an intelligence trap that often arises because they aren't taught a Bayesian approach. There's a SANS CTI presentation on this (https://www.youtube.com/watch?v=kNv2PlqmsAc).
The TL;DR is that a technical professional can see and interrogate just about every device, and that can lead to poor assumptions and an illusion of control that tends to ignore or dismiss problems when there is no real visibility into how those edge problems can occur (when the low-level facilities don't behave as they should). That's the class of problems in the non-deterministic failure domain where only guess-and-check works.
The more seasoned tend to focus more on the flexibility needed to mitigate problems that occur from business process failures, such as when a cooperative environment becomes adversarial, which necessarily occurs when trust breaks down with loss, deception, or a breaking of expectations on one parties part. This phase change of environment, and the criteria is rarely reflected or touched on in the BC/DR plans; at least the ones that I've seen. The ones I've been responsible for drafting often include a gap analysis taking into account the dependencies, stakeholder thoughts, and criteria between the two proposed environments, along with contingencies.
This should obviously include legal, to hold people to account when they fail in their obligations, but even that is often not enough today. Legal often costs more than simply taking the loss and walking away, absent a few specific circumstances.
This youthful tendency is what makes me cringe. The worst disasters I've seen were readily predictable to someone with knowledge of the underlying business mechanics, and how those business failures would lead to inevitable technical problems with few if any technical resolutions.
If you were co-locating on your own equipment with physical data center access I'd have cut you a lot more slack, but it didn't seem like you are from your other responses.
There are ways to mitigate counter-party risk while receiving the hosting you need. Compromises in apples to oranges services given the opaque landscape rarely paint an objective view, which is why a healthy amount of skepticism and disagreement is needed to ensure you didn't miss something important.
There's an important difference between constructive criticism intended to reduce adverse cost and consequence, and criticisms that simply aren't based in reality.
The majority of people on HN these days don't seem capable of making that important distinction in aggregate. My relatively tame reply was downvoted by more than 10 people.
These people by their actions want you to fail by depriving you of feedback you can act on.
I sincerely appreciate it — and I would never downvote a reply like this. It's clear you’ve been around the block, and I respect the experience and nuance you're bringing to the discussion.
On the topic of Hetzner and account risks, I completely agree: this is not just a technical issue, and that's why we built a multi-cloud setup spanning Hetzner and OVH in Europe. The architecture was designed from the start to absorb a full platform-level outage or even a unilateral account closure. Recovery and failover have been tested specifically with these scenarios in mind — it's not a "we'll get to it later" plan, it's baked into the ops process now.
I’ve also engaged Hetzner directly about the reported shutdown incidents — here’s one of the public discussions where I raised this: https://www.reddit.com/r/hetzner/comments/1lgs2ds/comment/mz...
What I got in a private follow-up from Hetzner support helped clarify a lot about those cases. Without disclosing anything sensitive, I’ll just say the response gave me more confidence that they are aware of these issues and are actively working to handle abuse complaints more responsibly. Of course, it doesn't mean the risk is zero — no provider offers that — but it did reduce my level of concern.
Regarding Cloudflare, I actually agree with your point: vendor contract structure and incentives matter. But that’s also why I find the AWS argument interesting. While it’s true that AWS is incentivized to keep accounts alive to keep billing, they also operate at a scale where mistakes (and opaque actions) still happen — especially with automated abuse handling. Cloudflare, for its part, has consistently been one of the most resilient providers in terms of DNS, global routing, and mitigation — at least in my experience and according to recent data. Neither platform is perfect, and both require backup plans when they become uncooperative or misaligned with your needs.
The broader point you make — about counterparty risk, legal ambiguity, and the illusions of control in tech stacks — is one I think deserves more attention in technical circles. You're absolutely right that these risks aren't visible in logs or Grafana dashboards, and can't always be solved by code. It's exactly why we're investing in process-level failovers, not just infrastructure ones.
Again, thank you for sharing your insights here. I don’t think we’re on opposite sides — I think we’re simply looking at the same risks through slightly different lenses of experience and mitigation.