I'm a Bryan Cantrill fan, so I'm glad this is working out. I was extremely skeptical of them at the beginning (on HN too), I think because I've built DCs for many years and was stuck in a mindset that served my use case, but I've come around to Oxide. My main concerns originally were twofold: "this seems bougie" (is there actually a market for this?), and is there a good interoperability story with mix and match? From what I could tell the answers were "yes" and "don't care". I had thought that wasn't a great answer, but it seems I'm wrong. I was chatting with Boris Mann just last week about them and he said, "Actually John, that isn't correct. Think of how much compute needs to come online quickly and how much discrete compute is going to be required with low management overhead; they're doing just fine and that market will grow." After that I did some research and pondered it for a day. I think my friend is right and I am wrong. At this point I think Oxide is going to be a really strong name, and I wish them the best of luck.
I was skeptical as well, if only because just being a better product isn't enough to win the market. Everything we hear about Oxide sounds like an impressive green field implementation of a data center, but is that enough? Do the people making buying decisions at this scale care if their sysadmins have better tools?
> Do the people making buying decisions at this scale care if their sysadmins have better tools?
Look at who oxide is selling to and for what reasons.
It's about compute + software at rack scale. It doesn't matter so much that it's good; it matters that it's integrated. Gear at this level gets sold with a service contract, and "good" means you don't have to field as many calls (keeping the margins up).
> Everything we hear about Oxide sounds like an impressive green field implementation of a data center, but is that enough?
Look at their CPU density and do the math on power. It's fairly low density. Look at the interconnects (100 Gb per system). Also fairly conservative. It's the perfect product to replace hardware that is aging out, as you won't have to re-plumb for more power/bandwidth, and you still get a massive upgrade.
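To make "do the math" concrete, here's a back-of-the-envelope sketch; every figure below is an illustrative assumption, not Oxide's published spec sheet:

    # Illustrative numbers only -- substitute the real spec-sheet values.
    sleds_per_rack = 32          # assumed compute sleds in a rack
    avg_watts_per_sled = 500     # assumed average draw per sled, in watts
    rack_kw = sleds_per_rack * avg_watts_per_sled / 1000
    print(rack_kw)               # ~16 kW, inside the power envelope of a
                                 # typical legacy enterprise rack, so no
                                 # re-plumbing required
    # A dense GPU rack, by contrast, can want 80-120 kW plus new cooling.

Under assumptions like these a drop-in replacement just works; crank the density up and suddenly you're renegotiating power and cooling.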
As someone only tangentially familiar with this domain, I have questions about this:
> Look at their CPU density and do the math on power. It's fairly low density. Look at the interconnects (100 Gb per system). Also fairly conservative. It's the perfect product to replace hardware that is aging out, as you won't have to re-plumb for more power/bandwidth, and you still get a massive upgrade.
It sounds like the CPU density and network bandwidth are not great. If it's only suitable to replace aging systems, does that not limit their TAM? Or is that going to be their beachhead for grabbing further market share?
I am not saying that I fully endorse the characterization of the parent, but it is true that we started selling these systems two years ago, and new hardware comes out with better stats all the time.
Given how small we are, new designs and refreshes take a while. Part of growing as a company is being able to do this more often. We'll get there :)
For a small company, a limited TAM isn't a problem (and honestly is probably an advantage) if the overall market is big. Datacenters as a whole are a ~$30B market per year. The last thing you want as a small company is a bunch of different customers pulling you in different directions. By limiting your TAM, you limit the number of problems you need to solve for a few years, and if everything goes well and you start outgrowing your TAM, you can expand later.
Is there a risk that the established players can commoditize Oxide's complement here? Is Oxide's product a feature that the big companies can just clone? I'm not sure, to be honest. I have followed Oxide through the news and am happy to see some progress in this area; I just want to know how to understand their success in the proper context.
The complement of a set consists of everything that is not in the set. Having your complement commoditized is a good thing: it refers to everything your users need that is not part of your value proposition. If it's commoditized, your users have easier access to it and hence use more of it, which drives up their demand for the things that _are_ part of your value proposition.
Well it would be in oxide's interest to do that before their competitors do if it's profitable, right? Wouldn't the more established companies have more money to invest in research and development to try to beat oxide to their own follow-up, now that the market has spoken in oxide's favor?
This is the concept I'm referring to:
https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
Datacenters seem to be increasingly power/cooling (i.e., power)/water (if they use it) limited. I'm wondering if the lower CPU density really matters when 75% of a DC risks remaining empty because the power budget is maxed out already.
And yes the 1-for-1 replacement of older racks is probably a key selling point too.
If it translates to improved efficiency, sure. And this big of a round seems to indicate that idea has some merit.
I imagine most of the customers are highly technical, not typical generic business class.
What percentage of enterprise IT compute has not moved to a public cloud?
Only 30% have moved to the public cloud
https://www.goldmansachs.com/insights/articles/cloud-revenue...
I know I have been involved in multiple efforts to move the same workloads into and then out of the cloud, as corporate budgeting requirements prioritized either capex or opex at different times.
I'm not sure anyone really knows
The Uptime Institute publishes some good numbers from its surveys, which put on-prem + colo still at >50% last I checked.
And still some additional 5% in, like... on-prem closets.
Last year Amazon said it was 85% on prem. I dunno who has the right numbers.
Every single company I've worked with over the past 5 years has been repatriating from the cloud to their own DCs or colo.
The cloud doesn't pan out for long running, predictable workloads. Most companies are and will continue to use VMs for many years.
I'm a fan just because they have such an incredibly good-sounding product. Like, it has no relevance to me, I'll never use it, but I get a deep sense of satisfaction just reading about how it works.
I must admit that I am much more unsophisticated than this, and yet I "invested" in Oxide (by running my own projects off Oxide servers), and it is gratifying to see them continue to grow. My (naive) assessment: (a) agreed with Cantrill's opinions on software, (b) liked his willingness to put himself out there, and (c) felt the eng blogs showed a high level of (socio-)technical ability.
I think for the internet to break out of walled gardens, high-quality independent datacenters need to exist -- nobody wants to manage their own datacenters, and nobody wants to rely on Google/Amazon/Microsoft's platforms or (even worse) business products. I hope this continues.
Did you order a unit from them or how did you manage to get access to the Oxide hardware?
Uh no I just rent a box from them.
How are you doing that?
Wow, looks like I am not doing that. oxide.host is an entirely different company.
Haha no worries! I was a bit confused, but now I learn about that company! Thanks :)
Lolol. Would delete my comment if I could. Ah, well, I still wish them well. :D
I still don't get it. If someone else's software is running the hardware, what difference does it make if it's on-prem or offsite?
> If someone else's software is running the hardware
Our stack is open source.
> what difference does it make if its on-prem or offsite?
The difference is not where it runs, it's that you own our racks, rather than rent them. In the traditional cloud, you're renting. Other vendors who sell you hardware will still have you paying software licensing fees, so it never feels like you truly own it. We don't have any licensing fees.
I like you. Are you guys hiring?
We are! https://oxide.computer/careers
Thanks! Not much for my skillset, but I'll keep an eye out!
I just have to say this is an incredible page. Everything is well thought out and there's no BS. The salary is upfront too and everything is remote. A gold standard for hiring pages?
Paying everyone at the company a flat rate of $207k, no matter their role or location, is quite the concept.
Are there any storage-related roles? Will Oxide redefine storage as well?
We don’t have any storage-specific positions to my knowledge, but that falls under the control plane job; they’re the ones working on Crucible.
First, let me say I really like this part of you guys' narrative: you have really strong opinions about how infrastructure and IT should work at many levels, both technically and aesthetically. That seems real and nice and likable.
Focusing on just this financial narrative you're weaving, what stops a bank from selling "virtual racks" that work financially the same as owning an Oxide rack, but it's just AWS?
$1m buys you 42U of, whatever. You're handed an AWS account you do not pay for, but it has the $1m worth of, whatever in it, in perpetuity. Maybe the bank even throws in some fakey market you can "part out" and "sell" your rack to, years later, at some "market price."
It seems like, the product - and maybe the experience of buying the product - is what is most important to Oxide. It's really interesting to me, because I cannot wrap my head around what this narrative is:
You guys are the Apple of Racks. But minus the iPhone, because there is no monopoly here. So, the Apple (Minus iPhone) of Racks. Is that it? It's the rest of their offerings, which without the iPhone monopoly effects, are Buying Experiences. It's like when people buy $10,000 Mac Studios to "run LLMs", which of course they are going to do like, zero to one times, because they are excited about the idea of the product. The audience that actually needs to "run LLMs" buys, whatever, or rents. But they don't buy Mac Studios. Just because people do something doesn't mean it makes sense.
Is the narrative, AWS Doesn't Make Sense? AWS makes a ton of sense, for basically everyone. Everybody uses it and pays up the wazoo for it. And there are good objective reasons AWS makes sense, at basically all levels. Who is fooled by, "AWS doesn't make sense?"
The problem with AWS isn't even that they are expensive. It's that Amazon is greedy. It could be cheaper, which is a different thing than being expensive. It matters because "AWS stays greedy longer than the average Y Combinator company stays private" is an interesting bet for an investor to take. They could decide to be less greedy at any time, and indeed, it did not take long after offerings of S3-like storage from others led them to simply reduce prices.
What that is telling me is, I could take $100m in funding, sell $1m "racks" of equivalent compute on the Rolls Royce of cloud infrastructure, making everything financially and legally and imaginarily the same as ownership, and then take a $300k loss, right? On each "rack", same as your loss? It's a money losing business, but here I am making the money losing very pure, very arby. Is this what you are saying customers want?
Clearly they want a physical rack. By all means, I can send them a big steel box that provides them that aesthetic experience. Cloudflare, Google, they do the physical version of this all the time: dumb, empty appliances that are totally redundant, because people ask for them. RudderStack, Weights & Biases, a bunch of companies come to mind doing the same thing in software, like so-called Kubernetes Operators that literally just provision API keys but pretend to be running on your infrastructure. People ask for Kubernetes operators, they made them, but of course, they don't do anything. They are imaginarily Kubernetes operators.
The reason there are licensing fees and rentals and whatever is the enterprise sales pipeline, right? Enterprise sales is, give people what they ask for. People ask for a price that's below $X up front, so that's what IT vendors do, and then it turns out people are okay with some ongoing licensing fees, so there. That's what they do.
So what IS it?
> what stops a bank from selling "virtual racks" that work financially the same as owning an Oxide rack, but it's just AWS?
I'm struggling to understand what you're suggesting here, to be honest. First of all, banks don't sell cloud compute, so no bank is going to do that. Secondly, what does "work financially the same" mean? These are fundamentally different products, AWS is a service, Oxide is purchasing hardware that you then own.
> $1m buys you 42U of, whatever. You're handed an AWS account you do not pay for, but it has the $1m worth of, whatever in it, in perpetuity. Maybe the bank even throws in some fakey market you can "part out" and "sell" your rack to, years later, at some "market price."
What would be the advantage to anyone in this arrangement? Why not just have an AWS account in this case?
> "AWS stays greedy longer than the average Y Combinator company stays private"
Just to be clear, we are not a YC company. But beyond that:
> The problem with AWS isn't even that they are expensive. It's that Amazon is greedy. It could be cheaper, which is a different thing than being expensive.
It is true that if Amazon dropped prices, then the "rent vs buy" equation changes for some customers. But there will always be some people for whom it makes sense to own, and some people for whom it makes sense to rent.
> RudderStack, Weights & Biases
Neither of these companies seems to sell general cloud computing? They also don't sell hardware? These seem like completely different businesses.
> So what IS it?
We sell servers. Customers buy those servers, put them in a data center, and get a private cloud. That's the business. Other folks are doing similar sorts of things, but they all tend to be integrating parts from various vendors. We believe that our product is of a higher quality, because we built the whole thing, from the ground up. Hardware and software, working together. There are other things that matter as well, but that's the big picture.
> I'm struggling to understand what you're suggesting here, to be honest. First of all, banks don't sell cloud compute, so no bank is going to do that. Secondly, what does "work financially the same" mean? These are fundamentally different products, AWS is a service, Oxide is purchasing hardware that you then own.
You're telling me it's important to people to "own" instead of "rent." Well I can manufacture an "Own" out of a "Rent": I write up a contract for my customer that says "$1,000,000 for 100 EPYC servers", I ship an empty steel box, the end user gets an AWS account which they cannot add anything to, and it has 100 EPYC server metal instances in it. I pay the bills in that account. Okay? Now I have created "owning" out of renting.
If we pontificate on the objective value of owning versus renting, such as the ability to sell the hardware, I can manufacture that too: you might want to sell your empty steel box 5 years later, and I will buy it from you for $300,000. Or maybe the value of owning versus renting is that owning is "cheaper" than AWS. Okay, I'll sell the rack for $500,000 instead of $1,000,000. Do you see?
The important part of course is, I didn't have to make any racks. I didn't have to write any software. I give people something they really, really want, AWS, and I give it to them in the shape of an "Own." Of course, your "Own" and my "Own" are different, but whose "Own" is more different from a typical IT purchaser? You guys know 100x better than me.
I agree that it sounds stupid though. That's BAD. If it sounds stupid to manufacture an "Own" out of a "Rent", and it is an arbitrage, and it also is something people like, that is BAD for you. If it sounds like something banks should not be involved in, that is BAD. Banks are involved in extremely lucrative businesses!
> First of all, banks don't sell cloud compute, so no bank is going to do that.
Ha ha, but this is what you do! You might not be a bank, but you are a $100m bank account. You're more bank than I am today. And you are selling something - you know, you say you sell a rack, and you say, "that's the business," and there are people on this thread - and this is not at all an unorthodox opinion - who are saying, what is really the material difference between cloud and on-premises compute, when the interfaces between the two look so similar?
A huge difference is "Rent" versus "Own" which is why we are talking about it and why you brought it up. But I am showing you that you can manufacture an "Own" out of a "Rent," and it would be interesting to see, well, should I take $100m and spend $200m on R&D with it (ha ha), or should I take $100m and directly fuel it into a flywheel of reselling AWS into a shape that people like, which is "Own" instead of "Rent"?
> Hardware and software, working together.
See it's stuff like this that says to me, "Apple (Minus the iPhones) of Racks." That is really your thinking. It's about buying experiences. You guys are answering this question, clearly, in your own rhetoric. Because if it was really about "Own" versus "Rent," a bank could do it.
Similar to what others noted earlier, I'm having trouble understanding exactly what you're trying to communicate here. I'll respond to the points I am clear on.
Selling a customer a contract for on-premises computing and giving them a fake metal box and SaaS is borderline unethical depending on the terms of said contract. I understand the sentiment of that point though. There are many reasons a customer chooses to own instead of rent. Legal requirements, financial incentives, and even control over performance to name a few.
On-premises computing was so good that the cloud providers packaged it up and sold it back to people at a premium that could only ever be rented. The finances of that model don't make sense to many businesses as they look to reignite their on-premises computing with the modernity of the cloud providers. That's where Oxide shines, in my opinion: being able to have on-premises computing that combines the efficiencies of the hyperscalers with an API-driven approach to managing resources. We take that a step further by building hardware and software in-house for additional benefits such as power efficiency, control over the networking stack, additional telemetry, etc.
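To make "API-driven" concrete, here's a rough sketch of the kind of workflow I mean; the endpoint path and fields below are simplified placeholders for illustration, not copied from our API reference:

    # Simplified illustration: provisioning on-prem capacity with the same
    # kind of authenticated HTTP call you'd use against a public cloud.
    # Endpoint and field names here are placeholders, not the exact API.
    import requests

    RACK_API = "https://rack.example.internal"  # hypothetical control plane URL
    TOKEN = "..."                               # operator-issued API token

    resp = requests.post(
        f"{RACK_API}/v1/instances",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": "build-runner-01", "ncpus": 8, "memory": 32 * 1024**3},
    )
    resp.raise_for_status()
    print(resp.json()["id"])

The point is that the rack sitting in your own data center is driven the same way: everything an operator or a CI pipeline needs is behind an API, not a ticket queue.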
There's seriously nothing complicated about this. Define ownership. Who cares about owning a bunch of computers? If you can get the compute, and you pay up front for it once, and then you can sell "it" later, and it's cheaper than renting, what does it matter if the compute comes in the form of a physical box or if it is in the cloud? Those are the things that matter about ownership! To me, occupying a physical space is not important for ownership, wrt servers.
Nobody is saying anything about anything unethical... I am mocking the idea of needing the steel box, forget about the steel box.
It's just Amazon Reserved Instances, but with an indefinite period. Okay? Isn't that an attractive product?
Why am I talking about banks? Because maybe in Oxide's deck it says, "Amazon will NEVER do this. Amazon will NEVER sell indefinite reserved instances." Fine. Well a bank can simply pay spot prices and sell you an up front price, if you want. Okay? It's the same thing. It only matters what Amazon does when we're talking about $100m Series B, which is what this article is about! It's not about the technology.
> On-premises computing was so good that the cloud providers packaged it up and sold it back to people at a premium that could only ever be rented.
No... guys... AWS makes sense. It's not a premium "that could only ever be rented." There are a ton of much cheaper cloud providers. Amazon just happens to be selling the Rolls Royce of clouds. They have a ridiculous margin. Figma makes more profit for AWS each year than it will ever make for itself in its entire lifetime. "that could only ever be rented" is simply not true, they can afford to make all sorts of innovative pricing models, reserved instances being one of them.
Oxide just hasn't had to compete with "99 Year AWS Reserved Instances." But absolutely, positively, utterly nothing stops them from offering that. They already give you a massive, MASSIVE discount for 3 year reservations.
That said, obviously not having to deal with human beings managing hardware is valuable. It's the same shit as the difference between "AI" meaning a computer and overseas workforces. They might produce the same outputs for the same cost, but think deeply about yourself: how much are you willing to pay to deal with a computer instead of an IT tech? To avoid phone calls? To avoid doing things that might be faster, but are in person?
You are casually moving between two things: 1) something that is pretty well understood through the history of online commercial computing (i.e., since the 1970s), namely owned hardware versus leased/timeshared/rented hardware or services, and 2) concepts of financial instruments that many readers here are probably not intimately familiar with.
If I needed a generator to run a remote mining operation and you told me to just buy energy futures instead, we'd be having a silly discussion. Whether it makes sense for me to rent or buy the generator has more to do with governments, tax laws, and risks that ultimately manifest as cashflow decisions. You have a valid thread you're pulling on about the economics of general-purpose compute and for whom, but your argument needs a lot more care to define and make your case, and to explain why it's okay to dismiss the outlier cases, for instance.
> If I needed a generator to run a remote mining operation, and you just told me to just buy energy futures instead, we'd be having a silly discussion.
Exactly. Just because they are similar in some senses doesn't mean that they're fungible. Generator manufacturers still have a business even though you can purchase energy futures.
Okay, well, this is an interesting and engaging conversation; if you guys someday make something in the $1,000 range you have a customer.
It would be like if you needed to buy a generator versus, I will give you a giant cable that has all the same economics as buying a generator. In that case yeah, if I am giving you all the same economics as buying your own generator, but cheaper, you would be stupid not to take my deal. It’s got nothing to do with energy futures. My suggestion is to copy what I am saying to a chatbot, and ask it what “looks like a duck, quacks like a duck” means, and what’s going on here.
"giant cable" -- this kind of thing does obviously happen, rural electrification is heavily subsidized by governments (see the rest of my comment..). But it is also not a realistic option in plenty of cases as such and therefore industrial companies like Cat and Cummins are happy to sell multi-million dollar generators. If your approach to the subject is to copy it into a chatbot it might explain why this discussion is ultimately specious, because a casual and even jocular approach to this is not really adequate to make any point.
I'm not part of Oxide, but I think you're assuming everyone is okay with not controlling the hardware they run on.
There is plenty of addressable market where a public cloud or even a colocation of hardware doesn't make sense for whatever bespoke reason.
I don't think their target customer is startups, and that's okay. You likely aren't their target customer. But they've identified a customer profile, and want to provide a hardware + software experience that hasn't been available to that customer before.
It's not clear to me that the counter-party risk is _remotely_ similar between the two offerings. Someone reselling indefinite access to AWS could relatively easily set things up to go "poof" after a few years, whereas if it's your box, that's not possible. Now, critical components of the box could croak, and someone could still be screwed, but even there, you likely have more options for support than just paying Amazon's ransom.
Yeah... AWS isn't going to set up anything to go poof. Oxide, despite making something awesome, will go poof if it charges too little money for it.
Amazon is expensive but not that expensive.
Oxide is selling a technical solution to technical customers with ownership needs or desires for security, regulatory, or other reasons. Access to the complete stack source code is another. I know a bajillion of these customers and they all have very, very deep pocketbooks.
They are specifically going after features in a way that no other vendor is, with an extreme care of execution of their crafts at the highest level.
The problems Oxide is solving for these customers are something Amazon has shown no interest in. Could they? Sure, they could do anything they put their pocketbooks to doing, but they haven't.
> You're telling me it's important to people to "own" instead of "rent." Well I can manufacture an "Own" out of a "Rent": I write up a contract for my customer that says "$1,000,000 for 100 EPYC servers", I ship an empty steel box, the end user gets an AWS account which they cannot add anything to, and it has 100 EPYC server metal instances in it. I pay the bills in that account. Okay? Now I have created "owning" out of renting.
No, you haven’t. You didn’t deliver what was paid for. This wouldn’t be accepted.
> Do you see?
I’m sorry, but I do not. You’re not describing a business. You’re throwing some numbers out to describe a fraudulent enterprise.
> what is really the material difference between cloud and on-premises compute, when the interfaces between the two look so similar?
The material differences are around capex vs opex spending, that you can locate your own hardware where you want to (which matters for things like “must be in a colo in southern Manhattan” (or any of the other reasons why physical location matters for latency reasons) or “must not leave the soil of $COUNTRY”), and the entirety of “TCO of owning is cheaper than renting for many workloads.” That’s just some of the larger obvious ones.
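As a toy illustration of that TCO point, with every number made up for the example rather than quoted from anyone's pricing:

    # Made-up numbers purely to show the shape of the rent-vs-own math.
    rack_purchase = 1_000_000      # assumed one-time capex, USD
    yearly_opex = 150_000          # assumed power, space, support, USD/year
    cloud_equivalent = 600_000     # assumed yearly rental for similar capacity

    for year in range(1, 6):
        own = rack_purchase + yearly_opex * year
        rent = cloud_equivalent * year
        print(year, own, rent)
    # Under these assumptions owning crosses over around year 2-3; change
    # the inputs and the answer flips, which is exactly why "rent vs buy"
    # comes out differently for different workloads.

Steady, predictable workloads tend to favor the "own" column; spiky or short-lived ones favor renting.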
Okay well my feedback to you is, maybe have a chatbot read what I am saying to you, and ask for a charitable interpretation, which is the site’s rules. It’s simply a “looks like a duck, quacks like a duck, it is a duck” argument. It’s not complicated. The way I am looking at things is how a lot of execs at giant companies and VCs look at things, you are welcome to ask them too, or ask for that frame of reference from a chatbot. You guys just raised $100m, you should be able to engage over questions of arb or whatever, and just know this stuff, and be able to talk to bankers and talk like a banker.
The stubbornness around interpreting this argument as negatively and as flawed as possible is a bad look.
I have to agree with much of this: if you need something that feels like "the cloud" but on-premises, then you could have used OpenStack for the last 10 years.
The only reasons to use Oxide racks are that you get an all-in-one solution and they don't charge you a subscription fee; you only pay upfront for the hardware once. But if this company goes public one day, shareholders will surely push for a subscription-based licensing model.
I have yet to see the benefit of "custom software" for "custom hardware". To me it looks like a liability: if Oxide ceases to exist tomorrow you'll be left with a hunk of metal that is a dead end. The software being open source doesn't change that; if you have enough manpower to support such software on your own, then you can surely support any other, more flexible solution.
I wish them all the best, but yeah, I can't see any reason why someone would pay Oxide over AWS's on-prem rack solution.
I'm sure they'll find _some_ customers but they're going to be few and far between.
I think they just pretend that AWS OnPrem Rack doesn’t exist.
Cloud computing is significantly more expensive than self-hosting for most large organizations. People are slowly figuring that out, and Oxide was a bet on the timing of that realization.
That's almost always been true but servers are more like commodities at this point.
The iPhone is a nice upgrade from commodity phones, and I have one, but I wouldn't care much if my fridge had a sleek UI because I just need it to be a fridge.
Servers are commodities; an on-prem cloud is not.
Banks care _a lot_ about where the data is stored, hence most banks are inclined to keep customer data on-prem.
Because data must be on-prem, banks are stuck in legacy infra paradigms. The whole org suffers, innovation is stifled, yada yada…
An on-prem cloud product (hardware+software) is a game changer for these companies, IMO.
My question to Oxide: how easy is it to integrate external hardware into the cloud? For example: a bunch of GPUs or a bunch of next-gen hardware like SambaNova.
Is the software open-source with reproducible builds of any runtime binaries?
Oxide has been remarkably transparent about the development and architecture of critical system components. We can only hope they succeed and inspire others to follow their transparency lead.
Open source is a requirement but not the only one. There are countless examples of companies building integrated solutions on top of open source projects where, when the company went bankrupt, there was nobody to pick up the available pieces and continue moving the stack forward. Just pointing out that open source is not the magical escape hatch some people think it is (at least not in corporate environments).
Especially so for Oxide's decidedly non-Linux setup. They are in a niche software ecosystem with practically no one else, apparently mostly because nearly all of them are ex-Solaris staff.
https://www.illumos.org/docs/about/who/
(Listing all projects using ZFS or DTrace as "who uses Illumos" is cheating.)
I remember many Linux fans saying that monocultures were bad until Linux became so popular that Linux was the one benefiting from a monoculture. Despite that, the rationale against monocultures still applies.
That said, Illumos is influential as an organ donor to many others. There are a number of awesome technologies in it.
Oh, I would love to have some healthy competition to Linux, but I am not rooting for Solaris to do that; I'd rather have one of the Rust-based microkernels actually git gud. Time to shake the foundations of the age-old security and isolation models, not resuscitate a dusty old thing built on piles of C and shell on top of a large monolithic kernel and pretend everything's fine.
Well, good news: we have one of those too![0]
Oh I am well aware. But I am hoping to run dynamic workloads, including virtual Linux machines, on a PC. It's a bit of a different world.
Latest one still in my to-read pile: https://lwn.net/Articles/1022920/
You want to run dynamic workloads on a PC? As in a desktop PC? That is clearly a completely different market than Oxide serves.
Or do you mean PC as in rackmounted servers? If that's what you meant, PC is a very poor word for it. That's kind of the point Oxide made from the beginning. Why are you running server workloads on a PC with a funny shape? Why do you need 84 power supplies (2/shelf) in your rack? Why do you need any keyboard or graphics controllers? Why don't you design for purpose a rack-sized server?
Or did you mean exactly what you wrote: "a PC"? You only need one server, not a whole rack's worth? Again, that is not the market Oxide is targeting.
Or you need to be able to run "dynamic workloads" that could require 40-4000 CPUs? You need hypervisors and orchestration, etc.? And you don't want them to be Solaris, or to run on Solaris? And you know all about Hubris and you don't want that either? But you think it would be nice if they weren't Linux? Maybe if they were modern microkernels written in something like Rust? But not the Hubris microkernel written in Rust?
I'm going to have to take you at your word. Your needs are "a bit of a different world" than Oxide fits.
But it's pretty cool that you still got some friendly personal attention from two big-name Oxide employees who seem willing to try to help you if they can. If you ever do find yourself in a world that aligns with theirs it appears that they are willing to try to accommodate you.
We're talking about healthy competition for Linux, Rusty microkernels, and I'm saying Hubris is not what I'm looking for because of the stated reasons. Hubris workloads are defined at build time and it does not target x86.
When I say PC I mean the large ecosystem of compatible performant hardware that exist out there, as opposed to e.g. RISC-V at this stage.
> Apparently mostly because they're practically all ex-Solaris staff.
I absolutely do not have a Solaris/illumos background! The first time I ever sshed into an illumos machine was my first day on the job.
The exception proves the rule. You can't deny the deep Solaris heritage.
I wouldn't deny that, no.
Even if it makes no sense as a technical thing, businesses will buy it. Look at all the huge companies that keep spending millions trying to DIY their own datacenter for the 3rd time. Enterprise loves to buy big iron and self-hosting crap, so I'm sure they will be successful selling this. However, I think they're going to need to branch out into more services in order to continue increasing their revenue every year (after 5+ years, let's say).
> Look at all the huge companies that keep spending millions trying to DIY their own datacenter for the 3rd time
You mean all the huge companies that ran multiple datacenters before the cloud was even a thing?