This is an interesting approach, and I hope it succeeds.
I am still skeptical. In the late '90s or early 2000s, Linus was interviewed on TV, and what he said has stuck with me to this day. When asked about competitors, he said roughly:
No one likes writing device drivers, and as long as no one young and hungry comes along who is good at writing device drivers, I am safe.
I think he was already well aware at that time that keeping the driver interface unstable is his moat. A quarter of a century later, kernels that run on virtualized hardware are a dime a dozen, but practically usable operating systems in the traditional sense of abstracting away real hardware can still be counted on one hand.
> keeping the driver interface unstable is his moat
Maybe we will have young and hungry AI-for-systems researchers who would like to take on the job of developing AI agents that translate Linux drivers written in C into Asterinas drivers in (safe) Rust.
Another feasible approach is to reuse Linux drivers by running a Linux kernel inside some kind of isolated environment. For example, the HongMeng kernel leverages User-Mode Linux to reuse Linux drivers on HongMeng [1]. Asterinas could take a similar approach.
[1] https://www.usenix.org/conference/osdi24/presentation/chen-h...
The bottleneck, for the most part, is actually being able to test them. Even a translation by a skilled engineer is liable to have issues if they don't actually have the hardware to test things out. Linux's driver support is built out mainly by people doing that: either hobbyists scratching their own itch with hardware they own, or manufacturers contributing drivers for their own hardware.
(It's also why regressions are pretty common: it's completely infeasible to test all of Linux on each release. Some people test some parts of it, but it's all very ad hoc, very little is automated, and it's not at all unified.)
OKL4, deployed on tons of phones, had the ability to run drivers standalone or in driver VMs that wrapped them. Other guests could call either. I think Genode uses a similar L4-based component.
There was also an academic project that combined virtualization with Windows drivers.
If you port drivers from Linux those drivers would have to be GPLv2-licensed.
That needn't be a problem, assuming the linking clause of the GPLv2 doesn't extend to device drivers. GPLv2 doesn't extend to userspace processes linking into the kernel, so maybe?
Yeah, but unless the drivers run as microkernel services in a separate userspace, the GPL applies to anything you link into your kernel.
This is the future. Hardware has already standardized more toward USB HID than in previous decades, including the era of that Linus interview. When AI can develop these device drivers based on just probing the HID info, we'll be on cloud nine. Because maybe then we'll get the year of the Linux desktop.
Such standard interfaces are rarely the problem, though there is often a headache of dealing with the pile of 'quirky' hardware that just so happens to work well enough with exactly what Windows happens to do. The pain point is everything that isn't that: nonstandard, niche hardware which maybe has a few thousand users, or big and complex interfaces like graphics cards, which are basically whole OSes on their own.
Pretty sure if your device actually just uses USB HID, it already works on Linux without a custom driver.
What requires a custom driver is when your device adds its own non standard features.
USB is a nightmare.
Sarcasm is, too, for some.
Yeah, it's very easy. Real devices usually adhere to the specs.. only very few exceptions: https://github.com/torvalds/linux/blame/master/drivers/hid/h... .. /s
> I think he was already well aware at that time that keeping the driver interface unstable is his moat.
Does Linus have/want a moat? He's not a tech startup founder. He's a kernel hacker who has had success beyond his wildest dreams, and whose needs will be met for the rest of his working life no matter what happens.
It seems like projection to talk about this aspect of the kernel as if it's some intentional strategy for preventing certain kinds of competition.
I don't believe he wanted or intended a moat. The drivers need to be in the kernel to have a working kernel, that is, a kernel that actually runs on the hardware. Move the drivers out of the kernel and Linux would have died long ago, as there would have been a proliferation of proprietary drivers that stopped being maintained once the hardware was no longer on sale. And poor driver support is why no other kernel has taken root.
> Move the drivers out of the kernel and Linux would have died long ago as there would have been a proliferation of proprietary drivers that stopped being maintained once the hardware was no longer on sale.
Isn't this basically the situation on Android?
They're still in the kernel AFAIK, but you need to compile the kernel specifically with a bunch of source-restricted or binary blobs to get things to work.
They're not open and maintained as part of the broader kernel release which is why the driver situation degrades so quickly once the hardware is no longer actively supported.
Prior art includes SPIN OS (Modula-3), JX OS (Java), House OS's H-Layer (Haskell), and Verve. Each had a type-safe, memory-safe language for implementing the features. They usually wall off the unsafe stuff behind checked function calls. Some use VMs, too.
Ignoring performance and adoption, the main weaknesses are: abstraction-gap attacks; unsafe code bypassing the whole thing; compiler- or JIT-induced breakage of the safety model; and common hardware failures like cosmic-ray bit flips. This is still far safer than kernels and user apps in unsafe languages.
One can further improve on this by using static analysis of unsafe code, ensuring all unsafe functions respect type-safe/memory-safe interfaces, compilers that preserve abstraction safety during integration, and certified compilers for individual components. We have production tools for all of these except secure abstract compilation, which (a) is being researched and (b) can be checked manually for now.
> but practically usable operating systems in the traditional sense of abstracting away real hardware can still be counted on one hand.
I think this is telling. There are plenty of 'standards' for interfaces in the hardware world. Some (mostly USB) are even 'followed', but the reality of hardware is that it never behaves nominally. So without someone willing to spend the time writing code to handle the quirks that can't be patched out, and the errata, it's very hard to run on physical hardware with any performance or support.
On the other hand, running on real hardware is less important if none of your hardware is real!
98% of the Linux I interact with is running virtualized: on my desktop/laptop systems it's either VirtualBox full-screened so I can use Windows for drivers, or a headless VM managed by Docker.app on my Mac. All my employer's production workloads are AWS virtual machines.
My only bare-metal Linux hardware is a home server, which I'm planning to retire soon-ish, replaced by a VM on an eBay Mac mini to reduce the power bill and fan noise.
If someone can make a Linux compatible kernel that’s more secure and just as performant, it’s much easier these days to imagine a large new user base adopting it despite a dearth of drivers.
In computer science we are taught it's turtles all the way down, but in the real world you learn that you hit the world of bits and bytes really fast.[1]
My point is that every virtualized environment needs a layer that talks to real hardware down below. We have enough diversity in the upper layers but not enough in the lowest layer.
[1] I heard it expressed like this from an Azul Systems employee first, but unfortunately don't remember who it was.
Your OS might be virtualised, but very often the actual hardware leaks through that virtualisation, often intentionally.
I don't see much of a future for an OS that doesn't have good driver support for accelerators, whether GPU/TPU or otherwise. And if you look into some of the accelerators built into modern AMD and Intel chips, that becomes a nightmare just supporting the CPU, never mind USB host controllers, network interfaces, etc.
I have written device drivers for, literally, decades now.
Its the old guys who write the best drivers, naturally.
For me, Asterinas represents a refreshing way to approach some thorny problems in the embedded space, in which embedded Linux on ARM, RISC-V and MIPS is an economically viable platform for a great deal of industry.
While Asterinas is really sexy, if this same approach were also taken for, say, FreeRTOS along the way, then there could at least be a 'one hand's worth' of operating systems, abstracted, in the 'let's just use Rust' camp.
> keeping the driver interface unstable is his moat
It's basically like npm update, at the kernel level.
Hmmm... Would containers + AI enable a scattershot "just let the LLM keep trying stuff until it works" approach to driver development?
Given the number of times I've bricked hardware during reverse-engineering and driver development, I don't find it super likely, tbh. I'm by no means an expert here, but it's one of those things where if you already have good enough documentation (which in this case could be a known-good implementation) then it's more of a translation task and LLMs could absolutely be helpful there, but the edge cases are sharp and frequent.
It's interesting though, because you don't really need to reverse engineer anything if the device has an in-tree Linux driver. You "just" need to port the Linux driver to your OS. This is certainly something an LLM can help with, although the usual skepticism applies (it works until it doesn't, etc.)
In fact I sometimes wonder whether it's feasible to write a new kernel while somehow shimming into Linux's driver model, while still keeping your own kernel unique (i.e. not just a straight clone of Linux itself). Some way of "virtualizing" the driver layer so that a driver can "think" it's in a Linux kernel, but with a layer of indirection to everything somehow.
Maybe in a few years. I find AI most successful when you can provide a very clear spec and a solid test suite; when I don't have that, it makes a lot of mistakes without handholding.