Read somewhere that it is relatively easy to adapt NetBSD's drivers into a custom kernel... maybe the Serenity folks can go that way? Device drivers are a huge obstacle for any fledgling OS.
One of Serenity's philosophical decisions is that as much as possible they build everything themselves from scratch. So even if NetBSD's drivers would be easy to adapt and have a compatible license they probably wouldn't go that route and would instead write their own drivers.
This is a noble and worthy goal in itself. Too much software development nowadays is just copying or gluing existing code. Clean-room implementations ensure we (collectively) still have the wits to rebuild and maintain the difficult parts.
It also means we burn time and energy "rediscovering" the same knowledge we failed to better preserve and communicate.
I'm all for greenfield when appropriate but I also get more quality work done standing on giants.
You may also discover a new and better way to solve an old problem.
For most things, build on the work of others, but every now and then we should check whether those "giants" actually found the best solution, so that we can change direction if we're heading down the wrong path.
The knowledge is already preserved. It's about keeping the muscles fit.
Rediscovering the same knowledge has kind of been the point of Serenity OS since the project's inception.
> Device drivers are a huge obstacle for any fledgling OS.
I've wondered if new/hobby OSes would fare better by starting out targeting a popular single-board computer like a Raspberry Pi? A mostly fixed set of hardware to make/get drivers for and test your system on.
I think the path Serenity took is the better one: initially targeting QEMU as the single supported platform. You get the same advantage of targeting a single platform for drivers, but contributors don't need to buy additional hardware, can develop using the platforms/tools they're accustomed to, can start instances faster than rebooting hardware, and don't need to deal with the hassles of remote debugging. Targeting a specific SBC as a second platform after a certain level of stability is reached is probably a good idea.
QEMU is a fixed set of hardware. And far easier to target than a Pi.
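For context, "targeting QEMU" mostly means you can hand the emulator a kernel image directly and get a serial console, with no bootloader or SD-card juggling. A minimal sketch of the sort of invocation a hobby-OS build script might use (the kernel image name and memory size here are assumptions, not from any particular project):

```shell
# Hypothetical kernel image name; the QEMU flags themselves are standard.
KERNEL=mykernel.elf
QEMU_ARGS="-kernel $KERNEL -m 256M -serial stdio -display none"

# Print the full command rather than exec'ing it, so this sketch runs anywhere.
echo "qemu-system-x86_64 $QEMU_ARGS"
```

The `-serial stdio` part is the big quality-of-life win: your kernel's UART output lands straight in your terminal, which is what makes the edit-build-boot loop so much faster than real hardware.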
The founder of SerenityOS created it as therapy and a pure “happiness” project. I am not sure actually using it was a real goal. So, he did the stuff he found interesting. That led him to writing display servers and web engines and crypto libraries and away from “real” drivers. He wrote his own C/C++ standard libraries and userland utilities but only enough driver code to make QEMU happy. It only ever ran in a VM on his Linux desktop. In the end, he found the web browser more interesting than the OS it was created for.
Very different project from Linux where what Linus wanted was an OS for his own computer. Linus was happy to leave the userland to others and still sticks to the kernel even now.
> In the end, he found the web browser more interesting than the OS it was created for.
To be fair, his career was heavily focused on browser development before he ended up in a period of unemployment (I can't recall the exact circumstances), at which point he developed SerenityOS as a means of meditation/to give him purpose.
He still works on the OS, he's just more fulfilled working in a realm he specializes in and has pivoted focus there.
You can follow his monthly SerenityOS YouTube updates leading up to the Ladybird announcement for a more detailed rundown.
That implies AArch64 support which many hobby OSes don't have, usually because the introductory osdev material is written largely for x86.
But yes, raspi is a good platform if you are targeting arm.
As I'm also designing an OS, my biggest piece of advice for anyone seriously considering it is to target two archs at once, in parallel. Then adding a third becomes much easier.
Raspberry Pi has a bizarre boot sequence and bringup process, much of which is not open and not implemented in open source code. I think it's probably not a great platform for this sort of thing, despite it being decently well-documented.
(And even then, its USB controller, for example, has no publicly-available datasheet. If you want to write your own driver for it, you have to read the Linux driver source and adapt it for your needs.)
For anyone that hasn't fallen into this rabbit hole yet it's a good one: raspberry pi started out as a kind of digital billboard appliance, so they chose a GPU with efficient 1080p decoding and strapped a CPU to the die. On power up the (proprietary) GPU boots first and then brings up the CPU.
That's as far as I got before discovering the Armbian project could handle all that for me. Coincidentally, that's also when I discovered QEMU, because 512MB was no longer enough to pip install pycrypto once they switched to Rust and cargo. My pip install that worked fine with earlier versions suddenly started crashing from running out of memory, so I got to use Armbian's facilities for creating a disk image by building everything on the target architecture via QEMU. Pretty slick. This was for an Orange Pi.
>GPU boots first and then brings up the CPU.
Is that the reason for the full screen of colors before you see the boot sequence? Never thought about that.
Yep.
The "color gamut" display, as you call it, is a GPU test pattern, created by start.elf (or start4.elf, or one of the other start*.elf files, depending on what is booting). That 3rd stage bootloader is run by the GPU which configures other hardware (like the ARM cores and RAM split).
You could probably skip some of the difficult parts if you bring in an existing bootloader that can provide a UEFI environment (that's how Linux and the BSDs boot on ARM Macs). But Serenity is all about DIY/NIH.
openSUSE does the same to boot on Raspberry Pi, and probably a few other distros do too.
It seems like a lot of the OS dev momentum is shifting to RISC-V. Lots of recent tutorials and courses going that way. Any links to your OS?
RISC-V is the new hotness, but it has limited usefulness in general-purpose osdev at the moment due to slower chips (for now) and the fact that not a lot of ready-to-go boards use them. I definitely think that's changing, and I plan to target RISC-V; I've just always had an x86 machine, and I have built some electronics that use aarch64, so I went with those to start.
Kernel is still in early stages but progress is steady - it's "quietly public". https://github.com/oro-os
> A mostly fixed set of hardware
But it's not. Over time they've revised the SoC (processor) and gone from 32- to 64-bit capability. The latest, the Pi 5, has a totally re-architected I/O subsystem, putting most I/O functions on their RP1 chip and connecting that to the SoC over PCIe.
And as already mentioned, the unusual boot sequence: The GPU takes control on power up and loads the initial code for the CPU.
All of the OSs I'm aware of that run on the Pi depend on "firmware" from the Raspberry Pi folk. Looking at the files in the folder that holds this stuff, it's pretty clear that every variant of the Pi has a file that somehow characterizes it.
> All of the OSs I'm aware of that run on the Pi depend on "firmware" from the Raspberry Pi folk. Looking at the files in the folder that holds this stuff, it's pretty clear that every variant of the Pi has a file that somehow characterizes it.
That's not very different from depending on the BIOS/UEFI firmware on a PC; the main difference is that older Raspberry Pi didn't have a persistent flash ROM chip, and loaded its firmware from the SD card instead. Newer Raspberry Pi do have a persistent flash ROM chip, and no longer need to load the firmware from the SD card (you can still do it the old way for recovery).
> And as already mentioned, the unusual boot sequence: The GPU takes control on power up and loads the initial code for the CPU.
Even this is not that unusual; AFAIK, in recent AMD x86 processors, a separate built-in ARM core takes control on power up, initializes the memory controller, and loads the initial code for the CPU. The unusual thing on Raspberry Pi is only that this separate "bootstrap core" is also used as the GPU.
> Even this is not that unusual; AFAIK, in recent AMD x86 processors, a separate built-in ARM core takes control on power up,
Good point. I think most modern motherboards (and all server boards) have a "management engine" that prepares the board on power up and even serves some functions during operation. I believe that that's what supports IPMI/iDRAC even when the host is running.
I don't think that changes the situation WRT the variety of H/W for the Pis through their history and even at present.
I've also argued in favor of that; I don't actually like Pis personally, but they're a super common, cheap enough, easy to acquire system and that's huge.
Raspberry Pis are highly proprietary, relying on closed hardware blobs...
Well, unless they want to target PowerPC and make interested parties buy a Raptor Talos workstation, what else is open enough for you? (Actually, I would support this.) Are there RISC-V systems that are blobless?
Hey, if they target PowerPC I could run it on my eMac!
It's also documented far better than any other easily accessible hardware. I'd pick the Pi over any other real hardware target.
I don't know where this idea that the RPi has good hardware documentation comes from. One glaring example is its DWC USB controller. Sure, it has a Linux driver that is open source, but its datasheet is not publicly available!
So if you want to develop your own driver for it, you have to second-guess its behavior by reading the driver's comments. This is bad.
What do you mean by documented? Sure, we have a general idea of how stuff works, and some implementations can even serve as a reference, but almost nothing is documented in an official sense. Your average Chinese SBC is much, much better documented, in the sense that the SoCs are at least officially documented. The Broadcom SoC isn't.
The few devs I know in that space prefer the various Beaglebone SBCs
+1 here. Peak BeagleBone DX to me was the BeagleBone White: full SoC documentation, and only a single USB cable to carry power, JTAG, and serial.
A gem of a device and hmm maybe I should write some code for one again..
I think the replies to this post may be missing the point? AIUI, the raspi CPU drivers being closed makes it actually pretty hard to write an open driver for it. So you would need Raspberry Pi or their CPU supplier to write the driver for you, which they wouldn't do for a small OS. It took multiple years to support the raspi 4 in mainline Linux, and AFAIK the raspi 5 still does not have a fully functioning mainline driver. That's why Raspberry Pi OS exists. You would pick a CPU that has open drivers, because it would be easier to write your own for a different operating system.
Sure, that's one of the reasons I don't like them. But AFAIK that's not an impediment to running a custom OS, so I think for a lot of projects the tradeoff is good.
And yet there is a wide variety of operating systems available for the RPi, so it doesn't seem to be too much of a hurdle in practice.
As a maintainer of one of these OSes that runs on the pi (though not the person that did the port): no, it's a pretty big issue.
What issues did the proprietary blobs cause versus porting to other arm based SBCs without them?
It's not so much the proprietary blobs, as the complete lack of documentation and debuggability for the peripherals. The PC platform, and several other SBCs, are either well documented, or at least give you the possibility of obtaining hardware with documentation.
That, combined with general flakiness (eg, power delivery issues for peripherals on older pis), and you end up with users blaming the software for hardware issues.
It's just not very fun to develop an OS for.
Probably not the Raspberry Pi, as it is one of the less conventional SBCs in terms of booting, and while its hardware is more documented than ever, it's still a less-documented Broadcom custom chip.
The rump kernel/anykernel concept is what you're thinking of: NetBSD's drivers can run in user space with minimal underlying support.
The solution is to pick 1 good set of hardware (that the software authors sell themselves if possible) and build drivers for that and only that.
It's basically what Apple does and has done from the start as far as I can tell. The only breakthrough consumer Unix-like thing out there.
System76 is another example of (almost) that.
Frame.work comes close although they don't focus on the OS as much.
Apple used to sell Macs with Intel (integrated), AMD, and Nvidia GPUs. In other departments, they also had multiple vendors and chips. The Apple Silicon transition streamlined the product lines, but before, there were tons of drivers, and macOS had to support all of them for 5+ years after the product release.
I believe it. Their libc is remarkably tidy, I've copied stuff out of it several times for various projects. Can't speak for the drivers, though.