Fun post! An alternative to using futexes to store thread queues in kernel space is to store them yourself. For example, the parking_lot[0] Rust crate, inspired by WebKit[1], uses only one byte to store the unlocked/locked/locked-contended state, and under contention uses the address of that byte to index into a global open-addressing hash table of thread queues. You look up the byte's bucket, lock that bucket, add the current thread to the queue, unlock, and go to sleep. Because each thread can be waiting in at most one queue at a time, you can keep the load factor very low (keeping the mutex fast) and form the thread queues out of linked lists of thread-local nodes. Leaking the old table on resize helps make resizing safe.
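Roughly, the scheme looks like the sketch below. This is a simplified illustration, not the real parking_lot internals: it uses a `HashMap` of `Vec`s behind a std `Mutex` as the global table, where the actual crate uses an open-addressing table with per-bucket locks and intrusive queue nodes kept in thread-locals, and all names here are made up.

```rust
// Simplified sketch: the lock is a single byte; the contended path parks
// threads in a global, user-space table keyed by the byte's address.
use std::collections::HashMap;
use std::sync::atomic::{AtomicBool, AtomicU8, Ordering};
use std::sync::{Arc, Mutex, OnceLock};
use std::thread::{self, Thread};

const UNLOCKED: u8 = 0;
const LOCKED: u8 = 1;
const CONTENDED: u8 = 2;

struct Waiter {
    thread: Thread,
    woken: Arc<AtomicBool>,
}

// Stand-in for the global hash table of thread queues.
fn parking_table() -> &'static Mutex<HashMap<usize, Vec<Waiter>>> {
    static TABLE: OnceLock<Mutex<HashMap<usize, Vec<Waiter>>>> = OnceLock::new();
    TABLE.get_or_init(|| Mutex::new(HashMap::new()))
}

pub struct TinyMutex {
    state: AtomicU8, // the entire lock is this one byte
}

impl TinyMutex {
    pub const fn new() -> Self {
        Self { state: AtomicU8::new(UNLOCKED) }
    }

    pub fn lock(&self) {
        // Fast path: an uncontended acquire never touches the table.
        if self
            .state
            .compare_exchange(UNLOCKED, LOCKED, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
        {
            return;
        }
        // Slow path: mark the lock contended and park until woken.
        loop {
            if self.state.swap(CONTENDED, Ordering::Acquire) == UNLOCKED {
                return; // we grabbed it (in the contended state, which is fine)
            }
            self.park();
        }
    }

    fn park(&self) {
        let key = &self.state as *const AtomicU8 as usize;
        let woken = Arc::new(AtomicBool::new(false));
        {
            let mut table = parking_table().lock().unwrap();
            // Re-check the byte while holding the table lock so an unlock
            // racing with us can't miss our queue entry (no lost wakeup).
            if self.state.load(Ordering::Relaxed) != CONTENDED {
                return;
            }
            table.entry(key).or_default().push(Waiter {
                thread: thread::current(),
                woken: woken.clone(),
            });
        }
        while !woken.load(Ordering::Acquire) {
            thread::park(); // tolerates spurious wakeups
        }
    }

    pub fn unlock(&self) {
        if self.state.swap(UNLOCKED, Ordering::Release) == CONTENDED {
            // Someone may be parked on our address: wake one waiter.
            let key = &self.state as *const AtomicU8 as usize;
            let waiter = {
                let mut table = parking_table().lock().unwrap();
                table
                    .get_mut(&key)
                    .and_then(|q| (!q.is_empty()).then(|| q.remove(0)))
            };
            if let Some(w) = waiter {
                w.woken.store(true, Ordering::Release);
                w.thread.unpark();
            }
        }
    }
}
```

The important parts: the lock itself is a single `AtomicU8`, the uncontended path is one atomic op on that byte, and a parking thread re-checks the state under the table lock so a wakeup can't be lost.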
As a result, uncontended locks work the same as described in the blog post above; under contention, performance is similar to a futex too. But now your locks are only one byte in size, regardless of platform – while Windows allows 1-byte futexes, they're always 4 bytes on Linux, and IIRC Darwin doesn't quite have an equivalent API (but I might be wrong there). You also have more control over parked threads if you want to implement different fairness criteria, reliable timeouts, or parking callbacks.
One drawback is that this can only easily be used within a single process, while futexes, at least on Linux, can be shared between processes.
I've written a blog post[2] about using futexes to implement monitors (reëntrant mutexes with an associated condvar) in a compact way for my toy Java Virtual Machine, though I've since switched to a parking-lot-like approach.
[0]: https://github.com/amanieu/parking_lot
[1]: https://webkit.org/blog/6161/locking-in-webkit
[2]: https://specificprotagonist.net/jvm-futex.html
> But now your locks are only one byte in size,
That's not a very useful property, though. Because inter-core memory traffic works at cache-line granularity, packing more than one lock into a cache line is a Bad Idea™. Potentially it allows you to pack the data being protected by a lock in with that lock... but alignment rules mean you're invariably going to end up spending 4 or 8 bytes (via a regular integer or a pointer) on that lock anyway.
In Rust the compiler will reorder fields automatically, so your 1-byte mutex would be placed after any multi-byte data to avoid padding.
That's typically not true due to the `Mutex<T>` design: the `T` gets padded to its alignment, then placed into the `struct Mutex` along with the signaling byte, and that struct is padded again before being put into the outer struct.
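A rough illustration of why (assuming parking_lot as a dependency; the struct names are invented and exact sizes depend on the target and crate version, but this is the typical 64-bit outcome):

```rust
use std::mem::size_of;

#[allow(dead_code)]
struct Payload {
    a: u64,
    b: u32,
} // 12 bytes of data, size 16 because of the u64's 8-byte alignment

#[allow(dead_code)]
struct HandPacked {
    a: u64,
    b: u32,
    lock_byte: u8,
} // the extra byte fits into Payload's padding: still 16 bytes

fn main() {
    println!("Payload:        {}", size_of::<Payload>()); // 16
    println!("HandPacked:     {}", size_of::<HandPacked>()); // 16
    // Mutex<Payload> stores the already-padded Payload next to the 1-byte
    // raw lock, then rounds the whole struct up to 8-byte alignment:
    // typically 24 on 64-bit targets.
    println!("Mutex<Payload>: {}", size_of::<parking_lot::Mutex<Payload>>());
}
```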
You can avoid this with a `parking_lot::Mutex<()>` or `parking_lot::RawMutex` guarding other contents, but then you need to use `unsafe` because the borrow checker doesn't understand what you're doing.
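Something like this minimal sketch (assuming the parking_lot and lock_api crates; `Counter` and its fields are invented names): the 1-byte raw lock sits right next to the data it guards, but you now carry the proof obligation the borrow checker normally handles for you.

```rust
use lock_api::RawMutex as _; // bring the trait's lock()/unlock() into scope
use parking_lot::RawMutex;
use std::cell::UnsafeCell;

pub struct Counter {
    lock: RawMutex,         // one byte
    value: UnsafeCell<u64>, // guarded by `lock`, but the compiler can't check that
}

// Safety: all access to `value` goes through `lock`.
unsafe impl Sync for Counter {}

impl Counter {
    pub const fn new() -> Self {
        Counter {
            lock: RawMutex::INIT,
            value: UnsafeCell::new(0),
        }
    }

    pub fn increment(&self) {
        self.lock.lock();
        // Safety: we hold `lock`, so no other thread is touching `value`.
        unsafe { *self.value.get() += 1 };
        // Safety: we locked it above and haven't unlocked since.
        unsafe { self.lock.unlock() };
    }
}
```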
I coincidentally was discussing this elsewhere recently: https://www.reddit.com/r/rust/comments/1ky5gva/comment/mv3kp...
You could use CAS loops throughout to make your locks effectively "less than one byte" in size: the lock still occupies one byte, or perhaps one machine word, but the free bits in that byte/word can store arbitrary data. (This works because a CAS loop can implement any read-modify-write operation on atomically sized data. But CAS loops are somewhat slower than special-cased hardware atomics, so this is a bad idea for locks that are performance-sensitive.)
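For example, here's a test-and-CAS bit spinlock that uses bit 0 of a word as the lock and leaves the remaining bits for data, such as a suitably aligned pointer (an illustrative sketch with invented names; the caller has to keep the low bit of the data free, which any pointer with alignment of at least 2 does):

```rust
use std::hint;
use std::sync::atomic::{AtomicUsize, Ordering};

const LOCK_BIT: usize = 1;

pub struct BitLocked {
    // Bit 0 is the lock; the remaining bits hold caller data.
    word: AtomicUsize,
}

impl BitLocked {
    pub const fn new(data: usize) -> Self {
        Self { word: AtomicUsize::new(data & !LOCK_BIT) }
    }

    /// Acquire the lock and return the current data bits.
    pub fn lock(&self) -> usize {
        loop {
            let old = self.word.load(Ordering::Relaxed);
            if old & LOCK_BIT == 0
                && self
                    .word
                    .compare_exchange_weak(old, old | LOCK_BIT, Ordering::Acquire, Ordering::Relaxed)
                    .is_ok()
            {
                return old; // data bits, lock bit clear
            }
            hint::spin_loop();
        }
    }

    /// Release the lock, storing (possibly updated) data bits.
    pub fn unlock_and_store(&self, data: usize) {
        // Only the lock holder ever changes the data bits, so a plain
        // release store that clears the lock bit is enough.
        self.word.store(data & !LOCK_BIT, Ordering::Release);
    }
}
```

Since it spins rather than parks, this is only sensible for tiny, rarely contended locks like per-node locks, not anything performance-sensitive under contention.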
Single bit spin locks to protect things like linked list nodes are not unheard of.
Enough to be able to pack a mutex and a pointer together, for example. If you're carefully packing your structs, a one-byte mutex is great.
Yup, that's what I'm doing - storing the two bits needed for an object's monitor in the same word as its compressed class pointer. The pointer doesn't change over the lock's lifetime.
But you can embed this 1-byte lock into other, bigger objects (e.g. the high bytes of a pointer).
With 4-byte locks you run into the exact same false-sharing issues.
Doesn't the futex2 syscall allow 1-byte futexes on recent kernels?
Double-checked: nope. The API is there and the patch to implement them has been posted multiple times: https://lore.kernel.org/lkml/20241025093944.707639534@infrad...
But the small-futex patch won't go forward until some users say they want/need the feature.