This is excellent. Too often people guess at things when they could be more empirical about them. Ever since I learned the scientific method (I think 3rd or 4th grade) I was all about the 'let's design an experiment' :-).
Let me state up front that I have no idea why Wayland would have this additional latency. That said, having been active in the computer community at the 'birth' of X11 (yes I'm that old) I can tell you that there was, especially early on, a constant whine about screen latency. Whether it was cursor response or xterm scrolling. When "workstations" became a thing, they sometimes had explicit display hardware for just the mouse because that would cut out the latency of rendering the mouse in the frame. (not to mention the infamous XOR patent[1])
As a result of all this whinging, the code paths that were between keyboard/mouse input and their effect on the screen, were constantly being evaluated for ways to "speed them up and reduce latency." Wayland, being something relatively "new" compared to X11, has not had this level of scrutiny for as long. I'm looking forward to folks fixing it though.
Display devices (usually part of your GPU) still have "explicit display hardware just for the mouse" in the form of cursor planes. This was later generalized into overlay planes.
Planes can be updated and repositioned without redrawing the rest of the screen (the regular screen image is on the primary plane), so moving the cursor is just a case of committing the new plane position.
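For the curious, a minimal sketch of such a commit through libdrm's atomic API, assuming an open DRM master fd and plane property IDs already looked up via drmModeObjectGetProperties() (variable names are illustrative):

```c
/* Sketch: move a cursor plane by committing new CRTC_X/CRTC_Y values.
 * Nothing else is touched, so the primary plane is not redrawn. */
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int move_cursor_plane(int fd, uint32_t plane_id,
                      uint32_t prop_crtc_x, uint32_t prop_crtc_y,
                      int32_t x, int32_t y)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    if (!req)
        return -1;

    drmModeAtomicAddProperty(req, plane_id, prop_crtc_x, (uint64_t)x);
    drmModeAtomicAddProperty(req, plane_id, prop_crtc_y, (uint64_t)y);

    /* The new position latches at the next vblank. */
    int ret = drmModeAtomicCommit(fd, req, 0, NULL);
    drmModeAtomicFree(req);
    return ret;
}
```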
The input latency introduced by GNOME's Mutter (the Wayland server used here) is likely simply a matter of their input sampling and commit timing strategy. Different servers have different strategies and priorities there, which can be good and bad.
Wayland, which is a protocol, is not involved in the process of positioning regular cursors, so this is entirely display server internals and optimization. What happens on the protocol level is allowing clients to set the cursor image, and telling clients where the cursor is.
Protocols can bake in unfortunate performance implications simply by virtue of defining an interface that doesn't fit the shape needed for good performance. Furthermore, this tends to happen "by default" unless there is a strong voice for performance in the design process.
Hopefully this general concern doesn't apply to Wayland and the "shape" you have described doesn't sound bad, but the devil is in the details.
I don't think the Wayland protocol is actually involved in this. Wayland describes how clients communicate with the compositor. Neither the cursor nor the mouse is a client, so nowhere in the path between moving the mouse and the cursor moving on screen is Wayland actually involved.
The story is different for applications like games that hide the system cursor to display their own. In those cases, the client needs to receive mouse events from the compositor, then redraw the surface appropriately, all of which does go through Wayland.
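A minimal sketch of that client-side path, assuming a wl_seat has already been bound and wl_seat_get_pointer() called; handlers beyond enter/motion are no-op stubs:

```c
/* Sketch: a client that hides the system cursor and tracks motion
 * itself. Assumes the wl_pointer was obtained during seat setup. */
#include <stdint.h>
#include <wayland-client.h>

static void on_enter(void *data, struct wl_pointer *ptr, uint32_t serial,
                     struct wl_surface *surf, wl_fixed_t x, wl_fixed_t y)
{
    /* NULL cursor surface = hide the compositor's cursor here. */
    wl_pointer_set_cursor(ptr, serial, NULL, 0, 0);
    (void)data; (void)surf; (void)x; (void)y;
}

static void on_motion(void *data, struct wl_pointer *ptr, uint32_t time,
                      wl_fixed_t sx, wl_fixed_t sy)
{
    /* Store the position; the next frame repaints the in-game cursor,
     * so it runs at least a frame behind a hardware cursor plane. */
    double x = wl_fixed_to_double(sx), y = wl_fixed_to_double(sy);
    (void)data; (void)ptr; (void)time; (void)x; (void)y;
}

static void on_leave(void *d, struct wl_pointer *p, uint32_t s,
                     struct wl_surface *su)
{ (void)d; (void)p; (void)s; (void)su; }

static void on_button(void *d, struct wl_pointer *p, uint32_t s,
                      uint32_t t, uint32_t b, uint32_t st)
{ (void)d; (void)p; (void)s; (void)t; (void)b; (void)st; }

static void on_axis(void *d, struct wl_pointer *p, uint32_t t,
                    uint32_t a, wl_fixed_t v)
{ (void)d; (void)p; (void)t; (void)a; (void)v; }

/* frame/axis_source/... stubs are also needed with newer seat versions. */
static const struct wl_pointer_listener pointer_listener = {
    .enter = on_enter, .leave = on_leave, .motion = on_motion,
    .button = on_button, .axis = on_axis,
};
```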
According to Asahi Lina, X does an async ioctl that can update the cursor even during scanout of the current frame, while Wayland does atomic, synced updates of everything, cursor included. This has the benefit of no tearing, and the cursor's state is always in sync with the content, but it does add an average of 1 more frame latency (an update either lands just in time for the next frame, or it waits for the frame after).
This is not what Wayland does, it is what a particular display server with Wayland support decided to do.
Second, just to be clear, this only discusses mouse cursors on the desktop - not the content of windows, and in particular not games even if they have cursors. Just the white cursor you browse the Web with.
Anyway, what you refer to is the legacy DRM interface that was replaced by the atomic one. The legacy interface is very broken and does not expose new hardware features, but it did indeed handle cursors as their own magical entity.
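For contrast with the atomic sketch above, the legacy cursor path was a single dedicated call, which some drivers applied immediately, even mid-scanout:

```c
/* Sketch: the legacy "magical entity" cursor move. Not synced with the
 * rest of the frame, so it's lower latency but may tear the cursor. */
#include <stdint.h>
#include <xf86drmMode.h>

int move_cursor_legacy(int fd, uint32_t crtc_id, int x, int y)
{
    return drmModeMoveCursor(fd, crtc_id, x, y);
}
```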
The atomic API does support tearing updates, but cursor updates are currently rejected in that path, as drivers are not ready for it. At the same time, the current consensus is that tearing gets toggled on when a particular fullscreen game demands it, and games composite any cursors in their own render pass, so they're unaffected. Drivers will probably support this eventually, but it's not meant to be a general solution.
The legacy API could let some hardware swap the cursor position mid-scanout, possibly tearing the cursor, but just because the call is made mid-scanout does not mean that the driver or hardware would do it.
> but it does add an average of 1 more frame latency
If you commit just in time (display servers aim to commit as late as possible), then the delay between that commit and a tearing update made just before the pixels were pushed depends on the cursor position: if the cursor is at the first line shown, it makes no difference; if it's at the last, it'll be almost a frame newer.
Averaged over cursor positions, that means half a frame of extra latency, but with a steady sampling rate instead of a rolling shutter.
Good commit timing is usually the proper solution, and more importantly, it helps every other aspect of content delivery as well.
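A back-of-the-envelope sketch of what "commit as late as possible" means, with illustrative numbers (the 2 ms commit budget is a made-up constant, not anything a real compositor uses):

```c
/* Sketch: schedule input sampling and the atomic commit just before
 * the next vblank, leaving a safety budget for the commit itself. */
#include <stdint.h>

#define NSEC_PER_FRAME   16666667ULL /* 60 Hz refresh, illustrative */
#define COMMIT_BUDGET_NS  2000000ULL /* reserve 2 ms for the commit */

/* Latest point at which input can be sampled for the upcoming frame. */
static uint64_t next_commit_deadline(uint64_t last_vblank_ns)
{
    return last_vblank_ns + NSEC_PER_FRAME - COMMIT_BUDGET_NS;
}
```

The later the deadline, the fresher the cursor position in the frame; too late and you miss the vblank and slip a whole frame.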
Sure, it's what Gnome Wayland does, but the Wayland protocol does sort of mandate that every frame should be perfect, and the cursor has to match the underlying content, e.g. if it moves over text it has to change to indicate that the text is selectable.
I do believe it is a useful tradeoff, though.
Wayland supports tearing too. I'm not sure if GNOME does, but on KDE, if the application supports it, it can draw with tearing for less latency in fullscreen.
It seems like it should be possible to do the X async method without tearing.
When updating the cursor position, check whether the line currently being output overlaps with the cursor. If it doesn't, it's safe to update the hardware cursor immediately, without tearing. Otherwise, defer updating the cursor until later (vblank would work) to avoid tearing.
Of course, this assumes it's possible to read what row of the frame buffer is being displayed. I think most hardware would support it, but I could see driver support being poorly tested, or possibly even missing entirely from Linux's video APIs.
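Something like this sketch, where get_current_scanline() is the hypothetical driver query the previous paragraph worries about:

```c
/* Sketch of a tear-free async cursor move. get_current_scanline() is
 * hypothetical - there is no generic KMS call for it, which is exactly
 * the caveat above. */
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical query: which row the CRTC is scanning out right now. */
extern int get_current_scanline(int fd, uint32_t crtc_id);

bool can_move_cursor_now(int fd, uint32_t crtc_id,
                         int old_y, int new_y, int cursor_h)
{
    int line = get_current_scanline(fd, crtc_id);

    /* Tearing is only possible while the beam is inside the band of
     * rows the cursor covers at its old or new position. */
    int top = old_y < new_y ? old_y : new_y;
    int bottom = (old_y > new_y ? old_y : new_y) + cursor_h;

    return line < top || line > bottom;
    /* false: defer the update to the vblank-synced path instead */
}
```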
This would have to be done by the kernel driver for your GPU. I kind of doubt that it's possible (you're not really scanning out lines anymore with things like Display Stream Compression, partial panel self-refresh and weird buffer formats), and I doubt even more that kernel devs would consider it worth the maintenance burden...
I mean at some point it's a fundamental choice though, right? You can either have sync problems or lag problems and there's a threshold past which improving one makes the other worse. (This is true in audio, at least, and while I don't know video that well I can't see why it would be different.)
Well, there are opportunities to do the wrong thing, though, like sending an event to the client every time it gets an update, which means that high-poll-rate mice would DDoS less efficient clients. This used to be a problem in Mutter, but that particular issue was fixed.
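A fix along those lines (not necessarily Mutter's exact one) is to coalesce: accumulate deltas as they arrive and emit at most one motion event per output frame. A rough sketch, all names mine:

```c
/* Sketch of motion-event coalescing so an 8 kHz mouse can't flood a
 * client with thousands of events per second. */
#include <stdbool.h>

struct motion_accum {
    double dx, dy;   /* accumulated travel since the last flush */
    bool pending;
};

/* Called once per device event, possibly at a very high rate. */
static void motion_accumulate(struct motion_accum *a, double dx, double dy)
{
    a->dx += dx;
    a->dy += dy;
    a->pending = true;
}

/* Called once per output frame: forward a single combined event. */
static void motion_flush(struct motion_accum *a)
{
    if (!a->pending)
        return;
    /* here: send one pointer-motion event covering (dx, dy) of travel */
    a->dx = 0;
    a->dy = 0;
    a->pending = false;
}
```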
Yeah, Wayland isn't designed in such a way that would require any additional latency on cursor updates. The Wayland protocols almost entirely regard how applications talk to the compositor, and don't really specify how the compositor handles input or output directly. So the pipeline from mouse input coming from evdev devices and then eventually going to DRM planes doesn't actually involve Wayland.
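For reference, a minimal sketch of where that pipeline starts, reading motion straight from an evdev node (real compositors put libinput on top of this; the device path is just an example):

```c
/* Sketch: read relative mouse motion from an evdev device. Needs read
 * permission on the device node. */
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/input/event0", O_RDONLY); /* example device */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct input_event ev;
    while (read(fd, &ev, sizeof ev) == (ssize_t)sizeof ev) {
        /* A compositor would feed these deltas into the cursor position
         * and later commit it to the cursor plane - no Wayland involved. */
        if (ev.type == EV_REL && ev.code == REL_X)
            printf("dx=%d\n", ev.value);
        else if (ev.type == EV_REL && ev.code == REL_Y)
            printf("dy=%d\n", ev.value);
    }
    close(fd);
    return 0;
}
```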
Software works best when the developers take responsibility for solving user's problems.
> Wayland, which is a protocol
This is wayland's biggest weakness. The effect is diffusion of responsibility.
You're kind of getting tripped up on terminology. The OP didn't measure Wayland; they measured GNOME Shell which does take responsibility for its performance. Also, I'm not aware of any latency-related mistakes in Wayland/Weston (given its goal of tear-free compositing).
> You're kind of getting tripped up on terminology.
I'm not. My comment doesn't address the latency of gnome shell. I understand the boring technical distinctions between wayland and wayland client libraries and wayland display servers and gnome shell and mutter and sway, blah blah blah. Much like I understand that Linux is a kernel, that it is inspired by UNIX but is technically not a UNIX. I also understand that if someone describes themselves as a Linux user, they probably don't just mean that they have an Android phone or that the display controller in their dishwasher or wireless access point happens to include Linux the kernel.
The "well acksually wayland is just the name of the protocol" that emerges whenever a problem is brought up is a symptom of the underlying problem with wayland the system. The confusion that gives rise to these deflections is also a symptom of that problem.
By Conway's law, systems end up resembling the organisations that produce them. In this light, wayland the system seems to have been designed by people who don't want to work together. I can see a parallel with microservice architecture.
The distinction between protocol and implementation IS significant here.
Imagine comparing HTTP/1.1 vs HTTP/3. These are protocols, but in practice one compares implementations. I can pick curl for HTTP/1.1 but Python for HTTP/3, and HTTP/3 would very likely measure as slower.
Is that the protocol's fault?
>> ”well acksually wayland is just the name of the protocol" […] is a symptom of the underlying problem
> well acksually
It’s not the protocol’s fault, but that of the system and organisation that produced it.
Which brings up the other problem that Wayland introduced, that instead of one incredibly old inscrutable software stack doing these things there are now five (and counting!) new and insufficiently tested software stacks doing these things in slightly different ways.
It would have been nice if KDE and Valve could (would?) work together to reimplement KWin's features on top of wlroots. That would have basically made Gnome the sole holdout, and I imagine they'd eventually have switched to the extended wlroots as well, or at least forked it.
There's also EFL and one other I'm forgetting at the moment, but yeah.
All this proves is that it's possible for a protocol to not be the determining factor; which says nothing about whether it's possible that it _is_ a determining factor.
You’re quite right. We’d need similar benchmarks done with other compositors.
I very much doubt that Wayland makes a difference for this test; Wayland is for IPC between the client and server. Moving the cursor around is done by the server, without needing to talk to the client.
Is it a weakness of the web that HTTP is just a protocol specification?
The "problem" with this in Wayland is that before people 'ran Xorg with GNOME on top", now they just run GNOME the same way they run Chrome or Firefox to use HTTP - it will take time for people to get used to this.
It's the biggest strength!
Every time someone complains about wayland, there's someone informing you how achtually it isn't.
Your biggest strength is also your biggest weakness.
The organisation of Wayland sounds great, but it is very hard to share optimised code between compositors since key parts that affect performance (in this case latency) are largely developed outside of any shared library code.
The "organisation" of Wayland reminds me of the UNIX wars; this is going to get worse before it gets better.
SVR4 Wayland anyone?
xref the time the Rust rewrite of the GNU coreutils has taken, and arguably coreutils is a much easier problem.
See also AI discussions and holding a telephone incorrectly, also do we not own phones
It's no different from X11, which is also a protocol/spec with many implementations.
The difference is that x11 was a grab bag where you required tons of extensions to have a working system (Xorg), and no one could reliably maintain that mess of code and plugins, where every change could break another.
If there is no difference then how does the official reference implementation of Wayland, that nearly everyone uses daily, handle it? /s
Wayland as just a protocol... Isn't that the same argument they used when it shipped without copy/paste or a screensaver?
It's not an "argument", it's just a description of what Wayland is. But no, the correct protocol has had copy-paste since day one, and I dont remember there being issues with screensavers.
In the metaphor of a web server and a web browser, Wayland would be the HTTP specification. What you're usually interested in is what server you're running, e.g. GNOME's Mutter, KDE's Kwin, sway, niri, or what client you're running, e.g. Gtk4, Qt6, etc.
Wayland doesn't support screensavers, and the clipboard is managed by portals, no? idk why the complaint though; no one uses screensavers, and the clipboard being managed by a portal sounds more logical than creating a protocol for that.
> they sometimes had explicit display hardware for just the mouse because that would cut out the latency
That's still the case. The mouse cursor is rendered with a sprite/overlay independent of the window compositor's swapchain, otherwise the lag would be very noticeable (at 60Hz at least). The trickery starts when dragging windows or icons around, because that makes the swapchain lag visible. Some window compositors don't care any longer because a high refresh rate makes the lag less visible (e.g. macOS); others (e.g. Windows, I think) switch to a software mouse cursor while dragging.
GNOME has unredirection support, so I don't think this test is applicable to actual in-game performance.
A fullscreen app ought to be taking a fast path through / around the compositor.
This isn't about in-game performance, this is about the desktop feeling sluggish.
Hardware cursor is still a thing to this day on pretty much all platforms.
On sway, if you use the proprietary Nvidia drivers, it is often necessary to disable hardware cursors. I wonder if there is something similar happening here. Maybe GNOME on Wayland doesn't use hardware cursors?
Once upon a time XFree86 and Xorg updated the pointer directly in a SIGIO handler. But that's ancient history at this point, and nowadays I wouldn't expect Wayland and Xorg to have a hugely different situation in this area.
IIRC it all started going downhill in Xorg when Glamor appeared. Once the cursor rendering path was no longer async-safe for execution from the signal handler (which something OpenGL-backed certainly wouldn't be), the latency got worse.
I remember when, even if your Linux box was thrashing, your mouse pointer would stay responsive, and that was a reliable indicator of whether the kernel was hung or not. If the pointer prematurely became unresponsive, it was because you were on an IDE/PATA host and needed to enable unmask irq with hdparm. An unresponsive pointer in XFree86 was that useful a signal that something was wrong or misconfigured... ah, the good old days.
Asahi Lina's comment on the topic: https://lobste.rs/s/oxtwre/hard_numbers_wayland_vs_x11_input...
I'm not sure where this absolute "tearing is bad" and "correctness" mentality comes from. To me, the "correct" thing specifically for the mouse cursor is to minimize input lag. Avoiding tearing is for content. A moving cursor is in various places along the screen anyway, which will include biological "afterglow" in one's eyes and therefore there's going to be some ghosted perception of the cursor anyway. Tearing the cursor just adds another ghost. And at the same time, this ghosting is the reason keeping cursor input lag to a minimum is so important, to keep the brain's hand-eye coordination in sync with the laggy display.
That is a great comment and everyone should read it. It also demonstrates a common truism that system goals dictate performance.
Tl;dr: X bad and tears but Wayland with 1.5 frame latency (or more) good. Now, if you have a monitor (or TV) with 30i refresh rate, you're screwed.
1.5 is the worst case, and implementation-specific.
Reminds me of how we used 'dir' command to test computer speed by watching how fast it would scroll by.
Or later, seeing how long it took to load /usr/bin in a filemanager.
How old is Wayland?
I'll be reading A Dream of Spring in my grave at this rate.
I understand I'm complaining about free things, but this is a forced change for the worse for so long. Wayland adoption should have been predicated on a near universal superiority in all input and display requirements.
Intel and AMD and Nvidia and the Arm makers should be all in on a viable desktop Linux as a consortium. Governments should be doing the same, because a secure Linux desktop is actually possible. It is the fastest path to showcasing their CPUs and 3D bling, advanced vector/compute hardware.
Wayland simply came at a time to further the delay of the Linux desktop, at a time when Windows was attempting to kill Windows with its horrid tiles and Apple staunchly refused a half billion in extra market cap by offering osx on general x86.
>How old is Wayland?
This argument doesn't make sense, because Wayland started as a hobby project, not as a replacement for X11. It was only after it got traction, and other people and companies started contributing, that it mattered.
Wayland is a protocol. The problems people complain about are generally implementation details specific to GNOME or KDE or (in general) one particular implementation.
There's rarely any such thing as "universal superiority", usually you're making a tradeoff. In the case of X vs Wayland it's usually latency vs. tearing. Personally I'm happy with Wayland because there was a time when watching certain videos with certain media players on Linux was incredibly painful because of how blatant and obtrusive the tearing was. Watching the same video under Wayland worked fine.
Early automobiles didn't have "universal superiority" to horses, but that wasn't an inhibitor to adoption.
"Wayland is a protocol" is exactly the problem. Protocols suck; they just mean multiplying the possibility of bugs. A standard implementation is far more valuable any day.
With X11, it was simple: everybody used Xfree86 (or eventually the Xorg fork, but forks are not reimplementations) and libX11 (later libxcb was shimmed underneath with careful planning). The WM was bespoke, but it was small, nonintrusive, and out of the critical path, so bugs in it were neither numerous nor disastrous.
But today, with Wayland, there is no plan. And there is no limit to the bugs, which must get patched time and time again every time they are implemented.
X had garbage handling of multiple monitors and especially multiple monitors with different DPIs, and there was "no plan" to deal with that either. Nobody wanted to work on the X codebase anymore. The architecture bore no resemblance to the way any other part of the desktop stack (or hardware) works.
Most of the garbage aspect is because toolkits refuse to support per-monitor DPI on X11, with the argument that "Wayland is just around the corner", for decades now.
For example, Qt does per-monitor DPI just fine on X11; it's just that the way to specify/override the DPI values sucks (an environment variable, e.g. QT_SCREEN_SCALE_FACTORS).
This stupid decision is going to haunt us until the end of time, since Xwayland will have no standardized way to tell its clients about per-display DPI.
It's not useful if you have to specify a scaling factor before the application has started, when the application can move monitors.
This is feasible on Wayland; X draws one large widescreen display.
X could do several different screens; I did have this working once. However, moving an application to a different display was then impossible (an app could do it, but it was a lot of work, so nobody bothered). A few CAD programs supported two screens, but they were separate and the two didn't meet.
Most people want to drag windows between screens and sometimes even split down the middle. One large display supports that much more easily, so that is what everyone switched to in the late 1990s.
I was using it that way until about 2020. (Mint 13 MATE supported it, but it seems that capability was lost somewhere along the line. A shame, because I have a dual-monitor setup where the second monitor is often displaying the picture from a different device, so in that situation I absolutely cannot have applications deciding to open on the busy-elsewhere screen. I miss being able to set a movie running on one monitor and have it not disappear if I flipped virtual desktops on the other!)
X11 used to provide separate displays, but at some point due to hardware changes (and quite probably due to prominence of intel hardware, actually) it was changed to merged framebuffer with virtual cut out displays.
In a way, Wayland in this case developed a solution for an issue its creators brought into this world in the first place.
It can still provide separate displays. The problem is you couldn't do something like drag a window from display 1 to 2°. IIRC it's also annoying to launch two instances of a program on both displays. The hacky merged-framebuffer thing is a workaround for these problems. But you can have independent DPIs on each display.
° For most programs.
Yeah, there were certainly tradeoffs. It's much harder to use separate displays now, though - last time I tried, I could address the two displays individually (":0.0" and ":0.1") if I launched X on its own, but something (maybe the display manager?) was merging them into a single virtual display (":0") as soon as I tried to use an actual desktop environment. (This was Mint 20, MATE edition, a few years ago - I gave up and reverted to a single-monitor setup at that point.)
Yes. It just proves that all you needed was a better way to specify the per-monitor DPI, one that could be updated afterwards, or even set by the WM on windows.
> It's not useful if you have to specify a scaling factor before the application has started, when the application can move monitors.
Windows does this. Try using 2 monitors with 2 different scaling factors in Windows: it is hit or miss. 100% and 150% works; 100% and 125% doesn't.
>Nobody wanted to work on the X codebase anymore.
That, I think, is the main issue. Nobody wants to work with GTK 1, or GTK 2, or GTK 3 anymore. Nobody wants to work with Qt 1, or Qt 2, or Qt 3 or Qt 4 anymore. Everybody wants the new shiny toy. Over and over again.
It is CADT all over. Earlier X was developed by an industry consortium. Now Wayland is a monopoly pushed by RedHat.
> Now Wayland is a monopoly pushed by RedHat.
Pushed, but it still seems to flail and take too long to do the basic stuff.
Software has bugs and water is wet. Wait till you hear about HTTP, TCP, UDP, IP, torrents, etc... And "simple" is not really a term I would apply to X11. I mean, it's fine, but even just the ecosystem surrounding X is convoluted, outdated and absurd. Things like xinit, startx, .Xauthority, xresources, xhost etc. are all a mess.
> Wayland is a protocol. The problems people complain about are generally implementation details specific to GNOME or KDE or (in general) one particular implementation.
I feel like at some point this is a cop-out. Wayland is a protocol, but it's also a "system" involving many components. If the product as a whole doesn't work well, then it's still a failure, regardless of which component's fault it is.
It's a little like responding to someone saying we haven't reached the year of Linux on the desktop with: well, actually, Linux is just the kernel and it's been ready for the desktop for ages. Technically true, but also missing the point.
Wayland. Solving yesterday's problems, tomorrow.
unlike X, which couldn't solve yesterday's problems.
X is fine, most of the problems people bring up are niche and minor. Meanwhile, Wayland induces problems of its own, like breaking all sorts of accessibility systems and multi-window X applications with no solution in sight.
To be fair, this Wayland issue is also niche. The author of the linked article wrote that it is reported by a really small percentage of users; most people do not notice it.
> multi-window X applications
I believe you. Can you share an example? To be clear, I'm pretty sure this can be done with Gtk and Qt, but maybe you are talking about older apps written directly in Xlib or Xt?
It's not the toolkit (as they work on X11 and other OSes with the existing toolkits), it's Wayland that's the issue (there are a series of Wayland protocols to implement this, but they are being blocked).
The key detail in the “Wayland is a protocol” point is that there are several other implementations, some of them extremely mature. The implementation being tested here isn't exactly known to be a good one.
If there were a single Wayland implementation in existence, I’d agree with your sentiment.
What components? Wayland is literally an XML protocol that turns an XML file into a method of communication. libwayland-server and libwayland-client only handle communication and the internal event loop. It's completely up to the developer to write the implementations of these functions and register them in the server.
Then a client is going to query the server and request to do stuff via a unix socket. In fact, you don't even need libwayland; you can raw-dog it over sockets manually. The idea is that there are standard protocols that can be queried and used, and you can implement this in any environment you want to. You could write the "frontend" in HTML and JS and run a wayland compositor on the web (which has been done [1]), you could do it with text or anything really; most people use some graphics stack.
The components are:
- the compositor, of which there are multiple implementations (gnome, kde, all the wlroots compositors)
- the client, which often uses one of several toolkits (gtk, qt, several smaller frameworks, or even directly using the protocol)
- the wayland protocol (or rather protocols, because there are several extensions) itself
- other specifications and side channels for communication, in particular dbus.
Many issues (although I don't think the one in the OP) are due to the protocol being underspecified, so the client and compositor disagree about some semantics, or there isn't even a standard way to accomplish something across all compositors.
Nit: Wayland isn't an XML protocol. The "calls" and their arguments are described in XML, but the data is transmitted in a fairly simple binary encoding.
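Concretely, every message on the socket starts with an 8-byte header; a sketch of that layout (struct and helper names are mine, the packing is from the published wire format):

```c
/* Wayland wire format header: two 32-bit words in host byte order.
 * Arguments (ints, fixed-point, strings, fds...) follow the header. */
#include <stdint.h>

struct wire_header {
    uint32_t object_id;   /* which protocol object the message targets */
    uint32_t size_opcode; /* upper 16 bits: message size in bytes
                             (header included); lower 16: opcode */
};

static inline uint16_t wire_size(struct wire_header h)
{
    return (uint16_t)(h.size_opcode >> 16);
}

static inline uint16_t wire_opcode(struct wire_header h)
{
    return (uint16_t)(h.size_opcode & 0xffff);
}
```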
I mean most of the things are the fault of badly designed or non-existent protocols:
- Problems with non-Western input systems
- Accessibility
- Remote control (took around 2 years to become stable, I think?)
- Bad color management
Then there's the things that did work in X11 but not in Wayland:
- Bad support for keymapping (the input library says keymapping should be implemented by the compositor, GNOME says not in scope, so we have a regression)
- Bad Nvidia support for the first two years? Three years?
While these things are compositor/hardware-vendor faults, the rush to use Wayland, with nearly every distro making it the default, forced major regressions, and Wayland kinda promised to improve on the X11-based experience.
I have run Wayland since it was available for testing on Fedora Workstation and I have had zero problems inputting Japanese and Chinese.
With regards to accessibility, what problems have you had exactly?
> I have had zero problems inputting Japanese and Chinese.
That may be fine.
Neo2 does not work. Neo has 3 modifier keys; Gnome/Mutter/Wayland/Whatever only supports two. Neo2 has a compose key; Wayland does not honor it.
I use mod4 for navigation (arrow keys, page up) and compose for Slavic (read: Polish) input (źżąę).
> With regards to accessibility, what problems have you had exactly?
Fedora shipped a broken screen reader for 8 years.
I think that gnome has had a built-in IME, but at least for a long time it wasn't possible to use a third-party system with gnome, or use gnome's with other compositors. And I'm pretty sure the situation was the same for screen readers and on-screen keyboards. The wlroots project created their own protocols to support external applications providing such features, since that is out of scope for a compositor like sway, but there are still missing pieces.
2 finger scroll doesn't work on my thinkpad model.
Not a bug, apparently. https://gitlab.freedesktop.org/libinput/libinput/-/issues/10...
> the rush to use Wayland, with nearly every distro making it the default,
Which rush? It has been done by only a small fraction of distros, like Fedora, after years of development of the first wayland compositors. Fedora's main purpose has always been to implement bleeding-edge tech early so that bugs get found and fixed before people using more stable distros have to suffer from them.
Nobody has been forced into any regression, x11 has continued to be available until now, and there is no sign that the most conservative distros will drop x11 support anytime soon.
Yes, and to the parent's point, it was a CHANGE in protocol.
I get that there was cruft in X. But the moment selected was a barrier to Linux desktop adoption precisely when the greatest opportunity in decades was present.
And the desktop was reimplemented.
Now, in this period, KDE and GNOME both decided to do rewrites, Ubuntu did their own desktop, got it up to snuff, and abandoned it. The lunacy wasn't just Wayland.
If we are complaining that the gnome compositor sucks... I mean, should that be the goddamn reference implementation? What percent of desktops are gnome, 80% at least? If the gnome compositor isn't ready for primetime, then Wayland isn't ready for primetime.
>If the gnome compositor isn't ready for primetime, then Wayland isn't ready for primetime.
I use Sway, which uses a different compositor than Gnome. I would like to see similar results for wlroots, Sway's compositor library, though I'm not actually interested enough to do the experiment (I guess that would be comparing Sway with i3). Cursor lag in Sway is not enough to bother me. I have on occasion used Gnome on the same machine(s), and never been bothered by lag.
As others have pointed out, Wayland is a protocol, not a compositor.
"As others have pointed out, Wayland is a protocol, not a compositor."
But the Wayland protocol requires a compositor, so here we are.
Nvidia support is still poor (at least on the latest cards); I'm forced to use X or I get tons of glitches. I need the proprietary drivers for machine learning.
Not that I mind particularly: X is fine and everything works as expected. I can even have the integrated AMD GPU take care of the desktop while the Nvidia GPU does machine learning (I need all the VRAM I can get; the desktop alone would be 100-150 MB of VRAM) - then I start a game on Steam and the Nvidia GPU gets used.
Funnily enough, I had Wayland enabled by default when I installed the system, and I didn't understand why I was getting random freezes and artifacts for weeks. Then I realized I was not using X11.
> there was a time when watching certain videos with certain media players on Linux was incredibly painful because of how blatant and obtrusive the tearing was
It was because of the crap LCD monitors (5 to 20 ms GtG) and how they are driven. The problem persists today. The (Wayland) solution was to render and display a complete frame at a time without taking into account the timings involved in hardware (you always have a good static image, but you have to wait).
I tried Tails (comes with some Wayland compositor) on a laptop. The GUI performance was terrible with only a Tor browser open and one tab.
If you do not care about hardware, you will, sooner or later, run into problems. Not everybody has your shiny 240 Hz monitor.
> How old is Wayland?
About 16 years old, for comparison, X is 40.
Hm, X has worked fine for 20 years. Wayland is still a protocol after 16. /s
> Wayland adoption should have been predicated on a near universal superiority in all input and display requirements.
Totally agree. The people saying "Wayland is a protocol" miss the point. Wayland is a protocol, but Wayland adoption means implementing stuff that uses that protocol, and then pushing it onto users.
Measure twice, cut once. Look before you leap. All that kind of thing. Get it working FIRST, then release it.
You have to release things like this in parts because it needs too many external people to do things to make it useful. Managing those parts is something nobody has figured out, and so people like you end up using it before it is ready for your use and then complaining.
When I say "release" I mean "release to users". You can release stuff to other developers, no problem. But it should all come with large warnings saying "This is not for daily use". The failure is when distros like Fedora start not only shipping it, but then saying they're going to drop the working alternative before Wayland is actually ready to do everything that X does.
(Also, I don't use Wayland. I mean I tried it out but don't see any real benefit so I don't use it regularly.)
> You have to release things like this in parts because it needs too many external people to do things to make it useful.
30 years ago X came with a server and some clients. Why is it so hard to do this for wayland today?
>but this is a forced change for the worse for so long
Can you explain who is forced to do what in that context?
AMD states that bugs only get fixed for Wayland.
Coincidentally I have got a graphics driver that likes crashing on OpenGL (AMD Ryzen 7 7840U w/ Radeon 780M Graphics)
> Wayland simply came at a time to further the delay of the Linux desktop
I can't tell if you are serious or not.
Serious
> I understand I'm complaining about free things, but this is a forced change for the worse for so long.
Then write code.
Asahi Lina has demonstrated that a single person can write the appropriate shims to make things work.
Vulkan being effectively universally available on all the Linux graphics cards means that you have the hardest layer of abstraction to the GPU taken care of.
A single or small number of people could write a layer that sits above Wayland and X11 and does it right. However, no one has.
You're arguing against Wayland, but for a more secure Linux desktop? I recommend you spend more time getting to know the X11 protocol then, because it has plenty of design decisions that simply cannot be secured. The same people who used to develop XFree86 designed Wayland to fix things that could not be fixed in the scope of X11.
> the X11 protocol [...] has plenty of design decisions that simply cannot be secured.
I've been hearing this for over a decade now. I don't get it. Just because xorg currently makes different clients aware of each other and broadcasts keypresses and mouse movements to all clients and allows screen capturing doesn't mean it has to. You could essentially give every application the impression that they are the only thing running.
It might seem difficult to implement, but compare it to the effort that has gone into wayland across the whole ecosystem. Maybe that was the point - motivating people to work on X was too difficult, and the wayland approach manages to diffuse the work out to more people.
I was really bullish on Wayland 10 years ago. Not so much any more. In retrospect it seems like a failure in technical leadership.
X always had the capability to isolate clients, but it is not used; it would need some work, which nobody does because of Wayland.
Some aspects of the client isolation are used by default when doing X11 forwarding via SSH. A remote keylogger will not work for instance.
> You could essentially give every application the impression that they are the only thing running.
> It might seem difficult to implement
This is exactly what Qubes OS did (my daily driver): https://forum.qubes-os.org/t/inter-vm-keyboard-isolation/315...
It'll be challenging to even figure out which one of the things connecting to $DISPLAY is the real window manager. Good luck on your lonely[1] journey!
[1]: The people who actually developed Xorg are now working on various Wayland-related things.
> It'll be challenging to even figure out which one of the things connecting to $DISPLAY is the real window manager.
I suspect it would be less challenging than writing a whole new wayland server.
Off the top of my head, I'd use a separate abstract domain socket for the window manager including some UUID, and then pass that to the window manager when launching it.
You could create these sockets on demand - one for each security context. On Linux, different security contexts will typically either have different UIDs - in which case filesystem permissions would be sufficient - or they have different mount namespaces - in which case you make different sockets visible in different namespaces.
For SSH forwarding you could have SSH ask the X server for a new socket for forwarding purposes - so remote clients can't snoop on local clients.
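A sketch of that per-context abstract socket idea (token generation elided; all names mine):

```c
/* Sketch: bind an abstract-namespace unix socket whose name embeds a
 * per-security-context token, as described above. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int bind_display_socket_for_context(const char *token)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    /* sun_path[0] stays '\0' from the initializer: the leading NUL
     * selects the Linux abstract namespace, so no filesystem node is
     * created and the name is unguessable if the token is. */
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path + 1, sizeof(addr.sun_path) - 1,
             "x11-wm-%s", token);

    socklen_t len = offsetof(struct sockaddr_un, sun_path)
                    + 1 + (socklen_t)strlen(addr.sun_path + 1);
    if (bind(fd, (struct sockaddr *)&addr, len) < 0 || listen(fd, 8) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```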
> Good luck on your lonely[1] journey!
>
> [1]: The people who actually developed Xorg are now working on various Wayland-related things.
This is what I mean by a failure of technical leadership.
> For SSH forwarding you could have SSH ask the X server for a new socket for forwarding purposes - so remote clients can't snoop on local clients.
SSH pretty much already does this. By default (using -X), X11 forwarding is in untrusted mode, which makes certain unsafe X11 extensions unavailable. So remote clients already cannot snoop the whole keyboard input.
"You could create these sockets on demand - one for each security context. On linux typically a different security contexts will either have different UIDs - in which case filesystem permissions would be sufficient - or they have different mount namespaces - in which case you make different sockets visible in different namespaces."
This is reminiscent of how Trusted Solaris[0] implements Mandatory Access Control (MAC) a la Orange Book[1].
[0] https://www.oracle.com/technetwork/server-storage/solaris10/...
[1] https://public.milcyber.org/activities/magazine/articles/202...
> Wayland, being something relatively "new" compared to X11, has not had this level of scrutiny for as long. I'm looking forward to folks fixing it though.
Part of the problem will doubtless be the USB (and Bluetooth) stacks including the device hardware and firmware. When keyboards and mice were serial devices with their own interrupt making the code path fast was achievable. I'm not so confident that modern peripheral stacks can be made to run with the same priority. It becomes even more challenging for devices sitting on a bus with multiple device classes or multiple protocols between the device and driver (USB -> Bluetooth -> Mouse).
I hope devices can be sped up but we're a long way from a keypress triggering an interrupt and being handled in tens of milliseconds[0].
The article you're commenting on is about someone running X11 and Wayland on the same computer with the same mouse and experiencing higher input latency on Wayland. I don't think differences between serial and USB mice are relevant here.
As a systems guy I completely agree with this. One of the things that was interesting about X11 is that it draws an API "wall" around the stuff involved in presentation. Many folks don't remember this, but we had X "terminals": a computer system that did nothing else except render the screen and handle the input.
In that compartmentalization there was nothing else competing for attention. You didn't get USB bus contention because there was just the one mouse, ever, on the line between the user and the X11 server in the X terminal.
USB devices today are dummy fast, USB 3.1 signals at 10 GHz. You just need to check the USB queue more often :D
Alternatively you move straight up to USB 4, where we get pci-e interrupts again! :)
> USB devices today are dummy fast,
In theory, yes. Not so, in practice.
> USB 3.1 signals at 10 GHz.
Yes but a mouse works at 12 MHz.