Agreed. Windows 2000 Server through Windows 7 was the peak of Microsoft operating systems.
By Windows 2000 Server, they finally had the architecture right and had flushed out most of the 16-bit legacy.
The big win with Windows 7 was that they finally figured out how to make it stop crashing. There were two big fixes. First, the Static Driver Verifier. This verified that kernel drivers couldn't crash the rest of the kernel, and it was the first large-scale application of proof-of-correctness technology. Drivers could still fail, but not overwrite other parts of the kernel. This put a huge dent in driver-induced crashes.
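As a concrete illustration (a hypothetical sketch of my own, not code from any real driver), this is the kind of rule SDV checks: every path through a dispatch routine must release any spinlock it acquired. The device-extension type and field names below are invented, and it only builds against the WDK, but a rule like SDV's spinlock rule would flag the early return statically, without ever loading the driver:

    #include <ntddk.h>

    /* Hypothetical per-device state; the names are made up for this example. */
    typedef struct _MY_DEVICE_EXTENSION {
        KSPIN_LOCK Lock;
        BOOLEAN    Stopped;
    } MY_DEVICE_EXTENSION, *PMY_DEVICE_EXTENSION;

    NTSTATUS MyDispatchWrite(PDEVICE_OBJECT DeviceObject, PIRP Irp)
    {
        PMY_DEVICE_EXTENSION ext = DeviceObject->DeviceExtension;
        KIRQL oldIrql;

        KeAcquireSpinLock(&ext->Lock, &oldIrql);

        if (ext->Stopped) {
            /* BUG: returns while still holding the lock.  A static rule
               checker flags this path at build time, no repro needed. */
            Irp->IoStatus.Status = STATUS_DEVICE_NOT_READY;
            IoCompleteRequest(Irp, IO_NO_INCREMENT);
            return STATUS_DEVICE_NOT_READY;
        }

        /* ...normal processing... */

        KeReleaseSpinLock(&ext->Lock, oldIrql);
        Irp->IoStatus.Status = STATUS_SUCCESS;
        IoCompleteRequest(Irp, IO_NO_INCREMENT);
        return STATUS_SUCCESS;
    }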
Second was a dump classifier. Early machine learning. When the system crashed, a dump was sent to Microsoft. The classifier tried to bring similar dumps together, so one developer got a big collection of similar crashes. When you have hundreds of dumps of the same bug, locating the bug gets much easier.
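A drastically simplified sketch of the bucketing idea (this is not how Microsoft's actual classifier worked, and the module names and record fields are made up for illustration): derive a key from the faulting module, exception code, and offset, and dumps that share a key pile up in the same bucket for one developer.

    #include <stdio.h>

    /* Toy crash record; real minidumps carry far more context. */
    typedef struct {
        const char   *faulting_module;   /* e.g. "nv4_disp.dll" (made-up example) */
        unsigned int  exception_code;    /* e.g. 0xC0000005, access violation */
        unsigned long offset;            /* offset of the faulting instruction */
    } CrashRecord;

    /* Dumps that produce the same key are treated as the same bug. */
    static void bucket_key(const CrashRecord *c, char *out, size_t out_len)
    {
        snprintf(out, out_len, "%s!0x%08X+0x%lx",
                 c->faulting_module, c->exception_code, c->offset);
    }

    int main(void)
    {
        CrashRecord dumps[] = {
            { "nv4_disp.dll", 0xC0000005, 0x1a2b0 },
            { "nv4_disp.dll", 0xC0000005, 0x1a2b0 },  /* same bug, another machine */
            { "mydriver.sys", 0xC0000005, 0x00442 },
        };
        char key[128];

        for (size_t i = 0; i < sizeof dumps / sizeof dumps[0]; i++) {
            bucket_key(&dumps[i], key, sizeof key);
            printf("dump %zu -> bucket %s\n", i, key);
        }
        return 0;
    }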
Between both of those, the Blue Screen of Death mostly disappeared.
I agree, with one big exception: the refocus on COM as the main Windows API delivery mechanism.
It is great as an idea; the pity is that Microsoft keeps failing to deliver developer tooling that actually makes COM fun to use instead of something we have to endure.
From the pages-long OLE 1.0 infrastructure in 16-bit Windows, via ActiveX, OCX, MFC, ATL, WTL, .NET (RCW/CCW), and WinRT with .NET Native and C++/CX, to C++/WinRT, WIL, nano-COM, .NET 5+ COM, ...
Not only do they keep rebooting how to approach COM development; in terms of Visual Studio tooling, each reboot is worse than the last, never reaching feature parity with its predecessor, only to be dropped once the team's KPIs change focus.
When they made the Hilo demo for Windows Vista and later Windows 7 developers, with such a strong focus on being back on COM after how Longhorn went down, you would have expected better tooling.
First, drivers can crash the rest of the kernel in Windows 7. People playing games during the Windows 7 days should remember plenty of blue screens citing either graphics drivers (mainly for ATI/AMD graphics) or their kernel anti-cheat software. Second, a "proof of correctness" has never been made for any kernel. Even the seL4 guys do not call their proof a proof of correctness.
Not the operating system:
Driver Verifier is a tool that developers can choose to use for testing and debugging purposes.
It's not used on production machines and it does nothing to prevent a badly written driver from crashing the kernel.
Kernel drivers have to be verified by Driver Verifier to pass Windows Hardware Quality Labs (WHQL) certification and get signed with the Windows signing key that lets them load without warnings. There are fewer third-party kernel drivers today, though, because plugging random peripheral cards into PC buses is no longer a big thing.
This is true for certification, which is mandatory for Server OS, distributing through Windows Update, or certain classes of drivers such as anti-malware or biometric authentication, but you can still submit drivers to Microsoft for "attestation signing" that will load without warnings on desktop OS without having to run them through the testing suite.
In any case, running the certification tests does not provide runtime protection for drivers running in kernel mode, as demonstrated by CrowdStrike. Only Windows 10 started introducing hardware virtualization-based isolation of kernel components (to provide isolation of security subsystems, not runtime checks to prevent crashes): https://learn.microsoft.com/en-us/windows-hardware/design/de...
Yet drivers that have passed Windows Hardware Quality Labs (WHQL) certification have still caused blue screens. Also, Microsoft hands out Windows kernel driver signing keys to anyone who pays them. You don't need to put a driver through WHQL to be able to sign it with a key signed by Microsoft.
My PC used to regularly crash Windows 10 because of a buggy Nvidia driver. Eventually they fixed the bug, but until then I had a crash every few days.
From your own link:
"Driver Verifier is not normally used on machines used in productive work. It can cause ... blue screen fatal system errors."
I lost less time to blue screens than I have to forced updates and sidestepping value-add nonsense like OneDrive and Edge.
They didn't "prove the kernel is correct", they built a tool to prove that a single driver maintains an invariant throughout execution.
It does not prove that the driver will not crash the kernel. It should be fairly easy to find a driver that passed QA testing under that tool, yet still crashed the kernel. You just need one of the many driver developers for Microsoft Windows to admit to having used that tool and fixed a crash bug that it missed, and you have an example. Likely all developers who have used that tool can confirm that they have had bugs that the tool missed.
I think it ended at the first "ribbon" UI, which was in the 2003 era, but not all products ate the dirt at once.
Yeah the ribbon drove me to LibreOffice and Google Docs and I haven’t been back.
Windows 2000 Pro was the peak of the Windows UX. They could not leave well enough alone.
The original ribbon sucked but with the improvements it's hard to say it's generally a bad choice.
The ribbon is a great fit for Office style apps with their large number of buttons and options.
Especially after they added the ability to minimize, expand on hover, or keep expanded (originally this was the only option), the ribbon has been a great addition.
But then they also had to go ahead and dump it in places where it had no reason to be, such as Windows Explorer.
> The ribbon is a great fit for Office style apps with their large number of buttons and options.
To me this is the exact use case where it fails. I find it way harder to parse as it's visually intense (tons of icons, buttons of various sizes, those little arrows that are sometimes in group corners...).
Office 2003 had menus that were at most 20-25 entries long with icons that were just the right size to hint what the entries are about, yet not get in the way. The ribbon in Office 2007 (Word, for example) has several tabs full of icons stretching the entire window width or even more. Mnemonics were also made impractical as they dynamically bind to the buttons of the currently visible tab instead of the actions themselves.
Close to 20 years later, people still complain about the ribbon. (1)
I think that says something about it.
--
1. And not just "grumble, grumble... get off my lawn..." Many of its controls are at best obscure. It hides many of them away. It makes them awkward to reach.
Many new users seem as clueless as, or even more clueless than, pre-existing customers who experienced the rug pull. At least pre-ribbon users knew there was certain functionality that they just wanted to find.
(And I still remember how MS concurrently f-cked with Excel shortcut keys. Or seemed to have, when I next picked Excel up after a couple year hiatus from being a power user.)
> The original ribbon sucked but with the improvements it's hard to say it's generally a bad choice.
This is also what I hear about GNOME. "OK, yes, GNOME 3.x was bad, but by GNOME 40 it's fine."
No, it's not. None of my core objections have been fixed.
Both ribbons and GNOME are every bit as bad as they were in the first release, nearly 20 years ago.
I know nothing of your objections, so this is more about how I think of mine and how they relate to these kinds of changes.
Being a power user is difficult. I think the best way to do software is to make it APL-complicated and only educate one guy in it. The way power users in Excel/Emacs/accounting software outperform user-friendly stuff is amazing. But some things are meant for the masses, e.g. opening a file.
Dumbing down or "magic-ifying" interfaces was needed for many other reasons. GNOME and the Ribbon were necessary changes IMO; what we had was never going to improve. Of course I wish there were elements that could be reused elsewhere, but that is a pipe dream of Smalltalk proportions.
I am now stuck with Windows at work, and it is a horrible experience. Everything is so needlessly complicated, in the same way Linux is. I do believe GNOME did manage to improve things, at least when I look at children using Mac, Linux and Windows as power users. My view is that the complexity of Linux is still a little bit easier to understand, but that is just because of a long history and easy abstractions.
I think core objections are often not compatible with products that need to fit and be produced for many people. I do software that is used once by many; this has changed my view of GUIs forever, especially in regard to desktops.
> The original ribbon sucked but with the improvements it's hard to say it's generally a bad choice.
It is a terrible choice. I always have to search for items.
For me, peak UX was before the Ribbon. Just menus and customizable toolbars; I didn't need anything more to be productive enough. Nowadays I can hardly use the Office suite; its feature discoverability is essentially zero for me.
I never understood the issue with the ribbon UI. Especially for Office it was great, so much easier to find stuff.
> I never understood the issue with the ribbon UI. Especially for Office it was great, so much easier to find stuff.
1. I don't need to find stuff.
I knew where stuff was.
2. I read text. I only need menus. I don't need toolbars etc. and so I turn them all off.
I cannot read icons. I have to guess. It's like searching for 3 things I need in an unfamiliar supermarket.
3. Menus are very space efficient.
Ribbons hog precious vertical space. This is doubly disastrous on widescreens.
4. I am a keyboard user.
I use keys to navigate menus. It's much faster than aiming at targets with the mouse and I don't need to look. The navigation keys don't work any more.
Ribbons help those who don't know what they are doing and do not care about speed and efficiency.
They punish experts who do know, don't search, don't hunt, and customise themselves and their apps for speed and efficient use of time and screen space.
> They punish experts who do know, don't search, don't hunt, and customise themselves and their apps for speed and efficient use of time and screen space.
The problem is, most users are utterly braindead; they barely manage to type at speed instead of pecking at single keys. The astonishment I've gotten in some places for literally nothing more than Ctrl+C/Ctrl+V is more than enough proof.
That's also IMHO a large portion of why Linux never really took off on the desktop. UX/UI people are rare enough to begin with, most of them don't work on FOSS in their free time, and so development is primarily done by nerds, for nerds. That's great if you already know something about the application, but usually the learning curve is so steep that most users give up in frustration. And documentation is either nonexistent, incomplete, or horribly outdated, and StackOverflow etc. are even worse.
The exception is Blender. They got some serious money IIRC, cleaned up their act, and now there's a headline of some movie or game using Blender every few weeks.
100% true.
The sad thing is that Windows has a great keyboard UI and it's superbly accessible for people with visual and motor disabilities.
Who have reduced earning opportunities because they are disabled, so FOSS should be great for them, but it isn't, because the nerds don't know CUA and don't know the keyboard UI. They spend their time mastering a couple of ancient apps like Vi and Emacs and ignore the fiery furnace of UI R&D that followed in the 20 years after those early efforts.
Learn Windows' keyboard UI and you can drive the whole OS and all its apps with the speed of a genius Vim user with 20 years' practice. It makes Emacs look like a wet paper pad and a burned stick compared to a Moleskine notebook and a top quality fountain pen.
Xfce comes close and implements maybe 75% of the UI but once you are in an app all bets are off.
> Learn Windows' keyboard UI and you can drive the whole OS and all its apps with the speed of a genius Vim user
Do you have a reference for this? I've often needed to control Windows using only a keyboard and failed to do so. I'm aware of most shortcuts in this list[1] but these are for a few very specific things. (As an aside, I also remember controlling the mouse with the numpad using the Mouse Keys accessibility setting but this is worse than both keyboard shortcuts and the mouse.)
[1]: https://en.wikipedia.org/wiki/Table_of_keyboard_shortcuts
It's called CUA.
https://en.wikipedia.org/wiki/IBM_Common_User_Access
There are dozens of them out there.
Random example:
https://www.system-overload.org/windows-shortcuts.html
General guide...
Activate menu bar with Alt. Alt + the underlined letter opens that menu or submenu.
Alt+Space opens the control menu for that window. In MDI apps, alt+hyphen opens the document's window control menu.
Then...
Alt+Space, X = maXimise
Alt+Space, N = miNimise
Alt+Space, S = reSize, followed by a cursor key to select which edge, then cursors to change.
Hotkeys are Ctrl+letter and do that action now.
Ctrl+...
P = print
S = save
O = open
F = find
C = copy
X = cut (looks like scissors)
V = paste (looks like an arrow: paste _V_ HERE)
Shift modifies or reverses many commands, and selects while moving.
In dialogs and forms, Tab moves forwards; Shift+Tab backwards
Ctrl+PgDown = next tab
Ctrl+PgUp = previous tab
Ctrl+Enter = save and close form
Ctrl+left/right = move by word instead of character
Shift+Home/End = select to start/end of line
Esc = cancel
Ctrl+Esc = open start menu
Then tab, and you're tabbing through the taskbar, which is a sort of dialog box.
Ctrl+Shift+Esc = open task manager
Maybe this should be on a wiki somewhere so it can be documented collaboratively...
> Do you have a reference for this?
Look for underlined single letters in menus. With apps that use the "classic" style menus instead of ribbons or plain Electron crap, the single letters are the key.
I'm curious to know if this is what lproven meant in their comment above. Alt + a-z to access menu items is available in every OS and all "native" apps, but you can't "drive the OS and all apps" this way.
For example, I would like to set options that are a few menus/button clicks deep in the Windows control panel (either the "classic" or new variant) using keyboard shortcuts/navigation. Or navigate the Windows registry editor. I'm not aware of a way to do this.
None of that is correct.
No, it's not in all OSes. I wish it were.
No, it's not in all native apps. KDE reinvents its own set of keystrokes, for instance, and half the KDE apps have no menu bars any more... And there's no global way to force them either.
Yes, the control panel and RegEdit are totally keyboard controllable.
You can literally just unplug the mouse from a Windows desktop and it remains totally 100% operable.
Some apps may not be, because the developers didn't do their jobs right, but the OS is.
How else could blind people use PCs?
I totally forgot about this until just now. That really was a brilliant feature.
> Learn Windows' keyboard UI and you can drive the whole OS and all its apps with the speed of a genius Vim user with 20 years' practice
I'm sure you can give me some hints, because Microsoft can't.
> The sad thing is that Windows has a great keyboard UI
Windows also has a great help system, online. /s
Windows actually had a decent built-in manual system with CHM, tooltips and whatnot. Even games could and did use it, like EarthSiege 2.
Back in the days when application developers stuck to the Windows-provided widgets instead of doing their own UI, it was wonderful. Symbols were consistent across applications, as were color schemes (IIRC, if you wrote your CSS correctly, Internet Explorer would pass these on to websites!) and behavior.
I miss these days.
> And documentation is either nonexistent, incomplete, or horribly outdated, and StackOverflow etc. are even worse.
Or the documentation is very complete, but only useful if you read and comprehend it in its entirety. Open source devs need to understand that not everyone using their software wants to become an expert in it. They just want to get a task done and the software is facilitating completing that task. That is something totally normal and those users should not be thought of as less important than the power users.
> The problem is, most users are utterly braindead
Yeah, that's Microsoft's idea. All users are idiots. That's why they are not able to fix bugs but only change the UI.
Just hide the ribbon.
On a Mac, that's fine. On Windows, it's not, because then I can't control the app any more.
I have been using Word since version 4 on DOS and version 5 on Classic MacOS. On Windows, I used WinWord 1, 2, 6, 95, 97, 2000, XP and 2003... then 4 years later MS ripped out the UI I knew backwards and had known for about 16 years, since 1991, and replaced it with one inferior in every way for me.
I'm not denying it might be better for others but for me it's now a waste of disk space.
The old versions do all I need, so I keep them. For everything except Word, there is LibreOffice.
But LibreOffice Writer has no outline mode, and I am a writer: that is THE killer function of Word for me.
So, Word 97 under WINE on Linux and Word 2003 when I have to use Win10 or -- shudder -- Win11.
And it'll be back in the next update.
My big problem with it is that it’s stateful. A menu or toolbar admits muscle memory - since you get used to where a certain button or option is and you can find it easily. With ribbons you need to know if you’re in the right submenu first.
Though personally, I’m increasingly delighted by the Quicksilver-style palette/action tools that vscode and IntelliJ use for infrequently used options. Just hit the hotkey and type, and the option you want appears under the enter key.
It's not easily customizable and it takes more space; there's not much to understand.
I'm not sure it takes more space than a menu and toolbar, but regardless, monitors are a LOT larger now than in 2003 so...
Frankly, I'm not sure customizing is a win either. I do a lot of remote support and it's nice to have a consistent interface.
Personally I find it faster than menus, and easier to find things I seldom use.
But I appreciate it's a personal taste thing, and some older folks prefer older interfaces.
Your monitors, those of a well-off power user, may have become larger. Most regular users I've seen are on 15" laptops with screens at 1366×768, or (if they're lucky) 1920×1080 with scaling at 1.25× or so. 17" desktop monitors used to be commonplace about 20 years ago.
The slightly larger screen real estate (if any) is more than wasted by very inefficient "modern UIs" where you won't find paddings smaller than 16px, with three buttons where there used to be enough space for 9.
Just compare them and you'll see. The larger screen isn't a good excuse to waste space either.
And users are way more important than the tiny group of tech support.
Also, those who need tech support are less likely to customize.
Those of us working in jobs use the same couple of functions in our office products. We don't really go and find features.
> I think it ended at the first "ribbon" UI, which was in the 2003 era,
Nah. 2007 era.
Office 2007 introduced the ribbon to the main apps: Word, Excel, I think Powerpoint. The next version it was added to Outlook and Access, IIRC.
I still use Word 2003 because it's the last pre-Ribbon version.
I don't know quite when it started to happen, but changing and/or eliminating the default Office keyboard shortcuts in the last few iterations has really irked me.
Another often-underappreciated advancement was the UAC added in Vista. People hated it, but viruses and rootkits were a major problem for XP.
People hated it because it was all over the place. Change this or that setting? UAC. Install anything? UAC. Then you'd get a virus in a software installer, confirm the UAC as usual, and it wouldn't stop a thing.
It is more of a warning than an actual security mechanism though. Similar to Mark of the Web.
No, in XP you were essentially logged in as root 24/7 (assuming it was your machine), and any program -- including your browser -- was running as root too. I remember watching a talk about how stupidly easy it was to write rootkits for XP. "Drive-by viruses" were a thing, where a website could literally install a rootkit on your machine just by visiting it (usually taking advantage of some exploit in flash, java, or adobe reader). Vista flipped it, by disabling the admin account, so that in order to do something as admin you needed to "sudo" first. That alone put a stop to tons of viruses.
I used to work in the security team at a financial institution that was still running XP until around 2017.
We got to a point around 2015 where drive-by exploit kit developers just weren't targeting XP and IE8 anymore. Phishing landing pages would roll through all the payloads they had and silently exit.
> It is more of a warning than an actual security mechanism though. Similar to Mark of the Web.
It's both a warning and an actual security mechanism.
Obviously its most visible form is triggered when an application tries to write to system-level settings or important parts of the filesystem, and also when various heuristics decide that the application is likely to want to do so (IIRC "setup.exe" and "install.exe" at the root of a removable disk are assumed to need elevation).
Because Microsoft knew that a lot of older software wrote to system areas just because it predated Windows being a multi-user system, UAC also provided a partial sandboxing mechanism where writes to these areas could be redirected to user-specific folders.
The warning was also a tool in itself, because the fact that it annoyed users finally provided the right kick in the ass to lazy software developers who had no need to be writing to privileged areas of the system and could easily have run under a limited user, but hadn't bothered to because most non-corporate NT users were owners (and thus admins) and most corporate environments would just accept "make users local admin". Part of the reason we saw UAC prompts a lot less in later versions of Windows is that Microsoft tweaked some things to make certain settings per-user and reorganized certain dialogs so unprivileged settings could be accessed without elevation, but a lot of it is that applications that had been doing it wrong for as long as NT had existed finally got around to changing their default paths.
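For what it's worth, here is a minimal sketch (my own illustration, not anything Microsoft ships) of the check a well-behaved installer can make post-Vista before touching machine-wide locations like HKLM or Program Files; everything else should go under the user profile so no UAC prompt is needed. The printed messages are just placeholders for that decision:

    #include <windows.h>
    #include <stdio.h>

    /* TRUE if the current process is elevated, i.e. the user clicked through
       a UAC prompt (or UAC is off and the account is an administrator). */
    static BOOL IsProcessElevated(void)
    {
        HANDLE token = NULL;
        TOKEN_ELEVATION elevation = { 0 };
        DWORD returned = 0;
        BOOL elevated = FALSE;

        if (OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token)) {
            if (GetTokenInformation(token, TokenElevation,
                                    &elevation, sizeof elevation, &returned)) {
                elevated = (elevation.TokenIsElevated != 0);
            }
            CloseHandle(token);
        }
        return elevated;
    }

    int main(void)
    {
        if (IsProcessElevated())
            printf("Elevated: machine-wide install locations are writable.\n");
        else
            printf("Not elevated: write to per-user locations instead.\n");
        return 0;
    }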
It got old people to call their grandsons when an image or .doc file asked for permissions, though, which at the time was a huge help.
> This verified that kernel drivers couldn't crash the rest of the kernel.
How did crowdstrike end up crashing windows though?
> Static Driver Verifier
Well, the Crowdstrike driver isn't (wasn't?) static. It loaded a file that Crowdstrike changed with an update.
Most drivers pass through rigorous verification on every change, but CrowdStrike is (was?) allowed to change its driver's behavior whenever it wants by designing the driver to load a file.
The EU forced MS to allow stuff like CrowdStrike as part of an anti-trust settlement.
MS tried to use the incident to get the regulators to waive the requirement.
I'm all for anti-trust and anti-monopoly but christ alive an operating system vendor gatekeeping their kernel is literally the whole point of being an operating system vendor. Braindead regulation.
> Braindead regulation.
Only because OP didn't give the full story. Microsoft wanted to close direct access to the kernel. AV companies complained to regulators in the EU. The EU asked Microsoft if they were willing to maintain access to replacement functionality and to stick to using that functionality for its own separately sold AV products. Microsoft said no, and instead of fighting, just let Windows wither on the vine with full kernel access for all the bozos. Crowdstrike was inevitable.
The issue isn't with the gatekeeping per se. The issue is that Windows Defender, a competitor AV, gets full access while third parties would not. This would leave them at a competitive disadvantage.
No, braindead take. The purpose of being an operating system vendor is to sell an operating system. If someone else modifies your operating system after they buy it, they get to keep both pieces. You don't get to stop them from modifying the thing they bought.
Do you like nanny states? How about nanny corporations?
This particular case is weird because crowdstrike is complianceware.
So, it’s more like “you don’t get to improve your product if doing so would also stop random companies from forcing your customers to break the stuff you sold to them”
Microsoft has no obligation to protect its customers from themselves if they're dead set on shooting themselves in the foot.
Microsoft has no right to prevent its customers from having full access to the things they bought.
Must human arms be braced at birth so they can only point level, lest someone try to point any object at their own foot?
Yeah, but using your analogy, we do allow people to protect their communities from random strangers that want to disfigure other people’s arms.
In fact, I pay taxes to the police and they generally handle this sort of thing pretty well.
> The big win with Windows 7 was that they finally figured out how to make it stop crashing.
Changing the default system setting so the system automatically rebooted itself (instead of displaying the BSOD until manually rebooted) was the reason users no longer saw the BSOD.
> First large scale application of proof of correctness technology.
Curious about this. How does it work? Does it use any methods invented by Leslie Lamport?