In Norway, one of the regional health authorities has recently replaced its old software. The replacement, Helseplattformen (The Health Platform, based on Epic Systems’ software), has been making headlines since before its introduction. Doctors and nurses warned early on about poor usability and apparent instability that would lead to data loss and lower-quality treatment. Their warnings were ignored, but have since become reality. You can follow the news about the system at https://helvetesplattformen.no (“the hell platform”).
While there is certainly such a thing as bad software, a lot of old software actually has its merits, and its users will often tell you about them if you ask. In my experience, developing a desktop app for a point-of-sale/back-office system, the users were often very satisfied with the old software we were replacing. They liked how fast they got things done with it: For instance, the old software had keyboard shortcuts for everything. Its layout was more information dense.
A lot of modern day UX designers would be aghast at the sheer quantity of data and speed at which a user can handle it IFF the UI is adequately designed.
PoS software really plateaued in the '90s. The most common flows were so simple that an average user could hold a mental map of the program and always know exactly how to get where they needed to be. Pros could fly through ten levels of menus in literally half a second to access some obscure report.
In modern UI we've totally given up and just give the user a search box on every screen. Instead of a clean indexed list of menu items you can hit a key to access, we have to fumble around with tooltips.
I don't have the time for it, but if I were going to make a new web front-end framework, it would be focused on this sort of expert-level interface.
The key thing would be some sort of input-buffering system, where you can have lots of things bound to keys, and the framework will actually buffer things. Web browsers, and modern UIs in general, throw away input on reloads and such, on the generally correct but very novice-biased theory that you didn't mean to be inputting into a screen you can't see yet. If you undo that and allow buffering keystrokes across loading actions, you could start recovering some of that expert-level speed, where you can be using screens that aren't even loaded yet.
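Roughly, I picture something like the following: a minimal sketch, with made-up names, of buffering keystrokes while the next screen loads and replaying them once it can accept input. It isn't any existing framework's API.

```typescript
// Sketch only: hold on to keystrokes while no screen is ready, then replay them.
type KeyHandler = (key: string) => void;

class InputBuffer {
  private queue: string[] = [];
  private handler: KeyHandler | null = null; // the currently active screen's handler

  constructor() {
    // Capture keys globally; if no screen is ready to receive them,
    // park them instead of dropping them on the floor.
    window.addEventListener("keydown", (e) => {
      if (this.handler) {
        this.handler(e.key);
      } else {
        this.queue.push(e.key);
      }
    });
  }

  // Called when a new screen has finished loading and can take input.
  attach(handler: KeyHandler): void {
    this.handler = handler;
    // Replay everything the user typed while the screen was loading.
    for (const key of this.queue.splice(0)) {
      handler(key);
    }
  }

  // Called when navigation starts: from now on keystrokes are buffered
  // for whichever screen attaches next.
  detach(): void {
    this.handler = null;
  }
}

// Hypothetical usage around a navigation:
const buffer = new InputBuffer();
buffer.detach();                      // navigation begins, start buffering
// ...fetch data, render the next screen...
buffer.attach((key) => {
  console.log("next screen received", key); // keys typed during the load land here
});
```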
"Expert-focused" UIs don't have to be bound to the terminal. That was all an accidental thing.
Expert-focused UIs are nearly dead because all our UI frameworks actively block them. That is also not a fundamental fact of computing and can be undone.
I propose not stopping at input buffering. Keep going: how fast could we get the REPL times if that were the KPI?
I doubt we'd be doing full-page redraws and slinging JSON of all things. I think what you'd end up with is something that looked a little more like a game engine than React. That would be a fun project to work on.
A REPL might be an option, since you do have JavaScript sitting right there, but that's not generally the sort of "expert" I'm thinking of.
I certainly wouldn't stop at input buffering. Another thing I'd want is some sort of systematic keypress discoverability. You still probably want a conventional GUI at least mostly available, but using the GUI ought to be constantly teaching the user how to do whatever they just did with the keyboard instead. This might as well get wrapped into the framework, to standardize it and to make it easy to use.
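As a sketch of what I mean (hypothetical names, not any real library's API): every command carries its shortcut, and invoking it with the mouse reminds you what the keyboard route would have been.

```typescript
// Sketch only: a command registry where mouse use teaches the keyboard shortcut.
interface Command {
  id: string;
  shortcut: string; // e.g. "Ctrl+S"
  run: () => void;
}

function showHint(text: string): void {
  // In a real framework this would be a toast or status-bar message;
  // logging keeps the sketch self-contained.
  console.log(`Tip: next time press ${text}`);
}

function invoke(cmd: Command, via: "mouse" | "keyboard"): void {
  cmd.run();
  if (via === "mouse") {
    // The teaching moment: you just did this the slow way.
    showHint(cmd.shortcut);
  }
}

// Hypothetical wiring for a single command:
const saveCommand: Command = {
  id: "save",
  shortcut: "Ctrl+S",
  run: () => console.log("saved"),
};

document.getElementById("save-button")?.addEventListener("click", () => {
  invoke(saveCommand, "mouse"); // clicking teaches the shortcut
});
```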
It's worth recalling that in the early days, GUIs always had underlined letters showing what the keyboard shortcuts were. That's been lost now, I suppose because it looked messy or confused people or something. Some tools show the underlines only once you press the modifier key that activates them.
Unfortunately, among the many shortcomings of the web as an application and GUI platform: (a) no menu bars, (b) no context menus, (c) no accelerator-key framework, and (d) keyboard focus that is often nowhere in particular, or somewhere stupid that will do something bad if you touch the keys.
For me, the most obvious example of input-buffering in modern tools is in the terminal, where I'll often type in subsequent command(s) while the first is still executing. Sure, it's possible to make a mistake and have to Ctrl+C the current process to stop the botched command from executing—but for "safe" commands like `cd`, it's a super convenient feature.
(The other tool that comes to mind is Emacs, but that's appreciably less mainstream than the command line in general.)
Thanks for this. I've a passing interest in those older expert-level interfaces and input buffering wasn't on my radar but is completely vital. I remember relying heavily on that functionality back in the day and yet it's non-existent in most modern interfaces.
This is bang on. I am shocked by how poor UX really is these days in most modern applications.
I worked at an old grocery store running the front office, as a teenager through college maaaany years ago, and everything ran on a keyboard-shortcut-driven, non-touchscreen interface from probably the early '90s. It took some time to get used to and to train others on, but I could fly through those interfaces in a way modern UX simply does not allow, because it focuses on the lowest common denominator for usability instead of making any attempt to cater to 'experts': aka the people who have to use those interfaces day in and day out.
Sure, the modern interfaces look great (usually), and in the ideal state anyone can pick them up and use them without much instruction. But there's no attempt to focus on the poor soul who has to use them day to day and just wants to get things done as quickly as possible.
Hardware, too. I worked in broadcast television in the 1990s and we used these very large Sony three-quarter inch videotape decks for editing. Two of them paired together and a monitor.
Once I learned how to edit using the buttons and the scroll wheels, I could fly through putting together a news segment with all kinds of cuts, fades and overdubs. I remember thinking, I couldn’t believe how fast my fingers were flying.
The closest thing I can compare it to is learning how to play a guitar; after a certain point muscle memory takes over.
>A lot of modern day UX designers would be aghast at the sheer quantity of data and speed at which a user can handle it IFF the UI is adequately designed.
Would they even know? There was a time when there was such a thing as domain expertise, and when usability studies were an integral facet of many SDLCs. Those days seem long gone now. The industry seems far more content to haphazardly adapt off-the-shelf solutions or create something that's "pretty" or "modern" at the expense of being functional.
> Would they even know?
Key point. We are now in a UI idiocracy.
> we have to fumble around with tooltips.
You’re lucky to get any tooltips nowadays.
Every time I need to copy paste an email address from someone’s name in outlook and I have to wait a whole second for an on-hover tooltip to appear with the address and the “copy to clipboard” button I want to die.
I would use the desktop client, but it doesn't have an interface to hyperlink to files in SharePoint.
The professional cashier mostly disappeared as a career though. There's no point in optimizing for expert use when your users are temporary labor.
BS. In the food-service industry, quickness means prompt service. When I waited tables I could click ahead of the UI to get what I needed, and the POS would catch up.
It needs to be easy and quick. Doesn’t matter if they are pros or people doing college jobs.
What he means is that GUIs won over TUIs because they're intuitive and learnable. They might be slower to use (mostly; I'm faster in IntelliJ than vim) but that is counterbalanced by the lower training costs.
In industries with high staff turnover, it's probably better to have slower workers but with much lower training costs.
There is no inherent conflict between GUIs and TUI-like speed. It's the web tech stack and cloud-based apps, and piles upon piles of leaky abstractions in other environments, that have made things slow. Open Microsoft Word or Excel from 25 years ago, and it will be incredibly fast to operate.
Sure, I agree, but getting fast at using a mainframe-style UI takes a lot longer, and until you've mastered the commands you'd be slower. GUIs let people self-teach, which is valuable.
Web tech definitely makes it harder to give a really snappy experience.
Who said anything about "professional cashiers" or "temporary jobs"? Did you mean to reply somewhere else?
I think it's because GP mentioned "PoS software" which is often used by cashiers, but is not limited to cashiers. Anywhere a user might swipe or tap their credit card could be considered a PoS.
As a related aside, I recently visited Microcenter where they run their own PCI-compliant PoS system that's all text based. The staff all knew how to zip through various text menus to apply discounts, warranties, etc.
They are certainly paid and treated that way, but only because corps really want to buy their own bullshit.
There have never been enough high schoolers to fill all the """not real""" jobs that people keep insisting shouldn't pay enough to live on. Hell, you used to be able to live on flipping burgers. In plenty of modern and democratic countries you actually still can live while flipping burgers.
> In plenty of modern and democratic countries you actually still can live while flipping burgers.
Depending on your demands for quality of life, I'd doubt that.
Sometime in 1988 or 1989 I 'flipped burgers' full-time as a so-called crew trainer, and made almost 5000 Deutsche Mark per month (after taxes, 'netto'), mostly because I worked late/night shifts.
Inflation-adjusted, that would be roughly €5000 (or about as much in USD) now. I think you'd be lucky to reach 2400 to 2800 now as a 'normal' burger flipper, or one level up as crew trainer, no matter whether you work late/night shifts. (The financial benefits of working late/night are mostly gone now.)
> the users were often very satisfied with the old software we were replacing. They liked how fast they got things done with it: For instance, the old software had keyboard shortcuts for everything. Its layout was more information dense.
I can say that I am also satisfied with many old programs for these reasons, and sometimes other reasons too. When writing new software, I generally try to design it like that too, with keyboard for everything (or nearly everything), and with dense layouts.
The issue is that so many different companies and vendors promise this and that, way beyond what is actually delivered to the end user behind the keyboard, that the mere words have pretty much lost all credibility with end users.
For example, I've personally seen cashiers over 60 years old at a small pharmacy, with arthritis, trembling hands, and everything, flawlessly input over 100 complex commands per minute on a 30+ year old machine with a software design probably from the '70s.
With zero bugs, zero human-perceptible delays, and not even a single misinput.
Other than the obvious — keyboard shortcuts — this also heavily relies on being able to input ahead of time, before the next screen is even drawn. Older systems buffer inputs and apply them correctly to the interface elements as they become visible.
The modern web does none of this, which is why users have to wait for each screen before inputting the next command… if it has a keyboard shortcut. It probably doesn’t.
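In web terms, you could imagine something like this sketch (the element id and helper function are made up, and no mainstream framework behaves this way): keystrokes aimed at a field that hasn't rendered yet get parked and delivered once the element appears.

```typescript
// Sketch only: park input aimed at a not-yet-rendered field, deliver it when the field appears.
const pending = new Map<string, string>(); // element id -> buffered text

function typeAhead(elementId: string, text: string): void {
  const el = document.getElementById(elementId) as HTMLInputElement | null;
  if (el) {
    el.value += text; // field already exists: input lands normally
    return;
  }
  pending.set(elementId, (pending.get(elementId) ?? "") + text);
}

// Watch the DOM; when a field we buffered input for shows up, hand it the text.
const observer = new MutationObserver(() => {
  for (const [id, text] of pending) {
    const el = document.getElementById(id) as HTMLInputElement | null;
    if (el) {
      el.value += text;
      el.focus();
      pending.delete(id);
    }
  }
});
observer.observe(document.body, { childList: true, subtree: true });

// Hypothetical use: key the quantity before the next screen's quantity field has rendered.
typeAhead("quantity-field", "12");
```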
Not quite true. I worked at Epic previously. As the culmination of a 10-year migration from VB to Electron, the OS-level input buffer was discarded in mid-2023, since Electron doesn't handle those the same way. The primary reason for this is that VB was pretty much synchronous. However, web technologies don't act the same way because keeping your keyboard input while navigating to a different site would be weird.
We added an input handler to queue inputs so that sequences of shortcuts and keypresses could be used.
Additionally, the internal framework we had allowed for shortcuts, and we tried to replicate as much as we could shortcut-wise (as well as functionality-wise). Almost everything should have a shortcut or a way to navigate to it via keyboard alone — they had put in a lot of effort to ensure accessibility so that they could get the VA contract that went to Cerner (pre-Oracle acquisition).
> However, web technologies don't act the same way because keeping your keyboard input while navigating to a different site would be weird.
Depends. It is not the place of the underlying tech to impose such limits.
> We added an input handler to queue inputs so that sequences of shortcuts and keypresses could be used.
You must be one of the few companies (if not the only one) that did this.
This definitely doesn't work in the general case of SPA web apps.
> The modern web does none of this, which is why users have to wait for each screen before inputting the next command…
Or wait further. Many web screens appear in a state that is initially nonresponsive, becoming responsive invisibly after an unknown and variable period of time taken for code to load.
Yeah, and the worst part is that nobody, not even quite serious vendors selling stuff with all sorts of bells and whistles, does any sort of benchmarking on all these various parameters. (That I've seen.)
So even if people wanted to believe in and buy such-and-such a product, they have no way to substantively compare which vendor is more honest.
The United States Government is replacing the open-source, internally written medical records system, https://en.wikipedia.org/wiki/VistA, with Epic. [edit] Or is it Cerner?
I'm sure it's going to be way way way better.
[edits: Cerner, not Epic?; added another way to emphasize how much better it'll be.]
It is Cerner. They are relaunching.
https://www.military.com/daily-news/2025/01/02/va-sets-sight...
They actually cancelled the migration because the commercial EMR was getting people killed.
Do you have any links for this? Tragic that people got killed, but even more so if the same software was at the core. And I sure hope it was cancelled, if that was the case.
https://www.healthcareitnews.com/news/oig-report-vha-finds-m...
There's a link to the OIG reports themselves. It's almost certainly not the software itself, but the way it was configured and rolled out. Cerner is one of the market leaders for EHRs, so I highly doubt it's intrinsically flawed; it's just so configurable that it's easy to cut yourself on the edges.
Sweden has had a series of disasters like that in recent years. A few months ago one region tried to switch from their old healthcare system to something delivered by Oracle, but quickly had to roll back to the old system.
https://www.theregister.com/2024/11/27/oracle_cerner_project...
In 2021, an expensive system for schools in Stockholm was so bad that some parents got together and wrote an open-source app so they wouldn't have to use the bad official UI.
https://www.wired.com/story/sweden-stockholm-school-app-open...
Epic runs most US hospitals. It's a monopoly with significant lock-in and control over its customers. Also, we do manage to get things done. They will too.
I actually worked at Epic for a while. By EMR software standards Epic is very good overall. The database it is built on, InterSystems Caché, is one of the best in the industry. They employ thousands of QA testers. But as a very complex software system it can be implemented very badly. I wonder if there is a language barrier.
Interesting. Does Epic come with any pre-built UI for its products? Some of the bad UX complaints have been explained away as bad reuse of interfaces adapted to the realities of US private health care (which is very different from Norway’s public health system).
I do not think a language barrier is at play; Norwegians and the other Scandinavians are among the most fluent speakers of English as a second language in Europe, surpassed perhaps only by the Dutch.
I could see adapting Epic to a very different medical system being hard and error prone.
> https://helvetesplattformen.no (“the hell platform”)
Huh. Any etymological connection between "helvetes" and "Helvetii"/"Helvetia"?
No connection; "helvete" is just the Norwegian word for "hell".