The NT paths are how the object manager refers to things. For example the registry hive HKEY_LOCAL_MACHINE is an alias for \Registry\Machine
https://learn.microsoft.com/en-us/windows-hardware/drivers/k...
In this way, NT is similar to Unix in that many things are just files, all part of one global VFS layout (the object manager namespace).
Paths that start with drive letters are called "DOS paths" because they only exist for DOS compatibility. But unfortunately, even in kernel mode, different subsystems might still refer to a DOS path.
PowerShell also exposes various things as "drives"; I'm pretty sure you can create your own custom drive for your own app as well. For example, by default there is the 'HKLM:\' drive:
https://learn.microsoft.com/en-us/powershell/scripting/sampl...
Get-PSDrive/New-PSDrive
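For instance, something like this should work for mapping your own drive over an existing provider (the Proj name and C:\Projects path are just placeholders; a true custom drive for your own app would mean implementing a PSProvider):

    # Expose C:\Projects as its own drive for the current session.
    New-PSDrive -Name Proj -PSProvider FileSystem -Root 'C:\Projects' | Out-Null

    Get-ChildItem Proj:\          # browse it like any other drive
    Get-PSDrive                   # lists Proj: alongside C:, HKLM:, Cert:, Env:, ...

    Remove-PSDrive -Name Proj     # tear it down again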
You can't access certificates in linux/bash as a file path for example, but you can in powershell/windows.
I highly recommend getting the NtObjectManager PowerShell module and exploring around:
https://github.com/googleprojectzero/sandbox-attacksurface-a...
ls NtObject:\
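If you want to follow along, something like this should do it (NtObjectManager is the module name on the PowerShell Gallery; exact output and sub-paths vary by version and system):

    # One-time install from the PowerShell Gallery, then load it.
    Install-Module NtObjectManager -Scope CurrentUser
    Import-Module NtObjectManager

    # Browse the NT object manager namespace like any other drive.
    ls NtObject:\
    ls NtObject:\BaseNamedObjects    # e.g. named events, sections, mutexes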
It's baffling that after 30 years, Windows is still stuck in a weird directory naming structure inherited from the '80s that no longer makes sense when nobody has floppy drives.
> Windows is still stuck in a weird directory naming structure inherited from the '80s that no longer makes sense when nobody has floppy drives.
I think you could make this same statement about *nix, except it's 10 years _worse_ (1970s). I strongly prefer the FHS over whatever MS thinks it's doing, but let's not pretend that the FHS isn't a pile of cruft (/usr/bin vs /bin, /etc for config, /media vs /mnt, etc.)
All of those are optional restrictions, not mandatory. On Windows, it's (practically) mandatory.
Maybe some Windows wizards could get around the mandatory restrictions, but an average Linux user can get around the optional ones.
Drive letters are basically just /mnt; you can get around that, even with a GUI.
So why a default Windows install still uses and shows C:?
Because A: is reserved for the floppy drive, and B: for the Zip drive.
A: and B: were both for floppies, dual floppy systems were around and common, both with and without hard disks, long before Zip disks existed, and Zip disks came around far too late (1994!) to influence the MS-DOS naming standard.
Drive B was always a floppy disk drive.
Zip disks presented themselves with drive letters higher than B (usually D: assuming you had a single hard disk). However, some (all?) Zip drives could also accept legacy 3.5" floppies, and those would show up as B.
Streaming as the de facto metaphor for file access goes back to tape drives. Random-access patterns make more sense with today's media, yet we're all still fscanf-ing.
Of course there are alternatives but the resource-as-stream metaphor is so ubiquitous in Unix, it’s hard to avoid.
There is more pliability in the Linux ecosystem to change some of these things.
And anyway, there has to be a naming scheme; the naming scheme is abstracted from the storage scheme.
It's not necessarily the case that your /var and /usr are on different drives, though they can be in a given installation.
Unix starts at root, which is how nature intended. It does not change characteristics based on media - you can mount a floppy at root if you want.
Why get upset over /media vs /mnt? You do you, I know I do.
For example, the Step CA docs encourage using /etc/step-ca/ (https://smallstep.com/docs/step-ca/certificate-authority-ser...) for configuration of their product. Normally I would agree, but as I am manually installing this thing myself and not following any of the usual docs, I've gone for /srv/step-ca.
I think we get enough direction from the ... "standards" ... for Unix file system layouts that any reasonably incompetent admin can find out which one is being mildly abused today and get a job done. On Windows ... good luck. I've been a sysadmin for both platforms for roughly 30 years and Windows is even odder than Unix.
> Unix starts at root, which is how nature intended. It does not change characteristics based on media - you can mount a floppy at root if you want.
Why is the root of one of my drives `/` while the roots of my other drives are subdirectories of that first drive?
Thinking of it in terms of namespaces might help: it's not that the drive is special, it's that there's a view that starts from /, and one disk filesystem happens to be dropped there while others are dropped elsewhere. With something like initramfs there aren't any drives on / at all, just a chunk of RAM, though you usually pivot to a physical one later. (Many Linux-based embedded systems don't, because your one "drive" is an SD card that can't handle real use, so you just keep the "skeleton" in memory and drop various bits of eMMC or SD or whatever into the tree as convenient.)
I do get it, I just don't think that the UNIX way is necessarily more natural than the Windows way.
In multiple ways, / doesn't have to be one of your drives.
Because you (or your distro) configured it like that. You don’t have to do it that way.
Only the root of the root filesystem is /
The point is that any filesystem can be chosen as the OS’s root.
The root of all other filesystems (there can be multiple per drive) is wherever you tell the filesystem to be mounted, or in your automounter's special directory, usually /run/media, where it creates a directory from a unique serial number or device path.
The /usr/bin vs /bin distinction is no longer relevant, as all major distros went usrmerge years ago, so /bin == /usr/bin (usually /bin is a symlink).
I like being able to run games from the early 2000s. Being able to write software that will still run long after you're gone used to be a thing. But here we are with linux abandoning things like 'a.out'. Microsoft doesn't have the luxury of presuming that its users can recompile software, fork it, patch it, etc. When your software doesn't work on the latest Windows, most people blame Microsoft, not the software author.
OK, I prefer to use software which is future-compatible, like ZFS, which is 128-bit.
“The file system itself is 128 bit, allowing for 256 quadrillion zettabytes of storage. All metadata is allocated dynamically, so no need exists to preallocate inodes or otherwise limit the scalability of the file system when it is first created. All the algorithms have been written with scalability in mind. Directories can have up to 248 (256 trillion) entries, and no limit exists on the number of file systems or the number of files that can be contained within a file system.”
https://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6qth/inde...
Don’t want to hit the quadrillion zettabyte limit..
Someone did some back-of-the-napkin math and calculated that to populate every byte in a 128 bit storage pool, you'd need to use enough energy to literally boil the oceans. There was a blog post on oracle.com that went into more detail, but no link into Oracle survives more than 10 years.
> Directories can have up to 248 (256 trillion) entries
It took me a minute to figure out that this was supposed to be 2^48, but even then that's ~281 trillion. What a weird time for the tera/tebi binary prefix confusion to show up, when there aren't even any units being used.
Wait are you saying Linux broke user-space? I've completely missed this and would like to know more, may I be so bold as to request a link?
> > But here we are with linux abandoning things like 'a.out'.
> I've completely missed this and would like to know more, may I be so bold as to request a link?
"A way out for a.out" https://lwn.net/Articles/888741/
"Linux 6.1 Finishes Gutting Out The Old a.out Code" https://www.phoronix.com/news/Linux-6.1-Gutting-Out-a.out (with links to two earlier articles)
Thanks!
Linux does occasionally remove stuff that seems to have no users, and there has been no good reason to have a.out binaries since... the late '90s?
I was playing with some asm code and generating a.out with nasm, and got stuck on why it wouldn't load... turns out Linux stopped supporting it. When they say "no one uses it" they mean packages and such; they don't care about private code you have lying around and other use cases. With a widely deployed platform like Windows, they can't assume things like that. There are certainly very valid business applications that go back decades. There are literally systems out there with 20+ years of uptime.
FWIW, you should be able to hack up an a.out loader pretty easily with binfmt_misc.
If someone complained to them that they still need a.out they might've reconsidered. At least that's what's happened before with old architectures.
I don’t like running games from the early 2000s outside of a sandbox of some description. If you disagree, it's because we don't have sandboxes which don't suck. Ideally, running old software in a sandbox on a modern OS should be borderline transparent — not like installing XP in a virtual machine.
While I understand the appeal of software longevity, and I think it's a noble and worthy pursuit, I also think there is an under-appreciated benefit in having unmaintained software less likely to function on modern operating systems. Especially right now, where the concept of serious personal computer security for normal consumers is arguably less than two decades old.
Inherited from the '80s? Microsoft effectively inherited drive letters via an 8086 semi-clone of CP/M called QDOS[0], which became the basis for PC-DOS and later MS-DOS. CP/M dates back to 1974.
But Gary Kildall didn't come up with the idea of drive letters in CP/M all on his own, he was likely influenced by TOPS-10[1] and CP/CMS[2], both from the late 60s.
[0] https://en.wikipedia.org/wiki/86-DOS
I don't particularly like the Windows naming structure, but it made just as much sense with later removable-media-with-fixed-drives systems (like optical drives) as it did with floppy drives. It maybe makes less sense now that storage is either fixed media or detachable drives, rather than some being removable media in fixed drives, but the period after common removable media is a lot shorter than the period after common floppy drives.
(And mostly, I'm talking about using drive letters rather than something like what Unix does. C: being the first fixed-media device may seem more arbitrary now, but it was pretty arbitrary even in the floppy era.)
There's a special place in hell for the inventor of drive names. IIRC, it's something that was nicked from CP/M.
Tim Patterson certainly copied it from CP/M and may not have been aware of anything predating it, but according to Wikipedia drive letters have quite a long history: https://en.wikipedia.org/wiki/Drive_letter_assignment
Wait 'til you hear about the PDP-11 emulator of a CPU it is running on.
Yeah, try explaining “drive C:” to a kid these days, and why it isn’t A: or B: …
Of course software developers are still stuck with 80 column conventions even though we have 16x9 4K displays now… Didn't that come from punch cards?
Come for punchcards, stay for legibility.
80 characters per line is an odd convention in the sense that it originated from a technical limitation, but is in fact a rule of thumb perfectly familiar to any typesetting professional from long before personal computing became widespread.
Remember newspapers? Laying the text out in columns[0] is not a random quirk or result of yet another technology limitation. It is the same reason a good blog layout sets a conservative maximum width for when it is read on a landscape oriented screen.
The reason is that when each line is shorter, the entire thing becomes easier to read. Indeed, even accounting for the legibility hit caused by hyphenation.
Up to a point, of course. That point may differ depending on the medium and the nature of the material: newspapers, given they deal with solid plain text and have other layout concerns, limit a line to around 50 characters; a book may go up to 80 characters. Given a program is not a relaxed fireside reading, I would place it closer to the former, but there are also factors and conventions that could bring acceptable line length up. For example, indentation and syntax highlighting, or typical identifier length (I’m looking at you, CNLabelContactRelationYoungerCousinMothersSiblingsDaughterOrFathersSistersDaughter), or editor capability to wrap lines nicely[1].
Finally, since the actual technical limitation is gone, it is actually not such a big deal to violate the line length rule on occasion.
[0] Relatedly, codebases roughly following the 80 character line length limitation unlock more interesting columnar layouts in editors and multiplexers.
[1] Isn’t the auto-wrap capability in today’s editors good enough that restricting line length is pointless at the authoring stage? Not really, and (arguably) especially not in case of any language that relies on indentation. Not that it could not be good enough, but considering code becomes increasingly write-only it seems unlikely we will see editors with perfect, context-sensitive, auto-wrap any time soon.
When I read text I prefer it to use the lessons
of typography and not be overly wide, lest my saccadic
motion leads my immersion and comprehension astray.
However when I read code I do not want to scan downwards to complete the semantics of a given expression, because that will also break my comprehension, and so when a line of code is long I'd prefer for it to remain long unless there are actually multiple clauses and other conditionally chained semantic elements that are more easily read alone. Oof, this looks awful on mobile, with extra line breaks.
I don't know any way to force line breaks on HN without extra line breaks ... do you?
80 chars per line was invented when languages used shortened commands, though. Nowadays 120 is more appropriate, especially in PowerShell. Not so much in bash, where commands are short; 80 can stay alive there!
I’m very sure this is a myth. Like any good myth, it makes sense on the surface but holds zero water once you look close.
Code isn’t prose. Code doesn’t always go to the line length limit then wrap, and prose doesn’t need a new line after every sentence. (Don’t nitpick this; you know what I’m saying)
The rules about how code and prose are formatted are different, so how the human brain finds the readability of each is necessarily different.
No code readability studies specifically looking for optimal line length have been done, to my knowledge. It may turn out to be the same as prose, but I doubt it. I think it will be different depending on the language and the size of the keywords in the language and the size of the given codebase. Longer keywords and method/function names will naturally lead to longer comfortable line lengths.
Line length is more about concepts per line, or words per line, than it is characters per line.
The 80-column limit was originally a technical one only. It has remained because of backwards compatibility and tradition.
> It is the same reason a good blog layout sets a conservative maximum width for when it is read on a landscape oriented screen.
Except 99.9% of the time it becomes 50 characters in a 32pt font, which occupies ~25% of the horizontal space on a 43" display.
"Good" my ass.
The 80 char max line width convention makes no sense with modern monitor resolutions and ultrawides being very common.
If you don't have some level of arbitrary limit on line length, it becomes all that much easier to sneak in malicious code prefixed by a bunch of whitespace.
Linting and autoformatters help here... allowing any length of line in code is just asking to get pwned at some point.
> Of course software developers are still stuck with 80 column conventions
Speak for yourself, all my projects use at least 100 if not 120 column lines (soft limit only).
Trying to keep lines at a readable length is still a valid goal though, even without the original technical limitations - although the bigger win there is to keep expressions short, not to just wrap them into shorter lines.
It did, but 80 columns also pretty closely matches the 50ish em/70ish character paragraph width that’s usually recommended for readability. I myself wouldn’t go much higher than 100 columns with code.
While 80 characters is obviously quite short, my experience is that longer line lengths result in much less readable code. You have to try to be concise on shorter lines, with better phrasing.
You can map hard drives to A: and B: just fine.
This will generally work with everything using the Win32 C API.
You will, however, run into weird issues when using .NET, with sudden invalid paths, etc.
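For the record, roughly like this (the disk/partition numbers and the C:\FloppyFeelings folder are made up; adjust to your layout, and the first variant needs an elevated prompt):

    # Give an existing data partition the letter A: (requires admin and the Storage module).
    Get-Partition -DiskNumber 1 -PartitionNumber 2 | Set-Partition -NewDriveLetter A

    # Or fake it per-session: map a plain folder to B: the old DOS way.
    New-Item -ItemType Directory -Path C:\FloppyFeelings -Force | Out-Null
    subst B: C:\FloppyFeelings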
It really wouldn't be much of a conversation. Historical conventions are a thing in general. Just think of the direction of electron flow.
> even though we have 16x9 4K displays now
Pretty much no normal person uses those at 100% scaling though, so unless you're thinking of the fellas who use a TV for a monitor, that doesn't actually help so much:
- 100% scaling: 6 panels of 80 columns fit, no px go to waste
- 125% scaling: 4 panels of 80 columns fit, 64 px go to waste (8 cols)
- 150% scaling: 4 panels of 80 columns fit, no px go to waste
- 175% scaling: 3 panels of 80 columns fit, 274 px go to waste (34 cols)
- 200% scaling: 3 panels of 80 columns fit, no px go to waste
This sounds good until you need any additional side panels. Think line numbers, scrollbars, breakpoint indicators, or worse: minimaps, and a directory browser. A minimap is usually 20 cols/panel, a directory browser is usually 40 cols. Scrollbar and bp-indicator together 2 cols/panel. Line numbers, probably safe to say, no more than 6 cols/panel.
With 2 panels, this works out to an entire additional panel in overhead, so out of 3 panels only 2 remain usable. That's the fate of the 175% and 200% options. So what is the "appropriate" scaling to use?
Well PPI-wise, if you're rocking a 32" model, then 150%. If a 27" model, then 175%. And of course, given a 22"-23"-24" unit, then 200%. People of course get sold on these for the "additional screen real estate" though, so they'll instead sacrifice seeing the entire screen at once and will put on their glasses. Maybe you prefer to drop down by 25% for each of these.
All of this is to say, it's not all that unreasonable. I personally feel a bit more comfortable with a 100 col margin, but I do definitely appreciate when various files nicely keep to the 80 col mark, they're a lot nicer to work with side-by-side.
The fact that modern interactive command shells are based on virtual teletype terminals is just absurd when you think about it
Try explaining files to a kid these days
Windows can still run software from the 80's, backwards compatibility has always been a selling point for Windows, so I'd call that a win.
Didn't Microsoft drop 16 bit application support in Windows 10? I remember being saddened by my exe of Jezzball I've carried from machine to machine no longer working.
Microsoft has dropped 16-bit application support via the built-in emulator (NTVDM) from 64-bit builds of Windows; whether that happened on Windows 10 or an earlier version of Windows depends on when the user moved to 64-bit (in my case, it was Windows Vista). However, you can still run 16-bit apps on 64-bit builds of Windows via third party emulators, such as DOSBox and NTVDMx64.
> you can still run 16-bit apps on 64-bit builds of Windows via third party emulators, such as DOSBox and NTVDMx64.
Or Wine, which is less reliable but funnier.
Do you mean winevdm? https://github.com/otya128/winevdm
Wine itself doesn't run on Windows AFAIK.
> Wine itself doesn't run on Windows AFAIK.
It does, if you use an old enough version of windows that SUA is available :). I never managed to get fontconfig working so text overlapped its dialogue boxes and the like, but it was good enough to run what I needed.
Wine ran sort-of-fineish in WSL v1 and I'm pretty sure it'll run perfectly in WSL v2 (which is just a VM).
and Linux stopped supporting 32bit x86 I think around the same time? (just i386?)
Are you talking about CPU support? I installed a 32 bit program on basic linux mint just the other day. If I really need to load up a pentium 4 I can deal with it being an older kernel.
That's exactly what I mean; I wish Linux were more like NetBSD in its architecture support. It kind of sucks that it is open source but acts like a corporate entity that calculates the profitability of things. There is one very important reason to support things in open source: because you committed to it, and you can. If there are practical reasons such as a lack of willing maintainers, that's totally understandable (though I refuse to believe that out of all the devs who beg for a serious role in kernel maintenance, none are willing to support i386 - if NetBSD has people, so can Linux).
You'd expect Microsoft to drop things once they stop making money, or for some other calculated cost reason, but Microsoft keeps supporting old things few people use even when it costs them performance or security edges.
Well for now the kernel still supports it. And the main barrier going forward is some memory mapping stuff that anyone could fix.
Though personally, while I care a lot about using old software on new hardware, my desire to use new software on old hardware only goes so far back and 32 bit mainstream CPUs are out of that range.
I think eventually 32-bit hardware and software shouldn't be supported, but there are still plenty of both. We shouldn't get rid of good hardware because it's too old; that's wasteful. 16-bit had serious limits, but 32-bit is still valid for many applications and environments that don't need >~3GB of RAM. For example, routers shouldn't use 64-bit processors unless they're handling that much load; die size matters there, which is why they mostly use Arm, and why Arm has Thumb mode (less instruction width = smaller die size). I'm sure the tiny amounts of money and energy saved by not having that much register/instruction width add up when talking about billions of devices.
Open source isn't where I'd expect abandonware to happen.
> We shouldn't get rid of good hardware because it's too old; that's wasteful.
Depends on how much power it's wasting, when we're looking at 20 year old desktops/laptops.
> 32-bit is still valid for many applications and environments that don't need >~3GB of RAM.
Well my understanding is that if you have 1GB of RAM or less you have nothing to worry about. The major unresolved issue with 32 bit is that it needs complicated memory mapping and can't have one big mapping of all of physical memory into the kernel address space. I'm not aware of a plan to remove the entire architecture.
It's annoying for that set of systems that fit into 32 bits but not 30 bits, but any new design over a gigabyte should be fine getting a slightly different core.
> For example, routers shouldn't use 64-bit processors unless they're handling that much load; die size matters there
I don't think that's right, but correct me if I missed something. A basic 64 bit core is extremely tiny and almost the same size as a 32 bit core. If you're heavy enough to run Linux, 64 bit shouldn't be a burden.
It's very impressive indeed.
Linux's goal is only code compatibility, which makes complete sense given the libre/open source origins. If the culture is one where you expect to have access to the source code for the software you depend on, why should the OS developers make the compromises needed to ensure you can still run a binary compiled decades ago?
My original VB6 apps (mostly) still run on win11
Hmm. IME VB6 is actually a particular pain point, because MDAC (a hodgepodge of Microsoft database-access thingies) does not install even on Windows 10, and a line-of-business VB6 app is very likely to need it. And of course you can't run apps from the 1980s on Windows 11 natively, because it can no longer run 16-bit apps, whether DOS or Windows ones. (All 32-bit Windows apps are definitionally not from the 1980s, seeing as Tom Miller's sailboat trip that gave us Win32 only happened in 1990. And it's not the absence of V86 mode that's the problem: Windows NT for Alpha could run DOS apps, using a fatter NTVDM with an included emulator. It’s purely Microsoft’s lack of desire to continue supporting that use case.)
> It’s purely Microsoft’s lack of desire to continue supporting that use case.
NTVDM leverages virtual 8086 mode which is unavailable while in long mode.
NTVDM would need to be rewritten. With alternatives like DOSBox, I can see why MSFT may not have wanted to dive into that level of backwards compat.
As I’ve already said in my initial comment, this is not the whole story. (I acknowledge it is the official story, but I want to say the official story, at best, creatively omits some of the facts.)
NTVDM as it existed in Windows NT (3.1 through 10) for i386 leveraged V86 mode. NTVDM in Windows NT (e.g. 4.0) for MIPS, PowerPC, and Alpha, on the other hand, already had[1] a 16-bit x86 emulator, which was merely ifdefed out of the i386 version (making the latter much leaner).
Is it fair of Microsoft to not care to resurrect that nearly decade-old code (as of Windows XP x64 when it first became relevant)? Yes. Is it also fair to say that they would not, in fact, need to write a complete emulator from scratch to preserve their commitment to backwards compatibility, because they had already done that? Also yes.
[1] https://devblogs.microsoft.com/oldnewthing/20060525-04/?p=31...
ReactOS' NTVDM DLL will work under XP through 10, and it will run some DOS games too.
Wait, what's the story of the sailboat trip? My searches are coming up empty, but it sounds like a great story.
Yeah, I was surprised by the lack of search results when I was double-checking my post too, but apparently I wasn’t surprised enough, because I was wrong. I mixed up two pieces of Showstopper!: chapter 5 mentions the Win32 spec being initially written in two weeks by Lucovsky and Wood
> Lucovsky was more fastidious than Wood, but otherwise they had much in common: tremendous concentration, the ability to produce a lot of code fast, a distaste for excessive documentation and self-confidence bordering on megalomania. Within two weeks, they wrote an eighty-page paper describing proposed NT versions of hundreds of Windows APIs.
and chapter 6 mentions the NTFS spec being initially written in two weeks by Miller and one other person on Miller’s sailboat.
> Maritz decided that Miller could write a spec for NTFS, but he reserved the right to kill the file system before the actual coding of it began.
> Miller gathered some pens and pads, two weeks’ worth of provisions and prepared for a lengthy trip on his twenty-eight-foot sailboat. Miller felt that spec writing benefited from solitude, and the ocean offered plenty of it. [...] Rather than sail alone, Miller arranged with Perazzoli, who officially took care of the file team, to fly in a programmer Miller knew well. He lived in Switzerland.
> In August, Miller and his sidekick set sail for two weeks. The routine was easy: Work in the morning, talking and scratching out notes on a pad, then sail somewhere, then talk and scratch out more notes, then anchor by evening and relax.
(I’m still relatively confident that the Win32 spec was written in 1990; at the very least, Showstopper! mentions it being shown to a group of app writers on December 17 of that year.)
I had a game partition mounted as a subpath on a drive, and it just didn't work well with some apps.
Some apps (in this case Steam) don't ask "what is the free space at the current path" (despite, say, GetDiskFreeSpaceExW accepting a full path just fine); they cut it down to the drive letter, which causes them to display the free space of the root drive, not the actual directory they are using, which in my case was mounted as a different partition.
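For illustration, a rough PowerShell sketch (the P/Invoke wrapper and the D:\Games\Library path are my own stand-ins, not anything Steam does) showing that the Win32 call happily takes a full directory path, so truncating to the drive letter is purely the application's choice:

    # Declare a thin P/Invoke wrapper around the Win32 API.
    $signature = '
    [DllImport("kernel32.dll", SetLastError=true, CharSet=CharSet.Unicode)]
    public static extern bool GetDiskFreeSpaceExW(
        string lpDirectoryName,
        out ulong lpFreeBytesAvailableToCaller,
        out ulong lpTotalNumberOfBytes,
        out ulong lpTotalNumberOfFreeBytes);
    '
    Add-Type -Namespace Win32 -Name Disk -MemberDefinition $signature

    $free = [uint64]0; $total = [uint64]0; $totalFree = [uint64]0
    # Query the mounted subdirectory itself, not just "D:\"; the API resolves
    # the volume that actually backs the path.
    $null = [Win32.Disk]::GetDiskFreeSpaceExW('D:\Games\Library', [ref]$free, [ref]$total, [ref]$totalFree)
    '{0:N1} GiB free at that path' -f ($free / 1GB)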
In the '80s, running DOS 3.1 on an IBM Network, I was networking dual-floppy PCs, and with testing got through drives '!', '@', '#', '^'. So I was able to use 26 floppies, 24 of them non-local... It was all removed with the next release, 3.2, so I would make some bets about NT networking and its NetBIOS roots.
I was inspired by the Dr Seuss, "On beyond Zebra."
I mean, it's a successful commercial project because it doesn't break things, at least not that often. You can run some really old software on Windows. It's kind of taken for granted, but this is just not the norm in most industries.
As for baffling, I mean, I type things like 'grep' every day, which is a goofy word. I'm not even going to go into all the legacy stuff Linux presents and how Linux, like Windows, tries hard not to break userland software.
It’s not baffling at all. They strongly value maintaining backwards compatibility guarantees.
For example, Windows 11 makes no backwards compatibility guarantees for DOS, but the operating systems they do have backwards compatibility guarantees for still get them.
Enterprises need Microsoft to maintain these for as long as possible.
It is AMAZING how much inertia software has that hardware doesn't, given how difficult each is to create.
They've stopped caring as much about backwards compat.
Windows 10 no longer plays the first Crysis without binary patches for instance.
Things that go through the proper channels are usually compatible. Crysis was never the most stable of games and IIRC it used 3DNow, which is deprecated - but not by Windows.
As a counter-anecdata, last week I ran Galapagos: Mendel's Escape with zero compat patches or settings, that's a 1997 3D game just working.
> Things that go through the proper channels are usually compatible.
But that's a pretty low bar - previously Windows went to great lengths to preserve backwards compatibility even for programs that are out of spec.
If you just care about keeping things working if they were done "correctly" then the average Linux desktop can do that too - both for native Linux programs (glibc and a small list of other base system libraries have strong backwards compatibility) as well as for Windows programs via Wine.
On paper maybe. In practice there's currently at least one case that directly affects me where Wine-patched Windows software still works on Windows thanks to said patch... but doesn't work under Wine anymore.
There's a big difference between enterprise-level software and games.
Windows earns money mainly in the enterprise sector, so that's where the backwards-compatibility effort is. Not gaming. That's just a side effect.
Anecdotally, you can run 16-bit games (swing; 1997) on Windows, but only if you patch 2-3 DirectX-related files.
The prototypical examples given in the past were for applications like Sim City, hardly bastions of enterprise software.
And with Win11, Microsoft stopped shipping 32-bit versions of the OS, and since they don't support 16-bit mode on 64-bit OSes, you actually can't run any 16-bit games at all.
The 3.5mm audio jack is 75 years old, but electrically-compatible with a nearly 150-year-old standard.
Victorian teletypes can be hooked to a serial port with a trivial adapter, at least enough to use CP/M and most single-case OS'es.
Also, some programming languages have a setting to export code compatible with just Baudot characters: http://t3x.org/nmhbasic/index.html
So, you could feed it from paper tape and maybe Morse too.
Yeah speakers haven’t changed enough to make the 3.5mm connector obsolete.
Many new devices use a 2.5mm audio jack instead of the 3.5mm audio jack.
It's baffling that after 59 years, Unix is still stuck in a weird directory naming structure inherited from the late '60s that no longer makes sense when nobody has floppy drives.
Unix pre-dates floppy drives, at least on PDP-11.
Unix just barely predates the PDP-11 itself.
PnP PowerShell also includes a PSDrive provider [0] so you can browse SharePoint Online as a drive. These aren't limited to local sources.
[0] https://pnp.github.io/powershell/cmdlets/Connect-PnPOnline.h...
> You can't access certificates in linux/bash as a file path for example
FUSE and 9P exist... If anyone wants certs by ID in the filesystem, it will exist.
> You can't access certificates in linux/bash as a file path for example, but you can in powershell/windows.
Sure you can: /usr/share/ca-certificates, though you do need to run 'update-ca-certificates' (on Debian derivatives) to update some files, like the hashed symlinks in /etc/ssl/certs.
There is also, of course, /sys and /proc for system stuff, but yes, nowhere near as integrated as the Windows registry.
ReactOS has a graphical NT OBJ browser (maybe as a CLSID) where you can just open an Explorer window and look up the whole registry hierarchy and a lot more.
It works under Windows too.
Proof:
https://winclassic.net/thread/1852/reactos-registry-ntobject...
Awesome!
Seems ReactOS holds some goodies even for long-time Windows users!

> After [copying over .dll and importing .reg files], you will already be able to open these shell locations with the following commands:
>
> NT Object Namespace: explorer.exe shell:::{845b0fb2-66e0-416b-8f91-314e23f7c12d}
>
> System Registry: explorer.exe shell:::{1c6d6e08-2332-4a7b-a94d-6432db2b5ae6}
>
> If you want to add these folders in My Computer, just like in ReactOS, add these 2 CLSIDs to the following location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace

Yep. The NTVDM DLL too; it will run DOS binaries and some other stuff. It also has proper solitaires and such, which you can just reuse by extracting the EXE files from the CAB file on the ReactOS live CD.
> You can't access certificates in linux/bash as a file path for example, but you can in powershell/windows.
I don't understand what you mean by this. I can access them "as a file" because they are in fact just files
    $ ls /etc/ca-certificates/extracted/cadir | tail -n 5
    UCA_Global_G2_Root.pem
    USERTrust_ECC_Certification_Authority.pem
    USERTrust_RSA_Certification_Authority.pem
    vTrus_ECC_Root_CA.pem
    vTrus_Root_CA.pem

You can access files that contain certificate information (on any OS), but you can't access individual certificates as their own object. In your output, you're listing files that may or may not contain valid certificate information.
The difference is similar to being able to do 'ls /usr/bin/ls' vs 'ls /proc/12345/...': the first is a literal file listing, the second is a way to access/manipulate the ls process (supposing it's pid 12345). In Windows, certificates are not just files but parsed/processed/validated, usage-specific objects. The same applies on Linux, but it is up to OpenSSL, GnuTLS, etc. to make sense of that information. If OpenSSL/GnuTLS had a VFS mount for their view of the certificates on the system (and GPG!!), that would be similar to Cert:\ in PowerShell.
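To make the "objects, not files" point concrete, a small sketch against the built-in Cert: drive (the store path is standard; the one-year cutoff is just an arbitrary example):

    # List root CAs expiring within a year, using parsed certificate properties
    # instead of grepping PEM files.
    Get-ChildItem Cert:\LocalMachine\Root |
        Where-Object { $_.NotAfter -lt (Get-Date).AddYears(1) } |
        Sort-Object NotAfter |
        Select-Object Thumbprint, Subject, NotAfter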
Linux lacks a lot of APIs other operating systems have and certificate management is one of them.
A Linux equivalent of listing certificates through the Windows virtual file system would be something like listing /proc/self/tls/certificates (which doesn't actually exist, of course, because Linux has decided that stuff like that is the user's problem to set up and not an OS API).
I _suspect_ they mean that certs imported into MMC in Windows can be accessed at magic paths, but... yeah, Linux can do that because it skips the step of making a magical holding area for certs.
There are magical holding areas in Linux as well, but that detail is up to TLS libraries like OpenSSL at run time, and hidden away from their clients. There are myriad ways to manage just CA certs; GnuTLS may not use OpenSSL's paths, and each distro has its own idea of where the certs go. The ideal unix-y way (which Windows/PowerShell gets) would be to mount a virtual volume for certificates where users and client apps alike can view/manipulate certificate information. If you've tried to get internal certs working across different Linux distros/deployments you might be familiar with the headache (a minor one, I'll admit).
Not for certs specifically (that I know of), but Plan 9 and its derivatives go very hard on making everything VFS-abstracted. Of course /proc, /sys and others are awesome, but there are still things that need their own FS view yet are relegated to just 'files'. Like ~/.cache, ~/.config and all the XDG standards. I get it, it's a standardized path and all, but what's being abstracted here is not "data in a file" but "cache" and "configuration" (more specific); it should still be in a VFS path, but what's exposed shouldn't be a file, it should be an abstraction of "configuration settings" or "cache entries" backed by whatever thing you want (e.g. Redis, SQLite, S3, etc.). The Windows registry (configuration manager is its real name, btw) does a good job of abstracting configurations, but obviously you can't pick and choose the back-end implementation like you potentially could in Linux.
> The Windows registry (configuration manager is its real name, btw) does a good job of abstracting configurations, but obviously you can't pick and choose the back-end implementation like you potentially could in Linux.
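As a concrete illustration of the configuration-through-a-provider idea, a minimal sketch (the MyApp key and Theme value are made-up names, nothing Windows itself defines):

    # Create a per-user configuration key and a value under it via the HKCU: drive.
    New-Item -Path HKCU:\Software\MyApp -Force | Out-Null
    Set-ItemProperty -Path HKCU:\Software\MyApp -Name Theme -Value 'dark'

    # Read it back the same way you would read any other provider item.
    (Get-ItemProperty -Path HKCU:\Software\MyApp).Theme

    # Clean up the example key.
    Remove-Item -Path HKCU:\Software\MyApp -Recurse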
In theory, this is what D-Bus is doing, but through APIs rather than arbitrary path-key-value triplets. You can run your secret manager of choice, and as long as it responds to the D-Bus API calls correctly, the calling application doesn't know who's managing the secrets for you. Same goes for sound, display config, and the Bluetooth API, although some are "branded", so they're not quite interchangeable as they might change on a whim.
Gnome's dconf system looks a lot like the Windows registry and thanks to the capability to add documentation directly to keys, it's also a lot easier to actually use if you're trying to configure a system.
Do note the 'ls' usage:
    PS Cert:\LocalMachine\Root\> ls

       PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\Root

    Thumbprint                                Subject               EnhancedKeyUsageList
    ----------                                -------               --------------------
    CDD4EEAE6000AC7F40C3802C171E30148030C072  CN=Microsoft Root C…
    BE36A4562FB2EE05DBB3D32323ADF445084ED656  CN=Thawte Timestamp…
    A43489159A520F0D93D032CCAF37E7FE20A8B419  CN=Microsoft Root A…
    92B46C76E13054E104F230517E6E504D43AB10B5  CN=Symantec Enterpr…
    8F43288AD272F3103B6FB1428485EA3014C0BCFE  CN=Microsoft Root C…
    7F88CD7223F3C813818C994614A89C99FA3B5247  CN=Microsoft Authen…
    245C97DF7514E7CF2DF8BE72AE957B9E04741E85  OU=Copyright (c) 19…
    18F7C1FCC3090203FD5BAA2F861A754976C8DD25  OU="NO LIABILITY AC…
    E12DFB4B41D7D9C32B30514BAC1D81D8385E2D46  CN=UTN-USERFirst-Ob…  {Code Signing, Time Stamping, Encrypting File System}
    DF717EAA4AD94EC9558499602D48DE5FBCF03A25  CN=IdenTrust Commer…
    DF3C24F9BFD666761B268073FE06D1CC8D4F82A4  CN=DigiCert Global …

Now do the same without a convoluted hodge-podge of one-liners involving grep, python and cutting exact text pieces with regex.

I always love how Linux fans like to talk without any experience nor the will to get said experience.
This is nice! What I find hard to grapple with is how other concepts of the file system map to these providers, even more so to Alias, Environment, Function or Variable. Things like creating an item, deleting an item, copying an item, or viewing contents and properties like permissions, size, and visibility of an item.
For the Certificate provider specifically: when I think certificates and hierarchy, I think of the signing hierarchy of issuing certs. But that is not what is exposed here, just the structure of the OS cert store without context; and moving items has far more implications than inside a normal data folder. Thus I prefer certlm/certmgr.msc, as they provide more of that context.
Sometimes it feels as if they crammed too much into that idea; it's a forced concept. https://superuser.com/q/1065812/what-is-psprovider-in-powers...
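Edit: partially answering my own question, the standard item cmdlets do seem to map onto those drives. A rough sketch (the np alias and GREETING variable are throwaway names I made up):

    # The same item verbs work across providers, not just the filesystem.
    New-Item  -Path Alias:\np -Value notepad.exe           # create an alias item
    New-Item  -Path Variable:\GREETING -Value 'hello'      # create a variable item ($GREETING)
    Copy-Item -Path Variable:\GREETING -Destination Variable:\GREETING2
    Get-Item  -Path Env:\PATH | Select-Object Name, Value  # read an environment item
    Get-ChildItem Function:\ | Select-Object -First 5      # functions are items too

    # Clean up.
    Remove-Item -Path Alias:\np
    Remove-Item -Path Variable:\GREETING, Variable:\GREETING2

How permissions, size, or visibility map is still provider-specific, though, which I guess is the forced part.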
No, he meant access via a virtual pseudo-filesystem, like /proc, /sys, etc.