>> SMTP "“didn’t win because it was ‘better,’” he argued, but “just because it was easier to implement."
Yes - and this is actually really important! It's true of most of the important early internet technologies. It's the entire reason "internet" standards won over "telco" (in this case ITU) standards - the latter could only be deployed by big coordinated efforts, while internet standards let individual decentralized admins hook their sites together.
Did any of the ITU standards win? In the end, internet swallowed telephones and everything is now VOIP. I think the last of the X standards left is X509?
> It's the entire reason "internet" standards won over "telco" (in this case ITU) standards - the latter could only be deployed by big coordinated efforts,
Anyone remember the promise of ATM networking in the 90's? It was telecom-grade, circuit-switched networking that would handle voice, video and data down one pipe. Instead of carelessly flinging packets into the ether like a savage, you had a deterministic network of pipes. You called a computer as if it were a telephone (or maybe that was Datakit?) and ATM handed the user a byte stream like TCP. Imagine never needing an IP stack or setting traffic priority because the network already handles the QoS. Was it simple to deploy? No. Was it cheap? Nooohooohooohooo. Was Ethernet any of those? YES AND YES. ATM was superior but lost to the simpler and cheaper Ethernet, which was pretty crappy in its early days (thinnet, thicknet, terminators, vampire taps, AUI, etc.) but good enough.
The funny part is that this has the unintended consequence of needing to reinvent the wheel once you get to the point where you need telecom-scale infrastructure. Ethernet had to adapt to deterministic real-time needs, so various hacks and standards have been developed to paper over these deficiencies, which is what TSN is: reinventing ATM's determinism. In addition we now have OTN, yet another protocol to further paper over the various other protocols and mux everything down a big fat pipe to the other end, which allows Ethernet (and IP/ATM/etc.) to ride deterministically between data centers.
> Ethernet had to adapt to deterministic real-time needs
Without being able to get too into the telco detail, I think the lesson was that hard realtime is both much harder to achieve and not actually needed. People will happily chat over nondeterministic Zoom and Discord.
It's both psychological and slightly paradoxical. Once you let go of saying "the system MUST GUARANTEE this property", you get a much cheaper, better, more versatile and higher bandwidth system that ends up meeting the property anyway.
> not actually needed
What you need is more than enough bandwidth.
Think of the difference between a highway with few cars versus a highway filled to the brim with cars. In the latter case traffic slows to a crawl even for ambulances.
It seems like it was just cheaper and easier to build more bandwidth than it was to add traffic priority handling to internet connectivity.
I saw a story once, which may well be completely made up, about why AT&T got out of the cell phone business. They had a research project, but reliability was an issue. They couldn't see a way to do better than 1 dropped call in 10,000. Their standard for POTS at the time was 1 in 2 billion.
Seeing that the tech would never be good enough, they sold off the whole thing for cheap. Years later, they bought it back for way, way more money because they desperately needed to get into the cell phone business that was clearly headed to the moon.
I totally understand the pride they had in the reliability of their system, but it turns out that dropped calls just aren't that big of a deal when you can quickly redial and reconnect.
Seems a little sus. AT&T basically created the cellular mobile phone, and built up an analog, then digital system (D-AMPS/TDMA). AT&T sort of sold out the mobile business in 2004 to Cingular (BellSouth) because TDMA was a dead end. They then bought BellSouth back in 2006 and carried on with GSM.
Those old phones had a long range. It was hard to make small ones because the old AT&T towers were much farther apart, up to 40km. Meanwhile, their competitors focused on smaller coverage areas (e.g. 2km or less for PCS) and better tech (CDMA), and it seemed to pay off.
This is a minor detail, but the "AT&T" that bought BellSouth in 2006 was the AT&T formerly known as SBC which bought the husk of Ma Bell and rebranded itself, i.e. the AT&T we have today.
Yeah, big differences between an absolute guarantee and "we'll take as much as we can get"
ATM was superior in the context of a bill-by-the-byte, telco-style network where oversubscribed links could be carefully planned. The "impedance mismatch" of IP's unreliable datagram delivery with ATM's guaranteed cell delivery created situations where ATM switches could effectively need unlimited buffer RAM to make their delivery guarantees, even when the cells contained IP datagrams that could just be discarded with no ill consequences.
There's likely an element of the "layering TCP on TCP" problem going on, too.
The classic popular treatment of the subject is: https://www.wired.com/1996/10/atm-3/
It was designed by people who were trying to digitally emulate 1920s copper-wire circuits at a time when the entire world was moving to packet-switched digital data. I remember visiting a large telco at the time and having to tell them about this new thing called ADSL that was going to steamroller them if they weren't careful. "Nooo... no, that's not real, you can't do that over a phone line, not possible. And even if it was it'll never take off, if anyone really wants a digital link they can go with our X.25 or ISDN offerings".
When I pointed out in a previous post how much X.400 sucked, even that never got anywhere near X.25. X.25 is the absolute zero on any networking scale; the scale starts with X.25 at -273°C and goes up from there.
atm did not have cell delivery guarantees. it did have per-connection qos negotiation that could include the loss probability as one of the many metrics that were supported. the only way to provide 'zero loss' is to implement hop-by-hop error detection and retransmission, which is only really done in HPC networks, and some satellite transport schemes where the loss is high and bursty and the latency is high.
however, actually building a functional routing infrastructure that supported QOS was pretty intractable. that was one of several nails in ATM's coffin (I worked a little on the PNNI routing proposal).
edit: I should have admitted that yes, loss does have a relationship to queue depth, but that doesn't result in infinite queues here. it does mean that we have to know the link delay and the target bandwidth and have per-flow queue accounting, which isn't a whole lot better really. some work was done with statistical queue methods that had simpler hardware controllers - but the whole thing was indeed a mess.
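For readers who haven't seen how that per-connection QoS contract was actually policed: ATM checked each arriving cell against the GCRA, which is essentially a leaky bucket. Below is a minimal sketch of its virtual-scheduling form; the rate interval T and jitter tolerance tau are invented values, purely for illustration:

```python
# Sketch of ATM's GCRA ("leaky bucket") policer, virtual-scheduling form.
# T is the ideal inter-cell interval for the contracted rate, tau the
# allowed jitter. Parameter values below are made up for illustration.
def make_gcra(T, tau):
    tat = 0.0  # theoretical arrival time of the next conforming cell
    def conforming(t):
        nonlocal tat
        if t < tat - tau:
            return False          # cell arrived too early: police it
        tat = max(t, tat) + T     # schedule the next expected arrival
        return True
    return conforming

police = make_gcra(T=1.0, tau=0.5)
for t in [0.0, 0.4, 1.2, 2.2, 3.3]:
    print(t, police(t))           # the 0.4 arrival is non-conforming
```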
I was there for ATM, and I'm so freaking glad it lost. It's a prime example of "a camel is a horse designed by committee". A 53 byte cell with a 48 byte payload? Of course! What an excellent idea! We definitely want a 10% overhead on a ludicrously small packet, just so it has tolerable voice latencies if you scale it down to run on a 64Kb DS0, never mind that literally everything in the industry was scaling up to fatter pipes.
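For what it's worth, the arithmetic behind that complaint checks out. A quick sketch using the figures above (53-byte cell, 48-byte payload, 64 kb/s DS0):

```python
# Back-of-the-envelope numbers behind the cell-size complaint.
CELL_BYTES = 53
PAYLOAD_BYTES = 48        # 5-byte header
DS0_BPS = 64_000          # one voice channel

overhead = (CELL_BYTES - PAYLOAD_BYTES) / CELL_BYTES
fill_delay_ms = PAYLOAD_BYTES * 8 / DS0_BPS * 1000  # time to fill one cell with voice

print(f"header overhead: {overhead:.1%}")                     # 9.4%
print(f"cell fill delay at 64 kb/s: {fill_delay_ms:.0f} ms")  # 6 ms
```

That ~6 ms of packetization delay is roughly the latency concern behind the echo-cancellation argument mentioned downthread.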
ATM was nifty if you had a requirement of establishing voice-style, i.e. billable, connections. No thanks. It was an interesting technology but hopelessly hobbled by the desire to emulate a voice call that fit into a standard invoice line.
If you’re primarily concerned with shuffling low latency voice around the place, and you want to do hardware forwarding on relatively inexpensive silicon, then that cell size is entirely sensible.
That approach of course didn’t age well when voice almost became a niche application.
note that it was 'tolerable latency without echo cancellation in France'; most other places had long enough latency that they needed echo cancellation anyway. and of course now everything needs it.
I think standards are important, and I'm sad that no one bothers anymore, but stuff like this and the inclusion of interlace in digital video for that little 3 year window when it might have mattered does really sour one on the process.
I'd forgotten about the French connection here.
BTW, I searched Kagi for "tolerable latency without echo cancellation in France" and saw your comment. Wow. I didn't realize web crawlers were that current these days.
Not The Silliest Contrivance to happen to video standards :P
Thus its acronym, A Technical Mistake. Or, from the telco side, A Tariffing Mechanism.
My college went all-in on ATM-over-fiber and wired all the dorm rooms with it. It was a PITA. Of course no computers came with ATM support, and the cards cost $400+ each, so the school had hundreds of cards that they would “lease” out to students each year. There would be a huge “install depot” at the start of the year where students brought in their (desktop) computers and volunteers would open them up, install the cards, install drivers and configure them for our network.
For Linux heads, it was doubly annoying, as ATM was not directly supported in the kernel. You had to download a separate patch to compile the necessary modules, then install and run three separate system daemons, all with the correct arguments for our network, just to get a working network device. And of course you had to download all the necessary packages with another computer, since you couldn’t get online yet. This was the early 2000s, so WiFi was not really common yet.
Even once you got online, one of the daemons would randomly crash every so often and you’d have to restart it to get back online. It was such a pain.
Pretty sure TSN is unrelated to ATM determinism, and comes from a completely separate area (replacing custom field buses where timing and contention are more important than bandwidth). Some of ATM's complexity came from wanting to deliver the same quality of experience as plesiochronous networks provided for voice (that's how it got the weird cell size).
Once those requirements were relaxed (partially because people just started to accept weird echo), the replacement became MPLS and whatever else you can send IP over, where Ethernet sometimes shows up as packaging around the IP frame but has little relation to Ethernet otherwise.
Not directly related but a consequence.
ATM semantics and TSN semantics are quite different; the closest overlap would be AFDX (avionics full-duplex Ethernet), except that AFDX creates static circuits.
Was it actually superior though? The usual treatment is that packet switching works better at the scale of the internet. With voice, hogging a whole line works, but for the internet it makes more sense to slow everybody down when congestion occurs rather than preventing some people from connecting at all. I get why the telecoms would have you waste your bandwidth reserving a connection you don't need, and I get why they would try and sell that as a superior solution because of some nonsense about reliability, but I don't see it as providing much benefit to the user.
One reason I've heard the internet works as well as it does is that it inverts the Bell system: where the Bell system was a smart network with dumb edge devices, the internet is a dumb network with smart edge devices. This is supposed to be better because it is much, much easier to upgrade smart edge devices than a smart network.
And this sort of checks out: most of the complaints about the internet architecture are about someone putting smart middleboxes in a load-bearing capacity, which then makes it hard to deploy new edge devices.
> Instead of carelessly flinging packets into the ether like a savage, you had a deterministic network of pipes
I love this. Ethernet is such shit. What do you mean the only way to handle a high speed to lower speed link transition is to just drop a bunch of packets? Or sending PAUSE frames which works so poorly everyone disables flow control.
Wait, are you serious? This is how it works?
Yes: https://fasterdata.es.net/performance-testing/troubleshootin.... A simplistic TCP server will blast packets on the link as fast as it can, up to the size of the TCP receive window. At that point it’ll stop transmitting and wait for an ACK from the client before sending another window’s worth of packets.
To handle a speed transition without dropping packets, the switch or router at the congestion point needs to be able to buffer the whole receive window. It can hold the packets and then dribble them out over the lower speed link. The server won’t send more packets until the client consumes the window and sends an ACK.
But in practice the receive window for an Internet scale link (say 1 gigabit at 20 ms latency) is several megabytes. If the receive window was smaller than that, the server would spend too much time waiting for ACKs to be able to saturate the link. It’s impractical to have several MB of buffer in front of every speed transition.
Instead what happens is that some switch or router buffer will overflow and drop packets. The packet loss will cause the receive window, and transfer rate, to collapse. The server will then send packets with a small window so it goes through. Then the window will slowly grow until there’s packet loss again. Rinse and repeat. That’s what causes the saw-tooth pattern you see on the linked page.
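To sanity-check the "several megabytes" figure above: the buffer needed at the congestion point is roughly the bandwidth-delay product. A quick sketch using the same numbers (1 Gbit/s, 20 ms):

```python
# Bandwidth-delay product: roughly the receive window (and thus the buffer)
# needed to keep a link saturated without drops.
link_bps = 1_000_000_000  # 1 Gbit/s
rtt_s = 0.020             # 20 ms round trip

bdp_bytes = link_bps * rtt_s / 8
print(f"BDP: {bdp_bytes / 1e6:.1f} MB")  # 2.5 MB
```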
Heh heh. If that shocks you, search engine for "bufferbloat" and prepare to be horrified.
I experienced this with a VDI project when we mistakenly got 25Gb links delivered to the hosts.
We were expecting to get some sort of unbelievably fast internet experience, but it was awful as the internet gateway was 1 Gb or something similar.
This is how old-school TCP figures out how fast it can send data, regardless of the underlying transport. It ramps up the speed until it starts seeing packet loss, then backs off. It will try increasing speed again after a bit, in case there's now more capacity, and back off again if there's loss.
You can gain a bit of performance here by tuning it so it never exceeds the true speed of the link - which is only really useful when you know what that is and can guarantee it.
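If it helps, that ramp-and-back-off behaviour is classic AIMD (additive increase, multiplicative decrease). A toy sketch of the resulting sawtooth, with made-up numbers and no slow start or timeouts:

```python
# Toy AIMD loop: the congestion window grows by one packet per RTT until the
# bottleneck drops a packet, then halves. Real TCP adds slow start, fast
# retransmit, etc.; this only shows the sawtooth shape.
bottleneck = 100.0  # packets per RTT the path can carry (invented)
cwnd = 1.0          # congestion window in packets

for rtt in range(60):
    if cwnd > bottleneck:   # bottleneck buffer overflows -> loss detected
        cwnd /= 2           # multiplicative decrease
    else:
        cwnd += 1           # additive increase
    print(rtt, round(cwnd, 1))
```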
Anyone remember the incredible disrepute of the phone company in the 80s?
We just wanted our own stuff. We did not want to coordinate with a proprietary vendor to network or be charged by the byte to do so.
And for a while, telco engineers tried to retrofit Internet to their purposes.
I worked on a network that used RSVP ( https://en.wikipedia.org/wiki/Resource_Reservation_Protocol ) to emulate the old circuit-switched topology. It was kinda amazing to see how it could carve guaranteed-bandwidth paths through the network fabric.
Of course, it also never really worked with dynamic routing and brought in tons of complexity with stuck states. In our network, it eventually was just removed entirely in favor of 1gbit links with VLANs for priority/normal traffic.
I started my career at France Telecom's R&D lab in Caen, Normandy. They had their own home-grown X.400 email client, and even though they could have set up an SMTP server for free, they deliberately chose to point MX at a paid SMTP-to-X.400 gateway out of OSI ideology.
It was complete garbage.
Another lab of theirs made a Winsock that would use ATM SVCs instead of TCP and proudly put out a brochure extolling their achievement: "Web protocol without having to use TCP". Because clearly it was TCP hindering adoption of the Web /s
The Bellhead vs. Nethead was a real thing back then. To paraphrase an old saying about IBM, Telcos think if they piss on something, it improves the flavor.
One of the jobs I applied to out of college was leading Schengen's central police database (think stolen car reports, arrest warrants, etc.), which would federate national databases. For some unfathomable reason, they chose X.400 as the messaging bus for that replication, and endured massive delays and cost overruns for that reason. I guess I dodged a bullet by not going there.
WebPKI is derived from X.509, but I don't think X.509 lives on anymore. X.500 was stripped down to form LDAP, which is still in very heavy use today. There's still some X.400 systems in existence. I think some of the early cellphone generations may have used the ITU standards in the physical layer?
Of course, the biggest--and weirdest--success of the ITU standards is that the OSI model is still frequently the way networking stacks are described in educational materials, despite the fact that it bears no relation to how any of the networking stack was developed or is used. If you really dig into how the OSI model is supposed to work, one of the layers described only matters for teletypes--which were a dying, if not dead, technology when the model was developed in the first place.
There's an entire book devoted to ripping up the OSI model: https://docs.google.com/document/d/1iL0fYmMmariFoSvLd9U5nPVH...
What an interesting read. Thank you for posting it.
Everyone who knows what the OSI model is should read at least some of this book.
X.509 absolutely lives on -- https://www.itu.int/rec/t-rec-x.509 last update was October 2024. However WebPKI uses PKIX which is fairly stubbornly stuck on RFC5280.
On the ITU side, they have made improvements including allowing a plain fully qualified domain name as the subject of a certificate, as an alternative to sequence of set of attributes.
If you mean the presentation layer, hard disagree. Not thinking about presentation creates problems. For example, Go treating ASCII headers as UTF-8 caused trouble. Even a little carelessness about the HTTP/2 vs HTTP/1.1 mismatch caused trouble for reverse proxies.
Now I'm young enough not to have seen teletypes in an actual production use setting, but I've never heard anyone suggesting the presentation layer was for teletypes. That's just Google-level FUD.
No, it was the session layer.
No, LDAP was a student project from UMich that somehow gained mindshare because (a) it wasn't ISO, and (b) it cleverly had an 'L' in front of it. It's now more complex and heavyweight than the original DAP, but people think it isn't because of that original clever bit of marketing.

> X.500 was stripped down to form LDAP

Well, it started off simpler, but, yes.
Doh! Of course it was easier to implement. IETF wants a working open source implementation before standardising.
Have you ever tried to implement an ITU standard from just reading the specs? It's hard. Firstly you have to spend a lot of money just to buy the specs. Then you find the spec is written by somebody who has a proprietary product, and is tiptoeing along a line that reveals enough information to keep the standards body happy (ie, has enough info to make it worthwhile to purchase the specification), and not revealing the secret sauce in their implementation.
I've done it, and it's an absolute nightmare. The IETF RFCs are a breath of fresh air in comparison. Not only can you read the source, there are example implementations!
And if you think that didn't lead to a better outcome, you're kidding yourself. The ITU process naturally leads to a small number of large engineering orgs publishing just enough information so they can interoperate, while keeping enough hidden so the investment discourages the rise of smaller competitors. The result is, even now I can (and do) run my own email server. If the overly complicated bureaucratic ITU standards had won the day, I'm sure email would have been run by a small number of CompuServe like rent seeking parasites for decades.
Given that the general public uses social network services for electronic messaging today, and those don't even pretend to want to be interoperable, we've got parasites of a totally different class on top of the Internet infrastructure.
Remember jabber/xmpp? At least they tried to interoperate. Google Talk at the beginning had interoperability as its main feature, but Google quickly scrapped that.
UPDATE: some say that's because XMPP was too all-encompassing a standard (if a format lets you do too much, it loses usefulness, like saying that a binary file format can store anything). IMO that's not the reason; they could have just supported their own subset. They scrapped interoperability purely for competitive reasons, IMO.
> IETF wants a working open source implementation before standardising.
I don't think that's IETF policy. Individual IETF working groups decide whether to request publication of an RFC, and the availability of open source implementations is a strong argument in favour of publication, but not a hard requirement.
If the IETF standards are sometimes useful, it's more a matter of culture than of policy.
A great example of this was PKIX, whose policy was "we'll publish it as a standard and someone else will have to figure out how to make it work". There are 20-year-old standards-track PKIX documents that have no known implementations.
I have been told that ITU specifications are deliberately confusing so that they can sell consulting services.
However, I think DER is good (and better than BER, PER, etc., in my opinion). (I did make up a variant with a few additional types, though.)
OID is also a good idea, although I had thought they should add another arc for OIDs based on various kinds of other identifiers (telephone numbers, domain names, etc.), together with a date for which that identifier is valid (to avoid issues with reassigned identifiers), as well as the possibility of automatic delegation for some types (so that e.g. if you register an account on another system then you can get a free OID from it too; there is a bit of difficulty in some cases but it might be possible). (I have written a file about how to do this, although I have not published it yet.)
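As an aside for anyone who hasn't poked at OIDs: their DER encoding is compact and easy to hand-roll, which is part of the appeal. A minimal sketch (the arc-packing rules are the standard ones; the example is the well-known RSADSI arc 1.2.840.113549):

```python
# DER encoding of an OID (tag 0x06): the first two arcs are packed as
# 40*arc1 + arc2, and every further arc is written base-128 with the high
# bit set on all bytes but the last.
def der_oid(*arcs):
    body = bytearray([40 * arcs[0] + arcs[1]])
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]
        while arc > 0x7F:
            arc >>= 7
            chunk.insert(0, (arc & 0x7F) | 0x80)
        body.extend(chunk)
    return bytes([0x06, len(body)]) + bytes(body)

print(der_oid(1, 2, 840, 113549).hex())  # 06 06 2a 86 48 86 f7 0d
```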
LDAP might have won over DAP, but it's still heavily based on the X.500-family of standards. Unlike SMTP (which is a completely different standard), LDAP is strongly based on DAP and other X.500 family standards.
Besides LDAP and X.509, you've got old standards that were very successful for a while. I'm perhaps a little bit too young for this, but I vaguely remember X.25 practically dominated large-scale networking, and for a while inter-network TCP/IP was often run over X.25. X.25 eventually disappeared because it was replaced by newer technology, but it didn't lose to any contemporary standard.
And if you're looking for new technology, CTAP (X.1278) is a part of the WebAuthn standard, which does seem to be winning.
I'm pretty sure there are other X-standards common in the telco industry, but even if we just look at the software industry, some ITU-T standards won out. This is not to say they weren't complex or that we didn't have simpler alternatives, but sometimes the complex standard does win out. The "worse is better" story is not always true.
The OP article is definitely wrong about this:
> “Of all the things OSI has produced, one could point to X.400 as being the most successful,
There are many OSI standards more successful than X.400, by sheer virtue of X.400 being an objective failure. But even putting that aside, there are X-family standards that are truly successful and ubiquitous. X.500 and X.509 are strong contenders, but the real winner is ASN.1 (the X.680/690 family, originally X.208/X.209).
ASN.1 is everywhere: it's obviously present in other ITU-T based standards like LDAP, X.509, CTAP and X.400, but it's also been widely adopted outside of ITU-T in the cryptography world: the PKCS standards (used for RSA, DSA, ECDSA, DH and ECDH key storage and signatures), Kerberos, S/MIME, TLS. It's also common in some widespread non-cryptographic protocols like SNMP and EMV (chip-and-PIN and contactless payment for credit cards). Even if you're using JOSE or COSE or SSH (which are not based on ASN.1), ASN.1-based PKCS standards are often still used for storing the keys. And this is completely ignoring all the telco standards. ASN.1 is everywhere.
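To make "ASN.1 is everywhere" concrete: DER is just nested tag-length-value records. A hand-rolled sketch encoding INTEGER 65537, the RSA public exponent you'll find in most certificates:

```python
# Minimal DER for a non-negative INTEGER: tag 0x02, then length, then the
# big-endian value with a spare high bit so it can't be read as negative.
def der_integer(n):
    body = n.to_bytes((n.bit_length() + 8) // 8, "big")
    return bytes([0x02, len(body)]) + body

print(der_integer(65537).hex())  # 02 03 01 00 01
```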
I'll note that while X.509 certificates are deployed widely on the Internet, they are not deployed in the manner the ITU intended. There is no global X.500 directory and Distinguished Names are just opaque identifiers that are used to help find issuers during chain building. That hardly counts as a win for the ITU in my book.
And in some usages the CN isn't even looked at.
X.25 and other ITU specs won out massively in aviation, and they are just recently starting to go through the slow painful process of moving to IP. We'll probably see it hanging around for at least another 15 years in that sector.
Worse is Better: https://web.archive.org/web/20040619155500/http://www.jwz.or...
H.261-264 video codecs, depending on your definition of "win".
And you could add any number of the big standards group-based standards that a great deal of blood, sweat, and tears were poured into. Not universally the case, but more true than false.
> In the end, internet swallowed telephones and everything is now VOIP.
Using ITU voice codecs!
As far as X.509 goes, I doubt many could explain it offhand, with BER, DER and others being subsets of ASN.1, and other obscura.
I’ve never been a fan
At the time, when there were so many different platforms still in existence, "easier to implement" was in fact a major component of "better".
It's not so much that SMTP won, it's that X.400 lost because it suuuuucked. Anyone who's ever had to work with that piece of s*t, as opposed to rhapsodising over what it could theoretically do, can tell you stories about this. It made Microsoft Mail and Lotus Notes look good in comparison. Notes actually did X.400, so imagine Notes but even suckier.
A lot of why the IETF standards won was vendors avoiding work even when they were paid for it.
Another factor was NIH in quite a few important places.
Yet another was that the ITU standards promoted the use of compilers that generate serialization code from a schema, which required actually having such a compiler. One common issue I found out while trying to rescue some old Unix OSI code was that the most popular option in use at many universities was apparently total crap.
In comparison, you could plop a grad student in front of telnet to experiment with SMTP. Nobody cared that it was shitty, because it was not supposed to be used for long. And then nobody wanted to invest in anything better.
The critical part of that quote is "Like a car with no brakes or seatbelts."
It doesn't seem to have worked out like that? You might as well say "like a car without a man walking in front of it with a red flag" https://en.wikipedia.org/wiki/Red_flag_traffic_laws
That's a partisan framing. Another framing could be that SMTP is the golf cart SMBs were asking for, not the car they were being sold.
Yes, the TCP/IP protocol stack beat the OSI protocol stack comprehensively, even down to four layers beating out seven unless you're so wedded to the Magic Number of Seven that you see Session as distinct from Application in the modern world, like how Newton was so wedded to seeing Seven Shades of Light in a spectrum he was sure to note indigo as distinct from violet in the rainbow.
(Presentation and Session are currently taught in terms of CSS and cookies in HTML and HTTP, respectively. When the web stack became Officially Part of the Officiously Official Network Stack is quite beyond me, and rather implies that you must confound the Web and the Internet in order to get the Correct Layering.)
https://computer.rip/2021-03-27-the-actual-osi-model.html - The Actual OSI Model
> I have said before that I believe that teaching modern students the OSI model as an approach to networking is a fundamental mistake that makes the concepts less clear rather than more. The major reason for this is simple: the OSI model was prescriptive of a specific network stack designed alongside it, and that network stack is not the one we use today. In fact, the TCP/IP stack we use today was intentionally designed differently from the OSI model for practical reasons.
> The OSI model is not some "ideal" model of networking, it is not a "gold standard" or even a "useful reference." It's the architecture of a specific network stack that failed to gain significant real-world adoption.