> In comparison with the previous Java service, the updated backend delivers a 40% increase in performance, along with improved scalability, security, and availability.
As is always the case with such rewrites, the big question is whether the improvements came from the choice of language or because they updated a crusty legacy codebase and fixed bugs/bottlenecks.
With the 90% reduction in memory consumption, I'd wager that most, if not all, of the performance improvement came from that. In fact, it is a little surprising that hardware utilization only dropped 50%.
Reduced memory consumption for cloud applications was apparently also the primary reason IBM was interested in Swift for a while. Most cloud applications apparently sit idle most of the time, so the number of clients you can multiplex on a single physical host is limited by memory consumption, not CPU throughput.
And Java, with the JIT and the GC, has horrible memory consumption.
IBM is a huge and quite balkanized company and I don't think there was ever a centralized push towards Swift outside some excited parties. With that, I would note that circa 2018 there was a DRAM shortage in the industry and people started thinking more about memory conservation in datacenter workloads.
The thing about DRAM is that it isn't SRAM; cost matters. You struggle to find deployment environments that have less than 1 GB of DRAM available per core, because at that point ~95% of the HW cost is typically CPU anyway. Shrinking that further is kind of pointless, so people don't do it. Hence, when utilizing 16 cores, you get at least 16 GB of DRAM that comes with them, whether you choose to use it or not. If you use only 10% of that memory by removing the garbage from the heap, then while lower seems better and all that, it's not necessarily any cheaper in actual memory spend if both fit within the same minimum 1 GB/core shape you can buy anyway. It might just underutilize the memory resources you paid for in a minimum-memory-per-CPU shape, which isn't necessarily a win. Utilizing the memory you bought isn't wasting it.
Each extra GB per core you add to your shape actually costs something, hence every GB/core that can be saved results in actual cost savings. But even then, each extra GB/core is usually ~5% of the CPU cost. Hence, even going from 10 GB/core (sort of a lot) to 1 GB/core only translates to savings in the ballpark of ~50% of the CPU cost. Since they did not mention how many cores these instances have, it's hard to know what GB/core was used before and after, and hence whether there were any real cost savings in memory at all, and if so, what the memory cost savings might have been relative to the CPU cost.
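To make that arithmetic concrete, here is a tiny sketch with purely illustrative numbers (the ~5% per GB/core figure is the assumption from above, not a quoted price):

    // Rough, hypothetical cost model: per-core CPU price normalized to 1.0,
    // each GB/core of DRAM assumed to cost ~5% of that.
    public class ShapeCostSketch {
        public static void main(String[] args) {
            double gbCostVsCpu = 0.05; // assumption: 1 GB/core ~ 5% of per-core CPU cost
            int gbBefore = 10;         // 10 GB/core shape
            int gbAfter = 1;           // 1 GB/core shape

            double savedVsCpu = (gbBefore - gbAfter) * gbCostVsCpu; // 0.45
            double totalBefore = 1.0 + gbBefore * gbCostVsCpu;      // 1.50
            double totalAfter = 1.0 + gbAfter * gbCostVsCpu;        // 1.05

            System.out.printf("memory saved ~ %.0f%% of the CPU cost%n", savedVsCpu * 100);
            System.out.printf("total HW cost per core: %.2f -> %.2f (CPU = 1.0)%n",
                    totalBefore, totalAfter);
        }
    }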
Interesting. If IBM was trying to solve for memory consumption, why do you think they picked Swift over alternatives that might also have achieved lower memory consumption?
Swift is at a good middle ground of performance, safety and ease of use.
It has higher-level ergonomics that something like Rust lacks (as much as I like Rust myself), doesn’t have many of the pitfalls of Go (error handling is much better, for example), and is relatively easy to pick up. It’s also in the same performance ballpark as Rust or C++.
It’s not perfect by any means; it has several issues, but it’s quickly becoming my preferred language as well for knocking out projects.
What are those issues, I wonder? I used it for one project and I'm not sure how deep the rabbit hole might be. Just wondering.
Which other sufficiently popular modern language that's more efficient than Java lacks a tracing GC?
Rust and Swift are pretty much the only two choices, and Rust is arguably much more of a pain in the ass for the average joe enterprise coder.
This is why serverless is taking off on the opposite end of the spectrum (and why it’s so cheap)
You can share memory not only at the machine level, but between different applications.
I'd typically agree with your comment but ...
Given that they also experienced a 90% reduction in memory usage (presumably from Java's GC vs. Swift's ARC memory management), it seems more likely the gains are in fact from the difference in languages.
The JVM tends to use as much memory as it can for performance reasons. It is not a reliable indicator of how much memory it actually needs. Why spend resources on clearing memory if there's still unused memory left?
If memory is an issue, you can set a limit and the JVM will probably still work fine.
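For example (a minimal sketch; the class name is made up), the cap is just a launch flag such as -Xmx512m, and the runtime reports what it was given:

    // Run with e.g.: java -Xmx512m HeapCapSketch
    public class HeapCapSketch {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long max = rt.maxMemory();         // upper bound the GC will work within (-Xmx)
            long committed = rt.totalMemory(); // what the JVM has actually reserved so far
            System.out.printf("max heap: %d MiB, committed: %d MiB%n",
                    max >> 20, committed >> 20);
        }
    }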
Your comment is definitely true. However, if you study GC performance academic papers over the past three or four decades, they pretty much conclude that GC overhead, amortized, can be on par with manual alloc/free [1], but usually the unwritten assumption is that they have unbounded memory for that to be true. If you study how much memory you need in practice to not suffer a performance loss on an amortized basis, you'd arrive at 2-3x, so I'd claim it is fair to assume Java needs 2-3x as much memory as Swift/C++/Rust to run comfortably.
You can actually witness this to some degree on Android vs iPhone. iPhone comfortably runs with 4GB RAM and Android would be slow as a dog.
[1]: I don't dispute the results, but I'd also like to note that as a researcher in Computer Science in that domain, you were probably looking to prove how great GC is, not the opposite.
Android doesn’t even run the JVM.
> iPhone comfortably runs with 4GB RAM and Android would be slow as a dog.
This has nothing to do with RAM. Without load, Android wouldn’t even push 2GB; it would still be slower than an iPhone because of different trade-offs they make in architecture.
The point was GC cost in general, not which Java/JVM implementation you choose. Try comparing two Androids with the same chipset at 4GB vs 8GB RAM.
Anyhow, that was just an anecdotal unscientific experiment to give you some idea--obviously they are two different codebases. The literature is there to quantify the matter as I noted.
Android for a very long time lacked a quality JIT, AOT and GC implementation, and then each device is a snowflake of whatever changes each OEM has done to the device.
Unless one knows exactly what ART version is installed on the device, what build options from AOSP were used on the firmware image, and what is the mainline version deployed via PlayStore (if on Android 12 or later), there are zero conclusions that one can take out of it.
Also, iOS applications tend to just die when there is no more memory to make use of, due to the lack of paging and to memory fragmentation.
If it were such an easy problem to fix, don’t you think they would have done so rather than rewriting in Swift?
Having been in development for a long time, no.
Frankly, it just takes some motivated senior devs and the tantalizing ability to put out the OP blog post, and you've got something management will sign off on. Bonus points: you get to talk about how amazing it was to use Apple tech to get the job done.
I don't think they seriously approached this because the article only mentioned tuning G1GC. The fact is, they should have been talking about ZGC, AppCDS, and probably Graal if pause times and startup times were really that big a problem for them. Heck, even CRaC should have been mentioned.
It is not hard to get a JVM to start up in sub-second time. Here's one framework where that's literally the glossy print on the front page. [1]
Yep, resume-driven development. I remember at a previous company a small group of people pushed a Go rewrite to speed everything up. The serious speed improvements came from re-architecting (elimination of a heavy custom framework, using a message queue instead of handling requests synchronously etc). They would have been better off fixing the original system so that everything could benefit from the improvements, not just the tiny bits that they carved off.
Then the next annual report talked about improved scalability because of this amazing technology from Google.
Resume-driven development would be using some random-ass Java framework to pad a resume. Apple using Apple technologies seems more like a corporate mandate.
If Apple does not dogfood their own technology for production systems, what chance do they have of telling 3rd-party users that Swift is ready for prime time?
Delving into Java arcana instead of getting first-hand experience developing in Swift would've been a great opportunity wasted to improve Swift.
I agree if this was a brand new system.
However, they chose to replace an existing system with Swift. The "arcana" I mentioned is startup options that are easily found and safe to apply. It's about as magical as "-O2" is to C++.
Sure, this may have been the right choice if the reason was to exercise Swift. However, they shouldn't pretend there was nothing to be done to make Java better. The steps I described are maybe 1 or 2 days' worth of dev work. How much time do you think the rewrite took?
Apple has explicitly stated that they want to try to move as much of their stuff to Swift as possible.
I’m sure you’re right; there must’ve been ways to improve the existing deployment. But if they wanted to reduce resource usage, and doing it in Swift aligned with some other company goal, it would make sense that they might just go straight to this.
Amazon's CEO was once asked about new competitors trying to build cloud infrastructure fast. His reply was, "You cannot compress experience."
Saving a few weeks or months by learning 3rd-party technology instead of applying and improving first-party technology would be amateurish.
> However, they shouldn't pretend there was nothing to be done to make Java better.
This seems like a constant refrain: that Apple, or anyone choosing their own tech over someone else's, owes an absolutely fair shot to the stuff they didn't choose. This is simply not the way the world works.
Yes, there are endless stories of companies spending enormous resources to optimize their Java stack, even up to working with the core Java team at Oracle to improve JVM innards. But those companies are just (albeit heavy) users of the core technology rather than developers of a competing one. Apple is not one of those users; they are developers.
> Yes, there are endless stories of companies spending enormous resources to optimize their Java stack
And not what I'm advocating for. Sometimes rewrites are necessary.
What I'm advocating is exercising a few well-documented and fairly well-known JVM flags that aren't particularly fiddly.
The JVM does have endless knobs, most of which you shouldn't touch; instead, you should let the heuristics do their work. The flags I'm mentioning are not that.
Swapping G1GC for ZGC, for example, would have resolved one of their major complaints about GC impact under load. If the live set isn't near the max heap size, pause times are sub-millisecond.
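As a minimal sketch of what that swap looks like in practice (assuming a recent JDK where ZGC is production-ready; the class name and numbers are made up), the code stays identical and only the launch flags change, and -Xlog:gc lets you compare the reported pause times:

    // java -XX:+UseG1GC -Xlog:gc GcChurnSketch
    // java -XX:+UseZGC  -Xlog:gc GcChurnSketch
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class GcChurnSketch {
        public static void main(String[] args) {
            Deque<byte[]> live = new ArrayDeque<>(); // bounded live set
            long worstGapNanos = 0;
            long prev = System.nanoTime();
            for (int i = 0; i < 2_000_000; i++) {
                live.addLast(new byte[1024]);          // allocate...
                if (live.size() > 100_000) {
                    live.pollFirst();                  // ...and churn garbage
                }
                long now = System.nanoTime();
                worstGapNanos = Math.max(worstGapNanos, now - prev); // crude stall proxy
                prev = now;
            }
            System.out.printf("worst observed gap: %.2f ms%n", worstGapNanos / 1e6);
        }
    }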
> This seems like a constant refrain: that Apple, or anyone choosing their own tech over someone else's, owes an absolutely fair shot to the stuff they didn't choose. This is simply not the way the world works.
The reason for this refrain is that Java is a very well-known technology and easy to hire for (and one which the Amazon you cite uses heavily). And Apple had already adopted Java and written a product with it (I suspect they have several).
I would not be saying any of this if the article were a generic benchmark and comparison of Java with Swift. I would not fault Apple for saying "we are rewriting in Swift to minimize the number of languages used internally and improve the Swift ecosystem".
I'm taking umbrage at them trying to sell this as an absolute necessity because of performance constraints while making questionable statements about the cause.
And, heck, the need to tweak some flags would be a valid thing to call out in the article: "we got the performance we wanted with the default compiler options of Swift; to achieve the same thing with Java requires multiple changes from the default settings." I personally don't find that compelling, but it's honest and would sway someone who wants something that "just works" without fiddling.
I remember the days when Apple developed their own JVM, ported WebObjects from Objective-C to Java, and even had it as the main application language for a little while, uncertain whether the Object Pascal/C++-educated developers in their ecosystem would ever bother to learn Objective-C when transitioning to OS X.
Nothing at IBM is ever straightforward.
Decades ago, I was working with three IBM employees on a client project. During a discussion about a backup solution, one of them suggested that we migrate all customer data into DB2 on a daily basis and then back up the DB2 database.
I asked why we couldn't just back up the client's existing database directly, skipping the migration step. The response? "Because we commercially want to sell DB2."
You tune for what you have/can get. Machines with less memory tend to have slower CPUs. That may make it impossible to tune for (close to) 100% CPU and memory usage.
And yes, Apple is huge and rich, so they can get fast machines with less memory, but they likely have other tasks with different requirements they want to run on the same hardware.
No one gets promoted fixing bugs. Rewriting an entire system is a great way to achieve that.
But then you don’t get promo and “fun” work!
The typical rule of thumb is that getting good performance out of tracing GC requires doubling your memory usage, so a 90% reduction suggests that they made significant improvements on top of the language switch.
The 90% reduction doesn't necessarily have to be related only to GC.
In my experience Java is a memory hog even compared to other garbage collected languages (that's my main gripe about the language).
I think a good part of the reason is that, if you exclude primitive types, almost everything in Java is a heap-allocated object, and Java objects are fairly "fat": every single instance has a header of between 96 and 128 bits on 64-bit architectures [1]. That's... a lot. Just by making the headers smaller (the topic of that link) you can get a 20% decrease in heap usage and improvements in CPU and GC time [2].
My hope is that once value classes arrive [3][4], and libraries start to use them, we will see a substantial decrease in heap usage in the average Java app.
[1] https://openjdk.org/jeps/450
[2] https://openjdk.org/jeps/519
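If you want to see that header overhead for yourself, the OpenJDK JOL tool prints it per class. A sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath:

    import org.openjdk.jol.info.ClassLayout;

    public class HeaderSketch {
        // 8 bytes of actual payload...
        static final class Point { int x; int y; }

        public static void main(String[] args) {
            // ...but the printed layout also shows the 12-16 byte header
            // (96-128 bits) plus alignment padding on top of it.
            System.out.println(ClassLayout.parseClass(Point.class).toPrintable());
        }
    }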
The Java GC approach is somewhat unique compared to other languages. There are multiple GCs, and pretty much all of them are moving collectors. That means the JVM fairly rarely ends up freeing memory that it has claimed. A big spike will mean that it holds onto the spike's amount of memory.
Many other GCed languages, such as Swift, CPython, and Go, do not use a moving collector. Instead, they allocate and pin memory and free it when it is no longer in use.
The benefit of the JVM approach is that heap allocations are wicked fast on pretty much all its collectors. Generally, to allocate, it's a check to see if space is available and a pointer bump. For the other languages, you are bound to end up using a skiplist and/or arena allocator provided by your malloc implementation. Roughly O(log(n)) vs O(1) in performance terms.
Don't get me wrong, the object header does eat a fair chunk of memory. Roughly double what another language will take. However, a lot of people confuse the memory which the JVM has claimed from the OS (and is thus reported by the OS) with the memory the JVM is actively using. 2 different things.
It just so happens that for moving collectors like the ones the JVM typically uses, more reserved memory means fewer garbage collections and less time spent garbage collecting.
A moving garbage collector is not that rare; other languages have one. I think C# has one, for example, and OCaml and SBCL too.
I know about the trade-offs that a moving GC makes, but the rule of thumb is about double the memory usage, not ten times more, as a 90% reduction would seem to imply.
> Many other GCed languages, such as Swift
Swift is not garbage collected; it uses reference counting. So memory there is freed immediately when it is no longer referenced.
Reference counting is a form of garbage collection.
People really should learn more about CS when discussing these matters.
Chapter 5, https://gchandbook.org/contents.html
If Java's 128-bit object headers are already fairly fat, then what adjective applies to CPython's? An empty list (`[]`) is about a whole cache line. Trivial Python objects are barely smaller.
I remember the 2x rule from 20 years ago - do you know if things have changed? If locality is more important now, tracing GC might never be as performant as reference counting. Either you use 2x the memory and thrash your cache, or you use less and spend too much CPU time collecting.
Java has had AOT compilation for a while, so traditional GC and its massive overhead are no longer a strict necessity. Even AOT Java will probably stay behind Swift or any other natively compiled language in terms of memory usage, but it shouldn't be that drastic.
As for performance and locality, Java's on-the-fly pointer reordering/compression can give it an edge over even some compiled languages in certain algorithms. Hard to say if that's relevant for whatever web framework Apple based their service on, but I wouldn't discount Java's locality optimisations just because it uses a GC.
"For a while" means since around 2000, although toolchains like Excelsior JET and WebSphere Real Time, among others, were only available to companies that cared enough to pay for AOT compilers and JIT caches.
Nowadays, to add to your comment, all the major free-beer implementations (OpenJDK, OpenJ9, GraalVM, and their ART cousin) do AOT and JIT caches.
Even without Valhalla, there are quite a few tricks possible with Panama; one can manually create C-like struct memory layouts.
Yes, it is a lot of boilerplate; however, one can get around the boilerplate with AI (maybe), or just write the C declarations and point jextract at them.
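For the hand-rolled route, here is a minimal sketch (java.lang.foreign, finalized in JDK 22) of laying out the equivalent of a C struct { int x; int y; } in native memory, with no per-instance object header:

    import java.lang.foreign.Arena;
    import java.lang.foreign.MemoryLayout;
    import java.lang.foreign.MemorySegment;
    import java.lang.foreign.ValueLayout;

    public class StructLayoutSketch {
        public static void main(String[] args) {
            // struct Point { int x; int y; }
            MemoryLayout point = MemoryLayout.structLayout(
                    ValueLayout.JAVA_INT.withName("x"),
                    ValueLayout.JAVA_INT.withName("y"));
            long yOffset = point.byteOffset(MemoryLayout.PathElement.groupElement("y"));

            try (Arena arena = Arena.ofConfined()) {
                MemorySegment p = arena.allocate(point); // 8 contiguous bytes, no header
                p.set(ValueLayout.JAVA_INT, 0, 1);       // x = 1
                p.set(ValueLayout.JAVA_INT, yOffset, 2); // y = 2
                System.out.println(p.get(ValueLayout.JAVA_INT, 0) + ", "
                        + p.get(ValueLayout.JAVA_INT, yOffset));
            }
        }
    }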
> Java has had AOT compilation for a while, so traditional GC and its massive overhead are no longer a strict necessity.
You mean it does escape analysis and stack-allocates what it can? That would definitely help, but not eliminate the GC. Or are you thinking of something else?
Thinking about it more, I remember that Java also has some performance-hostile design decisions baked in (e.g. almost everything's an Object, arrays of objects aren't packed, dynamic dispatch everywhere). Swift doesn't have that legacy to deal with.
Java also has a lot of culture around making optimization-resistant code, so there's the question of whether you're talking about the language itself or various widespread libraries, especially if they're old enough to have patterns designed around aesthetics or now-moot language limitations rather than performance.
I’ve replaced Java code with Python a few times, and each time, even though we did it for maintenance reasons (more Python devs available), we saw memory usage more than halved while performance at least doubled, because the new code used simpler functions and structures. Java has a far more advanced GC and JIT, but at some point the weight of code and indirection wins out.
That's interesting. I did a line by line rewrite of a large Django route to Quarkus and it was 10x faster, not using async or anything.
That’s why I said “culture” - by all rights the JVM should win that competition. I wrote a bit more about the most recent one in a sibling comment but I’d summarize it as “the JVM can’t stop an enterprise Java developer”.
Enterprise developers and architects would be the same, regardless of the programming language.
I am old enough to have seen enterprise C and C++ developers.
Where do you think stuff like DCE, CORBA, and DCOM came from?
Also, many of the things people blame Java for were born as Smalltalk, Objective-C, and C++ frameworks before being rewritten in Java.
Since we are in an Apple discussion thread, here are some Objective-C identifiers from Apple frameworks:
https://github.com/Quotation/LongestCocoa
I also advise getting hold of the original WebObjects documentation in Objective-C, before its port to Java.
> Enterprise developers and architects would be the same, regardless of the programming language.
This is true to some extent, but the reason I focused on culture is that there are patterns which people learn and pass on differently in each language. For example, enterprise COBOL programmers didn’t duplicate data in memory to the same extent, not only due to hardware constraints but also because there wasn’t a culture telling every young programmer that this was the exemplar style to follow.
I totally agree about C++ having had the same problems, but most of the enterprise folks jumped to Java or C#, which made it feel like the community of people writing C++ improved its ratio of performance-sensitive developers. Python had a bit of that, especially in the 2000s, but a lot of the Very Serious Architects didn’t like the language and so they didn’t influence the community anywhere near as much.
I’m not saying everyone involved is terrible; I just find it interesting how we like to talk about software engineering when there are a lot of major factors that are basically things people want to believe are good.
> I’ve replaced Java code with Python a few times ... while performance at least doubled
Are you saying you made Python code run twice as fast as Java code? I have written lots of both. I really struggle to make Python go fast. What am I doing wrong?
More precisely, when we deployed the new microservice, it used less than half as much CPU to process more requests per second.
This is not “Java slow, Python fast” – I expected it to be the reverse – but rather that the developers who cranked out a messy Spring app somehow managed to cancel out all of the work the JVM developers have done without doing anything obviously wrong. There wasn’t a single bottleneck, just death by a thousand cuts with data access patterns, indirection, very deep stack traces, etc.
I have no doubt that there are people here who could’ve rewritten it in better Java for significant wins but the goal with the rewrite was to align a project originally written by a departed team with a larger suite of Python code for the rest of the app, and to deal with various correctness issues. Using Pydantic for the data models not only reduced the amount of code significantly, it flushed out a bunch of inconsistency in the input validation and that’s what I’d been looking for along with reusing our common code libraries for consistency. The performance win was just gravy and, to be clear, I don’t think that’s saying anything about the JVM other than that it does not yet have an optimization to call an LLM to make code less enterprise-y.
Okay, I understand your point. Basically, you rewrote an awful (clickbait-worthy) enterprisey Java web app into a reasonable, maintainable Python web app. I am sympathetic. Yes, I agree: I have seen, sadly, far more trashy Java enterprisey apps than not. Why? I don't know. The incentives are not well-aligned.
As a counterpoint: look at Crazy Bob Lee's (R.I.P.) Google Guice, Norman Maurer's Netty.IO, or Tim Fox's Vert.x: all of them are examples of how to write ultra-lean, low-level, high-performance modern Java apps... but they are frequently overlooked in favor of hiring cheap, low-skill Java devs to write "yet another Spring app".
IMO the “Spring fever” is the most horrible thing that has happened to Java. There genuinely are developers and companies that reduce the whole language and its ecosystem to Spring. This is just sad. I’m glad that I have been working 15+ years with Java and never touched any Spring stuff whatsoever.
> but they are frequently overlooked in favor of hiring cheap, low-skill Java devs to write "yet another Spring app".
Yeah, that’s why I labeled it culture, since it was totally a business failure, with contracting companies basically doing the "why hire these expensive people when we get paid the same either way?" thing. No point in ranting about the language; it can’t fix the business, but unfortunately there’s a ton of inertia around that kind of development and a lot of people have been trained that way. I imagine this must be very frustrating for the Java team at Oracle, knowing that their hard work is going to be buried by half of the users.
It all depends, but one major advantage of the way the JVM GCs is that related memory will tend to be colocated. This is particularly true of the serial, parallel, and G1GC collectors.
Let's say you have an object graph that looks like A -> B -> C. Even if the allocations of A/B/C happened at very different times, with other allocations in between, the next time the GC runs, as it traverses the graph it will see and place in memory [A, B, C], assuming A is still live. That means that even if memory originally looks something like [A, D, B, Q, R, S, T, C], the act of collecting and compacting has a tendency to colocate related objects.
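One way to watch this happen is the JOL tool's address dump (a sketch; it assumes the org.openjdk.jol:jol-core dependency, and the exact movement depends on which collector is in use):

    import org.openjdk.jol.info.GraphLayout;

    public class CompactionSketch {
        static final class C { long payload; }
        static final class B { C c; }
        static final class A { B b; }

        public static void main(String[] args) {
            // Allocate A, B, C with unrelated garbage in between so they start out scattered.
            A a = new A();
            byte[] junk1 = new byte[1 << 20];
            a.b = new B();
            byte[] junk2 = new byte[1 << 20];
            a.b.c = new C();

            System.out.println(GraphLayout.parseInstance(a).toPrintable());
            System.gc(); // with a moving collector, the survivors tend to get copied together
            System.out.println(GraphLayout.parseInstance(a).toPrintable());
        }
    }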
That's the theory -- a compacting collector will reduce fragmentation and can put linked objects next to each other. On the other hand, a reference count lives in the object, so you're likely using that cache line already when you change it.
I don't know which of these is more important on a modern machine, and it probably depends upon the workload.
The problem is memory colocation, not RC management. But I agree, it'll likely be workload-dependent. One major positive aspect of RC is that the execution costs are very predictable: there's little external state that can negatively impact performance (like the GC currently running).
The downside is fragmentation and the CPU time required for memory management. If you have an A -> B -> C chain where A is the only owner of the B and B is the only owner of C, then when A hits 0, it has to do 2 pointer hops to deallocate B and then deallocate C (plus arena management for the deallocs).
One of the big benefits of JVM moving style collectors is that when A dies, the collector does not need to visit B or C to deallocate them. The collector only visits and moves live memory.
> The downside is fragmentation and the CPU time required for memory management. If you have an A -> B -> C chain where A is the only owner of the B and B is the only owner of C, then when A hits 0, it has to do 2 pointer hops to deallocate B and then deallocate C (plus arena management for the deallocs).
I suspect this puts greater emphasis on functionality like value types and flexibility in compositionally creating objects. You can trend toward larger objects rather than nesting inner objects for functionality. For example, you can use tagged unions to represent optionality rather than pointers.
The cost of deep A->B->C relationships in Java comes during collections, which still default to being stop-the-world. The difference is that a reference-counting GC will evaluate these chains while removing objects, while a tracing GC will evaluate live objects.
So, garbage collection is expensive for ref-counting if you are creating large transient datasets, and expensive for a tracing GC if you are retaining large datasets.
That is just the GC part. Another big difference is reference types (Java) vs value types (Swift).
The Java runtime is a beast. The fact that another runtime is even capable of doing something similar is impressive, never mind that it might be better. Even being on par makes it interesting enough for me to maybe try it on my own.
The post notes that the user-facing app was "introduced in the fall of 2024," so presumably the services aren't that legacy.
You can learn a lot when writing V2 of a thing though. You've got lots of real world experience about what worked and what didn't work with the previous design, so lots of opportunity for making data structures that suit the problem more closely and so forth.
But did they write the backend from scratch or was it based on a number of “com.apple.libs.backend-core…” that tend to bring in repeating logic and facilities they have in all their servers? Or was it a PoC they promoted to MVP and now they’re taking time to rewrite “properly” with support for whatever features are coming next?
My $0.02 is that Java not having value types (yet), while Swift has, is a large reason for the efficiency gains.
As a C# dev, I guess I've just taken it for granted that we have value types. Learned something new today (that Java apparently does not).
It does for primitives.
For user-defined stuff we’ve recently gained records, which are a step in that direction, and a full solution is coming.
What about structs?
Basically no.
Even records are not value-based types, but rather classes limited to value-like semantics: e.g. they can't extend other classes, are expected to behave immutably by default (modification creates a new record instance), and the like.
The JVM theoretically can perform escape analysis to see that a record behaves a certain way and can be stack allocated, or embedded within the storage of an aggregating object rather than having a separate heap allocation.
A C# struct gets boxed to adapt it to certain things like an Object state parameter on a call. The JVM theoretically would just notice this possibility and decide to make the record heap-allocated from the start.
I say theoretically because I have not tracked if this feature is implemented yet, or what the limitations are if it has been.
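For reference, the record surface looks like this (a minimal sketch): state is final, "modification" means constructing a new instance, and equals/hashCode/toString come from the components. It is still a reference type today, though, so where it actually lives is up to the JIT until Valhalla's value classes land.

    public record Money(String currency, long cents) {
        // No mutation: "changing" a Money produces a new instance.
        Money plus(long moreCents) {
            return new Money(currency, cents + moreCents);
        }
    }
    // usage: new Money("USD", 100).plus(50) -> Money[currency=USD, cents=150]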
Currently only via Panama, creating the memory layout manually in native memory segments.
Valhalla is supposed to bring language-level support; the biggest issue is how to introduce value types without, in a sense, breaking the ABI of everything that is in Maven Central.
Similar to the whole async/await engineering effort in the .NET Framework: how to introduce it without adding new MSIL bytecodes or requiring new CLR capabilities.
I'm not sure about the semantics of structs in C#.
What Java is getting in the future is immutable data values where the reference is the value.
When you have something like

    class Foo { int a; int b; }
    var c = new Foo();

in Java, effectively the representation of `c` is a reference which ultimately points to the heap storage locations of `a, b`. In C++ terms, you could think of the interactions as being `c->b`.
When values land, the representation of `c` can instead be (the JVM gets to decide; it could keep the old definition for various performance reasons) something like [type, a, b]. Or, in C++ terms, the memory layout can be analogous to the following:

    struct Foo { int a; int b; };
    struct Foo c;
    c.a = 1;
    c.b = 2;
This seems like C# `record struct`
https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
However, you can make use of Panama to work around that, even if it isn't the best experience in the world.
Create C-like structs, in terms of memory layout segments, and access them via the Panama APIs.
I would have guessed it's boxed primitives.
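For the curious, this is the overhead in question (a small sketch): a List<Integer> stores a reference per element, each pointing at a separate header-carrying Integer object, while an int[] is just packed 4-byte slots.

    import java.util.ArrayList;
    import java.util.List;

    public class BoxingSketch {
        public static void main(String[] args) {
            int n = 1_000_000;
            List<Integer> boxed = new ArrayList<>(n); // reference + Integer object per element
            int[] packed = new int[n];                // 4 contiguous bytes per element
            for (int i = 0; i < n; i++) {
                boxed.add(i);   // autoboxes; beyond the small Integer cache each add allocates
                packed[i] = i;  // no allocation at all
            }
            System.out.println(boxed.size() + " vs " + packed.length);
        }
    }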
Is that still a thing in 2025? There are so many third party libraries that offer primitive collections. Example: https://github.com/carrotsearch/hppc
If you're not specifically concerned about memory use, why would you use a third-party library?
Agree, this is almost always where the benefits come from. You get to write v2 of the software with v1 to learn from.
Yes! The post would have been much more informative if it had done an in-depth analysis of where the performance gain comes from. But Apple being Apple, I don't think they'll ever want to expose details of their internal systems, and we can probably only get such hand-wavy statements.
I suspect that didn’t fit into the goal of the blog post.
I don’t think it’s meant to be a postmortem on figuring out what was going on and a solution, but more a mini white paper to point out Swift can be used on the server and has some nice benefits there.
So the exact problems with the Java implementation don’t matter past “it’s heavy and slow to start up, even though it does a good job”.
Sure, maybe you can get money to have some businesses try out rewriting their line-of-business software in the same language versus in a different language and get some results.
My expectation is that if you put the work in you can get actual hard numbers, which will promptly be ignored by every future person asking the same "question" with the same implied answer.
If the "just rewrite it and it'll be better" people were as right as they often seem to believe they are, a big mystery is JWZ's "Cascade of Attention-Deficit Teenagers" phenomenon. In this scenario the same software is rewritten, over, and over, and over, yet it doesn't get faster and doesn't even fix many serious bugs.
> If the "just rewrite it and it'll be better" people were as right as they often seem to believe
Generally speaking, technological progress over thousands of years serves to validate this. Sure, in the short term we might see some slippage depending on talent/expertise, but with education and updated application of learnings, it's generally true.
For others that hadn’t heard of CADT either: https://www.jwz.org/doc/cadt.html
I confess to having been part of the cascade at various parts of my career.
Imagine what Rust or Go could have achieved.
Go is similar to Swift when it comes to mandatory costly abstractions.
It’s only Rust (or C++, but unsafe) that has mostly zero-cost abstractions.
Swift, Rust, and C++ all share the same underlying techniques for implementing zero-cost abstractions (primarily, fully specialized generics). The distinction in Swift's case is that generics can also be executed without specialization (which is what allows generic methods to be called over a stable ABI boundary).
Swift and Rust also allow their protocols (traits, in Rust's case) to be erased and dispatched dynamically (dyn in Rust, any in Swift). But in both languages that's more of a "when you need it" thing; generics are the preferred tool.
To an approximation, but the stdlib and libraries will have a bias. In practice, abstractions in Rust and C++ are more often actually zero-cost than in Go or Swift.
This is not a bad thing, I was just pointing out that Go doesn't have a performance advantage over Swift.
Swift has them too now (non-Copyable types).
> It’s only Rust (or C++, but unsafe) that has mostly zero-cost abstractions.
This just isn't true. It's good marketing hype for Rust, but any language with an optimizing compiler (JIT or AOT) has plenty of "zero-cost abstractions."
I always love seeing such comments. On the JVM you use crap like Spring and over-engineer everything: 20 types, interfaces, and objects to keep a single string in memory.
The JVM also likes memory, but it can be tailored to look okay-ish, though still worse than the competition.
And I'm 100% sure you can do the same in Swift.
It's not technical, it's cultural. Different community conventions.
Sure. And you can also write beautiful code in PHP, or shit code in Java.
It’s the history, the standard libs, and all the legacy tutorials that don’t get erased from the net.
Swift's limitations around reflection actually make it surprisingly difficult to create a typical Java-style mess with IOC containers and so forth.
Ever heard of Swift macros?
Do you know where Java EE comes from?
It started as an Objective-C framework, a language which Swift has full interoperability with.
https://en.wikipedia.org/wiki/Distributed_Objects_Everywhere
> Ever heard of Swift macros?
Yes, having lived on the daily build bleeding edge of Swift for several years, including while macros were being developed, I have indeed heard of them.
> Do you know where Java EE comes from?
Fully aware of the history.
The point stands: it is substantially harder with Swift to make the kind of Spring-style mess that JVM apps typically become (of course there are exceptions; I typically suggest people write Java like Martin Thompson instead of Martin Fowler). Furthermore, _people just don’t do it_. I imagine you could count the percentage of Swift server apps using an IOC container on no hands.
First, the number of Swift server apps has to grow to a point that is actually relevant enough for enterprise architects to pay attention.
Then I can start counting.
That is the reason I used percentages rather than absolute numbers. For Java, every single app I’ve ever seen has been a massive shit show. For Swift, 0 of the 40-50 I’ve seen are.
For Swift to be a shit show on Fortune 500 Linux and Windows servers, someone has to start shipping Swift server apps in volume, regardless of percentages.
Any language can be a shit show when enough people beyond the adoption curve write code in it, from the leetcoders with MIT degrees to the six-week bootcamp learners shipping single functions as a package, and the architects designing future-proof architectures on whiteboards with SAFe.
When Swift finally crosses into this world, then we can compare how well it has survived world-scale adoption, beyond the cozy Apple ecosystem.
Same - in every single language with an incompetent team.