Somehow, with 12GB of RAM, I can't get my iPhone 17 Pro to keep more than a few Safari tabs open without having them refresh when I come back from an app or two, and it makes me want to throw my phone across the train (where the internet often cuts out!).
A lot of software has been squandering the massive hardware gains that have been made. I hope this changes when it becomes a lot harder to throw hardware at the problem.
I also wonder what this means for smartphone-esque devices like the Switch 2. If this goes on long enough I won't be surprised if they release a 'lite' model with less RAM/storage and bifurcate their console capabilities, worse than what they did with 3DS > 2DS.
It's really nuts how much RAM and CPU have been squandered. In 1990, I worked on a networked graphical browser for nuclear plants. Sun workstations had 32 MB of memory. We had a requirement that the infographic screens paint in less than 2 seconds. It was a challenge but doable. The crazy thing is that computers now have 1000x the memory and something like 10,000x the CPU, and it would still be a challenge to paint screens in 2 seconds.
Yes, the web was a mistake; as a distributed document reading platform it's a decent first attempt, but as an application platform it is miserable. I'm working on a colleague's vibe-coded app right now and it's just piles and piles of code to do something fairly simple; long builds and hundreds of dependencies... most of which are because HTML is shitty, doesn't have the GUI controls that people need built in, and all of it has to be worked around as a patch after the fact. Even doing something as simple as a sortable-and-filterable table requires thousands of lines of JS when it should've just been a few extra attributes on an HTML6 <table> by now.
Back in the day with PHP things were much more understandable; it's somehow gotten objectively worse. And now, most desktop apps are their own contained browser. Somehow worse than Windows 98 .hta apps, too, where at least the system browser served up a local app; now we have ten copies of Electron running, bringing my relatively new MacBook to a crawl. Everything sucks and is way less fun than it used to be.
We have many, many examples of GUI toolkits that are extremely fast and lightweight. Isn't it time to throw the browser away, stop abusing HTML to make applications, and design something fit for purpose?
> the web was a mistake;
It's not "the web" or HTML, CSS, or JavaScript. That's all instant in vanilla form. Any media in today's quality will of course take time to download but, once cached, is also instant. None of the UX "requires" the crap that makes it slow, certainly not thousands of lines to make a table sortable and filterable. I could do that in IE6 without breaking a sweat. It's way easier, and faster, now. It's just people being lazy in how they do it, apparently now just accepting whatever Claude gave them as "best in show".
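For illustration, here is a minimal vanilla-JS sketch of sortable-and-filterable table logic, a few dozen lines rather than thousands. The data and helper names are invented for this example; wiring them to a real <table> is just a handful of addEventListener calls on the header cells and a text input.

```javascript
// Sort an array of row objects by a column key; ascending by default.
function sortRows(rows, key, ascending = true) {
  const dir = ascending ? 1 : -1;
  return [...rows].sort((a, b) => {
    const x = a[key], y = b[key];
    // Numeric compare when both values are numbers, string compare otherwise.
    if (typeof x === "number" && typeof y === "number") return (x - y) * dir;
    return String(x).localeCompare(String(y)) * dir;
  });
}

// Keep rows where any cell contains the query (case-insensitive).
function filterRows(rows, query) {
  const q = query.toLowerCase();
  return rows.filter(row =>
    Object.values(row).some(v => String(v).toLowerCase().includes(q))
  );
}

const data = [
  { name: "Ada", age: 36 },
  { name: "Grace", age: 45 },
  { name: "Alan", age: 41 },
];

console.log(sortRows(data, "age").map(r => r.name));  // ["Ada", "Alan", "Grace"]
console.log(filterRows(data, "gra").map(r => r.name)); // ["Grace"]
```

Rendering is then just regenerating the <tbody> rows from the returned array whenever a header is clicked or the filter input changes.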
> Isn't it time to throw the browser away, stop abusing HTML to make applications, and design something fit for purpose?
Not going to happen until gui frameworks are as comfortable and easy to set up and use as html. Entry barrier and ergonomics are among the biggest deciding factors of winning technologies.
Man, you never used Delphi or Lazarus then. That was comfortable and easy. Web by comparison is just a jarring mess of unfounded complexity.
There are cross platform concerns as well. If the option is to build 3-4 separate apps in different languages and with different UI toolkits to support all the major devices and operating systems, or use the web and be 80% there in terms of basic functionality, and also have better branding, I think the choice is not surprising.
In line with "the web was a mistake" I think the idea that you can create cross platform software is an equally big mistake.
You can do the core functionality of your product as cross platform, to some extent, but once you hit the interaction with the OS and especially the UI libraries of the OS, I think you'd get better software if you just accept that you'll need to write multiple applications.
We see this on mobile, where there are really just two target platforms, yet companies don't even want to do that.
The choice isn't surprising, in a world where companies are more concerned with cost savings and branding than with creating good products.
>You can do the core functionality of your product as cross platform, to some extent, but once you hit the interaction with the OS and especially the UI libraries of the OS, I think you'd get better software if you just accept that you'll need to write multiple applications.
Or you can use a VM, which is essentially what a modern browser is anyway. I wrote and maintained a Java app for many years with seamless cross platform development. The browser is the right architecture. It's the implementation that's painful, mostly for historical reasons.
But using a browser (or a VM) buys into the fallacy that your customers across different platforms (Windows, Mac, etc) want the same product. They’re already distinguished by choosing a different platform! They have different aesthetics, different usability expectations, different priorities around accessibility and discoverability. You can produce an application (or web app) that is mediocre for all of them, but to provide a good product requires taking advantage of these distinctions — a good application will be different for different platforms, whether or not the toolkit is different.
I've only done single-platform GUI work (Python), but I'd guess this is stuff that is ripe for transpiling, since a lot of GUI code is just reusing the same boilerplate everyone uses to get the same UI patterns everyone uses. Like, if I make something in tkinter, it seems it should be pretty straightforward to write a tool that can translate all my function calls, as I've structured them, into a chunk of Swift that would draw the same size window, same buttons, etc.
We get into transpiling and we essentially start to rebuild yet another cross platform framework. Starts with "read this filetype and turn it into this layout" and it ends up with "we'll make sure this can deploy on X,Y,Z,W..."
It'd be nice if companies could just play nice and agree on a standard interface. That's the one good thing the web managed to do. It's just stuck to what's ultimately 3 decades of tech debt from a prototype document reader made in a few weeks.
>It'd be nice if companies could just play nice and agree on a standard interface
They basically do though. In every cross-platform native-ported app I've used, the GUI is the same layout. Well, except on macOS the menu bar is at the top of the screen and on Windows it has its own menu layer in the application window. But that is it. All these frameworks already have feature parity with one another. It is expected that they have these same functions and UI paradigms. Here's your button function. Here is where you specify window dimensions. This function opens a file browser. This one takes in user input to the textbox. I mean, it is all pretty standardized, and limited what you can expect to do in UI already.
There is a lot of stuff you can get done with the standard library alone of various languages that play nice on all major platforms. People tend to reach for whatever stack of dependencies is popular at the time, however.
I am not sure; it seems that cross-platform applications are possible using something like python3/gtk/qt etc.
Cross-platform GUI libraries suck. Ever used a GTK app under Windows? It looks terrible, renders terribly, doesn't support HiDPI. Qt Widgets still has weird bugs: when you connect or disconnect displays it re-renders UIs at twice the size. None of those kinds of bugs exist for apps written in Microsoft's UI frameworks and browsers.
The problem with cross-platform UI is that it is antithetical to the very reason an OS-native UI exists. Cross-platform tries to unify the UX, while native UI tries to differentiate the UX. Native UI wants unique, incompatible behavior.
So the cross-platform UI frameworks that try to use the actual OS components always end up with terrible visual bugs from unifying things that don't want to be unified. Or worse, many "cross-platform" UI frameworks try to mimic their developer's favorite OS. I have seen way too many Android apps built with "cross-platform" frameworks that draw iOS UI elements.
The best way to do cross-platform applications with a GUI (I specifically avoid saying cross-platform UI) is defining yet another platform above a very basic common layer. This is what the Web did. What a browser asks from an OS is a rectangle (a graphics buffer) and the fonts to draw a webpage. Nothing else. The entire drawing functionality and behavior is redefined from scratch. This is the advantage of the Web, and this is why Electron works so well for applications deployed on multiple OSes.
> Ever used a GTK app under Windows?
I have created and used them. They didn't look terrible on windows.
>What a browser asks from an OS is a rectangle (a graphics buffer) and the fonts to draw a webpage. Nothing else. Entire drawing functionality and the behavior is redefined from scratch. This is the advantage of the Web..
I think that is exactly what GTK does (and maybe even Qt) too..
I think it is just that there is not much funding going to those projects. The web, on the other hand, is an ad-delivery platform, so the sellers really want your browsers to work and look good...
There's loads of funding. But the ones funding Qt and GTK aren't parties interested in things like cohesion or design standards. They just needed a way to deliver their product to the user in a faster way than maintaining 2-3 OS platform apps. Wanting that shipping velocity by its nature sacrifices the above elements.
The remnants of the dotcom era definitely helped shape the web in a more design-conscious way, in comparison. Those standards are created and pushed a few layers above the one in which cross-platform UIs operate.
Here is BleachBit, a GTK3-based disk cleanup utility. It is a blurry mess, and GTK3 window headers are completely out of step with Windows styling and behavior.
https://imgur.com/a/ruTGUaF#ntnfeCJ
https://imgur.com/yGhgkz2 -> Comparison with another open source app, Notepad3, under Windows.
> I think that is exactly what GTK does (and maybe even Qt) too..
The problem is they half-ass it. Qt only does it with QML; Qt Widgets is half-and-half, and it is a mess.
Overall these do not invalidate my point though. If you want a truly cross-platform application GUI, you need to rewrite the GUI for each OS. Or you give up and write one GUI that's running on its own platform.
> I think it is just that there is not much funding going to those projects. The web, on the other hand, is an ad-delivery platform, so the sellers really want your browsers to work and look good...
Indeed, Google employs some of the smartest software developers, including ones with really niche skills like Behdad Esfahbod, who created the best or second-best font rendering library out there. However, Qt has a company behind it (a very, very incompetent one, not just with the library but at operating a business). I have seen many commercial libraries too; they are all various shades of terrible.
I see your point. Thanks for the screenshots.
Visual Basic solved that. The web is in many ways a regression.
Visual Basic (and other 90s visual GUI builders) were great simple options for making GUI apps, but those GUIs were rather static and limited by today's standards. People have now gotten used to responsive GUIs that resize to any window size, easy dynamic hiding of controls, and dynamic lists in any part of the GUI; you won't get them to come back to a platform where their best bet at dynamic layout is `OnResize()` and `SubmitButton.Enabled = False`.
> Visual Basic (and other 90s visual GUI builders) were great simple options for making GUI apps
Yes, they were comfortable and easy to set up (and use), particularly when compared to web development.
> a platform where their best bet at dynamic layout is `OnResize()` and `SubmitButton.Enabled = False`
This is a great description of what web coding looked like for a very long time, _especially_ when it started replacing RAD tools like VB and Delphi. In fact, it still looks like this in many ways, except now you have a JSX property and React state for disabling the button, and a mess of complex tooling, setup and node modules just to get to that base level.
The web won not because of programmer convenience, but because it offered ease of distribution. Turns out everything else was secondary.
> This is a great description of what web coding looked like for a very long time
React is over a decade old, and as far as I remember, desktop apps using embedded browsers (Electron) started becoming dominant after it came out.
The ease-of-distribution advantage is huge, but web technologies are big outside the Web too, where it doesn't apply.
(Besides my main point, idiomatic web UIs don't implement resize handlers for positioning each element manually, but instead use CSS to declaratively create layouts. Modern GUI libraries with visual builders can also do this, but it was decidedly not the norm in the 90s. Also, modern dynamic GUIs generally don't use a static layout with disabled parts, but hide or add parts outright. That kind of dynamicity is hard to even conceptualise with a GUI builder.)
Microsoft invented AJAX when building Outlook for the web back in 2000. Gmail was released in 2004 and Google Docs in 2006. Around this time, even enterprise giants like SAP started offering web UIs. This is the shift from RAD to web I'm talking about.
The current idiomatic way of doing web layouts was, back then, almost entirely theoretical. The reality was a cross-browser hell filled with onResize listeners, in turn calling code filled with browser-specific if statements. Entire JavaScript libraries were devoted to correctly identifying browsers in order for developers to take appropriate measures when writing UI code. Separate machines specifically devoted to running old versions of Internet Explorer had to be used during testing and development, in order to ensure end user compatibility.
In short: The web was not in any way, shape or form more convenient for developers than the RAD tools it replaced. But it was instant access multi-platform distribution which readily allowed for Cloud/SaaS subscription models.
Electron happened more as an afterthought, when the ease of distribution had already made web UIs, and hence web UI developers, hegemonic. Heck, even MS Office for the web predates React, Electron, and something as arcane as Internet Explorer 9.
Things have gotten much better, but we're still having to reinvent things that just existed natively in VB6 (DataGrid, anyone?) - and at the cost of increasingly complex toolchains and dependencies.
I feel that Flutter is the first right step for this; it felt like a breath of fresh air to work with compared to the web stack.
Are they not? GUI libraries are like button(function=myFunction). This isn't rocket surgery stuff here, at least in the GUI tooling I've used.
Pretty much any non-web GUI framework I tried so far has either been terrible to set up, or terrible to deploy. Or both. Electron is stupidly simple.
ImGUI is the single exception that has been simple to set up, trivial to deploy (there is nothing to deploy, including it is all that's needed), and nice to use.
Tkinter is easy too.
Except ImGUI’s missing what I consider essential features for macOS: proper multitouch support (two finger panning, pinch to zoom).
> Isn't it time to throw the browser away, stop abusing HTML to make applications, and design something fit for purpose?
We had Flash for exactly that purpose. For all its flaws, it was our best hope. A shame Apple and later Adobe decided to kill it in favor of HTML5.
The second best bet was Java applets, but the technology came too early and was dead before it could take off.
Some may mention WebAssembly, but I just don't see that as a viable alternative to the web mess that we already have.
The death of Flash was squarely the fault of Adobe's neglect. Apple simply refused to tolerate their garbage.
> Isn't it time to throw the browser away, stop abusing HTML to make applications, and design something fit for purpose?
Great. How do you get all the hardware and OS vendors to deploy it for free and without applying their own "vetting" or inserting themselves into the billing?
It works with free software GNU/Linux repos.
? On iOS?
In general.
> The web was a mistake
I wouldn't say that. The web has done way more good than harm overall. What I would say is that embedding the internet (and the tracking, spyware, and dark patterns that have gained prominence) into every single application that we use is what is at fault.
The web browser that we built in 1990 was all on-premise obviously. And it had a very different architecture than HTTP. There were two processes. One used TCP/IP to mirror the plant computers model into memory on the workstation. The other painted the infographics and handled the user navigating to different screens. The two processes used shared memory to communicate. It was my first job out of university.
The Internet and its consequences have been a disaster for the human race. They have greatly increased the surveillance we endure for those of us who live in "advanced" countries, but they have destabilized society, have made life unfulfilling, have subjected human beings to indignities, have led to widespread psychological suffering and have inflicted severe damage on the natural world. The continued development of technology will worsen the situation. It will certainly subject human beings to greater indignities and inflict greater damage on the natural world, it will probably lead to greater social disruption and psychological suffering, and it may lead to increased physical suffering even in "advanced" countries.
You know, or something.
I was doing sortable-and-filterable tables in the browser without a server round-trip 20 years ago, using XML/XSLT and not thousands of lines of JS but something on the order of dozens.
So true. I still do that in an enterprise app that I wrote.
I know that Chrome pulling the plug on XSLT in the browser is imminent - so how are you refactoring?
Back in PHP days you had an incentive to care about performance, because it's your servers that are overloaded. With frontend there's no such issue, because it's not your hardware that is being loaded
Nah, some fixes to HTML would go a long way to address these issues.
I agree we need built-in controls, reasonably sophisticated, properly styleable with CSS. We also need typed JS in the browser, etc.
These feel like all the things a proper "Web 3.0" should have solved. We have decades of lessons learned that we could apply with a soft reboot to how we envision the web.
Instead it's just piling on a dozen layers of dependencies. Webassembly feels like the only real glimmer of what the "next generation" could have been like.
> sortable-and-filterable table
Just use jquery and this plugin, 7kB minified:
https://github.com/myspace-nu/jquery.fancyTable/blob/master/...
That would be the thousands of lines of JS that they are complaining about. Except if it depends on jquery, that's even more lines.
The web is great as an application platform.
What's not great are the complexity merchants, due to money and other incentives etc., that ship to the web.
There are better web frameworks that are lighter and faster than React, but due to hype etc. you know how that goes.
When I use my work PC under Win 11, I endlessly notice the lag on basically everything. Click an email in Outlook and it takes 3 seconds to draw in... that's a good 12 billion cycles on a single core to do that. Multiply that by hundreds/thousands of events across the system and I wonder how many trillions of cycles are wasted on bloat every day.
My 17 year old core 2 duo should not feel faster on a lean linux distro than modern hardware and yet it does. Wild to see and somewhat depressing.
I see old videos (Computer Chronicles is a good example) of what could be done on a 486, for instance. While you can see the difference in overall experience, it isn't that extreme a difference, the 486 having been released 37 years ago...
And in 1990 people were complaining about the same thing [1].
[1] Why Aren’t Operating Systems Getting Faster As Fast as Hardware? https://web.stanford.edu/~ouster/cgi-bin/papers/osfaster.pdf
You are describing Wirth’s Law.
I turn off images and it is way faster. If you couple that with local DNS overrides, it is even faster. The tech is there, on desktop at least.
Resources have certainly been squandered, but there are also a lot of apples vs. oranges comparisons that overlook advances in UX/DX and security.
> Crazy thing is that computers have 1000x the memory and like 10,000x the CPU and it would still be a challenge to paint screens in 2 seconds.
It's not though, is it? Even browsers are capable of painting most pages at over 60 FPS. It's all the other crappy code making everything janky.
I was trying to upload a 300MB video via the local police's web interface, a very important matter. I had to set my phone screen to stay on for 30 minutes and then leave the web browser open without touching it. Disabling all power saving measures made no difference. This was the only way I could get it to finish uploading. I'm on a Pixel 8 Pro with GrapheneOS. Same thing in both Firefox and Vanadium. I don't think it runs out of RAM; the system is just too trigger happy. The battery still doesn't last all day anyway.
Try the coffee quick settings tile to keep the foreground app open and the screen on.
My iPhone 8 just stopped working 2 months back (the phone works but the microphone used in phone calls no longer does), so by chance my good friend gave me his Pixel 8 that was only a few months old. It got a pink line down the screen that comes and goes; pressing in one spot can usually make it go away, but he is a business owner and he can't risk the screen going from line to not working for a day, as a missed communication could cost him thousands. So he said here, take it, and he got a new one. Seems like this pink line is common and a defect in some screens.
Anyways, I wanted to say I also have a Pixel 8, but with the stock OS, and my battery typically lasts a full day with average usage. My iPhone 8 previously, even with a replacement battery, was lucky if it lasted more than 5 hours. I had to charge that thing multiple times a day.
The iPhone 8 was released in 2017 and the Pixel 8 is from 2023.
That pink line issue is covered under a repair program, btw, at least here in Europe.
(I have the exact same issue)
> A lot of software has been squandering the massive hardware gains that have been made. I hope this changes when it becomes a lot harder to throw hardware at the problem.
Considering how many people are so averse to programming that they use LLMs to generate code for them? Not very likely IMO. I would like to see it happen, but people seem allergic to actually trying to be good at the craft these days.
I am more worried about memory and cycles being squandered by the underlying libraries on the device itself. Not a lot you can do to optimize those.
(I'm looking at you, Liquid Glass. I would love to get back to a vintage, "flat" UI. I'll allow for anti-aliasing, Porter-Duff compositing, but that's where I draw the line.)
I think we aren't far from AI being able to solve this sort of problem too.
Imagine you are Apple and can just set an LLM loose on the codebase for a weekend with the task to reduce RAM usage of every component by 50%...
From everything I’ve seen, LLMs aren’t exactly known for writing extremely optimized code.
Also, what happens to the stability and security of my phone after they let an LLM loose on the entire code base for a weekend?
There are 1.5 billion iPhones out there. It’s not a place to play fast and loose with bleeding edge tech known for hallucinations and poor architecture.
> LLMs aren’t exactly known for writing extremely optimized code.
They are trained on everything, and as a result write code like the average Internet developer.
The average developers suck. The distribution is also unbalanced. It is bulkier on the low-skill side.
Great UIs are written by above average or even exceptional developers. Such experience is tied to the real-life reasoning and combining unique years-long human experience of interacting with the world. You need true general intelligence for that.
That is the point I was making, but I suppose that may not have been clear. Thanks for expanding.
- [deleted]
- [deleted]
Is that really how it works - everything is just weighted equally? I would hope there would be at least some kind of tuning, so <well-regarded-codebase> gets more weight than <random-persons-first-coding-project>? If not, that seems like an opportunity. But no idea how these things are actually configured.
>write code like the average Internet developer
Before post-training, yes (GPT-3, 2020-class models). Post-training makes it no longer act like the average.
If you ask an LLM to code whatever, it definitely won’t produce optimized code.
If you direct it to do a specific task to find memory and cpu optimization points, based on perf metrics, then it’s a completely different world.
You can also tell it the optimization to implement.
I asked Claude to find all the valid words on a Boggle board given a dictionary, and it wrote a simple implementation that basically tried to search for every single word on the board. Telling it to prune the dictionary first, by building a bit mask of the letters in each word and on the board and then checking if the word is even possible to have on the board, gave something like a 600x speedup with just a simple prompt of what to do.
That does assume that one has an idea of how to optimize, though, and what the bottlenecks are.
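The pruning trick described above can be sketched in a few lines (a hypothetical example, not the commenter's actual code): represent each word and the board as a 26-bit letter-set mask, and discard any word that uses a letter the board doesn't have, before running the expensive board search.

```javascript
// Build a 26-bit mask of which letters a string contains.
function letterMask(s) {
  let mask = 0;
  for (const ch of s.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) mask |= 1 << i;
  }
  return mask;
}

// Keep only dictionary words whose letters are a subset of the board's.
// This is a necessary condition, not a sufficient one (it ignores adjacency
// and letter counts), so it is purely a cheap pre-filter before the search.
function pruneDictionary(words, boardLetters) {
  const board = letterMask(boardLetters.join(""));
  return words.filter(w => (letterMask(w) & ~board) === 0);
}

const board = ["t", "a", "c", "k", "e", "r", "s", "o", "n"];
const dict = ["cat", "rock", "stone", "zebra", "tanker"];
console.log(pruneDictionary(dict, board)); // ["cat", "rock", "stone", "tanker"]
```

The mask comparison is a couple of integer ops per word, so most of a large dictionary is thrown away before the per-cell depth-first search ever runs.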
Can we assume at this point if the problems are well known, the low hanging fruit has already been addressed? The Boggle example seems like a pretty basic optimization that anyone writing a Boggle-solver would do.
iOS is 19 years old, built on top of macOS, which is 24 years old, built on top of NeXTSTEP, which is 36 years old, built on top of BSD, which is 47 years old. We’re very far from greenfield.
They kind of do if you prompt them. I had mine reimplement the Windows calculator (almost fully feature complete) in Rust, running with 2 MB of RAM instead of the 40 MB or whatever the Win 11 version uses, as a POC.
A handwritten C implementation would most likely be better, but there is so much to gain from just slaughtering the abstraction bloat that it does not really matter.
LLMs are trained on currently existing code.
I feel like my 3GS was way better about resuming where I left off than any fancy new iPhone I’ve had in the past few years.
Big name apps like Facebook, YouTube, Apple Music, and Apple Podcasts seem totally disinterested in preserving my place.
YouTube is the worst: I often stack a bunch of videos in the queue, pause to do something else for a while, and when I return to the app the queue has been purged.
YouTube will literally resume back to exactly where I was, then seemingly noticing that I switched back to it, go ahead and close the video I was watching. With all sorts of animations too, it's not just a case of having showed a cached screenshot. YouTube seems to intentionally forget where in a video I was, often after having been paused in the background for only a minute or two.
Why??
See if turning off your ad blocker makes a difference. I've noticed that sometimes YouTube has parts of the site that apparently can look to ad blockers like they are part of an ad (maybe intentionally, to annoy people with ad blockers?).
I'm talking about the YouTube app.
Likely some kind of complex refresh operation that kicks off when entering the foreground and takes a few seconds to complete before overwriting your state.
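As a purely hypothetical sketch of that failure mode (none of this is YouTube's actual code, and every name here is invented): the restore runs first, then an unconditional foreground refresh clobbers the restored state instead of merging with it.

```javascript
// Hypothetical model of the suspected bug: on launch the app restores
// the saved playback position, but a later foreground "refresh" rebuilds
// the view model from scratch instead of merging in the fresh data.
function restore(saved) {
  return { videoId: saved.videoId, position: saved.position, feed: [] };
}

// Buggy handler: resets playback along with the feed.
function onForegroundBuggy(state, freshFeed) {
  return { videoId: null, position: 0, feed: freshFeed };
}

// Safer handler: refreshes the feed but preserves playback state.
function onForegroundFixed(state, freshFeed) {
  return { ...state, feed: freshFeed };
}

let state = restore({ videoId: "abc123", position: 741 });
state = onForegroundFixed(state, ["new", "items"]);
console.log(state.position); // 741 — place preserved
state = onForegroundBuggy(state, ["new", "items"]);
console.log(state.position); // 0 — place clobbered
```

Which behavior you get depends entirely on whether the refresh path was written to be state-preserving, which matches the "overwriting your state a few seconds after resume" symptom.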
translation: cancer
YouTube on TVs will often keep closed captioning on when switching accounts, then notice that CC is on and turn it off. Even though every account in the household always has CC turned on.
I feel like that's definitely a choice for Facebook at least - there's no technical reason the app couldn't remember at least the post you were looking at. I think they literally don't care if you were halfway through reading something when you flicked out of the app and go back in - refreshing the page and showing you all new stuff is probably measurably "better for 'engagement'" by whatever silly metrics they use.
It’s been a while since I worked for a bigger company (not Meta), but the problem there was you would have a team responsible for feature A and a team responsible for feature B, and if there was any weird interop between the two it just never got resolved because there wasn’t an owner. There was no internal incentive to fix the problem. It wasn’t deliberate, but it was structural.
I find myself saving a ton of stuff to my Watch Later list, because I can’t trust the Back button when using YouTube. This issue exists on the phone, web, and AppleTV. YouTube just likes to randomly refresh everything. It’s the most annoying “feature”.
YouTube/Google just make these shitty small annoying decisions just to make the iOS experience that little bit more annoying than it has to be.
Case in point: YouTube background play doesn’t pause when Siri makes an announcement, so if you’re listening to something you get two voices over each other.
I gave it the benefit of the doubt and figure it must be some kind of iOS thing, until I was listening to Audible one day and it paused automatically. So it’s just a google thing, not a third-party apps thing.
I have the same issue with the YouTube queue. This is something that could easily be persisted, but they just choose not to.
I feel like this might be intentional to a certain degree, at least on YouTube or Facebook.
If you switched off the app while looking at a certain post or watching a certain video, that's a negative engagement indicator, so the app wants to throw you back into the algorithmic feed to show you something new instead.
Conveniently, if you're watching a YouTube video with an ad, switch apps, and YouTube reloads, you have to watch the ad again.
You guys have ads on youtube?
Ad blockers don't work anymore, at least not with the version YT serves me. If it thinks that I have an ad blocker active (false positives happen too), it will only show a black rectangle and not even load the comments.
Strange.
On PC, I use Firefox with the uBlock Origin extension and I see no ads on Youtube.
Same with my pocket supercomputer: Firefox works great on Android, including for Youtube. And it uses extensions like the PC version does. No ads there, either.
On the BFT in the living room, I have a Google-manufactured Google TV device. It runs SmartTube, and displays no ads on Youtube.
I even have an iPad that I use primarily for watching Youtube videos. For that, I stay completely within the confines of the walled garden and use Safari with the AdBlock add-on. And: if you're guessing that I'm about to write that I have no ads on Youtube there either, then you're right. There are no ads on Youtube with that device, either.
Am I doing this wrong?
Maybe my perspective differs from that of some others, but it seems to all work very well for me here in 2026. (There's been some ups and downs with this over the years, but it all finds its way back to exactly what I wrote above, anyway.)
I also use Firefox with uBlock Origin. It worked flawlessly until some time this January. It happened with the switch to a new version of the video player which changed the design and behaviour. I'd be curious if you're still on the old version or something else is different.
Roughly in that timeframe YT also successfully blocked downloads with yt-dlp for a bit. Seems like they're trying harder now because of AI scrapers.
On PC, it looks like I'm using this, from the end of January: https://github.com/gorhill/uBlock/releases/tag/1.69.0
And also this, from a couple of weeks ago: https://www.firefox.com/en-US/firefox/147.0.4/releasenotes/ (with Linux, but that probably doesn't matter at all)
And that's about it. I recently pruned some other Firefox extensions while troubleshooting completely unrelated issues, and all that's left is uBlock Origin, Dark Reader, and BitWarden.
Seriously, I've had no recent issues with Youtube ads at all and certainly none in January or February of this year. It's been smooth-enough for me on all of the platforms I mentioned before (and I use them all quite a lot, except perhaps for the BFT).
I wonder what's different on your end?
Huh, I think I figured it out. It now works again after removing the extension "Return YouTube Dislikes" which I had just kept around because why not.
Turns out if both this and uBlock are active, YouTube will refuse to work. But only uBlock works just fine.
Nice!
Welcome back to the club.
Also, if none of the methods helps: Google has completely removed advertising from YouTube videos in Russia.
So you don't even need an ad blocker, just a sponsor block.
By the way, this (not an extension, just logging in from a Russian IP) removes ads from all other Google services, too.
Try Firefox, LibreWolf, Waterfox, or Chromium. In these browsers I had uBlock Origin (Lite for Chromium), AdGuard, and NoScript (and/or Privacy Badger) on my phone and PC, and I didn't see any ads at all. I use the Unhook and Enhancer extensions with them.
It's more common than you might think.
[dead]
Too slow to edit. But also, Now Playing just seems to go away after a while. Why isn't this written to some nonvolatile place and just preserved? It feels like it must be on purpose, but I wonder what the purpose is.
I assume the purpose of the Now Playing clearing after a while is the idea that when people start a "new session" with their device it should be "clean". Like, if Now Playing didn't randomly disappear then for most people it would always be on, indicating some paused music or podcast playback. It would also never give a chance for that elusive "start playing" experience that shows up in its place sometimes to recommend that I listen to one of four songs/podcast episodes.
Even system apps like Photos have completely given up on state restore. I'm deep in an album comparing a photo to something on the web? Sorry, Safari needs all that RAM, Photos is kicked out, and Photos can't possibly remember you were inside an album (despite, you know, all the APIs Apple specifically has to manage this [0]). They USED to care about these things and made it seamless enough that you weren't supposed to know that the app was killed in the background, but they just don't seem to care anymore
[0] https://developer.apple.com/documentation/SwiftUI/restoring-...
Brave is a very good YouTube app. You can download videos for offline viewing, build a local playlist of said videos, and bypass ads all in one go.
NewPipe as well.
Now is bad too, but my recollection is that the iPhone 3G-era task killer was EXTREMELY aggressive and required "tricks" to keep your state in the one app you could run
On a tangent, how about those sweet app updates with patch notes reading "bug fixes", every week or so, from the likes of Xiaomi and Anker, weighing in at 600-700MB.
It's all gone to $hit; efficiency is gone, it's just slop on top of more slop.
iOS I think has really aggressive background task killing, and it also drives me insane. I know they do it for battery life but I'm about ready to switch to Android, and would have a long time ago if I that didn't also mean replacing my watch, headphones, etc.
Is it too much to ask for me to manage my own background processes on my phone? I don't want the OS arbitrarily deciding what to pause & kill. If it actually does OOM, give me a dialog like macOS and ask me what to kill. Then again, if a phone is going OOM with 12GB of RAM there's a serious optimization problem going on with mobile apps.
> iOS I think has really aggressive background task killing, and it also drives me insane. I know they do it for battery life but I'm about ready to switch to Android, and would have a long time ago if I that didn't also mean replacing my watch, headphones, etc.
Android does all sorts of wacky stuff with background tasks too... Although I don't feel like my 6 GB Android is low memory, so maybe there's something there, but I also don't run a lot of apps, and I regularly close Firefox tabs. Android apps do mostly seem well prepared for background shenanigans, cause they happen all the time. There's the AOSP/Google Play background app controls, but also most of the OEMs do some stuff, and sometimes it's very hard to get stuff you want to run in the background to stay running.
I dunno about watches, but Airpods work fine with Android, as long as you disconnect them from FindMy cause there's no way to make them not think they're lost (he says authoritatively, hoping to be corrected).
On Android of course it depends on the configuration. I am running LineageOS 23 on an older device with 6GB of RAM as well and it would kill basically anything (making e.g. paying with a credit card a pain when you have to switch to the bank app to confirm a transaction). Had to adjust few variables for ZRAM control and now it's seamless.
iOS doesn't have aggressive background task killing except for memory pressure. It suspends apps for battery life; it only kills them under memory constraints. If you don't want apps dying and tabs closing, use apps that use less memory. iOS does not have swap out of a desire to avoid unnecessary NAND wear (and to avoid the performance impact), so it must more aggressively kill things.
So I have safari and I can’t switch to my email? Both native apps and sometimes I lose the state of safari if I move more than 10s away? I have to keep switching between the 2 apps to keep alive my safari tab? Insanity
I recently started learning how to do iOS apps for work and the short answer is: you don't.
Apple seemingly wants all apps to be static jpegs that never need to connect to any data local or remote, and never do any processing. If you want to do something in the background so that your user can multitask, too damn bad.
You can run in the background, for a non-deterministic amount of time. If you do that, iOS nags your user to make it stop. If you access radios, iOS nags your user to disable it.
It's honestly insane. I don't know why or how anyone develops for this platform.
Not to mention the fact that you have to spend $5k minimum just to put hello world on the screen. I can't believe that Apple gets away with forcing you to buy a goddamn Mac to compile a program.
You can get a brand new Mac for < $600
People develop for iOS because iOS users spend more money. End of story.
Depends on where you live. I haven't seen one for less than $1000, and that's for a five-year old model soon going out of support. Seems like a waste of money.
No Mac Minis there?
I've never felt nagged. Every time I get one of those popups, which isn't too often, I think "neat, good to know."
It's inconvenient that apps can't do long-running operations in the background outside of a few areas, but that's a design feature of the platform. Users of iOS are choosing to give up the ability to run torrent clients or whatever in exchange for knowing that an app isn't going to destroy their battery life in the background.
> If you do that, iOS nags your user to make it stop. If you access radios, iOS nags your user to disable it.
These are features, because we can't trust developers to be smart about how they implement these. In fact, we can't even trust them not to be malicious about it. User nags keep the developer honest on a device where battery life and all-day availability is arguably of utmost importance.
> you have to spend $5k minimum just to put hello world on the screen.
Now that's just nonsense.
You don’t have to spend 5K, cmon.
Very specific complaint that has nothing to do with the amount of ram you have, that’s a software choice in iOS. Kinda a tangent for a top comment.
I had a China phone with amazing specs but it KEPT KILLING EVERYTHING.
Hardware is pretty useless if the software that drives it is useless. I don't know, it probably works better in China; all I know is that I went back to good old Samsung.
It's a pervasive Chinese phone problem. I've used many and they all have "Battery saving" features on by default, which means killing background apps after a while apparently. Battery life is great, but newly installed apps sometimes don't work as they should.
The market demands must be different there. I've disabled "battery optimisation" for all the apps I need to stay open (and some apps even prompt me to disable it!), and I don't have any issues in daily use.
That kind of aggressive process termination should become less common now that Android has introduced the freezer [1] optimization, which puts a background process into a completely unscheduled state.
[1] https://source.android.com/docs/core/perf/cached-apps-freeze...
Chinese apps are less optimized than western ones
If you run out of smartphone battery you are in much bigger trouble in China than in the West, since a phone is necessary to function almost everywhere. That's why they have rental power bank stands in literally every restaurant and every small grocery shop; in an urban area you are never further than about a five-minute walk from one.
btw you can always put an app on the protected/not-optimized list, which usually solves problems with most of the western apps on Chinese phones (essential Chinese apps like WeChat are on the list by default)
> some apps even prompt me to disable it!
That's social engineering to get themselves more background network activity. I wouldn't trust such an app.
well health tracker which stops tracking when it's battery optimized isn't very useful, is it?
It's because of the lack of a centralized push message service. Here we have Google Cloud Messaging to handle notifications for all apps. They don't have Google in China, so each app establishes its own connection to its servers, and that kills the battery. If you buy the global version of those phones it's often better.
I have had many Chinese phones (Huawei, Oppo, Xiaomi) over the years, and the things they choose to kill in the background are odd. Web browsers and almost any kind of banking app will be killed in minutes if not seconds. VLC... depends on the day; could be minutes or days. No idea why that one.
Hard to tell if it's something I am doing or not. I will say that with all these phones, and everything Google turned off, I typically get 3-4 days per charge, but that really depends on what your usage is.
I really don't understand that at all. Web pages are mostly static; you would think the iPhone would cache websites reasonably well.
I remember on Android (I don't recall the app name specifically) there was an app that would let me download any website for offline browsing; I would use it when I knew I might have no internet, like on a cruise.
Heck, there used to be an iOS client for HN that went defunct after some time, but it would let you cache comments and articles for offline reading.
It's the JS that does it, because so many webpages are terribly optimized, integrating aggressive ad waterfalls or persistent SPA frameworks doing continual scope checks.
That being said, there's no reason the Safari context shouldn't be able to suspend the JS and simply resume when the context is brought back to the foreground. It's already sandboxed; just stop scheduling JS execution for that sandbox.
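Until browsers do that, the page can at least defend itself: there are standard hooks for persisting state right before a tab goes to the background. A minimal sketch (the `appState` shape and the `"pageState"` storage key are hypothetical, just for illustration):

```javascript
// Pure helpers: turn page state into a string and back.
// These are the testable core; the event wiring below is browser-only.
function serializeState(state) {
  return JSON.stringify(state);
}

function deserializeState(text) {
  return text ? JSON.parse(text) : null;
}

// In the browser, save whenever the page goes hidden and restore on load.
// `visibilitychange` / `pagehide` are the reliable hooks here; Safari may
// never fire `unload` before evicting a page.
//
//   const appState = { scrollY: 0, draft: "" };
//   document.addEventListener("visibilitychange", () => {
//     if (document.visibilityState === "hidden") {
//       sessionStorage.setItem("pageState", serializeState(appState));
//     }
//   });
//   const restored = deserializeState(sessionStorage.getItem("pageState"));
```

Of course this only helps the page recover after a reload; it does nothing to stop Safari from evicting the tab in the first place.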
Sort of related: on my laptop running Linux, Firefox with YouTube will get progressively slower if you keep sleeping and waking up the laptop. It is as though the JS is struggling to keep up with the suspend and wake cycle. This never happened on Windows/macOS systems, so it could just be a Linux thing.
I've encountered the exact same issue. What helped me somewhat, at least with the video playback, is opening videos in embed player.
I coded an extension that adds a context menu for opening videos in embed mode. https://addons.mozilla.org/pl/firefox/addon/youtube-open-as-...
Obviously it depends on what you're consuming, but popular sites are rarely static web pages.
Safari suspends backgrounded tabs. I think that's what we're observing here rather than strictly memory pressure.
Web pages that make sense are mostly static. But these days articles need to load each paragraph dynamically, so in order to save 3kB in case you don't finish the article, you need to download 5MB of JS to do that, plus a bunch of extra handshakes.
> and it makes me want to throw my phone across the train (Where the internet often cuts out!).
Spotted the German lol
The general problem is that many people don't bother testing their apps outside of their office wifi with low latency, low jitter, low packet loss and high bandwidth. Something like persisting the state when the OOM/battery-save killer comes knocking onto some cloud endpoint? Perfectly fine on wifi... but on a mobile connection that might just be EDGE, cut entirely because the user is just getting a phone call and the carrier does not do VoLTE, or be of an absurd latency? Whoops. Process killer knocks a -9 and that's it, state be gone.
Side note: Anyone know of a way to prevent the iPhone hotspot from disassociating from a MacBook when the phone loses network connectivity? It's darn annoying; I counted having to reconnect twenty times on a train ride of less than an hour.
Android Firefox with ad blockers - life changing.
Mine (Android Firefox) does it when I have a YouTube video paused and do something else for a bit. Whenever I stop watching a video, I have to screenshot it so I know the timestamp to try to get back to later :-/
App battery usage is unrestricted, so it's not that.
In fairness, back in 2017 I bought a OnePlus 5T with 8GB of RAM.
That's almost a decade ago.
Phone RAM progression has stagnated for a LONG time, and during that time I doubt that webpages have become lighter, so yeah, I'm not surprised by what you are saying.
I am on my $110 android device from 2022 (4GB RAM), and I have never faced the browsing related issues that you mentioned. My phone came with stock android 11 ROM with no bloats, so that might've helped too I guess.
what phone is that bro?
> I hope this changes when it becomes a lot harder to throw hardware at the problem
Maybe, but I have terrible news for you about how much easier it just became to throw software at a problem
Removing docking functionality could possibly reduce RAM usage by never enabling 4K screen output. This would be similar to the switch lite.
Although, for a $450 device that doesn’t need to make much of a profit on its own, I also don’t think they’re heavy on memory in the first place (12GB). You can buy top quality Chinese Android handhelds with more RAM and better Qualcomm processors than the Switch 2 for about the same price, and those companies are making $0 in software royalties (e.g., AYN Thor Max is $450 with a 16GB/1TB configuration).
> Removing docking functionality could possibly reduce RAM usage by never enabling 4K screen output. This would be similar to the switch lite.
Every version of the Switch 1 had 4GB of RAM; they didn't cut that on the Lite. Going back and patching every game to ensure it ran on less RAM than it was originally designed for would have been a nightmare.
> (e.g., AYN Thor Max is $450 with a 16GB/1TB configuration).
AYN just announced that the Thor will get a price increase soon for obvious reasons.
https://www.reddit.com/r/SBCGaming/comments/1rf5gxq/to_thor_...
Oh yeah, I accidentally implied the switch lite cut down RAM when it didn’t.
Of course the Thor Max will have a price increase, but also, obviously 16GB/1TB is a massively bigger bill of materials than the Switch 2’s 12GB/256GB configuration.
And I forgot to mention that Nintendo has far more pricing leverage in terms of their volume.
It’s not just mobile Safari; Safari on desktop does the same thing even with lots of memory available. Whatever they’re doing to limit a tab’s resources needs to go, it’s so frustrating.
I wonder if that's "App Nap"? There is a toggle for it in desktop Safari's Debug menu under Miscellaneous Flags -> Disable App Nap
Enable debug with:
$ defaults write com.apple.Safari IncludeInternalDebugMenu -bool YES
Believe I tried that a while back and it didn't help. Safari will reload a page as you're using it, including while you're filling out forms, and of course it won't save what you entered.
That tab refreshing thing really bugs me with fan fiction. If I think I might want to reread a story someday I'll download it, because if you read fan fiction you learn that many authors come back and fiddle with their earlier stories, sometimes even replacing the entire old story with chapter 1 of a complete rewrite. Even in the rare case that they actually do eventually finish the rewrite it is often not as good as the original.
AO3 HTML downloads have the story in one long HTML file. When reading that on iPad that stupid refresh can move you to the top which is pretty damned annoying.
For that very particular situation I do have a workaround, but it involved adding some JavaScript to the download HTML. If anyone else is reading downloaded AO3 HTML and would like this I've put it on pastebin.com. Get saveplace.js [1] and ao3book.css [2] and add this at the end of the head of your AO3 download:
<script type="text/javascript" src="saveplace.js"></script>
<link rel="StyleSheet" href="ao3book.css" type="text/css"/>

Saveplace does two things. First, to address the tab refresh problem, whenever you change your position in the story it waits until you've stopped at a new position for a bit and then records the new position in parameters on the URL. After a refresh happens it looks for those parameters and restores the last saved position.
Second, to make the story easier to read it hides all but the first chapter, adds buttons to move forward and back by chapter, and adds a dropdown to select chapters. It also adds a button to switch between night and day mode. The day/night mode setting is saved in local storage.
Feel free to use this in anything of your own. The chapter navigation stuff is tied to AO3's HTML, but that would be easy to delete leaving just the position saving/restoring. This is in the public domain in places where it is possible to put things in the public domain. If one of us is somewhere that isn't possible you can use it under the MIT No Attribution license (MIT-0).
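For anyone curious, the position-saving half boils down to a couple of pure functions. This is a sketch of the idea with hypothetical helper names, not the actual saveplace.js source:

```javascript
// Encode a reading position (e.g. a paragraph index) into a URL parameter
// so it survives a tab refresh. Uses the standard WHATWG URL API.
function withSavedPosition(href, pos) {
  const url = new URL(href);
  url.searchParams.set("pos", String(pos));
  return url.toString();
}

// Read the saved position back after a reload; null if none was saved.
function savedPosition(href) {
  const pos = new URL(href).searchParams.get("pos");
  return pos === null ? null : Number(pos);
}

// Browser wiring would debounce scroll events and then do roughly:
//   history.replaceState(null, "", withSavedPosition(location.href, pos));
// and on load:
//   const pos = savedPosition(location.href);
//   if (pos !== null) jumpToPosition(pos);  // jumpToPosition: hypothetical
```

Stashing the position in the URL (rather than local storage) is what makes it survive the kind of forced refresh Safari does, since the URL is the one thing the browser restores faithfully.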
Calibre can convert HTML to ePub, which you can then use reader apps for. Those are much better at remembering your place.
AO3 also allows downloads of different formats including epub. I often download the epub (or use fichub for other sites) and read on the Epub Reader app. If I want to read on my Kindle app or my physical Kindle, I'll send the epub to my Amazon library via email.
Oh, indeed, that's the premium brand experience right there for you: all the basic stuff is broken; would you like some more Apple services to go with that?
I know this article is about RAM but I truly hate how little storage the iPhone ships with their phones. I guess everyone is using iCloud but I refuse to store my personal data on the cloud. I’m constantly down to 2-3 GB on my phone. I have just 128 GB of storage that’s not upgradable. What a shame.
My in-laws have probably discarded at least five or six Apple devices on that account. Typically they get used devices with a good number of years of updates remaining, but the updates are pointless when iOS grabs 50% of the storage for itself and the actual update needs the rest, resulting in a device that you may not be able to update even if you uninstall everything.
The devices themselves are fast enough to run everything, you just can't update and eventually apps stop being available to the old iOS version they run.
Tin-foil-hat theory: iCloud subscriptions are why Image Capture hasn't been updated in years and still crashes on big transfers. Not that I'd expect them messing with it at this point to produce a more useful tool.
- [deleted]
Settings > Apps > Safari > Reading List: Automatically Save Offline
“Save webpages to read later in Safari on iPhone” https://support.apple.com/guide/iphone/save-pages-to-a-readi...
You're just adding a step that doesn't fix the primary issue (you can already manually save any page you want without adding it to your reading list). Someone should be able to go to their translate app, then their photo gallery, and back to Safari without it needing to refresh the context.
That doesn’t save the current dynamic state of the page. It’s at most useful for static content, but even on a Wikipedia page you’ll lose your current state of expanded/collapsed sections and hence your reading position.
Wasn't the 2DS just a 3DS minus the lenticular screen, and especially minus the front-facing camera that did face tracking to improve the quality of the 3D?
My understanding was that market research showed a lot of users were turning off the 3D stuff anyway, so it seemed reasonable to offer a model at lower cost without the associated hardware.
> My understanding was that market research showed a lot of users were turning off the 3D stuff anyway
It was also because young children weren't supposed to use the 3D screen due to fears of it affecting vision development. You could always lock it out via parental controls on the original, but still that was cited as a reason for adding the 2DS to the lineup.
https://www.ign.com/articles/2013/08/28/nintendo-announces-2...
> Fils-Aime said. “And so with the Nintendo 3DS, we were clear to parents that, ‘hey, we recommend that your children be seven and older to utilize this device.’ So clearly that creates an opportunity for five-year-olds, six-year-olds, that first-time handheld gaming consumer."
This is why I miss Windows Phone. My $35 Lumia with 512 MB of RAM was infinitely smoother and faster than the 2GB Samsung Galaxy flagship phone I had, and of comparable fluidity to the so-much-more-expensive iPhones with 2GB RAM.
Chinese retro handheld companies have started to quietly remove specific information about RAM speed, etc. You can even get different hardware per batch.
That is what happens when people learn to code and very little value is given to algorithms and data structures, regardless of the programming language.
That and using SPAs for static sites.
I feel it's because of iOS's aggressive RAM-saving behavior rather than the lack of RAM.
I know this because I still get some of my web pages refreshed even if the browser is literally the only app that is running.
An iOS or Safari issue then. I also have 12GB of RAM on my S25+, with 25 open tabs, and I quickly did a test: there were none that were unloaded that I had to reload
It happened a lot on my previous phone with only 4GB ram though
It’s more likely related to choices involving making the battery last long.
Back in the day, I was running AutoCAD on a 386 PC. Now, a single Firefox tab consumes 500MB of memory. That is progress for us.
Memory uses power, this is a major factor in why aggressively stopping things helps.
There is a strong argument modern mobile goes too far for this.
With dram, you have to refresh every cell within a periodic interval. Usually this is handled in hardware. It would be a crazy optimization if unused pages weren’t refreshed. There would have to be a decent amount of circuitry to decide that.
I'm not suggesting it exists, but I could plausibly see something where the range to refresh could be changed at runtime. If you could adjust refresh on your 8 GB phone in 1 GB intervals (refresh up to 1/2/4/8 GB, or refresh yes/no for each 1 GB interval), the OS could be sure to put its own memory at low addresses, compact memory into lower addresses from time to time, and disable refresh on the higher ones. Or, since I think there are APIs for allocating background vs. foreground memory: if you allocate background memory at low addresses and foreground memory at high addresses, then when the OS wants to sleep it can kill the process logically and turn off refresh on that RAM. When it wants to use it again later, it will have to zero the RAM, because who knows what it'll contain.
I don't work at that kind of level, so I dunno if the juice would be worth the squeeze (sleep with DRAM refresh is already very low power on phone scales), but it seems doable.
- [deleted]
I can't imagine the iPhone is entirely powering down memory. Otherwise, just deallocating memory won't change the power consumption.
Those aren’t the only two possibilities though.
What other possibilities are there? By what mechanism are you suggesting that iPhones save power by keeping RAM usage low?
Do you have any source that the iphone is turning RAM on and off?
This is an argument for having less memory on a hardware level. But once the DRAM is there, it uses power, whether or not it stores useful data or useless data.
There's a reason why we say unused RAM is wasted RAM.
Powering down unused physical RAM is absolutely a thing on some systems. For one thing, it's required if you ever want to support physical memory hotplug. The real issue however is that the gain from not doing DRAM refresh is clearly negligible: it's no more than the difference between putting a computer to sleep (ACPI S3), or putting a phone to sleep in airplane mode - and powering it off.
And you're saying Apple is doing that on the iPhone?
This is nonsense, at least on iOS. Apps get killed due to total system memory usage, not for power -- they only get suspended to save power.
RAM not filled with cached video ads and tracking scripts is wasted RAM!
The current iPhone is how much more performant than a 3GS, and what exactly are we doing differently with it? Still scrolling Instagram, texting, WhatsApp, maps, shitty mobile web; literally nothing has changed about how we use these devices. Nothing. These things should be like camels and have batteries that last for weeks by this point. Where the hell is all that power even going? These phones are like Hummers. Just wasteful.
That is an Apple problem. Keep in mind that the iPhone doesn't really multitask; the fact that you are having problems with 12GB is not surprising to me.
I have to use a Macbook M4 at work with 24GB, I have an AMD Lenovo Ryzen7 with 32GB running Linux Mint Cinnamon. It is infuriating how slow this Macbook is, even to shut it down is slow asf.
macOS is no different than Windows. I cannot wait for COB to get back to my Linux laptop.
I have a personal 16 GB M4 Macbook Air and my wife’s work computer is a 24 GB M4 Macbook Pro. My laptop runs circles around her work’s.
Companies install so much invasive shit in the name of security theater and employee control that there is a lot of waste going on.
24GB is not enough, it will keep swapping, compressing etc. I had such device at work. 32GB is a night and day difference. That said my workflows are such that I need at least 128GB now...
How did we get here? Quake ran well with 16MB of ram.
Am I too much of an idealist to hope that AI leads to less buggy software? On the one hand, it should reduce the time of development; on the other hand, I'm worried devs will just let the agents run free w/o proper design specs.
The message with AI from execs is that you have to go fast (rush!). Quality of work drops when you rush. You forget things, don’t dwell on decisions and consequences, just go-fast-and-break-things.
> The message with AI from execs is that you have to go fast (rush!). Quality of work drops when you rush.
Sure, but otherwise, the competition will be first to market, and the exec may lose their bonus. So, the exec keeps their bonus, and when the tech debt collapses, the exec will either have departed long ago or will be let go with a golden parachute, and in the worst case an entire product line goes down the drain, if not the entire company.
The financialization and stonkmarketization of everything is killing our society.
Considering how many companies that have adopted AI led to disastrous bugs and larger security holes?
I wouldn't call it an idealist position as much as a fool's one. Companies don't give a shit about software security or sustainable software as long as they can ship faster and pump stocks higher.
The average LLM writes cleaner, better-factored code than the average engineer at my company. However, I worry about the volume of code leading to system-scale issues. Prior to LLMs, the social contract was that a human needs to understand changes and the system as a whole.
With that contract being eroded, I think the sloppiness of testing, validation, and even architecture in many organizations is going to be exposed.
The social contract where I work is that you’re still expected to understand and be accountable for any code you ship. If you use an LLM to generate the code, it’s yours. If someone is uncomfortable with that, then they are leaning too hard on the LLM and working outside of their skill level.
It might actually turn out like that. A lot of bloat came from efforts to minimize developer time. Instead of truly native apps, a lot of stuff these days is some React-shaped tower of abstractions with little regard for hardware constraints.
That trend might reverse if porting to a best practice native App becomes trivial.
Considering that AI still can't even reliably get basic programming tasks correct, it doesn't seem very likely that turning it loose will improve software quality.
I honestly think the memory shortage kills the possibility of a Switch 2 Lite.
Nintendo can't realistically take memory budget away from developers after the fact. The 2DS cut the 3D feature from the 3DS, but all games were required to be playable in 2D from day 1, so no existing games broke on the cost-reduced 2DS.
but think of all your battery life gains