Every year or so there's a new article about some new spectacular storage medium. Crystals, graphene, lasers, quartz, holograms, whatever. It never materializes.
Demonstrating this stuff is possible isn't the hard part, it seems. Productionizing it is. You have to have exceedingly fast read and write speeds: who cares if it can store an exabyte if it takes all month to read it, or if you produce data faster than you can write it? It has to be durable under adverse conditions. It has to be practical to manufacture the medium and the drives. You probably don't want to need one device to read and a separate device to write. By the time most of these problems are worked out, most of these technologies aren't a whole lot better than existing tech.
Stick this on the "Wouldn't it be nice if graphene..." pile.
It took 15 if not 20 years to commercialize even such an obvious, low-tech thing as the radio telegraph, which can literally be built from common household supplies. And that happened about 60 years after Maxwell predicted electromagnetic waves theoretically.
Red LEDs were invented / discovered in the 1920s and became commercially successful as indicators in the 1960s. Optical fibers were invented in the 1920s or so and became a commercial success in the 1980s.
Certain things just take time. Do not dismiss a good physical effect; they are much rarer than so-called good ideas.
It's a good physical effect that doesn't obviously solve any real problems. Consider: 5D optical storage is thirty years old and SOTA transfer speed is about as many megabits per second. By the time it's fast enough to even approach a speed that's commercially useful, all of the other tech we have will have continued to progress. That's not to mention how fragile the quartz disks are. It's a real physical effect that doesn't solve problems.
We already have zero-retention-energy storage. The phenomenon in this paper isn't even all that new by the author's own admission; that's how it got to the fifty-third revision. The tier 2 setup described here is purely speculative. Producing a single square centimeter of pure "fluorographane" (sic?) is still a task that would be exceedingly challenging for a research lab. And it's not clear how much energy it would take to read and write the data, or to support the hardware necessary to do it at a speed that makes it uniquely useful. Even if all of these problems are solved, and the cost is made reasonable, it's still completely unclear whether it would be substantially better than what we have today.
> all of the other tech we have will have continued to progress
These other techs also came from somewhere, often brewing slowly for decades until the conditions aligned for a commercial success. This one may or may not be one of those slow-brewing techs of the future, suddenly displacing the SOTA of the (future) day.
Hell, the steam turbine was in principle invented in like 300 AD, and first commercialized (or, rather, militarized) by the end of the 19th century. Leonardo da Vinci invented the helicopter, the submarine, and the tank, in principle; it took about 500 years for them to become large commercial and military successes.
It feels a little disjointed to compare old tech. Computing tech iteration cycles and adoption rates seem more interesting than things at the dawn of communications technology.
Communication technologies have been evolving for billions of years
Amazing how technology existed before sentient life!
It doesn't take long to commercialize feasible new tech in this day and age. If someone invented an electromagnetic hovercar tomorrow, it would be available for sale next week, and regulations would follow after.
Waymo has cars that drive themselves and are dramatically safer than people in most conditions and yet they're only in select cities.
Do you just think Google hates money, or does this only work for hover cars
> Waymo has cars that drive themselves
With the help of “remote assistance”, that is. Which is probably one of the reasons for the limited rollout.
Perhaps you could clarify your definition of "remote assistance", or describe scenarios.
It’s been widely reported, and there were US Senate hearings about it. See e.g.: https://philkoopman.substack.com/p/waymo-tap-dances-about-re...
I don't know the costs and logistics of such an operation. Maybe you do?
> It doesn't take long to commercialize feasible new tech
“Feasible” is doing some heavy lifting there. The whole point of the comment you replied to is that it can take a long time for some new physical technique to become commercially feasible.
What advantage would hovering have?
No street infrastructure needed to drive anywhere (kinda).
Ok, and where does the energy to consistently keep a weight in the air come from and is it really worth spending?
I know flying cars are some sort of futuristic trope, yet I cringe every time I see it. They always assume magical infinite power. In the real world, the reason we do not have flying cars is the same as why you don't use a drone as a coat hanger at home: it is just more practical to use a mechanical solution that holds your coat for infinite time without any energy use or noise/heat emissions, and it is much cheaper.
Lifting stuff against gravity is not free, but a piece of wood, a brick or a rubber wheel does a pretty good job at it. One way to do it is magnets, but that means you need even more complicated roads.
We are living on a warming planet where only the naive and the evil pretend that energy use is something only the poor have to think about. We all have to think about it.
My entire answer exists in the context of a hypothetical; we were not discussing the realism of it.
It was in the context of if someone would invent this technology. To which then the question was what advantage does this have.
Now going and posing the question whether this is realistic or feasible makes this argumentation circular.
- [deleted]
What do you mean “we don’t have flying cars”? What are helicopters then?
Deathmachines that in their mechanical hubris angered the gods?
>I know flying cars are some sort of futuristic trope, yet I cringe at it every time I see it. They always assume magical infinite power.
No, they assume magical anti-gravity technology. "magical infinite power" implies they're basically a hovercraft, forcing air downwards to hover. Without a shroud, even with infinite energy available, this means constantly blasting high-speed air all around the vehicle, which has some really obvious practical problems. It works for drones because they're small and lightweight and not near the ground and not even that close to each other.
>Lifting stuff against gravity is not free
It's close to free when you have magical anti-gravity technology. Similarly, traveling to other star systems hundreds of lightyears away in a couple days isn't so hard when you have magical FTL propulsion technology that somehow warps spacetime.
Smoother ride, no need for wheels so no road friction and fewer parts that wear, no need for shock absorbers either, no need for roads cleared of snow and ice, which would make them both more practical and safer... if we're talking Star Trek hovering, not rotor-blade / hovercraft noisy shit with rotating parts that waste a ton of energy.
Aha, and which of the fundamental forces in our universe would it work with?
You asked what advantage it would have over rolling rubber, not how one would do it (you wouldn't, with the current understanding of physics and energy density/portability). At any rate, a vehicle like that is still in the realm of scifi.
Yes, but it collides with our understanding of physics as well. Floating anything with significant weight within an air atmosphere requires constant power; you will at least have to produce an upwards force that is equivalent to the downwards force. Depending on how efficiently that force is transferred, you may need much more. A wheel made of rubber or steel (trains are freakishly efficient!) gives you that much more cheaply.
Now theoretically one could envision some energy form that is so abundant it doesn't matter anymore that you constantly fight gravity, sure. But what most people seem to imagine is some magical tech that decouples the vehicle from the force of gravity, while still coupling it to the planet (or whatever the next relevant frame of reference is) somehow. This kind of magical tech makes sense in films or scifi books, but if we just collect together what it would need to be, it is hard to envision any actual potential mechanism short of "we live in a Matrix and we learn to control the program".
That's a good question. When (if) we figure out how to practically travel at FTL speeds with a "warp drive", we might figure out the answer to this question too.
To be honest I think FTL is likelier than magical "sticks you to a fixed point in space relative to a rotating planet"-technology.
Sure, you can do that with pushing air and a global positioning system, so if we eventually invent an anti-gravity drive or something, that may be used for the same thing. But whether such an entirely fictional device could then be made to (1) fit into a car-sized vehicle, (2) be powered by whatever the most powerful mobile energy source is at that time, and (3) become affordable to anyone outside of the 0.01% is another question.
>To be honest I think FTL is likelier than magical "sticks you to a fixed point in space relative to a rotating planet"-technology.
I disagree. I'm no physicist, but given how gravity seems to be related to the structure of spacetime according to Einsteinian physics, and a lot of FTL ideas seem to center on the idea of "warping" spacetime, I suspect the two are highly related, and if FTL is possible at all, it'll be also related to artificial gravity.
The only technologies that are commercialised quickly today are the ones that can be commercialised quickly. The ones that can't won't be for decades yet.
In short, if a tech takes 40 years to be commercialised it would have been invented some time in the 80s.
> who cares if it can store an exabyte if it takes all month to read it
To be fair, if I'm reading an exabyte in a month, my hardware's pushing >3 Tbps, which I'd be very happy with.
Plus just put 32 in a striping RAID if you really need to read an exabyte a day.
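The arithmetic in this sub-thread can be sanity-checked in a few lines (the figures are just the round numbers from the comments above, not from any real device spec):

```python
# Sustained bandwidth needed to read 1 EB in a 30-day month,
# and the stripe width needed to finish in a day instead.
EXABYTE = 10**18           # bytes
MONTH = 30 * 24 * 3600     # seconds in a 30-day month
DAY = 24 * 3600

bytes_per_sec = EXABYTE / MONTH
gb_per_sec = bytes_per_sec / 1e9      # gigabytes per second
tbps = bytes_per_sec * 8 / 1e12       # terabits per second
stripe_width = MONTH / DAY            # devices needed for 1 EB/day

print(f"1 EB/month = {gb_per_sec:.0f} GB/s = {tbps:.1f} Tbps")
print(f"devices striped for 1 EB/day: {stripe_width:.0f}")
```

That works out to roughly 386 GB/s (about 3.1 Tbps) of sustained throughput, and 30 devices striped in parallel to cover an exabyte per day, which is where the "32" above comes from once rounded up.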
Eh, that doesn't math out. It's the bandwidth per storage density (or ultimately per price) that matters.
If you have great cost per byte but your bandwidth per byte is bad enough that the price per byte doesn't make up for it then you have an issue.
They've started making hard drives with multiple heads because of this issue; they increased density to the point where it's not useful to continue adding density if it doesn't come with more bandwidth.
*RAED
Or maybe RAEND
RAVED is more likely. These things aren't cheap.
What is RAVED?
I read it as "Redundant Array of Very Expensive Disks".
But if you need 1 EB, waiting a whole month for it isn't great. You'd be better off with 720 1 PB devices taking an hour in parallel.
Yes it causes problems in this increasingly narrow situation.
Massive storage that takes a month to fully read is acceptable in a wide variety of use cases. If it's cheaper than hard drives it'll get a huge amount of users.
It's notable that 'time to read/write entire device' has been creeping up for any storage device you can buy off the shelf for the past ~40 years.
Reading a floppy disk took around 30 secs for example. A whole CD took 5 mins. My whole 1TB SSD takes 10 mins.
A modern hard drive (36TB @ 280MB/s) can take more than a day. If you treat a bank of tapes as one device this can get even more extreme.
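The trend in those examples is just capacity divided by sustained throughput; a quick sketch (the throughput figures are my own rough assumptions matching the comments above, not measured specs):

```python
# Full-device read time = capacity / sustained throughput,
# using nominal (assumed) figures for each generation of device.
devices = {
    "floppy (1.44 MB @ 45 KB/s)": (1.44e6, 45e3),
    "CD (700 MB @ 2.4 MB/s)": (700e6, 2.4e6),
    "SSD (1 TB @ 1.7 GB/s)": (1e12, 1.7e9),
    "HDD (36 TB @ 280 MB/s)": (36e12, 280e6),
}

read_seconds = {name: cap / rate for name, (cap, rate) in devices.items()}
for name, s in read_seconds.items():
    print(f"{name}: {s / 60:.1f} min")
```

The floppy comes out around half a minute and the 36 TB drive around 36 hours, so "time to read the whole device" has grown by roughly three orders of magnitude over those 40 years.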
Interesting, this is my first time consciously thinking about this trend.
Perhaps the needs for read/write speed are bounded (before processor, etc. becomes the limiting factor), while more capacity is only limited by price. Or maybe increasing density of storage inherently means a tradeoff with I/O speed (AFAIK, NAND flash needs to rewrite lots of data just to make a single write? Atom-scale interactions have side effects)
In long term archival use cases this is less of an issue. Especially if it’s many exabytes we’re talking about, needing to be stored for decades.
But I 100% agree with your main point about possibility vs productionisation.
Well, yeah. It takes a heck of a long time to pull something out of the lab, let alone theory, into the real world, and there's a ton of ways that it can die along the way. But you do need people to be pursuing these things to actually get something into production, else there really would never be any progress. To me this reaction feels like a bit of a misunderstanding about why it's worth discussing these ideas at all: it's not meant to be a forecast of where technology is definitely going in the future, it's a potential direction that some people think is worth pursuing, and even if the odds are low for any given idea it doesn't make them worthless. (I've worked for nearly 10 years to turn something that 'worked in the lab' when I joined into an actual product, for example, and it's still not quite standing on its own feet in production yet.)
I'm not familiar enough with the space to know how this idea rates compared to alternative options at similar levels of development: the density is obviously extreme (but probably not the biggest advantage), and it makes sense to me that the underlying physics could work robustly, but the practicalities of how you read and write seem pretty difficult (and I think the paper kind of glosses over this: read caching and defect mapping could be trickier than it implied. Accessing the tape from both sides also seems like it will make the engineering more difficult).
I have no idea if this is practical but I remember when flash memory was this suspicious semi-science fiction thing too. There are probably some people on this site that remember the same for DRAM. There have been loads of things in between that didn't make it. Some of them were semi-crackpot, some actually went into production like bubble memory and Optane. Few of them have met the sweet spot of the market in a way that let them move from a niche to a dominant form of memory, but still I wouldn't discount that it's possible to invent a new form of memory that will take over the world!
Most kinds of memory devices are based on old principles of making a memory device, which are applied to new materials.
I do not think that any new memory-device principles have been invented since WWII. Already by 1940, the inventor of DRAM, John Vincent Atanasoff, had enumerated almost all principles that can be used to make a memory device.
The first DRAM of Atanasoff was made with discrete capacitors; five years later, von Neumann proposed to use iconoscope cathode-ray tubes instead, which were used for a few years before being replaced by magnetic-core memories. The Intel company was formed for the commercialization of the first (1-kbit) DRAM integrated circuit made with MOS transistors.
The memory described in TFA is in principle equivalent to a memory made with mechanical toggle switches, or latching relays with mechanical latching, where the two stable states are maintained by elastic forces and you can toggle the state if you apply a great enough force to the switch.
Reducing a mechanical bistable device to the size of a few atoms reaches the possible limit of memory density. As described in the parent article, this device should be able to store information safely and to switch its state quickly.
The difficulties are not in the memory cell itself, but in how to enable fast and accurate reading and writing. While the memory cell itself may have the minimum size permitted by the atomic structure, there is no way to miniaturize to the same extent any kind of reading and writing interfaces, so that they could be incorporated in the memory cell, like in an SRAM cell.
Therefore the only solution that can preserve the high cell density is to have a read/write head that is shared by a great number of cells, i.e. which must be moved in order to access different cells.
So the memory, at least within some block, must have mechanical access, so it must be implemented as a tape or a disc. Multiple heads could be used to increase the read/write speed, like also for magnetic memories.
So I do not think that there is much to criticize in this paper; it makes sense and it identifies a new material that is suitable for implementing a known kind of memory cell at an atomic scale, even if it is unlikely that a practical memory based on this concept will become possible any time soon.
Microsoft has worked for many years on their glass memory devices, which have much more important advantages, and they are still far from being able to sell such devices, mainly due to the cost of the required lasers. There is a chicken-and-egg problem: the lasers are very expensive because they are produced in very small quantities, and they cannot be incorporated in a device intended for mass production because they are too expensive.
>Microsoft has worked for many years on their glass memory devices, which have much more important advantages, and they are still far from being able to sell such devices, mainly due to the cost of the required lasers, for which there is a chicken-and-egg problem
So what's the deal here? I've tried reading about these devices, but MS's web pages are a little sparse on info and nothing's changed much in years. I guess the lasers used in BluRay burners aren't powerful enough?
> You probably don't want to have to need a separate device to read and a device to write.
I don’t think this would bother the average enterprise in the least. We used to have entire rooms dedicated to tape libraries that housed dozens of tape drives and thousands of tapes each.
The read and write speed are absolutely critical but having to utilize multiple devices isn’t anything new at all.
Used to? We absolutely still do. LTO is a widely used format, and as far as I'm aware, it is "picking up more steam" each year.
I didn’t mean to imply that tape is dead despite the 40 years of insert new technology claiming they’ve finally killed tape.
I meant more that we no longer have room-sized libraries, unless the cloud providers have commissioned something custom and not available to the public. I believe the last installed PowderHorn I'm aware of was decommissioned almost a decade ago now.
https://www.iscgroupllc.com/products/storagetek/storagetek-p...
In terms of capacity, LTO sales are increasing. In terms of tape count and drive count, there's been a steady decline.
I don't think there are public numbers. No doubt IBM knows. I do expect that trend to reverse this year if true.
https://www.lto.org/2025/07/lto-tape-technology-shipments-sc...
https://www.theregister.com/2025/07/23/lto_2024_tape_shipmen...
Unless something is wrong with these numbers, it's simple enough to do rough math to get tape count and compare with historical numbers.
Claims of 5.7 or 5.8 million drives over the lifetime of the format can also be compared to older data to see a slowdown.
- [deleted]
It doubles design, development, and manufacturing cost, potentially doubling your supply chain. It's not a problem for the consumer.
Basically you just ignore the hyped-up press releases; they accompany most semi-cool/exciting papers. The scientists probably know this isn't going to be some new storage medium that will become widespread, but it's just part of the game to sell the story like this, and the administration wants this.
> who cares if it can store an exabyte if it takes all month to read it

> You probably don't want to have to need a separate device to read and a device to write
Are you only thinking about home consumer applications?
I’m not sure what the GP is thinking, but I would love a cheap-ish exabyte storage even if it takes a month to read fully. Damn, I’d gladly take it even if the speed is comparable to an SSD! (Though the price would be a question of course.)
> Every year or so there's a new article about some new spectacular storage medium. Crystals, graphene, lasers, quartz, holograms, whatever. It never materializes.
Of course, wouldn't you expect that for a fairly mature technology you'd get tons of false starts from competing tech before eventually getting one breakthrough that completely changed everything? I mean, you could have written a comment perfectly analogous to your paragraph above about how AI and neural networks never really amounted to much for about 50-60 years until, all of a sudden, they did (and even if you think AI may currently be overhyped, it's undeniable that in the past 5 years AI has had an effect on society probably much greater than all the previous history of AI put together).
I prefer to read this academic paper as "Oh, this is a really interesting approach, I wonder what its limitations are" vs. interpreting it as a "this new storage tech will change the world!!!" announcement. I feel like the first approach leads to generally more curiosity, while the second just leads to cynicism and jadedness.
In fairness, I assume any headline that emphasizes some excessively large storage density is probably at best something useful for archiving and not a replacement for an SSD. If they were targeting latency they would lead with those numbers, not the density.
Very large, fast, read-only memory now has an incredible use-case: NN weights.
And what's described in this article is the opposite of fast! All of these technologies are
I remember my father showing me one of those articles when I was a kid, about a postage-stamp-sized, thin and lightweight new memory system. I remember we were as doubtful then as you are now. A few years later I remembered that moment while switching the micro SD card of a camera… Sometimes these breakthroughs turn out to be exactly as they are told.
I remember reading the same stories about 5D optical storage. In 1996. It's still the same vaporware.
Flash, on the other hand, had made steady incremental progress from the time it was first described until it was fully commercialized.
The fact that most of the world's data is still stored on little spinny disks, considering how many times in the last 40 years we've seen this story, is criminal.
Aren't lasers driving the current 32TB+ HDD tech?
yeah but that wasn't a straight upgrade, either. HAMR has all sorts of tradeoffs.
1 exabyte/month is 380GB/s, which would be pretty epic in my opinion!
Every article like this there is someone that points this out. Not hard to do but sure is reliable.
And the failure of the technologies to deliver is equally reliable!
The hard work would be maintaining a database of ideas which were similarly hyped over the past (say) couple centuries - including details on if/when each idea worked out, or fell out of hype-space, or was proven useless.
From that, you might be able to draw useful conclusions. Well...you'd also need correction factors for how profitable the hype itself was, over time, in the various scientific & technical fields.
The business model would be selling db access to VC's, R&D managers, and other folks making decisions about real money.