This is basically a PyTorch library for executing computations whose dynamic range exceeds float64's limits, including on GPUs.
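For context, the failure mode in plain PyTorch is easy to reproduce: once a value leaves float64's range (roughly 1e-308 to 1.8e308), it collapses to inf or 0 and the information is gone for good.

    import torch

    x = torch.tensor(1e200, dtype=torch.float64)
    print(x * x)             # tensor(inf): 1e400 exceeds float64's ~1.8e308 max
    print(torch.log(x * x))  # tensor(inf): too late to take logs afterwards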
I can see how it would be useful when ordinary floats run out of headroom. Thank you for sharing it on HN.
I tried the sample code for estimating Lyapunov exponents in parallel. It worked on the first try and was much faster than existing methods, as advertised. It's nice to come across something that just works out of the box!
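For anyone curious what that computation looks like, here's a rough plain-PyTorch baseline for the simplest (scalar) case, the logistic map, batched over a million parameter values at once. The names and details are mine, not the library's; for 1-D maps you can sidestep overflow by summing log-derivatives along the orbit, which is exactly the trick that stops working once the Jacobians are matrices:

    import torch

    def lyapunov_logistic(r, n_steps=10_000):
        # Largest Lyapunov exponent of x -> r*x*(1-x) for a batch of r values.
        # Summing log|f'(x)| keeps everything in float64 range; multiplying
        # the derivatives directly would overflow long before n_steps.
        x = torch.full_like(r, 0.5)
        log_sum = torch.zeros_like(r)
        for _ in range(n_steps):
            x = r * x * (1 - x)
            log_sum += torch.log(torch.abs(r * (1 - 2 * x)) + 1e-300)
        return log_sum / n_steps

    r = torch.linspace(2.5, 4.0, 1_000_000, dtype=torch.float64)
    lam = lyapunov_logistic(r)   # lam > 0 flags the chaotic parameter values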
The high-dynamic-range RNN stuff may be interesting to others, but it's not for me. In my book, Transformers have won. Nowadays it's so easy to whip up a small Transformer with a few lines of Python (see the sketch below), and it will work well on just about anything you throw at it.
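To be concrete, something like this (an untested sketch from standard torch.nn pieces; a real model would add positional encodings and a causal mask):

    import torch.nn as nn

    class TinyTransformer(nn.Module):
        def __init__(self, vocab=256, d_model=128, n_head=4, n_layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_head,
                                               dim_feedforward=4 * d_model,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, vocab)

        def forward(self, tokens):   # tokens: (batch, seq) of token ids
            return self.head(self.encoder(self.embed(tokens)))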
To the best of our knowledge, this is the first time anyone has successfully trained a non-diagonal RNN computed in parallel, via prefix scan, without requiring any form of stabilization. We abstained from claiming as much out of an abundance of caution.
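Concretely, the recurrence is h_t = A_t h_{t-1} + b_t with dense (non-diagonal) A_t. A schematic of the parallel evaluation in plain PyTorch, as a Hillis-Steele scan over affine maps (illustrative only, not our actual implementation):

    import torch

    def affine_scan(A, b):
        # Inclusive prefix scan over affine maps h -> A[t] @ h + b[t],
        # in O(log T) rounds of batched matmuls. Afterwards each (A[t], b[t])
        # is the composition of steps 0..t, so h_t = A[t] @ h_0 + b[t].
        A, b = A.clone(), b.clone()
        offset, T = 1, A.shape[0]
        while offset < T:
            # Compose step t with the partial result ending at t - offset.
            A_new = A[offset:] @ A[:-offset]
            b_new = torch.einsum('tij,tj->ti', A[offset:], b[:-offset]) + b[offset:]
            A = torch.cat([A[:offset], A_new])
            b = torch.cat([b[:offset], b_new])
            offset *= 2
        return A, b

The catch is that cumulative products of dense A_t grow or shrink exponentially in T, so in plain float64 this overflows or underflows without some form of stabilization (normalization, a diagonal parameterization, etc.), which is the gap the extended dynamic range fills.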
Hmm... you may be right. I don't think I've seen that before either.