Semi-relatedly, this is why I'm quite liberal about throwing errors while developing programs. During development it reminds me of my slipups when I inevitably forget some invariant that needed to be upheld, and towards the end of the process any remaining throws are easily found and vetted. And in production, the closer to the source the error occurs, the easier it is to fix; if you're especially unlucky, the error doesn't surface until after state has been persisted and read back, in which case you not only need to fix the bug but also have to figure out how to handle and/or remediate the bogus state written to your database.
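The fail-fast style described above might look like this minimal sketch (the `Order` type and field names are hypothetical, just for illustration):

```python
from dataclasses import dataclass


@dataclass
class Order:
    quantity: int
    unit_price_cents: int


def total_cents(order: Order) -> int:
    # Fail fast: surface the broken invariant at the point of computation,
    # long before a bogus total could be persisted and read back later.
    if order.quantity <= 0:
        raise ValueError(f"invariant violated: quantity must be positive, got {order.quantity}")
    if order.unit_price_cents < 0:
        raise ValueError(f"invariant violated: negative unit price {order.unit_price_cents}")
    return order.quantity * order.unit_price_cents
```

The point is that the throw happens where the invariant breaks, not three layers away after the bad value has been written somewhere.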
I used to code this way, then I started writing tests as I wrote my code. I could ensure the behavior was expected with the test. If the code wasn't easily testable, I would refactor it right away, and that generally left me with code that was easier to read and follow as well. The end benefit being that if I wanted to change the behavior later, I now knew if I was breaking anything. With everything tested, very little made it to production that wasn't working.
It's not about being easily testable; it's about ending up in an invalid state for an unexpected reason, and you can't test for that trivially.
You can try to avoid it through architecture or abstractions, but that only works to some extent.
Testing isn't trivial; it can be downright difficult. But if you test for those invalid states, the reasons become expected ones.
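Testing for an invalid state, as suggested above, can be sketched like this (the `Cart` class and `set_quantity` method are hypothetical, just to show the shape of such a test):

```python
class Cart:
    """Hypothetical example: an object that refuses to enter an invalid state."""

    def __init__(self) -> None:
        self.quantity = 1

    def set_quantity(self, quantity: int) -> None:
        # Guard the invariant at the boundary so the invalid state
        # never exists, instead of hoping every caller behaves.
        if quantity <= 0:
            raise ValueError(f"quantity must be positive, got {quantity}")
        self.quantity = quantity


def test_rejects_invalid_quantity() -> None:
    cart = Cart()
    try:
        cart.set_quantity(0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for quantity=0")
    # The failed update must not corrupt existing state.
    assert cart.quantity == 1


test_rejects_invalid_quantity()
```

Once a test like this exists, hitting that invalid state is no longer an unexpected reason; it's a documented, checked behavior.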