My dad (an engineer, not a mathematician) would use Newton-Raphson[1] to solve basically any problem that wasn’t very obviously linear. Some of my earliest programming memories are of my dad getting me and my brother to implement Newton-Raphson in BASIC on an HP-85, getting me to implement Newton-Raphson in RPN on an HP calculator, and debugging my dad’s (genuinely revolting) BASIC program[2] which wouldn’t run but (who would have guessed?) used Newton-Raphson to compute something or other.
He basically learned the one numerical root-finder, plus how to evaluate basic second derivatives, and he was set for life on all the problems a career in chemical and process engineering could throw at someone.
[1] https://sheffield.ac.uk/media/31988/download?attachment
[2] He learned to program in FORTRAN and lived by the maxim that a determined FORTRAN programmer can write FORTRAN in any language.
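For anyone who hasn’t run into it, the whole method is one line of iteration, x ← x − f(x)/f′(x). A minimal Python sketch (the square-root example is mine, not from the thread):

```python
def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=50):
    """Find a root of f near x0 by iterating x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / fprime(x)
    raise RuntimeError("Newton-Raphson did not converge")

# sqrt(2) as the positive root of x^2 - 2
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

The tolerance and iteration cap are arbitrary choices; a production version would also guard against f′(x) near zero.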
The most brilliant software developer (an EE PhD) I have ever worked with has used the singular value decomposition (SVD) to solve an enormous number of linear algebra and numerical computing problems in engineering software. The SVD turns out to be useful for a great many engineering computations if you know how to apply it.
It’s the optimal low-rank approximation of a matrix in the 2-norm (and Frobenius-norm) sense, it has those beautiful orthogonal factors on both sides, and it is often quite easy to compute. Why wouldn’t you use it, right? If you are going to have one trick, truncated SVD is a good pick.
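For concreteness, the one-trick version really is just "compute the SVD, keep the top k triplets" (Eckart-Young says that's the optimal rank-k approximation in both norms). A numpy sketch with made-up data:

```python
import numpy as np

def truncated_svd(A, k):
    """Best rank-k approximation of A (Eckart-Young: optimal in both the
    2-norm and the Frobenius norm), built from the top k singular triplets."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# random test matrix; the spectral-norm error of the rank-10 approximation
# should equal the 11th singular value exactly
A = np.random.default_rng(0).standard_normal((50, 30))
A10 = truncated_svd(A, 10)
```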
Does he like Halko, Martinsson, and Tropp’s randomized SVD? It is pretty slick.
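For anyone who hasn't seen it, the core of the Halko-Martinsson-Tropp idea fits in a few lines of numpy. This is a bare-bones sketch without the power iterations the paper also recommends:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=None):
    """Randomized SVD in the spirit of Halko, Martinsson, and Tropp:
    sketch the range of A with a random test matrix, orthonormalize,
    then take an exact SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for the sampled range
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

# on an exactly rank-5 matrix the sketch recovers the SVD to machine precision
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 30))
U, s, Vt = randomized_svd(A, 5, seed=2)
```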
What are some interesting things he has used the SVD to solve?
Model reduction (for control systems), numerical precision control for matrix operations, eigenvalue computation, etc. as far as I understand.
My dad was also an Engineer. He was also a Fortran fan.
Once he asked me to explain OOP. After I explained the basics he said it was useless and never looked back.
Where OOP shines is implementing user interfaces. Most engineers doing math can ignore objects. But when a programmer has to implement a partitionable window with scroll bars and menus, constructing complicated objects built of simpler objects is a mess without OOP.
Get your dad a copy of the classic Scientific and Engineering C++: An Introduction With Advanced Techniques and Examples by John Barton and Lee Nackman. It was written to introduce C++ to Fortran programmers using examples from scientific/mathematical domains. The fact that it is old (from 1994) makes it better suited for folks from Fortran (or other languages) since there is none of the complexity of "Modern C++" to confuse them. Check reviews on Amazon etc.
Unfortunately he has left our presence and is now probably getting to grips with the universe being mostly hacked together with Perl.
Nope ;-) He would tell us that the world is a simulation implemented using Fortran Coarray SPMD.
I guess as a typical engineer (not the CS or software type of engineer) it is easy to think that. One might be working with machines or buildings and so on, all of which require _calculation_ of processes. Those are typical cases for "just write a correct function", possibly one that takes many things into account. For such scenarios OOP is truly useless and only over-complicates the matter. However, when we get to simulations, where maybe there is no known formula, or the precise calculation would be too expensive, then OOP can make sense. It doesn't have to, but it can.
OOP is for problems that require complex modeling; if you require just complex calculation, it is indeed useless.
Seems like a common theme: every veteran dresses the way they did in their prime for the rest of their lives, listens to the same music, watches the same movies, etc., and keeps the same belief systems as well. On the one hand, if it worked for them, why not? There's no incentive to change. Heck, it is very much the definition of conservatism. Old men who don't change are so common that it borders on proverbial.
Very rarely, however, do you see a brilliant mind like Richard Feynman, a man who was open to new ideas and out-of-the-box thinking even in old age. Seeing someone, in good faith, question what they believe in light of new knowledge is very rare. Now that is a special thing.
Chad Dad
Fortran has OO features these days. It is nice.
I am kind of dumb and old-school, so I wrote a bunch of code using macros to handle multiple precisions. If I could go back in time, I’d definitely just use object-oriented code. In either case, though, “we can try a mixed-precision implementation, I automatically generated single precision” is an incredibly liberating thing to be able to say as a computational scientist!
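For a feel of the alternative to macros, here is a hypothetical Python/numpy sketch where the working precision is just a parameter (my illustration, not the commenter's actual code):

```python
import numpy as np

def polyval_in(coeffs, x, dtype=np.float64):
    """Horner evaluation with the working precision passed in as a parameter
    rather than baked in at build time: one code path, many precisions."""
    acc = dtype(0)
    x = dtype(x)
    for c in coeffs:
        acc = acc * x + dtype(c)
    return acc

# p(x) = x^2 - 3x + 2 evaluated near the root cluster at x = 1
single = polyval_in([1, -3, 2], 1.0001, np.float32)
double = polyval_in([1, -3, 2], 1.0001, np.float64)
```

Swapping `np.float32` for `np.float64` (or a software quad type with the same interface) is then a one-argument change instead of a recompile.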
Relatedly, I've found Newton-Raphson is a great example of an algorithm where Knuth's "I have only proven it correct, not tried it" rears its head very prominently. The obvious implementations can work flawlessly on toy examples and then fail miserably on real-world examples.
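A concrete failure mode: the textbook cubic f(x) = x^3 - 2x + 2 has a real root near -1.77, but the unsafeguarded iteration started at 0 never finds it; it just bounces between two points forever:

```python
def newton(f, fprime, x0, steps=6):
    """Plain Newton iteration with no safeguards, recording each iterate."""
    x = x0
    history = [x]
    for _ in range(steps):
        x = x - f(x) / fprime(x)
        history.append(x)
    return history

# starting at x0 = 0 the iterates cycle 0 -> 1 -> 0 -> 1 -> ...
cycle = newton(lambda x: x ** 3 - 2 * x + 2, lambda x: 3 * x ** 2 - 2, 0.0)
# cycle == [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```

This is why practical root-finders wrap Newton steps in bracketing or line-search safeguards.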
There's a reason why numerical analysis is still actively studied by research mathematicians. If we could just throw something as simple as Newton's method at any nonlinear problem, we'd only need people to learn it once in school and everyone could solve everything.
But I'm not sure I'd recommend going into the field. There's something demoralizing about doing research on something which already has dozens of valid and successful methods, of which you are trying to create a slightly more optimized version.
>something which already has dozens of valid and successful methods, of which you are trying to create a slightly more optimized version
If you ever get a PhD, you'll find that this is pretty much all of academia.
I am so into optimizing fast polynomial multiplication I assure you there is nothing that will demoralize me from creating a slightly more optimized version.
It's not a field for everyone.
But that "slightly more optimized version" may mean "one that does not quietly produce disastrously incorrect results for some input values".
I should probably qualify my statement more. There are certainly new and interesting problems in numerics. And even going from, say, O(n^2) cost to O(n log n) can open up whole new classes of problems you can solve. I don't want to discourage anyone who loves numerics.
What I was trying to say was that as a graduate student you might be given a problem that already has many really good and smart solutions and be essentially told to find a better solution than all of these. How this goes will depend a lot on the specific problem, your advisor, etc.
Yeah for that you need Euler’s method… I mean of course Runge-Kutta. … By that I’m of course referring to rk4. … I mean, you have a point.
Joking aside, I think having a few basic numerical methods (say Newton, RK4, Brent's root-finder, Monte Carlo simulation) in your general toolbox of techniques you know how to apply can make you unreasonably effective in a wide range of situations. Just yesterday I had to solve a problem in a relatively small space, so I first used a brute-force method to check all the feasible solutions and, having got the full list of actual solutions out, figured out the analytical solution. It meant I could be very confident that my analytical solution was correct when I had it.
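As an example of one of those toolbox methods, the classic RK4 step is only a few lines. A self-contained Python sketch (the y' = y test problem is my own):

```python
import math

def rk4_step(f, t, y, h):
    """One classic fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# integrate y' = y from y(0) = 1 to t = 1 in ten steps; exact answer is e
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
```

With h = 0.1 the global error is already down around 1e-6, which is the O(h^4) convergence doing its job.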
If you know some basic composition methods, Euler, RK, and all the higher-order methods can be constructed as you need them. But there are still many applications where you want, for example, symplectic methods instead. If you know about those plus composition, you can solve more or less all of classical Newtonian dynamics. But if you go to quantum mechanics or field theory (or quantum field theory), you enter a whole other world of numerics before you get real-world use out of it.
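For a taste of the symplectic side, here is a plain-Python leapfrog (Störmer-Verlet, kick-drift-kick) sketch for separable Hamiltonians H = p^2/2 + V(q), with a harmonic-oscillator check of my own:

```python
def leapfrog(q, p, dVdq, dt, steps):
    """Symplectic leapfrog integrator for H = p^2/2 + V(q), unit mass.
    Kick-drift-kick form: half kick, alternating drifts and kicks, half kick."""
    p -= 0.5 * dt * dVdq(q)          # initial half kick
    for _ in range(steps - 1):
        q += dt * p                  # drift
        p -= dt * dVdq(q)            # full kick
    q += dt * p                      # final drift
    p -= 0.5 * dt * dVdq(q)          # closing half kick
    return q, p

# harmonic oscillator V(q) = q^2/2: energy (p^2 + q^2)/2 starts at 0.5
# and, being symplectic, the scheme keeps it bounded near 0.5 forever
q, p = leapfrog(1.0, 0.0, lambda q: q, 0.01, 628)
energy = 0.5 * (p * p + q * q)
```

Unlike RK4, whose energy error drifts secularly over long integrations, the leapfrog energy error just oscillates at O(dt^2), which is exactly why it's the default for orbital and molecular dynamics.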
PDEs are PDEs, regardless of whether they come from Newtonian or quantum mechanics. Would you care to elaborate on why you think quantum requires a new kind of numerical analysis?
For field theory, you can still go some way using normal finite-difference approaches, but you have entered a huge can of worms regarding stability. For quantum physics, the problem starts well before you even get to writing a solver, since (at least for QFT) you are actually dealing with operator-valued distributions rather than normal fields - and that in extremely high (even infinite) dimensional spaces. That means you actually need to solve a path integral instead of PDEs if you want to do any sort of actual numerics, which comes with its very own can of worms. And even if the numeric discretisation is at least mathematically valid, you still need to solve the damn thing over a huge configuration space (depending on your lattice size). Even with purely statistical methods and modern supercomputing, you're quickly running into the limit of what can be achieved in reasonable time for comparatively simple systems. But nobody in e.g. lattice QCD uses normal PDE solvers.
I fear not the man who practiced 1000 kicks, but the man who practiced one kick 1000 times.
This applies to musicians. E.g. early blues guitarists only had a few licks but they played them so many times they honed them into perfection.
I would also toss differential evolution in the ring as a widely applicable and easy to implement technique.
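A bare-bones DE/rand/1/bin sketch in plain Python, with all the usual knobs (population size, F, CR) picked arbitrarily for illustration:

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomial crossover with the current member, greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)   # guarantee at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            s = f(trial)
            if s <= scores[i]:           # greedy: keep the better of the two
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# minimize a shifted sphere; the optimum is at (1, -2)
x, fx = differential_evolution(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                               [(-5, 5), (-5, 5)])
```

The appeal is exactly what the comment says: no derivatives, no line searches, just vector arithmetic and comparisons, so it drops onto almost any black-box objective.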
Nobody has enough memory or patience for third-order derivatives, so Newton's method (aka Newton-Raphson) it is.
The modern world of autodiff actually makes real second and third order derivatives fairly cheap to compute.
The stuff that is actually used most commonly only needs first-order derivatives, though (gradient descent, Levenberg-Marquardt, Kalman filters...).
I could be wrong, but "memory" and "patience" sounded like they were referring to machine memory and patience waiting for a slow algorithm, which is what you would expect from any derivative more involved than a Jacobian for nontrivial problems, even when doing tricks like vjp or vhp.
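As a toy illustration of the point about cheap exact second derivatives: forward-mode AD with hyper-dual numbers gives f, f', and f'' in a single pass, no finite-difference cancellation. A plain-Python sketch with only + and * implemented:

```python
class HyperDual:
    """Hyper-dual number a + b*e1 + c*e2 + d*e1*e2 with e1^2 = e2^2 = 0.
    Seeding x as HyperDual(x, 1, 1, 0) makes f(x) carry the exact first
    derivative in the e1 part and the exact second derivative in e1*e2."""
    def __init__(self, f, f1=0.0, f2=0.0, f12=0.0):
        self.f, self.f1, self.f2, self.f12 = f, f1, f2, f12
    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f + o.f, self.f1 + o.f1,
                         self.f2 + o.f2, self.f12 + o.f12)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f * o.f,
                         self.f * o.f1 + self.f1 * o.f,
                         self.f * o.f2 + self.f2 * o.f,
                         self.f * o.f12 + self.f1 * o.f2
                         + self.f2 * o.f1 + self.f12 * o.f)
    __rmul__ = __mul__

def value_d1_d2(f, x):
    """Evaluate f(x), f'(x), f''(x) exactly in one forward pass."""
    r = f(HyperDual(x, 1.0, 1.0, 0.0))
    return r.f, r.f1, r.f12

# f(x) = x^3 at x = 2: value 8, derivative 3x^2 = 12, second derivative 6x = 12
v, d1, d2 = value_d1_d2(lambda x: x * x * x, 2.0)
# v, d1, d2 == 8.0, 12.0, 12.0
```

The catch, as the memory comment suggests, is that the payload grows with the derivative order and input dimension, which is why the first-order reverse-mode tricks dominate in practice.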