I find infinitesimals more intuitive than the formal 'limits-based' approach. I'm currently studying my old degree material but using a fairly interesting book:
"Full Frontal Calculus: An Infinitesimal Approach" by Seth Braver.
I like his readable style. His poetic intro finally gave me an intuition for why infinitesimals might be useful, compared to the good old reals:
"Yet, by developing a "calculus of infinitesimals" (as it was known for two centuries), mathematicians got great insight into `real` functions, breaking through the static algebraic ice shelf to reach a flowing world of motion below, changing and evolving in time."
If you are going to use infinitesimals, though, the notation needs some additional work. The standard notation for higher-order derivatives (and partial derivatives) has to be modified in order for them to work (but they do work great once you do this).
Instead of the second derivative being "d^2y/dx^2", it is "(d^2y/dx^2) - (dy/dx)(d^2x/dx^2)", and the differentials can then be manipulated just like any other entity. Additionally, you can derive this notation by simply applying the quotient rule to the first derivative (which is itself a quotient of infinitesimals).
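As a sketch of that derivation: applying the quotient rule to dy/dx (with u = dy, v = dx, treating both as algebraic quantities) and then dividing by dx gives

```latex
\frac{d\!\left(\frac{dy}{dx}\right)}{dx}
  = \frac{dx\,d^{2}y - dy\,d^{2}x}{dx^{2}} \cdot \frac{1}{dx}
  = \frac{d^{2}y}{dx^{2}} - \frac{dy}{dx}\,\frac{d^{2}x}{dx^{2}}
```

which is exactly the corrected second-derivative form above.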
See more:
"Extending the Algebraic Manipulability of Differentials" ( 10.48550/arXiv.1801.09553 )
"Total and Partial Differentials as Algebraically Manipulable Entities" ( 10.48550/arXiv.2210.07958 )
You're the author of this paper? Jonathan Bartlett?
If so, I used your calculus textbook to pass calculus at WGU. I had passed calculus in high school and university a long time ago, but when I finally decided to finish my degree I had to take it again, and got to choose my own textbook. I liked yours best; I can see it sitting on my bookshelf right now.
https://www.amazon.com/Calculus-Ground-Jonathan-Laine-Bartle...
Indeed! I'm glad you enjoyed the book! I hope you wrote it a nice Amazon review :)
I did. Glad to know you saw it.
On the topic, do you know of any approaches to infinitesimals/differentials that take cotangents and pullbacks as primitives?
In practice, I always end up needing to work in cotangents, but deriving them is always roundabout in terms of the limit definition of pushforwards. Never found a nice way to swap which is primary and which is secondary, but it feels like there should be a clean view of it that way somewhere.
How I like to think about it is that given an expression with a derivative dy/dx, we can always insert an arbitrary variable s that varies with both x and y, so that we can obtain an ordinary quotient (dy/ds)/(dx/ds) by the chain rule, and manipulate it normally with no qualms about what it means. As you say, second (and higher) derivatives can be calculated with the quotient rule.
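A quick numerical sanity check of this parametric view (the curve x = s^2, y = s^3 is my own illustrative choice, not anything from the thread): the ordinary quotient (dy/ds)/(dx/ds) agrees with dy/dx computed directly from y as a function of x.

```python
# Parametrize x and y by a shared variable s: x = s**2, y = s**3 (s > 0),
# so y = x**1.5 and the textbook derivative is dy/dx = 1.5 * sqrt(x).
# By the chain rule, the plain quotient (dy/ds)/(dx/ds) should agree.
import math

def dx_ds(s): return 2 * s          # derivative of x = s**2
def dy_ds(s): return 3 * s ** 2     # derivative of y = s**3

s = 1.7
x = s ** 2
quotient = dy_ds(s) / dx_ds(s)      # (dy/ds) / (dx/ds)
direct = 1.5 * math.sqrt(x)         # dy/dx computed from y = x**1.5
print(abs(quotient - direct) < 1e-12)
```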
What I did in my book to keep everything algebraic but not introduce weird notation is just set the derivative equal to a variable. So, say m = dy/dx. Then, the second derivative is just dm/dx.
The advantage to the revised notation is that you can describe things that are difficult or impossible to describe in the other notation. For example, you can legitimately look at d^2y/d^2x (note the placement of the 2 on the denominator to see how this is different). This is a valid ratio under my system but invalid under the standard system (though I actually consider my system to be the standard system just with prior mistakes corrected).
My recollection from real analysis was that I liked sequential continuity a lot (https://en.wikipedia.org/wiki/Continuous_function#Sequences_...).
Sequences form a nice beginner-friendly monad (`bind` is the diagonal: the nth element of the nth sequence), and lifting a real function over a real sequence is just `fmap`! (This is the same notion of sequence that https://clash-lang.org/ uses for sequential circuits, but it skips the monad because circuits are first order.)
Convergent sequences are like ordered binary tree sets: they also form a monad, but one in a sub-category. Sequentially continuous functions are precisely those that are in the domain of the underlying functor! :)
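A minimal Python sketch of this sequence monad, modeling a sequence as a function from index to value; the names follow the comment above, though what I call `bind` below is really the monadic join (the diagonal), and the example sequences are my own illustrative choices.

```python
# A sequence is modeled as a function from index (int) to value (float).
def fmap(f, seq):
    # Lift a real function pointwise over a sequence.
    return lambda n: f(seq(n))

def bind(seq_of_seqs):
    # The diagonal: the nth element of the nth sequence (monadic join).
    return lambda n: seq_of_seqs(n)(n)

harmonic = lambda n: 1 / (n + 1)           # 1, 1/2, 1/3, ...
squared = fmap(lambda x: x * x, harmonic)  # 1, 1/4, 1/9, ...
diag = bind(lambda m: lambda n: m + n)     # diagonal of s_m(n) = m + n
print(abs(squared(2) - 1 / 9) < 1e-12, diag(3))
```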
It's a tradeoff. You can have excluded middle in your logic or infinitesimals in your extended reals. For mathematicians dealing with all the wild stuff coming out of studying infinities in the calculus, getting rid of excluded middle was a non-starter, so the system based on limits was created. If non-constructive proofs via contradiction aren't useful to you, as in physics, then you can certainly use infinitesimals.
That's interesting: I read something similar in Bell's 'A Primer of Infinitesimal Analysis', where he said the price for 'smooth world' infinitesimals is giving up the Law of Excluded Middle (LEM).
I don't really understand why (he said something about how unconstrained use of LEM allows discontinuous functions...)
Is there any link to Brouwer's intuitionism, where LEM is rejected too (?!)
Ah it's all an interesting can of worms...
You can definitely have real numbers without infinitesimals in constructive math, however: https://ncatlab.org/nlab/show/real+numbers+object
Side comment: anyone interested in calculus via infinitesimals may also be interested in taking a look at Radically Elementary Probability Theory by Ed Nelson: https://web.math.princeton.edu/~nelson/books/rept.pdf
I predict that infinitesimals/hyperreals will become mainstream in math one day, the same way the 'complex' number i is now taught in school. Having probability ε instead of 0 just makes more sense (e.g. for hitting a number on an interval).
> infinitesimals more intuitive than the formal 'limits-based' approach.
> why infinitesimals might be useful
Ok, I'll ask. Why might they be useful? Is there any situation that you know of where infinitesimals can better attack a problem than the old-fashioned Calculus?
Infinitesimal calculus is the old-fashioned calculus! It was what Newton and Leibniz invented. Limits only came into play later when mathematicians wanted a more robust foundation. But then Robinson proved that infinitesimals were perfectly rigorous. IMO, non-standard analysis is more intuitive than limit-based calculus.
Ok, it might be more intuitive. But in terms of applications, is there any example where there's any advantage of using infinitesimal calculus or non-standard analysis?
Yes, any time you have to reduce something to a point for analysis in any geometric problem.
You can also vary infinitesimals and use them not just in nonstandard analysis but in fractional calculus, for example when modeling stock market motion.
They have helpful applications in physics, especially field theory.
I can imagine, a long time from now, many elegant mathematical constructs simplified by the use of, e.g., infinitesimals, Clifford algebras, category theory, etc. There are a lot of complicated ideas that are nicely simplified, and even made more intuitive and easier to teach the fundamentals of, than with the standard approach.
I think it's important to understand that the canonical calculus approach came from rather mechanical questions in analysis and proofs, and the math is layered with that, as well as the notational conveniences of forms of calculus commonly used for electromagnetism, classical mechanics, etc. There's a lot of legacy syntax there, and we just live with it, but it's not optimal. Infinitesimals are a way to go back to applications and to better syntax.
Solving an ODE by separation of variables. With the limit definition this is just a notational trick, requires additional proof to justify, and confuses students. With infinitesimals, the separation dy = v * dx is a rigorous statement, making the logic of the method immediately obvious.
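A rough numerical illustration (the ODE dy/dx = x*y and the Euler scheme are my own choices for the sketch): separating dy/y = x dx and integrating gives y = y0 * exp(x^2/2), and a crude Euler integration that takes "dy = f(x, y) dx" literally converges to the same answer.

```python
# Separation of variables for dy/dx = x*y with y(0) = 1:
# dy/y = x dx  =>  ln y = x**2/2 + C  =>  y = exp(x**2 / 2).
# Check against Euler steps that apply dy = f(x, y) * dx directly.
import math

def euler(f, x0, y0, x1, steps):
    h = (x1 - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        y += f(x, y) * h   # dy = f(x, y) * dx, taken literally
        x += h
    return y

closed_form = math.exp(1.0 ** 2 / 2)                      # y(1) from separation
numeric = euler(lambda x, y: x * y, 0.0, 1.0, 1.0, 200_000)
print(abs(numeric - closed_form) < 1e-4)
```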
I think Gian-Carlo Rota would disagree with you. See Section 7 of
Rota is complaining that differentials are introduced as an ad hoc technique, seemingly breaking the rules. If differentials are taught from the beginning, i.e. y + dy = f(x + dx) where f'(x) is a convenience function, this is not an issue. Of course, there are other issues with teaching differentials, namely why products of differentials vanish. On the other hand, limits aren't rigorously justified in an introductory course either.
I think that's why I'm redoing my old physics problem sets, but using the infinitesimal approach this time, to see if it is more useful. So far the gains are modest, but I find it easier to 'reason' about some of the calculations.
The author Seth Braver has two nice examples of reasoning with infinitesimals in the book intro - the first few chapters are available for free: https://www.bravernewmath.com/
Time will tell if the study pays off. In later years of the physics degree I ended up doing lots of algebraic manipulation without much understanding, maybe because I had no intuitive 'feel' for the calculus and it all felt like symbol manipulation... As another commenter said, somehow infinitesimals allowed the giants like Newton & Leibniz to work their way to some amazing results (especially about motion...)
> somehow infinitesimals allowed the giants like Newton & Leibniz to work their way to some amazing results
I'm not that familiar with Leibniz's work, but Newton understood calculus from many different angles. I've heard a (probably apocryphal) saying attributed to Feynman that you truly understand something only if you understand it in three different ways.
Newton was like that with calculus, and he probably understood it in more than three ways. In particular, he presented the theory of gravity to the world using only geometry. Just take a look at [1], and see if you find anything that looks like limits, derivatives, or integrals. You only see geometrical figures.
Newton was great at manipulating polynomials. He introduced what we nowadays call the "Newton-Raphson" method via an example of finding a root of a cubic polynomial. He never mentioned derivatives or tangents or slopes, or anything that we would now associate with calculus.
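A minimal modern sketch of the method, using the cubic x^3 - 2x - 5 = 0 that is commonly cited as Newton's own worked example (the explicit derivative below is a modern convenience he did not use):

```python
# Newton-Raphson iteration: linearize f at x and step to the
# root of the tangent line, repeatedly.
def newton(f, fprime, x, iterations=10):
    for _ in range(iterations):
        x = x - f(x) / fprime(x)
    return x

# Newton's cubic: x**3 - 2*x - 5 = 0, starting from x = 2.
root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
print(abs(root**3 - 2*root - 5) < 1e-12)
```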
Of course, we know that Newton knew the binomial formula; some people wrongly think he invented it. What he did was generalize it to non-integer powers, so he could calculate the infinite series of things like sqrt(1+x) or sqrt(1-x^2). From there it doesn't take that long to derive the series for sine and cosine, especially if your name is Newton, and he did the arcsine and arctan for good measure too. (And from there he calculated many more digits of pi than anyone before him, by a good margin.)
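A small check of the generalized binomial series for sqrt(1+x), building each coefficient C(1/2, k) from the previous one (the particular recurrence and test point are my own choices for the sketch):

```python
# Newton's generalized binomial series: (1 + x)**0.5 = sum over k
# of C(1/2, k) * x**k, which converges for |x| < 1.
import math

def binom_series_sqrt(x, terms=30):
    total, coeff = 0.0, 1.0           # coeff starts at C(1/2, 0) = 1
    for k in range(terms):
        total += coeff * x ** k
        coeff *= (0.5 - k) / (k + 1)  # C(1/2, k+1) from C(1/2, k)
    return total

x = 0.2
print(abs(binom_series_sqrt(x) - math.sqrt(1 + x)) < 1e-12)
```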
And Newton was intimately familiar with interpolation. Even today we have the concept of Newton interpolating polynomial [2]. Interpolation was indispensable in those times, even Briggs used it in his logarithmic tables which he published in 1617. Here's a quote from [3]: "Briggs’ quinquisection is actually a special case of Newton’s formula seen from a different vantage point". But "Newton was apparently unaware of Briggs’ work on finite differences and subtabulation".
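For the curious, a compact sketch of divided-difference (Newton-form) interpolation; the sample points on x^3 + 1 are my own illustrative choice.

```python
# Build the Newton interpolating polynomial via divided differences,
# then evaluate it in nested (Horner-like) form.
def divided_differences(xs, ys):
    coeffs = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs  # coeffs[k] multiplies (x - x0)...(x - x_{k-1})

def newton_eval(xs, coeffs, x):
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 9.0, 28.0]  # samples of x**3 + 1
c = divided_differences(xs, ys)
# Four points determine a cubic exactly, so interpolation reproduces x**3 + 1.
print(abs(newton_eval(xs, c, 1.5) - (1.5 ** 3 + 1)) < 1e-12)
```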
[1] https://www.gutenberg.org/cache/epub/28233/pg28233-images.ht...
[2] https://en.wikipedia.org/wiki/Newton_polynomial
[3] https://inria.hal.science/inria-00543939/PDF/briggs1624doc.p...
Totally agree about Newton and his 3 ways. I remember reading in Burton's History of Mathematics:
"Newton developed 3 different versions of his calculus, apparently searching for the best approach; or maybe each version served a different purpose.
- 'Infinitesimals': largely a geometric approach, - 'Fluxions': a kinematic approach, - 'Prime and ultimate ratios': his most rigorous, "algebraic" approach.
The 3 methods weren't always kept apart when solving problems. See: DT Whiteside, Mathematical Papers Isaac Newton."
You might enjoy Tristan Needham's book on Visual Differential Geometry where he really dives into Newton's geometric approach.
Thanks for the other links... must go through them. Lots of gold there.
Nice, I didn't know about this book. Did you try the Keisler book too, "Elementary Calculus: An Infinitesimal Approach"? See https://people.math.wisc.edu/~hkeisler/calc.html
Thanks for the tip... it seems to mention Robinson's 'hyperreals' too...!