The thing I called the Feynmann style relates to Tim Gowers’s The Two Cultures of Mathematics, where he suggests that mathematics has two cultures: theory building and problem solving. The latter tends to be organised not along the lines of general big ideas and broad theorems, but along heuristics and guiding principles.
Gowers refers to the areas of mathematics that are primarily problem-solving as combinatorial, but the kind of problem solving I have in mind doesn’t seem right to call combinatorial - it’s more… calculation?
There’s a similar sense of being guided by heuristics and general ideas though.
For example, one general idea is “replace annoying terms with integrals over some new variable, then swap the order of summation and integration”.
Suppose we didn’t know what \(\sum\limits_{n = 1}^\infty (-1)^{n - 1} \frac{1}{n}\) was. How would we deal with this?
(Note: There’s all sorts of playing fast and loose with convergence in this post that you could shore up with some proper calculation, but which I’m not actually going to do. That’s very common in this sort of proof).
Well, that \(\frac{1}{n}\) is an annoying term, so let’s get rid of it. A classic way of doing this is to replace it with \(\int\limits_0^1 x^{n - 1} dx\), which works because \(\int\limits_0^1 x^{n - 1} dx = \left[\frac{x^n}{n}\right]_0^1 = \frac{1}{n}\).
We can now do the computation as follows:
\[\begin{align} \sum\limits_{n = 1}^\infty (-1)^{n - 1} \frac{1}{n} & = \sum\limits_{n = 1}^\infty (-1)^{n - 1} \int\limits_0^1 x^{n - 1} dx \\ & = \sum\limits_{n = 0}^\infty (-1)^n \int\limits_0^1 x^n dx \\ & = \int\limits_0^1 \sum\limits_{n = 0}^\infty (-x)^n dx \\ & = \int\limits_0^1 \frac{1}{1 + x} dx \\ & = \Big[\ln(1 + x)\Big]_0^1 \\ & = \ln(2) \\ \end{align}\]
Roughly the steps here are:
- Try replacing tricky terms with integrals over simpler terms
- Use standard sums that you already know the answer to (here, the geometric series \(\sum\limits_{n = 0}^\infty (-x)^n = \frac{1}{1 + x}\) for \(|x| < 1\))
- Try swapping sums and integrations
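Since we’ve been playing fast and loose with convergence, a quick numerical sanity check doesn’t hurt. Here is a minimal sketch (my addition, in Python, using scipy’s quad for the numerical integral - none of this is from the original argument) comparing a partial sum of the series, the integral we swapped in, and \(\ln 2\):

```python
import math
from scipy.integrate import quad

# Partial sum of the alternating harmonic series: sum_{n>=1} (-1)^(n-1) / n.
partial_sum = sum((-1) ** (n - 1) / n for n in range(1, 100_001))

# The integral we ended up with after swapping sum and integral: int_0^1 1/(1+x) dx.
integral, _abs_error = quad(lambda x: 1 / (1 + x), 0, 1)

print(partial_sum)   # roughly 0.69314 - the series converges slowly
print(integral)      # roughly 0.693147
print(math.log(2))   # 0.693147..., the claimed value of the sum
```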
Another useful calculational heuristic is “try changing the variable”.
e.g. what’s the limit as \(n \to \infty\) of \(n \ln (1 + \frac{1}{n})\)?
Well, let \(x = \frac{1}{n}\). The expression becomes \(\frac{\ln(1 + x)}{x} = \frac{\ln(1 + x) - \ln(1)}{x}\) with \(x \to 0\), i.e. it’s the limit defining the derivative of \(\ln\) at \(1\), which is \(1\).
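Again, purely as a sanity check (a small Python sketch of my own, not part of the argument), \(n \ln(1 + \frac{1}{n})\) should creep towards \(1\) as \(n\) grows:

```python
import math

# n * ln(1 + 1/n) should approach 1, the derivative of ln at 1.
for n in (10, 1_000, 100_000, 10_000_000):
    print(n, n * math.log(1 + 1 / n))
# Values approach 1: roughly 0.953, 0.9995, 0.999995, 0.99999995
```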
It’s hard to explain exactly what the thought process is here. It’s like solving a puzzle - you have a bunch of known tricks that you think might work and you try to apply them all. If you were to mechanize the process then it would end up looking like a brute force solver for the problem, but by using intuition you can kinda guide the way.
I think maybe one part of the split between problem-solving and theory building is how much of what you end up building escapes the head of the mathematician building it: problem-solving skills are much harder to teach to another person than theory is (once that other person has built the skill of acquiring theory, which is also hard to teach).