Comments by "EebstertheGreat" (@EebstertheGreat) on the "Mathologer" channel.

  8. Hero's formula for the area of a triangle is one of those things introduced as a curiosity in a math textbook way back in middle school that I never really got a handle on. To a sixth grader, that is an impressively complex formula for an ancient to have discovered, and no proof was forthcoming. Through high school I saw it a couple more times, always in passing, a sort of neat oddity that seems compact but rarely gets used in practice. It was really neat to see an intuitive proof and motivation after all these years. That said, it doesn't exactly seem useful. Even if you somehow do know the side lengths of a triangle but not its angles, this formula is still not the fastest way to find the area. Typically, if you're doing this by hand, you will have either a table of square roots (for Hero's method) or a table of logs and logs of sines (for the law of sines method). The law of sines method is still faster, because you skip all the multiplication steps. If you want to compute the area of the triangle with a computer, you can use Newton's method to get the square root, and I assume Heron's formula really is faster there. But the thing is, you basically never know all the side lengths of a triangle (and nothing else) before trying to find its area. Rather, you probably have coordinates, in which case the shoelace formula is by far the fastest. So like, what is this formula actually good for? Is it just a novelty like the quartic formula? If it's never used, then no, I don't think it should be taught as part of a standard curriculum. The brief mentions in books for interested students are probably enough. There is so much I want to add to the math curriculum, and the curriculum is already packed as it is. It's hard to justify cramming in more random formulas to teach, prove, and memorize. (BTW, although the phrase "Heron's formula" is seen pretty often in mathematical texts, in pretty much all other contexts in English, "Hero" is far more common than "Heron." Similarly, we say "Plato" rather than "Platon." The practice of Latinizing ancient Greek names is pretty standard in English. In classical Latin, the nominative singular would be "HERO," and the genitive singular would be "HERONIS." Since the Latin stem is still Heron-, the English adjective would be "Heronic" rather than "Heroic." Again, that's like the adjective "Platonic" rather than "Platoic." Other examples include "Pluto/Plutonic" and "Apollo/Apollonic." Admittedly, there are some exceptions, like the word "gnomon.")
    8
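
A rough illustration of the comparison in the comment above: a minimal Python sketch of Heron's formula from the three side lengths next to the shoelace formula from vertex coordinates. The function names are made up for this example; nothing beyond the two standard formulas is assumed.

```python
import math

def heron_area(a, b, c):
    """Triangle area from side lengths via Heron's (Hero's) formula."""
    s = (a + b + c) / 2  # semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def shoelace_area(points):
    """Area of a simple polygon (here a triangle) from coordinates via the shoelace formula."""
    total = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2

# A 3-4-5 right triangle both ways; each should print 6.0.
print(heron_area(3, 4, 5))
print(shoelace_area([(0, 0), (3, 0), (0, 4)]))
```
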
  12. +Karma Peny I haven't read all of your comments (as they are very long), but one mistake you make right away is to claim that "the whole point of algebra is that we can perform manipulations without knowing what value is going to be plugged into any particular variable." That is not only not "the whole point" of algebra, it isn't even true! For instance, consider the function f(x) = (x²+x)/x. You might be tempted to cancel a factor of x and conclude that f(x) = x+1, but in fact that is true only for x≠0. If x=0, the rational function has no defined value; the domain of f just doesn't include 0. We can, however, define a new piecewise function g(x) = f(x) when x≠0 and g(x) = 1 when x=0. This new function is indeed g(x) = x+1, and it is the only continuous (indeed analytic) function that extends f to all real x. The same thing is happening here. The function s(z) = Σ n^(-z) is defined only for Re[z] > 1, since the series diverges everywhere else. However, we can analytically continue this function to all complex z≠1, and one method of computing values of this unique analytic continuation is given in the video. That continuation is ζ(z), and it is here that we finally get ζ(-1) = -1/12. These are provable, basic, indisputable mathematical facts. If you don't agree with the conclusions, there are three possibilities: 1. Mathematicians are mistaken on this point. 2. Mathematicians are lying on this point. 3. Mathematicians know something about math that you don't. Please take a minute to seriously consider possibility 3.
    5
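
One concrete way to evaluate this analytic continuation numerically (a minimal sketch, not necessarily the method shown in the video) is Hasse's globally convergent series ζ(s) = 1/(1 − 2^(1−s)) · Σₙ 2^(−(n+1)) Σₖ (−1)ᵏ C(n,k) (k+1)^(−s), valid for s ≠ 1. The helper name below is illustrative.

```python
from math import comb

def zeta_hasse(s, terms=60):
    """Approximate ζ(s) for s != 1 using Hasse's globally convergent series,
    one way of evaluating the analytic continuation of Σ n^(-s)."""
    total = 0.0
    for n in range(terms):
        # n-th alternating binomial (finite-difference) sum
        inner = sum((-1) ** k * comb(n, k) * (k + 1) ** (-s) for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1 - 2 ** (1 - s))

print(zeta_hasse(2))    # ~ 1.644934 (pi^2 / 6), where the ordinary series also converges
print(zeta_hasse(-1))   # ~ -0.083333... = -1/12, where only the continuation makes sense
```

At s = −1 the inner alternating sums vanish for n ≥ 2 (finite differences annihilate the linear term k+1), so the value −1/12 comes out after just two terms.
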
  63. There are indeed no uniform probability distributions on N that satisfy the usual axioms of probability. However, there are some that exist if we weaken one or more axioms, usually countable additivity. Countable additivity says that the probability of a countable union of disjoint events equals the sum of the probabilities of the individual events. So, for instance, for a Poisson-distributed variable X with P(X=n) = 1/(e·n!), we can say that the probability that X is even is P(X even) = P(X=0) + P(X=2) + P(X=4) + · · · = (1/e)(1 + 1/2! + 1/4! + · · ·) = cosh(1)/e. This is sensible and does lead to the conclusion you posted above: that such a distribution over all natural numbers cannot be uniform. After all, if it were uniform, P(X=n) would be 0 for each n, yet P(X≥0) must be 1, which is definitely not the sum 0 + 0 + 0 + · · ·. By weakening the requirement of countable additivity to finite additivity, we can define a generalized probability distribution (sometimes called a "probability charge") that does have most of the properties we want. Specifically, it will still be true that P(X=n) = 0 for all n, but it will no longer be required that the total probability be the sum 0 + 0 + 0 + · · ·. Instead, the total probability can simply be 1, and the probabilities of other countably infinite sets of outcomes can similarly be larger than conventional rules would suggest. Because we still require finite additivity, finite sums work normally. For instance, P(X is even) + P(X is odd) = P(X is even or odd) = 1, and P(X=1 or X=2) = P(X=1) + P(X=2) = 0 + 0 = 0. We can define a "uniform" generalized distribution in a number of ways, but usually we require that the probability of a subset equal its asymptotic density (at least for subsets that have one). So, for instance, we would want P(X is even) = P(X is odd) = 1/2, P(X is divisible by 5) = 1/5, P(X is prime) = 0, etc. This distribution seems pretty reasonable at first, but picking at the edges reveals some very strange behaviors. For instance, in a paper by Kadane, Schervish, and Seidenfeld titled "Is Ignorance Bliss?", the authors ask us to imagine a gambler who has bet on one of two mutually exclusive and equally likely outcomes, A and B, one of which must be true. If A is true, he will lose $1, and if B is true, he will win $1. Suppose further that a random number will be selected according to one distribution if A is true and according to a different distribution if B is true. Specifically, the gambler knows that if A is true, then the number n will be selected with probability P(n|A) = (1/2)ⁿ, but if B is true, then the number n will be selected according to the uniform distribution described above, meaning P(n|B) = 0, whatever the n. Thus at the moment, before n is selected, A and B are equally likely, and the gambler has an expected value of $0 for his bet. But after n is selected, no matter what it is, Bayes' theorem gives P(B|n) = P(n|B)P(B) / (P(n|A)P(A) + P(n|B)P(B)) = 0, since P(n|B) = 0 while P(n|A) > 0. Therefore, the gambler should actually pay money not to know which number was selected, since once he learns the number, he becomes certain that he has lost.
    1
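
A small numerical sketch of the asymptotic densities and the Bayes step described above. The cutoff-based density estimates and the function names are assumptions made for this illustration, not part of the original comment or the cited paper.

```python
def density_estimate(predicate, cutoff):
    """Fraction of the integers 1..cutoff satisfying the predicate: a finite-cutoff
    estimate of the set's asymptotic (natural) density."""
    return sum(1 for n in range(1, cutoff + 1) if predicate(n)) / cutoff

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

N = 10**5
print(density_estimate(lambda n: n % 2 == 0, N))   # -> 0.5   (evens)
print(density_estimate(lambda n: n % 5 == 0, N))   # -> 0.2   (multiples of 5)
print(density_estimate(is_prime, N))               # -> ~0.096, tending to 0 as the cutoff grows (primes)

# Bayes step from the gambler example: P(A) = P(B) = 1/2, P(n|A) = (1/2)**n > 0,
# and the finitely additive "uniform" charge gives P(n|B) = 0 for every n,
# so the posterior probability of B is 0 no matter which n is observed.
def posterior_B(n):
    p_n_given_A, p_n_given_B = 0.5 ** n, 0.0
    return (p_n_given_B * 0.5) / (p_n_given_A * 0.5 + p_n_given_B * 0.5)

print(posterior_B(7))   # -> 0.0
```
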