Comments by "EebstertheGreat" (@EebstertheGreat) on the "Another Roof" channel.

  14. I have a bunch of miscellaneous notes about this video. First, at least once, Roof (sorry if I forgot your name) makes the mistake of using 1 as the base case instead of 0 (he points this out in a text overlay). This is an easy mistake to make, because the natural numbers are in fact sometimes defined to exclude zero (and were originally defined this way). Whether to call the positive integers or the nonnegative integers the "natural numbers" has long been a matter of disagreement, and even today many textbooks do not include 0 in the natural numbers. In these books, base cases are written out with 1, and statements like the Fundamental Theorem of Arithmetic get slightly different wording, but otherwise there is no meaningful difference. The motivation for doing it that way is probably continuity with Peano's axioms in their original form, though I also saw a Real Analysis book that seemed to take that approach just because of the elegant way it let the definition of the natural numbers and succession be stated: let 0 := Ø and let N be an infinite set (called the set of natural numbers) such that 0 ∉ N and there exists a bijection S: {0} ∪ N → N. Remarkably, that is all you need. Or maybe it's not that remarkable, because it does make sense that the only structure one needs for counting is that each number be followed by a new number. And the only thing that makes 0 special is that it has no predecessor. (0's other properties show up in the definitions of addition and multiplication.) A serious downside of this approach is that to be totally rigorous, one needs to prove the existence of a Dedekind-infinite set (i.e. you have to prove that there exists such a set N and map S with the stated properties, which is not the case in some theories), but the book ignored that, because it was above the level of the target 300-level undergrad course.

Second, the principle of induction cannot be avoided in any of these proofs, because it is fundamental in every respect. Addition and multiplication are defined inductively to start with (there is a short code sketch of these definitions after this comment). The natural numbers themselves are defined inductively. (Roof has not actually given a definition of the natural numbers in any video so far, but roughly speaking, they are all and only the numbers that can be reached by applying the successor to 0 over and over again. A more rigorous definition is kind of subtle, which is why I like the simple one I gave above.) At the highest level, the process of induction is usually justified these days by well-ordering, though it can equivalently be justified by well-foundedness. The definition I gave above is well-founded with respect to succession, because every natural number is a successor of something while 0 is not the successor of anything, so every natural number reduces to S(S(...S(0)...)), with some finite number of applications of S. Proofs that use the Well-Ordering Principle are arguably circular, since the proof of the Well-Ordering Principle itself uses a form of mathematical induction (one that relies on well-foundedness). At any rate, suffice it to say that there are many ways to prove that this principle applies to the natural numbers, to the ordinal numbers, and even to sets ordered by inclusion.

Third, I think the waffling over the predecessor is sort of unnecessary. All you have to say is "every number is either 0 or the successor of some number, so I'll define this for 0 and then I'll define it for successors." That's the reason there are two parts to each definition anyway.

Fourth, if we really wanted to be pedantic, there is an almost endless number of other details we could dig into. Arithmetic is complicated. For instance, how do we know that if 3 = S(2), then 7×3 = 7×S(2)? It's clearly correct, but how do we prove it? We need some general principle of substitution into equality, which is not exactly hard to prove, but it is another case where intuition can certainly get in the way. And what about numbers 10 and larger? If we really wanted to be strict about it, we would have to go through all the work of defining positional notation and proving various properties about it just to introduce those numbers. Interestingly, one can use this to rigorously prove the convergence and correctness of all of the algorithms used in grade school for decimals (or for numbers in arbitrary bases), like long multiplication and long division (also sketched after this comment). I don't think I have ever seen a book go through the painstaking detail of all of these steps (except the dreaded Principia Mathematica), but in principle they are all there, in math proof heaven, making up the underlying structure of arithmetic.
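A minimal Python sketch of the successor-style arithmetic discussed in the comment above, assuming the usual recursive definitions (a + 0 = a, a + S(b) = S(a + b); a·0 = 0, a·S(b) = a·b + a). The encoding of numbers as nested tuples and all of the names here are my own illustration, not anything from the video:

      # Natural numbers as iterated successors: 0 is (), S(n) is (n,).
      ZERO = ()

      def succ(n):
          return (n,)

      def add(a, b):
          # a + 0 = a ;  a + S(c) = S(a + c)
          return a if b == ZERO else succ(add(a, b[0]))

      def mul(a, b):
          # a * 0 = 0 ;  a * S(c) = (a * c) + a
          return ZERO if b == ZERO else add(mul(a, b[0]), a)

      def from_int(k):
          n = ZERO
          for _ in range(k):
              n = succ(n)
          return n

      def to_int(n):
          count = 0
          while n != ZERO:
              n, count = n[0], count + 1
          return count

      seven, three = from_int(7), from_int(3)
      print(to_int(add(seven, three)))   # 10
      print(to_int(mul(seven, three)))   # 21

The recursion terminates precisely because every encoded number bottoms out at ZERO after finitely many unwrappings, which is the well-foundedness point made in the comment.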
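And for the "Fourth" point about positional notation, a rough sketch in the same spirit: grade-school long multiplication on lists of digits in an arbitrary base, checked against Python's built-in integer multiplication. The digit convention (least significant digit first) and all names are my own choices, not anything from the video:

      def to_digits(n, base):
          # nonnegative integer -> list of digits, least significant first
          if n == 0:
              return [0]
          digits = []
          while n:
              n, r = divmod(n, base)
              digits.append(r)
          return digits

      def from_digits(digits, base):
          return sum(d * base**i for i, d in enumerate(digits))

      def long_multiply(a_digits, b_digits, base):
          # one shifted row of partial products per digit of b, carries propagated as we go
          result = [0] * (len(a_digits) + len(b_digits))
          for j, bd in enumerate(b_digits):
              carry = 0
              for i, ad in enumerate(a_digits):
                  total = result[i + j] + ad * bd + carry
                  result[i + j] = total % base
                  carry = total // base
              k = j + len(a_digits)
              while carry:                      # push any leftover carry further left
                  total = result[k] + carry
                  result[k] = total % base
                  carry = total // base
                  k += 1
          return result

      import random
      for _ in range(1000):
          base = random.randrange(2, 17)
          x, y = random.randrange(10**6), random.randrange(10**6)
          product = long_multiply(to_digits(x, base), to_digits(y, base), base)
          assert from_digits(product, base) == x * y
      print("long multiplication agreed with built-in * in every trial")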
  30. Your Powerball calculations assumed that each jackpot winner would receive $246 million. But when the jackpot gets that high, many people play, so the probability of sharing the prize becomes substantial. In the Powerball, if multiple people simultaneously win the jackpot, it is split evenly between them. So really, the expected value of your ticket is less than you stated, particularly if you choose commonly played numbers (a rough numerical sketch follows this comment). On the other hand, sometimes the jackpot is more than twice that size, so in those rare cases, a ticket could really have a positive expected value . . . before taxes. Once you realize that lottery winnings in the US are subject to federal income tax, it becomes practically impossible for such a scenario to occur. In most states, state taxes apply as well, in spite of the fact that it is the state itself paying you your winnings. And it's actually worse than that, because in the US, lottery jackpots are not paid as lump sums but as escalating annual payments. The present value of the annuity is not really as high as they claim, so you actually get much less than stated. IMO that should be illegal, since it is literally, factually untrue, but that's how it is. (Of course, once you consider a more reasonable logarithmic utility function, it becomes even more stupid to play the lottery, since each dollar you risk losing today is a lot more valuable to you than each dollar you win at the end of your hypothetical large jackpot. However, this utility calculation doesn't apply the same way to lottery pools.)
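A back-of-the-envelope Python version of the expected-value argument above. The $246 million jackpot is the figure quoted in the comment and 1 in 292,201,338 is Powerball's advertised jackpot odds; every other number (ticket price, tickets sold, cash-value fraction, tax rate) is a placeholder assumption of mine, and the smaller non-jackpot prizes are ignored entirely:

      import math

      TICKET_PRICE  = 2.00
      P_JACKPOT     = 1 / 292_201_338     # advertised odds of hitting the jackpot
      JACKPOT       = 246_000_000         # advertised (annuity) jackpot from the comment
      TICKETS_SOLD  = 200_000_000         # assumed ticket sales at a jackpot this size
      CASH_FRACTION = 0.5                 # assumed lump-sum value as a fraction of the annuity figure
      TAX_RATE      = 0.40                # assumed combined federal + state marginal rate

      # Given that you win, the number of *other* winners is roughly Poisson(lam),
      # so your expected share of the pot is E[1/(k+1)] = (1 - exp(-lam)) / lam.
      lam = TICKETS_SOLD * P_JACKPOT
      expected_share = (1 - math.exp(-lam)) / lam

      cash_after_tax = JACKPOT * CASH_FRACTION * (1 - TAX_RATE)
      ev = P_JACKPOT * cash_after_tax * expected_share - TICKET_PRICE
      print(f"expected share of the jackpot if you win: {expected_share:.2f}")
      print(f"expected value of one ticket (jackpot only): ${ev:.2f}")  # clearly negative with these assumptions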
  35. Shockingly, when I was a kid, I had exactly the same thoughts about tapping my fingers and developed exactly the same fidget. I would also go methodically through every single combination before moving on to other tapping patterns, maybe tapping out rhythms, and then letting my mind wander elsewhere. I don't have autism and don't fidget that much anymore, so I don't really do it as an adult, but it was something I did all the time as a kid in lower and middle school. And I never told anyone about it, because it never even occurred to me as something to mention. I also explored all possible sequences for n=5, and I paid special attention to the ones where you could get stuck. With only n=5 fingers, you don't get stuck very often. I also looked at n < 5, noticing that you can get stuck in the cases n=3 and n=4 and would always get stuck when n=2. As the number of fingers increased, it seemed like you got stuck less and less often, but I couldn't rule out that n=5 was somehow unique. When I explored n > 5 by adding one or more fingers from my left hand, I wasn't trying to count all the possible ways to do it correctly. I was mostly interested in how often you got stuck if you tapped at random. I had no method except picking fingers arbitrarily, like a really slow Monte Carlo simulation. At other times, particularly with n=6, I would try to include every single possible combination, though I didn't count them. It seemed like there was some obvious pattern that ought to present itself. Certainly the more fingers you add, the more often you get stuck (which is hardly surprising), but I couldn't figure out any pattern beyond that, so at some point I dropped the question. I had almost forgotten about it until I saw this video.
  36. This is potentially an issue in the philosophy of science. In practice, most scientific reasoning appears to follow the logic of the fallacy of affirming the consequent, which has the form "If A then B; B; therefore A." It is clearly fallacious. For instance: if it floods, then the ground will be wet; this morning the ground is wet; but it did not flood this morning, because the ground was wet for a different reason, in this case rain. Yet most scientific reasoning really does seem to work like this. A lot of weight is given to models which produce novel predictions. If those predictions end up according with new observations, we say that the theory has been corroborated. But in what sense? Certainly not the deductive one.

The most famous 20th-century treatments of this question come from Karl Popper and Thomas Kuhn. Popper claims that assigning epistemic certainty to scientific conclusions is just too much to ask, but that this picture of corroboration is still an accurate description of how science works. Popper focuses on falsifiability as the demarcating criterion for science. A scientific theory asserts some conclusion that can be proved false. If you prove it false, then the theory is discarded. (This is not a fallacy but a proper application of modus tollens.) For instance: "If the Newtonian model of gravity is correct, then the perihelion of Mercury will precess only as much as is calculated. It actually precesses 0.43" per year more than that. Therefore the Newtonian model of gravity is not correct." If you fail to prove a theory false, then the theory is corroborated, in the sense that it has survived a possible attack. The theory is probably still wrong, but it has shown itself to be more useful than other theories checked so far. So in that respect, it is a good theory.

This is even reflected in the focus on p-values in experiments. I can say "this result is significant with p < 0.01" if and only if the following is true: if the hypothesis is in fact false (i.e. the null hypothesis is true) and I repeat the experiment many times, then I should expect to get such an extreme result less than 1 time in 100. But you can't get rid of that "if the null hypothesis is true" part. It does not say how probable the conclusion is overall. If I conduct an experiment that convincingly shows something very implausible, say that the Moon is made of cheese, that thing is still probably not true; it's more likely that something went wrong with my experiment. The low p-value is still much higher than the prior probability of the Moon being made of cheese (a small numerical illustration follows this comment). So experiments really only ever test models against each other, not against any sort of ground truth.

Thomas Kuhn does not accept Popper's falsificationism. He points out that the way most scientists work in practice does not resemble the idealized version of science Popper presents. Kuhn asserts that the nature of a scientific model depends on the cultural assumptions present when it is being created. In particular, Kuhn claims that science proceeds in conceptual leaps, where an older paradigm is discarded in favor of a newer one. Kuhn says that these "paradigm shifts" are basically non-rational, and that there is no good way to compare paradigms against each other. But within a single paradigm, two theories can be compared on their merits. So to Kuhn, Copernicus's model was correctly rejected in its time, and accepting a heliocentric model required a non-rational paradigm shift to a new kind of thinking.

I haven't read much of Kuhn's work, so I might not be explaining his ideas all that well. A lot of people who have cited Kuhn as an inspiration don't represent his opinions very well either, so take this with a grain of salt. At any rate, this is a serious philosophical quandary. All our conclusions about the real world are merely models which provide plausible explanations of what we experience. Induction is a problem on its own, but even if you can resolve that, the reliance on models is inescapable. Science doesn't really prove the world is round. It just proves that a model containing a round world survives potential attacks better than any effective model with a flat Earth yet created.
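A small numerical illustration of the p-value point above, using Bayes' theorem in Python. The prior, the assumed "power," and the treatment of the significance threshold as a false-positive rate are all simplifying assumptions of mine, chosen only to show how a significant result interacts with a tiny prior:

      # How a 'significant' result interacts with a tiny prior (Bayes' theorem).
      # All numbers are illustrative assumptions.
      prior   = 1e-9   # assumed prior probability that the Moon is made of cheese
      p_value = 0.01   # chance of a result this extreme if the hypothesis is false
      power   = 0.9    # assumed chance of such a result if the hypothesis were true

      # P(hypothesis | significant result)
      posterior = (power * prior) / (power * prior + p_value * (1 - prior))
      print(f"posterior probability: {posterior:.2e}")   # ~9e-8: still astronomically unlikely

With these numbers, the "convincing" experiment moves the probability from one in a billion to roughly one in eleven million, which is the comment's point: the result is far more likely to reflect a problem with the experiment than a cheese Moon.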