Comments by "EebstertheGreat" (@EebstertheGreat) on the "Another Roof" channel.
-
596
-
I think the British pronunciation of "upsilon" and "epsilon" highlights the etymology of the names. Long ago, the Greeks forgot the original names for ε, υ, ο, ω, and some other letters, so they just called them by their sounds, the way we name vowels in English, like calling E "ee." But as some of these vowels started to sound like each other, they added words to the names to distinguish them, like "little o" for omicron and "big o" for omega, similar to the way some Spanish speakers call b and v "be larga" and "ve corta." In the case of ε and υ, "psilon" meant "bare" or "plain," so an epsilon is a bare e. This distinguished it from αι, which had come to have the same sound. Similarly, upsilon is a bare u, as opposed to ου.
103
-
21:35 But the Poles already had a bomba, designed by Marian Rejewski, on which the bombe was directly based. And the version of the bombe actually built incorporated an important refinement by Gordon Welchman. So it wasn't just Turing's idea; it was a group effort, like you assumed. The only reason the Poles couldn't crack the beefed-up code on their own is that they did not have enough bombas. So when the invasion began, they sent all their work to the British. Later, the British sent their designs to the Americans, who built a ton of these things in Dayton and did most of the codebreaking work of the war. It was more a matter of resources than technique. The Americans certainly didn't come up with the idea; they were only able to break more codes because they had more money to build machines. But the same is true of the British to some extent.
Granted, the cryptographic principle behind the bombe differed more from the bomba's than the principle behind the American machines differed from the bombe's, but it's a matter of degree. The Polish machines could run a version of a known-plaintext attack, and their method would have worked on its own, given enough machines. I really think the Poles should get most of the credit here.
3
-
I have a bunch of miscellaneous notes about this video.
First, at least once, Roof (sorry if I forgot your name) makes the mistake of using 1 as the base case instead of 0 (he points this out in a text overlay). This is an easy mistake to make, because the natural numbers are in fact sometimes defined to exclude zero (and were originally defined that way). Whether to call the positive integers or the nonnegative integers the "natural numbers" has long been a matter of disagreement, and even today many textbooks do not include 0 in the natural numbers. In those books, base cases are written with 1, and aside from slightly different wording for things like the Fundamental Theorem of Arithmetic, there is no meaningful difference. The motivation for excluding 0 is probably continuity with Peano's axioms in their original form, though I also saw a Real Analysis book that seemed to take that approach just because of the elegant way it let the author write the definition of the natural numbers and succession:
Let 0 := ∅ and let N be an infinite set (called the set of natural numbers) such that 0 ∉ N and there exists a bijection S: {0} ∪ N → N.
Remarkably, that is all you need. Or maybe it's not that remarkable, because it does make sense that the only structure one needs for counting is that each number be followed by a new number. And the only thing that makes 0 special is that it has no predecessor. (0's other properties show up in the definitions of addition and multiplication.) A serious downside of this approach is that, to be totally rigorous, one needs to prove the existence of a Dedekind-infinite set (i.e., you have to prove that a set N and a map S with the stated properties exist at all, which is not provable in some theories), but the book ignored that, because it was beyond the scope of the 300-level undergrad course it targeted.
Second, the principle of induction cannot be avoided in any of these proofs, because it is fundamental in every respect. Addition and multiplication are defined inductively to start with. The natural numbers themselves are defined inductively. (Roof has not actually given a definition of the natural numbers in any video so far, but roughly speaking, they are all and only the numbers that can be reached by applying the successor to 0 over and over again. A fully rigorous definition is kind of subtle, which is why I like the simple one I gave above.) At the highest level, the process of induction is usually justified these days by well-ordering, though it can equivalently be justified by well-foundedness. The definition I gave above is well-founded with respect to succession, provided N is cut down to just the elements reachable from 0 (the bare bijection also allows extra two-way-infinite chains): every natural number is the successor of something, but 0 is not the successor of anything, so every natural number reduces to S(S(...S(0)...)), with some finite number of applications of S(). Proofs that use the Well-Ordering Principle are arguably circular, since the proof of the Well-Ordering Principle itself uses a form of mathematical induction (one that relies on well-foundedness). At any rate, suffice it to say that there are many ways to prove that this principle applies to the natural numbers, and to the ordinal numbers, and even to sets ordered by inclusion.
Third, I think the waffling over the predecessor is sort of unnecessary. All you have to say is that every number is either 0 or the successor of some number, so you define the operation for 0 and then define it for successors. That's the reason there are two parts to each definition anyway.
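To make the last two points concrete, here is a minimal Python sketch (my own modeling, not anything from the video) of naturals built from 0 and a successor, with addition, multiplication, and conversion back to ordinary integers each defined by the two-case recursion described above:

```python
# Naturals modeled as 0 wrapped in finitely many successor applications.
# Class and function names are my own choices.

class Zero:
    def __repr__(self):
        return "0"

class Succ:
    def __init__(self, pred):
        self.pred = pred          # the unique predecessor of a successor
    def __repr__(self):
        return f"S({self.pred!r})"

def add(a, b):
    """Recursion on the second argument: a + 0 = a, a + S(n) = S(a + n)."""
    if isinstance(b, Zero):
        return a
    return Succ(add(a, b.pred))

def mul(a, b):
    """a * 0 = 0, a * S(n) = a * n + a."""
    if isinstance(b, Zero):
        return Zero()
    return add(mul(a, b.pred), a)

def to_int(n):
    """Every natural reduces to S(S(...S(0)...)): just count the S's."""
    count = 0
    while isinstance(n, Succ):
        n, count = n.pred, count + 1
    return count

def from_int(k):
    """Every number is either 0 or the successor of some number."""
    return Zero() if k == 0 else Succ(from_int(k - 1))

print(to_int(mul(from_int(7), from_int(3))))   # 21
```

Recursing on the second argument is conventional but arbitrary; recursing on the first would work just as well.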
Fourth, if we really wanted to be pedantic, there is an almost endless list of other details we could dig into. Arithmetic is complicated. For instance, how do we know that if 3 = S(2), then 7×3 = 7×S(2)? It's clearly correct, but how do we prove it? We need some general principle of substitution into equality, which is not exactly hard to prove, but it is another case where intuition can get in the way. And what about numbers 10 and larger? If we really wanted to be strict about it, we would have to go through all the work of defining positional notation and proving various properties about it just to introduce these numbers. Interestingly, one can use this to rigorously prove the termination and correctness of all the algorithms taught in grade school for decimals (or for numbers in arbitrary bases), like long multiplication and long division. I don't think I have ever seen a book go through all of these steps in painstaking detail (except the dreaded Principia Mathematica), but in principle they are all there, in math proof heaven, making up the underlying structure of arithmetic.
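As an illustration of that last point, here is a short sketch of schoolbook long multiplication on explicit digit lists in an arbitrary base; the function names and the bases tested are my own choices, and carrying is the only subtle step:

```python
# Digit lists are least-significant digit first.

def to_digits(n, base=10):
    """Positional notation: write n as a list of digits."""
    digits = []
    while True:
        n, d = divmod(n, base)
        digits.append(d)
        if n == 0:
            return digits

def from_digits(digits, base=10):
    return sum(d * base**i for i, d in enumerate(digits))

def long_multiply(a_digits, b_digits, base=10):
    """Schoolbook long multiplication with explicit carries."""
    result = [0] * (len(a_digits) + len(b_digits))
    for i, a in enumerate(a_digits):
        carry = 0
        for j, b in enumerate(b_digits):
            total = result[i + j] + a * b + carry
            carry, result[i + j] = divmod(total, base)
        result[i + len(b_digits)] += carry
    while len(result) > 1 and result[-1] == 0:   # trim leading zeros
        result.pop()
    return result

# Sanity check against built-in integer arithmetic, in base 10 and base 7.
for base in (10, 7):
    x, y = 1234, 567
    digits = long_multiply(to_digits(x, base), to_digits(y, base), base)
    assert from_digits(digits, base) == x * y
print("long multiplication matches built-in arithmetic")
```

Long division is the same sort of exercise, just with a trial-quotient step whose correctness also needs its own proof.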
2
-
@bethhentges It certainly existed, but there are all kinds of ligatures and symbols that didn't make it into ASCII, like ×, ÷, †, “, », §, ¶, —, £, ¢, ½, °, ©, •, etc. It seems very odd that @ did, given how rare it was. It might have been seen as important for ledgers, receipts, and financial documents, though from what I can tell, it was not common even there (it has become slightly more common since). I guess it was in BCDIC, though, and that's how it ended up in ASCII, but that just pushes the question back further. Presumably some Hollerith machines used it way back when and it just got grandfathered in.
Backslash is another character that was (and is) rarely used outside of computing. Apparently it was added to ASCII for compatibility with ALGOL though, so that at least answers that question.
2
-
One way to justify the non-primality of 1 is to rank the natural numbers by divisibility. We say a | b ("a divides b") iff there is a natural number n such that an = b. Then we say a < b iff a | b and not b | a. You can verify that < is a strict partial order on N and, moreover, that 1 is the least element and 0 is the greatest element. That is to say, for any natural number n, 1 | n and n | 0. For any prime p, we have 1 < p, but there is no other number n for which n < p. So if you draw the graph of <, the 0th level is 1, the 1st level is all the primes, the 2nd level is all the semiprimes, etc., and the ωth level is just 0. Then "composite" just means being on the kth level for some k > 1. In particular, 0 is composite.
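Here is a small sketch (my own code, with the level of n computed as its number of prime factors counted with multiplicity) that spot-checks these claims for small numbers:

```python
def divides(a, b):
    """a | b iff there is some n with a*n == b (over the naturals)."""
    if a == 0:
        return b == 0          # 0 only divides 0
    return b % a == 0          # includes b == 0, since a*0 == 0

def strictly_below(a, b):
    """The strict order from the comment: a < b iff a | b and not b | a."""
    return divides(a, b) and not divides(b, a)

def level(n):
    """Rank in the divisibility order: number of prime factors with multiplicity.
    1 sits at level 0, primes at level 1, semiprimes at level 2, and so on."""
    assert n >= 1, "0 sits at level omega, above every finite level"
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

# Spot-check: 1 is least, 0 is greatest, and going strictly up in the
# order always increases the level.
N = 200
assert all(divides(1, n) and divides(n, 0) for n in range(N))
for a in range(1, N):
    for b in range(1, N):
        if strictly_below(a, b):
            assert level(a) < level(b)
print("levels of 1..12:", [level(n) for n in range(1, 13)])
```

The check works because if a | b with a ≠ b (for positive a, b), then b = ak with k ≥ 2, so b has strictly more prime factors than a.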
1
-
Your Powerball calculations assumed that each jackpot winner would receive $246 million. But when the jackpot gets that high, many people play, so the probability of sharing the prize becomes substantial. In Powerball, if multiple people simultaneously win the jackpot, it is split evenly among them. So really, the expected value of your ticket is less than you stated (particularly if you choose commonly played numbers). On the other hand, sometimes the jackpot is more than twice that size, so in those rare cases, a ticket could really have a positive expected value
. . . before taxes. Because once you account for the fact that US lottery winnings are subject to federal income tax, it becomes practically impossible for such a scenario to occur. In most states, state taxes apply on top of that, in spite of the fact that it is the state itself paying you your winnings. And it's actually worse than that, because the advertised jackpot is not a lump sum but the total of a series of escalating annual payments spread over decades. The present value of that annuity is far lower than the headline number, so you actually get much less than stated. IMO that should be illegal, since the advertised figure is literally, factually not what the prize is worth, but that's how it is. (Of course, once you consider a more reasonable logarithmic utility function, it becomes even more stupid to play the lottery, since each dollar you risk losing today is a lot more valuable to you than each dollar you might win at the end of your hypothetical large jackpot. However, this utility calculation doesn't apply the same way to lottery pools.)
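To put rough numbers on the splitting effect, here is a sketch that models the number of other jackpot winners as Poisson. The 1-in-292,201,338 odds and the $2 base ticket price are the standard Powerball figures as I understand them; the ticket-sales counts are made up for illustration, and lesser prizes, taxes, and the annuity discount are all ignored:

```python
import math

P_JACKPOT = 1 / 292_201_338   # odds of matching 5 of 69 white balls plus the Powerball

def expected_jackpot_payout(jackpot, tickets_sold):
    """Expected payout to *your* winning ticket when the prize is split evenly.

    The number of other winners K is roughly Poisson(lam) with
    lam = tickets_sold * P_JACKPOT, and E[1/(1+K)] = (1 - exp(-lam)) / lam.
    """
    lam = tickets_sold * P_JACKPOT
    share = (1 - math.exp(-lam)) / lam if lam > 0 else 1.0
    return jackpot * share

def expected_value_of_ticket(jackpot, tickets_sold, price=2.0):
    return P_JACKPOT * expected_jackpot_payout(jackpot, tickets_sold) - price

# A $246M prize with various (made-up) numbers of other tickets in play.
for sold in (20e6, 200e6, 600e6):
    ev = expected_value_of_ticket(246e6, sold)
    print(f"{sold:>12,.0f} tickets sold -> EV ≈ ${ev:+.2f}")
```

Even before splitting, $246 million times a 1-in-292-million chance is only about $0.84 per $2 ticket, and splitting only pushes the expected value further down.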
1
-
Shockingly, when I was a kid, I had exactly the same thoughts about tapping my fingers and developed exactly the same fidget. I would also go methodically through every single combination before moving on to other tapping patterns, maybe tapping out rhythms, and then letting my mind wander elsewhere. I don't have autism and don't fidget that much anymore, so I don't really do it as an adult, but it was something I did all the time as a kid in lower and middle school. And I never told anyone about it, because it never even occurred to me as something to mention.
I also explored all possible sequences for n=5, and I paid special attention to the ones where you could get stuck. With only n=5 fingers, you don't get stuck very often. I also looked at n < 5, noticing that you can get stuck in the cases n=3 and n=4 and would always get stuck when n=2. As the number of fingers increased, it seemed like you got stuck less and less often, but I couldn't rule out that n=5 was somehow unique. When I explored n > 5 by adding one or more fingers from my left hand, I wasn't trying to count all the possible ways to do it correctly. I was mostly interested in how often you got stuck if you tapped at random. I had no method except picking fingers arbitrarily, like a really slow Monte Carlo simulation. At other times, particularly with n=6, I would try to include every single possible combination, though I didn't count them. It seemed like there was some obvious pattern that ought to present itself. Certainly the more fingers you add, the more often you get stuck (which is hardly surprising), but I couldn't figure out any pattern beyond that, so at some point I dropped the question. I had almost forgotten about it until I saw this video.
1
-
This is potentially an issue in the philosophy of science. In practice, most scientific reasoning appears to follow the logic of the fallacy of affirming the consequent, which has the form "If A then B; B; therefore A." It is clearly fallacious. For instance: if it floods, then the ground will be wet; this morning the ground is wet; therefore it flooded. But in fact it did not flood this morning; the ground is wet for a different reason, in this case rain.
But most scientific reasoning really does seem to work like this. A lot of weight is given to models which produce novel predictions. If those predictions end up according with new observations, we say that the theory has been corroborated. But in what sense? Certainly not the deductive one.
The most famous 20th-century treatments of this question come from Karl Popper and Thomas Kuhn. Popper claims that demanding epistemic certainty of scientific conclusions is just too much to ask, but that this kind of corroborative reasoning is still an accurate description of how science works. Popper focuses on falsifiability as the demarcating criterion for science. A scientific theory asserts some conclusion that can be proved false. If you prove it false, then the theory is discarded. (This is not a fallacy but a proper application of modus tollens.) For instance: "If the Newtonian model of gravity is correct, then the perihelion of Mercury will precess only as much as is calculated. It actually precesses 0.43" per year more than that. Therefore the Newtonian model of gravity is not correct." If you fail to prove a theory false, then the theory is corroborated, in the sense that it has survived a possible attack. The theory is probably still wrong, but it has shown itself to be more useful than the other theories checked so far. So in that respect, it is a good theory.
This is even reflected in the focus on p-values in experiments. I can say "this result is significant with p < 0.01" if and only if the following is true: If the hypothesis is in fact false (i.e. the null hypothesis is true), and I repeat the experiment many times, then I should expect to get such an extreme result less than 1 time in 100. But you can't get rid of that "if the null hypothesis is true" part. It does not say how probable the conclusion is overall. If I conduct an experiment that convincingly shows something very implausible, say that the Moon is made of cheese, that thing is still probably not true; it's more likely that something went wrong with my experiment. The low p-value is still much higher than the prior probability of the Moon being made of cheese. So experiments really only ever test models against each other, not against any sort of ground truth.
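A toy Bayes calculation (numbers entirely my own, not from the video) makes the Moon-cheese point concrete: with a significance threshold of 0.01 and even generous statistical power, a tiny prior still leaves a tiny posterior.

```python
# Assumed numbers: false-positive rate alpha = 0.01 (the p-value threshold)
# and a generous true-positive rate (power) of 0.9.

def posterior(prior, alpha=0.01, power=0.9):
    """P(hypothesis | significant result) by Bayes' theorem."""
    p_significant = power * prior + alpha * (1 - prior)
    return power * prior / p_significant

for prior in (0.5, 0.05, 1e-6):
    print(f"prior {prior:>8}: posterior ≈ {posterior(prior):.6f}")
```

With a prior of one in a million, a single significant result lifts the posterior to only about 9 in 100,000.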
Thomas Kuhn does not accept Popper's falsificationism. He points out that the way most scientists work in practice does not resemble the idealized version of science Popper presents. Kuhn asserts that the nature of a scientific model depends on the cultural assumptions present when it is being created. In particular, Kuhn claims that science proceeds in conceptual leaps, where an older paradigm is discarded in favor of a newer one. Kuhn says that these "paradigm shifts" are basically non-rational, and that there is no good way to compare paradigms against each other, though within a single paradigm, two theories can be compared on their merits. So to Kuhn, Copernicus's model was correctly rejected in its time, and accepting a heliocentric model required a non-rational paradigm shift to a new kind of thinking. I haven't read much of Kuhn's work, so I might not be explaining his ideas all that well. A lot of people who cite Kuhn as an inspiration don't represent his opinions very well, so take this with a grain of salt.
At any rate, this is a serious philosophical quandary. All our conclusions about the real world are merely models which provide plausible explanations of what we experience. Induction is a problem on its own, but even if you can resolve that, the reliance on models is inescapable. Science doesn't really prove the world is round. It just proves that a model containing a round world survives potential attacks better than any effective model with a flat Earth yet created.