Comments by "MC116" (@angelmendez-rivera351) on the video "99% of People CAN'T Find This Mistake".

  1. 0:20 - 0:30 What type of object does x represent? Simply calling it a variable says nothing at all.

0:30 - 0:38 This implies x is an element of a near-semiring, or some similar structure.

0:39 - 1:26 An arbitrary element x of a near-semiring is guaranteed to be expressible as a sum of 1s only if the near-semiring is the free near-semiring generated by {0, 1}, which is precisely the semiring of natural numbers {0, 1, 1 + 1, ...}. In other semirings, the statement the video just made is false. For example, in the ring of Gaussian integers, 2 + 3·i cannot be written as a sum of 1s. In the ring of algebraic numbers, sqrt(2) cannot be written as a sum of 1s.

2:27 - 3:00 Once again, this assumes x is an element of the natural numbers. If x is an element of some other semiring that is not a subsemiring of the natural numbers, then this statement is false.

3:00 - 5:05 Many comments on the video have pointed out that you cannot differentiate using this rule, because the definition of the derivative only applies to the real numbers, and here x is restricted to the natural numbers. Technically, this is true, but it does not genuinely address the issue at hand. Why do I say that? Because given a function f[m] : N —> N such that f[m](x) = x^m, there is nothing stopping me from defining an operator D such that D{f[m]} = m·f[m – 1] for m > 0, and D{f[0]} = 0. Whether this operator deserves to be called "the derivative" is a discussion that is not relevant to the argument in the video. Choosing to define such an operator is perfectly valid, so this is not where the proof goes wrong. The proof goes wrong later.

5:05 - 6:17 This is where the proof goes wrong. Let g : N —> N be such that g(x) = x + ••• + x (x times). D{g}(x) is not equal to 1 + ••• + 1 (x times). Why not? Because the (x times) part was ignored. Instead, D{g}(x) = 1 + ••• + 1 (x times) + x + ••• + x (1 time) = x + x = 2·x. Another way to see the mistake: x^2 = x·x, so if D{x^2} = 2·x, then D{x·x} = 2·x as well. In the proof, however, it is assumed that D{x·x} = x·D{x} = D{x}·x = x, which is incorrect: D{x·x} = x·D{x} + D{x}·x = 2·x. This also highlights why saying x·x = x + ••• + x (x times) is a mistake. Yes, for natural x this is correct, if sloppy and imprecise, but the moment you bring the operator D into this, it changes things, because we are no longer dealing with x as a natural number; we are dealing with functions. x is not a function, and neither is x^2: these are natural numbers in their own right. However, the functions f[1] and f[2] defined earlier, with f[1](x) = x and f[2](x) = x^2, are indeed functions, and so it is meaningful to talk about D{f[1]} and D{f[2]}. It is not meaningful, though, to say f[2] = f[1] + ••• + f[1], or anything silly like that. f[2] cannot be written as a sum of f[1]s alone. f[2] = f[1]·f[1], but this cannot be written as a sum, since f[1] is not a natural number; it is a function.

This incorrect proof shows that the way schools teach symbolic expressions in mathematics is very misleading. Schools teach students to treat expressions such as sin(x) or x^2 as functions, but this is just incorrect: sin(x) and x^2 are just numbers, in most instances. You can define functions s and f[2] such that s(x) = sin(x) and f[2](x) = x^2, but one must not conflate f[2] with x^2 or s with sin(x). s is a function, sin(x) is just a number; f[2] is a function, x^2 is just a number. The sketch below makes this distinction concrete.
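To make this concrete, here is a small Python sketch (purely my own illustration, not anything from the video; the names f1, f2, pointwise_product, and h are placeholders I chose). It treats f[1] and f[2] as genuine function objects, shows that the product of two functions is again a function, and checks that f[1]·f[1] agrees with f[2] at every natural number tested, while f[2](7) is just the number 49.

# Treat f[1] and f[2] as actual function objects N -> N.
def f1(x): return x
def f2(x): return x * x

def pointwise_product(f, g):
    # The product of two functions is again a function: (f*g)(x) = f(x)*g(x).
    return lambda x: f(x) * g(x)

h = pointwise_product(f1, f1)   # the function f[1]*f[1]

# f[1]*f[1] and f[2] agree at every natural number checked, which is the
# functional equation f[1]*f[1] = f[2].
print(all(h(x) == f2(x) for x in range(1000)))   # True

# By contrast, f2(7) is merely the number 49. An operator like D acts on the
# function objects h and f2, never on the number 49.
print(f2(7), h(7))   # 49 49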
Because numbers and functions are different types of mathematical objects, how you do mathematics with them changes quite significantly. You cannot take derivatives of numbers. You can take derivatives of functions. You can multiply numbers, and often, though not always, those products can be written as sums. You can multiply functions, but unlike with numbers, those products can almost never be written as sums, unless one of the factors is a constant function whose output is a natural number. The distinction seems pedantic after you have been seeing this misuse of notation for years, but despite how it may seem, I think this paradox demonstrates how important it actually is.

6:17 - 7:05 While you can definitely conclude 2 = 1 at this stage, division is not actually mathematically valid here, because we are dealing with natural numbers, and division is not an operation you can perform on natural numbers. If we were talking about rational numbers, that would be a different story. Instead, what you need to realize is that you have a functional equation here, 2·f[1] = 1·f[1]. Since f[1] is not the 0 function, it is scalar-cancellable, and so 2 = 1. Again, realizing that we are dealing with functions, rather than just numbers, is important.

7:14 - 9:57 This entire explanation is misleading. x is a number, not a function, so even calling it a constant is not quite accurate. It is, however, a variable. Variables and functions refer to different things. A variable just refers to some quantity that could differ: it could be any of many objects of the same type. The crucial issue is understanding that writing x·x as a sum of x's is acceptable for natural x (though imprecise, and should not really be done either way), but differentiating numbers is nonsense. Instead, you want to realize that f[1](x) = x for all natural x, and so x·x = f[1](x)·f[1](x) = (f[1]·f[1])(x) = f[2](x), thus f[1]·f[1] = f[2], which can be differentiated (in the strange sense clarified earlier), but unlike x·x, f[1]·f[1] cannot be written as a sum of f[1]'s, because f[1] is a function, not a natural number. So, technically, there are multiple mistakes in the proof.
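And to show the corrected computation in one place, here is one more small sketch (again just my own illustration, with placeholder names D, mul, f1, f2). It encodes a function of the family f[m] by its coefficient list, implements the operator D defined above, and confirms that D{f[1]·f[1]} = 2·f[1] by the product rule, whereas the video's termwise differentiation of x + ••• + x (x times) predicts only f[1].

# Represent a function N -> N of the form c0 + c1*x + ... + cn*x^n by its
# coefficient list [c0, c1, ..., cn].
def D(coeffs):
    # The operator defined above: D{f[m]} = m*f[m-1], extended linearly.
    return [m * coeffs[m] for m in range(1, len(coeffs))] or [0]

def mul(a, b):
    # Product of two coefficient lists.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

f1 = [0, 1]           # f[1](x) = x
f2 = mul(f1, f1)      # f[2] = f[1]*f[1], coefficients [0, 0, 1]

print(D(f2))                 # [0, 2], i.e. D{f[2]} = 2*f[1]
print([2 * c for c in f1])   # [0, 2], i.e. 2*f[1]: the two agree

# The proof in the video instead treats the "(x times)" count as a constant,
# differentiates each summand separately, and predicts D{f[2]} = f[1],
# i.e. coefficients [0, 1]; the difference is exactly the dropped
# product-rule term.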