Comments by "MC116" (@angelmendez-rivera351) on "The Math Sorcerer"
channel.
-
8
-
This equation is actually a better example of a homogeneous equation. Given y' – y/Id = –(y/Id)^2, the better substitution here is z = y/Id, equivalent to y = Id·z, hence Id·z' + z = y', hence y' – z = Id·z'. Thus, Id·z' = –z^2, and this is separable. Hence z = 0, or –z'/z^2 = 1/Id, hence 1/z = ln(–Id) + A if Id < 0, 1/z = ln(Id) + B if Id > 0, so z = 1/[ln(–Id) + A] if Id < 0, z = 1/[ln(Id) + B] if Id > 0, hence y = Id/[ln(–Id) + A] if Id < 0, y = Id/[ln(Id) + B] if Id > 0. This definitely works better using initial conditions, though. Given –z'/z^2 = 1/Id, integrate over [x, –1] and over [1, x], respectively. We have that 1/z(–1) – 1/z(x) = ln(–1/x) for x < 0 and 1/z(x) – 1/z(1) = ln(x) for x > 0, which translates to 1/z(x) = 1/z(–1) + ln(–x) for x < 0 and 1/z(x) = 1/z(1) + ln(x) for x > 0. Hence, for every x, y(x) = 0, or, for every x < 0, y(x) = x/[ln(–x) – 1/y(–1)], and for every x > 0, y(x) = x/[ln(x) + 1/y(1)]. More concretely, what we have is, for every x < 0, y(x) = y(–1)·x/[y(–1)·ln(–x) – 1], and for every x > 0, y(x) = y(1)·x/[y(1)·ln(x) + 1], or for every x, y(x) = 0. Even more succinctly, for every x, y(x) = 0, or y(x) = y[sgn(x)]·x/{y[sgn(x)]·ln(|x|) + sgn(x)}. As for the latter, notice that lim y (x —> 0) = 0. Since y' – y/Id = –(y/Id)^2, we have that lim y' – y/Id (x —> 0) = lim –(y/Id)^2 (x —> 0), and since lim y/Id (x —> 0) = 0, we have that lim y' (x —> 0) = 0. So, if we define y(0) = 0, then for every nonzero real x, y(0) = 0, y(x) = y[sgn(x)]·x/{y[sgn(x)]·ln(|x|) + sgn(x)}, and with y defined in this fashion for every real x, y is continuously differentiable everywhere, and includes y(x) = 0 in the case that y[sgn(x)] = 0. This is what the video misses.
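The solution above can be sanity-checked numerically. A minimal sketch in Python, assuming Id denotes the identity function (so the ODE reads y' – y/x = –(y/x)^2 and the x > 0 branch of the solution is y = x/[ln(x) + B]); the function names are mine:

```python
import math

# Sketch: check that y(x) = x/(ln(x) + B) satisfies y' - y/x = -(y/x)^2
# for x > 0, using a central-difference approximation of y'.
# (Assumption: "Id" in the comment above is the identity function x -> x.)
def y(x, B=2.0):
    return x / (math.log(x) + B)

def residual(x, B=2.0, h=1e-6):
    dy = (y(x + h, B) - y(x - h, B)) / (2 * h)  # numerical y'
    return dy - y(x, B) / x + (y(x, B) / x) ** 2

for x in [0.5, 1.0, 3.0, 10.0]:
    print(x, residual(x))  # residuals are ~0, up to finite-difference error
```

The residuals shrink to the finite-difference noise floor, consistent with the closed-form solution.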
5
-
The reason we use degrees, rather than hours, is that 360 hours denote not a full rotation of the Earth, but rather a half-period of the lunar cycle. 360 hours are 15 intervals of 24 hours, which is half a lunar month. As such, one hour is 15°, which is trigonometrically a very important and fundamental constant.
With all of that being said, I have learned that thinking of degrees as a unit of measurement of angles is not the conceptually appropriate way to think of them, if only because a ratio of two quantities with the same dimensionality is dimensionless, and thus is numerically independent of the units it is measured in: there should not exist such a thing as different units for a dimensionless quantity, mathematically speaking. So what is actually going on instead? Well, what is going on is that degrees are a scale factor for the trigonometric functions. Writing sin(1°) is the same thing as writing sin(π/180). In other words, sin(x°) = sin(π/180·x). So when you change from degrees to radians, you are not changing units of measurement. What you are doing instead is rescaling the trigonometric functions.
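The rescaling point is easy to demonstrate: sin(x°) is just sin evaluated at π/180·x. A small sketch (the name sin_deg is mine):

```python
import math

# The claim sin(x°) = sin(pi/180 * x): "degrees" are a rescaling of the
# trigonometric functions, not a different unit of measurement.
def sin_deg(x):
    return math.sin(math.pi / 180 * x)

print(sin_deg(30))  # ~0.5
print(sin_deg(15))  # sine of the "one hour" angle, 15°
print(sin_deg(90))  # 1.0
```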
5
-
I should note that, technically, the definition given in the video is not the definition of the integral. If f is a function on [a, b], then you can partition [a, b] into intervals [x(i), x(i + 1)] with x(0) = a and x(n + 1) = b, and you can tag this partition letting t(i) be an element of [x(i), x(i + 1)]. Then the Riemann sum over the tagged partition is the sum of f[t(i)]·[x(i + 1) – x(i)]. The mesh of a partition is given by max[x(i + 1) – x(i)]. The integral is equal to the limit as max[x(i + 1) – x(i)] —> 0 of the Riemann sums.
Here, we have that f(x) = 4·x^2 is a function on [1, 4]. So the Riemann sums are the sums of 4·t(i)^2·[x(i + 1) – x(i)]. Now, we have that max[x(i + 1) – x(i)] —> 0, so it is quite natural to have x(i + 1) – x(i) = max[x(i + 1) – x(i)] – d(i), with d(i) —> 0, so x(j) – x(0) = j·max[x(i + 1) – x(i)] – Sum{0 =< i =< j – 1, d(i)}. Let t(i) = x(i) + s(i) = x(0) + max[x(i + 1) – x(i)]·i + s(i) – Sum{0 =< j =< i – 1, d(j)}, with s(i) —> 0. Hence 4·t(i)^2·[x(i + 1) – x(i)] = 4·[1 + s(i) – Sum{0 =< j =< i – 1, d(j)} + max[x(i + 1) – x(i)]·i]^2·{max[x(i + 1) – x(i)] – d(i)} = 4·{[1 + s(i) – Sum{0 =< j =< i – 1, d(j)}]^2 + 2·[1 + s(i) – Sum{0 =< j =< i – 1, d(j)}]·max[x(i + 1) – x(i)]·i + max[x(i + 1) – x(i)]^2·i^2}·{max[x(i + 1) – x(i)] – d(i)} = 4·max[x(i + 1) – x(i)]·[1 + s(i) – Sum{0 =< j =< i – 1, d(j)}]^2 + 8·max[x(i + 1) – x(i)]^2·[1 + s(i) – Sum{0 =< j =< i – 1, d(j)}]·i + 4·max[x(i + 1) – x(i)]^3·i^2 – 4·[1 + s(i) – Sum{0 =< j =< i – 1, d(j)}]^2·d(i) – 8·max[x(i + 1) – x(i)]·[1 + s(i) – Sum{0 =< j =< i – 1, d(j)}]·d(i)·i – 4·max[x(i + 1) – x(i)]^2·d(i)·i^2. Since max[x(i + 1) – x(i)] is asymptotically equivalent to K/n for some real K > 0, one can use repeated applications of Tannery's theorem to conclude the limit as max[x(i + 1) – x(i)] —> 0 of the sums of the above is equal to the same limit of the sums of 4·max[x(i + 1) – x(i)] + 8·max[x(i + 1) – x(i)]^2·i + 4·max[x(i + 1) – x(i)]^3·i^2. The sums are given by 4·(n + 1)·max[x(i + 1) – x(i)] + 8·max[x(i + 1) – x(i)]^2·(n^2 + n)/2 + 4·max[x(i + 1) – x(i)]^3·(n^3/3 + n^2/2 + n/6), and as we have max[x(i + 1) – x(i)] —> 0, we get 4·K + 4·K^2 + 4/3·K^3.
For reference, we have, from the fundamental theorem of calculus, that the integral is equal to 4/3·(4^3 – 1^3) = 4·(4^2 + 4·1 + 1^2) = 4·(16 + 4 + 1) = 4·21 = 84. So we know 4/3·K^3 + 4·K^2 + 4·K – 84 = 0. This has only one real solution, K = 3. In fact, the way one would go about computing K is by having K = x(n + 1) – x(0) = b – a = 4 – 1 = 3, but proving that this is the case is tedious and very laborious in itself. This is why we do not use the definition of the integral to compute them, and instead, we prove the minimal integral theorems, which are easier to do than computing any given integral, and then use the theorems instead.
Also, if someone is wondering how exactly this is more correct: this is because t(i) is not taken to simply be equal to x(i) or x(i + 1), but is an arbitrary number in the enclosed interval. Otherwise, s(i) = 0 or s(i) = x(i + 1) – x(i), respectively; and x(i + 1) – x(i) is not constant, hence t(i) is not merely a first-degree polynomial function of i. Otherwise, d(i) = 0.
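The definition via tagged partitions can be illustrated directly: take randomly placed partition points and random tags t(i), and watch the sums approach 84 as the mesh shrinks. A sketch, with names of my choosing:

```python
import random

# Sketch: tagged Riemann sums of f(x) = 4x^2 on [1, 4] with randomly
# placed partition points and a random tag t in each subinterval.
# As the mesh -> 0, the sums approach 84 = (4/3)(4^3 - 1^3).
def riemann_sum(f, a, b, n, rng):
    # random partition: n - 1 interior points, sorted, plus the endpoints
    xs = sorted([a, b] + [rng.uniform(a, b) for _ in range(n - 1)])
    total = 0.0
    for x0, x1 in zip(xs, xs[1:]):
        t = rng.uniform(x0, x1)  # arbitrary tag in [x(i), x(i+1)]
        total += f(t) * (x1 - x0)
    return total

rng = random.Random(0)
f = lambda x: 4 * x ** 2
for n in [10, 100, 10000]:
    print(n, riemann_sum(f, 1, 4, n, rng))  # tends toward 84
```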
4
-
0:20 - 0:30 What type of object does x represent? Simply calling it a variable does absolutely nothing.
0:30 - 0:38 This implies x is the element of a near-semiring, or some similar structure.
0:39 - 1:26 For an arbitrary element x of a near-semiring, x can be guaranteed to be expressible as a sum of 1s only if the near-semiring is the free near-semiring generated by {0, 1}, which is actually the semiring of natural numbers {0, 1, 1 + 1, ...}. In other semirings, the statement just made by the video is false. For example, in the ring of Gaussian integers, 2 + 3·i cannot be written as a sum of 1s. In the ring of algebraic numbers, sqrt(2) cannot be written as a sum of 1s.
2:27 - 3:00 Once again, this assumes x is an element of the natural numbers. If x is an element of some other semiring that is not a subsemiring of the natural numbers, then this statement is false.
3:00 - 5:05 Many comments to the video have indicated that you cannot differentiate using this rule, because the definition of the derivative is only applicable to the real numbers, and here, x is restricted to the natural numbers. Technically, this is true, but it does not genuinely address the issue at hand. Why do I say that? Because given a function f[m] : N —> N such that f[m](x) = x^m, there is nothing stopping me from defining an operator D such that D{f[m]} = m·f[m – 1] for m > 0, and D{f[0]} = 0. Whether this operator deserves to be called "the derivative" or not is a discussion that is not relevant to the argument in the video. Choosing to define such an operator is perfectly valid, and so this is not where the proof goes wrong. The proof goes wrong later.
5:05 - 6:17 This is where the proof goes wrong. Let g : N —> N be such that g(x) = x + ••• + x (x times). D{g}(x) is not equal to 1 + ••• + 1 (x times). Why not? Because the (x times) part was ignored. Instead, D{g}(x) = 1 + ••• + 1 (x times) + x + ••• + x (1 times) = x + x = 2·x. Another way to see the mistake is that x^2 = x·x, and so, if D{x^2} = 2·x, then D{x·x} = 2·x as well, but instead, in the proof, it is assumed that D{x·x} = x·D{x} = D{x}·x = x, which is incorrect. D{x·x} = x·D{x} + D{x}·x = 2·x. This also highlights why saying x·x = x + ••• + x (x times) is a mistake. Yes, for natural x, this is correct, if sloppy and imprecise, but the moment you bring the operator D into this, it changes things, because we are no longer dealing with x as a natural number, we are dealing with functions. x is not a function, and neither is x^2: these are natural numbers in their own right. However, the functions f[1] and f[2] defined earlier, such that f[1](x) = x and f[2](x) = x^2, do indeed represent functions, and so, it is meaningful to talk about D{f[1]} and D{f[2]}. However, it is not meaningful to say f[2] = f[1] + ••• + f[1], or anything silly like that. f[2] cannot be written as a sum of f[1]s only. f[2] = f[1]·f[1], but this cannot be written as a sum, since f[1] is not a natural number, it is a function. This incorrect proof shows that the way schools teach symbolic phrases in mathematics is very misleading. Schools teach students to treat expressions such as sin(x) or x^2 as functions, but this is just incorrect: sin(x) and x^2 are just numbers, in most instances. You can define functions g and f[2] such that g(x) = sin(x) and f[2](x) = x^2, but one must not conflate f[2] with x^2 or g with sin(x). g is a function, sin(x) is just a number. f[2] is a function, x^2 is just a number. Since they are different types of mathematical objects, how you do mathematics with them changes quite significantly. You cannot take derivatives of numbers.
You can take derivatives of functions. You can multiply numbers, and often, though not always, those products can be written as sums. You can multiply functions, but unlike with numbers, these products can almost never be written as sums, unless one of the factors is a constant function whose output is a natural number. The distinction seems pedantic after you have already been seeing this misuse of notation for years, but despite how it may seem, I think this paradox demonstrates how important it actually is.
6:17 - 7:05 While you can definitely conclude 2 = 1 at this stage, division is actually not mathematically valid, because we are dealing with natural numbers here, and division is not an operation you can perform with natural numbers. If we were talking about rational numbers, that would be a different story. Instead, what you need to realize is that you have a functional equation here, 2·f[1] = 1·f[1]. Since f[1] is not the 0 function, it is scalar-cancellable, and so 2 = 1. Again, realizing that we have functions, rather than just numbers, is important.
7:14 - 9:57 This entire explanation is misleading. x is a number, not a function, so even calling it a constant is not quite accurate. It is, however, a variable. Variables and functions refer to different things. A variable just refers to some quantity that could be different: it could be any of many of the same type. The crucial issue is understanding that writing x·x as a sum of x's is acceptable (though imprecise, and should not really be done either way), but differentiating numbers is nonsense. Instead, you want to realize that f[1](x) = x for all natural x, and so x·x = f[1](x)·f[1](x) = (f[1]·f[1])(x) = f[2](x), thus f[1]·f[1] = f[2], which can be differentiated (in the strange sense I clarified earlier), but unlike with x·x, f[1]·f[1] cannot be written as a sum of f[1]'s, because f[1] is a function, not a natural number. So, technically, there are multiple mistakes in the proof.
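The operator D and the product-rule point can be made concrete by representing the functions f[m] as coefficient lists. A minimal sketch (the encoding and function names are mine): D{f[1]·f[1]} comes out as 2·f[1], not f[1] as the flawed proof would have it.

```python
# Sketch: polynomial functions on N as coefficient lists, the operator D
# with D{f[m]} = m*f[m-1], extended linearly, and a check that D obeys
# the product rule: D{f[1]*f[1]} = 2*f[1], not f[1].
def mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def D(p):
    # the coefficient of x^m contributes m*coeff at position m - 1
    return [m * c for m, c in enumerate(p)][1:] or [0]

f1 = [0, 1]        # f[1](x) = x
f2 = mul(f1, f1)   # f[2] = f[1]*f[1], i.e. x^2
print(D(f2))       # [0, 2], i.e. 2*x
print(D(f1))       # [1], i.e. the constant 1
```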
4
-
Let us stay with the real numbers before going into the complex numbers. Above, I explained that you can use the ordering of the real numbers to extend the real numbers into a system where every subset has a supremum, and this introduces the objects T and B, which Wikipedia (and most non-mathematicians) call ∞ and –∞.
However, there is a different extension you can do with the real numbers. Rather than using the ordering, you can use a notion in the theory of topological spaces called a one-point compactification. The idea is similar to that of the affinely extended real line, except that you take the two endpoints that this line has, and you join them together into one point. It is almost as if you are saying "let –∞ = ∞," but you can do this very rigorously, and it is actually very useful as well. While you do give up the ability to do arithmetic, like with the affinely extended real numbers, and while you also give up the ordering, unlike the affinely extended real numbers, it gives you one huge benefit in exchange: the ability to do projective geometry. As such, this structure is called the projectively extended real line. You can study Möbius transformations (whenever people write 1/0 = ∞, they are talking about a special case of an elementary Möbius transformation, they are not talking about division, and this is clearly another example of abuse of notation, which again, is by no means universal).
If you think about it visually, it amounts to turning the real line into a real "circle" of sorts, but a circle with an infinite circumference. This is intentional, because one grand motif in projective geometry is that lines are treated like circles, and parabolas are just treated like ellipses, and since circles are ellipses, they are all just ellipses in the context of projective geometry. Lines are just ellipses with infinite eccentricity, and parabolas are just ellipses with eccentricity 1. You can even get funky and treat hyperbolas like ellipses as well if you allow complex numbers. More importantly, this type of structure allows you to systematically study the different kinds of asymptotes that exist. One intuition behind this is that it makes calculus more symmetric, so to speak. Rather than speaking about approaching T or B (∞ and –∞ respectively), you instead speak about approaching ∞ (again, we really should use a different symbol) from the right (the positive real numbers) or the left (the negative real numbers), which feels rather natural.
The projective extended complex plane, also known as the Riemann sphere, is actually just a trivial extension of the projectively extended real line: it is just the union of the complex numbers and the projectively extended real line. The Riemann sphere is extremely useful in complex analysis as it simplifies many different concepts, and it serves as the main inspiration of wheel theory. In your comment, you talked about the complex numbers having only one "infinity," and it refers precisely to this, the Riemann sphere.
This brings me back to why I said it is confusing to use the word "infinity" here. You said "the real numbers have 2 infinities, while the complex numbers have 1 infinity." This is clearly not the case, though. As you can see, there are two different ways you can extend the real numbers, one of them introducing two new objects (which you can think of as infinite if you want, but please do not call them "infinity"), and the other one introducing only one object (an object completely unrelated to the T and B of the other extension). In principle, there are other rigorous ways you can extend the real numbers to introduce infinite objects, though they may not necessarily be useful. As for the complex numbers, I only talked about the projective extension, but you absolutely can use an affine extension of the complex numbers if you want to, it is perfectly valid, just nowhere near as useful as the Riemann sphere. This introduces not two new objects, but infinitely many new objects, actually. The intuition is that, in analogy to how the real line was closed off by two endpoints, the complex plane is being closed off by a border which, intuitively, is the shape of a circle with infinite circumference. Each new infinite object corresponds to a direction in the complex plane. You can come up with a toroidal extension, where the real axis and the imaginary axis both get projectively extended, but separately, so that you get two new different infinite objects: one imaginary, one real. You can do all sorts of other extensions, as long as they are mathematically coherent.
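The projectively extended real line and its Möbius transformations can be sketched with an encoding of my choosing: a single sentinel INF for the one new point, and x —> (a·x + b)/(c·x + d) handled case by case. "1/0 = ∞" is then just the a = 0, b = 1, c = 1, d = 0 case, not division.

```python
# Sketch of the projectively extended real line: one extra point INF,
# with Möbius transformations x -> (a*x + b)/(c*x + d).
INF = object()  # the single point at infinity (hypothetical encoding)

def mobius(a, b, c, d, x):
    assert a * d - b * c != 0  # the transformation must be invertible
    if x is INF:
        return INF if c == 0 else a / c
    denom = c * x + d
    if denom == 0:
        return INF
    return (a * x + b) / denom

recip = lambda x: mobius(0, 1, 1, 0, x)  # the "1/x" transformation
print(recip(2.0))         # 0.5
print(recip(0.0) is INF)  # True: 0 maps to the point at infinity
print(recip(INF))         # 0.0: the point at infinity maps back to 0
```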
3
-
0:03 - 0:12 You would need to specify what ring you are working in. In the trivial ring, 0 = 1 is true. For the sake of this video, then, I suppose we should assume we are working in a non-trivial ring, so that the proof could be conceivably incorrect.
0:32 - 0:43 This is not actually necessary, nor does it add more correctness to the proof. The restriction that x is nonzero ends up being irrelevant.
2:50 - 3:12 It should be noted that a^2 – b^2 = (a – b)·(a + b) is only true if a and b commute, which is to say, if a·b = b·a. Since x = y, x·y = y·x is indeed true, but this should be stated.
4:10 - 4:11 At this stage in the proof, we most definitely have a 0 = 0 situation. Specifically, since x = y, it follows that x – y = 0, and so (x – y)·(x + y) = 0·(x + y), while y·(x – y) = y·0 = 0·y, and so we have 0·(x + y) = 0·y.
4:12 - 4:41 This is where the proof went wrong. As I noted in my previous paragraph, the equation (x – y)·(x + y) = (x – y)·y is equivalent to 0·(x + y) = 0·y, since x = y implies x – y = 0. What the video is thus effectively doing is declaring that 0·(x + y) = 0·y implies x + y = y, which is not true. This is because, even when a is not equal to b, 0·a = 0·b = 0, and this is true in all rings. This is equivalent to just saying that 0·2 = 0·1 implies 2 = 1, which is obviously not the case.
5:13 - 5:29 This means x is idempotent with respect to addition, and since this is a ring, it implies x = 0. This contradicts the fact that x is arbitrary, though.
5:39 - 5:45 This extra restriction is not necessary; all you need is for x to be arbitrary.
6:18 - 6:21 Even if it were true, it would not show the universe will end.
9:04 - 9:08 It is not that you are "not allowed to divide by 0." Rather, it is that, since we are working in a ring, as is required for distributivity to apply, and addition, subtraction, and multiplication to be well-defined, it must be the case that 0·x = y·0 = 0 for all x, y, and so 0 is not cancellable, which means that even if a is not equal to b, 0·a = 0·b.
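The failed cancellation can be exhibited with concrete integers: with x = y, every line up to the cancellation step is the true statement 0 = 0, and cancelling the factor x – y = 0 is exactly what breaks. A sketch:

```python
# Sketch of the "2 = 1" proof with concrete integers x = y = 1:
# every equation up to the cancellation holds (both sides are 0),
# and cancelling the factor (x - y) = 0 is the invalid step.
x = y = 1
assert x * x - y * y == (x - y) * (x + y)  # difference of squares
assert x * x - x * y == (x - y) * y        # factor out y
assert (x - y) * (x + y) == (x - y) * y    # true: both sides are 0
assert not (x + y == y)                    # "cancelling" (x - y) fails
print("0*(x+y) == 0*y holds, but x+y != y: 0 is not cancellable")
```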
3
-
Let f : [0, ∞) —> R, where R is the ordered field of real numbers, and [0, ∞) := {t in R : t >= 0}, and f(x) = sqrt(x) everywhere. We say lim f(x) (x —> ∞) = L for some real number L, if and only if for all real numbers ε > 0, there exists some U > 0, such that if x > U, then |f(x) – L| < ε. |f(x) – L| < ε is equivalent to L – ε < sqrt(x) < L + ε. Since sqrt(x) >= 0, it follows that max(0, L – ε) < sqrt(x) < L + ε. This is equivalent to x >= 0 and max(0, L – ε)^2 < x < (L + ε)^2. However, no matter how U is chosen, any x > max[U, (L + ε)^2] satisfies x > U while failing x < (L + ε)^2, so |f(x) – L| < ε fails for such x. Therefore, lim sqrt(x) (x —> ∞) = L is false for every real number L, which means lim sqrt(x) (x —> ∞) does not exist.
On the other hand, for all real numbers B > 0, there does exist a real number U > 0, such that if x > U, then sqrt(x) > B. This would be the case whenever U >= B^2. Therefore, one may say that as x —> ∞, sqrt(x) —> ∞. This is just special notation to say what I said above.
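The witness U >= B^2 can be checked directly; a small sketch (witness_U is my name for it):

```python
import math

# Sketch of the definition: for every bound B > 0, U = B^2 works,
# i.e. x > U implies sqrt(x) > B, which is what the notation
# "sqrt(x) -> infinity as x -> infinity" abbreviates.
def witness_U(B):
    return B ** 2

for B in [1.0, 10.0, 1000.0]:
    U = witness_U(B)
    x = U + 1.0  # any x > U
    print(B, math.sqrt(x) > B)  # True for every B
```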
3
-
Let sqrt : [0, ∞) —> [0, ∞) be a bijection, such that sqrt(x)^2 = x everywhere. Let g : [5, ∞) —> [0, ∞) be such that g(x) = x – 5 everywhere. g is a bijection. Let f := sqrt°g. Since sqrt and g are bijections, f is a bijection. Therefore, dom(f) = dom(g) = [5, ∞), and range(f) = codom(f) = codom(sqrt) = [0, ∞). Since g(x) = x – 5 everywhere, f(x) = (sqrt°g)(x) = sqrt(x – 5) everywhere. Therefore, graph(f) = {{{x}, {x, sqrt(x – 5)}} in [5, ∞) cross [0, ∞) : x in [5, ∞)}.
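A sketch of f = sqrt°g as an actual function with the stated domain [5, ∞) (the error handling is my addition, standing in for "x is not in dom(f)"):

```python
import math

# Sketch: f = sqrt composed with g, where g(x) = x - 5 on [5, oo),
# so dom(f) = [5, oo) and f(x) = sqrt(x - 5) everywhere on it.
def f(x):
    if x < 5:
        raise ValueError("%r is not in dom(f) = [5, oo)" % x)
    return math.sqrt(x - 5)

print(f(5))  # 0.0, the left endpoint
print(f(9))  # 2.0
```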
3
-
Mathematics are not static either. How mathematics were understood in the 1800s is very different from how they are understood today: so different, it would be unrecognizable.
By your extremely flawed logic, nothing should be taught at all. Science is not static, languages are not static, literature and art are not static, social skills are not static. Nothing is static, and again, I would like to emphasize, mathematics are no exception to this. So by insinuating that there is something wrong with teaching skills that are contemporary, you are insinuating that teaching any skills at all is wrong, since all skills are contemporary and change drastically over time.
3
-
To start with, you should probably stop using the word "infinity" to talk about these objects, because it is mostly nonsensical, and it does not correspond to the way the mathematical theory actually works, and so, you will likely just end up confusing yourself by trying to think of these objects all under the name "infinity." "Infinity" is not a mathematical object, it is just some vague, pseudo-mathematical concept. Yes, in mathematics, we work with infinite objects all the time, but these objects all have different names, they are not "infinity," and they are well-defined within a given context, but they are not denoted by the symbol ∞ that everyone uses. These objects all do different things, and what it means for them to be "infinite" means different things, depending on what they are. This is why, if you want to understand what is really happening, you should really stop thinking of "infinity" as more than just a concept, and instead, just look at mathematical concepts for what they are. I believe this is the most useful advice I can give you to move forward with understanding these topics.
The mathematical object that we call the "real number system" is defined in such a way that it has two fundamental components. One of the components makes the real numbers form a field. This means you can add these objects, you can multiply them, and you can (mostly) divide them, and there are certain strict rules for how to do that. The other component is that of an ordering: you can order the real numbers, you can compare them. It makes sense to say that 0 is less than 1. It makes sense to say that 1/3 < 1/2. This is important, because not every mathematical structure has this. Now, the way the real numbers are ordered is actually very, very special. In what way is it special? In three ways: (a) addition and multiplication are compatible with the ordering; (b) the ordering is total, meaning that for any two real numbers x, y, you can always, without exception, compare them; (c) the ordering satisfies the least upper bound property. This last one may confuse you. What is the least upper bound property? It is the property that says that if I take any arbitrary (non-empty) subset S of the real numbers, then if S is bounded from above (it has an upper bound), it must have a least upper bound. For example, consider the interval (0, 1) (the endpoints are excluded). This interval has an upper bound. 2 is an upper bound. 10 is an upper bound. π is an upper bound. However, of all the infinitely many upper bounds, one of them is the smallest one possible, and this is the real number 1. Why is this the smallest upper bound? Because any real number less than 1 is either in the interval (0, 1), or smaller than 0, and so, not an upper bound of (0, 1). Even though (0, 1) has no greatest real number, it does have a least upper bound, which is 1. By the way, the least upper bound is also called the supremum, and this is the name I will be using from now on.
This should make you think a little more about (c). The ordering is such that every nonempty set of real numbers bounded from above has a supremum, but it feels as though we should be able to make this even stronger. What happens if we extend the real numbers, in such a way that all sets of real numbers have a supremum? For this to be possible, there needs to exist some object greater than all real numbers, and this object will be the largest object in the extended system. I will call this object T, which stands for "top." There also needs to exist some object smaller than all real numbers, and this object will be the smallest in the extended system. This object needs to exist so that the empty set can have a supremum in this ordering. This object, I will call B, which stands for "bottom." Hence, we have the set of all real numbers, and also, the objects T and B. Together, this new ordered system is called the affinely extended real line, and the geometric, visual idea is that the line of real numbers has been extended in such a way that it now has endpoints, B and T. If you read the Wikipedia article, or some other popular but non-scholarly source, then you will find that the symbols for T and B that they use are ∞ and –∞, and they usually are read "infinity" and "negative infinity." However, as I already explained to you, this notation/language is very confusing, and it is misleading, if not outright incoherent, and this is not universal in the mathematical literature either. So, although I do want you to be aware where the symbols –∞ and ∞ come from when doing calculus, I will not be using them, unless it amounts to clarifying something, because if I do use them, it will confuse you. Every time you see a statement of the form lim f(x) (x —> ∞) = L, you should replace this with lim f(x) (x —> T) = L, and similarly with –∞ and B. Also, every time you see lim f(x) (x —> p) = ∞, you should replace this with lim f(x) (x —> p) = T, and again, analogously with –∞ and B.
One thing that is very important to understand is that, while you can do arithmetic with real numbers, you cannot do arithmetic with T and B (which is why people usually say "infinity is not a number"). Yes, these are valid, well-defined mathematical objects as far as the ordering system is concerned, but you cannot perform addition, multiplication, or division with these objects, without creating a bunch of contradictions. This can be proven carefully, but I will not do that here. And I know that often, you will see these strange "conventions" where you see things like 1/∞ = 0 and x + ∞ = ∞, but these are just abuse of notation, and are not universal: different contexts use different conventions for abusing notation. Properly speaking, there actually is no such a thing as arithmetic with these objects. You can still do certain operations with these objects, like the supremum operation, of course, and you can still define many classes of functions for these objects whenever you rely on the ordering rather than arithmetic, but these functions and operations are not the operations we usually call "arithmetic." This is why it is very tricky and complicated to evaluate limits when it comes to expressions where these objects become involved in some way or another, and actually, you are not required to have these objects to do any calculus at all.
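Incidentally, IEEE-754 floating point hard-codes one particular set of these notational conventions, and its gaps illustrate the point: the ordering works fine, but no value can coherently be assigned to ∞ – ∞.

```python
import math

# The "conventions" 1/inf = 0 and x + inf = inf are notational choices,
# not arithmetic on T and B. IEEE-754 floats adopt one such set of
# choices, and the gap shows up immediately: inf - inf is undefined (NaN).
inf = math.inf
print(1.0 / inf)               # 0.0   (the 1/inf = 0 convention)
print(inf + 1.0)               # inf   (the x + inf = inf convention)
print(math.isnan(inf - inf))   # True: no consistent value exists
print(max(inf, 1e308) == inf)  # True: the ordering with T still works
```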
Anyway, this is all to say that the conceptual idea I just explained, of making sure all sets of real numbers have a supremum (an idea that is indeed very useful, as long as you are not trying to turn it into an arithmetic number system), is where the symbols –∞ and ∞ come from. Meanwhile, the usage of the symbol ∞ in the context of complex analysis is completely unrelated to this, and again, it just boils down to abuse of notation. But what is the actual underlying concept behind it? I will explain that in the next comment.
2
-
0:52 - 0:55 Saying "I can keep adding 0s forever and it will continue giving me 0" is a mathematically incoherent statement, because of the non-concept of "adding forever." It is extremely easy to make mistakes in mathematics when using colloquial notions and intuition, when you should be using precise language instead. I think this here is the actual mistake in the "proof," rather than anything else. There is nothing that can be discussed at all about the validity of the proof until the terms in the proof are actually precisely defined, to begin with, so nothing else could possibly be meaningfully identified as the mistake. This is not very different from how, if I say "Gooblydegock is heavy and of the color martin," it amounts to nothing but literal gibberish, since the words "gooblydegock" and "martin" are undefined terms. There is no meaning in saying the sentence is true or false: conceptually, it is not even really a sentence at all, to begin with. It is merely a nonsensical string of symbols. On that note, the idea being conveyed has multiple inequivalent ways of being formalized, but the most basic of those formalizations, and the one most commonly used, as well as the one most relevant to this video, would be to consider a sequence f(n) = 0. You can find the sequence of partial sums of f, and that gives you still s[f](n) = 0. This also means lim s[f](n) = 0. Yes, I do understand that colloquial language and intuition are important in the context of teaching. However, as far as proofs are concerned, those are completely inappropriate.
2:20 - 3:37 What is truly happening here is that we are talking about the partial sums of two different sequences. Earlier, we had f(n) = 0. Here, we have g(0) = π, g(n) = 0 otherwise. What the video is claiming is that lim s[g](n) = lim s[f](n). This is clearly false. The reason this is confusing, though, is that the notation used in the video (which is inappropriate when doing mathematics) makes it seem as though you are still talking about the sequence f and its partial sums, but in reality, it has been replaced by the sequence g with its partial sums, without the viewer noticing, because the notation being used is simply misleading and ill-defined. This goes back to what I said in my first paragraph. Since it is not even clear what the notation being used is even supposed to mean, it can be easily used to deceive people. This is not a matter of unnecessary pedantry, it lies at the very core of the problem.
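The two sequences and their partial sums can be written out explicitly; a sketch, with s[f] and s[g] rendered as partial_sums:

```python
import math

# Sketch: the two sequences actually in play. f(n) = 0 everywhere,
# while g(0) = pi and g(n) = 0 otherwise. Their partial sums have
# different limits, so lim s[g](n) != lim s[f](n).
def partial_sums(seq, N):
    total, out = 0.0, []
    for n in range(N):
        total += seq(n)
        out.append(total)
    return out

f = lambda n: 0.0
g = lambda n: math.pi if n == 0 else 0.0
print(partial_sums(f, 5))  # [0.0, 0.0, 0.0, 0.0, 0.0] -> limit 0
print(partial_sums(g, 5))  # [pi, pi, pi, pi, pi]      -> limit pi
```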
2
-
@thebestof2341 No, it was not. The order of operations allows a + (b + c) = (a + b) + c. This is called associativity. In fact, the order of operations exists only because of associativity being true.
Also, I should point out that you are not actually obligated to follow the order of operations, since it is just a convention. Mathematicians could have chosen a different convention altogether, and it would have been just as valid. In fact, you can rewrite all of mathematics using Polish notation, where no order of operations is even needed, as the notation is unambiguous, unlike the infix notation most mathematicians use when publishing, which is the same notation we use. The ambiguity of infix notation is why the order of operations exists. However, with that being said, the order of operations was followed in the video. Anyone saying otherwise does not understand the order of operations.
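A tiny evaluator for Polish (prefix) notation shows why no order of operations is needed there: the notation is unambiguous. A sketch (supporting +, -, * only; the names are mine):

```python
import operator

# Sketch: Polish (prefix) notation needs no precedence rules, because
# each expression parses in exactly one way. "+ 1 * 2 3" can only
# mean 1 + (2 * 3); infix "1 + 2 * 3" needs a convention to decide.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def eval_prefix(tokens):
    tok = tokens.pop(0)
    if tok in OPS:
        left = eval_prefix(tokens)
        right = eval_prefix(tokens)
        return OPS[tok](left, right)
    return int(tok)

print(eval_prefix("+ 1 * 2 3".split()))  # 7
print(eval_prefix("* + 1 2 3".split()))  # 9
```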
2
-
You can't take the negation of "for all" if there is no "all", i.e, no members of φ, to begin with; by definition of logical negation and empty set.
No, this is nonsensical. You should really sit down one of these days and read an introductory-level book in formal logic. You absolutely can negate quantifiers over an empty domain. The logical negation of the universal quantifier is equal to the existential quantifier of the negation, and this remains true in the empty domain.
But also, this objection is genuinely stupid: he never used universal quantifiers in his proof.
The contradiction is that φ (the empty set) is not a subset of A, φ is empty to begin with.
0. φ does not denote the empty set. Stop calling it "phi". We do have notation for the empty set, namely ø and {}. φ is not one such valid notation.
1. {} is empty, which is why it is a subset of A. There is no contradiction here. Since the empty set has no elements, every element this empty set has (i.e., none) is an element of A, and this much is true.
If you assume {} has members to begin with, the contradiction doesn't change that.
It absolutely does. If {} actually has elements, then at least one of those elements is not an element of A. So {} is not a subset of A.
The contradiction applies to subset, not membership.
No, it applies to membership, because the contradiction occurs on the existential quantifier. Also, this is a silly objection: the subset relation is defined exclusively in terms of the membership relation.
Otherwise, your argument runs: Assume {} has members...
No, that is definitely not at all how his argument would run.
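The vacuous-truth point, with the subset relation defined purely via membership, can be checked mechanically; a sketch:

```python
# Sketch: the subset relation defined exclusively via membership.
# "Every element of {} is an element of A" is vacuously true, so
# the empty set is a subset of every set.
def is_subset(S, A):
    return all(x in A for x in S)

A = {1, 2, 3}
print(is_subset(set(), A))   # True: the empty set is a subset of A
print(is_subset({1, 4}, A))  # False: 4 witnesses the failure
```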
2
-
@southali If the room has no vacuums, how can the vacuum be a subset of anything since it doesn't exist?
As I pointed out already, it does exist. But also, this argument is fundamentally fallacious, since this is an inadequate analogy for sets. The problem here is that objects in space do not merely form sets. Sets do not change when an element appears a different number of times, yet in this room, changing the number of particles of any given molecule type definitionally changes the mereological sum of the particles. Also, because the existence of any given configuration is also dependent on the amount of space, and not just the number of elements, there is no well-defined notion of a subset. For example, the configuration where we only consider the oxygen atoms is also not a valid subset, because it is not true, at least according to you, that the room only has oxygen atoms. This is the issue. So the vacuum is not the issue here: the scenario itself is incapable of working with a coherent notion of subset. Again, this is because the scenario you are describing is not at all analogous to a set. What you are describing is a differentiable manifold with topological deformities of a given type. This has nothing to do with sets, really.
Also, I should point out that it is impossible to refute a valid formal proof using analogies from intuition, regardless of how much sense you think those analogies make. Analogies, by their very nature, are necessarily flawed. This is why we use logic, rather than analogies, for evaluating the validity of proofs.
2
-
This is incorrect. The domain can actually be any set, including the empty set. In particular, the domain can be N, Z, Q, or C. It need not be R. In fact, the equation f(x) = 5 does not specify a function. The domain of a function cannot be uniquely determined by an equation, and neither can the codomain or the range. To the contrary: for f to be a well-defined function, the domain must be specified in and of itself, and the same is true for the codomain. I could have chosen the domain to be Z, the codomain to be Q, so f : Z —> Q, such that f(x) = 5 everywhere, and this would be a well-defined function. If instead, g : Q —> C, such that g(x) = 5 everywhere, then this still would be a well-defined function, and it is a different function from f. In both cases, the range is {5}, but the domain and codomain are different. In fact, the meaning of the symbol 5 is technically different in both cases. For f, f(x) = 5 is a rational number, whereas g(x) = 5 is a complex number. Despite the fact that we use the same symbol to denote them, these are different, unequal mathematical objects.
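The point that a function is the triple (domain, codomain, rule), not the equation alone, can be sketched with an encoding of my choosing (finite stand-ins for the number sets, since infinite sets cannot be enumerated in code):

```python
# Sketch: a function modeled as (domain, codomain, rule), so that two
# functions with the same rule x -> 5 but different domains/codomains
# are different objects.
class Function:
    def __init__(self, domain, codomain, rule):
        self.domain, self.codomain, self.rule = domain, codomain, rule

    def __call__(self, x):
        if x not in self.domain:
            raise ValueError("argument is not in the domain")
        return self.rule(x)

    def same_function(self, other):
        return (self.domain == other.domain
                and self.codomain == other.codomain
                and all(self.rule(x) == other.rule(x) for x in self.domain))

# finite stand-ins for Z and Q (assumption: illustration only)
Z = {-2, -1, 0, 1, 2}
Q = {-2, -1, 0, 1, 2, 5}
f = Function(Z, Q, lambda x: 5)
g = Function(Q, {5}, lambda x: 5)
print(f(1))                # 5
print(f.same_function(g))  # False: same rule, different domain/codomain
```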