Comments by "MC116" (@angelmendez-rivera351) on "The Math Sorcerer"
channel.
-
Let (Q, 0, 1, +, ·) be the field of rational numbers. Hence (Q\{0}, 1, ·) is an Abelian group. Let f : Q\{0} —> Q\{0} be such that f(x) = x·x everywhere, and let f[Q\{0}] denote the range of f. Since f(1) = 1, and since f(x·y) = f(x)·f(y) for all x, y in Q\{0}, f is a group homomorphism. Therefore, if • is · restricted to f[Q\{0}]^2, then (f[Q\{0}], 1, •) is an Abelian group. Q. E. D.
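As a quick sanity check of the argument above (my own sketch, using Python's fractions.Fraction as a stand-in for elements of Q), one can verify the homomorphism property and that the image of f is closed under multiplication and inverses:

```python
from fractions import Fraction

def f(x):
    # the squaring map on Q \ {0}
    return x * x

# a few nonzero rationals
xs = [Fraction(1), Fraction(-2, 3), Fraction(5, 7), Fraction(-11, 4)]

# identity: f(1) = 1
identity_ok = f(Fraction(1)) == 1

# homomorphism: f(x*y) = f(x)*f(y), so the image is closed under ·
hom_ok = all(f(x * y) == f(x) * f(y) for x in xs for y in xs)

# inverses stay in the image: 1/f(x) = f(1/x)
inv_ok = all(1 / f(x) == f(1 / x) for x in xs)
```

This only samples finitely many rationals, of course; the proof in the comment is what covers all of Q \ {0}.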
-
It is harmful advice to tell students that the correct way to evaluate a limit is to start by "plugging in" the value into the function. This is incorrect, and it fails whenever the function is discontinuous at that point, which you cannot know unless you already know the limit. Below, I present the correct method for doing this exercise.
Let f(x) = x – 1, and let g(x) = x^2·(x + 2). lim f(x) (x —> –2, x > –2) = –3, and lim g(x) (x —> –2, x > –2) = 0. Therefore, lim f(x)/g(x) (x —> –2, x > –2) = lim (x – 1)/(x^2·(x + 2)) (x —> –2, x > –2) does not exist.
However, you can say that f(x)/g(x) —> –∞ as x —> –2 with x > –2. You can say this because f(x)/g(x) < 0 and g(x) —> 0 as x —> –2 with x > –2.
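To illustrate the behavior numerically (my own sketch, not part of the original comment), one can sample x approaching –2 from the right and watch the ratio stay negative while its magnitude blows up:

```python
def f(x):
    return x - 1

def g(x):
    return x ** 2 * (x + 2)

# approach -2 from the right: x = -2 + 10^-k
xs = [-2 + 10.0 ** -k for k in range(1, 8)]
ratios = [f(x) / g(x) for x in xs]

# the ratio is negative and its magnitude grows without bound,
# consistent with f(x)/g(x) -> -inf as x -> -2 with x > -2
all_negative = all(r < 0 for r in ratios)
magnitudes_grow = all(abs(b) > abs(a) for a, b in zip(ratios, ratios[1:]))
```

Numerical sampling is evidence, not proof; the sign analysis in the comment is what actually justifies the –∞ conclusion.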
-
In the first line only, 0 cannot be written as a sum of infinite zeroes.
Technically, there is no such thing as an infinite sum. However, there is such a thing as the limit of a sequence. You can write 0 as the limit of the zero sequence. Consider this: z(m) = 0 everywhere. Let s[z](0) = 0, and s[z](m + 1) = z(m) + s[z](m) everywhere. Hence, s[z](m) = 0 everywhere. s[z] is the sequence of partial sums of z, so lim s[z] is the "value of the series" of z. lim s[z] = 0. This is completely valid.
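The partial-sum construction above can be transcribed directly (a small sketch of my own, with z and s named as in the comment):

```python
def z(m):
    # the zero sequence: z(m) = 0 everywhere
    return 0

def s(n):
    # s[z](0) = 0 and s[z](m + 1) = z(m) + s[z](m)
    total = 0
    for m in range(n):
        total += z(m)
    return total

partial_sums = [s(n) for n in range(20)]
# every partial sum is 0, so the limit (the "value of the series") is 0
```
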
As, if it is possible, then we can write the integral of a continuous function on [a, b], as the sum of the integral at each point.
No, that is actually not how that works. The integral of a function cannot be evaluated "at a single point." You can only evaluate the integral of a function over closed intervals, and finite unions or intersections of those closed intervals.
-
π is defined as the ratio between the circumference of a circle and the diameter of said circle. One can prove said ratio is constant by first noting that this ratio does not change if the circle is centered at the origin. A circle centered at the origin with radius r has equation x^2 + y^2 = r^2. The ratio between the circumference and the diameter is equal to the ratio between the arclength of the upper semicircle and the radius of the corresponding circle.

The upper semicircle is given by y = sqrt(r^2 – x^2), and the arclength is given by the integral on (–r, r) of sqrt[1 + (y')^2]. y' = –x/sqrt(r^2 – x^2), hence (y')^2 = x^2/(r^2 – x^2), implying that sqrt[1 + (y')^2] = r/sqrt(r^2 – x^2) = r/sqrt(r^2·[1 – (x/r)^2]) = r/(r·sqrt[1 – (x/r)^2]) = 1/sqrt[1 – (x/r)^2]. Let t = x/r, hence x = r·t, hence dx/dt = r, and (–r, r) |—> (–1, 1), so the above integral is equal to r multiplied by the integral of 1/sqrt(1 – t^2) on (–1, 1).

The integral on (–1, 1) of 1/sqrt(1 – t^2) is independent of r, so this is a constant ratio, and so the arclength is proportional to r. Therefore, the arclength divided by the radius r is simply this constant of proportionality: the integral on (–1, 1) of 1/sqrt(1 – t^2). This integral is the definition of π.
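One can check numerically that the integral on (–1, 1) of 1/sqrt(1 – t^2) really does come out to π (my own sketch; the midpoint rule is used because its sample points avoid the integrable singularities at t = ±1):

```python
import math

def integrand(t):
    # 1/sqrt(1 - t^2), singular (but integrable) at t = -1 and t = 1
    return 1.0 / math.sqrt(1.0 - t * t)

# midpoint rule on (-1, 1); midpoints never touch the endpoints
n = 1_000_000
h = 2.0 / n
total = h * sum(integrand(-1.0 + (i + 0.5) * h) for i in range(n))
```

The endpoint singularities limit the accuracy to roughly sqrt(h), so with a million cells the result agrees with π to a few decimal places, which is enough to make the identity plausible.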
To get a better definition that we can use to prove that π is a real number that is not rational, we can first notice that if we define g(x) as being the integral on (–x, x) of 1/sqrt(1 – t^2), then π := g(1). One can then obtain the Maclaurin series expansion of g, which converges on the domain of g, and use the Lagrange inversion theorem to compute the inverse explicitly: [g^(–1)](z) = sin(z/2) = Im[exp(i·z/2)], hence [g^(–1)](2·π) = 0. Hence g^(–1) can be analytically continued to the entire complex plane, and it can be shown that [g^(–1)](0) = 0, and that in general, exp(2·m·π·i) = 1 for every integer m. This gives us a new, more useful definition of π: it is the unique positive real number such that 2·π·i is a period of exp. This can be used to prove all sorts of properties of π.
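For concreteness (my own sketch): the integral defining g has the closed form g(x) = 2·arcsin(x), which pins down g(1) = π and the inverse as sin(z/2), both of which can be checked in floating point:

```python
import math

def g(x):
    # the integral on (-x, x) of 1/sqrt(1 - t^2) has closed form 2*arcsin(x)
    return 2.0 * math.asin(x)

# pi := g(1)
pi_candidate = g(1.0)

# sin(z/2) inverts g on (-1, 1)
xs = [-0.9, -0.5, 0.0, 0.5, 0.9]
inversion_ok = all(abs(math.sin(g(x) / 2.0) - x) < 1e-12 for x in xs)
```
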
-
@SequinBrain I'll say this, and since it's the second time I've said it, I'm repeating myself: the subject is "value." Banks have value or the account doesn't exist.
I agree, but this supports my point, not yours, and you are the one who brought up banks, not me. And this, too, is me repeating myself.
The subject is value, not "things." Or if it is things, it's things that have value, and like 0, things that don't. zero is the absence of value. zero destroys everything it touches like zero oxygen destroys humans.
Well, no, that is just factually incorrect. 2^0 = 1. 0! = 1. cos(0) = 1. ζ(0) = –1/2. Saying "zero has no value" and "it destroys everything" is about the most childish mathematical viewpoint I know of.
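Three of those four values can be checked directly with the standard library (ζ(0) = –1/2 requires analytic continuation, which the standard library does not provide, so it is omitted from this sketch of mine):

```python
import math

# 2^0, 0!, and cos(0) all evaluate to 1, not to "nothing"
checks = [2 ** 0 == 1, math.factorial(0) == 1, math.cos(0.0) == 1.0]
```
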
zero multiplied by anything destroys its value and gives it none. The inconsistency is introducing a new subject when we haven't finished the first yet.
You were the one who introduced the subject of banks, not me. This is your mistake, not mine.
This is the last I have to say since I'm repeating what's already been said and apparently not comprehended yet.
No, I comprehend it just fine. I am not confident you comprehend my response, and the fact that you say this strengthens my doubt.
-
I think there is a better proof for the claim that lim 1/x (x —> c) = 1/c for all 0 < c. If 0 < δ & 0 < x & –δ < x – c < δ, then c – δ < x < c + δ. Hence, if you can find δ such that δ < c, then 0 < c – δ, and 1/x < 1/(c – δ), thus 1/(c·x) = 1/|c·x| < 1/[c·(c – δ)], and |1/x – 1/c| = |x – c|/|c·x| < |x – c|/[c·(c – δ)] < δ/[c·(c – δ)]. To prove |1/x – 1/c| < ε, let ε = δ/[c·(c – δ)], equivalent to δ = c^2·ε/(1 + c·ε). Thus, all that remains to be proven is that c^2·ε/(1 + c·ε) < c, in accordance with δ < c, and the proof is complete. c^2·ε/(1 + c·ε) < c is equivalent to c·ε/(1 + c·ε) < 1, equivalent to c·ε < 1 + c·ε, equivalent to 0 < 1, which is axiomatic. Q. E. D.
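The δ formula can be exercised numerically (a sketch of my own, not part of the proof): for several (c, ε) pairs, every x within δ of c should give |1/x – 1/c| < ε, and δ should indeed be smaller than c:

```python
def delta_for(c, eps):
    # delta = c^2*eps/(1 + c*eps), which the proof shows satisfies delta < c
    return c * c * eps / (1.0 + c * eps)

def worst_error(c, eps):
    # sample x strictly inside (c - delta, c + delta)
    # and take the largest |1/x - 1/c| seen
    d = delta_for(c, eps)
    xs = [c - d + 2.0 * d * k / 1000.0 for k in range(1, 1000)]
    return max(abs(1.0 / x - 1.0 / c) for x in xs)

cases = [(0.5, 0.1), (1.0, 0.01), (3.0, 1.0)]
bound_holds = all(worst_error(c, eps) < eps for c, eps in cases)
delta_below_c = all(delta_for(c, eps) < c for c, eps in cases)
```

The bound is tight: at x = c – δ the error equals ε exactly, which is why the samples stay strictly inside the interval.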
-
Actually, one could be far more careful here. dS/dr = k·S implies S = 0, or 1/S·dS/dr = k. Consider f(S) = ln(–S) + A iff S < 0 and f(S) = ln(S) + B iff S > 0. f'(S) = 1/S. Hence 1/S·dS/dr = d[f(S)]/dr = k, which implies ln(–S) = k·r – A iff S < 0, and ln(S) = k·r – B iff S > 0. With this, we have S = 0, or S = –exp(–A)·exp(k·r) iff S < 0, or S = exp(–B)·exp(k·r) iff S > 0. S = 0 = 0·exp(k·r) = λ·exp(k·r) iff λ = 0, while S = –exp(–A)·exp(k·r) = λ·exp(k·r) iff λ < 0, and S = exp(–B)·exp(k·r) = λ·exp(k·r) iff λ > 0. Therefore, S = λ·exp(k·r).
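The conclusion can be spot-checked numerically (my own sketch): for λ zero, negative, and positive, S(r) = λ·exp(k·r) should satisfy dS/dr = k·S, here tested with a central-difference derivative:

```python
import math

def S(r, lam, k):
    # candidate solution S(r) = lam * exp(k*r)
    return lam * math.exp(k * r)

def residual(r, lam, k, h=1e-6):
    # central-difference approximation of dS/dr, compared against k*S
    dS_dr = (S(r + h, lam, k) - S(r - h, lam, k)) / (2.0 * h)
    return abs(dS_dr - k * S(r, lam, k))

# lam = 0, lam < 0, and lam > 0, matching the three cases in the comment
cases = [(0.3, 0.0, 1.7), (0.3, -2.5, 1.7), (0.3, 4.0, -0.8)]
all_satisfy = all(residual(r, lam, k) < 1e-4 for r, lam, k in cases)
```
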
-
Since the cosine is not zero, you can cancel it out (divide by it) on both sides, and you get 0 = 2π.
This is not true. cos is nonzero, but cos could be a zero divisor, or, more simply, it may not be left-cancellable.
Divide both sides again, this time by 2, and you get 0 = π.
This is not justified. How have you discounted 2 = 0? Or 2 being not left-cancellable?
With 2 totally independent proofs, it has to be true, right?
2 is quite a small sample size, making your study statistically insignificant 😉