Comments by "Taxtro" (@MrCmon113) on "9.999... really is equal to 10" video.
EXACTLY! The problem is not the students questioning 0.9... being equal to 1.0...; the problem is rather that they don't question what "..." is supposed to mean at all. Every shorthand explanation you find in the comments is not only wrong, but utterly meaningless. Once one establishes what infinite decimals are actually supposed to mean, no explanation is necessary anymore and 0.9... = 1.0... becomes as natural as 1/2 = 2/4. The problem is that "convergence" and "equivalence classes" never enter the discussion. I'm with the students rejecting the lazy off-hand explanations. Those are less than wrong.
9
That's still vague bullshit. There is no temporal component. 0.99... is the limit, as k grows, of the sum of the first k nines: 0.9 + 0.09 + 0.009 + ... That limit is 1, and therefore 0.99... is a number and equal to 1.
4
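A minimal sketch of the limit that comment describes, using the standard geometric-series identity for the partial sums (an illustrative formalization, not part of the thread):

\[ 0.99\ldots := \lim_{k \to \infty} s_k, \qquad s_k = \sum_{i=1}^{k} 9 \cdot 10^{-i} = 1 - 10^{-k} \]
\[ \lim_{k \to \infty} \left( 1 - 10^{-k} \right) = 1, \quad \text{hence} \quad 0.99\ldots = 1. \]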
All of that is utterly meaningless without defining "...". That's the problem with all "explanations" of why X.999... = (X+1).000... With the correct definition you don't need an explanation. And if there is anything to show, then it's that 9 + 9*10^-1 + 9*10^-2 + ... converges.
2
Who taught you that bullshit? XD Your new number has infinitely many 9s and a zero at the end... interesting.
2
1.000... and 0.999... describe exactly the same object. It's like potayto - potahto. Anyway, you are right to question those flimsy, nonsensical "explanations" for it. When you learn the theory of what is actually meant by infinite decimals, it becomes immediately obvious why 1.000... describes the same object as 0.999... (i.e. they are both representatives of the same equivalence class, and the equivalence class is what is identified with the real number "one").
1
0.333... is an infinite sum converging on "one third". We identify "one third" with the class of all infinite sums converging on it, i.e. "one third" = [0.333...]. In the same way "ten" = [9.999...] = [10.000...]; 9.999... and 10.000... are merely two different representatives of the same equivalence class. Of course it leads to confusion when you leave out the brackets or any precise definition. I'm with the students questioning blanket statements like 2 = 1.999... accompanied by shitty, meaningless explanations. You don't need an explanation if you have the proper definition.
1
It is not obvious at all, and since you are saying that, you obviously haven't understood it. The problem is not that people question "0.999..." being "1"; the problem is that they don't question "..." in the first place. It is ill-defined at school, and therefore all "explanations" are null and void. The professor's calculation is right, but also utterly meaningless without a notion of convergence and of the value of an infinite sum as the limit of a series. Using precise language, all explanations become obsolete. The concept of "one" exists independently of all of its representations. We identify "one" with the equivalence class of all infinite sums (of digits times ten to the power of x) converging on it. That equivalence class has two elements, 0.999... and 1.000..., so we may write "one" = [0.999...] = [1.000...].
1
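One way to make those equivalence classes precise, sketched along the lines of the usual Cauchy-sequence construction of the reals (a sketch, not the commenter's own wording):

\[ (s_n) \sim (t_n) \;:\iff\; \lim_{n \to \infty} (s_n - t_n) = 0 \]
\[ s_n = \sum_{k=1}^{n} 9 \cdot 10^{-k} = 1 - 10^{-n}, \qquad t_n = 1 \;\Rightarrow\; s_n - t_n = -10^{-n} \to 0, \]

so the two sequences of partial sums land in the same class: [0.999...] = [1.000...].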
@Manning Bartlett That's because they were introduced to division, but not to limits. If you never told people about division, 1/2 = 2/4 would also raise eyebrows.
1
You should just ignore all of these explanations. Instead look at how infinite decimals are formally defined. They are equivalence classes of (certain) series converging on a number. We identify "one" with the equivalence class of certain series converging on one. The series (0, 0 + 0.9, 0 + 0.9 + 0.09, ...) (let's call it 0.9...) converges on one. So does the series 1.0... . So 1.0... is just a different representative of the same equivalence class as 0.9... . To write "one" = [1.000...] is the same as writing "one" = [0.999...], since [1.000...] is exactly the same object as [0.999...].
1
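A tiny numeric illustration of those two series of partial sums, sketched in Python (floats stand in for exact rationals here, so this only illustrates the convergence, it doesn't prove it):

    # n-th partial sums of 0.999... and of 1.000...; their difference is
    # 10**-n, which tends to 0, so both series converge on the same number.
    for n in range(1, 8):
        a = sum(9 * 10**-k for k in range(1, n + 1))  # 0.9, 0.99, 0.999, ...
        b = 1.0                                       # 1.0, 1.0, 1.0, ...
        print(n, a, b, abs(a - b))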
That "discrete" stuff would be irrelevant as well. All of this is build directly on cantor's set theory and that's as concrete as it gets.
1
No, people are right to question this. The problem is that they don't question the infinite decimals as a whole until those are defined properly. For starters, we should never write 2 = 1.999..., but 2.000... = 1.999... when introducing infinite decimals. Also we should make clear that any number exists independently of its representation in decimals. Next we should make clear that when we identify "two" and "2" with "2.000...", we are identifying the LIMIT of a series of sums with a number. If we provide the proper theoretical underpinning of convergence and don't confuse the infinite sum (a SERIES) with its value (a LIMIT), then two = 1.999... becomes no more counterintuitive than two = 2.000...
1
Exactly. The problem is not students questioning the latter. The problem is students not questioning the former. Both are total nonsense until given a precise meaning (which they have).
1
All of that is meaningless without a precise definition of "(9)" or "...". Mathologer's calculation is correct (and provides more insight than any off-hand "explanations"), but also utterly meaningless at the same time, because he didn't define what in the world an "infinite sum" is supposed to be. (Spoiler: it's not a sum at all.)
1
That only shows that "3.3..." is just as bogus as "0.999...". To tell people to accept the latter because they accepted the former sets them up to be intellectually lazy. Rather, you should explain what "..." means precisely.
1
It's a side effect of an otherwise very practical system.
1
They are correct to question you until you formally define convergence and infinite sums, then say that you want to identify real numbers with the infinite sums which converge on that real number, and show that 9 + 0.9 + 0.09 + ... converges on "ten". Using the formal definitions, it all becomes clear and intuitive.
1
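The convergence claim at the end, written out as a short sketch with the same geometric-series identity as above:

\[ t_n = 9 + \sum_{k=1}^{n} 9 \cdot 10^{-k} = 9 + \left( 1 - 10^{-n} \right) = 10 - 10^{-n} \xrightarrow{\; n \to \infty \;} 10, \]

so the series written 9.999... converges on ten.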
@Dale Collins You are right to question shorthand explanations. Indeed, infinite decimals are not defined properly in school, and you are right to reject the "...". What is meant is that we identify a real number with a series of sums which converges on that real number. For some real numbers there are multiple series of that kind, which is why some real numbers have two different representations as infinite decimals. Remember that a number is different from any particular representation of that number: "three" remains a valid concept without the expressions "3" or "3*10^0" or "3.000..." or "2.999...". If you look up convergence and infinite sums, what I said will become immediately obvious. All that remains to show is that 9 + 0.9 + 0.09 + ... converges on 10. In other words, real numbers can be identified with equivalence classes of infinite sums, grouping every sum with all other sums of the same value. So [0.999...] = [1.000...] = "one".
1
That's utterly meaningless, and you haven't understood what you are even talking about. What is "0.9..."? Why is it the same as "1.0..."? Is that identity any different from saying "one" equals "1.0..." or "0.9..."? (It fundamentally is.)
1
I think another part of the confusion is the (often missing) distinction between an infinite sum and the value of an infinite sum.
1
You are completely right, and that's why "0.9..." is not a series, but an EQUIVALENCE CLASS of series converging on one. What you pointed out is the crucial distinction between a series and its limit.
1
>nearly everyone might try to write its position as x = 0.999~ And they'd be correct. Don't confuse the infinite sum with it's limit. I think that is the source of a lot of confusion. The concept of "one" is not identical to the concept of the infinite sum 0.999..., but we may identify "one" with the equivalence class of all infinite sums (or rather all infinite sums of digits times ten raised to the power of x), which converge on "one". IE: "one" = [0.999...] Another representative of that equivalence class is 1.000... . Therefore [0.999...] means *exaclty* the same object as [1.000...]. 1.000... = 0.999... is wrong if you talk about the sum itself, [1.000...] = [0.999...] is trivial if you know that 0.999... converges on one.
1
That is utterly meaningless. You haven't learned anything and you would be right to question infinite decimals altogether until you get a proper definition. Then again, you would also be justified in questioning almost everything else.
1
People are right to raise an eyebrow, because without the theory of convergence and infinite sums the infinite decimal numbers make no sense. Another problem is people identifying numbers with their representation as infinite decimal numbers; rather, it should be thought about like this: when you represent a number as an infinite sum of digits times ten to some power, there are multiple ways to represent some numbers. For example, you can write three as 3 + 0*10^-1 + 0*10^-2 + ... or you can write it as 2 + 9*10^-1 + 9*10^-2 + ... Most importantly, three = 3.000... is as much of a transgression as three = 2.999...
1
VERY GOOD QUESTION. What we call a sum here is actually the limit of an infinite series of sums and NOT a sum at all! Now we take all series (of a certain kind) which converge on a number and identify this group of series with the number. That is the gist of infinite decimals. They are equivalence classes of a special kind of series. Similar constructions via equivalence classes actually define natural numbers, integers, fractions and real numbers, but not complex numbers.
1
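Those "similar constructions", sketched in one standard form (integers from pairs of naturals, fractions from pairs of integers, reals from Cauchy sequences of rationals):

\[ \mathbb{Z} := (\mathbb{N} \times \mathbb{N}) / \sim, \qquad (a,b) \sim (c,d) :\iff a + d = b + c \]
\[ \mathbb{Q} := (\mathbb{Z} \times (\mathbb{Z} \setminus \{0\})) / \sim, \qquad (p,q) \sim (r,s) :\iff ps = qr \]
\[ \mathbb{R} := \mathrm{Cauchy}(\mathbb{Q}) / \sim, \qquad (x_n) \sim (y_n) :\iff \lim_{n \to \infty} (x_n - y_n) = 0 \]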
Exactly. What he did is right only after he provides the precise definition of what he means and shows that the series he is working with actually converges.
1
Not like he did. HOWEVER, what he did was meaningless until you define precisely what infinite decimals are.
1
The series 1, 1+0, 1+0+0, ... is a series. The series 0, 0 + 0.9, 0 + 0.9 + 0.09, ... is a different series. "One" is a number. How are two different series the same, let alone a number? Your comment misses the crucial information that we want to identify a real number with the equivalence class of series converging on it. Then it makes sense to write "one" = [1.000...].
1
The calculation is correct, but it doesn't support your argument. 2+2=4, therefore your mother is fat...
1
@Omnissiah How is he more of an idiot than you? You wrote a lot of comments and never gave even the slightest reason to justify these expressions. OP made a totally valid observation: "3.3..." would definitely be different from ten divided by three if "3.3..." were actually a series. The key part of the definition, which no one mentions here, is that "3.3..." is not a series or an "infinite sum", but an equivalence class of series converging on a certain number. And we choose to identify that equivalence class with ten divided by three.
1