Comments by "MC116" (@angelmendez-rivera351) on "Another Roof"
channel.
-
@lexinwonderland5741 I agree with you that more should have been mentioned on it, since the distinction between irreducible elements and units in a ring is ultimately at the root of why 1 cannot be considered a prime number.
Look, do not get me wrong. I think that understanding how mathematical concepts from antiquity evolved into the mathematical concepts of today, as our understanding of mathematics improved and became more refined, is very fascinating, and certainly an important kind of knowledge to have in general. However, as far as answering the question "is 1 a prime number?" goes, the history is not enlightening at all: it ultimately does not answer the question. Yes, I know that mathematicians in the 1700s thought of 1 as a prime number. This is all well and fine, but it tells us nothing as to whether 1 actually is, or should be considered, a prime number. These questions are questions about the relationships between various mathematical concepts at a foundational level, not questions about names and conventions that mathematicians vote on. If you want to get at the question of whether 1 is a prime number or not, then you ought to compare the prime numbers with 1, analyze their properties and their roles within the integers, and then compare how these things extend or fail to extend when you move on to other mathematical structures, like polynomials and Gaussian integers. This is how you answer the question.
Appealing to the history of mathematics actually reinforces most people's misconception that 1 should be considered a prime number, and reading the comments to this video has resoundingly confirmed this suspicion. I think that discussing the history is perfectly fine when addressing the question "why did we ever consider 1 a prime number?" or "how has our understanding of prime numbers changed?" But neither of those questions is the question the video claims to address.
-
All hyperoperations can be defined in this fashion. Given the μth hyperoperation, denoted %, the S(μ)th hyperoperation, denoted #, can be defined by letting m#0 = 1 and m#S(n) = m%(m#n). Even more directly, we can define a function H : N^3 —> N such that H(m, n, 0) := S(m) & H(m, S(n), S(μ)) := H(m, H(m, n, S(μ)), μ). This uniquely defines every hyperoperation all at once. This is related to the Ackermann function.
-
@kurtu5 I know ring theorists have proof that you cannot. All commutative rings can be decomposed into 4 substructures: the zero divisors (always including 0 itself), the invertible elements (also called the units; always including –1 and 1), the irreducible elements (which we call the prime numbers when it comes specifically to the integers; for a given ring, this substructure could be empty), and the composite elements (for a given ring, this substructure could also be empty). There is no exception to this. In the case of the integers, the classification simplifies as follows: the set of zero divisors is {0}, the set of units is {–1, 1}, the set of prime numbers is {..., –5, –3, –2, 2, 3, 5, ...}, and the composite numbers are the remaining integers. Because of the additive symmetry, though, when studying number theory, we very often ignore the negative integers, so we speak as if only the positive integers could be prime numbers. In this case, we would have four substructures: {0}, {1}, {2, 3, 5, ...}, and whatever remains. Theorems in number theory are just special cases of theorems in ring theory, which hold for very general classes of rings, not just specific chosen rings like the ring of integers. As such, we can literally just classify the theorems according to which rings they apply to. We can even make statements about what theorems can be proven about rings at all.
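For the integers specifically, the four-way classification above is easy to make concrete. A minimal sketch (the function names are my own):

```python
def is_irreducible(n):
    """True when |n| is a prime number in the usual sense."""
    n = abs(n)
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def classify(n):
    """Place an integer into one of the four ring-theoretic substructures."""
    if n == 0:
        return "zero divisor"
    if n in (-1, 1):
        return "unit"
    if is_irreducible(n):
        return "prime (irreducible)"
    return "composite"
```

In a general commutative ring the zero-divisor and unit classes can be much larger than {0} and {–1, 1}, but the four classes still partition the ring.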
-
@valmao91 It could, but we don't have a way to know that, as we don't know everything about primes, and probably never will.
I hate to burst your bubble, but despite how much you want to insist that mathematicians are highly ignorant about prime numbers, they are not. We know more than enough about prime numbers to know that our definition is the correct one. In fact, our definition not only encapsulates the concept of prime numbers perfectly in the integers, it does so in all commutative rings. Tested and proven. We have thousands of theorems on the matter reinforcing this conclusion, together with over 200 years of studying ring theory formally to back it up. So, no, you are completely wrong about our inability to know even basic facts about prime numbers, and I wish you were not so arrogant as to pretend you can tell mathematicians what they can and cannot know.
Therefore, there is room for discussion, especially with a case like this one where, technically, 1 should be prime, but isn't because it's redundant.
No, this is factually incorrect. 1 not being a prime number is not a technicality. 1 literally does not satisfy the definition of a prime number. 1 is not a prime number, and should not be considered one. Redundancy has nothing to do with it, and in my comments above, I laid out a perfect line of reasoning behind the definition of prime numbers, and why –1 and 1 are not prime numbers. I know you find it convenient to ignore all of that (because you did ignore it), but that is just dishonest.
-
@elijahbedinger1222 For two sets X, Y, if an invertible function f : X —> Y exists, then X and Y have the same number of elements, and are said to be equinumerous. What is the number of elements of a set S? Find a natural number n, and treat it as a set. If there is an invertible function g : n —> S, then n is the number of elements of S. If S is infinite, then just replace n by a cardinal number λ. If there is an invertible function g : λ —> S, then the cardinal number λ (which may be a natural number, or may be infinite) is the number of elements of S. However, –1 is not a cardinal number. 0 is the empty set, {}, and –1 would be the predecessor of 0. Therefore, if –1 were a cardinal number, then the union of –1 and {–1} would be 0 = {}. However, if the union of U and V is {}, then U = V = {}. Therefore, what you are claiming is that –1 = {} = 0, and {–1} = {} = 0. However, this is impossible, because if {} = {–1}, then –1 is an element of {}, which is false. Therefore, there is no cardinal number that is a predecessor of 0. Therefore, there is no set with –1 elements.
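For finite sets, the definitions above can be checked mechanically. A small sketch (naming is my own) that tests whether a dict represents an invertible function X —> Y, and uses it to witness "S has n elements":

```python
def is_invertible(f, X, Y):
    """True when the dict f is an invertible (bijective) function X -> Y."""
    X, Y = set(X), set(Y)
    defined_on_all_of_X = set(f) == X
    surjective = set(f.values()) == Y
    injective = len(set(f.values())) == len(f)
    return defined_on_all_of_X and surjective and injective

def has_n_elements(S, n):
    """The number of elements of S is the n admitting an invertible
    function from {0, ..., n-1} onto S."""
    return is_invertible(dict(zip(range(n), S)), range(n), S)
```

No such witness exists for a "negative" count: there is simply no set playing the role of {0, ..., –1}.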
-
@petevenuti7355 Is there such a thing as a ternary operation that can't be broken down into binary operations.
There is, surprisingly. These are called irreducible n-ary operations. They exist for all n > 2.
What is an operation then? It must involve action, yes?
An operation is a function, but this raises the question of what a function is, does it not? So, what is a function? We intuitively tend to think of a function f as being fed an input x, and spitting out the output f(x). This makes it sound like a function has to refer to an algorithm, a physical procedure. However, a function is actually just an abstract relationship. f relates x and f(x) in an abstract way. The reason teachers present it as an algorithm is that it makes the axiom that defines what a function is easy to visualize, but at the cost of being misleading.
Consider two sets X and Y. In mathematics, we typically consider all objects to be sets, but for the sake of explanation, we can allow the members of X and Y to be arbitrary objects, they do not necessarily have to be sets themselves. Given X and Y, you can form a third set, the Cartesian product of X and Y. The Cartesian product of X and Y is the set of ordered pairs (x, y), where x is in X, and y is in Y. Now, there is a special class of subsets of this Cartesian product. These subsets G satisfy the following property: for all x in X, there is exactly one y in Y (always one, and only one), such that (x, y) is in G. This property is the property that teachers are ultimately alluding to when they talk about inputs and outputs of a function. The unique y such that (x, y) is in G is called the image of x under G, but in school mathematics, the teachers just call it "the output." As you can see, there is an abstract relationship between x and y that defines what the set G is, but there is no physical procedure involved. You can say y exists, but actually finding what y is, that is not required in order for G to be a valid "special subset" of the Cartesian product.
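The "special subset" property above can be stated as code. An illustrative sketch for finite sets (names are my own), where G is a set of ordered pairs:

```python
def is_function_graph(G, X, Y):
    """True when G, a set of (x, y) pairs, is the graph of a function X -> Y:
    every pair lives in X x Y, and each x in X is paired with exactly one y."""
    pairs_in_product = all(a in X and b in Y for (a, b) in G)
    exactly_one_image = all(
        sum(1 for (a, _) in G if a == x) == 1 for x in X
    )
    return pairs_in_product and exactly_one_image
```

Note that no algorithm for computing y from x appears anywhere: G is just a set satisfying a property.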
I should mention that this is not the complete definition of a function, but the technical details that I have omitted are not important for the point I am making. The point I am making is that for every function f, there is an associated set G that satisfies the above property: for all x in X, there is exactly one y in Y such that the ordered pair (x, y) is in G. And that is all there is to it. These sets exist as abstract objects, not as physical procedures. Now, in a given model of computability, there are some functions for which you can prescribe an algorithm that explicitly constructs or produces the corresponding y for a given x. In doing this, not only have you shown y exists, but you also know what y is: you can give a finite description of how you obtained y. However, y could exist without it being possible to construct an algorithm that determines what it is. If this is the case, then such a function is called uncomputable (within that particular model). The most famous example of this is the busy beaver function, which I will not define here because I am not confident I understand the definition well enough to explain it.
The only way you can limit yourself to computable functions, realistically, is by saying that the axiom of infinity is false (meaning there are no infinite sets, as far as the axioms are concerned). However, this means that you are saying that there is no such thing as the set of natural numbers. And by doing this, you give up Peano arithmetic and a bunch of other things. Building a mathematical system that is actually useful from this is very tedious, and not really worth the trouble of denying the axiom of infinity, especially because this axiom does so much for mathematicians and physicists. You cannot do any science without accepting the existence of infinite sets.
-
@petevenuti7355 I was just pointing out how significant it is that there can be such widely different meanings of the same word between people even speaking the same language.
Well, I think the point is that we are not actually speaking the same language. Computer science and mathematics are very, very different, and the conventions are very, very different as well.
I meant linear to denote the one-to-one or many-to-one relationship defining a function, you meant linear essentially as a line in a coordinate system, and Angel I believe originally meant it in the strictest sense of linear algebra.
rms is using the word linear in the same sense I am using it, only that they are essentially presenting the definition in a bit of a simplified fashion. I imagine that the thing you are calling a "function" is actually just a program, rather than a function in the mathematical sense, though I could be wrong. Strictly speaking, a program, in terms of mathematics, is just a computable relation. A function is a relation that is left-total and single-valued (in terms of programs, it means exactly one output exists for every input, but without the requirement of computability).
You should take a look at a video called "What Does a Diagonal Argument Look Like?" or something like that, you will recognize the thumbnail as it talks a bit about The One Trick to Them All. In the video, the distinction between a program and a function is expanded upon a bit more.
-
@forbidden-cyrillic-handle Obviously, some mathematicians had different opinions, and routinely used it in the past.
Yes, a definition that was used over a thousand years ago. Do you think I care? No, I do not care. The correct definitions are the ones which are used today, until such a time comes when those definitions are changed, if they ever do get changed. Some definitions were thought to have been correct in the past, yes. This is fine, but today we know them to be incorrect, so the past is irrelevant. We study mathematical history to learn from the mistakes we made in the past, not to continue making them by continuing to use definitions that no mathematicians today use.
What is routinely used by some mathematicians is not what the definition is.
You are wrong. The definition I proposed is used by all mathematicians, not just "some." You will not find any mathematicians from the 20th or 21st century who use any other definitions. You can try searching all you want, but you will not find it.
Until it officially changes, and becomes something more than routinely used by some, I prefer to keep the current definition.
The current definition is the one I provided. It has been the current definition since the early-mid 1800s. This is the definition that originated from rigorous research in ring theory. It is the definition every mathematician, without fail, has used since the late 1800s.
You need a big conference to vote the new definition,...
This was done over a century ago. You are behind the times by a millennium. This is ignorant.
-
@vladislavanikin3398 It is the most powerful argument, and I would say, the only actual argument that really succeeds. All other arguments rely on logic that is fallacious and uncompelling. For example, this whole "it makes the fundamental theorem of arithmetic easier to state" nonsense is highly specious outside of the context of unique factorization domains. In most other areas of mathematics, mathematicians are completely unbothered when theorems have exceptions built into them. Heck, even for other theorems about prime numbers, there are many, many theorems which have 2, 3, and sometimes even 5 as the exception, and yet no one bats an eye at calling these prime numbers anyway. The double standard makes no sense. Besides, the validity of a theorem should not depend so much on the precise details of how it is phrased. Otherwise, you could prove literally anything by choosing "the adequate phrasing." This makes it obvious that the actual answer to the question "why is 1 not a prime number?" has nothing to do with the fundamental theorem. The question is answered, as you said, by the fact that 1 is a unit, and that units are conceptually and fundamentally different from irreducible elements.
-
This is not the definition of a prime number. This is the definition that is often taught in grade school, but it is incorrect. The definition of a prime number is an integer which is nonzero, not invertible, and which, whenever it is written as a product of two integers, always has –1 or 1 as one of the factors. Since 1 and –1 are invertible, they are not prime numbers. The grade school definition is meant to be a simplification of the true definition, to keep the concept intuitive, but it is an incorrect simplification that does not lead to the correct intuition captured by the true definition. A composite number is a nonzero, non-invertible integer which is not a prime number.
Alternatively, one can define a prime number as an integer which has exactly 4 divisors (–p, –1, 1, p). –1 and 1 are integers which have only 2 divisors (–1, 1), so they are not prime numbers. 0 has infinitely many divisors, and so it also is not a prime number. Or, you can define a prime number as an integer which has exactly 2 positive divisors. The integer 1 only has 1 positive divisor, so it is not a prime number. However, these alternative definitions, although strictly "correct" as far as semantics are concerned, are bad definitions conceptually. The definition provided in the previous paragraph is the one that actually has genuine mathematical meaning.
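The "exactly 4 divisors" characterization above can be spelled out directly. An illustrative brute-force sketch (function names are my own; it scans every candidate divisor, so it is for demonstration only):

```python
def divisors(n):
    """All integer divisors of n in the range -|n|..|n| (excluding 0)."""
    return {d for d in range(-abs(n), abs(n) + 1) if d != 0 and n % d == 0}

def is_prime(n):
    """An integer p is prime iff its divisors are exactly {-p, -1, 1, p}."""
    return len(divisors(n)) == 4
```

Under this test, 1 and –1 fail (only 2 divisors each), and 0 fails as well. (For n = 0 the scan range above collapses and returns the empty set, whereas 0 really has infinitely many divisors; either way, is_prime(0) is False, matching the classification in the comment.)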
-
@Happy_Abe No, that does not use the definition of a 2-tuple. A 2-tuple is itself a function, a function from 2 to the target set, where 2 = {0, 1}. The object {{x}, {x, y}} is actually called a Kuratowski pair. The corresponding 2-tuple is instead the set {{{0}, {0, x}}, {{1}, {1, y}}}, which is built from the Kuratowski pairs {{0}, {0, x}} and {{1}, {1, y}}. The distinction is subtle, but it exists, because there is no 3-element analogue of the Kuratowski pair construction, while there is such a thing as a 3-tuple: a function from 3 to the target set, where 3 = {0, 1, 2}. In this case, the 3-tuple would look like {{{0}, {0, x}}, {{1}, {1, y}}, {{2}, {2, z}}}.
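The two constructions above can be modeled directly with Python frozensets. This is an illustrative sketch; encoding 0 and 1 as Python integers rather than as von Neumann ordinals is a simplification:

```python
def kuratowski(x, y):
    """The Kuratowski pair {{x}, {x, y}}."""
    return frozenset({frozenset({x}), frozenset({x, y})})

def two_tuple(x, y):
    """A 2-tuple as a function from 2 = {0, 1}: the set of the
    Kuratowski pairs (0, x) and (1, y)."""
    return frozenset({kuratowski(0, x), kuratowski(1, y)})
```

Both encodings are order-sensitive, but they are different sets, which is exactly the distinction being drawn.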
-
@rmsgrey The problem is that when you are working with real numbers or the algebraic numbers, it becomes impractical and conceptually useless to work with explicit constructions and encodings. To properly give a formal introduction to the algebraic numbers, you need ring theory, and to properly give a formal introduction to the real numbers, you need lattice theory on top of ring theory. Set-theoretic constructions are not appropriate when dealing with these higher-level mathematical objects.
For instance, axiomatically, it is very easy to write a list of simple axioms that uniquely define what the real numbers are. Talking about the algebraic numbers is even easier: the field of algebraic numbers is the algebraic closure of the field of rational numbers. However, while it is very easy to understand the axioms, actually constructing these objects using nothing but sets is complicated, and to be honest, a waste of time. That is not to say that it cannot be done, but rather, that it should not be done.
-
@erikziak1249 Contradictory axioms. First I learn that there cannot be a square root of a negative number, since every number squared is a positive number. Then I am told that it is not true. And that it sort of still is true, but I have to imagine that there exists such a thing.
These are not contradictory axioms. The real numbers form a mathematical structure called an "ordered field." The fact that they are ordered is actually very important, it is part of how real numbers are defined. To put it simply, the real numbers being ordered just means that there exists a well-defined notion of positive real numbers, and negative real numbers, and a well-defined notion of comparison. I can compare two real numbers 3 and 5, and conclude that 3 is less than 5. This is what the concept of order refers to.
In an ordered field, it is true that the square of every quantity is nonnegative. However, not all fields need to be ordered. If the field is not ordered, then there is no well-defined notion of positive or negative quantities in this field. In such a field, it is entirely permissible for all quantities to have a square root, but this just means the field cannot be ordered. As I said, the real numbers form an ordered field. However, we can choose to get rid of the ordering altogether, and just forget about there being such a thing as positive numbers or negative numbers. Now, even after you get rid of the ordering, the fact is, some numbers still have no square root. This is because you have not actually changed the multiplication at all. But, that being said, now that there is no ordering restricting you, you can just extend these numbers to a larger class of numbers where everything has a square root. This is all fine, because you got rid of the ordering. Notice that there is no actual contradiction here: it still remains a fact that if you want to keep the ordering, then negative quantities cannot have a square root. There is no "well, actually..." caveat here, this actually is just what it is. The extension is only possible if you get rid of the ordering. This is not a contradiction: by getting rid of the ordering, you are legitimately changing the type of mathematical object you are working with.
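The extension described above is exactly what the complex numbers provide. A quick illustration using Python's standard cmath module:

```python
import cmath

# In the ordered field of real numbers, -4 has no square root.
# Drop the ordering and extend to the complex numbers, and it does:
r = cmath.sqrt(-4)
assert r == 2j       # the square root exists...
assert r * r == -4   # ...and squaring it recovers -4
```

Nothing about real multiplication changed; the number system was enlarged, which is only possible once the ordering is given up.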
What will be next? We can divide by zero?
Despite what many misleading videos on YouTube claim, we cannot divide by 0. This is not because we choose to not define division by 0. No, this is actually a theorem. The axioms of arithmetic imply that 0•x = x•0 = 0, and this already just makes division by 0 impossible. There is nothing anyone can do about it.
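The theorem alluded to takes two lines. From the distributive axiom alone:

```latex
0 \cdot x = (0 + 0) \cdot x = 0 \cdot x + 0 \cdot x
\;\implies\; 0 \cdot x = 0 \quad \text{for all } x.
```

So if some x satisfied 0 · x = 1 (which is what "dividing by 0" would require), then 1 = 0, collapsing the whole number system to a single element. That is why division by 0 is an impossibility at the level of theorems, not a convention.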
I am pretty much aware of limits, when something approaches zero, but what if it IS zero?
Limits are actually irrelevant to the discussion, and they have no implications on the topic of division by 0. Discussing x —> 0 is very different from discussing x = 0. If a person tells you that limits are relevant, then you should immediately conclude that they do not understand how limits work at all.
Expecting me to think about a number as having a "real" part and an "imaginary" part is also quite stupid.
It is not stupid at all. The concepts of the real part function and the imaginary part function are essential in complex analysis. Also, they are important in the vectorial/geometric understanding of complex numbers.
What is a "real" number?
The real numbers have a very precise mathematical definition. To put it in the simplest words, they are the unique field of numbers that forms a continuum. This idea of "continuum" is important, because it enables you to do geometry and calculus. The rational numbers, for example, do not form a continuum: although they are densely packed, they are riddled with gaps. The real numbers are the extension that fills in those gaps, and no other extension actually succeeds in filling them.
No numbers are real! They are just a mental concept.
Numbers being a mental concept does not mean they are not real. No, numbers are not physical, if that is what you mean, but 'physical' and 'real' are not synonymous. That being said, I do think that the name "real number" should be replaced by an actually descriptive name. But this is also your mistake. You are placing an unhealthy and unnecessary amount of importance on mere names, to the point that it has become an obsession, and you are not even willing to look at the concepts behind the names, which is where you should be looking. To put into perspective why this is a problem, just consider this: my legal name, which is also my birth name, is Ángel. Do you think I am actually a literal, true-to-the-Bible angel? No, of course I am not. But you have absolutely no qualms with seeing my name in the YT username; you think nothing of it. You understand that the word "angel" is just a name when it comes to people, and take no issue with it being used to describe humans who clearly are not angels in the biblical sense. It has no meaning beyond this. Well, names for mathematical objects are no different at all. There is no reason you should even be paying attention to the names much beyond the convenience of being able to communicate with people. If you are trying to learn mathematics, then what you should be studying are the concepts hidden behind the names, the actual definitions. The definitions may be confusing, but there are actual explanations that you can find for them.
Look, this is not exclusive to mathematics either. It applies to all areas of life. There exist many more concepts than there exist English words. So, necessarily, some words we have to recycle, and use in two completely different ways, having completely different definitions that are unrelated. We do this in mathematics, we do this in history, science, politics, economics, engineering, law, etc. Every career that exists has this conundrum. Yet, I am sure that in most other areas of life, you actually do overlook the fact that the same word is used in two different ways, and you just adapt to it. Everyone does. There is no reason why mathematics should even be the exception. In fact, even within mathematics, you already do this, and you have not even noticed it. For example, it is statistically likely you never noticed that in mathematics, there exist two completely different definitions of the word 'division.' You just adapted to the fact, and your brain processed it like it does anything else. What it all comes down to is that the names really do not matter outside the context of communication. They ultimately have no actual bearing on the mathematics. If I use the name of an object in mathematics, what you should be doing is asking yourself if you have already learned how this object is defined specifically within mathematics. If you suspect you have not, then you should ask for the definition, and the definition will be given to you. If you do not understand the definition, then that is perfectly fine! Someone will explain the definition to you, that is what education is for, and that is what we have YT videos now for. This is how one approaches learning mathematics. Focusing on the name itself is not how one learns mathematics. In fact, this focusing on the name thing is not an effective approach to learning mathematics even if we improve the names to more descriptive ones. 
At the end of the day, you are not going to learn anything if you are not focusing on the definitions, regardless of how "good" the names are. The names, ideally, should be a helpful bonus. But they are definitely not meant to be the core of it.
This is very, very bad. Maybe it is being taught at schools differently today, I do not know, but the stupid name "imaginary numbers" is still used.