Comments by "MC116" (@angelmendez-rivera351) on "Another Roof" channel.

  8.  @lexinwonderland5741  I agree with you that more should have been mentioned on it, since the distinction between irreducible elements and units in a ring is ultimately at the root of why 1 cannot be considered a prime number. Look, do not get me wrong. I think that understanding how mathematical concepts from antiquity evolved into mathematical concepts today, as our understanding of mathematics improved and became more refined, is very fascinating, and certainly an important kind of knowledge to have in general. However, as far as answering the question "is 1 a prime number?", the history is not enlightening at all: it ultimately does not answer the question. Yes, I know that mathematicians in the 1700s thought of 1 as a prime number; this is all well and fine, but that tells us nothing as to whether 1 actually is, or should be considered, a prime number. These questions are questions about the relationships between various mathematical concepts at a foundational level, not questions about names and conventions that mathematicians vote on. If you want to get at the question of whether 1 is a prime number or not, then you ought to compare the prime numbers with 1, analyze their properties and their roles within the integers, then compare how these things extend or fail to extend when you move on to other mathematical structures, like polynomials and Gaussian integers (a small worked illustration of this comparison follows this comment). This is how you answer the question. Appealing to the history of mathematics actually reinforces most people's misconception that 1 should be considered a prime number, and reading the comments on this video has resoundingly confirmed this suspicion. I think that discussing the history is perfectly fine when addressing the question "why did we ever consider 1 a prime number?" or "how has our understanding of prime numbers changed?" But neither of those questions is the question the video claims to address.
    5
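A minimal sketch of the comparison recommended above, using only standard facts about the integers and the Gaussian integers; the notation is the usual ring-theoretic one and none of it is drawn from the video itself.

```latex
% Units: u is a unit if uv = 1 for some v in the ring.
%   In Z the units are 1 and -1; in Z[i] they are 1, -1, i, -i.
% Primes and irreducibles are required to be nonzero non-units, so 1 is excluded
%   in Z for the same structural reason that i is excluded in Z[i].
% How primality extends, or fails to extend, from Z to Z[i]:
2 = (1+i)(1-i), \qquad 5 = (2+i)(2-i), \qquad 3 \ \text{remains prime in } \mathbb{Z}[i].
```

Neither 1+i nor 2+i is a unit (each has norm greater than 1), so 2 and 5 stop being prime in the larger ring while 3 does not; this is the kind of structural comparison that settles the question independently of historical convention.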
  21.  @valmao91  "It could, but we don't have a way to know that, as we don't know everything about primes, and probably never will."
I hate to burst your bubble, but despite how much you want to insist that mathematicians are highly ignorant about prime numbers, they are not. We know more than enough about prime numbers to know that our definition is the correct one. In fact, our definition not only encapsulates the concept of prime numbers perfectly in the integers, it does so in all commutative rings. Tested and proven. We have thousands of theorems on the matter reinforcing this conclusion, together with over 200 years of formal study of ring theory to back it up. So, no, you are completely wrong about our inability to know even basic facts about prime numbers, and I wish you were not so arrogant as to pretend you can tell mathematicians what they can and cannot know.
"Therefore, there is room for discussion, especially with a case like this one where, technically, 1 should be prime, but isn't because it's redundant."
No, this is factually incorrect. 1 not being a prime number is not a technicality: 1 literally does not satisfy the definition of a prime number (the definition is restated after this comment). 1 is not a prime number, and should not be considered one. Redundancy has nothing to do with it, and in my comments above I laid out a perfect line of reasoning behind the definition of prime numbers, and why –1 and 1 are not prime numbers. I know you find it convenient to ignore all of that (because you did ignore it), but that is just dishonest.
    2
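A short restatement of the definition being defended above, as it is standardly given for a commutative ring R; this makes explicit which clause 1 fails, and is not specific to any particular source.

```latex
p \in R \ \text{is prime} \iff
p \neq 0, \quad p \ \text{is not a unit}, \quad \text{and} \quad
\forall a, b \in R:\ p \mid ab \implies (p \mid a \ \text{or} \ p \mid b).
% In Z, the number 1 satisfies the divisibility clause trivially,
% but 1·1 = 1 makes 1 a unit, so 1 fails the "not a unit" clause outright.
```

On this reading, excluding 1 is not a convention layered on top of the definition; the definition itself rules it out.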
  25.  @petevenuti7355  "Is there such a thing as a ternary operation that can't be broken down into binary operations?"
There is, surprisingly. These are called irreducible n-ary operations. They exist for all n > 2.
"What is an operation then? It must involve action, yes?"
An operation is a function, but this raises the question of what a function is, does it not? So, what is a function? We intuitively tend to think of a function f as being fed an input x and spitting out the output f(x). This makes it sound like a function has to refer to an algorithm, a physical procedure. However, a function is actually just an abstract relationship: f relates x and f(x) in an abstract way. The reason teachers present it as an algorithm is that it makes the defining axiom of a function easy to visualize, but at the cost of being misleading.
Consider two sets X and Y. In mathematics, we typically consider all objects to be sets, but for the sake of explanation, we can allow the members of X and Y to be arbitrary objects; they do not necessarily have to be sets themselves. Given X and Y, you can form a third set, the Cartesian product of X and Y. The Cartesian product of X and Y is the set of ordered pairs (x, y), where x is in X, and y is in Y. Now, there is a special class of subsets of this Cartesian product. These subsets G satisfy the following property: for all x in X, there is exactly one y in Y (always one, and only one) such that (x, y) is in G (a small sketch of this condition, for finite sets, follows this comment). This property is what teachers are ultimately alluding to when they talk about inputs and outputs of a function. The unique y such that (x, y) is in G is called the image of x under G, but in school mathematics, the teachers just call it "the output." As you can see, there is an abstract relationship between x and y that defines what the set G is, but there is no physical procedure involved. You can say y exists, but actually finding what y is, that is not required in order for G to be a valid "special subset" of the Cartesian product. I should mention that this is not the complete definition of a function, but the technical details I have omitted are not important for the point I am making. The point I am making is that for every function f, there is an associated set G that satisfies the above property: for all x in X, there is exactly one y in Y such that the ordered pair (x, y) is in G. And that is all there is to it. These sets exist as abstract objects, not as physical procedures.
Now, in a given model of computability, there are some functions for which you can prescribe an algorithm that explicitly constructs or produces the corresponding y for a given x. In doing this, not only have you shown that y exists, but you also know what y is; you can give a finite description of how you obtained y. However, y could exist without there being any algorithm to determine what it is. If this is the case, then such a function is called uncomputable (within that particular model). The most famous example of this is the busy beaver function, which I will not define here because I am not confident I understand the definition well enough to explain it.
The only way you can limit yourself to computable functions, realistically, is by saying that the axiom of infinity is false (meaning there are no infinite sets, as far as the axioms are concerned). However, this means that you are saying that there is no such thing as the set of natural numbers. And by doing this, you give up Peano arithmetic and a bunch of other things. Building a mathematical system that is actually useful from this is very tedious, and not really worth the trouble of denying the axiom of infinity, especially because this axiom does so much for mathematicians and physicists. You cannot do any science without accepting the existence of infinite sets.
    2
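A minimal sketch of the "special subset" condition described above, assuming finite sets so the property can be checked by plain enumeration; the sets X, Y, the relations G1, G2, G3, and the helper name is_function are made up purely for illustration.

```python
# Sketch: check whether a relation G (a set of pairs drawn from X × Y) is a
# function from X to Y, i.e. every x in X has exactly one y in Y with (x, y) in G.

def is_function(G, X, Y):
    """Return True if G defines a function from X to Y."""
    for x in X:
        images = {y for (a, y) in G if a == x and y in Y}
        if len(images) != 1:  # zero images, or more than one: not a function
            return False
    return True

X = {1, 2, 3}
Y = {"a", "b"}

G1 = {(1, "a"), (2, "a"), (3, "b")}             # every x has exactly one image
G2 = {(1, "a"), (1, "b"), (2, "a"), (3, "b")}   # 1 has two images
G3 = {(1, "a"), (2, "a")}                        # 3 has no image

print(is_function(G1, X, Y))  # True
print(is_function(G2, X, Y))  # False
print(is_function(G3, X, Y))  # False
```

Note that this exhaustive check is exactly what stops being available in general once X is infinite, which is where the distinction drawn above between "y exists" and "y can be computed by an algorithm" comes in.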
  35.  @forbidden-cyrillic-handle  "Obviously, some mathematicians had different opinions, and routinely used it in the past."
Yes, a definition that was used over a thousand years ago. Do you think I care? No, I do not care. The correct definitions are the ones which are used today, until such a time comes when those definitions are changed, if they ever do get changed. Some definitions were thought to have been correct in the past, yes. This is fine, but today we know them to be incorrect, so the past is irrelevant. We study mathematical history to learn from the mistakes we made in the past, not to continue making them by continuing to use definitions that no mathematicians today use.
"What is routinely used by some mathematicians is not what the definition is."
You are wrong. The definition I proposed is used by all mathematicians, not just "some." You will not find any mathematicians from the 20th or 21st century who use any other definitions. You can try searching all you want, but you will not find it.
"Until it officially changes, and becomes something more than routinely used by some, I prefer to keep the current definition."
The current definition is the one I provided. It has been the current definition since the early-to-mid 1800s. This is the definition that originated from rigorous research in ring theory. It is the definition every mathematician, without fail, has used since the late 1800s.
"You need a big conference to vote the new definition,..."
This was done more than a century ago. You are behind the times by millennia. This is ignorant.
    2
  47.  @erikziak1249  "Contradictory axioms. First I learn that there cannot be a square root of a negative number, since every number squared is a positive number. Then I am told that it is not true. And that it sort of still is true, but I have to imagine that there exists such a thing."
These are not contradictory axioms. The real numbers form a mathematical structure called an "ordered field." The fact that they are ordered is actually very important; it is part of how real numbers are defined. To put it simply, the real numbers being ordered just means that there exists a well-defined notion of positive real numbers and negative real numbers, and a well-defined notion of comparison. I can compare two real numbers, 3 and 5, and conclude that 3 is less than 5. This is what the concept of order refers to. In an ordered field, it is true that the square of every quantity is nonnegative. However, not all fields need to be ordered. If the field is not ordered, then there is no well-defined notion of positive or negative quantities in this field. In such a field, it is entirely permissible for all quantities to have a square root, but this just means the field cannot be ordered. As I said, the real numbers form an ordered field. However, we can choose to get rid of the ordering altogether, and just forget about there being such a thing as positive numbers or negative numbers. Now, even after you get rid of the ordering, the fact is, some numbers still have no square root. This is because you have not actually changed the multiplication at all. But, that being said, now that there is no ordering restricting you, you can just extend these numbers to a larger class of numbers where everything has a square root. This is all fine, because you got rid of the ordering. Notice that there is no actual contradiction here: it still remains a fact that if you want to keep the ordering, then negative quantities cannot have a square root. There is no "well, actually..." caveat here; this actually is just what it is. The extension is only possible if you get rid of the ordering. This is not a contradiction: by getting rid of the ordering, you are legitimately changing the type of mathematical object you are working with.
"What will be next? We can divide by zero?"
Despite what many misleading videos on YouTube claim, we cannot divide by 0. This is not because we choose not to define division by 0. No, this is actually a theorem. The axioms of arithmetic imply that 0•x = x•0 = 0, and this already makes division by 0 impossible (a short sketch of this argument, and of the ordered-field fact above, follows this comment). There is nothing anyone can do about it.
"I am pretty much aware of limits, when something approaches zero, but what if it IS zero?"
Limits are actually irrelevant to the discussion, and they have no implications for the topic of division by 0. Discussing x → 0 is very different from discussing x = 0. If a person tells you that limits are relevant, then you should immediately conclude that they do not understand how limits work at all.
"Expecting me to think about a number as having a 'real' part and an 'imaginary' part is also quite stupid."
It is not stupid at all. The concepts of the real part function and the imaginary part function are essential in complex analysis. They are also important in the vectorial/geometric understanding of complex numbers.
"What is a 'real' number?"
The real numbers have a very precise mathematical definition. To put it in the simplest words, they are the unique field of numbers that forms a continuum.
This idea of "continuum" is important, because it enables you to do geometry and calculus. The rational numbers, for example, do not form a continuum. Instead, they are discrete points with gaps in between. The real numbers are an extension that fill in those gaps, and no other extensions exist that actually succeed in filling those gaps. No numbers are real! They are just a mental concept. Numbers being a mental concept does not mean they are not real. No, numbers are not physical, if that is what you mean, but 'physical' and 'real' are not synonymous. That being I said, I do think that the name "real number" should be replaced by an actual descriptive name. But, this is also your mistake. You are just placing an unhealthy and unnecessary amount of importance on mere names, to the point that it has become an obsession, and are not even willing to actually look at the concepts behind the name, which is where you should be looking. To put into perspective why this is a problem, just consider this: my legal name, which is also my birth name, is Ángel. Do you think I am actually a literal, true-to-the-Bible angel? No, of course I am not. But, you have absolutely no qualms with seeing my name on the YT username, you think nothing of it. You understand that the word "angel" is just a name when it comes to people, and take no issue with it being used to describe humans who clearly are not angels in the biblical sense. It has no meaning beyond this. Well, names for mathematical objects are no different at all. There is no reason you should even be paying attention to the names much beyond just the convenience of being able to communicate with people. If you are trying to learn mathematics, then what you should be studying are the concepts hidden behind the names, the actual definitions. The definitions may be confusing, but there are actual explanations that you can find for them. Look, this is not exclusive to mathematics either. It applies to all areas of life. There exist many more concepts than there exist English words. So, necessarily, some words we have to recycle, and use in two completely different ways, having completely different definitions that are unrelated. We do this in mathematics, we do this in history, science, politics, economics, engineering, law, etc. Every career that exists has this conundrum. Yet, I am sure that in most other areas of life, you actually do overlook the fact that the same word is used in two different ways, and you just adapt to it. Everyone does. There is no reason why mathematics should even be the exception. In fact, even within mathematics, you already do this, and you have not even noticed it. For example, it is statistically likely you never noticed that in mathematics, there exist two completely different definitions of the word 'division.' You just adapted to the fact, and your brain processed it like it does anything else. What it all comes down to is that the names really do not matter outside the context of communication. They ultimately have no actual bearing on the mathematics. If I use the name of an object in mathematics, what you should be doing is asking yourself if you have already learned how this object is defined specifically within mathematics. If you suspect you have not, then you should ask for the definition, and the definition will be given to you. If you do not understand the definition, then that is perfectly fine! 
Someone will explain the definition to you, that is what education is for, and that is what we have YT videos now for. This is how one approaches learning mathematics. Focusing on the name itself is not how one learns mathematics. In fact, this focusing on the name thing is not an effective approach to learning mathematics even if we improve the names to more descriptive ones. Atthe end of the day, you are not going to learn anything if you are not focusing on the definitions, regardless of how "good" the names are. The names, ideally, should be a helpful bonus. But they are definitely not meant to be the core of it. This is very, very bad. Maybe it is being taught at schools differently today, I do not know, but the stupid name "imaginary numbers" is still used.
    1
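A minimal sketch of the two facts invoked above, using only the ordinary field and ordered-field axioms; this is a standard argument, not something specific to this thread.

```latex
% Ordered field: every element satisfies x >= 0 or -x >= 0, and products of
% nonnegative elements are nonnegative, so x^2 = (-x)^2 >= 0 for every x.
% Since 1 = 1^2 and 1 != 0, we get 1 > 0 and hence -1 < 0, so -1 has no square
% root here; any field in which -1 does have a square root (such as C) cannot
% be ordered compatibly with its arithmetic.
%
% Division by 0 (in any field with 1 != 0):
x \cdot 0 = x \cdot (0 + 0) = x \cdot 0 + x \cdot 0 \ \implies\ x \cdot 0 = 0
\quad \text{for every } x,
% so no x can satisfy 0 \cdot x = 1, and "1/0" cannot denote anything.
```

Both points are theorems about the axioms themselves, which is the sense in which the comment says that division by 0 is ruled out rather than merely left undefined.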