Comments by "MC116" (@angelmendez-rivera351) on "Is 1 a Prime Number?" video.

  1.  @lexinwonderland5741  I agree with you that more should have been said about it, since the distinction between irreducible elements and units in a ring is ultimately at the root of why 1 cannot be considered a prime number. Look, do not get me wrong. I think that understanding how mathematical concepts from antiquity evolved into the mathematical concepts of today, as our understanding of mathematics improved and became more refined, is very fascinating, and certainly an important kind of knowledge to have in general. However, as far as answering the question "is 1 a prime number?" goes, the history is not enlightening at all: it ultimately does not answer the question. Yes, I know that mathematicians in the 1700s thought of 1 as a prime number; this is all well and fine, but it tells us nothing as to whether 1 actually is or should be considered a prime number or not. These questions are questions regarding the relationships between various mathematical concepts at a foundational level, not questions about names and conventions that mathematicians vote on. If you want to get at the question of whether 1 is a prime number or not, then you ought to compare the prime numbers with 1, analyze their properties and their roles within the integers, then compare how these things extend or fail to extend when you move on to other mathematical structures, like polynomials and Gaussian integers. This is how you answer the question. Appealing to the history of mathematics actually reinforces most people's misconception that 1 should be considered a prime number, and reading the comments on this video has resoundingly confirmed this suspicion. I think that discussing the history is perfectly fine when addressing the question "why did we ever consider 1 a prime number?" or "how has our understanding of prime numbers changed?" But neither of those questions is the question the video claims to address.
  3.  @valmao91  “It could, but we don't have a way to know that, as we don't know everything about primes, and probably never will.” I hate to burst your bubble, but despite how much you want to insist that mathematicians are highly ignorant about prime numbers, they are not. We know more than enough about prime numbers to know that our definition is the correct one. In fact, our definition not only encapsulates the concept of prime numbers perfectly in the integers, it does so in all commutative rings. Tested and proven. We have thousands of theorems on the matter reinforcing this conclusion, together with over 200 years of studying ring theory formally to back it up. So, no, you are completely wrong about our inability to know even basic facts about prime numbers, and I wish you were not so arrogant as to pretend you can tell mathematicians what they can and cannot know. “Therefore, there is room for discussion, especially with a case like this one where, technically, 1 should be prime, but isn't because it's redundant.” No, this is factually incorrect. 1 not being a prime number is not a technicality. 1 literally does not satisfy the definition of a prime number. 1 is not a prime number, and should not be considered one. Redundancy has nothing to do with it, and in my comments above, I laid out a perfect line of reasoning behind the definition of prime numbers, and why –1 and 1 are not prime numbers. I know you find it convenient to ignore all of that (because you did ignore it), but that is just dishonest.
  6.  @petevenuti7355  “Is there such a thing as a ternary operation that can't be broken down into binary operations.” There is, surprisingly. These are called irreducible n-ary operations. They exist for all n > 2. “What is an operation then? It must involve action, yes?” An operation is a function, but this raises the question of what is a function, does it not? So, what is a function? We intuitively tend to think of a function f as being fed by an input x, and spitting out the output f(x). This makes it sound like a function has to refer to an algorithm, a physical procedure. However, a function is actually just an abstract relationship. f relates x and f(x) in an abstract way. The reason teachers present it as an algorithm is because it makes the axiom that defines what a function is easy to visualize, but at the cost of being misleading. Consider two sets X and Y. In mathematics, we typically consider all objects to be sets, but for the sake of explanation, we can allow the members of X and Y to be arbitrary objects, they do not necessarily have to be sets themselves. Given X and Y, you can form a third set, the Cartesian product of X and Y. The Cartesian product of X and Y is the set of ordered pairs (x, y), where x is in X, and y is in Y. Now, there is a special class of subsets of this Cartesian product. These subsets G satisfy the following property: for all x in X, there is exactly one y in Y (always one, and only one), such that (x, y) is in G. This property is the property that teachers are ultimately alluding to when they talk about inputs and outputs of a function. The unique y such that (x, y) is in G is called the image of x under G, but in school mathematics, the teachers just call it "the output." As you can see, there is an abstract relationship between x and y that defines what the set G is, but there is no physical procedure involved. You can say y exists, but actually finding what y is, that is not required in order for G to be a valid "special subset" of the Cartesian product. I should mention that this is not the complete definition of a function, but the technical details that I have omitted are not important for the point I am making. The point I am making is that for every function f, there is an associated set G that satisfies the above property, that for all x in X, there is exactly one y in Y such that the ordered pair (x, y) is in G. And that is all there is to it. These sets exist as abstract objects, not as physical procedures. Now, in a given model of computability, there are some functions for which you can prescribe an algorithm that explicitly constructs or produces what the corresponding y is for a given x. In doing this, not only have you shown y exists, but also, you know what y is, you can give a finite description of how you obtained y. However, y could exist without being able to construct an algorithm to determine what it is. If this is the case, then such a function is called uncomputable (within that particular model). The most famous example of this is the busy beaver function, which I will not define here because I am not confident I understand the definition well enough to explain it. The only way you can limit yourself to computable functions, realistically, is by saying that the axiom of infinity is false (meaning there are no infinite sets, as far as the axioms are concerned). However, this means that you are saying that there is no such thing as the set of natural numbers. 
And by doing this, you give up Peano arithmetic and a bunch of other things. Building a mathematical system that is actually useful from this is very tedious, and not really worth the trouble of denying the axiom of infinity, especially because this axiom does so much for mathematicians and physicists. You cannot do any science without accepting the existence of infinite sets.
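To make the set-theoretic picture above concrete, here is a small Python sketch (my own illustration, with made-up helper names) that treats a "function" from X to Y as nothing more than a subset G of the Cartesian product X × Y satisfying the exactly-one-pair property; no algorithm for producing y from x is involved.

```python
# A function from X to Y, viewed purely as a set: a subset G of X x Y in which
# every x in X appears in exactly one pair. Checking the property is all we do;
# nothing here "computes" an output from an input.
from itertools import product

def is_function_graph(G, X, Y):
    if not G <= set(product(X, Y)):                      # G must be a subset of X x Y
        return False
    return all(sum(1 for (a, _) in G if a == x) == 1     # exactly one pair per x
               for x in X)

X = {1, 2, 3}
Y = {"a", "b"}
print(is_function_graph({(1, "a"), (2, "a"), (3, "b")}, X, Y))            # True
print(is_function_graph({(1, "a"), (1, "b"), (2, "a"), (3, "b")}, X, Y))  # False: 1 has two images
print(is_function_graph({(1, "a"), (2, "a")}, X, Y))                      # False: 3 has no image
```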
  12.  @forbidden-cyrillic-handle  “Obviously, some mathematicians had different opinions, and routinely used it in the past.” Yes, a definition that was used over a thousand years ago. Do you think I care? No, I do not care. The correct definitions are the ones which are used today, until such a time comes when those definitions are changed, if they ever do get changed. Some definitions were thought to have been correct in the past, yes. This is fine, but today we know them to be incorrect, so the past is irrelevant. We study mathematical history to learn from the mistakes we made in the past, not to continue making them by continuing to use definitions that no mathematicians today use. “What is routinely used by some mathematicians is not what the definition is.” You are wrong. The definition I proposed is used by all mathematicians, not just "some." You will not find any mathematicians from the 20th or 21st century who use any other definitions. You can try searching all you want, but you will not find it. “Until it officially changes, and becomes something more than routinely used by some, I prefer to keep the current definition.” The current definition is the one I provided. It has been the current definition since the early-mid 1800s. This is the definition that originated from rigorous research in ring theory. It is the definition every mathematician, without fail, has used since the late 1800s. “You need a big conference to vote the new definition,...” This was done more than a century ago. You are behind the times by millennia. This is ignorant.
  20.  @Alex-02  “If I say all green animals should be considered plants because they have the same color, the current definition of “plant” is irrelevant because we are discussing the definition itself.” If you say all green animals should be considered plants because they have the same color, then this does not address the current definition of "plant" at all, and as such, it does not replace said definition. Also, you chose a poor example for an analogy, because not all plants are green. In any case, if you want to define categories of objects by their color, then all you have defined is the color itself. This has nothing to do with biology. You can even define a subclass of the category of living beings based on color. Again, though, this has nothing to do with biology: this is simply about the color of the living beings. You are still including green fungi, green protozoa, green bacteria, etc., all in this category. There is no biological property that green animals share with plants, such that no other entities share that property, living or non-living. Therefore, it is ontologically inadequate to insist that they are in the same category. “Definitions are non-universal, made up by humans and always changing.” Well, this is just a false belief. Definitions are not mere labels you come up with arbitrarily on the basis of preference for labeling objects of your preference. Definitions have to be consistent with reality. To apply definitions to a set of properties, you first have to ensure that these properties actually are well-defined, and that there exists at least one object which has those properties. Then, when you apply the label to this set of properties, it follows that the label applies to all objects satisfying these properties. If you want to single out certain such objects as qualifying, and exclude the rest, then you need to specify properties which are satisfied by exactly those objects you are trying to single out. Also, for this to be justified, you cannot do this on an ad hoc whim. Otherwise, that renders all definitions as completely redundant and useless. What makes definitions useful is precisely their non ad hoc nature. Finally, there is the problem of decidability. A definition has to be decidable, because if there exists no algorithm by which anyone would ever be able to tell if a class of objects satisfies the given definition or not, then it is completely pointless, and as such, the 'definition' does not actually define anything. How precise you want to get with all of this depends on what academic discipline of study you are a part of, or whether the definition is just intended for colloquial nonsense. In mathematics, though, the highest level of achievable precision with these definitions is required. On the note of universality, mathematics actually are universal. Mathematics are independent of culture, nation, religious belief, etc. Yes, the types of notation you use to talk about mathematical concepts vary from language to language and culture to culture. The mathematical concepts themselves, though, are universal. A quasigroup is defined in Norway in exactly the same way it is defined in South Africa. A topological space is defined in Malaysia in exactly the same way as it is defined in the United States of America. The works that you see published by mathematicians from New Zealand are not in disagreement regarding the concepts and their properties with the works published by mathematicians in China. 
You seem to insinuate that mathematical definitions are of the same caliber as definitions of words you find in the Urban Dictionary. Yes, those words do vary in definition from location to location, and even just from person to person, and really, those words are not defined in any particular way at all, they are used arbitrarily, because they are not used to discuss concepts, they are just used to communicate basic bits of information about a particular ill-defined thing. This is not how mathematical definitions work at all, though. “Therefore you can discuss hypothetical scenarios were the current definition does not exist.” I can guarantee you that there is no hypothetical scenario in which the current definition would not exist and in which people have the knowledge of ring theory that we do today.
  21.  @Alex-02  “The point of this exercise was to ignore the current definition.” Then such an exercise is itself futile and pointless. It is not worth engaging in. It is sophistry. When in a discussion on a topic where definitions already exist, those definitions must be acknowledged before discussing whether their existence is, in fact, justified. Otherwise, you may as well be speaking gibberish. “I know, I agree that that would be a terrible definition!” Yes, and my point is, so is any definition where 1 ends up being a prime number. Hence why I have said in the past that saying 1 is a prime number is like saying that my Toyota is an animal. “That is kinda irrelevant for my point tho, all I'm saying is a discussion can be had about what the definition should be, and that's a matter of opinion.” We can have the discussion, but if you are going to ignore the current definition, then said discussion cannot be had. If you want to change the definition, then you need to start by (a) acknowledging that the definition exists, then (b) explaining how or why this definition fails in any way. Also, appealing to extraneous ideas like that of "factors" without even acknowledging how those are related to the current definition, and therefore, to whether there could be any merit in changing the definition, is entirely pointless. “I agree with the general point you are making in this section, yes, the mathematical concepts are universal. But that's different from mathematical definitions.” It is not. Definitions are merely how we formally classify and compartmentalize those concepts by using language. The actual strings of letters used for naming the compartments vary, of course, but that falls under notation, which I already addressed. The substance, the structure of the definition, does not vary. The example of prime numbers in this video is proof of that. “The definition has changed throughout history, so that is a counterexample to definitions not being universal.” No, it is not, because the universality of a definition has nothing to do with whether it has existed throughout all of history. Sorry, but that is literally not how the word "universal" works, and it never has been how the word works. Besides, the definition of primality used in the past according to the video, if we account for restrictions to positive integers only, is actually completely in agreement with the modern definition of primality. Therefore, your claim is false. Let me explain: the definition used in antiquity and in medieval times, according to the video, when applied only to positive integers, is that an integer is prime if and only if the only (positive) integer that measures it is the integer 1. This definition is completely equivalent to the modern definition, and in fact, the integer 1 does not fit this definition, hence even by the old definition, 1 is not a prime number. 1 does not fit the definition, because 1 does not measure 1. This is because integers do not measure themselves, as the video itself explained. Mathematicians in the past never really cared for numbers being divisible by themselves: they only cared about the smaller divisors, what we today call the proper divisors. The only positive integer that is a proper divisor of a prime number is the integer 1. 1, on the other hand, has no proper divisors. The set of positive integers smaller than 1 that divide it is the empty set. Notice how this contrasts with prime numbers, which by definition, do have proper divisors. 
This is completely equivalent to the modern definition. However, in the same way that there are people today who do not actually understand the modern definition of prime numbers, back then, there were also many people that did not understand the definition of prime numbers that was used back then. This was, as the video explained, because of philosophical debates surrounding the nature of the number 1, and whether it actually was a number. Mathematicians began to argue that we should actually consider 1 to be measurable by itself, despite not providing any sound reasoning for it. This led to this weird inconsistency where 1 did not fit the definition of primality, but for philosophical reasons, it was included in lists of prime numbers anyway, as if it was supposed to be "an exception" to the definition, in a very ad hoc fashion, for reasons that had nothing to do with mathematics. Such inconsistencies were common even during the medieval period, because mathematics lacked rigor. We see these same inconsistencies in older formulations of concepts in calculus, and even in concepts in algebra. Until a few centuries ago, for example, it was widely believed, without any reason at all, that an integer divided by 0 must have been 0. There was never any mathematical reasoning for this, not even heuristically. It was grounded in philosophical biases. Even so, in practice, no one actually ever evaluated divisions by 0, because it led to results that they knew were incorrect. Hence the inconsistency. “But... you just described your own hypothetical scenario, then claimed it doesn't exists when it clearly does because we can talk about it.” No, that is not how that works at all. Me being able to give a verbal description of an impossible situation does not mean the situation is actually possible and actually exists. This is an astronomical leap in logic. “The whole point of a hypothetical scenario is it doesn't have to be likely or even possible to happen in the real world.” No, that is not the whole point of a hypothetical scenario, not even close. A hypothetical scenario just refers to a coherent scenario which has not yet been known to happen, and which may or may not happen, but which is possible in principle (hence "coherent"). The fact that said scenario can be talked about does not make it real. If it were real, then it would not be hypothetical, it would be actual.
  22.  @Alex-02  Anyhow, you seem to be insisting on wasting time with banal platitudes, rather than actually getting to the point of the discussion of whether 1 is a prime number or not. The utility of the definition of prime numbers comes in considering the divisibility relation. For any two integers m, n, we are concerned with whether m divides n or not, and we want to classify the integers in such a useful way that it facilitates that study. The integer 1 is characterized by the unique property that for all integers m, m•1 = 1•m = m. This means that 1 divides all integers. –1 also divides all integers, because (–1)•(–1) = 1, which means that –1 divides 1, and 1 divides all integers, and the divisibility relation is transitive. –1 and 1 divide all integers. They are the only integers with this property. We can prove this. The only integer between –1 and 1 exclusive is 0, but 0 only divides 0 and no other integers. All other integers m satisfy m < –1 or m > 1. But if such an integer m divided –1 or 1, then we would have –1 ≤ m ≤ 1, which is a contradiction. Therefore, for all other integers m, there exists n such that m does not divide n. The only divisors of –1 and 1 are, in both cases, –1 and 1, and –1 and 1 are the only integers that divide all integers. As such, –1 and 1 are structurally isolated from the other integers in this fashion. They form a multiplicatively-closed structure, called a group. This already implies that, regardless of how we structurally classify the other integers in terms of their properties with respect to the divisibility relation, they belong in a different class than –1 and 1 do. 0 is also structurally isolated, because for all integers m, n, if m•n = 0, then m = 0 or n = 0, and because for all integers m, 0•m = m•0 = 0. All integers divide 0, and 0 only divides 0. All the integers that are not equal to –1, 0, 1, thus, when classified by their properties with respect to the divisibility relation, are demonstrably in a different class than either the class of 0 by itself, or the group that –1 and 1 are in. Earlier, I mentioned that for all integers m, 1•m = m•1 = m, and this is the defining property of 1. This means that for all integers m, m divides m. This also means that for all integers m, –m divides m, and m divides –m. This is possible, because –1 divides 1, and 1 divides –1. This means that for all integers m, –m, m, –1, 1 divide m and –m. However, remember that only –1 and 1 divide –1 and 1. This means that the only integers that divide all their own divisors are –1 and 1. Again, this is why –1 and 1 are multiplicatively isolated, and form a multiplicative group. All other integers are divisible by –1 and 1, but do not themselves divide –1 and 1. Thus, for all other integers, we can define a concept of a divisor which is not divisible by the dividend. Such divisors are called proper divisors. –1 and 1 are proper divisors of –m and m when –m and m are not equal to –1 and 1. –1 and 1 have no proper divisors. –m and m always divide m and –m, which are themselves divisors of –m and m, so –m and m are not proper divisors of –m and m, and this is true for all m. This raises the question, are there integers whose only proper divisors are –1 and 1? Yes, and those are precisely the prime numbers (if you include their negative versions too). All the other integers, besides them, and besides –1, 0, 1, can be written as products of proper divisors which are not –1 or 1. 
In fact, the prime numbers are characterized precisely by the idea that the product of two prime numbers is never a prime number. This means that, in this classification, they must be categorically distinct from the remaining integers not considered, which we call the composite integers. This is how you motivate the definitions: you need to focus on analyzing the structure the definitions are meant to intuitively capture. Furthermore, this classification of the integers into four classes with defining properties extends to all commutative rings, making it an extremely robust and general classification. Functions which preserve the structure of a ring (ring homomorphisms) also preserve these classes and divisibility relations: such functions will never map prime numbers from one ring to composite numbers from another, and will never map –1 or 1 (or some other unit) to a prime number. This is how we know this conceptual classification is the correct one. The way the classification is done for arbitrary rings is as follows: we consider all the zero divisors. A zero divisor is an element m such that there exists some nonzero n such that m•n = n•m = 0. In the integers, the only zero divisor is 0, but in other commutative rings, there may be nontrivial zero divisors. We know this works, because all multiples of zero divisors are also zero divisors, and zero divisors can only divide other zero divisors. Once we have considered this, we consider all the units. The units are those elements m such that there exists some n such that m•n = n•m = 1. In all rings, –1 and 1 are units, but some rings have other units. Units are only divisible by units, and units divide all objects in a ring. Also, the product of two units is always a unit. Of the remaining objects in a ring, we consider their proper divisors. If their only proper divisors are units (remember, the units have no proper divisors), then they are called irreducible elements, and they are the equivalent of prime numbers in that ring. The product of two irreducible elements is never irreducible, but composite, and composite elements always have some proper divisors which are irreducible. One last important idea is the way these classifications actually make the divisibility relation into an order relation, where the units are the minimal elements, and the zero divisors are the maximal elements (well, strictly speaking, 0 is always the greatest element). Irreducible elements are always the "next" elements after the units. The way you make this work is you say that two elements m, n are equivalent (m ~ n) if and only if m divides n and n divides m. This partitions the ring into equivalence classes, which we can denote [m]. We say that [m] divides [n] if and only if every element of [m] divides every element of [n]. Now, if [m] divides [n] and [m] and [n] are distinct classes, then [m] is a proper divisor of [n]. The class [1] is the smallest element in this ordering, because for all m, [1] divides [m], and if [n] divides [1], then [1] = [n]. The class [0] is the greatest element, and the class of nontrivial zero divisors is the class "right before" [0]. Well, the zero divisors are more complicated, since some zero divisors are proper divisors of other zero divisors, but the general idea stands.
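As a rough illustration of this four-way classification, restricted to a small range of ordinary integers, here is a Python sketch; the helper names (divides, proper_divisors, classify) are ad hoc choices of mine, not standard library functions.

```python
# Classify the integers -6..6 into the four families described above:
# the zero divisor 0, the units -1 and 1, the irreducible (prime up to sign)
# elements, and the composite elements.

def divides(m, n):
    """True if m divides n in the integers (0 divides only 0)."""
    return n == 0 if m == 0 else n % m == 0

def proper_divisors(n):
    """Divisors d of n that n does not divide back (the 'smaller' divisors)."""
    return [d for d in range(-abs(n), abs(n) + 1)
            if d != 0 and divides(d, n) and not divides(n, d)]

def classify(n):
    if n == 0:
        return "zero divisor"
    if n in (-1, 1):
        return "unit"                               # no proper divisors at all
    if set(proper_divisors(n)) <= {-1, 1}:
        return "irreducible (prime up to sign)"     # only proper divisors are units
    return "composite"                              # has proper divisors besides -1 and 1

for n in range(-6, 7):
    print(n, classify(n))
```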
  25.  @valmao91  Your statement is a naive one. All of number theory is one academic endeavor centered around the study of prime numbers, so I have no idea what you mean by "almost no math problems require diving deep into prime numbers." This, to me, sounds like a case of wilful ignorance. Not only are you not willing to read my comments carefully, you have also done nothing to address them, or even understand them, and it is clear that you actually have no interest in learning anything about the mathematics, or in becoming better informed about how mathematicians come to the conclusions they do. I am starting to lose respect for you, and I get the impression that there is no line of reasoning, no matter how sound, that is capable of making you realize you are wrong. I am afraid interacting with you and listening to you was a waste of my time. Then again, this is often the case when I interact with laypeople. Most people nowadays would rather protect their own feelings than listen to the facts. “History is proof of this, as even while being uncertain about the nature of 1, math as a whole went on.” Yeah, this does not prove your point at all, for the very simple reason that "the nature of 1" is ultimately completely irrelevant to mathematics. It is entirely a philosophical discussion. You see, the "nature" of objects in mathematics is ultimately irrelevant. Mathematics are only concerned with mathematical structures, how objects in the structures are related, and how the various structures interact. The actual nature of those objects is irrelevant: what matters is that the objects interact according to the axioms that define the structure. Is 1 a "number"? Well, what even is the definition of a "number"? There is no definition that is widely accepted by mathematicians, because again, it does not matter. Even if we say 1 is not a number, it has no effect on the mathematics. After all, we can do arithmetic just fine with all sorts of objects which are not numbers. We do arithmetic with functions, vectors, tensors, matrices (well, strictly speaking, matrices are special cases of functions), polynomials, etc. We can even perform arithmetic on mathematical structures themselves. So, the nature of these objects is just a philosophical concern that does not matter to mathematicians. The only defining property of 1 that any mathematician cares about is that 1•x = x•1 = x for all x in the structure. Anyway, I replied to you simply because I think it would have been rude on my part to not let you know, but I know better than to be a fool and continue engaging with you. If, one day, you decide to be intellectually honest, take the work of mathematicians seriously, and are open to having your mind changed by mathematical reasoning, then I will be willing to interact with you again, but in the meantime, you will not be hearing back from me, and I will not be listening to you any further. I hope you have a nice day, though.
  30. 13:08 - 13:19 It actually does not fit the definition. Earlier in the video, it was clarified that what mathematicians meant by "measured" was a notion of divisibility which only considered proper divisors. In other words, the number whose divisors are being considered does not measure itself, because it is not a proper divisor of itself, since it is not smaller than itself. This is included in the definition of a prime number. "A number is prime if and only if the only number that measures it is 1." Prime numbers are divisible by 1, AND by themselves, as well. This confirms that "measured by" and "divisible by" are not synonymous, because of the distinction between divisors and proper divisors. Prime numbers are divisible by themselves, but are not measured by themselves, they are only measured by 1. In modern terminology, what this means is that the only proper positive divisor of a prime number is 1. Now, it is clear that 1 does not fit the definition of a prime number: 1 is not measured by 1. It cannot be, because 1 is not a proper divisor of itself, by definition. In fact, 1 has no proper divisors. In the older terminology, this means that there are no numbers that measure the number 1. Prime numbers are measured by the number 1, and only the number 1, but the number 1 is measured by no numbers at all. Therefore, it does not satisfy the definition given by medieval mathematicians of a prime number, even if you argue that 1 actually is a number at all. In other words, what this tells me is that medieval mathematicians were just inconsistent, and were incapable of detecting that treating 1 as a prime number was inconsistent with the definition of prime number they themselves used, since the definition of "measured by" that they used necessarily meant 1 was not measured by any numbers. They still came to the conclusion that 1 was measured by 1, because they were applying their own definitions inconsistently. This is something that was not uncommon back then, and it is the reason why rigor became a necessity later on.
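Here is a minimal Python check of that reading of "measures" (a proper positive divisor, i.e. a divisor strictly smaller than the number); the function names are just for illustration.

```python
def measures(m, n):
    """m measures n: m is a positive divisor of n that is strictly smaller than n."""
    return 0 < m < n and n % m == 0

def measurers(n):
    return [m for m in range(1, n) if measures(m, n)]

print(measurers(7))    # [1]              : a prime is measured by 1 and by nothing else
print(measurers(12))   # [1, 2, 3, 4, 6]  : a composite has measurers other than 1
print(measurers(1))    # []               : 1 is measured by no number at all
```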
  35.  @maxaafbackname5562  Yes, there does exist such a number. I can construct it very easily too. Let Z be the set of integers, where + and · denote addition and multiplication, respectively. Since · is commutative, I can form the set of polynomials with integer coefficients, Z[X]. Every polynomial can be written in the form R(X)·(X^2 + 1) + A·X + B. Two polynomials P(X) and Q(X) are equivalent if and only if the difference P(X) – Q(X) is divisible by the polynomial X^2 + 1. In other words, A·X + B is equivalent to R(X)·(X^2 + 1) + A·X + B for all polynomials R(X). If two polynomials P(X) and Q(X) are equivalent, then we write P(X) ~ Q(X). For a given polynomial P(X), the set of all polynomials Q(X) such that P(X) ~ Q(X) is denoted [P(X)]. In other words, [P(X)] = [Q(X)] if and only if P(X) ~ Q(X). [P(X)] is called the equivalence class of P(X). We can find the set of all the possible equivalence classes, {[P(X)] : P(X) in Z[X]}. This set is denoted Z[X]/~; it is also denoted Z[X]/(X^2 + 1). We can define addition +' for these equivalence classes, by letting [P(X)] +' [Q(X)] := [P(X) + Q(X)]. Similarly, we can define multiplication ·' by letting [P(X)]·'[Q(X)] := [P(X)·Q(X)]. Remember, P(X) = R(X)·(X^2 + 1) + A·X + B for some polynomial R(X) and integers A, B. As such, [P(X)] +' [Q(X)] = [R(X)·(X^2 + 1) + A·X + B] +' [S(X)·(X^2 + 1) + C·X + D] = [R(X)·(X^2 + 1) + A·X + B + S(X)·(X^2 + 1) + C·X + D] = [(R(X) + S(X))·(X^2 + 1) + (A + C)·X + (B + D)] = [(A + C)·X + (B + D)], while [P(X)]·'[Q(X)] = [R(X)·(X^2 + 1) + A·X + B]·'[S(X)·(X^2 + 1) + C·X + D] = [(R(X)·(X^2 + 1) + A·X + B)·(S(X)·(X^2 + 1) + C·X + D)] = [R(X)·S(X)·(X^2 + 1)·(X^2 + 1) + R(X)·(C·X + D)·(X^2 + 1) + S(X)·(A·X + B)·(X^2 + 1) + (A·X + B)·(C·X + D)] = [(A·X + B)·(C·X + D)] = [A·C·X^2 + A·D·X + B·C·X + B·D] = [A·C·(X^2 + 1) + (A·D + B·C)·X + (B·D – A·C)] = [(A·D + B·C)·X + (B·D – A·C)]. In other words, +' and ·' are well-defined. This means [A + B·X] +' [C + D·X] = [(A + C) + (B + D)·X], and [A + B·X]·'[C + D·X] = [(A·C – B·D) + (A·D + B·C)·X]. With this construction in place, I can prove that [X]·'[X] = [–1]. [X]·'[X] = [0 + 1·X]·'[0 + 1·X] = [(0·0 – 1·1) + (0·1 + 1·0)·X] = [–1 + 0·X] = [–1]. We typically denote [A + B·X] as A + B·i, with [X] = i, so i·i = –1. There, I have constructed a number, an element of Z[X]/~, such that when squared, it is equal to –1. The set Z[X]/~ is called the set of Gaussian whole numbers, or the set of Gaussian integers. This set contains the integers. A Gaussian integer A + B·i is an integer if and only if B = 0.
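For anyone who wants to check the arithmetic, here is a short Python sketch of the same construction, representing the class [A + B·X] by the pair (A, B) and using the reduction X^2 → –1; the function names are my own shorthand, not an established library.

```python
# The class [A + B*X] in Z[X]/(X^2 + 1), stored as the coefficient pair (A, B).

def add(p, q):
    (A, B), (C, D) = p, q
    return (A + C, B + D)                 # [A + B*X] +' [C + D*X]

def mul(p, q):
    (A, B), (C, D) = p, q
    # (A + B*X)(C + D*X) = A*C + (A*D + B*C)*X + B*D*X^2, and X^2 is equivalent to -1
    return (A * C - B * D, A * D + B * C)

i = (0, 1)                                # the class [X]
print(mul(i, i))                          # (-1, 0): [X]*[X] = [-1], i.e. i*i = -1
print(mul((2, 3), (4, -1)))               # (11, 10): (2 + 3i)(4 - i) = 11 + 10i
```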
  36.  @petevenuti7355  “I like to see operations like adverbs, the definition being a description of the process, basically, am I wrong?” Yes, this is wrong. You are thinking of mathematics as "something a computer does," not as a genuine collection of abstract concepts, which is what it actually is. If you want to only consider a concept "mathematical" if there is an algorithm associated to it, then you will be shocked to learn that 99% of what we call mathematics would not count as "mathematical" under your definition. A perfect computer cannot do the vast majority of things we can do. In fact, under your definition, there is no such thing as a real number or a complex number, there is no such thing as an uncountable set. Why? Because computers can only deal with countable sets. An arithmetic operation is not an algorithm that spits out a result. Some arithmetic operations do have an associated algorithm, these are called computable operations. The result of an operation already exists as a mathematical object, regardless of whether you or any computer can actually identify the result. This is because an operation is just a set, a function, and nothing more. “In the specific case of 0/0, would it be one, zero, or undefined?” IF 0/0 exists (and it may not exist), THEN it is equal to 1. Why? Well, what is the definition of division? In the context of semirings and generalizations thereof, x/y is an abbreviation for x·y^(–1). Here, · is the multiplication of the structure, which is well-defined, and y^(–1) is the multiplicative inverse of y: it is defined to be the unique element with the property that y·y^(–1) = y^(–1)·y = 1. The reason (x/y)·y = x is because (x·y^(–1))·y = x·(y^(–1)·y) = x·1 = x. Similarly, (x·y)/y = x, because (x·y)·y^(–1) = x·(y·y^(–1)) = x·1 = x. This is the actual definition of division. Hence, x/0 is just an abbreviation for x·0^(–1), where 0^(–1) is defined such that 0·0^(–1) = 0^(–1)·0 = 1. Here, you should ask yourself: does 0^(–1) exist? If it does exist, what is it equal to? The answer is that it does not exist (unless 0 = 1), and I can prove it, because I can prove that 0·x = x·0 = 0 is true for all x, meaning there cannot be an x such that 0·x = x·0 = 1 (again, unless 0 = 1). So, 0/0 does not exist, but if it did exist, it would be 1, because 0/0 := 0·0^(–1) = 1. On the other hand, if 0 = 1, then 0^(–1) = 1^(–1), so 0/0 = 0·0^(–1) = 1·1^(–1) = 1 = 0. “It largely depends on your definitions of division and zero...” Well, yes. ALL symbolic expressions depend on their definition. Symbols do not have definitions. Definitions are something we, sentient beings, impose on those symbols. Otherwise, the symbols are just arbitrary arrangements of matter and energy on a physical surface. That being said, there is only one definition ever used for these objects, so the dependence is irrelevant. To properly define what 0 is, you need to know about the distributive property. Consider two binary operations $ and °. $ is said to distribute over ° if a$(b°c) = (a$b)°(a$c), and (b°c)$a = (b$a)°(c$a). Normally, these operations are denoted as · and +, but I am avoiding this because I want you to realize the binary operations could be anything, they do not have to be the familiar addition and multiplication operations of natural numbers that we know. Now, say $ has an identity element e, where you have e$x = x$e = x, and say ° has an identity element z, where you have z°x = x°z = x. Then z is called 0, and e is called 1. 
The 0 and 1 here do not have to be interpreted as natural numbers at all, they could be any type of mathematical object, in principle, but we still denote them as 0 and 1, because regardless of the actual structure, they always play the same role. You will find that z$x = x$z = z, which means z is an absorbing element of $, motivating us to call z "0" and e "1". This is also why $ is generally just denoted as ·, and ° as +. If you have two binary operations, and one distributes over the other, then the identity element of the one being distributed over is called 0. This is the definition. As for division, I believe I already explained the definition. In this case, just replace · with $, and you have your definition. “On the other hand, if you considered division a multi-step process, then dividing anything by zero, even zero, would be a do-nothing function, because it would be zero steps, and what do you even return from a do-nothing function? Zero, the starting number, or it is just meaninless?” If /0 is a do-nothing function, then x/0 = x, by definition. However, your reasoning behind /0 being a do-nothing function is completely wrong, since it assumes division is an algorithm, not an operation. “if division is defined as recursive subtraction...” You cannot define division as recursive subtraction. If you do, then (1/2)/(2/3) is undefined, and so is e/sqrt(π). Similarly, you cannot define multiplication as recursive addition. In the specific case of the natural numbers, you can define multiplication as recursive addition, but if you are talking about rational numbers or real numbers, then no, you definitely cannot. “if 1 is defined a set of {}, isn't the empty set just as prime as 1?” What do sets have to do with primality? “If you think of something times zero as a do-nothing function and that's why returns zero,...” You have the wrong idea of what "doing nothing" is. If I have a function f that takes the input x and maps it to the output 0, that is not a do-nothing function, because you are taking the input and doing something to it, changing it into 0. A do-nothing function is a function that maps x to x for all x. It does nothing to x, leaving it unchanged. Multiplying by 0 is not "doing nothing." Also, 0 is not "nothing." 0 is the empty set. The empty set is something. There is no such thing as "nothing" in mathematics, and I wish teachers would stop telling people that there is. “If you think of something times zero as a do-nothing function and that's why returns zero, then zero is not in other numbers like a factor...” What are you talking about? What does 0 being the empty set have to do with the factors of an integer? This is nonsensical, to be honest, but I want to help you find where your question went wrong. “What is the concise definition of division?” x/y := x·y^(–1), where · distributes over +. “Then I can see how it works, where it breaks, and think about how the undefined can be defined without breaking the rest of math.” Division by 0 cannot be defined. This is a theorem. There is nothing you can do to "fix" it, and I would argue that thinking of division as being "broken" to begin with is already incorrect. 0·x = x·0 = 0 is an inevitable consequence of how · and 0 are defined. “The concept of the empty set being the building block of 1 does seem to make 0 a factor of everything,...” No, it does not. Do you understand what the word "factor" means?
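A quick way to see the nonexistence claim in a concrete setting is to search for multiplicative inverses by brute force in the finite rings Z/n; this little Python sketch (with an ad hoc inverse helper of my own) shows that 0 only acquires an inverse in the trivial ring where 0 = 1, exactly as argued above.

```python
def inverse(m, n):
    """Some x with m*x = 1 (mod n), or None if no such x exists in Z/n."""
    for x in range(n):
        if (m * x) % n == 1 % n:
            return x
    return None

for n in (1, 2, 6, 7):
    print(n, {m: inverse(m, n) for m in range(n)})
# n = 1: {0: 0}  -- the trivial ring, where 0 = 1, so 0 does have an "inverse"
# n = 6: only the units 1 and 5 have inverses; 0, 2, 3, 4 have none
# n = 7: every nonzero element has an inverse (Z/7 is a field), but 0 still has none
```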
  37.  @rmsgrey  “For the question of 0/0, that does, indeed, come back to your definitions.” Are you implying that there are things that do not come down to definition? Because if not, then I find that pointing this out is unhelpful. Of course it comes down to definitions. OP's question is, what is that definition? “If you define division as the inverse operation of multiplication, then x = 0/0 is equivalent to 0·x = 0, and any (finite) value of x will work in that equation.” This is not a coherent definition. You have not even explained what exactly it means to be the inverse of a binary operation, which is what I would argue needs defining to begin with. You said division is the inverse operation of multiplication, but then proceeded to claim that the result of 0/0 is an entire set of numbers. This is nonsensical, and I would also say confusing to most people. “If the function converges on the same value from either direction, then it's convenient to assume that it's also continuous at that point, giving a value to 0/0 which can be any number.” No, this is definitely not how that works, and I strongly dislike it when professors make the choice to give an explanation this inaccurate to calculus students. To start with: no, we do not assume a function is continuous at a point. If I have two functions f, g, and I am interested in studying the behavior of f(x)/g(x), then it does not matter if f(p) = 0 and g(p) = 0. We care about the behavior of f and g near p, not their value at p. To put it symbolically, we only care about lim f(x) (x —> p) and lim g(x) (x —> p). f and g may be discontinuous, they may be continuous, they may even be undefined at p, it does not matter, because what we are interested in are the limits, not the actual values of f and g at p. Also, whatever conclusions you end up drawing about lim f(x)/g(x) (x —> p) have absolutely nothing to do with what the value of 0/0 is. Absolutely nothing. Besides, the conceptual approach here is completely wrong too. What you should be looking at is to define a function h such that h(x, y) = x/y, and then look at lim h(x, y) (x —> 0, y —> 0). But, even then, whatever conclusions you end up drawing are irrelevant, and ultimately have nothing to do with the value of h(0, 0) = 0/0. h does not have to be continuous. This idea that it has to be is nonsense. Division is an arithmetic operation, so you obviously cannot define what h(0, 0) is in terms of limits. If it were defined, then you would already know what it is before ever arriving at a calculus course. “There are several ways of defining division, which pretty much all explicitly exclude division by zero,...” Are there multiple definitions? I have only ever seen one definition of division: that of multiplying by the multiplicative inverse. “One moderately standard definition of division (when working with rationals) is: (a/b)/(c/d) = (a/b)·(d/c).” This is not exclusive to the rational numbers. This is the definition of division for all mathematical structures where a concept of multiplication (defined as distributing over addition) is well-defined. “When working with integers, there are at least with two different concepts of division: there's division only when the divisor is a factor of the dividend, and there's quotient remainder division.” I have never heard a mathematician call either of those things division. The Euclidean quotient-remainder algorithm is just that: the Euclidean algorithm, and while it may be somewhat related to division, it is a different concept altogether. 
And you are conflating the concept of divisibility with the concept of division. Again, the two are related, but they do not both go by the name of "division."
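To make the point about limits concrete, here is a tiny numeric illustration (a sketch of my own, not anything from the video or the thread) of h(x, y) = x/y evaluated along different paths into (0, 0): the ratio settles on different values depending on the path, so no continuity argument can single out a value for 0/0.

```python
def h(x, y):
    return x / y

for t in (0.1, 0.01, 0.001):
    # along y = x, along x = 2y, and along x = y^2, respectively
    print(h(t, t), h(2 * t, t), h(t * t, t))
# the three columns tend to 1, 2, and 0, so lim h(x, y) as (x, y) -> (0, 0) does not exist
```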
  38.  @rmsgrey  “How would you define division when restricted to integers? Or polynomial division?” I would not, is the answer. You can perform the Euclidean quotient-remainder in the ring of integers and in polynomial rings over a field, but that is not the same as division. The existence of such an algorithm is actually used to classify rings. An integral domain where such an algorithm exists is called a Euclidean domain. Polynomial rings over a field and the ring of integers are examples of Euclidean domains, and every Euclidean domain is also a unique factorization domain. To define division, you work with a division ring, instead. “In abstract algebra, a magma (a set with a binary operation, where the set is closed under that operation), whose binary operation can be referred to as multiplication (symbol * ) may not have an identity element, let alone multiplicative inverses, but there are still two partial functions, left division ( a \ b ) and right division ( a / b ) defined as the value x, where such a value exists and is unique, such that b = a * x and a = x * b respectively.” I have never seen a scholarly work on quasigroups where left-division and right-division are defined as partial operations. If the magma is not a quasigroup, then division is simply not well-defined. That is all there is to it. Besides, when we talk about "dividing by zero," which is the context we are in, this general formalism of quasigroups is inapplicable. Defining 0 requires having two binary operations, one distributing over the other. The structure need not be a ring (the addition and multiplication need not be associative nor commutative). The identity element of the operation being distributed over is called 0, and the operation that distributes, if it does have an identity element, is called 1. However, 0 is not a multiplicatively cancellable element in a structure satisfying these axioms. The ring theoretic definition I provided is just a special case of the quasigroup definition, because rings are a richer structure than quasigroups, where it is meaningful to talk about 0, as opposed to arbitrary quasigroups. Though, again, I remind you that I believe you have still defined division incorrectly, even for the context of quasigroups. (Right)-division, the operation, is characterized by the axioms (x/y)·y = x, (x·y)/y = x. “Where a multiplicative inverse exists, the two definitions - division as the direct inverse of multiplication, and division as multiplication by the multiplicative inverse - are equivalent (and the definition I gave for rational numbers is a special case of the latter)...” I know as much. “...except when you try to extend them to 0/0, where the multiplicative inverse approach concludes that, if you pretend you can have a multiplicative inverse of 0, you arrive at a specific value of 1 (which also breaks the general rule that 0 times anything gives 0 since 0 times the hypothetical 1/0 gives 1), while the direct inverse approach concludes that 0/0 could have any value since 0 times x is 0 whatever x is.” No. The quasigroup approach does not conclude 0/0 could have any value. The quasigroup approach concludes that if 0/0 exists, then (0/0)·0 = 0 AND (0·0)/0 = 0. In the latter, one simply has 0/0 = 0, which indeed satisfies (0/0)·0 = 0, since (0)·0 = 0. However, as stated, 0 is not actually multiplicatively cancellable, so the multiplicative magma of the bi-magma cannot be embedded in a quasigroup.
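The unique-solvability requirement is easy to test by brute force; here is a small Python sketch (the helper name is made up) over the multiplicative magma of Z/7, showing that a·x = b has exactly one solution when a is cancellable, while with a = 0 there are either no solutions or many, so the structure cannot be a quasigroup and "dividing by 0" is not well-defined there.

```python
def right_divide(b, a, n):
    """All solutions x of a*x = b in Z/n; 'b divided by a' exists only if there is exactly one."""
    return [x for x in range(n) if (a * x) % n == b % n]

print(right_divide(3, 2, 7))   # [5]                    : 3 "divided by" 2 is 5 in Z/7
print(right_divide(0, 0, 7))   # [0, 1, 2, 3, 4, 5, 6]  : every x solves 0*x = 0, no unique value
print(right_divide(1, 0, 7))   # []                     : no x solves 0*x = 1
```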
  44.  @petevenuti7355  “Personally in my mind any linear algebra has that one input one output property and that's what linear means in that context. I don't believe you are also saying that a function has to be linear to be a function were you?” I have never seen or heard the words "linear algebra" used in that fashion. This may just be my unfamiliarity with computer science, but in mathematics, "linear algebra" refers to the study of vector spaces, and of linear functions between them. A linear function is a function that satisfies f(u + v) = f(u) + f(v) and f(a·u) = a·f(u). When it comes to finite-dimensional vector spaces, we represent those functions via matrices. Anyway, functions need not be one-to-one. Those are injective functions. Functions are, in general, some-to-one, where "some" could be one or many. A function cannot be one-to-many, however, or one-to-none. “I'm assuming that only needs to be the case in the description of the definition of the simple binary functions we are discussing being division multiplication and what they're built from, I believe you're only saying that those specifically have to be linear , correct?” If by linear, you mean some-to-one, then yes, that defines what a function is. However, the definition of a binary operation is more specific than that of a function. An n-ary operation on X is a function with domain X^n and codomain X. A binary operation is an n-ary operation where n = 2. However, I reiterate that I have never seen the word "linear" being used in this fashion, and in general, no, binary operations are not linear, in the mathematical sense. “What would be your opinion on giving 0/0 it's own symbol, much like the numeral " i " , (essentially making it its own object outside of the systems you guys were discussing that I don't know the vocabulary for) even if it won't allow for a conceptual definition (like i) it would at least make errors glaringly obvious.” Would it? I think 0/0 already is capable of denoting errors by itself just fine. “I think I'm seeing what you're trying to explain to me, your separating the concept of what these relationships are from the mechanism of how they are calculated.” This is correct. For example, in a mathematical proof, I may be able to prove that there exists a natural number satisfying some property. However, there may not exist an algorithm to compute the binary digital expansion that represents such a number. For example, Rayo's number exists, and is well-defined, but it is uncomputable. There exists no algorithm that can tell me what the mth digit of Rayo's number is. And I am not saying the technology of modern computers is not advanced enough. I am saying it is logically impossible for such an algorithm to exist, regardless of how perfect or ideal this computer is. Simply put, an idealized Turing machine could not compute the digits of such a number. An omniscient being could not compute the digits of such a number, because it is not mathematically possible for that to happen. However, the number does exist, and one can prove it satisfies a number of properties without even knowing what the number "truly is." Fun fact: Rayo's number is not the largest named finite number out there. 
“I never was and would never deny the concept of infinity, and I don't think you were saying I was, I think you were just saying that some of the things I was saying would point to that conclusion but in explaining that to me it sounded like you're denying the concept of "nothing" or by saying "nothing doesn't exist" or did you just mean in the sense that that's what nothing is by definition, something, everything, that doesn't exist..?” I am not sure how you came to that conclusion, since I am not sure how that is a plausible interpretation of the words I said. The point I am making is that, in mathematics, we have certain axioms. What you are able to prove and study in mathematics depends on what those axioms are. By changing the axioms, you change the realm of possibilities of what can be proven, and what cannot be proven. The axioms most mathematicians have agreed to use for basically all intents and purposes are the Zermelo-Fraenkel axioms of set theory, which include the axiom of infinity. These axioms imply the existence of functions which cannot be computed in a computability model. However, the functions need not be computable to be well-defined. To be well-defined, you just need some well-formed formula in the language of the theory that describes the existence of such a function, and prove that the formula is satisfiable. I can give you a description that this function, and only this function, satisfies, even if I cannot figure out which exact ordered pairs are the elements of G and which are not. The relationships between these properties and the properties of other objects are what allow us to do mathematics. This is why it is so easy for us to study numbers that are too big for computers to handle. A mathematician can tell you many things about the number 3^(3^(3^(3^(3^3)))), even though no computer can store its binary digital expansion (such a computer would have to be bigger than the observable universe). I can describe infinite sets, even if the elements of the set cannot all be known. I can describe uncountable sets, even though none of the elements can be known. Mathematical proofs are such that they transcend the capabilities of computation, because they deal with the properties of abstract objects, and not merely computer representations of those objects.
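One concrete instance of that last point, sketched in Python with nothing but modular arithmetic and logarithms: we can pin down the last decimal digit of 3^(3^(3^(3^(3^3)))) without ever writing the number down, and we can estimate how many digits even a much smaller tower has.

```python
import math

# Every exponent in the tower is an odd power of 3, and 3**k = 3 (mod 4) when k is odd,
# so the topmost exponent E satisfies E = 3 (mod 4). Powers of 3 mod 10 cycle with
# period 4 (3, 9, 7, 1), so the last digit of 3**E equals the last digit of 3**3.
print(pow(3, 3, 10))                      # 7 -- the last decimal digit of the whole tower

# By contrast, even the much smaller tower 3**(3**(3**3)) = 3**(3**27) already has
# roughly 3.6 trillion decimal digits:
print(int(3**27 * math.log10(3)) + 1)
```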
  46.  @MuffinsAPlenty  I disagree. The truth is that 1 not being a prime number has absolutely nothing to do with the convenience of how we state the fundamental theorem of arithmetic. I dislike it when people who are educated in mathematics try to present everything as if it is "a matter of convenience," because that is just not true at all. 99.99999% of things in mathematics are never about convenience. Sure, some things in mathematics are purely a matter of convention. The fact that we still use what I consider to be bad notation for derivatives, such as dy/dx, instead of using functional notation for them, such as D(y), really is entirely a matter of convention, even if one of the conventions is objectively superior to the other for three dozen reasons. It really is just notation. Using base 10 for our positional representation system to denote integers, instead of base, say, 60, like the Babylonians used to do, is a convention. In fact, using positional representation systems at all, rather than non-positional ones, is itself a convention. So, I am not saying there are no conventions in mathematics. My problem is that everyone on YouTube presents so many things (like 0! = 1, or 1 is not prime, or the ideas about the radical symbol being used as a function, etc.) as conventions that actually are not conventions. 1 not being a prime number is not a matter of notation, and it is not a choice we get to make. It is an irrefutable mathematical fact that, when it comes to commutative rings, and how we classify objects, there are exactly four families into which these objects can and do fall, and these four families are exhaustive, distinct, and mutually exclusive. We can characterize these families as (a) those objects x such that there exists some y such that x•y = y•x = 0; (b) those objects x such that there exists some y such that x•y = y•x = 1; (c) those objects which are neither of the above, and whose proper divisors are exactly the objects in (b); (d) those objects which are not in (a) and have proper divisors in (c). How we choose to label these four families with four distinct labels is completely arbitrary, yes. We can even choose to have multiple distinct labels for the same individual family, yes. However, we can never choose to insist that elements from distinct families actually belong to one single family, and should be labeled as such. This is not a choice we get to make, because it is conceptually inconsistent with the mathematics above, and it leads to ill-defined terminology. Calling 1 a prime number is entirely analogous to insisting that my Toyota is a plant. Anyway, what this comes down to is, there is an actual conceptual reason behind why 1 is not and cannot be a prime number, no matter how much we would like it to be. It has nothing to do with how easy it is to formulate the factorization theorem in the English language when we reject 1 as a prime number. People should be taught the actual conceptual reason behind why 1 is not a prime number, not this "it's more convenient" nonsense. And no, I am not saying we need to be formal about it. Simple intuitive explanations will do. I know that, as far as colloquial language is concerned, you can arbitrarily coin words and make them mean absolutely nothing and use them in self-contradicting fashion, or make them have useless meanings for the sake of trolling, and you can arbitrarily change how you use those labels any time you want to. 
But this is not colloquial language we are dealing with now, is it? We are dealing with abstract mathematics and number theory. It is a serious discipline of study. Going around telling biologists that your Toyota is a plant, because you chose to change the definition of the word "plant" in some unspecified, ad hoc way so that it includes your Toyota, is not how science works. Similarly, simply changing the definition of the terminology ad hoc so that 1 is a prime number is not mathematics. That is just pseudomathematical crankery. Now, I am not actually accusing anyone of having done this. That is not the point I am making. The point I am making is that educators need to stop encouraging this idea that all definitions exist only according to convenience, and that we can change them willy-nilly however we want. This is true of colloquial language, but not of language in academic disciplines of research. Educators also need to stop presenting fundamental mathematical facts that we do not get to do anything about as if they are something we choose. Perhaps you think otherwise, but that would be beyond incomprehensible to me. Maybe I am out of my element. Maybe my strong advocacy for the idea that people should not be taught false things makes me unreasonable, although if that is true, it gives me very little faith in the human species. But I remain unconvinced that this is the case.
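Here is a minimal Python sketch of the four-way classification described above, specialized to the integers over a small search range. This is my own illustration, not anything from the comment: the function names, the range bound, and the brute-force search are all arbitrary choices for the demo.

```python
# Illustrative only: classify small integers into the four families --
# zero divisors, units, irreducibles (primes up to sign), composites.

N = 12  # look at integers x with |x| <= N

def divides(a, b):
    """True if a divides b in the integers (a*k == b for some integer k)."""
    return a != 0 and b % a == 0

def is_unit(x):
    return x in (1, -1)            # x*y == 1 has an integer solution

def is_zero_divisor(x):
    return x == 0                  # 0*y == 0 for nonzero y; no other integer qualifies

def proper_divisors(x, bound=200):
    """Divisors d of x with the extra condition that x does not divide d."""
    return [d for d in range(-bound, bound + 1)
            if divides(d, x) and not divides(x, d)]

def is_irreducible(x):
    # not a zero divisor, not a unit, and its proper divisors are exactly the units
    return (not is_zero_divisor(x) and not is_unit(x)
            and set(proper_divisors(x)) == {-1, 1})

def classify(x):
    if is_zero_divisor(x):
        return "zero divisor"
    if is_unit(x):
        return "unit"
    if is_irreducible(x):
        return "irreducible (prime up to sign)"
    return "composite"

for x in range(-N, N + 1):
    print(x, classify(x))
# 1 and -1 come out as "unit", never as "irreducible": they have no proper divisors.
```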
    1
  47. 1
  48. 1
  49. 1
  50. 1
  51. 1
  52. 1
  53. 1
  54. 1
  55. 1
  56. 1
  57. 1
  58. One of the things that makes it confusing for people to understand why 1 is not a prime number is the fact that there is more to primality than the intuitive notion of indecomposability. There is another intuitive idea that the definition of primality should capture, which people forget about: the idea of non-invertibility. As we know, the integers do not, in general, contain multiplicative inverses: the only integers with a multiplicative inverse are –1 and 1. This makes –1 and 1 fundamentally different from all the other integers, be they composite, prime, or otherwise. This distinction is actually more fundamental than indecomposability, and it is what separates the integers from the rational numbers. Because –1 and 1 are multiplicatively invertible, you also get a kind of "closure," in the sense that products built only from –1 and 1 can never be equal to any quantities other than –1 and 1. You cannot multiplicatively generate the integers with –1 and 1. The prime numbers are fundamentally different from –1 and 1, because they are not invertible. As such, they could never have the kind of multiplicative closure that –1 and 1 have: if you multiply prime numbers together, then you necessarily produce new integers, which are not prime. It is this property that makes the indecomposability of prime numbers special. Yes, naïvely speaking, –1 and 1 are also indecomposable, intuitively, but this indecomposability is not a mathematically meaningful property, since the only thing you can do with –1 and 1 under multiplication is get back –1 and 1. This is not so with prime numbers. The indecomposability of prime numbers actually has meaning only because they are not multiplicatively invertible. Therefore, it makes no conceptual sense to think that 1 and the prime numbers should be part of the same classification system at all. Saying that 1 is a prime number is like saying that a car should be in the same classification system as animals. Sure, if you want to, you can come up with a name for any arbitrary collection of objects, no matter how ridiculous it is. The collection of all animals and a car can be given its own name, say, the carnimals. You can do this if you really want to. But it is completely nonsensical. Clearly, a car does not belong in the same categorization system as animals do, at least not unless you include many other non-animal things in the classification that share the relevant properties. Well, this is completely analogous to 1 and the prime numbers. 1 is not a prime number, and that is not because "it's more convenient that way"; it is because calling it one is simply mathematically and conceptually unsound.
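As a small illustration of the closure point above (again my own sketch, not the commenter's; the helper name is made up), a few lines of Python confirm that products of elements of {–1, 1} never leave {–1, 1}, while products of two primes are never prime.

```python
# Illustrative check of the closure claims above.

def is_prime(n):
    """Trial-division primality test for positive integers."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

units = {-1, 1}

# Multiplying units only ever produces units:
print({u * v for u in units for v in units})                 # {1, -1}

# Multiplying two primes always leaves the set of primes:
primes = [p for p in range(2, 50) if is_prime(p)]
print(any(is_prime(p * q) for p in primes for q in primes))  # False
```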
    1
  59. 1
  60. 1
  61. 1
  62. 1
  63. 1
  64. 1
  65. 1
  66. 1
  67. 1
  68.  @1ToTheInfinity  ...since 1, 5, and 79, are all examples of positive integers which can only be divided by 1 and itself, it makes perfect sense 1 is among the primes... No, it does not make sense. The problem is that "only divisible by 1 and itself" is not the definition of a prime number. It never really has been the definition of a prime number. The video talked about how, in the past, different definitions of prime number were used, and the one which was used the most was the notion of being "measured" by another number. For example, "the prime numbers are numbers that are measured only by the number 1 and nothing else." This is the definition that you encounter Another Roof mentioning throughout the video, because that is the definition that was used in antiquity and in medieval times. But saying that a number is divisible by x is not the same as saying the number is measured by x. This was explicitly stated in the video too. Why are they not the same thing? Well, because (a) the mathematicians of those times did not consider numbers to be measured by themselves. In other words, 1 measures 7, but 7 does not measure 7. Again, this was explicitly stated in the video. The other reason is (b): it simply is not true that 5 is divisible by only 5 and 1. 5 is also divisible by –5 and –1. But for a long time, negative integers were never considered in number theory. This is an oversight that needs repair. Let me go back to part (a). As mentioned, 1 measures 7, but 7 does not measure 7. That is how mathematicians used to think of it, but why? What is the difference between saying that 7 measures 7, and that 7 divides 7? The difference is that 1 is a proper divisor of 7, but 7, although it is a divisor of 7, is not a proper divisor of 7. The distinction between a divisor of x and a proper divisor of x is exactly the same kind of distinction as the one between subset and proper subset. It is also completely analogous to the distinction between "less than or equal to" and "less than." y is called a divisor of x if and only if y divides x. But a proper divisor is more special: y is called a proper divisor of x if and only if y divides x AND x does not divide y. Now we get it: 1 divides 7, but 7 does not divide 1, so 1 is a proper divisor of 7. The whole point of proper divisors is that we only care about the divisors of x that are "simpler" than x: we do not at all care about the fact that x divides itself, because that is completely trivial and useless. All numbers divide themselves anyway, so why would we care about x being a divisor of x? This is why the concept of proper divisors exists. And the concept of proper divisors is the concept that the mathematicians of old were alluding to when they said "x measures y." The old language of "x measures y" translates into the modern language as "x is a proper positive divisor of y." Positive, because again, negative numbers were not considered seriously by European mathematicians prior to roughly the 1500s. So, now that you know the distinction between divisors and proper divisors, and now that you have watched the video and you understand that prime numbers were always ultimately defined in terms of proper divisors, albeit in a different language, it should be clear why 1 is not a prime number. You see, the definition of a prime number has always been "a positive number which is only measured (positively) by the number 1."
Translating this into the modern language, this means "p is prime if its only positive proper divisor is the integer 1." THIS is the definition of a prime number (a short computational sketch of it follows this comment). This is what it has been, in concept, since basically forever, even if the language used was different. But look: the number 1 has no positive proper divisors, since its only positive divisor is 1 itself. So it does not actually satisfy the definition of a prime number. The problem here is that most teachers, and most textbook authors, believe that the actual definition is too complicated for grade schoolers (i.e., children) to learn. So they simplify it down, they get rid of all the "technical details," and they just tell the children that "a prime number is divisible only by itself and by 1." But this is a mistake, because it is completely misleading: by getting rid of the technical details, you have changed the definition itself altogether. You cannot get rid of the technical details, because they are the most important part of the definition, not the least important part. A prime number p always has exactly 4 divisors: –p, –1, 1, p. Its proper divisors are –1 and 1. But –1 and 1 have no proper divisors at all! They are fundamentally different from the prime numbers, and do not satisfy the definition of a prime number, so they belong to an entirely different classification. –1 and 1 are called "units." One of the defining properties of units is that they divide all numbers. Also, the product of two units is always a unit. Notice how this can never be true of prime numbers: the product of two prime numbers is necessarily a composite number, by definition. Also, units cannot be divided by any prime numbers at all; they can only be divided by units. 2 cannot divide 1 or –1. –7 cannot divide 1 or –1. Units are fundamentally different from prime numbers. Saying 1 is a prime number is like saying "a car is an animal." Like, no, that is just way off. Also, 0 is not a prime number either. It is also not a unit, because there is no y with 0•y = 1. 0 is what is called a zero divisor. In the integers, 0 is the only zero divisor, but this is not true for all systems of arithmetic. For example, when you work with matrices, there are some matrices which are not equal to the zero matrix, but are zero divisors anyway. A zero divisor is a quantity x such that x•y = 0 for some nonzero y. By the way, this distinction between primes, units, and zero divisors is not exclusive to the integers. It holds universally. It holds for matrices, polynomials, associative vector algebras, the rational numbers, the complex numbers, the Gaussian integers, the dual integers, the split-complex integers, etc. It holds for all commutative systems with associative multiplication and addition. A unit is some quantity x such that x•y = 1 for some y. The integers –1 and 1 are units, and they are the only units in the integers. In the Gaussian integers, the units are 1, –1, i, –i. In the rational numbers, everything is a unit, except 0, which is a (trivial) zero divisor. The analogue of a prime number in more general settings is called an irreducible element. An irreducible element is an element which (a) is not a zero divisor, and (b) whose proper divisors are exactly the units. Remember: units have no proper divisors, so they are not irreducible elements. A composite element is just a product of two or more irreducible elements, just like in the integers. Note: how you factorize composites into irreducibles need not be unique in general.
But in the integers, it is unique. Now that you know all this, you are probably going to object that this is all just "the multiplicative perspective," and that when you look at it from "the additive perspective," it is very different. But that is not the case at all. Why? Because the definition of a prime number has absolutely nothing to do with how you write numbers as repeated sums of integers. In fact, there is nothing interesting to point out: every integer can be written as a sum of two integers in infinitely many ways, and every nonzero integer can be written as a sum of copies of 1 alone, or of copies of –1 alone. So, ultimately, the additive structure does not matter at all. My point is, there is no such thing as "the additive perspective." Insisting that there is one is born of a severe misunderstanding of how number theory actually works.
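To make the "proper divisor" definition above concrete, here is a small Python sketch (my own names, not from the comment) that computes positive proper divisors and tests primality as "the only positive proper divisor is 1," which immediately rules out 1 itself.

```python
# Illustration of the definitions above: y is a proper divisor of x when
# y divides x but x does not divide y; p > 0 is prime when its only
# positive proper divisor is 1.

def divides(a, b):
    return a != 0 and b % a == 0

def positive_proper_divisors(x):
    return [d for d in range(1, abs(x) + 1)
            if divides(d, x) and not divides(x, d)]

def is_prime(p):
    return p > 0 and positive_proper_divisors(p) == [1]

print(positive_proper_divisors(7))    # [1]              -> 7 is prime
print(positive_proper_divisors(12))   # [1, 2, 3, 4, 6]  -> composite
print(positive_proper_divisors(1))    # []               -> 1 is not prime
print([n for n in range(1, 30) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```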
    1
  69. 1