Comments by "" (@TheHuxleyAgnostic) on "Dave Rubin's Racist Smears Backfire BIG TIME" video.

@gnubbiersh647  Harris argued that it was already a given that, if you set a subjective goal, science can help you achieve it. Meaning: if you subjectively define "well-being" and subjectively set that as your goal, then science can help you reach it. Or, if you set reaching the Moon as your subjective goal, science can help you get there, and everything you then do either objectively brings you closer to that goal or it doesn't. What people didn't believe was that science could set the goal itself, so that the goal was objective. By saying he wanted to prove that wrong, Sam had to show that the goal, "well-being", is itself objective, to give us something beyond what he had already acknowledged was a given.

He argued that all concepts of morality are about "well-being". If that is actually the case, it means they all rest on different concepts of "well-being" (a Christian's, for example, is measured not by how healthy and happy you are, but by how closely you follow God's will, so that your soul can have ultimate happiness in the afterlife). Sam set aside their beliefs in an afterlife, and every other rival concept, and began arguing as if there were only one concept of "well-being", his own, measured only by his standards. At that point he had basically defeated himself; he was back at exactly what he said was a given at the outset.

He also tried to support his concept by arguing that morality is only about sentient creatures, and that we don't value the "well-being" of rocks. The problem there is that we do. If we deem a structure or sculpture made of rock to have historical value, we consider it immoral to damage or destroy it. The same goes for any rocks deemed to have artistic value. We also deem some environments, including the rocks within them, to have value, and give them protection. We also deem many shiny rocks valuable, and some people will kill each other over them, treating that value as worth more than another human being's life. He also made out as though the same applies to lesser life forms, yet some people's concept of morality includes all life forms. He never provided a real "objective" demarcation line. Are cows, pigs, and chickens below the line? Most people's concepts of morality are fine with slaughtering them. Plenty of people think the world would be better off without humans. Plenty value certain animals over other human beings. Hedonists value simply fulfilling your desires. These concepts of morals and values are not all the same. Even setting aside that many concepts of "well-being" include an afterlife, the various moral frameworks aren't all using the exact same concept of "well-being", the one Sam puts forward, for this life.

He then goes completely off the rails and argues that an objective fact can change, giving the distance from the Earth to the Sun as an example. The problem is that he leaves out that any measurement of that distance is made at an exact moment in time. It will forever be true that, at that exact moment, we were exactly that distance from the Sun. If something is truly an objective fact, it will always be true.

To support his "moral landscape" argument, he offers chess as an "analogy", claiming it is a game of "perfect objectivity" in which a move is objectively better or worse. No. Someone subjectively decided to create a game, subjectively decided on the board, the pieces, how those pieces move, and how to win (back to what he said was already a given at the start). People have also subjectively come up with alternate rules, and with alternate boards (3D Star Trek chess). Chess, like any game with rules, is more akin to law than to morality: once you've subjectively decided something is a rule or a law, you are either objectively following it or you aren't. Sometimes we decide that laws are themselves immoral, and we change them. And even within the game, with the rules in place, nothing says it's wrong to let your kid beat you once in a while. If that is your subjective goal, then what counts as an "objectively" better or worse move flips completely, and there is nothing wrong with that, even though it would look like an "objectively" horrible move by Sam's singular (and subjective) way of measuring things.
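To make that goal-flip concrete, here is a minimal, purely illustrative sketch: the moves, the scores, and the best_move helper are all hypothetical, not taken from Harris or from the comment. It only shows that ranking moves is mechanical once a goal has been chosen, while the goal itself has to be supplied from outside, and swapping it swaps which move comes out "best".

```python
# Hypothetical evaluation scores for candidate moves in some position,
# measured purely as "how much this move improves my chances of winning".
candidate_moves = {
    "Qxf7#": 1.0,   # checkmate: maximally "good" if the goal is to win
    "Nc3":   0.3,   # solid developing move
    "Qh5??": -0.8,  # hangs the queen: "bad" if the goal is to win
}

def best_move(moves, goal):
    """Pick the move that best serves the chosen goal.

    goal="win"  -> maximize winning chances
    goal="lose" -> e.g. letting your kid win: minimize winning chances
    """
    if goal == "win":
        return max(moves, key=moves.get)
    if goal == "lose":
        return min(moves, key=moves.get)
    raise ValueError("the goal itself is not supplied by the evaluation")

print(best_move(candidate_moves, "win"))   # Qxf7#
print(best_move(candidate_moves, "lose"))  # Qh5??
```

The evaluation is "objective" only relative to whichever goal was plugged in; nothing in the scoring tells you which goal to choose.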
He also tries using healthy vs. unhealthy as an analogy for moral vs. immoral. The problem there is that healthy and unhealthy carry no oughts. Eating a Big Mac might be unhealthy, but nothing really says it's wrong to do something unhealthy (unless, maybe, it forces something unhealthy on others against their will). Technically, skydiving increases your odds of dying or being injured. So? People do it for kicks. Being healthy or unhealthy matters only relative to your own subjectivity: if you subjectively want to be healthy, only then ought you not do X; if you subjectively don't care about being healthy, then do X if you want. That is nothing like morality. We don't say go ahead and randomly kill someone if you want; if morality worked like health, we could still call that "immoral", but it would no longer mean behavior you ought not do.

Sam paraded around like a peacock, making out like he was a genius, like Hume wasn't all that, and like he was the one who had finally filled, or dodged, the is/ought gap. Nope. He is a complete and utter moron who never got beyond what he himself said was a given at the outset, and was just too stupid to see it. He had filled the gap with subjectivity, just like everyone before him.

He also showed his limited grasp of objectivity vs. subjectivity when fear-mongering about AI. He didn't take the angle that people could badly misuse it (which is possible). No, he took the Terminator angle, that it would rise up against us. Except it doesn't matter how "intelligent" a computer is; there is zero indication that one can set its own subjective goals, because it cares about nothing. Any goals it has have been programmed in, so an AI would be following at least one human's goals. A super-intelligent computer, knowing a lot of facts (objectivity), would still make none of its own decisions (subjectivity), because it just doesn't care about outcomes. Humans have to program in all the decision making. They might give it a little room to learn the most efficient route between point A and point B, for example, but once it learns the most efficient route, that's the one it will stick with. It will never, in a billion years, decide to take the scenic route all on its own, because it just doesn't care; it feels nothing, good or bad, when "seeing" things. Sam has been taking sci-fi way too seriously.
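The route example works the same way. Below is a minimal sketch with an assumed toy road network and a standard shortest-path (Dijkstra) search; the node names and travel times are made up for illustration and are not from the comment. The "goal" is just the cost function the programmer wrote, and the cheapest path under that cost function is what comes out on every run.

```python
import heapq

# Hypothetical road network: edge weights are travel minutes.
graph = {
    "A": {"B": 5, "C": 2},
    "B": {"D": 4},
    "C": {"B": 1, "D": 10},  # C -> D is the long, "scenic" leg
    "D": {},
}

def cheapest_route(graph, start, end):
    """Plain Dijkstra: return (cost, path) minimizing the given edge weights."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == end:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None

print(cheapest_route(graph, "A", "D"))  # (7, ['A', 'C', 'B', 'D']) every time
```

Unless a human edits the weights to encode a preference for scenery, the "scenic" A -> C -> D route can never win; the optimizer has no opinion of its own to change.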