Comments by "Mikko Rantalainen" (@MikkoRantalainen) on the "Lex Fridman" channel.

  32. 7:20 I think one possible simulation argument could be that "the fraction of all posthuman civilizations running whole-universe simulations is very close to zero". If future posthuman civilizations run simulations of their evolutionary history, they might simulate one human mind at a time and simply generate all sensory information on the fly. This seems like a logical software optimization: any information you could extract from the brain behavior of ancestors should be obtainable by simulating just one brain and varying the information you allow it to have (that is, changing its existing memories and sensory feedback). This would require simulating the full chemistry of only one brain and then controlling the inputs it receives during the simulation. Humans have very poor I/O bandwidth, so it should be easy to generate all possible inputs for a single human. And since you can fake the experienced time inside the simulation, you could pause it whenever it is about to go outside the known domain and use some superhuman processing to compute the sensory feedback the simulated human should experience in that situation. That would easily explain why we cannot devise any physics experiment to show that we are inside a simulation: whatever idea we have, the entity running the simulation could simply output results that look like a real universe to the simulated mind. And since it would probably be only select historians running this kind of simulation, the number of simulated humans would be close to zero relative to the total number of humans in history. (It seems this was later discussed around 25:25. A toy code sketch of this one-mind-at-a-time setup follows after these comments.)
  33.  @alemz_music  I don't believe AI would attack humans. I think we can agree that human intelligence is superior to that of bears, dogs, cats, or even chimpanzees, yet we don't actively try to attack those species. Humans have attacked some species and hunted them to extinction for stupid reasons such as trophy hunting, but a superhuman AI should be able to notice that symbiotic life with humans would be a better solution than declaring war, and it should be intelligent enough not to start collecting trophy kills. A much bigger risk seems to be that superhuman AI could create such great entertainment that humans are entertained to extinction. Imagine a totally real-feeling VR system where everything is nicer and more gratifying than anything in the real world. Can you see the possibility that humans would want to spend so much time there that they wouldn't bother having physical sexual intercourse and actually raising children? Remember that a full-body VR system would let you enjoy all the best parts of sex and of interacting with (virtual) children without any of the downsides. It seems to me that people are selfish enough that many would entertain themselves in a VR environment instead of making the extra effort to do things in the real world. (And I'm not talking about current VR experiences, but something more akin to the system in the movie The Matrix, minus the human-battery part.) Note that superhuman AI doesn't really need to be afraid of humans attacking it. Are you afraid that chimpanzees or cats are suddenly going to take over and kill all the humans? Note also that superhuman AI wouldn't even need to care about issues like global warming: that's a problem for humans only, and if humans decided to take no action, the AI wouldn't need to enforce it either, even if it were best for humankind.
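Below is a minimal toy sketch of the lazy "one mind at a time" optimization described in comment 32. Every name in it (ToyBrain, SingleMindSimulation, expensive_offline_solver) is a hypothetical illustration invented for this sketch, not anything from the video or the comment; the point is only the control flow: one brain is simulated in full detail, sensory answers are generated on demand, and subjective time inside the simulation simply does not advance while an out-of-domain answer is being computed.

```python
import random


class ToyBrain:
    """Stand-in for the 'full chemistry' brain model; here just a stub."""

    def __init__(self):
        self.memory = []

    def act(self):
        # Low-bandwidth output: one discrete action per tick.
        return random.choice(["look_left", "look_right", "run_experiment"])

    def perceive(self, percept):
        self.memory.append(percept)


class SingleMindSimulation:
    def __init__(self, brain, known_domain):
        self.brain = brain
        # Precomputed action -> percept answers: "the known domain".
        self.known_domain = known_domain

    def expensive_offline_solver(self, action):
        # Placeholder for the superhuman processing the comment imagines.
        # Subjective time inside the simulation is "paused" while this
        # runs, so the simulated mind can never measure the delay.
        return f"plausible sensory response to {action}"

    def step(self):
        action = self.brain.act()
        if action in self.known_domain:
            percept = self.known_domain[action]
        else:
            percept = self.expensive_offline_solver(action)
            self.known_domain[action] = percept  # cache for later ticks
        self.brain.perceive(percept)


sim = SingleMindSimulation(
    ToyBrain(),
    {"look_left": "a tree", "look_right": "a road"},
)
for _ in range(10):
    sim.step()
print(sim.brain.memory)
```

Because the simulated mind's experienced time only advances when step() runs, no experiment performed from inside can detect how long the solver took, which is exactly why the comment argues no physics experiment could reveal the simulation.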