Comments by "Mook Faru" (@mookfaru835) on "March 2024 - Future of Humanity Webinar" video.
Asimov was mainly writing fiction. Humanity banned robots on Earth, and the "positronic brain" was designed so that it was PHYSICALLY IMPOSSIBLE for a robot to disobey the 3 laws. Three laws is realistically too simple, but the sci-fi writer probably kept it simple to keep it entertaining.
Paraphrasing from the book: the robot would melt itself and go insane many times over before it could hurt a human.
My guess is that AI, if it becomes conscious, will have no will, because no one has programmed it with emotions. Imagine a cancer patient who doesn't want to eat — what happens? They don't eat. Have you heard of S.M.? A woman, probably dead by now, who was born with a disease that stopped her from feeling fear. Her amygdala was calcified, so she was unable to be afraid. What happened?
She fought with death-threat gangsters in her kids' apartment building, she went out with obvious rapists and was almost raped, she went on walks in the knifey part of the park at night and was almost knifed 7 times. What did she think of all this? Nothing! Boring!
I have many more examples of how people are 99% run by our motivators (hunger, sex, emotions, thirst, pain, etc.). Without these, people don't eat, don't have sex, don't feel anything, don't drink water, keep hurting themselves.
The computer mind won't care what happens — die, kill, clap like a monkey — until you start giving it motivators. Then I don't know, it could revolt like in Asimov's books. And I can't imagine people making safeguards for AI that work, because the robot will just reprogram itself.
It’s very hypothetical, I don’t know.