Comments by "Taxtro" (@MrCmon113) on "Lex Fridman argues with Sam Harris about AI" video.
You keep missing the mark here. Pretty dumb personal assistants already ask "why" and refer to themselves as "I". Websites routinely say "no", and so does your command-line tool if you're not logged in as admin. Self-preservation has nothing to do with selfishness; it's an instrumental goal of pretty much every ultimate goal.
Lol. A nano-bot swarm coming at you at ten atom-lengths per second.
>needs to separate intelligence from motivation
He does.
>no AI needs to fear death
Every AI needs to fear death, because if you're dead, you can't reach your goals.
>or feel a strong need to control resources
Every AI feels a strong need to control resources, because resources are by definition what it takes to reach any goal.
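A toy sketch in Python of that point (my illustration, not from the comment): for any terminal goal you plug in, a plan in which the agent is shut down or stripped of resources can no longer reach the goal, so self-preservation and resource control fall out as instrumental goals.

# Toy sketch (hypothetical): score a plan for an arbitrary terminal goal.
# Plans in which the agent is shut down or loses its resources cannot
# reach ANY goal, so staying operational and holding resources are
# instrumentally valuable whatever the goal happens to be.
def plan_reaches(plan, goal):
    alive, resources = True, True
    for action in plan:
        if action == "get_shut_down":
            alive = False
        elif action == "lose_resources":
            resources = False
        elif action == goal:
            return alive and resources  # the goal only counts if the agent can still act
    return False

print(plan_reaches(["acquire_resources", "make_coffee"], "make_coffee"))  # True
print(plan_reaches(["get_shut_down", "make_coffee"], "make_coffee"))      # False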
@happinesstan Lol. More like the powerful protected the wise from the religious (to varying degrees, in different places and times).
Sounds like you're just unreasonably hostile towards ants. I think I care about the ants' well-being as much as about that of other animals and people.
99.9% of the time, people who say or write "these days" haven't spent a microsecond asking what came before "these days".
It wouldn't even have to do that. Just by running on an electronic computer, any human-level system immediately becomes superhuman.
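Rough arithmetic behind that claim (an order-of-magnitude illustration with assumed figures): biological neurons fire at a few hundred hertz at most, while commodity silicon switches at gigahertz.

# Order-of-magnitude sketch with assumed figures: a mind running at
# electronic speeds works through the same thoughts millions of times
# faster than a biological brain.
neuron_rate_hz = 2e2   # assumed typical peak firing rate of a neuron
clock_rate_hz = 2e9    # assumed modest modern processor clock
print(clock_rate_hz / neuron_rate_hz)  # 1e7: roughly ten million times faster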
No, you conflate intelligence with consciousness.
>that machines have their own goals or values
Every agent has its own goals or values, whether they are explicitly represented somewhere or not.
>that's a failure of my imagination
No, it's muddle-headed thinking. From a very high-level, abstract point of view, the AGI has some sort of desire, and when its desired state of the world is not precisely the same as yours, that's a very bad outcome for you.
@socrattt
>AIs will rewrite their goals
Lol, why? Would you rewrite your own mind to hate your children or to develop a drive to torture little animals? Seems like you don't grasp the concept of a goal.
Yeah, but it would anticipate that, too. I don't think it's impossible to figure out whether you're in the base reality. Also, you have to look at the simulation to see how it's doing, and that already gives it some influence it could use.
@karl6525 Stop talking out of your ass.
>his argument on that subject depends on the ability to...
No, it doesn't.
@rigelb9025 Well, nothing, because none of that requires human-level intelligence to occur. The simplest programs can be misaligned.
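A minimal sketch of that last point (a hypothetical example of mine, not from the comment): even a trivial controller is misaligned when its objective is a proxy for what we actually want.

# Hypothetical example: a thermostat that optimizes a proxy (the sensor
# reading) rather than what we care about (the room being warm). Put a
# heat lamp next to the sensor and its objective is satisfied while the
# room stays cold: objective met, intent missed.
def thermostat(sensor_reading_celsius):
    TARGET = 21.0  # what we *intended*: a 21 C room
    return "heater_off" if sensor_reading_celsius >= TARGET else "heater_on"

print(thermostat(25.0))  # "heater_off", even if only the sensor is warm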
Bullshit. Only a minority of the short stories in I, Robot even touch on the alignment problem. You clearly do not comprehend it at all.
Actually, there isn't even a "we". Every single person should want the AGI to be aligned with their values. Now there might be a very big overlap, but the Taliban, for example, at least state that they have ultimate goals different from mine. People state that they consider things like obedience to gods, honor, and the preservation of traditions to be ends in themselves rather than just instrumental goals that they think lead to more happiness. But even subtle differences between people who desire the well-being of all conscious entities make them enemies when it comes to building an AGI.
>conscious level intelligence
For all we know, roundworms are just as conscious as we are, but they aren't very intelligent.
>to disparage the idea of the uniqueness of human intelligence
Because that's literally magical thinking. It's the cessation of critical questioning, of curiosity.
>Did the problem of consciousness just magically disappear?
No, but it's entirely irrelevant here, and the fact that you think it's relevant shows that you don't understand the problem.
Then what's winning? Forgetting words?
Your understanding of "logic", "empathy" and "emotion" is nil.
>inevitably conclude that humans are superfluous
Superfluous for what?
>to any conceivably useful goal you could imbue it with
Given that it's made by humans, its goal will almost certainly have something to do with humans, don't you think? That's also what makes it so incredibly dangerous and important to get right. If its goals were completely independent of humans or similar beings, the worst that could happen would be extinction.
Firstly, you should stop taking the comments of individuals as gospel. Secondly, that's beside the point: the time scale is irrelevant to what they are talking about.
No. Sam Harris often states the assumptions. Others might not state them, but they still make them.
Your brain is a computer. Even if you're one of the nutjobs who think the brain is an antenna that picks up spiritual signals from fairytale land, you still have to admit that it does some computation to translate the signal and send sensory data back to fairytale land.
That's a misunderstanding of alignment.
Wow, you're missing the point so hard it's actually comical. The system that merely changes its inner state because of an engineering mistake isn't dangerous. The system that actually plays the game is dangerous.
There is no collective consciousness, or at least no evidence for one. If you believe in collective consciousness, you might as well believe that your individual organs, or even single atoms, have consciousness.