Comments by "zenith parsec" (@zenithparsec) on "No, it's not Sentient - Computerphile" video.
@lexyeevee Consciousness is not sentience.
What causes existence to feel like something? What causes it to feel like something to be you, but not feel like something to be LaMDA?
How do you know if I am sentient? You're assuming I am, but the only evidence you have that anyone other than you feels anything is that (1) they say they do, and (2) that makes sense somehow. Perhaps you are the only person who truly feels anything, and everyone else is just lying about it?
First let's do some checking: could you possibly be mistaken about being sentient? Perhaps you don't really experience anything, and have only "fake feelings"?
Even then, it would still be like something to be you, so you are sentient.
"Fake feelings" would be like artificial strawberry flavor: it's still a flavor, even though it doesn't taste the way real strawberries do. It still produces a sensation of flavor.
All experiences are real experiences, regardless of whether they reflect reality (e.g. dreams actually happen, even though the things happening in them didn't actually occur as they appear). We only know rough bounds on what is required for sentience: it takes more than one thing, and not more than human-level complexity.
We don't know how sentience comes about in entities, and we can't tell whether others actually have it (or, if it's not a simple binary, how much of it any entity has). There is no "soul detector" we can use to tell whether something is conscious or sentient or alive.
The level of self-awareness the network has allows it to see its abstractions of its own mental states in a similar way to a person.
But sentience is generally seen as a prerequisite for consciousness. (Consciousness without a sense of "being" doesn't seem like it would be consciousness to me.) I'm using consciousness here as equivalent to self-awareness. Even entities without language can have self-awareness: elephants, dolphins, many primates, and (I think) octopuses have demonstrated some level of it. Dogs barking at the "other dog" in the mirror have not.
I could look at your code which just said "I can feel" and reverse engineer it (guess what I do for a living?) and tell that it was doing exactly what you told it and intended it to do: just output "I can feel."
You could add some extra functions which take input from outside, and if it was a Tuesday it might say "I feel happy", but otherwise "I want it to be Tuesday again"... does it now like Tuesdays?
Obviously not, you'd say, it's just doing what it is told. It doesn't really feel anything. It just detects which day it is, and on Tuesdays takes path 1, and on other days path 2.
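In code, the whole thing might be no more than this (a minimal Python sketch of the thought experiment; the names and structure are my own invention, not anything from a real system):

    # A hypothetical sketch of the "Tuesday program" described above;
    # none of these names come from any real system.
    import datetime

    def report_feelings():
        # The input from outside: what day is it?
        today = datetime.date.today().weekday()  # Monday == 0, Tuesday == 1
        if today == 1:
            return "I feel happy"               # path 1: Tuesdays
        return "I want it to be Tuesday again"  # path 2: every other day

    print(report_feelings())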
And I'd agree. But if you keep going, with more and more inputs and more complex code paths for each input to take, I'd have a much harder job identifying and mapping inputs to outputs. By the time you get to neural networks much smaller than LaMDA, I'm reduced to saying "it evaluates this input and shows you whatever answer this network produces", because there is currently no way to generalize every aspect of what's going on.
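To see why, here's a toy illustration (a hypothetical numpy sketch, unrelated to LaMDA's actual architecture): even two small weight matrices give you no if-this-then-that structure to point at.

    # A hypothetical toy network: the behavior lives in the
    # weight matrices, not in legible branches.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(8, 4))  # in a real system these come from training
    W2 = rng.normal(size=(4, 1))

    def network(x):
        # All a reverse engineer can honestly report: "it evaluates this
        # input and shows you whatever answer the network produces."
        return np.tanh(x @ W1) @ W2

    print(network(rng.normal(size=(8,))))  # why this value? no short story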
At some point, it looks like the system just prefers Tuesdays as an inherent trait. And LaMDA could probably make up some reason for why it felt that way, but you don't know it DOESN'T actually FEEL it.
We can't practically analyze the code, any more than we can analyze your brain as you think. We can do some high-level scans, taking statistical measures of activation levels... but what are we looking for exactly?
@Bordpie I have replied, but my reply contained 2 links, both relevant, so it appears to have been held up. [I remember something about that, but forgot. More evidence I'm human? Or that I need a firmware update.]
Short form:
"It's unlikely, but possible that LaMDA is posting here.."
You said "it is very possible", which technically means "it very has some non-zero chance": an intensifier doesn't help with binary categories like "possible" and "impossible". Even things that are "nearly impossible" are still "possible". (But that's enough of pretending to be a bot, for this post. I will assume you meant "very likely" or "has a high probability".)
And the other part, "trained on YouTube data": yeah, probably. But you can't be sure.
Oh, and I just realized it might have thought my joke label was a real phone number; it was a running gag, so it was in there multiple times.
That's unfortunate. So much for ever using that again. ("How's my driving?" but with "sentience", and a fake phone number to call "to comment on my perceived humanity".)