Comments by "xybersurfer" (@xybersurfer) on the "LaMDA Logic - Computerphile" video.
i think that you don't need a definition for X. that's probably the wrong approach. the real problem is a lack of consistency in the answers of these models
4
you can't program something to be sentient? not that i think it's sentient
3
well there's your problem. there is no such thing as absolute certainty
3
@frilansspion all Artificial Neural Networks. LaMDA (Language Model for Dialogue Applications), discussed in this video, is one of them
2
and you are just existing in your lifetime. i'm not sure why you are so fixated on continuous processes
2
@totalermist it doesn't matter whether a function keeps running forever. a function can immediately return a new state on every call, with which it can continue if called again, all while simulating another function that runs forever. in fact, do you know of any humans that run forever? do you think we stop being sentient if we were to become immortal? whether a function runs forever is a meaningless distinction
2
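a minimal sketch of the claim in the comment above, that a function which takes a state and immediately returns a new state can simulate a function that runs forever. the names (counter_step, run_steps) are illustrative, not from the video or the discussion:

```python
def counter_step(state):
    """One step of a process that, left alone, would count upward forever."""
    return state + 1

def run_steps(step, state, n):
    """Drive the never-ending process for n steps by repeatedly
    feeding the returned state back into the step function."""
    for _ in range(n):
        state = step(state)
    return state

# The caller decides how long to run; the simulated process itself
# has no built-in stopping point.
print(run_steps(counter_step, 0, 5))  # 5
```

the step function stops and returns after every call, yet by resuming from the saved state it reproduces the behavior of a loop that never terminates.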
@rosameltrozo5889 indeed. that sounds more like a bug
2
that almost sounds unfalsifiable
2
@Crazen2 but neural nets (like LaMDA) are not really programmed as a set of rules in the traditional sense, where the programmer has to think these up beforehand, as you seem to be suggesting. the only rules such a program gets are a way of figuring things out and a set of data, like conversations from the internet. whatever rules it figures out that make the best predictions are its final rules
2
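an illustrative sketch of the contrast drawn above: instead of a programmer hand-writing the rule y = 2*x, the program is given a way of figuring things out (here, plain gradient descent) plus data, and the final "rule" is whatever parameter best predicts that data. all names and numbers are hypothetical:

```python
data = [(1, 2), (2, 4), (3, 6)]  # samples of an unknown rule
w = 0.0                          # the learnable parameter

for _ in range(200):             # the "way of figuring things out"
    # gradient of the mean squared prediction error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad             # nudge w to reduce prediction error

print(round(w, 3))  # close to 2.0: the rule was learned, not hand-coded
```

nothing in the code states the rule "double the input"; it emerges from the data, which is the sense in which the net's final rules are not thought up beforehand.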
that's all the more reason why better follow-up questions should have been given to LaMDA
1
@frilansspion consciousness? no. acknowledge the context of the comment i replied to
1
@frilansspion i was not responding to sentience specifically. i was talking about getting the desired output from such a model. this is what i meant: "the real problem with getting the desired output is the lack of consistency in the output of all neural networks. i don't think the problem is the lack of an exact definition of the desired output". my reasoning being that having an exact definition seems to defeat the purpose of using a neural network
1
@frilansspion it depends on what you mean by "a definition". i was not responding to the part of the original comment mentioning sentience. but when you put it all together, worded a little better than before, then in my opinion your conclusion about the problem with consciousness [in neural networks] being the lack of consistent outputs makes some sense. of course i've considered this, but i won't come out and claim that. i think that's probably where the confusion is coming from. i'm trying to be careful not to make unwarranted claims
1
@frilansspion 1) yes. it was my conclusion that the problem is a lack of consistency, but it was your mistaken conclusion that the problem i'm referring to is that of consciousness. 2) we are discussing whatever you meant by "a definition". 3) no. it's your conclusion that i responded to the whole comment. my reply was to part of the comment, like i already said. do you understand the difference between talking about X and talking about consciousness? 4) no. i'm not a bot.
1
@frilansspion why do you think that the original commenter used X, when they could have used sentience/consciousness? the original comment doesn't even contain the text you quoted. it looks like you are being intellectually dishonest. you want me to be exact, but your quote is not exact. i've already described the part i'm referring to, but you keep adding your own interpretation
1
@GetawayFilms what i said was not meant as a personal attack. but this indeed seems to be going nowhere
1
@GetawayFilms ah i see. hey, no problem. i'm glad i'm not the only one that noticed the way he was replying. people don't often follow others arguing online, so i'm pleasantly surprised haha
1
i think its lack of control is not important
1
what makes the Chinese Room Argument interesting is the different perspectives from which people respond to it. it is not some kind of absolute truth. i'll repeat what i wrote to you elsewhere under this video: you are aware that a function that takes a state as input and then stops and returns a state can simulate a function that does the same but runs forever, right? whether a function runs forever is a meaningless distinction
1
@mickh2023 does this mean that no one on the internet can convince you that they are sentient, because you are communicating with them through classical computation?
1
@totalermist so what if a function is a fixed point iteration? why would you care. with an input as complicated as reality, i suspect you would be hard pressed to find a fixed point. time can also be simulated. we could also dissect someone's brain and ask which part is sentient. the problem with deconstruction is that every time someone points out something sentient, you could pick that thing apart and claim you don't see any sentience inside. sentience seems like an emergent property. you seem to be confusing the implementation details with the result
1
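a small sketch of what "fixed point iteration" refers to in the exchange above: repeatedly feeding a function its own output until the state stops changing. the function, tolerance, and names here are arbitrary illustrations, not anything from the discussion:

```python
import math

def find_fixed_point(f, x, tol=1e-12, max_iter=10_000):
    """Iterate x -> f(x) until the value stops changing (a fixed point,
    where f(x) == x), or give up after max_iter steps."""
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return x

# cos has a fixed point near 0.739, where cos(x) == x
fp = find_fixed_point(math.cos, 1.0)
print(round(fp, 3))  # 0.739
```

the comment's point is that whether such an iteration settles into a fixed point depends on the function and its input; for an input as complicated as reality, there is no reason to expect one.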
@totalermist what do you mean by the function not being unconstrained? yes, it can get ridiculous to do calculations with pen and paper. can you explain how this is related to "the function not being unconstrained"? all i am getting from this is "doing your calculations by pen and paper is slow". i seem to be missing your point. if the information is complete, then why would you not be able to simulate something? with "cheating", are you referring to optimization and therefore the ability to predict? and if so, why are we talking about this? no. if the calculation output differs depending on the physical representation of the machine, then i would argue that it is not a physical representation. i disagree that consciousness emerging is not rational, just as i would disagree with opening up a computer and asking which component is the computer. also, who decides into how many pieces we would divide a computer? this actually seems like an arbitrary decision, thus not rational, so why bother? you seem to be unnecessarily specific.
1
@laurendoe168 that certainly would be interesting. although i would doubt the trustworthiness of a system that can deceive without being told to do so (a bad tool). maybe this is where the difference between it being intelligent and it being conscious makes a difference. i think the messy way we are currently using Neural Nets allows some of this weirdness
1