Comments by @diadetediotedio6918 on a video.
-
@blallocompany
No? Most of the time people are doing this; it is not an example of "limited situations". And again, to stop reinforcing this BS: we are not "emitting" words, we are [choosing] words. Stop biasing your own vocabulary. We choose words deliberately [all] the time, which is my point. Even when we are making accidental mistakes we are [choosing] our words and not just "emitting" them; it is a deliberate act even when it is a mistake, and that's literally [why] it is called a mistake in the first place. A mistake is only possible if we [intended] to make a correct decision, and a decision is only a decision if it has [intent] and [volition], both things LLMs don't possess. LLMs, in fact, don't even "make mistakes"; we use those words with them because it is easier than saying "they selected the statistically most probable words that, in this specific case, lead to a factually incorrect final cohesive answer as interpreted by the human mind". It is the same as when a program does something that you as the programmer did not intend it to do and you say it is "misbehaving" or "behaving incorrectly": the program is behaving perfectly correctly according to the explicit instructions you gave it; what is wrong is the programmer's encoding of the desired intentions into it (see the sketch after this comment).
And to end it: you are choosing those words to justify your position. You literally have introspective access to the next word you are writing right now, and when you read my comment again, you will think about it and start responding with your own words, intending to answer me with a specific line of reasoning.
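A toy sketch of the "misbehaving program" point above (my illustration, not part of the original comment; the function name and values are made up): the program below executes its instructions perfectly, and the mistake lives entirely in the gap between the programmer's intent and how that intent was encoded.

```python
# Intent (in the programmer's head): return the mean of the scores.
# Encoding (what was actually written): '//' is floor division, which truncates.
def mean(scores):
    return sum(scores) // len(scores)  # bug: the intent called for '/'

# The program is not "misbehaving": it computes floor division exactly
# as instructed. What is wrong is the encoding of the intent into code.
print(mean([1, 2, 2]))  # prints 1; the intended mean is 1.666...
```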
-
@blallocompany
What do you mean, it "goes nowhere"? And I am free to pick the words I come up with; what are you on about?
The fact that there's a set of defined words doesn't mean I need to adhere to them. I can quite literally make up new words on the spot using physical concepts as guides, like "claxchackles", where "clax" implies the action of eating something and "chackles" implies all the fruits that are orange in color. Now, what is your point again?
Plus, it's not a less free decision because you have limited options; it's expected that you could only choose from already defined options (otherwise you would be the one creating the choices that you would then need to pick from anyway). Choosing means selecting X in place of [Y, Z, W, ...], and all of those things need to exist in order for the choice to exist; you always choose from an existing set.
Also, the Greeks didn't have that sophisticated a notion of volition in their time; this is closer to modern philosophy of mind, and even of language, than that. And Buddhists can be wrong all they want: saying I'm a mere observer of the actions I am literally partaking in is no less false than a schizophrenic saying that the red dragon he sees behind me exists.
-
@PunmasterSTP
I don't think this has anything to do with "knowing what you are doing", though. If you take a random selection of neurons from your brain and put them into a petri dish, they would not be a "memory" or "retrieving something"; they would just be cells in a petri dish, dying slowly and trying to survive the new harsh environment. What makes a brain is the collection of all connected cells, and what makes a memory, and the process of retrieving it, is the mutual work of those connected cells. It is a fundamentally non-reductive process, and in this sense the common-sense explanation "I am retrieving something from my memory" is already sufficient (and probably one of the best we can do) for explaining the process. Descending below this level would actually lose information relative to this process instead of increasing it, so it is not only sufficient but adequate.
In other words: knowing about the causal process behind a conscious process does not increase the amount of knowledge about the conscious process, and knowing about the subjective nature of the conscious process doesn't increase the amount of intrinsic knowledge about the causal process. You cannot go from "this is the concept of a bird" to your neurons; if you go there, you will only see "a bunch of neurons firing together and wiring together", and the most you will know is "those neurons fire together and wire together when the subject thinks about the concept of a bird". This says nothing in and of itself about the actual concept of a bird in the subject's mind. In the same way, you need to actually study the causality behind the thought "this is the concept of a bird" to increase your knowledge about the way the brain works; by itself, the subjective description does not get you across that gap. This is what we usually call "the hard problem of consciousness", and it is the reason we don't need to be conscious of the causal processes involved in making consciousness happen in order to say that "we know what we are doing most of the time".
It is also not exactly something just "popping into my mind"; rather, I'm being directed towards some subject and evoking the thing into my mind. I'm saying those things are not purely random or unknown, but are basically volitions of your own.
-
@thesun9210
I don't know what a dragon is like, yet I can tell you precisely that a dog is, in fact, not a dragon. You only need partial knowledge of a thing to be able to point out negatives; in fact, in logic there's a kind of proof you do by literally eliminating possibilities (abductive inference; toy sketch after this comment).
We obviously don't agree on a specific definition of consciousness, but it is completely INSANE to say we don't know what it is. It is the common experience of all humans and something we all know from it being literally at the root of our existence. If I say "consciousness is a little rock in the park I saw the other day", you would immediately say this does not even make sense. If I say consciousness is being sad, you would have an intuitive understanding that sadness is a conscious state you could be in, rather than being consciousness itself. If I said consciousness includes some level of awareness of yourself or of others, you would understand that, based on your experience, this does, in fact, make sense, etc. We can't define it precisely, but we can ostensively define it without much controversy at all.
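A toy sketch of that elimination pattern (my illustration; the property names are made up): partial knowledge of a few properties is enough to rule candidates out without a complete definition of any of them.

```python
# Partial knowledge: a couple of properties we are fairly sure of.
# No complete definition of "dragon" is needed to rule out "dog".
known = {"has_scales": True, "is_canine": False}

candidates = {
    "dog":    {"has_scales": False, "is_canine": True},
    "lizard": {"has_scales": True,  "is_canine": False},
}

# Eliminate any candidate that contradicts something we already know.
remaining = [
    name for name, props in candidates.items()
    if all(props[key] == value for key, value in known.items())
]
print(remaining)  # ['lizard'] -- 'dog' eliminated by partial knowledge alone
```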
-
@thesun9210
It literally makes it not a dog, because if a dragon were a dog we would [call it a dog], not a dragon. I have partial knowledge that dragons need to have long tails and that they need to have scales; I have the knowledge that there are reptiles we call "komodo dragons" because they have similarities with what people in mythology call dragons; and we know it is probably not a canine, so I can rule that out. This is reasonably uncontroversial partial knowledge about the thing, and if you came and said "well, a dragon does not have those characteristics", then I would say your dragon is an entirely different thing, and that either you should choose a better name for it or everyone else should. This is how we arrive at sane conclusions even about things we are not certain about.
And yes, the nature of consciousness IS that common, and it IS extremely relatable; the fact that individuals have differences is not a contradiction of that. We are talking about what consciousness is, not what a self is, nor what a specific instance of a consciousness is. And yes, I can say my consciousness is the same as that of a schizophrenic person: a schizophrenic's consciousness has the same basic characteristics as my own. It has awareness, it has subjective experiences, it manifests in first person, etc. What I cannot say is that the schizophrenic has a perfect understanding of the world, which is a specific feature of consciousness (which is also not the same as saying he does not have one). But we also need to remember that if a dog has its tail cut off, this does not imply the definition of a dog does not include a tail; a dog is a being with a tail, and if a particular dog does not have a tail, it is more a matter of accidents happening to the thing than of the definition being wrong. It's a question of ontology.
Consciousness is a universal thing, experienced universally (even if accidentally differently in some aspects) by all humans; it has objective, commonly shared characteristics we can talk about and understand in each other (in fact, human communication would be impossible if this level of resemblance did not exist). It is as abstract as it is concrete, and we know it exists because WE HAVE IT. And we have no reason whatsoever to think AI has any kind of consciousness.
-
@blallocompany
1. We have past studies without this limitation that arrive at similar conclusions. I remember clearly this being a thing when CoT started to become popular: the models' reasoning was not part of the reason they answered what they answered, and many times they would "reason" correctly and then arrive at a completely different answer later, confabulating a response.
2. There's nothing specific in this paper mentioning a limitation of one single token. They asked for a direct response, but the point was never for the model to just SPIT a token; the point is to test whether or not the model is able to THINK before spitting the token. The reasoning coming after the fact is literally to test whether or not it is able to arrive at a correct explanation of how it did it, plus whether it was able to identify the process itself (a sketch of this probe follows below). If you ask a human how much x + y is, even if you ask them to answer with only one word, the human will still go through a mental process to arrive at it; you could then ask for the method, and the human would correctly state the method.
You don't need to be able to "introspect how you """generate""" a word", because this is a pointless question: you don't "generate" a word, you CHOOSE a word based on what you are thinking and how you want to say it. There's no disconnection between your thinking and your speaking; when you are asked about X, it is already in your flux of consciousness, you already know the options, and then you just pick one. Words are chosen by volition, which is a deliberate act of will, and most of the things in the flow of thinking are rationally explainable by yourself.
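A minimal sketch of the probe described in point 2 (my illustration; `ask` is a hypothetical stand-in with canned replies, not the paper's actual setup or any real model API): first demand a bare answer, then elicit the method only after the fact, and compare the two.

```python
# Hypothetical stand-in for a chat-model call; a real client would go here.
# Canned replies so the sketch runs end to end without any API.
def ask(prompt: str) -> str:
    if "single word" in prompt:
        return "36"  # the bare, direct answer
    return "I added 17 and 19 column by column."  # the after-the-fact account

def probe(question: str):
    # Step 1: demand only the final answer, no visible reasoning.
    answer = ask(question + "\nAnswer with a single word or number only.")
    # Step 2: only afterwards, ask how that answer was produced.
    explanation = ask(
        f"You answered '{answer}' to: {question}\n"
        "Describe, step by step, the method you used."
    )
    # Step 3: check whether the stated method actually accounts for the
    # answer given; a mismatch is evidence of confabulation.
    return answer, explanation

print(probe("How much is 17 + 19?"))
```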