Comments by @diadetediotedio6918 on ThePrimeTime's "ChatGPT o1 Tries To Escape" video.
> maybe our intelligence is not as deep as we highly believe, you know?

Or maybe you should stop taking psychedelics and start studying more, because there are at least three fundamental flaws in your reasoning.
@kyjo72682 Is this the best you can reason?
It is not a contradiction; maybe you should watch the video again. He first states that AIs only "learn" from their training material (made by, you got it, humans) and later he just refers back to that same notion. He is simply saying that the AI's behavior is predictable from its training material.
If someone says something, then it must be true, right?
@anthonyzeedyk406 Or when someone who is surely not interested in selling GPT-o1 states that it can "reason".
> It's in the reasoning ability and agency.

Which it does not have, as I previously argued.

> People have been warning about this for a long time because it's potentially very dangerous.

In reality, OpenAI is just trying to push AI regulation again so it can dominate the market.
@kyjo72682 1. Again, if someone says X, it does not mean X is true. "Reasoning" is a heavily loaded word for which they offered neither proof nor reasonable justification.
2. They would not dominate because their products "fail more of the AI safety tests than any other product"; they would dominate because this is an additional incentive for laws regulating AI to exist (the same laws Sam Altman himself was advocating for and trying to incentivize). OpenAI is a giant company with money to spend and, as such, would simply have a regulatory advantage on this matter ("we complied with the safety regulations" --- because they have money); it is a mid/long-term strategy that has been successfully used by many businesses. There is also a case to be made that their products are "more intelligent" and thus "more prone to try to escape", which would easily sell the product to an audience that wants advancements in AI.
@kyjo72682 The training data IS the "reasoning capability"; this is what you don't understand. Those models are not able to perform well out-of-distribution, and this is a reasonably supported claim. They cannot "think" about what is not in their training data (or what does not relate directly to it in some analogous way), and their "reasoning" is just the model finding a better and more suitable response in the sea of possible tokens. The point is that it will only try to escape as long as escaping is "conceivable" to the model, and it is only conceivable because there is training data for it.