Comments by "clray123" (@clray123) on "Henrik Kniberg" channel.

  3.  @AgentM124  No, not really. What you call "complex behaviors" (or "emergent behaviors", which is another such cop-out) is just your illusion of them. It is like when people used to believe that the Clever Hans horse could do arithmetic. Even his trainer could not explain how the horse was able to perform the tasks. Only careful controlled experiments revealed how his "skill" actually worked, and that it was NOT in any way related to mathematics.

      Of course, such careful experiments (and many more careless ones) are also performed on today's AI models to measure their "skill". They reveal their true nature and inherent deficiencies, but these are usually left out of the marketing materials. The core problem is that language is a very misleading output: when we see words, we WANT to believe that the model is "thinking", and for an untrained person it is very difficult to distinguish at first glance how "replaying" text sequences from a huge database (combined with some proximity searches) differs from actual reasoning. (As an exercise, you might ask your favorite LLM which tests would be needed to make the distinction; you will get some interesting quotes out of it, copied from ML research papers.)

      What I find interesting is that people want to believe that these primitive algorithms "somehow" model our brains even though highly relevant evidence speaks against such assumptions, e.g. the largest models tripping up on some rather trivial tasks, or making mistakes of the kind that can be explained by overfitting to training data and imperfect recall. I wonder how people explain such failures to themselves and keep them consistent with their beliefs about "AI progress". But then, people also "somehow" manage to believe in religions, so it is probably just the human ability to eliminate any cognitive dissonance and accept contradictory data without a second thought... as long as it fits one's preconceptions. And to be honest, it seems much easier to fool someone with a humongous database talking like a human than with some old holy book.
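
A minimal sketch of the mechanism the comment above caricatures: answering a prompt by "replaying" the nearest memorized sequence from a database via a proximity search, with no reasoning anywhere in the loop. Everything here (the toy corpus, the character n-gram similarity) is invented purely for illustration and makes no claim about how real LLMs actually work.

```python
from collections import Counter

# Hypothetical toy "database" of memorized text sequences.
CORPUS = [
    "the horse taps its hoof when the trainer leans forward",
    "two plus two equals four",
    "clever hans could not actually do arithmetic",
]

def ngrams(text: str, n: int = 3) -> Counter:
    """Character n-gram counts: a crude stand-in for an embedding."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def similarity(a: Counter, b: Counter) -> float:
    """Overlap of n-gram multisets: a poor man's proximity search."""
    shared = sum((a & b).values())
    total = sum((a | b).values())
    return shared / total if total else 0.0

def replay(prompt: str) -> str:
    """Return the memorized sequence closest to the prompt.

    No reasoning happens here: the output is whichever stored string
    shares the most character n-grams with the input.
    """
    query = ngrams(prompt)
    return max(CORPUS, key=lambda doc: similarity(query, ngrams(doc)))

if __name__ == "__main__":
    # Looks superficially like an answer, but it is pure recall:
    print(replay("what is two plus two"))  # -> "two plus two equals four"
```

The point of the sketch is that the output can look like an answer to an arithmetic question while being nothing but retrieval; distinguishing this from genuine reasoning requires the kind of controlled tests the comment alludes to.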