Comments by "Kristopher Driver" (@paxdriver) on "Sabine Hossenfelder" channel.

  1. Here's how a philosopher who studies AI would answer the objections/conjectures: Searle's Chinese Room, with its fixed instructions, produces fixed output; there is no processing of concepts. Language can be fixed for simple exchanges, but its power lies in concepts, which are expressed through patterns and rules. There is symbolic representation and there are ciphers, but those symbolic abstractions are not the same, only similar. Generative models composite downscaled mappings in chunks and compare the activations of those chunks with training data. If what is learned can be applied not only to the specific case but also analogously to new things learned later, even unrelated ones, that's understanding.
     Without being explicitly shown a hierarchy, an AI model would never understand nested data structures such as lists of lists with child and parent elements, even if the model "understood" the family unit, because it doesn't grasp the concept of hierarchy on its own. In contrast, a human who had a family but no words for family hierarchies could still learn these concepts; nested elements and their resemblance to family trees would feel conceptually familiar just from being alive and thinking about unrelated things their entire life. A computer doesn't draw inferences from prior understanding, only from prior associative training. The difference is like that between memorizing the answers to a math test, or the formulae needed to answer its questions, and being able to derive a proof of a formula, compute it in reverse, or write it in another set of symbols. Understanding math and reproducing steps prescribed by an engineer are not the same, because at some point, before we could teach math, we had to discover and develop the rules themselves, and defining that process requires understanding. Just because we can do things that machines can do doesn't mean machines are us or we are them. Performing functions is only part of knowledge; being able to expand and disseminate new knowledge is what is distinctively intelligent.
     If GPT were intelligent or understood things, it would have the impulse to correct itself every time it's wrong, since it has the data and perfect memory needed to do that, or it would lie on purpose for fun or spite. If it understood what it was doing, it could develop an agenda and aspirations. Understanding also requires continuous learning, which AI does not do of its own volition. Understanding requires consciousness and agency, a continuum of relative experiences that keeps training and feeding back into its world model. Otherwise it's fixed and, by definition, not understanding anything new.
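A minimal illustrative sketch (Python; the data and function names are hypothetical, not taken from the comment or the video) of the nested parent/child structure the comment above describes: a family tree as lists of lists, where grasping the hierarchy means one traversal applies to any depth of nesting, not just the example shown.

    # Hypothetical family tree as "lists of lists": each node is [name, [children...]].
    family_tree = [
        "Grandparent",
        [
            ["Parent A", [["Child 1", []], ["Child 2", []]]],
            ["Parent B", [["Child 3", []]]],
        ],
    ]

    def print_tree(node, depth=0):
        # The same traversal works for any nested parent/child structure,
        # which is the sense of "understanding hierarchy" used above.
        name, children = node
        print("  " * depth + name)
        for child in children:
            print_tree(child, depth + 1)

    print_tree(family_tree)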
  2. Even if we perfectly articulated the sensory experience from which we derive the goals that steer cognition and motivation, that machine would still lack the sensory perceptions necessary for the emergent properties of life, because we're composed of cells which themselves sense their environments and evolved. AI will always need to be given goals until its switches are built from computers trained to respond individually to floating integrals of feeling simulations, which then combine to work together as a whole in a way that proves effective over time for the sustainable development of their collective "self". That's not just unlikely; it would mean we'd have to let them, and help them, evolve far beyond any obviously safe level of autonomy.
     The reason we don't let kids drive is that they'll kill people in cars until they're old enough to be trained to drive. Robots won't have the emotional wherewithal to evolve sentience unless we explicitly walk them through a global version of maturation, and we can't even coordinate world peace or agree on borders, so that level of training simply isn't possible, let alone enough to amount to an AI extermination. Imagine every computer as one neuron, guiding that to evolve from a paramecium to a human fetus, then guiding that fetus to adulthood, and then expecting that one adult, now imbued with feeling and morality, to by happenstance be both suicidal and psychopathic... It's not even close to reasonable to think of AI as that kind of threat.
     Yes, AI follows Bayesian inference just like biology, but comparing timescales of billions of years to centuries is absurd. To think we'd be helpless to stop it even if it occurred is absurd. And to think it would want to, even if it could and we couldn't prevent it, is absurd. At every level the AI doomsday scenario is insane. Humans are far, far greater threats to ourselves: we're already here, we've already nearly exterminated ourselves in the past, and we're already ignoring climate change and starting wars despite our intelligence, tools and capacity to emote. We are the existential threat by orders of magnitude, not the AI. AI is a tool; it is math and engineering. People are irrational, unpredictable, insecure, greedy, myopic and violent. We are the threat, clear as day, not AI.
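For reference, the "Bayesian inference" mentioned in the comment above is just the update rule P(H|D) = P(D|H) * P(H) / P(D). A minimal Python sketch with made-up numbers, purely to illustrate the rule the comment is gesturing at:

    # Toy Bayesian update: posterior = likelihood * prior / evidence.
    # The hypotheses and numbers are invented for illustration only.
    priors = {"H1": 0.5, "H2": 0.5}          # P(H)
    likelihoods = {"H1": 0.8, "H2": 0.2}     # P(D | H) for some observed data D

    evidence = sum(priors[h] * likelihoods[h] for h in priors)              # P(D)
    posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

    print(posteriors)  # approximately {'H1': 0.8, 'H2': 0.2}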