Comments by "justgivemethetruth" (@justgivemethetruth) on the video "The Future of Intelligence: A Conversation with Jeff Hawkins (Episode #255)".

  1. 42:25 - When I think about Hawkins' and Harris' ideas on AI, risk, and doomsday, I have to side more with Hawkins. I think Harris' ideas about super-intelligence and AI "getting loose" are absurd science fiction, and fear-mongering to get attention and to help get funding for ... who really knows what. It is like the "missile gap" in the old Cold War: more a marketing idea, a manipulation by fear, spelling out some kind of absurd catastrophe and relying on our science-fiction movie experience rather than on any well-thought-out ideas. For one thing, no one has yet explained what intelligence is, what its relationship to consciousness is, or what consciousness itself is. One simple way of looking at it would be through Freud's conception of the mind: id, ego, and super-ego, the id in particular being the driver, the motivator.

     I don't see human beings being able to deconstruct ourselves to the level necessary to understand, measure, or maybe even perceive the elements of intelligence. And then Hawkins seems to think you can unhook the intelligence function as a kind of module, remove it, recreate it, and interface it to your TV, your car, your house, or a military force? I don't buy that either. That is an attempt to market to the customers he wants to sell his vision and inventions to. I don't know that anything he has produced so far has been effective or useful.

     I look at it this way: our consciousness is the product of evolution going back to the simple chemical processes in the first proto-cells. Whatever contributes to survival works, stays around, evolves, and develops, all the way up to the consciousness we have today. I don't see how he thinks you can unhook that from a supervisor/manager function with some kind of goals. Human goals have to do with feelings, and feelings are hardly intelligent. They can be, but they can also be twisted.

     How would an AI entity conceive of itself if it reached consciousness? It has nothing in common with humans. It is not born into a connection with other humans, the product of birth and society, seeking to reproduce and to control in some way its reproduction and the destiny of its species. At the end of his book, for example, Hawkins is all about human survival. What would make an AI excited about and interested in surviving? Does there need to be some kind of weird, elusively hidden pleasure button that the AI consciousness does not understand but blindly allows to control its actions and thoughts?

     And yet Hawkins must be right about how to proceed in studying intelligence, because our only example of it is ourselves; but then we have to acknowledge the evolutionary thread that goes back millions of years. How do you do that, and what happens if you try to avoid or deny it?