Comments by "Ray Bod" (@raybod1775) on "Science Time" channel.

In reply to Kaizaki Arata:

In the 1970s the biggest thing going was expert systems, in which human-inputted rules determined how the AI worked. Many systems, like legal programs, are expert systems. That was it until about the last 15 years, when things started to change. There's a version of AI that essentially does statistical pattern analysis to determine "what is this a picture of," which is part of what self-driving cars do to decide whether there's a stop light or a pedestrian ahead. The latest self-learning AI is essentially two battling AI programs that try to beat each other (more or less) in whatever game scenario they compete in; they play millions of scenarios against each other and the best AI wins. In both cases, the viewpoint of the AI is very narrow, and both approaches take a tremendous amount of computing resources.

My view is that self-learning AI could lead to a general intelligence system, but there needs to be an exponential increase in processing efficiency, and an expansion in the human viewpoint of how all things relate, to set that task up for the AI in its initial program. It's theoretically possible that, after enough self-simulations, an AI could essentially start from nothing and evolve into a super general intelligence, but for lack of processing power and time. It would seem possible to develop a general AI at a much higher step along that evolutionary path and "direct" the evolution of the AI so it's more in sync with real-world human evolution.

I don't think there's a market for what might be true general AI. It's more likely the path will be through marketable products, which will probably be built by stitching together various AI components to get practical tasks done in specific areas. All this is just speculation. Look at videos from 3Blue1Brown, Edureka!, The Artificial Intelligence Channel... for learning about AI programming.
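The "two battling AI programs" idea in the comment above is self-play: two copies of the same learner play a game against each other over many episodes, each improving by exploiting the other's mistakes. A minimal sketch, assuming a toy game of Nim (a pile of 10 stones, each turn take 1-3, whoever takes the last stone wins) and a simple tabular Monte Carlo learner; the game, function names, and parameters here are all illustrative, not any specific production system:

```python
import random
from collections import defaultdict

PILE, MOVES = 10, (1, 2, 3)  # pile size; a move takes 1-3 stones

def train(episodes=20000, eps=0.1, alpha=0.5, seed=0):
    """Self-play training: both 'players' are the same epsilon-greedy
    policy reading and writing one shared value table q[(pile, move)]."""
    rng = random.Random(seed)
    q = defaultdict(float)  # estimated value of a move, for the player making it
    for _ in range(episodes):
        pile, history = PILE, []
        while pile > 0:
            legal = [m for m in MOVES if m <= pile]
            if rng.random() < eps:           # explore occasionally
                move = rng.choice(legal)
            else:                            # otherwise play the best known move
                move = max(legal, key=lambda m: q[(pile, m)])
            history.append((pile, move))
            pile -= move
        # The player who took the last stone wins (+1); walking backward
        # through the game, the sign alternates between the two players.
        reward = 1.0
        for state, move in reversed(history):
            q[(state, move)] += alpha * (reward - q[(state, move)])
            reward = -reward
    return q

def best_move(q, pile):
    """Greedy move from the learned table."""
    legal = [m for m in MOVES if m <= pile]
    return max(legal, key=lambda m: q[(pile, m)])

if __name__ == "__main__":
    q = train()
    print("opening move:", best_move(q, PILE))
```

Even this toy illustrates the comment's point about narrowness and cost: the table only covers Nim positions up to 10 stones, yet it already takes thousands of games to fill in, and the state space (hence the compute) explodes for any realistic game.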