Comments by "Aga" (@aga5109) on "Eliezer Yudkowsky: When will AGI arrive? | Lex Fridman Podcast Clips" video.
When it comes to models of scientific cognition that will be built by humans with the help of AI or AGI, there will also be built-in limits. We do not know all the variables of the model. We enter the initial assumptions into the model, and we verify scientific hypotheses with tools that have limited parameters, limited also by matter itself. Our senses have a limited range. Knowledge will be accumulated and analyzed within a limited, finite set of possibilities, and then overgeneralized from one system to another, even though they differ in their level of complexity and operation.
The problem is generalizing such conclusions from the finite to the infinite. And that is dangerous.
Unfortunately, we are prone to overgeneralizing, which is one of the aspects of human cognition. It leads, for example, to judging other people harshly on the basis of limited knowledge about them, in order to get oriented in a situation and to retain "psychological homeostasis" governed by "defence mechanisms". It is an obvious process to people who work with other people's minds.