Comments by "Taxtro" (@MrCmon113) on "Can we build AI without losing control over it? | Sam Harris" video.
An AGI will most probably learn probabilistic behaviors from a continuous repertoire by making use of first-order derivatives. Of course you can think of all of this as "if-else" statements, because every computer only works with discrete numbers at some level, but that's not a helpful way of thinking about it.
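The point about first-order derivatives can be illustrated with a minimal sketch (hypothetical, not from the comment): a continuous parameter is adjusted smoothly by following the gradient of an objective, rather than by branching through discrete hand-written cases.

```python
# Sketch of learning via first-order derivatives: gradient descent on a
# continuous parameter. The objective and optimum here are illustrative.

def loss(theta):
    # Hypothetical objective: squared distance from an optimum at 3.0.
    return (theta - 3.0) ** 2

def grad(theta):
    # First-order derivative of the loss with respect to theta.
    return 2.0 * (theta - 3.0)

theta = 0.0                      # start anywhere in the continuous range
for _ in range(1000):
    theta -= 0.01 * grad(theta)  # small step against the gradient

print(round(theta, 3))           # converges toward the optimum at 3.0
```

No if-else case analysis appears anywhere: the behavior improves continuously because the derivative tells the learner which direction to move, which is the contrast the comment is drawing.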
9
What do you think qualifies as emotion?
7
An AGI will have extremely powerful heuristics. I don't think it's good to think about them as emotions. Also, it would be able to manipulate its own thinking. If it came to believe that thinking in a different way made it better at accomplishing its goals, it could change itself.
3
Just put up a sign "No praying to the mainframe".
2
Total annihilation is by far not the worst outcome of AGI.
1
He is extraordinary, but there are many other public intellectuals who are at least as calm.
1
@EvolBob1 From what you wrote it is obvious that you don't have a clue what you are talking about. Almost all researchers agree that the control problem is indeed a problem. To get a grasp of its importance, you can consult Nick Bostrom's "Superintelligence". AI safety research is booming because the danger is very much real. All intelligence is information processing; that is what intelligence means. Nor does your alternative make sense. If it's a "simulation of intelligence", it's information processing. But it's not a "simulation of intelligence"; that phrase is just nonsense. You cannot simulate intelligence without having actual intelligence.

>Current technology on AI is simply trying to either copy human learning

Totally wrong. Biology and neurology are only very seldom considered in machine learning research. We focus on what empirically works and what is mathematically justified.

>combined with intuitive programming

"Intuitive programming" is not a thing. AI uses dynamic programming, genetic programming, differentiable programming, etc., but not "intuitive programming".

>Personally I think all we will do is copy human intelligence

If you copy human intelligence, you already have a superintelligent AGI. You are really completely clueless.
1
@NotQuiteGuru No, that's not the danger at all. Consult Stuart Russell or Nick Bostrom to find out what the danger is.
1
Not only would it not be a filter, it would be an accelerator. An AGI could transform the local solar system and spread out much faster than the species that built it.
1
Everything you just wrote is wrong.
1
Trying to "operate" an AGI would be like an ant trying to operate a gardener. It simply makes no sense. The danger of AGI persists, no matter how good your intentions are.
1
They are not nearly as intelligent as an AGI would be though.
1
The time doesn't matter. Whether it's in 50 years or in 500 years, the control problem remains.
1
@christoforosmeziriadis7016 Using your imagination to avoid dangers is a fundamental aspect of rational thought. The phenomenon of you falling off a cliff doesn't exist in the world until you take a step over the edge. But you can anticipate that this might happen and thereby avoid it.
1
@christoforosmeziriadis7016 You have no clue what you are talking about. The control problem has nothing whatsoever to do with consciousness.
1
@christoforosmeziriadis7016 You literally have no clue what you are talking about, and every single one of your sentences reveals it. Why don't you at least do a 5-minute Google search before spreading your nonsense? That would suffice to find out that the control problem is not about consciousness. It is further clear that you don't know what AI and statistics are. I don't know what those words mean in your head, but whoever you are parroting has no idea what he's talking about either. I guess by "AI" you mean machine learning, and in your head that should be separate from statistics. But you cannot separate statistics from learning. All learning is statistics.
1
@christoforosmeziriadis7016 There are already chatbots that are way better at understanding language than you are. Again: consciousness has nothing to do with it. Do you understand that? Can you repeat that?
1
@christoforosmeziriadis7016 Nope, I don't have the impression of you having either intelligence or understanding. You never addressed anything I wrote. I think I could write a better chatbot than you in a weekend.
1
@christoforosmeziriadis7016 They achieve more with a couple of thousand lines of code than you do with billions of neurons. You haven't acknowledged that consciousness has nothing to do with the control problem (or what the control problem even is). You haven't noticed that you changed the topic three times. You have no understanding of what either intelligence or stochastics is. You have zero knowledge of machine learning. And you are completely blind to all of this.
1
@christoforosmeziriadis7016 A truism: something that is true only insofar as it is trivial. The entire point of machine learning is that you don't have to explicitly program everything, the same way your genes do not directly determine your behavior: you can learn from interacting with your environment. Also, you have changed the topic again, and you've again failed to acknowledge any of the things you got wrong. Do you admit that you have no idea what the control problem is?
1