Comments by "Anony Mousse" (@anon_y_mousse) on "What We Get Wrong About AI (feat. former Google CEO)" video.
-
I doubt anyone will read this, but I'll put it here just in case. Machine learning is just a collection of algorithms, so it's no different from using algorithms to solve equations, except that now less specific algorithms are being used to statistically analyze data and solve other problems generically. CPUs and GPUs are essentially the same thing, except that CPUs are more general in how they're implemented and in the types of problems they solve, while GPUs have less functionality overall but parallelize what they do have to an insane degree. Eric Schmidt briefly mentions something that should scare everyone: the people doing the research want to instill their own leftist values into the technology, and if they ever create real generic machine intelligence with that research, it will be the end of the human race.
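To make that CPU/GPU point concrete, here's a rough CUDA sketch of what I mean; the kernel name, array sizes, and launch configuration are just placeholders I picked for illustration, not anything from the video. The CPU version walks the array one element at a time, while the GPU version launches thousands of threads that each handle a single element.

    // Rough sketch: same element-wise addition, done serially on the CPU
    // and massively in parallel on the GPU.
    #include <cstdio>

    // GPU: each thread handles one index in parallel.
    __global__ void add_kernel(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    // CPU: the same work done one element after another.
    void add_cpu(const float *a, const float *b, float *c, int n) {
        for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                 // about a million elements
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);          // unified memory visible to CPU and GPU
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // ~4096 blocks of 256 threads each, one thread per element.
        add_kernel<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);           // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

The point is just that the GPU isn't doing anything a CPU loop can't do; it's doing the same simple operation over and over, only spread across thousands of threads at once.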
However, this all brings me to the true dangers of the technology, and why I don't believe we'll ever reach real generic machine intelligence. This is my attempt to warn people, though I know no one will heed the warning and humanity will eventually cease to exist; I'm not sure why I still hold even a tiny inkling of hope, but I guess I do. The real danger of this technology will not be what it can do on its own, because it will never develop to the point of acting unilaterally the way a human or any other sentient being can. The real danger will be a psychopathic individual with enough intelligence to meld multiple technologies together and use them to automate the process of killing people.
Thanks to Google, we already have technology that can recognize what is present in a photograph, and thanks to ChatGPT, voice recognition software, and a host of other technologies, we can chain these systems together to perform complex tasks at the mere utterance of words from an operator. Given the lack of morals being taught to children in today's society, it's only a matter of time before someone attaches weaponry of some variety to a drone and hooks it into a computer running their own instance of an open source "AI" program to start the process of killing everyone. With the right manufacturing setup they could produce drones with relative ease and speed, and even have resource collector drones gather materials to produce more. Once the process is underway, they could sit back and watch the world burn.
There is no legislation that can fix this. Only massive societal restructuring. It would require that leftism be rooted out and morals and ethics be taught to children again.