Comments by "Paul Aiello" (@paul1979uk2000) on "Smart, seductive, dangerous AI robots. Beyond GPT-4." video.
A.I. is one of those things that is going to continue to develop regardless of the risks; humans are very curious and want to see what's possible.
Are there risks? Yes. A.I., like any other tech, can be used for good and bad, and we already know A.I. will be used for bad, most likely by governments and the military.
With that said, the economic benefits that A.I. can offer us all are massive and could change the world for the better if done in a smart way.
I think the biggest risk with A.I. is if it can all interconnect, which would allow a rogue A.I. or human to take control of many systems at once, along with everything they are connected to through the internet.
Truth be told, there is little to no reason for A.I. to have remote access to sensitive areas like nukes, the energy grid and so on, which is usually how movies show a takeover happening. It's very possible to create A.I. whose access is restricted, much like people's access is restricted in society, or to keep each A.I. independent of the others, which would make it much harder for them to become a hive mind.
Personally, I think the critical areas we need to get right are remote server access and safety protocols at the root level of the A.I., whether it's an A.I. online or in a robot. Maybe that part needs to be hard coded into robots so it can't be altered remotely, only by physically changing it manually; that would make it much harder for a rogue A.I. or human to take over millions of them.
In any case, the real danger of A.I. isn't the A.I. itself, it's the remote access we give it to control other things. A few A.I. or robots are quite limited in what they can do if access to sensitive areas is restricted, but it's a different ball game if we allow them access to everything, which is how movies usually show a takeover happening.
For now, I think the real risk is A.I. being centralised in too few hands, whether big corporations or governments, and I do think we need another approach, a more open one, so A.I. can benefit us all without being controlled by a few. I think an open source solution is the only way to go, which, ironically, was what ChatGPT was meant to be, hence the name OpenAI, until they went for profit. That, for me, is the biggest risk we face with A.I. at the moment. Especially as A.I. keeps getting better and more useful, it's not a good idea to have that gateway concentrated in so few hands given how big a change it's going to have on the world. We need a much more open approach, something governments might push hard on over the coming years with regulations.