Comments by "Paul Aiello" (@paul1979uk2000) on "What We Get Wrong About AI (feat. former Google CEO)" video.

  4. The simple truth is that we just don't know what impact A.I. is going to have, but I do know one thing: it would be very dangerous to have A.I. controlled by a few governments, corporations, or rich individuals. That would give them far too much power over the rest of us in almost every area, especially as A.I. continues to advance.

     Because of that, open source is the only solution. It levels the playing field so we all have the same advantages, and more importantly, being open, it might help the human race make the right choices going forward and avoid mistakes that could lead to disaster. With that said, having A.I. open carries its own risks, but given the alternative, I think it's the better option. In any case, Pandora's box is open and there's no going back, so we have to find a way to move forward and hopefully make the right choices, because A.I. is going to have a massive impact on society over the coming decades and could be one of the most important inventions we've ever created. The scary, or exciting, thing about this is the pace at which it could happen, something I doubt most of us are ready for.

     I also think the risk many are predicting isn't about where A.I. is now, but about how quickly it's developing and where it could end up. Is there a risk of losing control? Yes, if we're stupid enough to allow that to happen by giving advanced A.I. too much control over critical systems, but hopefully we'll put a lot of safeguards in place to make that harder to do. For now we don't have anything to worry about, but just look at the pace of A.I. in just one year, especially open source A.I., and imagine the next 5, 10, or 20 years. I can't make an accurate prediction of where A.I. will be by then, except that it will be very different from now and a lot more advanced, which for many is exciting and scary at the same time. One thing I do know: the more mainstream A.I. becomes, the more rapid its pace of development will be. We saw that with computers and the internet in the 90s, which went from niche to mainstream quickly, and with that, the development and resources thrown at them went through the roof. It looks like we're seeing the early days of that with A.I.

     When it comes to A.I. and safety, it's probably wise not to make it too powerful in a physical form, and not to give it so much access to critical areas that it can more or less control everything. We don't do that with humans, and we shouldn't do it with A.I. Having A.I. even more advanced than humans might not be a problem if most A.I. systems are independent of each other; I think the choices we make in that area will be the make or break of the human race.

     Also, let's be blunt about this: A.I. isn't going to be built on American values. There's a reason for the massive push on open source A.I., and part of it is so that A.I. isn't based on the values of any one country or another; in other words, it's neutral. The guy from Google should have been smart enough not to say that, for two reasons: American values are not that well regarded in a lot of countries around the world, and it's arrogant to think that A.I. should be based on the values of one country. That was never going to sell across the world, and it probably helps drive open source A.I. towards being more neutral from that kind of BS.