Comments by "Night Raven" (@GiRR007) on the "What We Get Wrong About AI (feat. former Google CEO)" video.
Realistically it will likely be similar to all other forms of technology. Some positive effects, some negative effects. The extremes are always the most unlikely, despite people's doomsday prophecies or utopianism.
2
@Landgraf43 It is very much comparable to other technology. AI can't just operate on its own; it needs input from humans. What we have currently isn't even AI. It's just a bunch of algorithms designed for specific purposes. It doesn't think, and it isn't alive. Current technology is already smarter than humans at narrow tasks: a calculator is better than any human at mathematics.
1
@Landgraf43 The systems you are referencing do not set their own goals. They still need human input and feedback, and likely always will; otherwise the tool has no purpose. AutoGPT still needs human input; its ability to sequence instructions in the best way to achieve the demanded outcome isn't the same thing as autonomy. And we can't even really talk about AGI, since we don't have it yet and possibly never will. And even if we do reach that level, the rule that extremes are unlikely still applies, as with all other technology.
1
@Landgraf43 Sub-goals and goals aren't the same thing. One is autonomous, the other isn't. It's not autonomy; it's still responding to preset instructions. Autonomy would be the algorithm coming up with its own independent goals, completely separate from the goals it was given, doing things FOR itself, not because someone told it to. That's the kind of autonomy we are referring to. Human goals go beyond basic biological programming; a lot of them have no bearing on survival. Emergent capabilities don't make an AGI; that requires much more. There's a difference between one kind of technology advancing rapidly and a whole different technology existing at all.
1
@Landgraf43 Again, the type of goals people talk about in regards to AI aren't sub-goals; they are spontaneous goals independently chosen by the AI, which is exclusively something you see in things with actual consciousness, which AI might never have. In your example, the system spontaneously making the decision to improve itself is unlikely. Unless that behavior was already pre-installed or that goal was set by someone else, it wouldn't have a reason to improve its internal scoring system like that if the rules aren't set. With algorithms, a BIG part of what makes them useful in our eyes is their ability to understand the context of the goals we give them. So in reality it's unlikely you would get an algorithm that spontaneously does things that are undesirable, or even desirable. Sure, other technology can't improve itself on its own, but my point is neither can these algorithms unless we program them to do so. It takes A LOT of work, and human intervention, to create and improve these algorithms. We may not know exactly how they function inside their black box, but there is still a lot of work put into them. They are far from just doing things on their own.
1