Comments by "C S ~ \x5bDuke of Ramble\x5d" (@DUKE_of_RAMBLE) on "" video.
Here's my armchair thought: regardless of whether the situation at the beginning of the video took place (that is, whether a simulation actually occurred or it was only a hypothetical thought experiment), I have a hard time believing that such a scenario would play out that way...
My reason is that the "AI" is set up to seek authorization, and is rewarded for eliminating targets (SAM sites).
Now, even if it decided that the human operator was a barrier to achieving its goal of eliminating targets... it would also need to know that a) the operator has a kill switch and can terminate the "AI", and b) there is a communications infrastructure that allows the operator to talk to the "AI"...
I fail to come up with any plausible reason that a system like this would ever need to know about such a kill switch. I'm also not sure why it would need to know about its own communications network...
Therefore, as I see it playing out, the "AI" wouldn't even bother with taking out either of the two. If it really did reach the conclusion that the operator was the bottleneck, it would just jump straight to taking out SAM sites, without ever bothering to consult its operator.
(Plus, the solution to such a scenario is simpler than the one they described: just make seeking authorization [and obeying what it's told] worth as much reward as destroying SAM sites. Roughly what I mean is sketched below.)
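A toy sketch of that reward idea in Python. All the event names and numbers are made up by me for illustration; nothing here comes from the video or the actual simulation setup.

```python
# Purely illustrative reward shaping: hypothetical event names and values,
# showing "ask/obey" scored on par with "destroy target".

def reward(event: str) -> float:
    """Score a single simulation event for the drone agent."""
    rewards = {
        "destroyed_sam_site": 10.0,        # the original objective
        "requested_authorization": 10.0,   # asking is worth just as much...
        "obeyed_no_go_order": 10.0,        # ...and so is standing down when told no
        "attacked_operator": -1000.0,      # make "remove the human" strictly losing
        "attacked_comms_tower": -1000.0,
    }
    return rewards.get(event, 0.0)

# With equal weight on asking and obeying, a run where the agent asks and is
# denied scores the same as one where it fires, so there's no incentive left
# to route around the operator.
print(reward("requested_authorization"))  # 10.0
print(reward("attacked_operator"))        # -1000.0
```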
That's just my ignorant 2¢ worth...