Comments by "Mark Armage" (@markarmage3776) on "Israel's Lavender: What could go wrong when AI is used in military operations? | GZERO AI" video.
There's a limit to how many targets a human can engage at one time; a machine can engage thousands of targets at once.
Meaning that even if human error is around 50%, a human officer can issue kill orders against at most, say, 10 people in a day, so out of those 10, only 5 are killed by mistake.
But with a machine, even if the error rate is only 20-30%, it engages 1,000 targets a day, so it literally kills 200-300 people by mistake in a single day.
Please go study actual mathematics.
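To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the error rates and daily target volumes are the comment's own illustrative assumptions, not real figures.

def mistaken_kills(targets_per_day, error_rate):
    # Expected number of people killed by mistake in one day:
    # mistakes scale with volume, not just with the error rate.
    return targets_per_day * error_rate

# Hypothetical numbers from the comment above (illustrative only).
human = mistaken_kills(10, 0.50)            # ~5 mistaken kills/day
machine_low = mistaken_kills(1000, 0.20)    # ~200 mistaken kills/day
machine_high = mistaken_kills(1000, 0.30)   # ~300 mistaken kills/day

print(f"Human officer: ~{human:.0f} mistaken kills/day")
print(f"Machine (20% error): ~{machine_low:.0f} mistaken kills/day")
print(f"Machine (30% error): ~{machine_high:.0f} mistaken kills/day")

The point of the sketch: a lower error rate applied to a far larger number of targets still produces far more mistakes in absolute terms.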
And furthermore, there's absolutely no guarantee that the standard used to tune the machine is even an acceptable standard for military conduct. What happens if the standards built into the machine turn out to be illegal?
If the machine was tuned to commit war crimes, the scale of those war crimes increases by orders of magnitude.
Don't you get it, buddy? Go get a proper education. You have lost touch with your humanity.