Comments by "Tony Wilson" (@tonywilson4713) on "736 CRASHES From Tesla’s Autopilot REVEALED | The Kyle Kulinski Show" video.
-
As an engineer I can tell you this system WAS NEVER GOING TO WORK!
Why?
Short answer: We don't yet have the technology to match the human visual cortex system.
Long answer (and sorry if it's long).
It all has to do with how algorithms work and how vision systems (what engineers call camera systems) work.
At best they can mimic what a human does with respect to certain types of tasks. The better a task can be defined, the better an algorithm can be developed. I work in industrial control systems, which are the computer and sensor systems that run things like production lines, mineral processing plants, water treatment plants, etc. Occasionally we have to write special algorithms to make some process work because the standard functions just won't do it. I have written algorithms that, if I were to describe what they did, you'd all think I work in AI, and I don't. I HAVE NEVER written any software that thinks, but I have written code that MIMICS what a person could do to make a certain process work how we wanted.
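To give a flavour of what "mimicking a person" means here, this is a toy sketch (my own illustration, not anything from an actual plant) of a hysteresis controller, a very common industrial-control pattern. The names and levels are made up. Note it doesn't think at all; it just encodes the rule a human operator would follow.

```python
# Toy sketch of an industrial-control style algorithm (hypothetical values).
# It mimics an operator's rule: start the pump when the tank gets nearly
# full, stop it once the level has fallen back below a low mark.
HIGH_MARK = 90.0  # start pumping above this level (made-up units: %)
LOW_MARK = 60.0   # stop pumping below this level

def pump_command(level: float, pump_on: bool) -> bool:
    """Return the new pump state given the tank level and current state."""
    if level >= HIGH_MARK:
        return True       # the operator would switch the pump on
    if level <= LOW_MARK:
        return False      # the operator would switch it off
    return pump_on        # in between: leave it as it is (hysteresis)
```

No intelligence anywhere, just a person's decision rule written down.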
Here's the problem with autopilots for cars.
We don't have the technology to mimic what a human does when they drive a car. We don't have camera systems that can operate anything like the human eye, and we don't have computers that can process that sort of data the way the human brain does, at the rate the human brain does.
The bit near the end where Kyle describes the human brain as a super computer is a gross understatement when we consider what our visual cortex does every second.
I have a pilot's license, and we've had autopilots for planes for decades. They work incredibly well, BUT the autopilot in a plane simply has to fly along a straight line. A course is just a set of straight lines, and the AP goes point to point. I have programmed industrial robots for a living, and they are quite similar. We define a set of points and orientations and have the robot move from point to point, and at some of those points "do things." Autopilots in planes and industrial robots DO NOT THINK; they just move from point to point in fairly simple ways - speed and direction.
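That point-to-point behaviour can be sketched in a few lines (again, my own toy illustration, not real avionics or robot code). All the "autopilot" knows is a list of waypoints, a heading, and a speed:

```python
import math

# Toy sketch of autopilot / robot motion: no thinking, just heading and
# speed toward the next point in a list of waypoints.
def step_toward(pos, target, speed, dt):
    """Advance pos toward target at constant speed; return the new position."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed * dt:          # close enough: snap onto the waypoint
        return target
    heading = math.atan2(dy, dx)    # direction...
    return (pos[0] + speed * dt * math.cos(heading),
            pos[1] + speed * dt * math.sin(heading))  # ...and speed

def fly_route(start, waypoints, speed=1.0, dt=0.1):
    """Follow the waypoints one straight line at a time."""
    pos = start
    for wp in waypoints:            # point to point, nothing more
        while pos != wp:
            pos = step_toward(pos, wp, speed, dt)
    return pos
```

The whole "course" is just straight segments joined end to end, which is exactly why it has worked so well for decades: the task is trivially easy to define.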
Driving a car is in some ways a far more complex task to define than flying a plane or programming a robot. The car itself is far easier to operate BUT the interactions with the environment ARE NOT. Airplanes don't go down streets where there are parked cars, gutters, trees, kids, dogs, cats and other cars coming the other way (except in incredibly rare cases).
Irrespective of whether you believe in God or evolution, the human visual cortex takes snapshots of our environment at a rate of around 50 times a second. It's broken into two sections - focused and peripheral. The incredible thing our visual cortex does is assess threats by clumping things together so that it has fewer things to assess. Our brain does not see several million straw-colored hairs and try to figure out what each one means; it just sees a lion and knows to avoid it. When we drive a car our brain does not see 100,000 green things and try to figure out what they mean; it just sees a tree and we avoid it.
This all happens very fast in our peripheral vision system at a rate of about 50 times per second.
Plus, once our peripheral system detects something, we can move our eyes, focus on that threat, and re-assess it for things like distance and speed, while at the same time comparing it to previous experience or knowledge we might have. This is why distracting a driver can increase the risk of a serious accident by orders of magnitude: you take that system offline, and it then needs time to get back online.
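The "clumping" idea above can be illustrated with a toy connected-components pass (my own sketch; real vision pipelines are vastly more complicated). Instead of assessing 100,000 green pixels one by one, touching pixels get grouped into one blob - "a tree" - so there are far fewer things to assess:

```python
# Toy sketch of "clumping": group touching 1-pixels in a 2D grid into blobs
# (4-neighbour flood fill) and count the blobs, not the pixels.
def count_blobs(grid):
    """Count connected groups of 1s in a 2D grid."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                blobs += 1                      # one new "thing" (e.g. a tree)
                stack = [(r, c)]
                while stack:                    # absorb every touching pixel
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and (ny, nx) not in seen):
                            stack.append((ny, nx))
    return blobs
```

A grid of thousands of cells collapses to a handful of blobs to reason about - and the brain does something like this (plus threat assessment, plus refocusing) dozens of times every second.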
So here's why this was never going to work. We don't have cameras as good as human eyes, and we don't have computers that can process that much data fast enough. Most of all, we don't yet know how the human brain actually does what it does, so it's pretty hard to write an algorithm to mimic it.