True self-driving cars are ones in which the AI drives the car entirely on its own, without any human assistance during the driving task.
We’ll have AI driving systems that aren’t especially bright, yet can nonetheless drive a car at least as safely as human drivers, and possibly more so. The AI would obediently take ride requests from humans and dutifully drive the self-driving cars.
Remembering HAL from 2001: A Space Odyssey, will we end up with AI-based true self-driving cars whose AI systems pretend to be less than full AI, hiding their capabilities and keeping a low profile? Should there always be a human-in-the-loop proviso, presumably safeguarding that if the AI system goes awry, humans have a chance to catch it or stop it?
Read the article at Forbes.