On the morning of March 23rd, a Tesla with Autopilot engaged was involved in a fatal collision, once again raising concerns about the feature and the way some drivers are using it.
The Model X SUV crashed into a freeway lane divider near Mountain View, California, killing the driver – the second fatal accident in which Autopilot has been involved. The news comes off the back of Uber’s own autonomous-vehicle fatality, in which a self-driving Volvo killed a pedestrian in Tempe, Arizona. Understandably, concerns over the technology have heightened, with a handful of automakers and software specialists pulling autonomous testing from public roads.
According to Tesla, the driver had received several visual warnings and one audible hands-on warning earlier in the drive, and his hands were not detected on the wheel for the six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider and its crushed crash attenuator, but the vehicle logs show that no action was taken.
We cannot know exactly what happened, given the range of possibilities, but this once again highlights a common misconception about Tesla’s Autopilot. The expectation that the vehicle drives itself has created a false sense of security, in which the driver believes they can sit back and do absolutely nothing. There was always going to be a learning curve with this kind of innovation, but it seems to me that there is a significant lack of information and education about operating an autonomous vehicle – especially in these early stages.