Handling the inevitable driverless tech accident
Many proponents of autonomous vehicles say the technology is safer than human drivers, reducing accidents, deaths and injuries.
However, in an article for Engineering and Technology magazine, Tony Gillespie, visiting professor of electronic and electrical engineering at University College London (UCL), argues that it is inevitable the vehicles themselves will be involved in accidents. The potential causes of a crash range from software bugs and operator failure to electronic and mechanical faults.
Speaking to TU-Automotive, he explains: “Safer does not mean completely safe. Most current accidents are owing to human error while driving. Accidents because of a vehicle failure or the road infrastructure are relatively rare. A few are down to lack of maintenance, but MOT testing has reduced their frequency.”
“However, autonomous vehicles (AVs) will have both increased numbers and complexity of their critical systems compared with current vehicles. Therefore, it is very optimistic to expect current low failure rates to apply to AVs even without the complexities from the interconnections for connected AVs (CAVs).”
Software-intensive critical systems
Most critical systems will be software-intensive and will require frequent upgrades. In effect they will form part of a system of systems, and he claims each one will have a non-zero failure rate. With several interacting and changing systems and subsystems, accidents become inevitable. He explains: “It is anticipated that overall safety will improve significantly but only when the human driver is removed from all road vehicles. This situation (SAE autonomy Level 5 for all vehicles) is unlikely to be reached for decades.” Until that point, human error will remain the main culprit.
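Gillespie's point about non-zero failure rates compounding across a system of systems can be illustrated with a back-of-the-envelope calculation. The figures below are purely hypothetical, not drawn from the article: even if each critical subsystem fails very rarely, the chance that at least one fails on a given trip grows with the number of subsystems.

```python
# Hypothetical sketch: probability that at least one of n independent
# critical subsystems fails on a trip, given per-subsystem failure
# probability p. Assumes independence, which understates real-world
# risk from interacting subsystems. All numbers are illustrative.

def p_any_failure(p: float, n: int) -> float:
    """Probability of at least one failure among n independent subsystems."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    p = 1e-6  # one failure per million trips, per subsystem (assumed)
    for n in (1, 10, 50):
        print(f"{n:2d} subsystems -> P(any failure) = {p_any_failure(p, n):.2e}")
```

With 50 critical subsystems at that assumed rate, the per-trip failure probability rises roughly fiftyfold, before accounting for the interconnections that connected AVs add.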
Dr Ehsan Torieni, assistant professor in the department of computer science at Durham University, believes that perfect safety is not an achievable objective: “There will never be a faultless driving system mainly since at the end of the day, they are implemented by humans. You can’t prevent human errors in implementations or testing.” Nevertheless, he agrees that autonomous driving will be safer than human driving, as machines cannot be distracted by conversation or tiredness.
Transfer of responsibility
The synopsis of Gillespie’s book, Systems Engineering for Ethical Autonomous Systems, published by the Institution of Engineering and Technology (IET), also argues that the transfer of responsibility from humans to machines presents significant issues “for all those concerned with new concepts, their development and use”. He argues that it is vital to look beyond connected and autonomous vehicles to consider the evolving ethical and legal environment, along with the systems engineering approaches used in the development of military weapon systems as well as in civil applications.
To read the complete article, visit TU-Automotive.