Steering discussions on automated driving
Automated driving comes with a variety of philosophical, psychological, ethical and legal issues to be solved. The NWO Responsible Innovation project Meaningful Human Control over Automated Driving Systems explores questions that arise in the context of self-driving vehicles.
Who is responsible when something goes wrong with a vehicle that is driving on its own? Whose insurance company should cover the costs when a series of trucks is connected through wifi and each automatically mimics the behavior of the first in the row? Does a self-driving car need a driver's license? These and other questions will be addressed in the research project of Bart van Arem, Marjan Hagenzieker and Filippo Santoni de Sio from Delft University of Technology.
Project leader Bart van Arem starts with an important note: ‘In this project, we explicitly rule out self-driving vehicles based on self-learning systems, such as artificial intelligence and machine learning. We focus on systems that obey, to the letter, rules defined by human operators or programmers.’
Three levels
The research project aims to develop and test theories on three different levels, he explains. The researchers will make an inventory of which responsibilities and values are at stake when a vehicle drives itself in one way or another. Furthermore, they are looking at the humans involved: in what way are they in control, what do they need in order to meet the demands the technology places on them, and how can you make sure such a person is qualified for that? Thirdly, the researchers will look into the consequences of automated driving for traffic engineering. Under which conditions can automated driving make traffic safer, dissolve traffic jams or lead to more efficient transport of goods?
Different scenarios for automated driving each come with their own issues, Van Arem says. ‘The simplest step is a combination of adaptive cruise control and lane keeping. In that case, the human driver has to stay alert. The second step is a system that also monitors its environment and urges the human driver to take over when driving becomes more complicated, for example because you are leaving the highway and heading for a roundabout. Finally, we look at the fully automated car, which is capable of driving on public roads without any human interference. The main question in all of these cases is: who should be responsible for the control, and what is necessary for that person to be equipped to actually carry that responsibility?’
Trace the human involvement
Even if a car can make decisions on its own, there is always a human actor involved. ‘A Mercedes official’, says Van Arem, ‘has stated that when in doubt, the car will follow the rule “passengers’ safety first”. That is a fundamental choice made somewhere in the management of that company. So they may be held accountable in case the car decides to save the passengers from hitting a tree and drives into a group of pedestrians instead.’
The timing of the project is spot-on, says Van Arem. ‘We have conducted several other research projects related to technology development and to spatial and transport impacts. In those projects, we noticed that all parties involved want to address ethical and psychological issues. The technology has developed far enough for us to experiment with it, but it will still take quite some time before highly automated vehicles are on public roads. This gives our partners, such as the CBR (responsible for issuing driver's licenses), the RDW (which approves vehicles before they enter the public roads), and insurance companies (which have to decide on liability issues), time to use our insights to prepare for the new reality in transport.’