Self-driving cars continue to attract the attention of the media, to capture the imagination of pundits, and to draw the money of investors and governments alike. For instance, earlier this year, the UK government announced its plan to test self-driving cars on public roads.

Self-driving cars present various technical challenges, such as how to ensure passenger safety when no human being is actively steering the car. They also bring various social challenges, such as the market consequences of the short expected useful life of self-driving cars (4 years vs. the 11+ years of traditional cars), or the biases shaping the algorithms at the heart of autonomous vehicles. Noah J. Goodall presents an interesting discussion of the social risks of self-driving cars in a paper published in the journal Applied Artificial Intelligence. The paper is entitled “Away from Trolley Problems and Toward Risk Management”, and appeared in volume 30, issue 8 (a free pre-print version of the paper is available here – thank you very much to Noah for directing me to this free version).
Goodall notes two important characteristics of the algorithms steering self-driving cars. The first is that these algorithms calculate the risk of collision or fatality based on average probabilities learned from analysis of the training data set(s). For instance, they calculate how characteristics such as the type of car, other cars’ distance and speed, the age of the cars’ occupants, the weather and so on are correlated with the outcomes of collisions. However, actual fatality rates vary widely depending on a broad range of factors, including “whether a passenger is sober or drunk, male or female, young or old” (p. 813). This means that the calculations only approximate the ‘average’ situation, and may not accurately reflect the risk actually faced by a particular driver or by those around them.
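To make this concrete, here is a minimal Python sketch of what such an average-based risk estimate might look like. Everything below (the feature categories, the numbers, the function name) is invented for illustration; it does not come from Goodall’s paper or from any real system.

```python
# Illustrative sketch only: a toy risk estimator in the spirit of the
# average-based calculation described above. All names and numbers
# here are hypothetical, not taken from the paper or any real system.

# Hypothetical aggregate statistics "learned" from a training set:
# P(fatality | collision), grouped by coarse features only.
AVERAGE_FATALITY_RATE = {
    ("small_car", "high_speed"): 0.12,
    ("small_car", "low_speed"): 0.02,
    ("truck", "high_speed"): 0.04,
    ("truck", "low_speed"): 0.01,
}

def estimated_risk(vehicle_type: str, speed_band: str) -> float:
    """Return the population-average fatality risk for this profile.

    Note what is *missing*: individual factors such as whether the
    occupant is sober or drunk, male or female, young or old (p. 813)
    are not inputs, so two very different drivers in the same kind of
    car receive exactly the same risk estimate.
    """
    return AVERAGE_FATALITY_RATE[(vehicle_type, speed_band)]

# Both of these drivers receive the identical 12% estimate, even though
# their true individual risks may differ widely.
print(estimated_risk("small_car", "high_speed"))  # sober, experienced driver
print(estimated_risk("small_car", "high_speed"))  # drunk, novice driver
```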
This problem is further accentuated by the fact that training datasets tend to be biased, reflecting a very narrow segment of the population. For more on this, see Caroline Criado Perez’s discussion of how a world built on samples consisting mostly of male, young, white and healthy persons can have dangerous, or even fatal, consequences for everyone else.
The second characteristic is that the algorithms are optimised to minimise risk to the occupants of the vehicle. This means that if the car faces an unavoidable collision with one of two other vehicles, the algorithm will steer away from the one that would cause its own occupants the most harm. In practice, this means steering away from vehicles with large masses, like trucks, and towards vehicles with small masses, like smaller, cheaper cars, because “a crash with the small car would be less severe and safer overall” (p. 817).
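A toy sketch of this decision rule, again with invented numbers and under a simplifying assumption of mine: that expected harm to the car’s own occupants scales with the other vehicle’s mass.

```python
# Minimal sketch of the target-selection logic described above, under the
# assumption (mine, not the paper's) that expected harm to the occupants
# scales with the other vehicle's mass. Values are invented.

def choose_collision_target(options):
    """Given unavoidable collision options, pick the one that minimises
    expected harm to this vehicle's own occupants."""
    return min(options, key=lambda v: v["mass_kg"])

options = [
    {"name": "truck", "mass_kg": 12000},
    {"name": "small_car", "mass_kg": 1100},
]

# The optimiser steers toward the small car because "a crash with the
# small car would be less severe and safer overall" (p. 817), safer,
# that is, for the self-driving car's own occupants.
print(choose_collision_target(options)["name"])  # small_car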
While this may be a logical and reasonable decision at the individual level, it creates significant social distortions when extrapolated to the collective level. The outcome of each such choice is fed back into the algorithm through machine learning, replicating and amplifying the initial effect. Over time, large vehicles would be involved in fewer and fewer crashes, while smaller ones would be involved in more and more. Through this choice, the algorithm transfers risk from one group of drivers to another, without the latter group’s consent.
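We can caricature this feedback loop in a few lines of Python. The starting counts and the share of collisions redirected per “generation” are assumptions of mine, chosen purely to show the direction of the effect, not its real magnitude.

```python
# A deliberately crude simulation of the feedback loop sketched above.
# Assumptions are mine: each "generation", the fleet steers a fixed share
# of unavoidable collisions away from large vehicles and toward small
# cars, and the resulting crash counts become the next training data.

crashes = {"large_vehicles": 100, "small_cars": 100}  # hypothetical start

STEER_AWAY_SHARE = 0.2  # fraction of collisions redirected per generation

for generation in range(5):
    # Collisions redirected away from large vehicles land on small cars.
    shifted = crashes["large_vehicles"] * STEER_AWAY_SHARE
    crashes["large_vehicles"] -= shifted
    crashes["small_cars"] += shifted
    print(generation, {k: round(v) for k, v in crashes.items()})

# The trend: large vehicles appear ever safer in the data, small cars
# ever riskier, so risk is steadily transferred from one group of
# drivers to the other.
```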
Moreover, smaller, cheaper cars tend to be owned by individuals with limited income. Hence, ultimately, drivers in low-income groups would bear a disproportionately high share of the risks created by the popularisation of self-driving cars.
Another reminder that technology in itself is neither good nor bad, but it is never neutral.