Do we or should we trust driverless cars? Part 1
No discussion of the Trolley Problem here
Summary: Autonomous vehicles (AVs) have huge potential to improve our roads and reduce accidents, but the extent to which safety improvements are realized completely depends on engineering and policy decisions. Certain risks need to be considered before building and massively deploying AVs on the streets, while the Trolley Problem is often a distraction. Understanding the needs of future urban dwellers, as well as mitigating the risks and challenges of self-driving cars, will play a crucial role in public acceptance, which is the main driver for the adoption of AVs.
Publication date: 13.04.2022
Subscribe to our newsletter here so you don't miss part 2.
Car accidents in the US alone amount to 5.5 million crashes and 30,000 deaths per year. More than 40% of these cases result from driver distraction, alcohol, drugs or fatigue. In other words, almost half of all car accidents happen because of human inattentiveness. With this in mind, autonomous vehicles have the potential to reduce the number of deaths and save billions of dollars in damage.
Self-driving cars therefore have a significant advantage over conventional cars for obvious reasons: they are not subject to human emotions, drunkenness or other distractions (for example, talking on the phone while driving) that lead to fatal accidents. AVs also have a chance to facilitate better urban development and environmental protection. Driverless cars are thus not only innovative but also have the potential to be safer, more reliable and more efficient than conventional cars.
However, there has not been enough testing of AVs in real-world settings. Currently, big companies mostly test their vehicles for extreme situations in mock cities and simulations. Real-life testing results and the accidents occurring across US states show that there is still a long way to go before the safety of self-driving vehicles can be approved and they can be massively deployed.
Understanding the needs of future users, as well as mitigating the risks and challenges of self-driving cars, will play a crucial role in public acceptance, which is the main driver for the adoption of AVs. While the adoption process is still in its early stages, the extent to which safety improvements are realized completely depends on engineering and policy decisions.
"the extent to which safety improvements are realized completely depends on engineering and policy decisions."
Risks and challenges
When it comes to assessing the risks of driverless cars, the first thing raised in discussions is the Trolley Problem. We are no exception here, but only to make the case for its irrelevance, or at least its limited relevance. We will explain all the “why-s” in part 2 of this article. For now, it is useful to keep in mind that the Trolley Problem is a thought experiment, an imaginary situation that seeks to model moral judgments. This does not necessarily mean it is wise to neglect the Trolley Problem completely; rather, we should take this model (like any other model) cautiously, remembering that models have limitations and might apply only to a limited range of real-life situations.
"while it is possible to model ethical decisions with certain existing ethics theories such as utilitarianism or virtue ethics, we should not make a mistake here: possibility of modeling does not always translate into possibility of automation."
On the other hand, while it is possible to model ethical decisions with certain existing ethics theories such as utilitarianism or virtue ethics, we should not make a mistake here: possibility of modeling does not always translate into possibility of automation. First, cultures vary, which means people in different cultures make different moral decisions. Second, cultures evolve: some norms change, some are abolished and new ones are introduced. This is a continuous process that is highly unstable and unpredictable. Moreover, in 2017, the German ethics commission for automated and connected driving released 20 ethical guidelines for autonomous vehicles, which prohibit any distinctions based on gender, age or physical appearance. How, then, do we deal with situations that can be modeled but not automated? Human oversight is one answer: AV drivers or owners should be able to choose among the models they want their car to be guided by.
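To make the idea of human oversight concrete, here is a minimal sketch of what "choosing between models" could look like in code. Everything in it is hypothetical: the `Maneuver` fields, the two policies and their names are illustrative assumptions, not part of any real AV stack or of the German ethics guidelines.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch: names and policies are illustrative only.

@dataclass
class Maneuver:
    name: str
    expected_harm: float   # estimated aggregate harm (lower is better)
    rule_violations: int   # count of hard traffic rules broken

def utilitarian_policy(options: List[Maneuver]) -> Maneuver:
    # Utilitarian-style model: minimize expected aggregate harm.
    return min(options, key=lambda m: m.expected_harm)

def rule_based_policy(options: List[Maneuver]) -> Maneuver:
    # Deontological-style model: never break a hard rule if avoidable;
    # expected harm only breaks ties.
    return min(options, key=lambda m: (m.rule_violations, m.expected_harm))

def choose(policy: Callable[[List[Maneuver]], Maneuver],
           options: List[Maneuver]) -> Maneuver:
    # Human oversight: the owner decides which policy guides the car.
    return policy(options)

options = [
    Maneuver("swerve", expected_harm=0.2, rule_violations=1),
    Maneuver("brake", expected_harm=0.5, rule_violations=0),
]
print(choose(utilitarian_policy, options).name)  # swerve
print(choose(rule_based_policy, options).name)   # brake
```

The point of the sketch is that the same set of options yields different maneuvers under different ethical models, which is exactly why the choice of model is a governance decision rather than a purely technical one.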
But there is another very important puzzle in handing driving over from human to machine. Besides ethics, human drivers’ decisions are often guided by intuition. With the current level of understanding of the human mind and of the technology behind AVs, we can confidently state that intuition cannot be modeled, hence cannot be automated or translated into code. Does this leave us at the mercy of black-box algorithmic decision-making as algorithms learn and adapt? Or can we build systems guided by large-scale computation that matches or exceeds the value of human intuition?
As a developer and manufacturer, then, you have to start by asking the right questions, and the first right question is: what is the purpose of the autonomous car, to make better decisions than humans, or to make decisions just like humans and execute them better? And afterwards: how will you make sure that the algorithm does not deviate from its pre-determined purpose? These are the questions we ask under the "fit for purpose" principle in our AI governance approach, AIGovBox. Only then can you move on to defining further questions and risk scenarios.
So, what are the real-life risks that the manufacturers and the users of driverless cars might face? We grouped the major risks and challenges into seven categories:
1. Safety and optimization problems;
2. Cybersecurity and privacy;
3. Trade-offs between safety and other values;
4. Evolving infrastructure;
5. Legal frameworks and questions of accountability;
6. Differences in AVs development;
7. Uniformity in AVs development.
Physical safety. A number of questions emerge around AV safety and security. The potential to reduce accidents can only be realized if systems are robust and potentially hazardous situations are assessed; otherwise, deploying AVs may have the completely opposite effect. Besides technical robustness, bias in image recognition systems is a life-threatening risk too. A number of studies, including one at the Georgia Institute of Technology, found that object detection systems have higher error rates for darker-skinned pedestrians than for lighter-skinned pedestrians. According to that study, during darker times of day or with obstructed views, the technology is five percentage points less accurate at detecting people with darker skin tones.
Cybersecurity and privacy. Another set of risks connected to the technological side of AVs concerns cybersecurity and privacy. An AV might be hacked, and a failure in the system might result in lethal harm to the driver, passengers and bystanders. Hence, it is important to think about how data is gathered, governed and protected. Moreover, if we are going to have a new infrastructure, we are also going to have a network through which all AVs are connected to each other (a Smart City) and collect data from various sources, including passengers. It is therefore legitimate to ask: who is going to own this data and ensure that it is properly used?
Trade-offs. There is a trade-off between the safety of AVs and other values such as mobility, environmental protection and affordability. By mobility we mean that traffic flow will be affected by certain AV behaviors. For example, if a vehicle decelerates significantly while approaching a crosswalk with limited visibility, this will increase safety at the expense of diminished traffic flow. As for the environmental impact, emissions and material wear will differ depending on how the vehicle accelerates and brakes. Even incremental changes will have impact at scale, as a great number of vehicles will be governed by the same algorithm.
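A back-of-the-envelope calculation shows how braking choices and traffic flow interact. The sketch below uses textbook kinematics (reaction distance plus braking distance, d = v·t + v²/2a); the reaction time and deceleration values are illustrative assumptions, not measured AV parameters.

```python
# Illustrative sketch of the safety/flow trade-off; all numbers are assumptions.

def stopping_distance(speed_ms: float, decel_ms2: float,
                      reaction_s: float = 0.2) -> float:
    """Distance covered before a full stop: reaction distance + braking distance."""
    return speed_ms * reaction_s + speed_ms**2 / (2 * decel_ms2)

v = 50 / 3.6  # 50 km/h expressed in m/s

# A cautious profile brakes gently (comfort, lower wear and emissions) but
# needs far more road, so the car must begin slowing much earlier,
# which reduces traffic flow around the crosswalk.
print(round(stopping_distance(v, decel_ms2=2.0), 1))  # gentle braking: 51.0 m
print(round(stopping_distance(v, decel_ms2=6.0), 1))  # hard braking: 18.9 m
```

Roughly a threefold difference in required road, multiplied across every vehicle running the same algorithm, is how an individually small tuning choice becomes a city-scale effect on throughput.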
Evolving infrastructure. There are different scenarios in which mobility infrastructure could change. This is currently discussed far less than the safety of AVs itself. However, as attention shifts towards infrastructure, it will be highly important to strike the right balance of values in urban development. It is clear that driverless cars will cause our environment to change: roads, railways and road signs will have to be adapted to the working mechanisms of AVs. Today, AVs can operate only on special roads. Altering infrastructure, however, is a long-term goal: it needs careful planning and quite a bit of time. This means the risk scenarios and challenges will also change with the infrastructure. But there will also be a transitional phase between the present and the future of urban planning. In this phase, AVs are going to operate on ordinary roads, which will have to gradually adapt to the immediate demands of AV mobility. New, unfamiliar risks will emerge and will need additional attention.
Responsibility and accountability questions. Another major issue AVs face once deployed is that of responsibility: who is responsible in case of an accident, the driver, the owner or the manufacturer? Who is accountable to whom? What are the roles and functions of the different people involved in AV production and operation? Can we hold the machine accountable or responsible? Is the machine a legal entity? These questions have to be addressed immediately and considered carefully.
Differences in AVs development. Manufacturers will rely on different approaches to AV development, top-down or bottom-up (i.e. letting human drivers “teach” the cars), and the degree of safety of these approaches will vary. This means we will face two challenges. First, facilitating technological coordination in a competitive environment in order to enhance safety; this concerns interoperability standards, too (e.g. a centralized system for vehicle-to-infrastructure communication and/or vehicle-to-vehicle communication protocols). Second, avoiding local safety optima and opting for a reasonable and practical global optimum instead; this is essential, as manufacturers will have their own individual ways of improving safety, and a way of combining these solutions should be identified.
Uniformity in AVs development. When it comes to AVs, hazardous situations may present a large-scale problem: when an AV approaches a crosswalk, its behavior has an impact not only on itself but on all other AVs run by the same algorithms, which learn from past experience. This means risk at scale. Will AV owners be granted rights by car manufacturers to personalize their vehicles as they wish?
It is a very complex task to engineer how AVs should behave in daily situations, and the policy choice is an ethical issue, since human lives and health are at stake. Due to the complexities arising out of specificity and scale, AVs pose some uncommon and unfamiliar problems too. Decisions made during AV development cycles will affect whether drivers, passengers and urban dwellers in general will trust cars driven by algorithms.