Will people learn to trust self-driving vehicles? To answer this question, and to discuss the technological challenges surrounding "driverless cars," Audi brought several experts together in the SocAIty study.
Distrust of the unknown? It is not an insurmountable obstacle. It is one of the issues surrounding autonomous driving, an emotional factor that surfaces whenever a new technology enters human experience. It happened in the past with elevators and airplanes, both utterly commonplace conveniences today. The Audi SocAIty study, which brought together several experts, also examined this topic. Many people wonder, for example, whether a self-driving vehicle can make the right choice in an emergency. For those developing "driverless cars," however, this question is nothing new.
Autonomous driving and the trolley problem
The "trolley problem" is a thought experiment that perfectly illustrates these concerns. Imagine a runaway trolley that can be diverted onto a side track, where it would hit one person, thereby sparing the five people standing on the original track. What is the right solution: choose the lesser harm, or not act at all?
This is actually an old dilemma, but it has returned to the spotlight with the development of autonomous driving. Here, though, the question shifts, because the car would not decide on its own at all; it would behave exactly as those who wrote its control software decided. Self-driving vehicles, in short, reflect the choices their creators encoded in them.
Priority to human life
The German Federal Ethics Commission began grappling with these questions back in 2017, immediately setting guidelines while also calling for further technological and social development. It then drafted a report defining 20 "Ethical Rules for Automated and Connected Vehicular Traffic." One of these states that self-driving vehicles are justifiable only if they promise a reduction in harm compared with human-driven vehicles. In other words, reducing accidents and protecting human life are the two top priorities.
In this regard, the software must make no distinctions based on characteristics such as age, gender, or physical or mental constitution. Another rule states that one life cannot be offset against another. According to many experts, rules like these will make dilemmas such as the trolley problem easier to resolve.
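The non-discrimination rule can be made concrete with a minimal sketch: if the planner's inputs contain no personal attributes, the software cannot discriminate even by accident. The names, the data model, and the harm measure below are illustrative assumptions, not Audi's actual software, and the sketch shows only the attribute-blind input design, not the full ethical framework.

```python
from dataclasses import dataclass

# Hypothetical illustration (not any manufacturer's real planner):
# each candidate maneuver carries only a *count* of people at risk.
# There is no field for age, gender, or physical condition, so no
# rule could discriminate on those characteristics even if it tried.

@dataclass(frozen=True)
class Maneuver:
    name: str
    people_at_risk: int  # a count only; no personal attributes exist here

def least_harm(maneuvers: list[Maneuver]) -> Maneuver:
    """Pick the candidate maneuver that endangers the fewest people."""
    return min(maneuvers, key=lambda m: m.people_at_risk)

options = [Maneuver("stay_in_lane", 5), Maneuver("swerve", 1)]
print(least_harm(options).name)  # prints "swerve"
```

The design choice worth noting is structural: fairness is enforced by what the data type *cannot* represent, rather than by a check added after the fact.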
The European AI4People initiative
The European Parliament addressed the same issue in 2018 with the AI4People initiative, which promotes ethical standards in artificial intelligence (AI). Its purpose was to support the development of the principles, guidelines, and practices needed to establish a "good society" built on artificial intelligence, and to make concrete suggestions for companies and the economy at large. Again, the conclusion was the same: protecting human life is the greatest good, and autonomous vehicles are ethically justifiable only if they lead to fewer injuries and fatalities than human driving.
Considering that in Germany more than 85% of all accidents resulting in personal injury in 2020 were caused by human error, and that worldwide one person dies in a road accident every 24 seconds (World Health Organization statistics), it is easy to see why the experts involved in the Audi SocAIty study believe that autonomous driving can improve road safety.
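The scale of the 24-seconds figure is easy to cross-check with one line of arithmetic, and the result is consistent with the roughly 1.3 million annual road deaths the WHO regularly reports:

```python
# One road death every 24 seconds, extrapolated over a year:
seconds_per_year = 365 * 24 * 60 * 60      # 31,536,000 seconds
deaths_per_year = seconds_per_year / 24    # one death per 24 s
print(f"{deaths_per_year:,.0f}")           # prints "1,314,000"
```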
Why is automated driving safer?
Currently, under certain circumstances, self-driving vehicles already move more safely than people. In a familiar environment with clearly defined parameters, the technology is highly reliable, because in automated vehicles the computer is always active and the system never stops working.
"I am convinced that highly automated driving will make our streets safer, also due to advanced sensor technology. Audi relies on a variety of different systems such as radar, cameras and lidar, allowing the vehicle to assess every situation with significantly greater accuracy and enabling it to brake and avoid obstacles, reacting to unexpected events. In addition, there is V2X (vehicle-to-everything) technology that allows a car to connect with other vehicles, infrastructure and its surroundings, including other road users such as cyclists and pedestrians", explains Oliver Hoffmann, Member of the Audi Board of Management for Technical Development.
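The redundancy Hoffmann describes, several independent sensing systems observing the same scene, can be sketched as a simple vote across channels. The sensor names and the two-of-three quorum below are illustrative assumptions, not Audi's actual fusion logic:

```python
# Illustrative redundancy sketch: confirm an obstacle only when a
# majority of independent sensor channels agree, so a single faulty,
# dirty, or occluded sensor neither triggers nor suppresses braking.

def obstacle_confirmed(radar: bool, camera: bool, lidar: bool,
                       quorum: int = 2) -> bool:
    """Majority vote across the three sensing channels."""
    return sum([radar, camera, lidar]) >= quorum

# Camera blinded by glare, but radar and lidar agree: brake.
print(obstacle_confirmed(radar=True, camera=False, lidar=True))   # prints "True"
# Only the camera fires (e.g. a shadow misread): do not brake yet.
print(obstacle_confirmed(radar=False, camera=True, lidar=False))  # prints "False"
```

Real fusion systems weight continuous confidence scores rather than booleans, but the underlying idea is the same: no single sensor failure should decide the outcome alone.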
Vision Zero: the human factor
According to experts, "Vision Zero", i.e. the total absence of accidents, will never be 100 percent achievable, because people will always remain the greatest factor of uncertainty. In fact, one of the challenges of the coming years will be mixed traffic, in which autonomous and conventional vehicles circulate together. While safety will continue to improve, new types of accidents might occur, because self-driving vehicles will have to contend with other vehicles that do not play by the rules.
The software will also have to react to very common risk factors, such as vehicles that exceed speed limits, and it will have to do so for the safety of all road users. This is one of the biggest technical challenges.
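One standard quantity such software can monitor for this kind of risk is time-to-collision (TTC): how many seconds remain before contact if neither vehicle changes speed. The function below is a generic textbook formulation, and the 4-second alert threshold is a purely hypothetical value for illustration:

```python
# Generic time-to-collision (TTC) check, as a planner might use to flag
# another vehicle approaching dangerously fast (e.g. one ignoring the
# speed limit). Threshold and scenario values are assumptions.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact at constant speeds.
    Returns infinity when the other vehicle is not closing the gap."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

# A car 50 m behind, closing at 20 m/s (72 km/h faster than us):
ttc = time_to_collision(gap_m=50.0, closing_speed_mps=20.0)
print(ttc)        # prints "2.5"
print(ttc < 4.0)  # below a hypothetical 4 s alert threshold: prints "True"
```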
Convincing skeptics of autonomous driving
According to the Audi SocAIty study, convincing skeptics requires demonstrating the advantages and personal benefits of self-driving cars, such as time saved commuting or increased comfort on board. Add to that their potential for inclusive mobility, for example for persons with disabilities.
One concrete way for users to begin familiarizing themselves with this technology is through "autonomous driving experiences": situations in which they can ride a driverless shuttle in person or hand their car over to an automated parking system.
Source: AUDI AG