What will happen when our vision of the future becomes reality, when cars drive autonomously on our roads and people trust this technology?
The “time travel” begins in the creative spaces of Audi’s Technical Development at its Ingolstadt headquarters. Here, philosophers join software engineers, psychologists and lawyers in trying to fully understand the societal implications of autonomous driving and the spread of artificial intelligence. The main hurdle? Social acceptance. A concrete example: people will likely feel far more emotional about an accident caused by a machine than about one caused by a person. The social impact of artificial intelligence and new automated technologies is the central topic of the beyond Initiative, the interdisciplinary network of international experts that Audi established two years ago.
Behind this change is the conviction that the technology can substantially improve not only our mobility, but also our work and our lives in general, as long as the various sectors involved – science, commerce, politics and society at large – share the same goals. In the automotive industry, artificial intelligence is a key technology: it enables the car to perceive and interpret its environment and to make decisions. Automated and autonomous driving will deliver more comfort and efficiency, but also greater safety, given that ninety per cent of all accidents are currently caused by human error. Nevertheless, there are ethical concerns.
The best-known example is a situation in which an accident is unavoidable and an autonomous car is faced with three options: it can steer left and hit person A, steer right and hit person B, or drive straight ahead and put the safety of its own occupants at risk. In a real-life situation like this, a person would act instinctively. From a machine governed by AI, however, we expect the right decision. These risk scenarios are decidedly complex; returning to the previous situation, when it is not completely clear what will happen if the car steers in a particular direction, is it ethically justifiable to opt for the unknown?
From a legal point of view, public focus is primarily on who is responsible in the event of an accident. Until now, the law has always applied to the individual. But in the case of a self-driving car, with whom does the responsibility lie? As machines become increasingly intelligent, we may end up with a so-called “e-person”: an inanimate object that has legal personality and independent liability. But it goes much deeper than that – we will also have to consider further implications regarding type approval, data protection, road traffic safety, criminal law and constitutional law. Autonomous driving touches a wide range of legal areas, and we must begin preparing for this future now. That is precisely the objective of the beyond Initiative: anticipating the challenges of tomorrow, today.
At Audi, the transition towards artificial intelligence is already evident in the company’s internal processes. In production, people are increasingly working alongside robots, while in the offices intelligent algorithms are helping to analyse massive amounts of data. A good example of the Smart Factory is modular assembly, an evolution of the traditional production line: bodyshells are brought from workstation to workstation on self-driving transport systems, all managed by a self-learning algorithm that ensures every process runs efficiently and flexibly for the employees.
Source: Encounter – AUDI AG