Fully self-driving vehicles will have to handle even the most complex situations without being caught off guard by the unexpected, a capability known as robustness.
Driving a car is not always easy: there are conditions in which it can be particularly challenging, such as in fog. In addition to reducing speed and turning on fog lights, you have to be alert for road markings and deal with limited visibility. This is not an optimal or even pleasant situation, but you can still manage it, within certain limits.
In other words, there are situations where driving is more complicated but still possible, and autonomous driving software must of course be able to handle them just as well. This functional stability that a system must maintain even under perturbation, that is, when external factors make its task more difficult, is described as robustness.
Handling the unexpected
When it comes to autonomous driving, robustness is about unforeseen events such as a dirty camera lens or a damaged sensor; in these cases, a fully self-driving car will still have to be able to drive correctly and safely. Even details such as the colors of other cars on the road or pedestrians' clothing should not cause problems: the system has to identify them correctly.
These elements do not create difficulties for human drivers, but they can become a challenge for the vehicle's software, which makes decisions based on artificial intelligence. This is precisely why AI-based systems need to be trained for robustness, so that external 'perturbations' do not lead them to make incorrect and potentially dangerous decisions.
At present, even the most advanced AI systems still need to be optimized in this respect: minor changes to the data an AI module processes, for example a difference of two pixels in an image, are enough to confuse the system and lead it to the wrong conclusion. Building reliable and secure machine learning algorithms is therefore an important area of scientific research, not only for self-driving vehicles but across all industries.
But how does an algorithm come to the wrong conclusion? The answer lies in "classification", an aspect of supervised learning for artificial intelligence. Consider a set of similar images, for example of dogs and cats: both are four-legged mammals with one head and one tail, yet they differ, and telling the two apart is exactly what the classifier must do. If the image of a cat is changed only slightly to make it look like a dog, the AI will change its mind and classify it as a dog.
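To make the "small change flips the label" effect concrete, here is a minimal sketch with a toy linear classifier. The weights and images below are invented purely for illustration and bear no relation to a real perception model: an input sitting close to the decision boundary flips from "cat" to "dog" after only two pixels are nudged.

```python
import numpy as np

def classify(image, weights):
    """Toy linear classifier: positive score -> 'dog', otherwise 'cat'."""
    return "dog" if float(np.sum(image * weights)) > 0 else "cat"

# Illustrative 4x4 weight pattern (8 positive, 8 negative entries).
weights = np.array([[ 1, -1,  1, -1],
                    [-1,  1, -1,  1],
                    [ 1, -1,  1, -1],
                    [-1,  1, -1,  1]], dtype=float)

# A uniform 'cat' image whose score lands exactly on the decision boundary.
cat_image = np.full((4, 4), 0.5)
print(classify(cat_image, weights))     # cat

# Nudging just two pixels (both under positive weights) flips the decision.
perturbed = cat_image.copy()
perturbed[0, 0] += 0.1
perturbed[1, 1] += 0.1
print(classify(perturbed, weights))     # dog
```

Real networks are far more complex, but the underlying vulnerability is the same: near a decision boundary, a tiny, targeted change in the input can change the output class entirely.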
Training the AI
To train an AI system for robustness and test its ability to classify correctly, you can manipulate an image: rotate it or change its colors, training the AI to keep making the right decision. But there are many more factors to consider when ensuring robustness for artificial intelligence in self-driving vehicles, and time is one of them. If an AI system fails to recognize a pedestrian within 5 milliseconds, that may not be a problem; if it fails to recognize one within 10 seconds, it certainly is.
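The image manipulation described above, commonly called data augmentation, can be sketched in a few lines. This toy example uses pure NumPy, and the rotation choices and brightness range are invented for illustration; a real training pipeline would use a dedicated augmentation library.

```python
import numpy as np

def augment(image, rng):
    """Return a randomly rotated, flipped, brightness-shifted copy of an HxWx3 image."""
    k = int(rng.integers(0, 4))           # rotate by 0, 90, 180, or 270 degrees
    out = np.rot90(image, k=k, axes=(0, 1))
    if rng.random() < 0.5:
        out = out[:, ::-1, :]             # horizontal flip
    shift = rng.uniform(-0.2, 0.2)        # brightness jitter
    return np.clip(out + shift, 0.0, 1.0)

# Generate several training variants of one (synthetic) image.
rng = np.random.default_rng(42)
image = rng.random((32, 32, 3))
batch = [augment(image, rng) for _ in range(8)]
```

Each variant shows the classifier the same content under a different perturbation, nudging it toward decisions that do not depend on orientation or lighting.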
Time, however, is not the only factor: the driving scenario itself also affects whether artificial intelligence labels objects correctly. A system might be better at identifying a pedestrian crossing the road than a cyclist emerging from behind a car.
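One simple way to make such scenario-dependent performance visible is to track detection success per scenario rather than in aggregate. The outcomes below are invented purely for illustration:

```python
# Hypothetical detection outcomes (True = object correctly identified),
# grouped by driving scenario; the numbers are invented for illustration.
results = {
    "pedestrian_crossing_road": [True, True, True, False, True],
    "cyclist_behind_car":       [True, False, False, True, False],
}

# Per-scenario recall exposes weaknesses that an overall average would hide.
recall = {scenario: sum(hits) / len(hits) for scenario, hits in results.items()}
print(recall)  # {'pedestrian_crossing_road': 0.8, 'cyclist_behind_car': 0.4}
```

Breaking performance down this way tells developers which situations need more training data or attention, instead of a single score that averages the weaknesses away.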
Determining the robustness of a system means considering it in the context of the overall safety of a function, and safety is not an abstract concept but always a system property. Neural networks and perception functions are, in isolation, neither safe nor unsafe: any errors they may contain become a risk to the vehicle's environment or occupants only through the behavior of the overall system.
Thus, a key part of the safety process is the analysis and evaluation of performance limitations, i.e., possible sources of errors and their effects. In this respect, the robustness of neural networks is crucial, and the requirements that determine it are based both on the context - the environment and driving situation - and on the safety goals, in order to minimize the residual risk of the overall function.
The ultimate aim is to maximize the number of conditions an autonomous driving software can handle with a high degree of robustness.
With all this in mind, it is easy to see why training an AI module for automated vehicles is an extremely long and complex process. However, it is a necessary prerequisite to help self-driving vehicles become robust; only in this way can they become reliable chauffeurs under all conditions.