Dr. Arens, AI methods, for instance those used to classify images, perform very well. In spite of this, they are only hesitantly making their way into practical applications. Why is this?
Michael Arens: Despite their renowned performance, deep learning methods also repeatedly attract attention when they make serious mistakes: AI systems can often be fooled by specifically crafted input data.
This is why robustness is a key issue for us: How can we make sure that an AI system will deliver the desired outcome in every situation and will not, for example, mistake a picture of a butterfly for a plane because of a certain cloud texture in the background? Failures like this stem from the paradigm shift brought about by machine learning: results depend not on hand-crafted algorithms but on training data. During training, millions of parameters are set on the basis of that data, producing a kind of black box that later determines, in complex and barely comprehensible ways, which outputs are caused by which inputs.
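The vulnerability described above can be illustrated with a minimal sketch. This is not the interviewee's method, just a toy demonstration of the general idea behind gradient-based adversarial inputs: for a simple linear classifier (standing in for a trained network with millions of parameters), a small, deliberately chosen change to every input component is enough to flip the predicted class, even though the input barely changes.

```python
import numpy as np

# Toy linear classifier: score = w . x, class = sign(score).
# Illustrative only: real image classifiers have millions of learned
# parameters, but the same gradient-based attack principle applies.
rng = np.random.default_rng(0)
d = 10_000                     # input dimension (think: pixels)
w = rng.normal(size=d)         # "learned" weights

def predict(x):
    return 1 if w @ x > 0 else -1

# An input the model classifies confidently as class +1.
x = w / np.linalg.norm(w)
print(predict(x))              # class +1

# Adversarial perturbation (FGSM-style): step each component against
# the gradient of the score. For a linear model, that gradient is w.
eps = 0.02                     # per-component perturbation budget
x_adv = x - eps * np.sign(w)

# Each component changed by at most eps, yet the class flips.
print(predict(x_adv))          # class -1
```

The effect relies on high dimensionality: a change of at most 0.02 per component is individually negligible, but summed over ten thousand components it overwhelms the classifier's margin, which is the same mechanism that lets an irrelevant background texture tip an image classifier's decision.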