Mr. Frey, why is the concept of explainability such a big issue when it comes to AI methods?
Christian Frey: Although AI algorithms such as deep learning methods often deliver impressively good results, it is usually hard to understand how and why these algorithms produce certain results. XAI aims to fill this gap. XAI, short for explainable AI, refers to a group of methods that in a sense shed light on the black box and are designed to make the decisions of AI models easier to interpret. This is not only important for acceptance – in other words, for humans to trust and accept the decisions made by an AI. It is also an important tool during the development phase and the subsequent life cycle of AI components in the context of AI systems engineering, for example to track down the causes of errors.
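To make the idea of "shedding light on the black box" concrete, here is a minimal sketch of permutation feature importance, one common model-agnostic explainability technique: shuffle one input feature at a time and measure how much the model's accuracy drops. The toy model, data, and all function names below are illustrative assumptions, not something described in the interview.

```python
# Minimal sketch of permutation feature importance (hypothetical example).
# Idea: if shuffling a feature barely hurts accuracy, the model does not
# rely on it; a large drop signals an important feature.
import random

def accuracy(predict, rows):
    return sum(predict(x) == y for x, y in rows) / len(rows)

def permutation_importance(predict, rows, n_features, seed=0):
    rng = random.Random(seed)
    base = accuracy(predict, rows)
    drops = []
    for i in range(n_features):
        # Shuffle feature i across all rows, keeping everything else fixed.
        shuffled = [x[i] for x, _ in rows]
        rng.shuffle(shuffled)
        perm = [(x[:i] + [v] + x[i + 1:], y)
                for (x, y), v in zip(rows, shuffled)]
        drops.append(base - accuracy(predict, perm))
    return drops

# Toy "black box": predicts 1 if feature 0 is positive; feature 1 is noise.
rng = random.Random(1)
rows = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
rows = [(x, 1 if x[0] > 0 else 0) for x in rows]
black_box = lambda x: 1 if x[0] > 0 else 0

drops = permutation_importance(black_box, rows, 2)
for i, d in enumerate(drops):
    print(f"feature {i}: importance = {d:.2f}")
```

Running this shows a large accuracy drop for feature 0 and essentially none for feature 1, which is exactly the kind of interpretable signal such methods provide for otherwise opaque models.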