The importance of explainable AI
AI offers great potential to assist humans in many areas. Doctors can use AI to determine the best therapeutic option, inefficiencies in industrial production can be detected at an early stage, and AI-equipped vehicles can relieve drivers of parts of the driving task.
The key to successful use of AI lies not only in maximizing performance and achieving the best possible results, but also in building trust in the AI's decisions. It is therefore essential that these decisions are trustworthy, transparent and comprehensible to humans. At Fraunhofer IOSB, we therefore develop not only AI methods, but also methods that explain the AI's decisions.
Examples of applications include the analysis of patient data to support therapy recommendations, the analysis of ship navigation data to assist operators in monitoring maritime areas, and the analysis of cyber data to provide indications of cyber attacks.