Project goals
The project INITIATIVE, funded by the BMWi, aims to develop AI-based adaptive communication for integrating automated vehicles into mixed traffic scenarios. To this end, the automated vehicle must provide suitable communication interfaces both for external road users (external human-machine interfaces, HMI for short) and for vehicle occupants (internal HMI). These interfaces will be developed and validated in the project, taking viewing-angle- and time-of-day-dependent recognizability into account.
In addition, camera-based methods will be used to capture the intentions of communication participants in road traffic and to feed them into the AI-based communication between those participants, ensuring that messages are transmitted in a situation-adapted manner. To avoid misunderstandings between the participants, the messages of the external and internal interaction interfaces must be synchronized accordingly.
The function of the systems and sensor technology is also evaluated across projects. To identify the relevant participants of a communication, metadata from networked infrastructure (external sensor technology), transmitted via C2X communication, is used. In the overall context of a mixed traffic scenario, AI methods are suitable for synchronizing messages, detecting the intentions of communication participants, and preselecting relevant traffic participants.
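To illustrate the preselection step, the following minimal sketch filters C2X-style participant metadata down to communication-relevant road users near the ego vehicle. The field names, participant types, and the 30 m range threshold are assumptions for illustration, not a real C2X message schema:

```python
import math
from dataclasses import dataclass

@dataclass
class Participant:
    """Simplified stand-in for metadata received via C2X (assumed fields)."""
    pid: str
    kind: str   # e.g. "pedestrian", "cyclist", "vehicle"
    x: float    # position relative to the ego vehicle, metres
    y: float

def preselect(participants, max_range_m=30.0, kinds=("pedestrian", "cyclist")):
    """Keep vulnerable road users within an assumed communication range."""
    relevant = []
    for p in participants:
        dist = math.hypot(p.x, p.y)  # straight-line distance to ego vehicle
        if p.kind in kinds and dist <= max_range_m:
            relevant.append((p.pid, round(dist, 1)))
    # nearest participants first, as candidates for the eHMI dialog
    return sorted(relevant, key=lambda t: t[1])

nearby = preselect([
    Participant("ped-1", "pedestrian", 5.0, 2.0),
    Participant("car-7", "vehicle", 8.0, 0.0),
    Participant("cyc-3", "cyclist", 40.0, 10.0),
])
# ped-1 is kept; the vehicle is filtered by type, the cyclist by range
```

In a deployed system this filter would of course be learned or rule-augmented with trajectory prediction rather than a fixed radius; the sketch only shows where the C2X metadata enters the pipeline.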
Contributions and methodology
The scientific contribution of IOSB lies in the camera-based detection of vehicle occupants and vulnerable road users, such as pedestrians, in the immediate vicinity of the vehicle. Detection is performed with machine learning methods, e.g. deep learning.
The planned procedure consists of two steps. In the first step, suitable camera systems for the vehicle interior and exterior are identified. Each person involved in the scenario (drivers, pedestrians, etc.) is then detected in each camera and analyzed individually using real-time AI methods: basic features (e.g. body posture) are captured first, and the gestures relevant for communication in the scenario at hand are evaluated using machine learning techniques.
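The step from basic posture features to a gesture label can be sketched as follows. This is purely illustrative: the keypoint dictionary format, joint names, and the wrist-above-shoulder rule are assumptions standing in for a trained gesture model:

```python
# Illustrative sketch only: 2D keypoints are assumed as {joint: (x, y)}
# in image coordinates (y grows downward). A real pipeline would feed
# such pose features into a trained classifier instead of a fixed rule.

def gesture_from_pose(keypoints):
    """Map a basic posture feature (wrist vs. shoulder height) to a label."""
    wrist = keypoints.get("right_wrist")
    shoulder = keypoints.get("right_shoulder")
    if wrist is None or shoulder is None:
        return "unknown"
    # In image coordinates, a smaller y means higher up:
    # a wrist above the shoulder suggests a raised arm (hailing gesture).
    return "hailing" if wrist[1] < shoulder[1] else "neutral"

label = gesture_from_pose({
    "right_wrist": (120, 80),      # wrist higher in the image ...
    "right_shoulder": (110, 140),  # ... than the shoulder -> raised arm
})
```

The design point is that per-person features are computed independently per camera, so the later fusion step can combine them regardless of which sensor observed the person.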
In the second step, the features are fused in a scene model, on the basis of which further analyses of the overall situation take place. For example, it is necessary to know which persons would like to enter into a dialog with the automated vehicle and which goals these persons pursue. Machine learning methods will be used for this as well. The data processed in this way will then be used by project partners for decision-making by the automated vehicle.
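The fusion step described above can be sketched as a scene model that combines per-person cues into an interaction-intent decision. The cue names, weights, and threshold below are hypothetical placeholders; in the project this scoring would be a learned model rather than a hand-set weighted sum:

```python
# Minimal sketch of the scene-model fusion: per-person cues from the
# camera analyses are combined and scored for the question
# "does this person want to enter into a dialog with the vehicle?"
# All cue names and weights are illustrative assumptions.

def interaction_score(cues):
    """Weighted combination of boolean cues into a single intent score."""
    weights = {
        "hailing_gesture": 0.5,     # from the gesture analysis step
        "looking_at_vehicle": 0.3,  # e.g. from head-pose estimation
        "near_crossing": 0.2,       # e.g. from the C2X / map context
    }
    return sum(w for cue, w in weights.items() if cues.get(cue))

# Scene model: one cue dictionary per detected person
scene = {
    "ped-1": {"hailing_gesture": True, "looking_at_vehicle": True},
    "ped-2": {"near_crossing": True},
}
# Threshold the fused score to flag persons seeking a dialog
intents = {pid: interaction_score(c) >= 0.5 for pid, c in scene.items()}
```

The resulting per-person decisions are the kind of processed data that, per the text above, would be handed to project partners for the vehicle's decision-making.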