Advanced Occupant Monitoring System for activity recognition in cars - read our new white paper!

What is an Occupant Monitoring System?

An occupant monitoring system supports assistance systems in the vehicle interior. It provides information about the occupants present. Typically, current systems rely on simple sensors that track steering times, steering movements, seat occupancy and gaze direction. This provides only a rudimentary indication of whether the driver is showing signs of fatigue or is too distracted.

Our latest white paper with insights from automotive experts!

Advanced In-Cabin Monitoring Systems will significantly change the user experience of passenger cars and promise to become the next fundamental milestone for vehicle safety.

In this white paper, "Pioneering In-Cabin Monitoring — Unmasking the Power of 2D and 3D Cameras", Fraunhofer IOSB experts on computer vision and automotive in-cabin sensing provide helpful insights into:

  • Current and future legislation on driver monitoring systems
  • Current and future applications and functions of occupant monitoring
  • Roadmap towards the future of in-cabin monitoring
  • Sensor comparisons, especially benefits of 2D vs 3D sensing
  • Outlook on the impact of generative AI, LLMs and multimodal vision models

Contact us for more information, research collaboration and contracted research. 

What is the Fraunhofer IOSB's Advanced Occupant Monitoring System?

Fraunhofer IOSB's Advanced Occupant Monitoring System goes significantly further than previous state-of-the-art systems. It uses optical sensors in the vehicle interior, which are becoming increasingly common in modern vehicles, and detects the driver and all occupants equally. It recognizes the 3D body pose of all persons, analyzes their movement behavior and classifies the activity of each individual person detected. This makes it possible not only to detect critical situations such as a driver falling asleep, but also to distinguish between different activities and the associated levels of distraction. This benefits safety systems and comfort functions in the vehicle interior in equal measure.


3D body pose detection

The basis of the Advanced Occupant Monitoring System is the real-time detection of body pose in 3D. For all vehicle occupants captured by the cameras, the body pose is recognized as a 3D skeletal model using machine learning techniques. The resulting representation of each captured occupant includes the positions of the eyes, head, neck, shoulders, elbows, wrists, torso, pelvis, and upper and lower legs, provided they are visible in the camera image. The recording does not require biometric data and is therefore particularly privacy-friendly.
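
As an illustration, such a per-occupant skeleton could be represented roughly as follows. This is a minimal sketch in Python; the joint names and the helper method are illustrative assumptions, not the actual interface of the Advanced Occupant Monitoring System.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

# Illustrative joint set based on the body parts named above;
# the actual joint definitions of the system may differ.
JOINTS = (
    "left_eye", "right_eye", "head", "neck",
    "left_shoulder", "right_shoulder",
    "left_elbow", "right_elbow",
    "left_wrist", "right_wrist",
    "torso", "pelvis",
    "left_knee", "right_knee",
    "left_ankle", "right_ankle",
)

@dataclass
class OccupantSkeleton:
    """3D skeletal model of one occupant in vehicle coordinates (meters)."""
    occupant_id: int
    # Joint name -> (x, y, z), or None if the joint is not visible in any camera.
    joints: Dict[str, Optional[Tuple[float, float, float]]] = field(
        default_factory=lambda: {name: None for name in JOINTS}
    )

    def visible_joints(self) -> Dict[str, Tuple[float, float, float]]:
        """Return only the joints that were actually observed."""
        return {name: pos for name, pos in self.joints.items() if pos is not None}
```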

The system can use either individual 3D cameras or several 2D cameras whose perspectives are combined to reconstruct the 3D joint positions. The cameras can be mounted in any position as long as they provide a sufficient view of the respective persons.
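
For the multi-camera case, the sketch below shows how a single joint could be reconstructed from its 2D detections in several calibrated cameras using standard linear triangulation. This is a generic computer-vision technique given as an assumption of how such a reconstruction can work, not Fraunhofer IOSB's actual implementation.

```python
import numpy as np

def triangulate_joint(projection_matrices, image_points):
    """Reconstruct one 3D joint from its 2D detections in several calibrated views.

    projection_matrices : list of 3x4 camera projection matrices (intrinsics @ extrinsics)
    image_points        : list of (u, v) pixel coordinates of the same joint per view
    returns             : (x, y, z) in the common vehicle coordinate frame
    """
    rows = []
    for P, (u, v) in zip(projection_matrices, image_points):
        P = np.asarray(P, dtype=float)
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    # Homogeneous linear system; the solution is the right singular vector
    # belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]
    return tuple(X[:3] / X[3])
```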

Gesture recognition

The positions of the eyes, elbows and wrists are also derived from the estimated body pose skeleton. This makes it possible to interpret both the direction of the forearm and the direction of the eye-hand extension as pointing gestures. Both variants are available as 3D vectors, so pointing gestures can be mapped directly and with centimeter accuracy to known objects inside or outside the vehicle. Since pointing gesture detection is based on 3D body pose recognition, left arms can be distinguished from right arms, and pointing gestures can be recognized anywhere in the interior, not just at conventional seating positions or in prescribed interaction areas.
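
The mapping of a pointing gesture to known objects can be illustrated as follows: the pointing direction is taken as a ray (for example from the eye through the wrist), and the known object with the smallest perpendicular distance to that ray is selected. The object positions, the helper names and the 5 cm threshold in this sketch are illustrative assumptions.

```python
import numpy as np

def pointing_ray(origin, tip):
    """Ray through two skeleton points, e.g. eye -> wrist or elbow -> wrist."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(tip, dtype=float) - origin
    return origin, direction / np.linalg.norm(direction)

def resolve_pointing_target(origin, direction, objects, max_offset=0.05):
    """Return the known object closest to the pointing ray, or None.

    objects    : dict mapping object name -> (x, y, z) in vehicle coordinates (meters)
    max_offset : maximum perpendicular distance for a valid match (here 5 cm)
    """
    best_name, best_dist = None, float("inf")
    for name, position in objects.items():
        to_object = np.asarray(position, dtype=float) - origin
        along = float(np.dot(to_object, direction))
        if along <= 0.0:  # object lies behind the pointing hand
            continue
        offset = float(np.linalg.norm(to_object - along * direction))
        if offset < best_dist:
            best_name, best_dist = name, offset
    return best_name if best_dist <= max_offset else None
```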

Gesture recognition for interaction with (partially) automated vehicles

As vehicles become increasingly automated, the driver and other occupants can be expected to gain more freedom for secondary activities in the interior. Concept vehicles are already breaking with the classic seating arrangement and extending the driver's range of movement and action to the entire interior, for example by allowing seats to be turned and moved. Given such freedom of movement, the question arises as to how interaction with services in the interior should be realized. Pointing gestures and voice input are obvious options, but they must offer both the necessary robustness and the flexibility to leave occupants room to maneuver in the interior.

Fraunhofer IOSB's Advanced Occupant Monitoring System enables the capture of free-space gestures in 3D from all occupants.

Activity detection in the vehicle interior

In manual driving situations, the driver ideally focuses his full attention on the road: he drives and steers the car and does not engage in any secondary activity. As the level of vehicle automation increases, however, the driver is relieved of driving responsibility and given the freedom to pursue secondary activities. Partially automated vehicles must take this into account in situations in which driving responsibility is to be handed back to the driver, who may be distracted, asleep or even experiencing a medical emergency.

Fraunhofer IOSB's Advanced Occupant Monitoring System detects the activity of all occupants inside the vehicle. It is able to distinguish between up to 35 activities, including drinking, eating, sleeping, reading, making phone calls, and more. For this purpose, state-of-the-art machine learning methods process and fuse the 3D body skeleton recognition of the occupants with object detection and an intelligent analysis of the movement behavior of all detected persons. This makes it possible to reliably distinguish whether someone is reaching for a cell phone and making a call or opening a bottle and bringing it to their mouth. The Advanced Occupant Monitoring System thus provides important information on the driver's state of distraction. In addition, the system provides important information on the prevailing situation in the vehicle interior and the context of a person's actions. This makes it possible, for example, to distinguish unintentional from intentional pointing gestures or to offer innovative assistance functions tailored to the individual needs of the occupants.
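
A minimal sketch of the fusion idea described above: per frame, the 3D joint positions are combined with the detected hand-held object into one feature vector, a short window of such vectors is enriched with frame-to-frame motion, and the result is passed to a sequence classifier. The feature layout, the object classes and the classifier interface are illustrative assumptions, not the actual pipeline.

```python
import numpy as np

# Illustrative hand-held object classes an in-cabin object detector might report.
OBJECT_CLASSES = ("phone", "bottle", "book", "food", "none")

def frame_features(joints, detected_object):
    """Fuse one frame: flattened 3D joint positions plus a one-hot held-object code.

    joints          : dict mapping joint name -> (x, y, z) or None if not visible
    detected_object : one of OBJECT_CLASSES
    """
    positions = np.asarray(
        [pos if pos is not None else (0.0, 0.0, 0.0) for pos in joints.values()],
        dtype=float,
    ).ravel()
    one_hot = np.zeros(len(OBJECT_CLASSES))
    one_hot[OBJECT_CLASSES.index(detected_object)] = 1.0
    return np.concatenate([positions, one_hot])

def classify_activity(feature_window, model):
    """Classify a short window of fused per-frame features.

    feature_window : sequence of feature vectors from frame_features()
    model          : any trained sequence classifier exposing predict()
                     (e.g. a recurrent network; assumed, not specified here)
    """
    window = np.stack(feature_window)
    # Movement behavior: frame-to-frame differences appended as additional features.
    motion = np.diff(window, axis=0, prepend=window[:1])
    return model.predict(np.concatenate([window, motion], axis=1))
```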

Intention recognition of the driver and the vehicle occupants

Activity recognition forms the basis for predicting the intentions of the driver and the vehicle occupants. Because activity recognition tells us what the driver is doing, and with what or with whom, the driver's next actions can be predicted or at least narrowed down.
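
As a minimal illustration of such narrowing, the sketch below maps a recognized activity to a set of plausible next actions using a hand-written table. The activity and action names are hypothetical, and a deployed system might use a learned prediction model instead.

```python
# Hypothetical mapping from a recognized activity to plausible next actions.
NEXT_ACTIONS = {
    "phone_call": ("end_call", "put_phone_away"),
    "drinking":   ("put_bottle_away", "continue_drinking"),
    "reading":    ("turn_page", "put_book_away"),
    "sleeping":   ("wake_up",),
}

def predict_next_actions(current_activity: str) -> tuple:
    """Narrow down an occupant's likely next actions from the recognized activity."""
    return NEXT_ACTIONS.get(current_activity, ())
```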

Department HAI of Fraunhofer IOSB

Would you like to learn more about our competencies and range of services in the field of Human-AI Interaction? Then visit the HAI department page.

Interested in a no-obligation conversation?

Would you like to learn more about our activities in the automotive sector? Do you want to solve a problem and need a competent contact person?

We can help you.