Marcus Cooksey
All drivers have experienced some manner of driver monitoring: a spouse asking if you need a break on a long drive, a friend asking if you're okay to drive, or a passenger alerting you to a danger ahead. As helpful as companions may be for pointing out obstacles or potential threats, driver monitoring by another person in the vehicle is impractical. Yet it is precisely in those moments of distraction that we are most inclined to make the critical mistakes that can result in an accident.
According to the European New Car Assessment Programme (Euro NCAP) 2025 roadmap report, an estimated 90 percent of annual road accidents are caused by human error. The report goes on to say that, "In general, two kinds of mistakes can be observed: violations, of which speeding and driving under the influence of alcohol or drugs are most common; and human 'errors,' in which the driver state – inattentiveness, fatigue, distraction – and inexperience play an important role. In an aging society, sudden medical incapacitation is also a growing cause of road crashes."
Drivers, not equipment, are the most common failure point in vehicle-related accidents, a finding that has spurred further research and Euro NCAP safety initiatives aimed at eliminating common driver errors.
Although semi-autonomous and autonomous operation functions may help reduce driver errors, recent news has shown that unpredictable road conditions and obstructions (e.g., pedestrians, bicycles, debris) still contribute to vehicle collisions. Until autonomous driving modes of operation can overcome these unknowns, the driver is required to remain in the vehicle control loop.
A vision-based driver monitoring system (DMS) provides a significant level of feedback to advanced driver assistance systems (ADAS), electronic control units (ECUs), and autonomous driving systems to compensate for and help correct common errors introduced by drivers. For example, if the driver is distracted, the vehicle can alert the driver or maneuver to avoid a collision.
DMS solutions consist of one or more cameras equipped with infrared illuminators directed toward the driver, enabling quality images to be captured and processed in suboptimal lighting conditions. A TDA3 automotive processor can stream camera images to a computing unit at 20 to 60 frames per second and run vision algorithms to detect the presence of the driver and key facial markers such as eye and mouth aperture, eye gaze and head position (see Figure 1). Drowsiness, for example, can be detected from the rate at which points surrounding the eyes move up and down, or from the orientation of the head: up, to the side, or tilted.
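To illustrate how eye aperture might be derived from facial landmarks, the minimal C sketch below computes an eye-aspect-ratio-style openness metric from six eye landmark points and flags a potential drowsiness event when the eyes remain nearly closed over consecutive frames. The landmark layout, threshold, and frame-count values are assumptions for illustration only, not the actual API of any TI vision library.

```c
#include <math.h>
#include <stdbool.h>

/* Hypothetical 2-D facial landmark; real DMS pipelines define their own types. */
typedef struct { float x, y; } Point2f;

static float dist(Point2f a, Point2f b)
{
    return sqrtf((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

/*
 * Eye-openness metric in the spirit of the eye aspect ratio (EAR):
 * ratio of the two vertical landmark distances to the horizontal one.
 * eye[0]/eye[3] are the eye corners; eye[1],eye[2] (top) pair with
 * eye[5],eye[4] (bottom). Assumed layout, for illustration only.
 */
static float eye_openness(const Point2f eye[6])
{
    float vertical   = dist(eye[1], eye[5]) + dist(eye[2], eye[4]);
    float horizontal = dist(eye[0], eye[3]);
    return (horizontal > 0.0f) ? vertical / (2.0f * horizontal) : 0.0f;
}

#define OPENNESS_THRESHOLD  0.21f  /* tuning value, assumed            */
#define CLOSED_FRAMES_LIMIT 45     /* roughly 1.5 s at 30 fps, assumed */

/* Returns true when both eyes have stayed nearly closed for too long. */
bool update_drowsiness(const Point2f left[6], const Point2f right[6])
{
    static int closed_frames = 0;
    float openness = 0.5f * (eye_openness(left) + eye_openness(right));

    closed_frames = (openness < OPENNESS_THRESHOLD) ? closed_frames + 1 : 0;
    return closed_frames >= CLOSED_FRAMES_LIMIT;
}
```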
As the vision algorithm continuously collects and analyzes key facial markers, it outputs perceptive indicators of the driver's state such as level of vigilance, fatigue, distraction and visual focus area. These indicators, when analyzed by ADAS ECUs, help inform automatic maneuvering and braking decisions to avoid or mitigate collisions. Additionally, autonomous driving functions can be activated or deactivated depending on the driver's attentiveness.
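One plausible way to represent those perceptive indicators and map them to downstream ADAS actions is sketched below. The score fields, thresholds, and action enumeration are illustrative assumptions, not a defined TI interface.

```c
#include <stdbool.h>

/* Illustrative driver-state record published by the vision algorithm each frame. */
typedef enum { GAZE_ROAD, GAZE_MIRROR, GAZE_CLUSTER, GAZE_OFF_ROAD } GazeZone;

typedef struct {
    float    vigilance;      /* 0.0 (unresponsive) .. 1.0 (fully alert); assumed scale */
    float    fatigue;        /* 0.0 .. 1.0 */
    float    distraction;    /* 0.0 .. 1.0 */
    GazeZone gaze;
    bool     driver_present;
} DriverState;

typedef enum {
    ACTION_NONE,
    ACTION_AUDIBLE_ALERT,    /* warn the driver                           */
    ACTION_LIMIT_AUTONOMY,   /* disable hands-off convenience features    */
    ACTION_ESCALATE          /* hand off to emergency maneuvering logic   */
} AdasAction;

/* Map the driver state to an ADAS action; thresholds are placeholders. */
AdasAction evaluate_driver_state(const DriverState *s)
{
    if (!s->driver_present || s->vigilance < 0.2f)
        return ACTION_ESCALATE;
    if (s->fatigue > 0.7f || (s->distraction > 0.6f && s->gaze == GAZE_OFF_ROAD))
        return ACTION_LIMIT_AUTONOMY;
    if (s->distraction > 0.4f)
        return ACTION_AUDIBLE_ALERT;
    return ACTION_NONE;
}
```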
Feedback from the DMS also safeguards the conveniences that autonomous driving offers (such as hands-off-wheel operation) by determining whether autonomous modes are still operating within acceptable boundary conditions, such as driver alertness, and if not, taking the necessary steps to alert the driver that control is being transferred back to them. These systems can also do the opposite in an emergency: if the driver falls asleep or becomes incapacitated, control can transfer to the autonomous system, which may be able to maneuver the vehicle to a safe resting location.
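This handover behavior can be thought of as a small control-authority state machine. The sketch below is one assumed formulation of that logic; the states, inputs, and transition conditions are hypothetical and chosen only to mirror the scenarios described above.

```c
#include <stdbool.h>

/* Illustrative control-authority states driven by DMS feedback. */
typedef enum {
    CTRL_DRIVER,            /* driver has full control                      */
    CTRL_AUTONOMOUS,        /* autonomous mode engaged, driver supervising  */
    CTRL_TAKEOVER_REQUEST,  /* alerting driver to resume control            */
    CTRL_SAFE_STOP          /* emergency: maneuver to a safe resting spot   */
} ControlMode;

typedef struct {
    bool driver_alert;         /* derived from DMS vigilance/fatigue indicators     */
    bool driver_incapacitated; /* e.g., eyes closed or no driver detected for long  */
    bool takeover_confirmed;   /* driver acknowledged the request (hands on wheel)  */
} DmsSummary;

/* One step of the handover logic; conditions and escalation order are assumed. */
ControlMode update_control_mode(ControlMode mode, const DmsSummary *dms)
{
    switch (mode) {
    case CTRL_AUTONOMOUS:
        if (dms->driver_incapacitated) return CTRL_SAFE_STOP;
        if (!dms->driver_alert)        return CTRL_TAKEOVER_REQUEST;
        return CTRL_AUTONOMOUS;
    case CTRL_TAKEOVER_REQUEST:
        if (dms->takeover_confirmed)   return CTRL_DRIVER;
        if (dms->driver_incapacitated) return CTRL_SAFE_STOP;
        return CTRL_TAKEOVER_REQUEST;
    case CTRL_DRIVER:
        if (dms->driver_incapacitated) return CTRL_SAFE_STOP;
        return CTRL_DRIVER;
    case CTRL_SAFE_STOP:
    default:
        return CTRL_SAFE_STOP;
    }
}
```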
The DMS plays a pivotal role in delivering precise, real-time feedback to vehicle steering and control systems. Acting on that feedback efficiently and precisely requires a high-performance compute platform, yet the system must fit within the tight space constraints of the vehicle. Our next blog will provide guidance on system-level requirements for developing a driver monitoring system.