DeepSafeDrive: A Grammar-aware Driver Parsing Approach to Driver Behavioral Situational Awareness (DB-SAW)
Driver safety is one of the major concerns in today's world. With the number of vehicles on the road increasing daily, the ability to use computer vision and machine learning algorithms to automatically assess a driver's vigilance is extremely important. One of the biggest challenges in this topic is that drivers in videos are usually recorded at low resolution, under weak and poorly controlled lighting across day and night modes. A driver passes through varied terrain, e.g. open roads, under trees, etc., causing constant and often stark variations in illumination. Furthermore, in the automotive field, lower-resolution cameras and lower-power embedded processors, which are preferred due to cost constraints, create additional challenges.
In order to meet the goal of assessing driver safety, we propose a fully automatic system that is able to (1) simultaneously detect and segment the driver's seat belt to determine whether it is being worn; (2) analyze the upper parts of the driver typically visible to in-cabin cameras, including the head, body, and seat belt, to detect whether the driver is looking forward and keeping their eyes on the road; (3) determine whether the driver is tired or starting to fall asleep, or is distracted by a conversation or by using a hand-held device, e.g. a cell phone; and (4) detect whether the driver's hands are on the steering wheel.
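The four capabilities above can be thought of as per-frame checks that feed a single vigilance assessment. The sketch below illustrates one way such outputs might be combined; the data structure, field names, and alert strings are illustrative assumptions, not the system's actual interface.

```python
from dataclasses import dataclass

@dataclass
class FrameAnalysis:
    """Hypothetical summary of the four checks for one video frame."""
    seat_belt_on: bool          # (1) seat-belt detection/segmentation
    eyes_on_road: bool          # (2) head/body pose indicates looking forward
    drowsy_or_distracted: bool  # (3) fatigue, conversation, or phone use
    hands_on_wheel: bool        # (4) hands detected on the steering wheel

def vigilance_alerts(frame: FrameAnalysis) -> list:
    """Map the four per-frame checks onto a list of alert messages."""
    alerts = []
    if not frame.seat_belt_on:
        alerts.append("seat belt not worn")
    if not frame.eyes_on_road:
        alerts.append("eyes off the road")
    if frame.drowsy_or_distracted:
        alerts.append("drowsy or distracted")
    if not frame.hands_on_wheel:
        alerts.append("hands off the wheel")
    return alerts

# An attentive, belted driver with hands on the wheel raises no alerts.
ok = FrameAnalysis(seat_belt_on=True, eyes_on_road=True,
                   drowsy_or_distracted=False, hands_on_wheel=True)
print(vigilance_alerts(ok))
```

In a real deployment, decisions like these would typically be smoothed over a window of frames rather than raised per frame, to avoid spurious alerts from momentary detector errors.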