Methods for estimating a driver’s visual attention from video images have attracted increasing research interest. Such methods are especially important for detecting inattentive drivers in partially automated vehicles. The current study compares different driver gaze region estimation techniques, which may serve as a basis for detecting inattentive drivers. The accuracy of these techniques was evaluated on data from automated drives in a driving simulator. The examined techniques include a classical, state-of-the-art eye tracking approach, two data-driven approaches that rely on eye tracking data, a data-driven approach that considers only the driver’s facial configuration, and an end-to-end approach based on a convolutional neural network. The results demonstrate the advantages of data-driven approaches over a classical geometric interpretation of the eye tracking data. They also highlight the generalization challenges faced by purely data-driven approaches and the benefits of data-driven approaches that operate on eye tracking data rather than on video image data alone.