Conference Proceeding

Modeling of Stimulus-Response Secondary Tasks with Different Modalities while Driving in a Computational Cognitive Architecture

Authors
  • Heejin Jeong (University of Michigan, Ann Arbor, Michigan)
  • Yili Liu (University of Michigan, Ann Arbor, Michigan)

Abstract

This paper introduces a computational human performance model based on the queueing network cognitive architecture to predict drivers’ eye glances and workload for four stimulus-response secondary tasks (i.e., auditory-manual, auditory-speech, visual-manual, and visual-speech types) while driving. The model was evaluated against empirical data from 24 subjects, and the percentage of eyes-off-road time and the driver workload generated by the model were similar to the human subject data. Future studies aim to extend the types of voice announcements/commands to enable Human-Machine Interface (HMI) evaluations with a wider range of usability tests for in-vehicle infotainment system development.

How to Cite:

Jeong, H., & Liu, Y. (2017). “Modeling of Stimulus-Response Secondary Tasks with Different Modalities while Driving in a Computational Cognitive Architecture”, Driving Assessment Conference 9(2017), 58-64. doi: https://doi.org/10.17077/drivingassessment.1615

Rights: Copyright © 2017 the author(s)


Published on
27 Jun 2017
Peer Reviewed