“…Multimodal displays and multisensory information processing (i.e., the concurrent presentation and processing of information in vision, audition, and touch, in particular) have received considerable attention in the field of cognitive ergonomics over the past decade (e.g., Calvert, Spence, & Stein, 2004; Ferris & Sarter, 2008; Sarter, 2006). Benefits of distributing information across modalities include improved time-sharing and more effective attention and interruption management (e.g., Brickman, Hettinger, & Haas, 2000; Ho, Nikolic, & Sarter, 2001; Latorella, 1999). However, with few exceptions (e.g., Brill et al., 2007, 2008, 2009; Garcia, Finomore, Burnett, Baldwin, & Brill, 2012), studies on multimodal information processing have not performed (or, at least, not reported) any cross-modal matching procedure prior to conducting an experiment.…”