This research reports the development and evaluation of a Korean Author Recognition Test (KART), designed as a measure of print exposure among young adults. Based on the original, English-language version of the Author Recognition Test (ART), the KART demonstrates significant relationships with offline measures of language ability as well as online measures of word recognition. In particular, KART scores were related to participants’ responses on the Comparative Reading Habits (CRH) checklist, suggesting that the KART is a valid measure of print exposure. In addition, KART scores showed reliable correlations with offline measures of vocabulary knowledge and language comprehension. Finally, results from a lexical decision task showed that KART scores modulated the magnitude of the word familiarity effect, such that the effect was smaller for participants with higher KART scores. The results suggest that the ART is a language-universal task that measures print exposure, which is useful for explaining individual differences in language comprehension abilities and word recognition processes.
Natural user interfaces (NUIs) have been used to reduce driver distraction while using in-vehicle infotainment systems (IVIS), and multimodal interfaces have been applied to compensate for the shortcomings of a single modality in NUIs. These multimodal NUIs have variable effects on different types of driver distraction and on different stages of drivers' secondary tasks. However, current studies provide a limited understanding of NUIs: the design of multimodal NUIs is typically based on evaluating the strengths of a single modality, and studies of multimodal NUIs are not based on equivalent comparison conditions. To address this gap, we compared five single modalities commonly used for NUIs (touch, mid-air gesture, speech, gaze, and physical buttons on the steering wheel) during a lane change task (LCT) to provide a more holistic view of driver distraction. Our findings suggest that the best approach is a combined cascaded multimodal interface that accounts for the characteristics of each single modality. We compared several combinations of cascaded multimodalities by considering the characteristics of each modality in the sequential phases of the command input process. Our results show that the combinations speech + button, speech + touch, and gaze + button represent the best cascaded multimodal interfaces for reducing driver distraction in IVIS.

INDEX TERMS Cascaded multimodal interface, driver distraction, head-up display (HUD), human-computer interaction (HCI), in-vehicle infotainment system (IVIS), learning effect, natural user interface (NUI).