Interaction with communication and infotainment systems in the car is common while driving. Our research investigates modalities and techniques that enable interaction with such applications while driving without compromising safety. In this paper we present the results of an experiment in which eye-gaze tracking, combined with a button on the steering wheel as explicit input, substitutes for interaction on the touch screen. This approach offers the advantages of direct interaction with visual displays without the drawbacks of touch screens. Its main advantages are the freedom of screen placement (even out of the user's reach) and that both hands can remain on the steering wheel. The results show that this interaction modality is slightly slower and more distracting than a touch screen, but it is significantly faster than automated speech interaction.
Abstract. The AMIDA Automatic Content Linking Device (ACLD) is a just-in-time document retrieval system for meeting environments. The ACLD listens to a meeting and displays information about the documents from the group's history that are most relevant to what is being said. Participants can view an outline or the entire content of a document if they feel it is potentially useful at that moment of the meeting. The ACLD proof-of-concept prototype places meeting-related documents and segments of previously recorded meetings in a repository and indexes them. During a meeting, the ACLD continually retrieves the documents that are most relevant to keywords found automatically in the current meeting speech. The current prototype simulates the real-time speech recognition that will be available in the near future. The software components required to achieve these functions communicate using the Hub, a client/server architecture for real-time annotation exchange and storage. Results and feedback for the first ACLD prototype are outlined, together with plans for its future development within the AMIDA EU integrated project. Potential users of the ACLD supported the overall concept, and provided feedback on improving the user interface and on accessing documents beyond the group's own history.
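The retrieval loop described above (extract keywords from the current speech, then rank repository documents against them) can be sketched roughly as follows. This is a minimal illustration, not the ACLD's actual implementation: the stopword list, the frequency-based keyword extraction, and the overlap-count ranking are all simplifying assumptions standing in for whatever the real system uses.

```python
import re
from collections import Counter

# Assumed toy stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "in", "we", "that", "for"}

def extract_keywords(transcript, top_n=5):
    """Pick the most frequent non-stopword terms from the recent speech."""
    words = re.findall(r"[a-z]+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(top_n)]

def rank_documents(keywords, repository):
    """Rank repository documents by how many of the keywords they contain."""
    scores = {
        doc_id: sum(kw in text.lower() for kw in keywords)
        for doc_id, text in repository.items()
    }
    # Keep only documents matching at least one keyword, best match first.
    return sorted((d for d, s in scores.items() if s > 0),
                  key=lambda d: -scores[d])

# Hypothetical repository contents for illustration.
repository = {
    "design-spec.txt": "Remote control design specification and button layout",
    "budget.txt": "Project budget and cost estimates",
}
keywords = extract_keywords("we should check the button layout in the design spec")
print(rank_documents(keywords, repository))
```

In a running system this loop would fire continually on a sliding window of the (recognized or simulated) speech transcript, with results pushed to the display through the Hub.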
We present an experimental study on the effectiveness of five modality variants (speech, text-only, icon-only, and two combinations of text and icons) for presenting local danger warnings to drivers. We focus on suddenly appearing road obstacles in an up-to-the-minute scenario, as envisaged in Car2Car communication research. Effectiveness is measured as the minimum time necessary to fully interpret the content. Results show that text-only requires the most time, while icon-only is perceived fastest; the two combined versions lie in between. The minimum length for speech is determined by the duration of the utterance, which in this case is longer than the perception time for text-only. However, speech could be decoded reliably by nearly all subjects. Results further indicate that a blinking visual cue delivered through the peripheral visual channel can enhance the saliency of visual modalities. The subjects' judgements furthermore suggest a combined use of visual and auditory modalities.