Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2010
DOI: 10.1145/1753326.1753647

Knowing where and when to look in a time-critical multimodal dual task

Abstract: Human-computer systems intended for time-critical multitasking need to be designed with an understanding of how humans can coordinate and interleave perceptual, memory, and motor processes. This paper presents human performance data for a highly-practiced time-critical dual task. In the first of the two interleaved tasks, participants tracked a target with a joystick. In the second, participants keyed-in responses to objects moving across a radar display. Task manipulations include the peripheral visibility of …

Cited by 17 publications (20 citation statements)
References 12 publications

“…These constraints would be hard to implement because they involve many experimental details, for example, when a blip changed color or how many active blips were on the radar display at a specific moment. Experimental analysis (Hornof et al., 2010) also demonstrated that when participants keyed-in a response, they were not even looking at the blips but, rather, were back on the tracking task, further illustrating the challenge in applying the RFL technique to the experiment. The MoD error correction method, however, needs to know only the locations and times of the fixations and the stimuli.…”
Section: Validation Results
confidence: 98%
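
The quoted passage describes the mode-of-disparities (MoD) correction only in outline: it needs just the locations and times of the fixations and the stimuli. As a rough illustration of that idea (a sketch, not the published implementation; the function and parameter names are invented here, and matching fixations to the stimuli active at the same moment is assumed to happen beforehand), the systematic eye-tracking offset can be estimated as the modal fixation-to-nearest-stimulus disparity and subtracted:

import numpy as np

def mod_correction(fixations, stimuli, bin_px=10):
    # fixations, stimuli: (n, 2) arrays of (x, y) screen positions in pixels.
    # Pair each fixation with its nearest candidate stimulus.
    diffs = fixations[:, None, :] - stimuli[None, :, :]          # (n_fix, n_stim, 2)
    nearest = np.argmin(np.linalg.norm(diffs, axis=2), axis=1)   # nearest stimulus per fixation
    disparities = fixations - stimuli[nearest]                   # (n_fix, 2) offset vectors
    # Estimate the mode of the disparities with a coarse 2-D histogram.
    binned = np.round(disparities / bin_px).astype(int)
    bins, counts = np.unique(binned, axis=0, return_counts=True)
    offset = bins[np.argmax(counts)] * bin_px                    # modal disparity, in pixels
    return fixations - offset                                    # corrected fixation locations
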
“…The MoD error correction method was used to correct the eye movement data collected in an experiment (Hornof, Zhang, & Halverson, 2010). The next section presents the experimental setup that was used to both illustrate and validate the technique.…”
Section: The Mode-of-Disparities Error Correction Method
confidence: 99%
“…The work presented here models the data collected from our replication of the experiment [8], in which eye movement data were collected to inform detailed analysis and modeling. Figure 1 shows a screenshot of the display used in the dual task experiment.…”
Section: The Multimodal Dual Task
confidence: 99%
“…The data presented here are from the third day, and from the ten participants who achieved good overall performance. More details of this experiment are discussed in [8]. Figure 2 shows the overall dual task performance by plotting the average classification time against the root-mean-square (RMS) tracking error, for each of the four conditions, with a different plot symbol for each of the two kinds of blips.…”
Section: The Multimodal Dual Task
confidence: 99%
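
The two measures named in this quote are standard; as a sketch under assumed data layouts (the array names below are hypothetical, and this is not the cited experiment's analysis code), they could be computed per condition as:

import numpy as np

def rms_tracking_error(target_xy, cursor_xy):
    # Root-mean-square distance between the moving target and the joystick
    # cursor over a trial; both arrays have shape (n_samples, 2).
    err = np.linalg.norm(target_xy - cursor_xy, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

def mean_classification_time(response_times, onset_times):
    # Average time from each blip becoming classifiable (e.g., its color
    # change) to its keyed-in response, in the units of the inputs.
    return float(np.mean(np.asarray(response_times) - np.asarray(onset_times)))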