2016
DOI: 10.3758/s13423-016-1047-0

Visual routines for extracting magnitude relations

Abstract: Linking relations described in text with relations in visualizations is often difficult. We used eye tracking to measure the optimal way to extract such relations in graphs, in both college students and young children (6- and 8-year-olds). Participants compared relational statements (“Are there more blueberries than oranges?”) with simple graphs, and two systematic patterns emerged: eye movements that followed the verbal order of the question (inspecting the “blueberry” value first) versus those that followed a left-fi…

Cited by 25 publications (21 citation statements)
References 33 publications

“…In support of this idea, we recently found that interpretations of magnitude relations depended on which item people attended to first (Michal et al., 2016). Participants were asked to verify whether a two-bar graph matched a statement such as, “Are there more blueberries than oranges?,” and visual routines that mimicked the linguistic order within the question led to faster responses.…”
Section: Introduction (mentioning)
confidence: 76%

“…People tend to process graph relations using systematic visual routines, particularly when coordinating data with text (e.g., Michal et al., 2016). Here we show that people exhibited idiosyncratic but highly consistent feature preferences (“anchor points”) to guide visual routines when judging graph relations, even when there was no verbal component to the task.…”
Section: Discussion (mentioning)
confidence: 99%

“…Although relational attention is a key component of visual perception, studies of attention have traditionally focused on perception of individual features or locations (Kastner & Ungerleider, 2000; Maunsell & Treue, 2006). As a consequence, the neural substrates of relational judgments in online visual perception have been largely unexplored (though see Franconeri et al., 2012; Michal et al., 2016). One candidate system for supporting relational attention is the hippocampus, a structure traditionally studied for its role in long-term memory, and particularly for relational forms of long-term memory (Eichenbaum & Cohen, 2014).…”
Section: Introduction (mentioning)
confidence: 99%