2016
DOI: 10.1037/xhp0000171
Effective scheduling of looking and talking during rapid automatized naming.

Abstract: Rapid Automatized Naming (RAN) is strongly related to literacy gains in developing readers, reading disabilities, and reading ability in children and adults. Because successful RAN performance depends on the close coordination of a number of abilities, it is unclear what specific skills drive this RAN-reading relationship. The current study used concurrent recordings of young adult participants’ vocalizations and eye movements during the RAN task to assess how individual variation in RAN performance depends on …

Cited by 36 publications (69 citation statements)
References 89 publications (165 reference statements)
“…For example, one item may be mapped onto its phonological representation while the previous one is articulated and the next one viewed. Consistent with this idea of internal buffering, recent studies of eye movements have demonstrated a tight control of the gaze during reading (Laubrock & Kliegl, 2015) and naming (Gordon & Hoedemaker, 2016), in that participants regulate look-ahead to maintain a fixed distance between the currently viewed and the currently named item. This complementary approach to RAN considers it an indicator of inter-word multi-item processing.…”
Section: Differential Associations With Serial and Discrete Naming
confidence: 77%
“…one (Levelt & Meyer, 2000;Meyer, 1996;Meyer, Sleiderink, & Levelt, 1998) and that speakers typically move their eyes from the first to the second object before initiating the first of the two object names (Gordon & Hoedemaker, 2016;Griffin, 2001;Meyer, Belke, Häcker, & Mortensen, 2007;Roelofs, 2007Roelofs, , 2008. The interpretation of these findings is that in order to ensure a fluent production of the two names, speakers do not simply begin to speak when the first object name is available for pronunciation, but begin to plan the name of the second object before initiating their utterance.…”
Section: Methodsmentioning
confidence: 93%
“…By reading fluency, Norton and Wolf (2012) mean “fluent comprehension” (Wolf and Katzir-Cohen, 2001), that is, “a manner of reading in which all sublexical units, words, and connected text and all the perceptual, linguistic, and cognitive processes involved in each level are processed accurately and automatically so that sufficient time and resources can be allocated to comprehension and deeper thought” (Norton and Wolf, 2012, p. 215). Even though RAN tasks are usually used to study reading development and dyslexia, a few studies have shown that RAN is also predictive of some characteristics of reading fluency for non-college-bound participants aged between 16 and 24 (Kuperman and Van Dyke, 2011), for undergraduate students (Al Dahhan et al., 2014; Kuperman et al., in press), and for adults aged between 36 and 65 (van den Bos et al., 2002). In addition, some imaging studies performed in young adults have also shown that RAN and reading activate similar networks of neural structures (Misra et al., 2004; Cummine et al., 2015).…”
Section: Introduction
confidence: 99%