2020
DOI: 10.1016/j.jecp.2019.104740
Communicative cues in the absence of a human interaction partner enhance 12-month-old infants’ word learning

Cited by 19 publications (24 citation statements)
References 40 publications
“…Together, these results indicate that infants identified the objects targeted by pointing as referents and linked them with the co-occurring words, although their looking response was short-lived. Such a temporal profile conforms with the dynamics of infants’ looking behavior reported in looking-while-listening tasks (e.g., Schafer & Plunkett, 1998; Tsuji et al., 2020). A complementary analysis of infants’ first looks performed after the test question indicated that their referent selection and ensuing word mapping were also evident at the level of first gaze shifts executed in response to the test words (i.e., more saccades directed to the targets of pointing than to distractors in the trained-word condition; see Supplementary Materials SM2 First Looks). Interestingly, upon hearing the novel words, infants initially oriented towards the distractor objects (test bin 0–1 s: M = −.29, SD = .51).…”
supporting
confidence: 81%
“…However, no studies to date have provided evidence that the expectation of co-reference between words and actions contributes to referent selection for novel words (Hollich et al., 2000). Although 12- to 13-month-olds were shown to acquire word-object mappings coupled with communicative actions (Woodward et al., 1994; Tsuji et al., 2020), their performance could be explained without appealing to action interpretation or reference. Since even non-communicative object-directed actions orient infants’ attention towards targeted items (Daum & Gredebäck, 2010; Daum et al., 2009), successful word mapping following gaze shifts or pointing might have been supported solely by the formation of associative links between stimuli that co-occur (i.e., the attended objects and the concurrently uttered labels).…”
Section: Introduction
mentioning
confidence: 97%
“…Thus, it is still an open question whether the temporal contingency manipulation would be successful in the absence of a broader social context containing human agents. A more recent study controlled for these factors by displaying a virtual agent that reacted contingently to 12-month-old infants’ gaze via gaze-contingent eye tracking and taught them novel word-object associations (Tsuji, Jincho, Mazuka, & Cristia, 2020). The contingent reactions displayed by the on-screen avatar included mutual gaze and gaze following, but no broader social context such as a prolonged preceding interaction phase.…”
Section: Learning From Interactive Media in the Absence of Humans
mentioning
confidence: 99%
“…Thus, instead of seeing the toddler displayed on screen in real time, the experimenter saw the toddler’s gaze position in real time and was instructed to react accordingly. The third group of toddlers saw a virtual agent identical to the one used in Tsuji et al. (2020). The script and reactions of the virtual agent were matched to those of the experimenter in the video chat group.…”
Section: The Present Study
mentioning
confidence: 99%