2011
DOI: 10.1037/a0023700

Learning across senses: Cross-modal effects in multisensory statistical learning.

Abstract: It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systema…

Cited by 64 publications (72 citation statements). References 44 publications.
“…In the present study, participants used cues in the visual input to help segment an auditory speech stream, suggesting that participants integrated knowledge of word boundaries across modalities. Thus, the present study supports the notion that the mechanisms underlying speech segmentation are interactive across modalities (see Emberson, Conway, & Christiansen, 2011; Mitchel & Weiss, 2011; Mitchel, Christiansen, & Weiss, in review), and are not modality independent (cf. Seitz, Kim, van Wassenhove, & Shams, 2007).…”
Section: Discussion (supporting)
confidence: 87%
“…In addition to temporal synchrony, previous work has highlighted the role of temporal contiguity between visual and auditory boundary events (Cunillera et al., 2010; Mitchel & Weiss, 2011). Cunillera and colleagues (2010) found that a contiguous visual cue (a static image) enhanced the segmentation of an auditory stream beyond the level of learning exhibited in isolation.…”
Section: Discussion (mentioning)
confidence: 99%
“…At the test stage, performance for audiovisual pairs was significantly reduced as compared to a baseline auditory condition that did not include visual stimuli. Yet, another study (Mitchel & Weiss, 2011) compared statistical learning performance for auditory, visual, or audiovisual streams (the latter composed of fixed audiovisual pairs as in our current study) but showed no reduction in performance in the audiovisual condition. Thus, prior behavioral work does not provide a basis for thinking that sensitivity to statistical features of multisensory streams relies on fundamentally different computations.…”
Section: Limitations and Future Directions (contrasting)
confidence: 43%
“…Transfer to new stimuli with acoustically different properties (but still within the auditory domain) was seen in this study, but performance was weaker than for the original stimulus set. Similarly, a study looking at multisensory integration of statistical learning found that performance was impeded when multiple stimulus streams in different modalities presented conflicting segment boundaries, suggesting that they were not being encoded in an entirely modality-specific manner (Mitchel and Weiss 2011). However, ours is, to the best of our knowledge, the first study using a statistical learning paradigm (rather than artificial grammar learning) which explicitly examines the question of transfer from one modality to another.…”
Section: Accepted Manuscript (mentioning)
confidence: 79%