2009
DOI: 10.1016/j.ijpsycho.2009.01.005
Distraction in a visual multi-deviant paradigm: Behavioral and event-related potential effects

Cited by 30 publications (20 citation statements)
References 42 publications
“…They reported that, compared to standard stimuli, deviant stimuli elicited visual MMN. However, unlike in the comparison of the lower- and upper-visual-field conditions, visual MMN did not differ between the left- and right-visual-field conditions (for similar findings, see Grimm et al., 2009). A unified explanation for these paradoxical findings has not yet been proposed.…”
Section: Neural Generators
confidence: 84%
“…The second half of this review discusses the nature of the unintentional temporal-context-based prediction in vision. Here, on the basis of several key findings from visual MMN and other prediction-related studies, the nature of the unintentional prediction is discussed in terms of (1) behavioral indicators, (2) cognitive properties, and … (Berti and Schröger, 2004, 2006; Boll and Berti, 2009; Grimm et al., 2009), direction of motion (Amenedo et al., 2007; Hosák et al., 2008; Kremlácek et al., 2006; Lorenzo-López et al., 2004; Pazo-Alvarez et al., 2004a, 2004b; Urban et al., 2008), orientation (Astikainen et al., 2004, 2008; Czigler and Pató, 2009; Czigler and Sulykos, 2010; Flynn et al., 2009; Kimura et al., 2010a, 2010b; Sulykos and Czigler, 2011), spatial frequency (Heslenfeld, 2003; Kenemans et al., 2003, 2008; Maekawa et al., 2005, 2009; Sulykos and Czigler, 2011; for a corresponding magnetoencephalography (MEG) study, see Kogai et al., 2011), contrast/luminance (Kimura et al., 2008c, 2008d, 2010c, 2010d; Stagg et al., 2004; Wei et al., 2002), color (Czigler et al., 2002, 2004; Czigler and Sulykos, 2010; Grimm et al., 2009; …”
Section: Introduction
confidence: 99%
“…The notion that automatic change detection in the visual modality does not operate only at the level of simple sensory features such as color (Czigler et al, 2002, 2004, 2006a; Horimoto et al, 2002; Mazza et al, 2005; Kimura et al, 2006b; Liu and Shi, 2008; Grimm et al, 2009; Thierry et al, 2009; Czigler and Sulykos, 2010; Müller et al, 2010; Mo et al, 2011; Stefanics et al, 2011), line orientation (Astikainen et al, 2004, 2008; Czigler and Pató, 2009; Flynn et al, 2009; Kimura et al, 2009, 2010a, 2006b; Czigler and Sulykos, 2010; Sulykos and Czigler, 2011), or spatial frequency (Heslenfeld, 2003; Kenemans et al, 2003, 2010; Maekawa et al, 2005, 2009; Sulykos and Czigler, 2011), but also at higher cognitive levels, has been supported by several visual MMN studies. Recent studies demonstrated that object-based irregularities are automatically detected by the visual system (Müller et al, 2013), as well as irregular lexical information (Shtyrov et al, 2013).…”
Section: Introduction—What Is Visual MMN and What Is It Good For?
confidence: 99%
“…MMN is thought to reflect memory-comparison-based automatic processing. Although the MMN component has been widely investigated in the auditory modality, an analogue of the auditory MMN has also been found in response to visual deviants such as color (Czigler et al., 2002), size (Kimura et al., 2008), shape (Grimm et al., 2009), duration (Qiu et al., 2011), even complex visual stimuli such as facial expressions (Zhao and Li, 2006), and especially orientation (Czigler and Pató, 2009; Flynn et al., 2009; Kimura et al., 2010; Sulykos and Czigler, 2011). Since vMMN is elicited by discriminable changes in vision irrespective of the participants' attention, it is not surprising that the vMMN has recently received considerable attention as a tool of the visual cognitive sciences (for reviews, see Kimura, 2012; Kimura et al., 2011) and of clinical research (Maekawa et al., 2013).…”
Section: Introduction
confidence: 99%