2020
DOI: 10.3389/fncom.2020.541581

Proto-Object Based Saliency Model With Texture Detection Channel

Abstract: The amount of visual information projected from the retina to the brain exceeds the brain's information-processing capacity. Attention therefore functions as a filter, highlighting at multiple stages of the visual pathway the important information that requires further, more detailed analysis. Among other functions, this determines where to fixate, since only the fovea allows high-resolution imaging. Visual saliency modeling, i.e., understanding how the brain selects important information to analyze fu…

Cited by 7 publications (12 citation statements). References 100 publications.
“…Poort et al. (2012, 2016) performed neurophysiological experiments indicating that the neural responses in V1 representing the figure arise through a process of edge detection. Furthermore, biologically plausible saliency map models have implied that the neural mechanism of figure–ground segregation plays an important role in predicting the locations of attentional selection and in improving the prediction accuracy of human gaze (Li, 1999a; Zhaoping, 2003, 2014; Russell et al., 2014; Wagatsuma, 2019; Uejima et al., 2020). Our analyses demonstrated that the responses of intermediate and higher-intermediate layers (layer 4 ReLU, layer 5 ReLU, and layer 6; see Fig.…”
Section: Discussion
confidence: 99%
“…In this study, we used RDMs (Kriegeskorte et al., 2008) to compare the characteristics of the responses of the DCNN saliency map model with those of the neural representation in visual cortices. The analysis methods and metrics used in this study are applicable to various other saliency map models (Itti and Koch, 2000; Kümmerer et al., 2014, 2017; Russell et al., 2014; Pan et al., 2017; Liu and Han, 2018; Wagatsuma, 2019; Uejima et al., 2020). Our analysis results and the V1 saliency hypothesis implied that the activities of model neurons with characteristics similar to V1 responses were the basis of the saliency map models' better gaze-prediction accuracy.…”
Section: Discussion
confidence: 99%
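As context for the RDM comparison described in the statement above, here is a minimal sketch of how a representational dissimilarity matrix is typically built (1 − Pearson correlation between condition-wise response patterns, following Kriegeskorte et al., 2008) and how two RDMs are compared. The function names and the choice of correlation distance are illustrative assumptions, not the cited studies' exact pipeline.

```python
import numpy as np

def rdm(responses):
    # responses: (n_conditions, n_units) array; each row is the response
    # pattern evoked by one stimulus/condition.
    # Correlation distance (1 - Pearson r) between every pair of rows.
    return 1.0 - np.corrcoef(responses)

def rdm_similarity(rdm_a, rdm_b):
    # Compare two RDMs (e.g., a model layer vs. a cortical area) by
    # correlating their off-diagonal upper triangles; the diagonal is
    # zero by construction and carries no information.
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]
```

In practice the second-level comparison is often done with a rank correlation (Spearman) rather than Pearson, so that no linear relationship between the two sets of dissimilarities is assumed.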
“…It further performed better than all models except that of Fang et al. [34] on the KLD metric. This is most likely due to the additional texture feature computed in the Fang et al. model, which is not considered in this model but can easily be integrated [19]. Studies have shown that texture plays an important role in early vision and perception [73].…”
Section: Table II, Average AUC-ROC and KLD Scores on CRCNS Dataset
confidence: 99%
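For reference, the KLD metric mentioned above is commonly computed as the KL divergence from the normalized ground-truth fixation density to the normalized predicted saliency map (lower is better). The sketch below follows the widely used saliency-benchmark convention; the exact normalization and epsilon used in the cited comparison may differ.

```python
import numpy as np

def kld_score(saliency_map, fixation_density, eps=1e-12):
    # Normalize both maps to probability distributions over pixels.
    q = saliency_map / (saliency_map.sum() + eps)        # prediction
    p = fixation_density / (fixation_density.sum() + eps)  # ground truth
    # KL(p || q); the epsilons keep the log finite where q is zero,
    # and zero-probability pixels of p contribute nothing to the sum.
    return float(np.sum(p * np.log(eps + p / (eps + q))))
```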
“…A graph-based visual saliency (GBVS) map is constructed by introducing a fully connected graph into Itti's model [2]. The proto-object saliency map model employs biologically plausible features [3, 4]. The hybrid model of visual saliency was developed using low-, middle-, and high-level image features [5].…”
Section: Introduction
confidence: 99%
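To make the GBVS idea in the statement above concrete, the sketch below builds the fully connected graph over feature-map locations, weights each edge by feature dissimilarity times spatial closeness, and takes the equilibrium distribution of the resulting Markov chain as the activation map (Harel et al., 2006). The absolute-difference dissimilarity, the Gaussian falloff width, and the fixed iteration count are simplifying assumptions; the original formulation uses a log-ratio dissimilarity.

```python
import numpy as np

def gbvs_activation(feature_map, sigma=5.0, n_iter=100):
    # Nodes are the locations of a single feature map; edges connect
    # every pair of locations (fully connected graph).
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    f = feature_map.ravel().astype(float)

    # Edge weight = feature dissimilarity * Gaussian spatial falloff.
    dissim = np.abs(f[:, None] - f[None, :])
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    wgt = dissim * np.exp(-d2 / (2.0 * sigma ** 2))

    # Row-normalize into a Markov transition matrix, then power-iterate
    # toward the chain's equilibrium (stationary) distribution.
    P = wgt / (wgt.sum(axis=1, keepdims=True) + 1e-12)
    v = np.full(h * w, 1.0 / (h * w))
    for _ in range(n_iter):
        v = v @ P
    return v.reshape(h, w)
```

Because every location connects to every other, the transition matrix is quadratic in the number of pixels, which is why graph-based saliency is normally computed on heavily downsampled feature maps.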