2018 25th IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2018.8451338
Learning to Predict Where the Children with ASD Look

Cited by 34 publications (20 citation statements)
References 12 publications
“…Nebout et al. [75] designed a coarse-to-fine convolutional neural network (CNN) to predict saliency maps for children with ASD that yields better results than six existing saliency models. This study reported that no center bias applies to the visual attention of individuals with ASD, which contradicts the findings of other studies [26,116]. Dris et al. [25] proposed a method of classifying ASD using fixation duration on different regions of interest in the image and achieved 88.6% specificity using an SVM classifier.…”
Section: Analyzing Gaze Pattern (contrasting)
confidence: 65%
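The fixation-duration classification idea quoted above can be sketched as follows. Dris et al. used an SVM; to keep this example dependency-free, a nearest-centroid classifier stands in for it, and all region names and feature values are invented for illustration.

```python
# Hypothetical sketch: classify viewers from mean fixation durations on
# regions of interest. A nearest-centroid rule stands in for the SVM used
# by Dris et al.; the data below are invented, not from the paper.
def nearest_centroid_fit(X, y):
    """Return one mean feature vector (centroid) per class label."""
    centroids = {}
    for label in set(y):
        rows = [x for x, l in zip(X, y) if l == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean)."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: sqdist(centroids[label], x))

# Mean fixation duration (ms) on [faces, objects, background] per viewer.
X = [[220, 310, 280], [230, 300, 290],   # labeled "ASD"
     [350, 260, 210], [340, 270, 220]]   # labeled "TD"
y = ["ASD", "ASD", "TD", "TD"]

model = nearest_centroid_fit(X, y)
print(nearest_centroid_predict(model, [225, 305, 285]))  # → ASD
```

A real pipeline would replace the centroid rule with an SVM and report specificity/sensitivity over held-out viewers, as the quoted study does.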
“…A very popular fixation point dataset containing 2000 images from 20 different categories was proposed by [95]. Some research groups focused their efforts on gathering eye-movement data from cohorts of people affected by cognitive disorders such as ASD (Autism Spectrum Disorder) [96], [97], [98].…”
Section: B Eye-tracking Datasets (mentioning)
confidence: 99%
“…As shown in [19], by fine-tuning five state-of-the-art saliency prediction models on the ASD eye-tracking dataset, it was found that SalGAN [18] and SAM-VGG [12] outperform the other three models, namely SALICON [9], ML-Net [10] and SAM-ResNet [12]. Therefore, SalGAN and SAM-VGG are selected for comparison in this paper.…”
Section: Comparison With State-of-the-art Models (mentioning)
confidence: 99%
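Comparisons between saliency models like the one quoted above are typically scored with fixation-based metrics. As a minimal sketch, one common metric, Normalized Scanpath Saliency (NSS), averages the z-scored saliency values at the recorded fixation points; the map and fixation below are toy data, not from the paper.

```python
# Minimal NSS sketch: z-score the predicted saliency map, then average its
# values at the ground-truth fixation locations. Higher is better.
def nss(saliency, fixations):
    """saliency: 2D list of floats; fixations: list of (row, col) points."""
    vals = [v for row in saliency for v in row]
    n = len(vals)
    mean = sum(vals) / n
    std = (sum((v - mean) ** 2 for v in vals) / n) ** 0.5
    return sum((saliency[r][c] - mean) / std for r, c in fixations) / len(fixations)

# Toy 3x3 saliency map with one peak, and one fixation on that peak.
sal = [[0.1, 0.2, 0.1],
       [0.2, 0.9, 0.2],
       [0.1, 0.2, 0.1]]
print(round(nss(sal, [(1, 1)]), 3))  # → 2.774
```

Benchmarks such as the one in the quoted comparison usually report NSS alongside AUC and correlation-based scores, since each metric penalizes different failure modes.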