2020
DOI: 10.3390/jpm10030086
Precision Telemedicine through Crowdsourced Machine Learning: Testing Variability of Crowd Workers for Video-Based Autism Feature Recognition

Abstract: Mobilized telemedicine is becoming a key, and even necessary, facet of both precision health and precision medicine. In this study, we evaluate the capability and potential of a crowd of virtual workers—defined as vetted members of popular crowdsourcing platforms—to aid in the task of diagnosing autism. We evaluate workers when crowdsourcing the task of providing categorical ordinal behavioral ratings to unstructured public YouTube videos of children with autism and neurotypical controls. To evaluate emerging …
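The abstract describes crowd workers supplying categorical ordinal behavioral ratings per video, which are then used to separate children with autism from neurotypical controls. As a rough illustration only, the sketch below shows how such ratings might be aggregated across workers and fed to an off-the-shelf classifier; the feature names, 0-3 rating scale, worker count, and logistic-regression choice are assumptions for demonstration, not the paper's actual pipeline.

```python
# Minimal sketch: classifying videos from crowd-worker ordinal ratings.
# Feature names, rating scale, aggregation rule, and model are illustrative
# assumptions, not the published method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical behavioral features rated by workers on a 0-3 ordinal scale.
FEATURES = ["eye_contact", "responds_to_name", "expressive_gestures", "speech_delay"]
n_videos, n_workers = 60, 3

# Toy ratings: (video, worker, feature); real data would come from the crowd platform.
ratings = rng.integers(0, 4, size=(n_videos, n_workers, len(FEATURES)))

# Aggregate the workers' ratings per video (here, the per-feature median).
X = np.median(ratings, axis=1)
y = rng.integers(0, 2, size=n_videos)  # 1 = autism, 0 = neurotypical (placeholder labels)

# A simple linear classifier over the aggregated ordinal ratings.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy on toy data: {scores.mean():.2f}")
```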

Cited by 44 publications (33 citation statements)
References 53 publications (56 reference statements)
“…We are able to derive accurate diagnoses through feeding the crowd workers’ responses into a machine learning classifier. However, the recruitment of the crowd workforce was a crucial part of the process, as prior studies have shown that most crowd workers do not perform particularly well at labeling behavioral features from unstructured videos (54-56).…”
Section: Discussion (mentioning, confidence: 99%)
“…Several prior works, such as those by Tariq et al and Leblanc et al, performed manual annotation of behavioral features in home videos, which enabled the creation of classifiers that could identify ASD with high accuracy. [14][15][16][17][18][19] Chorianopoulou et al collected structured home videos from participants and had expert annotators label the dataset with the actions, emotions, gaze fixations, utterances, and overall level of engagement in each video; this information was then used to train a classifier to identify specific engagement features that could be correlated with ASD. 20 Rudovic et al trained a large and generalizable neural network to estimate engagement in children with ASD from different cultural backgrounds.…”
Section: Manual Annotation Methods (mentioning, confidence: 99%)
“…can serve as a data acquisition tool and aggregate emotive videos for autism research that can be used to train a more effective automatic emotion recognition platform. The use of data collected from mobile devices, such as the built-in camera, allows for continuous phenotyping and repeat diagnoses in home settings [24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39]. This motivates the development of a new emotion classifier designed specifically for pediatric populations, trained with images crowdsourced from Guess What?.…”
Section: Guess What? Incorporates Two Teaching Methods Based on ABA Principles: Discrete Trial (mentioning, confidence: 99%)
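The last citation statement describes training an emotion classifier for pediatric populations on images crowdsourced through Guess What?. The sketch below is a generic, assumed illustration of such a supervised image classifier; the label set, architecture, input size, and placeholder data are invented for demonstration and do not reproduce the cited platform's actual model.

```python
# Minimal sketch: a pediatric facial-emotion classifier trained on crowdsourced
# images. Label set, architecture, and input size are illustrative assumptions.
import torch
import torch.nn as nn

EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]  # assumed label set

class SmallEmotionCNN(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)  # (N, 32, 16, 16) for 64x64 input
        return self.classifier(x.flatten(1))

model = SmallEmotionCNN(len(EMOTIONS))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch standing in for crowdsourced child-emotion images.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, len(EMOTIONS), (8,))

# One toy training step.
logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"Toy training step loss: {loss.item():.3f}")
```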