2018
DOI: 10.1101/363382
Preprint
Combining citizen science and deep learning to amplify expertise in neuroimaging

Abstract: Research in many fields has become increasingly reliant on large and complex datasets. "Big Data" holds untold promise to rapidly advance science by tackling new questions that cannot be answered with smaller datasets. While powerful, research with Big Data poses unique challenges, as many standard lab protocols rely on experts examining each one of the samples. This is not feasible for large-scale datasets because manual approaches are time-consuming and hence difficult to scale. Meanwhile, automated approach…

Cited by 19 publications (28 citation statements)
References 56 publications
“…In addition to the need for objective exclusion criteria, the current neuroimaging data deluge makes the manual QC of every magnetic resonance imaging (MRI) scan impractical. For these reasons, there has been great interest in automated QC [5][6][7][8] , which is progressively gaining attention [9][10][11][12][13][14][15][16] with the convergence of machine learning solutions. Early approaches [5][6][7][8] to objectively estimate image quality have employed "image quality metrics" (IQMs) that quantify variably interpretable aspects of image quality [8][9][10][11][12][13] (e.g., summary statistics of image intensities, signal-to-noise ratio, coefficient of joint variation, Euler angle, etc.).…”
Section: Background and Summary
Confidence: 99%
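The image quality metrics (IQMs) named in the quote above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy: the function names and the synthetic intensity samples are hypothetical, and real QC tools compute these metrics from segmented tissue masks of actual scans, not from random draws.

```python
import numpy as np

def snr(tissue, background):
    """Signal-to-noise ratio: mean tissue intensity divided by the
    standard deviation of background (air) voxel intensities."""
    return float(np.mean(tissue) / np.std(background))

def cjv(gm, wm):
    """Coefficient of joint variation between gray- and white-matter
    intensities; lower values indicate better tissue contrast."""
    return float((np.std(wm) + np.std(gm)) / abs(np.mean(wm) - np.mean(gm)))

# Synthetic intensity samples standing in for segmented voxels (hypothetical).
rng = np.random.default_rng(0)
background = rng.normal(0, 5, 1000)   # air voxels: noise only
gm = rng.normal(60, 8, 1000)          # gray-matter voxels
wm = rng.normal(100, 8, 1000)         # white-matter voxels

print(f"SNR = {snr(wm, background):.1f}")
print(f"CJV = {cjv(gm, wm):.2f}")
```

With these synthetic distributions, SNR lands near mean(wm)/std(background) and CJV near the summed tissue spreads over the gray–white contrast, which is the sense in which such summary statistics "quantify variably interpretable aspects of image quality".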
“…The combination of the limited size of quality-annotated samples and label noise precludes the definition of normative, standard IQM values that work well for any dataset, as well as the generalization of machine learning solutions. Keshavan et al. 16 have recently proposed a creative solution to the problem of visually assessing large datasets. They were able to annotate over 80,000 two-dimensional slices extracted from 722 brain 3D images using BraindR, a smartphone application for crowdsourcing.…”
Section: Background and Summary
Confidence: 99%
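The crowdsourced annotation described in the quote above hinges on aggregating many raters' judgments per slice into a single quality label. A minimal sketch of that step, assuming hypothetical slice identifiers and binary pass (1) / fail (0) votes (the actual BraindR pipeline also trains a deep network on the aggregated scores):

```python
from collections import defaultdict

# Hypothetical crowd ratings: (slice_id, vote) pairs, 1 = pass, 0 = fail.
ratings = [
    ("sub-01_slice-040", 1), ("sub-01_slice-040", 1), ("sub-01_slice-040", 0),
    ("sub-02_slice-012", 0), ("sub-02_slice-012", 0), ("sub-02_slice-012", 0),
]

def aggregate(ratings):
    """Average the binary votes per slice, yielding a continuous
    quality score in [0, 1] usable as a training label."""
    votes = defaultdict(list)
    for slice_id, vote in ratings:
        votes[slice_id].append(vote)
    return {s: sum(v) / len(v) for s, v in votes.items()}

scores = aggregate(ratings)
for slice_id, score in sorted(scores.items()):
    print(slice_id, round(score, 2))
```

Averaging votes rather than taking a majority keeps disagreement among raters visible as an intermediate score, which is useful when training a classifier on noisy crowd labels.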
“…In brain imaging, recent work by Keshavan et al (2018) showed the advantages of using citizen science to rate brain images for issues related to head motion and scanner artifacts. These authors were able to gather 80,000 ratings on slices drawn from 722 brains using a simple web interface.…”
Section: Crowdsourced QC
Confidence: 99%
“…Although we maximized compatibility with clinical usage by drawing these data directly from MRI scanners using typical clinical acquisition sequences, these QC-fail images are very unlikely to represent the full spectrum of poor-quality images, emphasizing the importance and benefit of having sufficiently large labeled data for all the necessary classes. It is our hope that with more extensive input of labeled poor-quality data, such as those from open-science quality-control datasets and collective expert input (Keshavan, Yeatman, & Rokem, 2018), we can further extend the performance and generalization of these classifiers.…”
Section: Limitations/Outlook
Confidence: 99%