2018
DOI: 10.1038/s41416-018-0156-0
Harnessing citizen science through mobile phone technology to screen for immunohistochemical biomarkers in bladder cancer

Abstract: Background Immunohistochemistry (IHC) is often used in personalisation of cancer treatments. Analysis of large data sets to uncover predictive biomarkers by specialists can be enormously time-consuming. Here we investigated crowdsourcing as a means of reliably analysing immunostained cancer samples to discover biomarkers predictive of cancer survival. Methods We crowdsourced the analysis of bladder cancer TMA core samples through the smartphone app ‘Reverse the Odds’. S…
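The abstract describes pooling many independent citizen ratings of each TMA core into a single result. As a minimal sketch of that idea, assuming a 0–3 staining-intensity scale and a majority-then-median rule (the function name, scale, and rule are illustrative assumptions, not the paper's published aggregation method):

```python
from collections import Counter
from statistics import median

def consensus_score(ratings, min_ratings=3):
    """Combine independent citizen ratings for one TMA core
    (e.g. staining intensity on a 0-3 scale) into a consensus score.

    Returns None when too few ratings exist to trust a consensus.
    """
    if len(ratings) < min_ratings:
        return None
    counts = Counter(ratings)
    top_value, top_count = counts.most_common(1)[0]
    # Accept a clear majority; otherwise fall back to the median rating.
    if top_count / len(ratings) >= 0.5:
        return top_value
    return median(ratings)

# Example: five citizen scientists rate the same core.
print(consensus_score([2, 2, 3, 2, 1]))  # majority -> 2
```

A threshold on the number of ratings per core (here `min_ratings`) is one common way crowdsourcing studies guard against unreliable consensus values when coverage is sparse.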

Cited by 13 publications (20 citation statements)
References 25 publications
“…Scores generated by a single scorer in Birmingham (using the scoring cards generated, but with less intensive training) were more discordant than those generated in Oxford and Manchester, reflecting the need for external quality assessment schemes. Although training can improve levels of concordance (eg, for EGFR staining 17) some stains are intrinsically more difficult to score than others, 18 including MRE11. Another issue is time taken for scoring, with the Leeds Consultant Histopathologist taking 25 to 30 minutes per case (see Material E1; available online at https://doi.org/10.1016/j.ijrobp.2019.03.015).…”
Section: Discussion
confidence: 99%
“…Classify (Albarqouni et al, 2016a), (Brady et al, 2014), (Brady et al, 2017), (dos Reis et al, 2015), (Eickhoff, 2014), (Foncubierta Rodríguez and Müller, 2012), (Gur et al, 2017), (de Herrera et al, 2014), (Holst et al, 2015), (Huang and Hamarneh, 2017), (Keshavan et al, 2018), (Lawson et al, 2017), (Malpani et al, 2015), (Mavandadi et al, 2012), (Mitry et al, 2013), (Mitry et al, 2015), (Nguyen et al, 2012), (Park et al, 2016), (Park et al, 2017), (Smittenaar et al, 2018), (Sonabend et al, 2017), (Sullivan et al, 2018); Segment (Roethlingshoefer et al, 2017), (Boorboor et al, 2018), (Bruggemann et al, 2018), (Cabrera-Bean et al, 2017), (Chávez-Aragón et al, 2013), (Cheplygina et al, 2016), (Ganz et al, 2017), (Gurari et al, 2015b), (Heller et al, 2017), (Irshad et al, 2015), (Lee and Tufail, 2014), (Lee et al, 2016), (Lejeune et al, 2017), (Luengo-Oroz et al, 2012), (Maier-Hein et al, 2014a), (Maier-Hein et al, 2016), (O'Neil et al, 2017), (Park et al, 2018),…”
Section: Task Papers
confidence: 99%
“…Finally, we summarize the wages paid to crowd workers. (Eickhoff, 2014), (Gurari et al, 2015b), (Gurari et al, 2016), (Heim, 2018), (Irshad et al, 2015), (Maier-Hein et al, 2014a), (Maier-Hein et al, 2016), (McKenna et al, 2012), (Mitry et al, 2015), (Nguyen et al, 2012), (Ørting et al, 2017), (Park et al, 2016), (Park et al, 2017), (Park et al, 2018), (Sameki et al, 2016), (Irshad et al, 2017), (Boorboor et al, 2018), (Brady et al, 2014), (Cheplygina et al, 2016), (Della Mea et al, 2014), (Ganz et al, 2017), (Holst et al, 2015), (Lee and Tufail, 2014), (Lee et al, 2016), (Maier-Hein et al, 2014b), (Maier-Hein et al, 2015), (Mitry et al, 2013), (Mitry et al, 2016), (Sharma et al, (Albarqouni et al, 2016b), (Rajchl et al, 2016), (Smittenaar et al, 2018), (Sullivan et al, 2018), (Timmermans et al, 2016), (Albarqouni et al,...…”
Section: Platform Scale and Wages
confidence: 99%
“…Rating entire images was the most common interaction and was the main task of 52% of the studies surveyed here. Ratings took many forms, identifying the presence/absence of specific visual features (Sonabend et al, 2017), counting number of cells (Smittenaar et al, 2018), assessing intensity of cell staining (dos Reis et al, 2015), or discriminating healthy samples from diseased (Mavandadi et al, 2012). Most commonly, crowd workers were asked to create new annotations (90% of rating tasks).…”
Section: Interaction Papers
confidence: 99%