2020
DOI: 10.1101/2020.06.16.153130
Preprint

Quantitative comparison of Drosophila behavior annotations by human observers and a machine learning algorithm

Abstract: Automated quantification of behavior is increasingly prevalent in neuroscience research. Human judgments can influence machine-learning-based behavior classification at multiple steps in the process, for both supervised and unsupervised approaches. Such steps include the design of the algorithm for machine learning, the methods used for animal tracking, the choice of training images, and the benchmarking of classification outcomes. However, how these design choices contribute to the interpretation of automated…

Cited by 3 publications (2 citation statements)
References 56 publications (88 reference statements)
“…Since ODES was trained to make decisions in a manner similar to human judgment, it encounters the same problems with clumped cells that humans do. Additionally, since the model's training involves human input, any bias or inconsistency in cell annotations can directly influence the model's learning patterns [25].…”
Section: Limitations of ODES
confidence: 99%
“…For instance, if the annotations provided by human experts vary due to subjective interpretations of what constitutes a particular cell type or boundary, the model will learn these inconsistencies, potentially reducing its accuracy and generalizability. Additionally, human annotators can have varying levels of confidence and criteria in classification [25]. This variability can lead to a model that is uncertain or inconsistent in its classifications, mirroring the inconsistencies of its training data.…”
Section: Limitations of ODES
confidence: 99%
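The annotation variability described in these citation statements is commonly quantified with a chance-corrected inter-annotator agreement statistic such as Cohen's kappa. A minimal sketch follows; the behavior labels and frame counts are illustrative examples, not data from the cited study:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators' label sequences."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed fraction of items on which the annotators agree
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if each annotator labeled at random
    # according to their own label frequencies
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators labeling the same 10 video frames
ann1 = ["walk", "walk", "groom", "walk", "rest", "groom", "walk", "rest", "rest", "walk"]
ann2 = ["walk", "groom", "groom", "walk", "rest", "walk", "walk", "rest", "groom", "walk"]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.524
```

Kappa near 1 indicates consistent annotations; values well below 1, as here, flag the kind of inter-observer disagreement that, per the statements above, a model trained on those labels would inherit.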