2019
DOI: 10.1002/ece3.5255

Reassessing the success of experts and nonexperts at correctly differentiating between closely related species from camera trap images: A reply to Gooliaff and Hodges

Abstract: We present a reply to a recent article in Ecology and Evolution (“Measuring agreement among experts in classifying camera images of similar species” by Gooliaff and Hodges) that demonstrated a lack of consistency in expert‐based classification of images of similar‐looking species. We disagree with several conclusions from the study, and show that with some training, and use of multiple images that is becoming standard practice in camera‐trapping studies, even nonexperts can identify similar sympatric species w…

Cited by 9 publications (14 citation statements); references 13 publications (18 reference statements).

“…Trained personnel identified animals captured in photographs to species, and an expert reviewer verified species (T. W. King or D. Thornton served as the expert). Although the consistency of classifying images of lynx and the similar‐looking bobcat (which also inhabits the study area) from single photographs has been called into question (Gooliaff and Hodges 2018), we found high levels of agreement in identification of lynx from the image bursts used in our study, which provided multiple views of easily distinguishable features such as a short fully black tail tip and large paws (Thornton et al 2019). Thus, the potential for mis‐classification of lynx in this study is low.…”
Section: Methods (mentioning)
confidence: 55%
“…Like us, Thornton et al. (2019) measured agreement among a group of classifiers in their classifications of bobcat and lynx images, but they found much higher agreement (Fleiss’ Kappa = 0.87, 95% CI = 0.83–0.93, compared to our Fleiss’ Kappa = 0.64, 95% CI = 0.60–0.68). Even more contrasting, none of the images in their experiment were classified as “unknown” by the classifiers; all of the images were classified as either “bobcat” or “lynx.” This result is strikingly different than the >71% of images in our study that were classified by at least one expert as “unknown.”…”
Citation type: contrasting
confidence: 37%
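
The comparison above hinges on Fleiss’ Kappa, a chance-corrected measure of agreement among multiple classifiers. As a minimal sketch (not taken from either study), the helper below computes Fleiss’ kappa from a classifier-by-image count table; the category order (bobcat, lynx, unknown), the toy counts, and the function name are illustrative assumptions only.

```python
# Minimal sketch: Fleiss' kappa from an images-by-categories count table.
# The table and category labels below are illustrative, not data from the studies.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j] = number of classifiers assigning image i to category j."""
    counts = np.asarray(counts, dtype=float)
    n_images = counts.shape[0]
    n_raters = counts[0].sum()  # assumes every image was rated by all classifiers

    # Observed agreement: per-image agreement P_i, averaged over images
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    # Chance agreement from the marginal category proportions
    p_j = counts.sum(axis=0) / (n_images * n_raters)
    p_e = np.sum(p_j ** 2)

    return (p_bar - p_e) / (1 - p_e)

# Toy example: 5 images, 6 classifiers, categories = (bobcat, lynx, unknown)
table = np.array([
    [6, 0, 0],
    [5, 1, 0],
    [0, 6, 0],
    [1, 4, 1],
    [0, 5, 1],
])
print(round(fleiss_kappa(table), 2))
```
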
“…Thus, we are glad to see the results from Thornton et al. (2019), as they show that recently trained novice classifiers working from multiple images can obtain reasonably high agreement with each other, although individual classifications still have a sizeable error rate. Their work reinforces our main points that (a) studying error rates in image classification is important, (b) researchers should document how images were classified and what steps were taken to reduce or manage misclassifications (whether via training or consultation of many experts or novices), and (c) the research or management context in which the work is undertaken will affect how important errors are for subsequent inference and management actions.…”
Citation type: mentioning
confidence: 89%
“…This seasonal range was chosen as it approximates demographic (i.e., births and deaths) and geographic closure (i.e., dispersal) and is based on species' ecological responses to snowpack and leaf phenology of the region (Sirén et al., 2016; Vashon et al., 2008). We identified species in photographs by their unique morphology and field marks and used consensus from multiple observers when identification was uncertain (Thornton et al., 2019). We organized camera data into weekly occasions using CPW Photo Warehouse (Ivan & Newkirk, 2016) and recorded whether or not each species was detected during the occasion.…”
Section: Methods (mentioning)
confidence: 99%
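
The quoted methods collapse time-stamped photographs into weekly detection/non-detection occasions. The sketch below shows one way such a weekly detection history could be assembled with pandas; the column names ("species", "timestamp") and the example records are assumptions, and this is not the CPW Photo Warehouse procedure itself.

```python
# Minimal sketch: build a weekly detection (1) / non-detection (0) history per species.
# Column names and records are illustrative assumptions.
import pandas as pd

photos = pd.DataFrame({
    "species": ["lynx", "bobcat", "lynx", "lynx"],
    "timestamp": pd.to_datetime([
        "2019-01-02 14:10", "2019-01-03 09:30",
        "2019-01-12 22:05", "2019-01-20 06:45",
    ]),
})

weekly = (
    photos
    .assign(week=photos["timestamp"].dt.to_period("W"))   # weekly occasion label
    .pivot_table(index="species", columns="week",
                 values="timestamp", aggfunc="count", fill_value=0)
    .gt(0)          # detected at least once during the occasion?
    .astype(int)    # 1 = detected, 0 = not detected
)
print(weekly)
```

Note that occasions with no photographs of any species would still need to be added as all-zero columns before analysis; fill_value=0 only covers weeks in which at least one species was photographed.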