2020
DOI: 10.1111/2041-210x.13504

A deep active learning system for species identification and counting in camera trap images

Abstract: A typical camera trap survey may produce millions of images that require slow, expensive manual review. Consequently, critical conservation questions may be answered too slowly to support decision‐making. Recent studies demonstrated the potential for computer vision to dramatically increase efficiency in image‐based biodiversity surveys; however, the literature has focused on projects with a large set of labelled training images, and hence many projects with a smaller set of labelled images cannot benefit from…

Cited by 122 publications (116 citation statements)
References 28 publications
“…Using accurate segmentation of animal body (Brodrick et al., 2019; He et al., 2017) will undoubtedly be a solution against side effects of rectangular cropping. Moreover, this pipeline can be used in an active learning strategy where the machine learning model is assisted by human intervention on some specific cases (Norouzzadeh et al., 2021). Indeed, using the proposed distance threshold in the Euclidean space, one can iteratively enrich the training dataset after manual checking of the most confident Top‐1 candidates (below a small distance threshold, to guarantee optimal TN rate) and re‐run the estimation procedure.…”
Section: Discussion
confidence: 99%
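The enrichment procedure quoted above — accept the most confident Top‐1 candidates below a Euclidean distance threshold, verify them manually, and fold them back into the training set — could be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the class-centroid form of Top‐1 matching, and the `oracle` callback standing in for manual review are all assumptions.

```python
import numpy as np

def top1_distances(train_emb, train_labels, pool_emb):
    """For each pooled embedding, return its nearest (Top-1) class
    centroid's label and the Euclidean distance to that centroid."""
    classes = np.unique(train_labels)
    centroids = np.stack([train_emb[train_labels == c].mean(axis=0)
                          for c in classes])
    # Pairwise Euclidean distances, shape (n_pool, n_classes)
    d = np.linalg.norm(pool_emb[:, None, :] - centroids[None, :, :], axis=-1)
    idx = d.argmin(axis=1)
    return classes[idx], d[np.arange(len(pool_emb)), idx]

def enrich_once(train_emb, train_labels, pool_emb, threshold, oracle):
    """One active-learning round: propose confident Top-1 candidates
    (distance below `threshold`), let `oracle` (manual review) confirm
    each proposed label, and fold accepted items into the training set."""
    preds, dists = top1_distances(train_emb, train_labels, pool_emb)
    accepted = [i for i in np.flatnonzero(dists < threshold)
                if oracle(i, preds[i])]
    if accepted:
        train_emb = np.vstack([train_emb, pool_emb[accepted]])
        train_labels = np.concatenate([train_labels, preds[accepted]])
    remaining = np.delete(pool_emb, accepted, axis=0)
    return train_emb, train_labels, remaining
```

In practice this round would be repeated ("re‐run the estimation procedure") until the pool yields no more candidates under the threshold.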
“…A range of CNN‐based tools are now available for object detection and already used for animal detection (Parham et al., 2018; Sadegh Norouzzadeh et al., 2019; Schneider et al., 2018). Among other options including YOLO (Bochkovskiy et al., 2020; Redmon et al., 2016) and Mask R‐CNN (He et al., 2017), RetinaNet (Lin et al., 2017) is a CNN‐based object detector able to detect a series of predefined object classes (e.g.…”
Section: Methods
confidence: 99%
“…This resulted in long sequences of very similar images, for example showing an animal walking in front of the camera (Figure S1; Norouzzadeh et al., 2019). This approach posed a challenge for maintaining class balances in the training and validation sets, but it reduced the risk of non‐independent training and validation sets.…”
Section: Data Preparation
confidence: 99%
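The sequence-aware split described above — keeping every burst of near-identical frames wholly in either training or validation, which trades some class balance for independence — can be sketched like this. The function name and parameters are hypothetical, not the citing paper's code.

```python
import random
from collections import defaultdict

def split_by_sequence(image_ids, sequence_ids, val_fraction=0.2, seed=0):
    """Assign whole camera-trap sequences to train or validation, so
    near-duplicate frames from one burst never straddle the split."""
    groups = defaultdict(list)
    for img, seq in zip(image_ids, sequence_ids):
        groups[seq].append(img)
    seqs = sorted(groups)
    random.Random(seed).shuffle(seqs)          # reproducible shuffle
    n_val = max(1, round(val_fraction * len(seqs)))
    val = [img for s in seqs[:n_val] for img in groups[s]]
    train = [img for s in seqs[n_val:] for img in groups[s]]
    return train, val
```

Because whole sequences are allocated, the realised class proportions in each split can drift from the image-level proportions — the class-balance challenge the quote mentions.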