2017
DOI: 10.1080/01584197.2017.1298970
Comparing manual and automated species recognition in the detection of four common south-east Australian forest birds from digital field recordings

Cited by 25 publications (28 citation statements)
References 62 publications
“…Nevertheless, more comparative work is required before the cost-effectiveness of these tools can be determined. Joshi et al. (2017) showed that such software is cost-effective in some situations but not others. Automatic detection of bittern calls has so far not been found to be cost-effective, either because expensive recording equipment was needed (Frommolt & Tauchert, 2014) or because it gave high false positive rates (21% precision rate; Priyadarshani, 2017).…”
Section: Discussion (mentioning, confidence: 99%)
“…In addition, there are currently several options for processing recordings. Sound files can be: (a) manually processed by listening or visually examining files for evidence of calls on spectrograms; or (b) automatically processed using software trained to identify unique sound shapes associated with calls of the target species (Brandes, 2008; Joshi, Mulder, & Rowe, 2017). Here, we tested four possible manual options for monitoring wildlife calls using recording devices: (a) stereo recordings processed visually (STEREO-VISUAL), (b) mono recordings processed visually (MONO-VISUAL), (c) stereo recordings processed audibly (STEREO-AUDIBLE), and (d) mono recordings processed audibly (MONO-AUDIBLE).…”
(mentioning, confidence: 99%)
“…However, the recordings used for benchmarking are sometimes not representative of real-world, noisier conditions (Priyadarshani et al. 2018). The efficiency of automated species detection methods also depends on the method used, the quality of the recordings, and the target species: efficiency compared to manual processing is sometimes equivalent or lower (Digby et al. 2013, Joshi et al. 2017). Nevertheless, rapid progress is being made and it is currently possible to rely only on the vocalisations contained within the field recordings to generate classifiers (Ovaskainen et al. 2018).…”
Section: Practicality (mentioning, confidence: 99%)
“…All recognizers misclassify to some extent (Priyadarshani et al. 2018), which can have implications for study results (Russo and Voigt 2016, Rydell et al. 2017). Typically, each detection reported by a recognizer is visually or aurally reviewed by a human observer (hereafter “manual validation”) to remove false positives; however, the time required for validation can render automated recognition no more efficient than processing recordings aurally (Borker et al. 2014, Joshi et al. 2017). Thus, there is a need for methods that increase the automation of acoustic recognition (Marques et al. 2012, Stowell et al. 2016).…”
Section: Introduction (mentioning, confidence: 99%)