2018 6th International Conference on Control Engineering & Information Technology (CEIT)
DOI: 10.1109/ceit.2018.8751822
Performance of Deep Neural Networks in Audio Surveillance

Cited by 7 publications (5 citation statements). References 12 publications.
“…The comparison details are shown in Table X. Our proposed method performs better than the methods from Sammarco et al. [8] and Gatto et al. [9] but is unable to beat the method presented by Arslan et al. [11].…”
Section: Comparison
Confidence: 81%
“…We compare our proposed method with methods from Sammarco et al. [8], Gatto et al. [9], and Arslan et al. [11] in terms of accuracy. The comparison details are shown in Table X.…”
Section: Comparison
Confidence: 99%
“…In their research, the authors developed deep neural network (DNN) models for scream and traffic-accident recognition. Tests of their models showed that they can be used reliably in real-world and transportation applications [10].…”
Section: Discussion
Confidence: 99%
“…This process is termed frame-by-frame analysis. The audio recordings are sampled at 32 kHz and segmented into frame steps of 75 ms in length. A frame overlap of 50% of the frame step is implemented (Arslan and Canbolat, 2018). A Hamming window is employed to divide the sampled signal into frames, using a window length equal to the sum of the frame step and the frame overlap.…”
Section: Proposed Methodology
Confidence: 99%
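The framing scheme quoted above can be sketched in a few lines of NumPy. This is a minimal illustration of the stated parameters (32 kHz sampling, 75 ms frame step, overlap of 50% of the step, window length = step + overlap); the function name and the frame-count handling are illustrative, not taken from the cited paper.

```python
import numpy as np

# Parameters as described in the cited methodology (Arslan and Canbolat, 2018).
SAMPLE_RATE = 32_000                     # 32 kHz sampling rate
FRAME_STEP = int(0.075 * SAMPLE_RATE)    # 75 ms step -> 2400 samples
FRAME_OVERLAP = FRAME_STEP // 2          # 50% of the step -> 1200 samples
FRAME_LEN = FRAME_STEP + FRAME_OVERLAP   # window length -> 3600 samples

def frame_signal(signal: np.ndarray) -> np.ndarray:
    """Split a 1-D signal into overlapping Hamming-windowed frames.

    Consecutive frames start FRAME_STEP samples apart, so adjacent
    frames share FRAME_OVERLAP samples. Trailing samples that do not
    fill a whole frame are dropped (one possible convention).
    """
    window = np.hamming(FRAME_LEN)
    n_frames = 1 + max(0, (len(signal) - FRAME_LEN) // FRAME_STEP)
    frames = np.empty((n_frames, FRAME_LEN))
    for i in range(n_frames):
        start = i * FRAME_STEP
        frames[i] = signal[start:start + FRAME_LEN] * window
    return frames
```

For example, one second of audio (32 000 samples) yields 12 frames of 3600 samples each under this convention.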