Computational Science and Engineering 2016
DOI: 10.1201/9781315375021-15
Speech/music discrimination using perceptual feature

Cited by 5 publications (3 citation statements) | References 1 publication
“…Regarding datasets, both the Scheirer and Slaney (S&S) and GTZAN databases have been used extensively by researchers for evaluation [1,5,15,16,19,21]. However, some works used databases of their own creation [2,7,22].…”
Section: Related Work (mentioning)
confidence: 99%
“…Due to the massive amount of this data, it is impossible to manually generate classes, labels, descriptions, or transcriptions, or to perform many of the other tasks required to take advantage of the information [2]. To meet the demand for handling this data, a field of research known as audio content analysis (ACA), or machine listening, has recently emerged [1].…”
Section: Introduction (mentioning)
confidence: 99%
“…Among the main challenges in speech/music classification is obtaining high accuracy with short time delay and low complexity [5], given the vast amount of information that needs to be processed and used efficiently in applications. To discriminate audio content into categories such as music and speech, two successive stages have to be performed: i) extraction of features from the input audio data, and ii) classification of the data into established categories [2].…”
Section: Introduction (mentioning)
confidence: 99%
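The two-stage pipeline described in the excerpt above (feature extraction followed by classification) can be sketched minimally as follows. This is a hedged illustration, not the paper's actual method: the perceptual features and classifier used by the paper are not given here, so the sketch substitutes two classic low-level features (zero-crossing rate and short-time energy) and a crude mean-ZCR threshold rule, all of which are assumptions for demonstration only.

```python
import math

def zero_crossing_rate(frame):
    # Fraction of adjacent-sample pairs whose signs differ.
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def short_time_energy(frame):
    # Mean squared amplitude of the frame.
    return sum(x * x for x in frame) / len(frame)

def extract_features(signal, frame_len=256):
    # Stage i): split the signal into non-overlapping frames and
    # compute one (ZCR, energy) pair per frame.
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    return [(zero_crossing_rate(f), short_time_energy(f)) for f in frames]

def classify(features, zcr_threshold=0.1):
    # Stage ii): a toy decision rule (an assumption, not the paper's
    # classifier). Broadband/unvoiced speech tends to have a higher
    # average ZCR than tonal music, so threshold the mean ZCR.
    mean_zcr = sum(z for z, _ in features) / len(features)
    return "speech" if mean_zcr > zcr_threshold else "music"
```

Usage: a 200 Hz sine tone sampled at 8 kHz has a ZCR of about 0.05 per sample and classifies as "music" under this rule, while white noise (ZCR near 0.5) classifies as "speech". Real systems replace both stages with richer perceptual features and a trained classifier.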