2022
DOI: 10.1007/s00521-022-07613-7
Development of novel automated language classification model using pyramid pattern technique with speech signals

Cited by 6 publications (3 citation statements)
References: 59 publications
“…The dataset, consisting of 7101 sound segments representing three emotional states, achieved a 93.40% classification accuracy using the proposed 1D-OLBP and NCA-based method. The research [30] describes a novel pyramid structure for feature extraction in speech-language classification, which is applied to two datasets: a new large speech dataset and the VoxForge dataset. The approach selects 1000 interesting features using neighborhood component analysis, which are subsequently classified using a quadratic support vector machine classifier.…”
Section: Background
mentioning
confidence: 99%
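The pipeline summarised in the statement above (NCA-based selection of the 1000 strongest features followed by a quadratic support vector machine) can be sketched roughly as below. This is not the authors' implementation: scikit-learn's NCA learns a linear transformation rather than explicit per-feature weights (as MATLAB's fscnca does), so the weights here are approximated from the column norms of that transformation, and the data, feature count, and k are placeholders.

```python
# Hedged sketch: NCA-style feature selection followed by a quadratic SVM.
# X and y are synthetic stand-ins; the cited work extracts features with a
# pyramid pattern technique, which is not reproduced here.
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def select_top_features_nca(X, y, k=1000, random_state=0):
    """Rank features via an NCA transformation and keep the top k.

    The per-feature weight is approximated by the norm of each feature's
    column in the learned transformation matrix (an assumption, not the
    exact NCA feature-weighting used in the cited paper).
    """
    nca = NeighborhoodComponentsAnalysis(random_state=random_state)
    nca.fit(X, y)
    weights = np.linalg.norm(nca.components_, axis=0)  # one weight per input feature
    top_idx = np.argsort(weights)[::-1][:k]
    return X[:, top_idx], top_idx

# Placeholder data standing in for speech-segment features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # 200 segments, 64 candidate features
y = rng.integers(0, 3, size=200)      # 3 illustrative language classes

X_sel, idx = select_top_features_nca(X, y, k=16)  # k=1000 in the cited work
qsvm = SVC(kernel="poly", degree=2)               # "quadratic SVM" classifier
print(cross_val_score(qsvm, X_sel, y, cv=5).mean())
```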
“…Community emotion analysis interprets the emotional state of users from visual and auditory cues such as facial expressions, tone of voice, and body language, especially on social media and other platforms [13,14]. For this purpose, different methods have been developed using artificial intelligence and machine learning techniques [15][16][17][18]. This study focuses on sound-based community emotion recognition (SCED).…”
Section: A. Background
mentioning
confidence: 99%
“…X = (f − min(f)) / (max(f) − min(f)) (15)
se = ψ(X, out) (16)
where X represents the normalized feature vector after min-max normalization; se, the qualified index vector; ψ(·), the neighborhood component analysis feature selector; and out, the real output.…”
Section: X =
mentioning
confidence: 99%
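A minimal sketch of Eq. (15), the column-wise min-max normalization described in the statement above; variable names are illustrative, and the subsequent NCA selection ψ(·) of Eq. (16) corresponds to the selection step sketched earlier.

```python
# Minimal sketch of Eq. (15): scale each feature column to [0, 1].
import numpy as np

def min_max_normalize(f):
    """Apply (f - min(f)) / (max(f) - min(f)) to each feature column."""
    f = np.asarray(f, dtype=float)
    f_min = f.min(axis=0)
    f_max = f.max(axis=0)
    # small epsilon (an added safeguard, not in the original formula) avoids
    # division by zero for constant columns
    return (f - f_min) / (f_max - f_min + 1e-12)

features = np.array([[2.0, 10.0], [4.0, 30.0], [6.0, 20.0]])
print(min_max_normalize(features))  # each column now spans [0, 1]
```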