2014
DOI: 10.22436/jmcs.012.03.01

Speech Emotion Recognition Based On Learning Automata In Fuzzy Petri-net

Abstract: This paper explores how the number of fuzzy features and reasoning rules influences the rate of emotional speech recognition. The emotional speech signal is one of the most effective and natural channels in human interaction, and it facilitates communication between man and machine. This paper introduces a novel method for speech emotion recognition based on mind-inspired inference. The foundation of the proposed method is the inference of rules in a Fuzzy Petri-net (FPN) combined with learning automata. FPN is a…
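The abstract pairs fuzzy Petri-net rule inference with learning automata, which would adapt the rule parameters during training. As a rough illustration of the standard FPN reasoning step it refers to (not the paper's actual model, which is truncated above), the sketch below fires a rule when all of its antecedent truth degrees exceed a threshold and propagates min(inputs) x certainty factor to the consequent place; the rule, place names, and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    inputs: list            # antecedent places of the rule
    output: str             # consequent place
    cf: float               # certainty factor of the rule
    threshold: float = 0.1  # minimum truth degree required to fire

def fpn_inference(tokens, rules, max_iters=10):
    """tokens: dict mapping place name -> truth degree in [0, 1]."""
    tokens = dict(tokens)
    for _ in range(max_iters):
        changed = False
        for r in rules:
            degrees = [tokens.get(p, 0.0) for p in r.inputs]
            if min(degrees) >= r.threshold:
                new = min(degrees) * r.cf
                if new > tokens.get(r.output, 0.0):  # keep the strongest support
                    tokens[r.output] = new
                    changed = True
        if not changed:
            break
    return tokens

# Hypothetical rule: IF high_pitch AND high_energy THEN anger (CF = 0.9).
rules = [Rule(inputs=["high_pitch", "high_energy"], output="anger", cf=0.9)]
print(fpn_inference({"high_pitch": 0.8, "high_energy": 0.7}, rules))  # anger ≈ 0.63
```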

Cited by 5 publications (2 citation statements) · References 24 publications (28 reference statements)
“…At the first stage of our proposed method, preprocessing tasks are performed on the raw speech input signal using windowing techniques (Kowalczyk and van der Wal, 2013). The windowing is followed by the Discrete Fourier Transform (DFT) of each frame to obtain the spectrum of the speech signal (Motamed, 2014). Then, frequency warping is used to convert the speech spectrum to the Mel scale, where a triangular filter bank with uniformly spaced filters is applied (Rahul et al., 2015).…”
Section: Feature Extraction (MFCC)
mentioning
confidence: 99%
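The excerpt above outlines the standard MFCC front end: framing and windowing, a DFT per frame, and Mel-scale warping through a uniformly spaced triangular filter bank. A minimal NumPy/SciPy sketch of that pipeline follows; it illustrates the generic procedure rather than the cited implementation, and the frame length, hop size, and filter counts are assumed values.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced uniformly on the Mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_ceps=13):
    # Frame the raw signal (assumes len(signal) >= frame_len) and apply a
    # Hamming window to each frame.
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Magnitude spectrum of every windowed frame via the DFT.
    spectrum = np.abs(np.fft.rfft(frames, n_fft))
    # Warp to the Mel scale with the triangular filter bank, then take logs.
    log_mel = np.log(spectrum @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # DCT-II decorrelates the log Mel energies; keep the first n_ceps coefficients.
    return dct(log_mel, type=2, axis=1, norm='ortho')[:, :n_ceps]

# Example with a synthetic one-second signal at 16 kHz.
features = mfcc(np.random.randn(16000))
print(features.shape)  # (number of frames, 13)
```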
“…This research uses the already existing Multi-robot-mediated Intervention System (MRIS) model for measuring the joint attention and imitation of children with ASD (Ali et al., 2019). Previously, for children with ASD, HMM has been used to automatically segment conversational audio into semantically relevant components (Yu et al., 2018), to redress attention deficits in autistic children by addressing the problem of focused attention (Motamed et al., 2015), and to quantify the influence of autism on brain function through statistical properties of time-varying brain states (Dammu & Bapi, 2019). Another study focused on determining a person's level of autism using HMM.…”
mentioning
confidence: 99%