2013
DOI: 10.1016/j.engappai.2013.04.008
Evaluation of a set of new ORF kernel functions of SVM for speech recognition

Cited by 23 publications (10 citation statements); references 16 publications.
“…It is also worth noting that the MELC parameter has a much clearer geometrical interpretation than SVM's parameter C (which can be seen either as an abstract weight on training errors or as an upper bound on the size of the Lagrange multipliers). The parameter γ, or in general the formula for V_A(v), gives an estimate of the optimal kernel width in the one-dimensional projection of A onto v. There are many existing studies [29,30,31] and formulas for such objects; in particular, it is possible to use an adaptive kernel width [32], where each point x has its own kernel width σ_x.…”
Section: UCI Binary Classification
confidence: 99%
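The adaptive-kernel-width idea cited as [32] can be illustrated with a short sketch. As an assumption (this heuristic is common in density-adaptive kernel methods but is not necessarily the construction in [32]), each point's width σ_x is set to the distance from x to its k-th nearest neighbour:

```python
import numpy as np

def adaptive_rbf_kernel(X, k=3):
    """Gaussian kernel in which each point x has its own width sigma_x.

    sigma_x = distance from x to its k-th nearest neighbour
    (a common heuristic; an assumption, not the formula from [32]).
    """
    # Pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    # Column 0 of each sorted row is the point itself (distance 0),
    # so column k is the k-th nearest neighbour.
    sigma = np.sqrt(np.sort(d2, axis=1)[:, k])
    # Symmetrised adaptive kernel: K_ij = exp(-d2_ij / (sigma_i * sigma_j))
    return np.exp(-d2 / (sigma[:, None] * sigma[None, :]))

X = np.random.RandomState(0).randn(10, 2)
K = adaptive_rbf_kernel(X)
```

The geometric product sigma_i * sigma_j in the denominator is one standard way to keep an adaptive-width kernel symmetric.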
“…Feature selection [8] utilizing ABC and Nearest Neighbor was introduced for steganalysis in images. The Support Vector Machine (SVM) remained the prevailing classifier compared with the k-NN classifier and ANN by reason of its exceptional classification accuracy and generalization performance [9] [10].…”
Section: Literature Review
confidence: 99%
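The SVM-versus-k-NN comparison made in that statement can be reproduced in miniature. This is a hedged sketch only: the dataset is synthetic and the hyperparameters (RBF kernel, C = 1, k = 5) are arbitrary assumptions, not the settings used in [9] or [10]:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for extracted image/speech features (illustrative only)
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit both classifiers and compare held-out accuracy
svm_acc = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr).score(X_te, y_te)
knn_acc = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).score(X_te, y_te)
print(f"SVM accuracy: {svm_acc:.3f}  k-NN accuracy: {knn_acc:.3f}")
```

On real steganalysis or speech features the relative ranking depends on the feature set and tuning; the quoted papers report SVM's advantage empirically.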
“…When all the employed bees complete the search for new food sources, the fitness values of the new food sources are calculated and compared with those of the old ones according to the greedy selection mechanism; the employed bees then share the amounts and the positions of the food sources with the onlooker bees.…”

Algorithm 1: The pseudocode of the ABC algorithm.

    -- Initialization phase --
    (1)  Initialize the population of solutions and assign the population to employed bees
    (2)  while (cycle < MAXcycles) do
         -- Employed bee phase --
    (3)    for i = 1 to SN do
    (4)      Produce a new solution V_i for employed bee i and calculate its fitness value
    (5)      Apply the greedy selection mechanism between V_i and X_i; select the better one
    (6)      If the solution does not update, the non-updated number trial_i = trial_i + 1; otherwise trial_i = 0
    (7)    end for
         -- Onlooker bee phase --
    (8)    Calculate the selection probability P_i
    (9)    t = 0, i = 1
    (10)   while (t < SN) do
    (11)     if random < P_i then
    (12)       t = t + 1
    (13)       Produce a new solution V_i for the onlooker bee of solution i and calculate its fitness value
    (14)       Apply the greedy selection mechanism between V_i and X_i; select the better one
    (15)       If the solution does not update, trial_i = trial_i + 1; otherwise trial_i = 0
    (16)     end if
    (17)     i = i + 1
    (18)     If i = SN + 1, set i = 1
    (19)   end while (t = SN)
         -- Scout bee phase --
    (20)   if trial_i > LIMIT then
    (21)     Replace X_i with a new random solution
    (22)   end if
    (23)   Memorize the current optimal solution
    (24)   cycle = cycle + 1
    (25)  end while (cycle = MAXcycles)

Section: Artificial Bee Colony (ABC)
confidence: 99%
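The ABC loop quoted above can be sketched in runnable Python. This is a minimal interpretation, not the quoted paper's implementation: the onlooker phase is collapsed to a single roulette-wheel pass over the SN sources rather than the exact counting while-loop of Algorithm 1, and the fitness transform 1/(1 + f) is the common ABC convention, assumed here:

```python
import random

def abc_minimize(f, dim, bounds, sn=20, limit=30, max_cycles=200, seed=0):
    """Minimize f over [lo, hi]^dim with a basic Artificial Bee Colony loop."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialization phase: SN food sources, one per employed bee
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(sn)]
    vals = [f(x) for x in foods]
    trials = [0] * sn
    best_x, best_v = min(zip(foods, vals), key=lambda p: p[1])

    def fitness(v):
        # Standard ABC fitness transform (assumed, not from the quoted paper)
        return 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)

    def neighbour(i):
        # V_ij = X_ij + phi * (X_ij - X_kj) for one random dimension j, k != i
        k = rng.choice([s for s in range(sn) if s != i])
        j = rng.randrange(dim)
        v = foods[i][:]
        v[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
        v[j] = min(hi, max(lo, v[j]))
        return v

    def greedy(i, cand):
        # Greedy selection between V_i and X_i; track the trial counter
        nonlocal best_x, best_v
        cv = f(cand)
        if cv < vals[i]:
            foods[i], vals[i], trials[i] = cand, cv, 0
            if cv < best_v:
                best_x, best_v = cand, cv
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(sn):                      # employed bee phase
            greedy(i, neighbour(i))
        fits = [fitness(v) for v in vals]
        total = sum(fits)
        for i in range(sn):                      # onlooker bee phase (simplified)
            if rng.random() < fits[i] / total:
                greedy(i, neighbour(i))
        for i in range(sn):                      # scout bee phase
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                vals[i], trials[i] = f(foods[i]), 0
    return best_x, best_v

best_x, best_v = abc_minimize(lambda x: sum(t * t for t in x),
                              dim=5, bounds=(-5.0, 5.0))
```

Run on the 5-dimensional sphere function, the loop drives the best objective value close to zero, which is what the greedy-selection plus scout-replacement structure of Algorithm 1 is designed to achieve.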
“…In [14], a feature selection technique based on the ABC and k-Nearest Neighbor (k-NN) was employed for image steganalysis. Compared with the ANN and k-NN classifiers, the support vector machine (SVM) is a more powerful classification method due to its excellent classification accuracy and generalization performance [15,16].…”
Section: Introduction
confidence: 99%