2002
DOI: 10.1023/a:1012435301888

The Relaxed Online Maximum Margin Algorithm

Abstract: We describe a new incremental algorithm for training linear threshold functions: the Relaxed Online Maximum Margin Algorithm, or ROMMA. ROMMA can be viewed as an approximation to the algorithm that repeatedly chooses the hyperplane that classifies previously seen examples correctly with the maximum margin. It is known that such a maximum-margin hypothesis can be computed by minimizing the length of the weight vector subject to a number of linear constraints. ROMMA works by maintaining a relatively si…
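
As a rough illustration of the update the abstract describes, here is a minimal Python sketch of the mistake-driven ROMMA update as given in Li and Long (2002): on each mistake, the new weight vector is the minimum-norm vector satisfying two relaxed linear constraints, which has a closed form. The function names and the synthetic training loop are ours, not from the paper; treat this as a sketch rather than the authors' implementation.

```python
import numpy as np

def romma_update(w, x, y):
    """One ROMMA update on a mistaken example (x, y), y in {-1, +1}.

    The new weight vector is the minimum-norm vector satisfying the
    relaxed constraints  w_new . w >= ||w||^2  and  y (w_new . x) >= 1,
    which has the closed form  w_new = c*w + d*x  (Li and Long, 2002).
    """
    if not np.any(w):
        # First mistake: shortest w achieving y (w . x) = 1.
        return (y / np.dot(x, x)) * x
    xx = np.dot(x, x)           # ||x||^2
    ww = np.dot(w, w)           # ||w||^2
    wx = np.dot(w, x)           # w . x
    denom = xx * ww - wx ** 2   # positive unless x is parallel to w
    c = (xx * ww - y * wx) / denom
    d = (ww * (y - wx)) / denom
    return c * w + d * x

def romma_train(X, Y, epochs=1):
    """Non-aggressive ROMMA: update only when the prediction is wrong."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, y in zip(X, Y):
            if y * np.dot(w, x) <= 0:   # mistake on example (x, y)
                w = romma_update(w, x, y)
    return w
```

The aggressive variant of the paper triggers the same update whenever the margin condition y (w . x) >= 1 is violated, not only on outright mistakes.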

Cited by 132 publications (1 citation statement). References 36 publications.
“…MDLText was compared with the following traditional online learning methods: M.NB [McCallum and Nigam 1998], B.NB [McCallum and Nigam 1998], Perceptron [Freund and Schapire 1999], Stochastic Gradient Descent (SGD) [Zhang 2004], the Approximate Large Margin Algorithm (ALMA) [Hoi et al. 2014, Gentile 2002], Online Gradient Descent (OGD) [Zinkevich 2003], and the Relaxed Online Maximum Margin Algorithm (ROMMA) [Li and Long 2002]. The M.NB and B.NB methods were used from the scikit-learn library in Python. The experiments with the Perceptron, ALMA, OGD, SGD, and ROMMA methods were carried out using functions from the LIBOL library [Hoi et al. 2014] for MATLAB.…”
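
As context for the comparison described above, the following is a minimal sketch of how scikit-learn's incremental interface (`partial_fit`) can drive several of the listed online learners in Python. The synthetic data and variable names are ours; this is not the experimental code from the citing study, which used scikit-learn only for M.NB and B.NB and LIBOL for the rest.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB, BernoulliNB
from sklearn.linear_model import Perceptron, SGDClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(1000, 20)).astype(float)  # toy count features
y = (X[:, 0] + X[:, 1] > 4).astype(int)                # toy binary labels

models = {
    "M.NB": MultinomialNB(),
    "B.NB": BernoulliNB(),
    "Perceptron": Perceptron(),
    "SGD": SGDClassifier(loss="hinge"),
}

# Online pass: feed the stream in mini-batches via partial_fit.
classes = np.unique(y)  # must be supplied on the first partial_fit call
for start in range(0, len(X), 100):
    xb, yb = X[start:start + 100], y[start:start + 100]
    for model in models.values():
        model.partial_fit(xb, yb, classes=classes)

for name, model in models.items():
    print(name, (model.predict(X) == y).mean())
```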