Bioinformatics Using Computational Intelligence Paradigms
DOI: 10.1007/10950913_2
Prototype Based Recognition of Splice Sites

Cited by 5 publications (5 citation statements)
References 53 publications
“…The classifier size and the classification time are constant with respect to the training set size. This has been demonstrated in comparison to SVM results for a large benchmark problem in computational biology, for which GRLVQ yields solutions with the same accuracy as a comparable SVM but whose size is up to a factor of 100 smaller, depending on the training set size [18].…”
Section: Discussion
confidence: 80%
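
The constant model size referred to in this statement follows from the structure of a nearest-prototype classifier: the stored model consists only of the prototypes and, for GRLVQ, one relevance weight per input dimension, so it does not grow with the number of training examples the way an SVM's support-vector set can. The sketch below illustrates this with a relevance-weighted nearest-prototype rule; the prototypes and weights are hypothetical placeholders, not the trained model from the cited experiments.

```python
# Minimal sketch of a GRLVQ-style nearest-prototype classifier (assumption:
# prototypes W, labels, and relevance weights lambda_ were trained elsewhere).
# The stored model is just W and lambda_, independent of the training set size.
import numpy as np

def grlvq_predict(x, W, labels, lambda_):
    """Assign x the label of the closest prototype under the
    relevance-weighted squared Euclidean distance."""
    d = ((W - x) ** 2) @ lambda_           # one distance per prototype
    return labels[np.argmin(d)]

# Hypothetical toy model: 2 prototypes in 4 dimensions.
W = np.array([[0.0, 0.0, 0.0, 0.0],
              [1.0, 1.0, 1.0, 1.0]])
labels = np.array([0, 1])
lambda_ = np.array([0.4, 0.3, 0.2, 0.1])   # non-negative relevances, sum to 1

print(grlvq_predict(np.array([0.9, 0.8, 1.1, 1.0]), W, labels, lambda_))  # -> 1
```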
“…LVQ-type classifiers have been designed as simple and intuitive classification methods, and variants show quite good performance in practice [17,18,24]. Since they have not explicitly been designed for large-margin optimization, it is interesting to see that large-margin bounds can also be derived for these models.…”
Section: Discussion
confidence: 99%
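
For reference, the margin quantity that typically appears in such analyses of GLVQ-type classifiers is the relative distance difference between the closest correct and the closest incorrect prototype; the cited bounds may instead use the closely related hypothesis margin, so the formula below is a sketch of the standard definition rather than a restatement of the cited result.

```latex
% Relative-distance margin for a sample x, where d^{+}(x) is the distance to
% the closest prototype of the correct class and d^{-}(x) the distance to the
% closest prototype of any wrong class:
\[
  \mu(x) \;=\; \frac{d^{+}(x) - d^{-}(x)}{d^{+}(x) + d^{-}(x)} \in [-1, 1].
\]
% x is classified correctly iff \mu(x) < 0; more strongly negative values
% indicate a larger classification margin.
```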
“…When we perform the classification based on the two features defined by the first eigendirections of Λ1 and Λ2, we lose almost no performance and still achieve 94.0% ± 0.38% test accuracy. SVM results reported in the literature even lie above 96% test accuracy [14,20]. Note, however, that our classifier is extremely sparse and simple and still achieves a performance that is only slightly worse.…”
Section: (B)(C)
confidence: 84%
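
The two-feature construction mentioned in this statement reduces each input to its projections onto the leading eigenvectors of the two learned relevance matrices. The sketch below shows that projection step under the assumption that the matrices are already available; the matrices used here are random placeholders standing in for the trained Λ1 and Λ2 from the cited study.

```python
# Sketch of the two-feature projection described above (assumption: Lambda1
# and Lambda2 are relevance matrices learned by a matrix-LVQ model).
import numpy as np

def leading_eigendirection(Lambda):
    """Return the eigenvector of the symmetric matrix Lambda with the
    largest eigenvalue."""
    eigvals, eigvecs = np.linalg.eigh(Lambda)
    return eigvecs[:, -1]                      # eigh sorts eigenvalues ascending

def two_feature_representation(X, Lambda1, Lambda2):
    v1 = leading_eigendirection(Lambda1)
    v2 = leading_eigendirection(Lambda2)
    return np.column_stack((X @ v1, X @ v2))   # shape (n_samples, 2)

# Toy usage with random placeholders (not the matrices from the cited study):
rng = np.random.default_rng(0)
A1, A2 = rng.normal(size=(2, 5, 5))
X = rng.normal(size=(10, 5))
features = two_feature_representation(X, A1 @ A1.T, A2 @ A2.T)
print(features.shape)                          # (10, 2)
```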
“…There has also been a recent study based on a Support Vector Machine (SVM) [45] and a method combining an SVM with a Hidden Markov Model (HMM) [46]. The approach discussed in [47] combines MDD with first-order Markov models, i.e. WAMs, at the leaves, as compared to a simple WMM.…”
Section: SS Sensor Design Concept
confidence: 98%
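
To make the contrast between the sensor models named here concrete, the sketch below scores a candidate splice-site window with a position-independent weight matrix model (WMM) and with a first-order weight array matrix (WAM), in which each base is conditioned on its predecessor. The probability tables are uniform placeholders, not the trained parameters from [47].

```python
# Illustration of WMM vs. WAM splice-site scoring (an assumption-laden sketch,
# not the implementation from the cited work).
import math

BASES = "ACGT"

def wmm_score(seq, p):
    # p[i][b]: probability of base b at position i (positions independent)
    return sum(math.log(p[i][b]) for i, b in enumerate(seq))

def wam_score(seq, p0, p_cond):
    # p0[b]: probability of the first base; p_cond[i][a][b]: probability of
    # base b at position i given base a at position i-1 (first-order Markov)
    score = math.log(p0[seq[0]])
    for i in range(1, len(seq)):
        score += math.log(p_cond[i][seq[i - 1]][seq[i]])
    return score

# Hypothetical uniform tables for a 3-base window, just to make the sketch run.
uniform = {b: 0.25 for b in BASES}
p = [uniform] * 3
p_cond = [None] + [{a: uniform for a in BASES}] * 2
print(wmm_score("GTA", p), wam_score("GTA", uniform, p_cond))
```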