2001
DOI: 10.1177/002029400103400802
Data Fusion by Intelligent Classifier Combination

Cited by 20 publications (11 citation statements)
References 12 publications
“…Thirdly, it has been accepted that there is no guarantee that the combined classifier is better than any single one of the component classifiers, or that more classifiers are always better [34]. According to the proved relationship between the above classification rules, the more neighbourhoods we use, the closer the estimated posterior probability is to the true posterior probability.…”
Section: Discussion
confidence: 99%
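The quoted point, that a combined classifier is not guaranteed to beat its best component, can be illustrated with a minimal sketch. The classifiers and posterior values below are invented for illustration; they are not from the cited paper.

```python
# Hypothetical sketch: fusing three classifiers by averaging their
# estimated class posteriors. The numbers are invented; they show that
# the combined rule can err even when one component is correct.

def combine_average(posteriors):
    """Average a list of per-class posterior estimates element-wise."""
    n = len(posteriors)
    n_classes = len(posteriors[0])
    return [sum(p[j] for p in posteriors) / n for j in range(n_classes)]

# Estimated posteriors P(class_j | d) for one sample whose true class is 0
p1 = [0.60, 0.40]   # classifier 1: correct
p2 = [0.45, 0.55]   # classifier 2: wrong
p3 = [0.40, 0.60]   # classifier 3: wrong

combined = combine_average([p1, p2, p3])
print(combined)
# The average favours class 1, so the ensemble misclassifies the sample
# even though classifier 1 alone would have got it right.
```

Two wrong but confident components outvote one correct one, which is exactly why adding more classifiers is not always better.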
“…Let $F = \{F_{1}, F_{2}, \ldots, F_{N}\}$ be a set of classifiers and $L = \{1, 2, \ldots, \ell\}$ be the label set of $\ell$ classes as given in [10]. For a given feature vector $d \in R^{n}$, the outcome of the $i^{th}$ classifier is represented as
$$F_{i}(d) = [o_{i1}(d)\; o_{i2}(d)\; \cdots\; o_{ij}(d)\; \cdots\; o_{i\ell}(d)]^{T}$$
where $o_{ij}(d)$ is the grade provided by the classifier $F_{i}$ to the hypothesis that $d$ comes from the class $j$.…”
Section: Multi Classifier System
confidence: 99%
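The notation above can be sketched in a few lines of code: each classifier $F_i$ maps a feature vector $d$ to a vector of per-class grades $[o_{i1}(d), \ldots, o_{i\ell}(d)]$. The toy grading rule below is an assumption for illustration only, not the method of the cited paper.

```python
# Minimal sketch of the grade-vector notation: a classifier maps a
# feature vector d to one normalised grade per class. The distance-based
# grading rule is a toy stand-in (hypothetical, not from the paper).

def f1(d, n_classes=3):
    """Return the grade vector [o_11(d), ..., o_1l(d)] for l classes."""
    m = sum(d) / len(d)                       # toy summary of the features
    raw = [1.0 / (1.0 + abs(m - j)) for j in range(n_classes)]
    s = sum(raw)
    return [r / s for r in raw]               # grades sum to 1

d = [0.9, 1.1]                                # feature vector in R^2
grades = f1(d)
print(grades)                                 # one grade per class
```

Normalising the grades so they sum to one lets them be read as estimates of the class posteriors, which is how most combination rules consume them.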
“…An approach typically taken in the classification literature to improve classification rates (and not yet attempted for this application) when standard classification algorithms have been shown not to meet the required performance is to form a multi-classifier system (MCS) (Kittler et al., 1998). This approach can be thought of as data fusion via MCS (Buxton et al., 2001) and may be applied when data are naturally decomposed and sampled from multiple sources/sensors. A classifier may be applied to feature vectors derived from each of the sources, $V^{1}_{t}$, $V^{2}_{t}$ and $V^{3}_{t}$, and a combination strategy may be used to fuse their outputs, $Y_{1}$, $Y_{2}$ and $Y_{3}$, in a parallel MCS as illustrated in Fig.…”
Section: Multi-classifier System
confidence: 99%
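The parallel MCS described in the quoted statement can be sketched as follows. The linear-score classifiers, their weights, the three source vectors, and the sum-rule fusion are all hypothetical choices made for illustration; the cited works do not prescribe these specifics.

```python
# Hypothetical parallel MCS sketch: one classifier per source-derived
# feature vector (V1, V2, V3), producing outputs Y1, Y2, Y3 that are
# fused with a simple sum rule. Weights and data are invented.

def make_classifier(weights):
    """Return a toy linear-score classifier over two classes."""
    def f(v):
        scores = [sum(w * x for w, x in zip(ws, v)) for ws in weights]
        s = sum(scores)
        return [sc / s for sc in scores] if s else [0.5, 0.5]
    return f

# One classifier per source/sensor
f1 = make_classifier([[1.0, 0.2], [0.3, 1.0]])
f2 = make_classifier([[0.8, 0.1], [0.2, 0.9]])
f3 = make_classifier([[1.1, 0.3], [0.4, 1.2]])

V1, V2, V3 = [0.7, 0.1], [0.6, 0.2], [0.8, 0.1]
Y1, Y2, Y3 = f1(V1), f2(V2), f3(V3)

# Sum-rule fusion of the three parallel outputs
fused = [a + b + c for a, b, c in zip(Y1, Y2, Y3)]
decision = fused.index(max(fused))
print(decision)
```

Because each classifier sees only its own source, the sources can be processed independently and only the per-class outputs need to meet at the fusion stage, which is what makes the parallel topology attractive for multi-sensor data.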