2014
DOI: 10.3923/itj.2015.16.23
Feature Fusion Using Automatic Generated RBF Neural Network

Abstract: In this study, a strategy for feature fusion in team-behavior recognition using an automatically generated RBF neural network is proposed, since various features need to be extracted in the course of team-behavior recognition and it is difficult to estimate the contribution of each feature to identifying and interpreting team behaviors. The burden on the high-level recognition algorithm is eased by using the underlying features of the moving target, such as the trajectory characteristics extracted …
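The visible excerpt does not describe how the paper's RBF centers are generated automatically, so the following is only a minimal sketch of the general structure an RBF network used for fusing concatenated feature vectors might take: Gaussian basis activations over fixed centers, with a linear output layer fit by least squares. The names `RBFNet`, `rbf_features`, and the choice of fixed centers are illustrative assumptions, not the authors' method.

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Gaussian RBF activations: phi[i, j] = exp(-gamma * ||x_i - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

class RBFNet:
    """Minimal RBF network: fixed centers, linear output layer.

    Illustrative sketch only; the paper's automatic center generation
    is not reproduced here.
    """
    def __init__(self, centers, gamma=1.0):
        self.centers = centers
        self.gamma = gamma
        self.w = None

    def fit(self, X, Y):
        # Fit the output weights by ordinary least squares on the
        # RBF-transformed (fused) feature matrix.
        Phi = rbf_features(X, self.centers, self.gamma)
        self.w, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
        return self

    def predict(self, X):
        return rbf_features(X, self.centers, self.gamma) @ self.w
```

In a feature-fusion setting, `X` would be the concatenation of the different low-level descriptors (e.g. trajectory features) extracted for each observation, and the RBF layer provides a common nonlinear space in which their joint contribution is learned.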

Cited by 1 publication (1 citation statement). References 32 publications (33 reference statements).
“…In optical text recognition, Zhu et al. [152] found that when the attention center of some characters shifted, the model would become overconfident, so they proposed Calibration CNN, which uses a convolutional neural network to predict weights and biases that adjust the logits. By learning the relationship between high-level features and classification correctness, Wang et al. [153] constructed a detector that predicts the probability of incorrect output on neural networks to improve confidence. Treating the class-imbalance problem as a label-shift problem, from the perspective of an optimal Bayes classifier, Tian et al. [57] proposed a post-training prior rebalancing method that tunes a flexible post-training hyper-parameter and modifies the classifier margin to deal with the imbalance problem.…”
Section: E. Parametric Methods
confidence: 99%
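The post-training prior rebalancing idea mentioned for Tian et al. [57] can be sketched in its generic label-shift form: reweight a trained classifier's posteriors by the ratio of test to training class priors, with a tunable exponent standing in for the flexible post-training hyper-parameter. This is a generic illustration of the technique's family, not the exact method of [57]; the function name and the `tau` parameter are assumptions.

```python
import numpy as np

def rebalance_posteriors(probs, train_prior, test_prior, tau=1.0):
    """Post-hoc label-shift correction of softmax outputs.

    p'(y|x) ∝ p(y|x) * (test_prior[y] / train_prior[y]) ** tau

    tau is a post-training hyper-parameter controlling the strength of
    the rebalancing (tau=0 leaves the classifier unchanged).
    """
    w = (np.asarray(test_prior) / np.asarray(train_prior)) ** tau
    adjusted = np.asarray(probs) * w
    # Renormalize so each row is again a probability distribution.
    return adjusted / adjusted.sum(axis=-1, keepdims=True)
```

For a classifier trained on a skewed prior (e.g. 90/10) but evaluated under a uniform one, the correction shifts probability mass toward the minority class, which is the margin modification the citation statement alludes to.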