2015
DOI: 10.2991/amcce-15.2015.97

CSF Images Fast Recognition Model Based on Improved Convolutional Neural Network

Abstract: Feature sparseness is an important characteristic of features, one that directly affects the accuracy of image recognition [1]. By studying the traditional convolutional neural network, we find that learning the image features of cerebrospinal fluid cells easily overfits, but that using the rectified linear (ReLU) activation function instead of the sigmoid activation function yields sparser extracted features and a faster convergence rate during training. The extracted features are then classified th…
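The sparsity claim in the abstract is easy to illustrate: ReLU zeroes every negative pre-activation, so its feature maps contain many exact zeros, while sigmoid outputs are strictly positive everywhere. The PyTorch snippet below demonstrates this on random data; the layer shape and input size are assumptions for illustration, not the paper's architecture.

```python
# Compare activation sparsity of ReLU vs. sigmoid on the same feature maps.
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 16, kernel_size=5)   # illustrative single conv layer
x = torch.randn(8, 1, 64, 64)            # stand-in batch of grayscale images
pre = conv(x)                            # pre-activation feature maps

relu_feats = torch.relu(pre)             # many exact zeros -> sparse
sigmoid_feats = torch.sigmoid(pre)       # strictly in (0, 1) -> dense

print("ReLU zero fraction:   ", (relu_feats == 0).float().mean().item())
print("sigmoid zero fraction:", (sigmoid_feats == 0).float().mean().item())
```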

Cited by 74 publications (112 citation statements). References 8 publications.
“…To evaluate our proposed method, we employ two strong baseline methods: ResNet-18 [21] and EfficientNet-b0 [22], both of which have high performance in image recognition and medical applications. In addition, we compare them with the performances of the MVCTNet, using two baseline methods as the feature extractors, on our dataset.…”
Section: B. Comparisons With Baseline Methods (mentioning, confidence: 99%)
“…For this, we extract representations g_a = G_a(x_a) and g_b = G_b(x_b), the outputs of the global average pooling (GAP) layer of the feature extractors, which serve as the inputs to the task layers and to our dissimilarity loss. We employ ResNet-18 [21] and EfficientNet-b0 [22] pre-trained on the ImageNet dataset as the feature extractors (G_a, G_b), together with the two task layers (F_a, F_b), in our experiments. For the task layers, which consist of one MLP layer, we train two of them (F_a, F_b) using g_a and g_b, respectively,…”
(mentioning, confidence: 99%)
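The quoted two-branch setup can be sketched as follows. This is a minimal PyTorch reconstruction of the sentence above, not the MVCTNet authors' code: the head widths follow the standard torchvision backbones, and since the excerpt does not define the dissimilarity loss, the cosine-similarity penalty below is an assumed placeholder.

```python
# Sketch: two ImageNet pre-trained feature extractors (G_a, G_b) whose GAP
# outputs g_a, g_b feed one-layer task heads (F_a, F_b) and a dissimilarity
# loss. Hypothetical reconstruction, not the cited paper's implementation.
import torch
import torch.nn as nn
import torchvision.models as models

class TwoBranchModel(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # G_a: ResNet-18 trunk up to and including its GAP layer.
        resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.G_a = nn.Sequential(*list(resnet.children())[:-1])
        # G_b: EfficientNet-b0 feature stack plus its GAP layer.
        effnet = models.efficientnet_b0(
            weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
        self.G_b = nn.Sequential(effnet.features, effnet.avgpool)
        # F_a, F_b: one linear (MLP) task layer per branch.
        self.F_a = nn.Linear(512, num_classes)   # ResNet-18 GAP width
        self.F_b = nn.Linear(1280, num_classes)  # EfficientNet-b0 GAP width

    def forward(self, x_a, x_b):
        g_a = torch.flatten(self.G_a(x_a), 1)    # g_a = G_a(x_a)
        g_b = torch.flatten(self.G_b(x_b), 1)    # g_b = G_b(x_b)
        return self.F_a(g_a), self.F_b(g_b), g_a, g_b

def dissimilarity_loss(g_a, g_b):
    # Assumed placeholder: penalize agreement between the two
    # representations so the branches learn complementary features.
    return nn.functional.cosine_similarity(g_a, g_b, dim=1).mean()
```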
“…Although not the focus of this paper, it is also noticeable that folded ResNets with learned weights achieve higher accuracy than previously reported in both datasets, especially with deeper configurations. (b) also compares with popular efficient models: DenseNet [88], MobileNetV2 [89], NasNet [90], EfficientNet [91].…”
Section: ImageNet (mentioning, confidence: 99%)
“…HFN is both the most parameter-efficient and the tiniest. (b) also compares with popular efficient models: DenseNet [88], MobileNetV2 [89], NasNet [90], EfficientNet [91].…”
(mentioning, confidence: 99%)
“…Gao [7] applied whitening preprocessing and stochastic pooling to a traditional CNN, which improved the network's generalization ability for military image classification. Zhang et al. [8] constructed a seven-layer deep convolutional neural network (DCNN) for vehicle type identification; in comparative experiments with different model parameters, the recognition rate reached 96.8%. Guo [9] constructed a DCNN for hand-printed character recognition; the experimental results show that the receptive-field size has a significant influence on the number of model parameters but little effect on the recognition rate, while the running time follows the opposite trend.…”
Section: Introduction (mentioning, confidence: 99%)
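The stochastic pooling attributed to Gao [7] above replaces max/average pooling with sampling: within each pooling window, one activation is drawn with probability proportional to its magnitude. The sketch below assumes the Zeiler–Fergus formulation with non-overlapping windows and a small epsilon for all-zero windows; it is illustrative, not the cited paper's code.

```python
# Stochastic pooling sketch: sample one activation per k x k window with
# probability proportional to its (non-negative) value.
import torch
import torch.nn.functional as F

def stochastic_pool2d(x: torch.Tensor, k: int = 2) -> torch.Tensor:
    """x: (N, C, H, W) non-negative activations; H and W divisible by k."""
    n, c, h, w = x.shape
    # Gather non-overlapping k x k windows as rows: (N*C*windows, k*k).
    patches = F.unfold(x.reshape(n * c, 1, h, w), kernel_size=k, stride=k)
    patches = patches.transpose(1, 2).reshape(-1, k * k)
    # Probabilities proportional to activation value; the epsilon keeps
    # all-zero windows valid (they then sample uniformly).
    weights = patches + 1e-12
    probs = weights / weights.sum(dim=1, keepdim=True)
    idx = torch.multinomial(probs, num_samples=1)
    return patches.gather(1, idx).reshape(n, c, h // k, w // k)

# Example: pool a ReLU feature map from 8x8 down to 4x4.
feat = torch.relu(torch.randn(1, 3, 8, 8))
print(stochastic_pool2d(feat).shape)  # torch.Size([1, 3, 4, 4])
```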