2020
DOI: 10.1049/iet-ipr.2019.0907
Identification of wool and mohair fibres with texture feature extraction and deep learning

Cited by 25 publications (17 citation statements)
References 37 publications (42 reference statements)
“…Obtain high-level semantic information that can reflect the image category, thereby avoiding the tedious preprocessing and complex feature extraction of the original image. Yildiz [29] used deep learning methods to learn the texture features of video images and classify them. Rajagopal et al [30] used a convolutional neural network training method to improve the accuracy of image recognition.…”
Section: Related Work
confidence: 99%
“…A novel local binary pattern-based feature extraction method was proposed by Yildiz et al for fine fibers detection, which achieved the objective, easy, rapid, time, and cost-effective results [19]. Liu et al proposed a structured multi-feature extraction method for spatiotemporal activity recognition, which is an inspiration for defect detection [20].…”
Section: Feature Extraction
confidence: 99%
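The excerpt above credits Yildiz et al. with a local binary pattern (LBP)-based feature extraction method for fine-fibre detection. As a rough illustration of the underlying technique (not the authors' specific variant), a minimal 3×3 LBP can be sketched as follows; the function names and the plain 8-neighbour, radius-1 formulation are assumptions for the example:

```python
import numpy as np

def lbp_codes(img):
    """Minimal 3x3 local binary pattern (illustrative sketch).

    Each interior pixel's 8 neighbours are thresholded against the
    centre pixel and packed clockwise into an 8-bit code.
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # clockwise neighbour offsets, starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """Texture descriptor: normalised 256-bin histogram of LBP codes."""
    hist, _ = np.histogram(lbp_codes(img), bins=256, range=(0, 256))
    return hist / hist.sum()
```

The resulting histogram is what a classifier would typically consume as the texture feature vector; production code would more likely use `skimage.feature.local_binary_pattern`, which also offers rotation-invariant and "uniform" variants.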
“…Motivated by the recent contribution evaluation methods of surface defect detection, in this paper we used the following evaluation metrics, as shown in (20): 1) False positive rate (FPR), namely, the proportion of pixels falsely detected as defects; 2) False negative rate (FNR), namely, the proportion of pixels falsely detected as non-defects; 3) Mean absolute error (MAE), namely, the difference between the detection result (DR) and ground truth (GT…”
Section: Evaluation Metrics
confidence: 99%
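The three pixel-level metrics named in the excerpt (FPR, FNR, MAE over a detection result DR versus ground truth GT) can be sketched for binary defect masks as follows; the function name and 0/1-mask convention are assumptions for the example:

```python
import numpy as np

def defect_metrics(dr, gt):
    """FPR, FNR and MAE between a binary detection result (dr)
    and ground truth (gt), given as same-shape 0/1 arrays."""
    dr = np.asarray(dr, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    fp = np.logical_and(dr, ~gt).sum()   # non-defect pixels flagged as defects
    fn = np.logical_and(~dr, gt).sum()   # defect pixels missed by the detector
    fpr = fp / max(int((~gt).sum()), 1)  # fraction of true non-defect pixels
    fnr = fn / max(int(gt.sum()), 1)     # fraction of true defect pixels
    mae = np.abs(dr.astype(float) - gt.astype(float)).mean()
    return fpr, fnr, mae
```

For binary masks, MAE reduces to the fraction of disagreeing pixels, while FPR and FNR separate that disagreement by the ground-truth class.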
“…In recent years, methods based on deep learning have made remarkable achievements in image classification. 23,24 Deep learning uses a learning network of perceptrons with multiple hidden layers. It can automatically learn feedback and optimize the appropriate image features, and then combine and abstract from the low-level features to build a high-level representation.…”
Section: Related Work
confidence: 99%