2017
DOI: 10.1117/1.jei.26.2.023010
Automatic classification of ceramic sherds with relief motifs

Cited by 8 publications (4 citation statements)
References 28 publications
“…For example, some scholars use principal component analysis (PCA) and a CNN together: PCA first reconstructs the image, a CNN is then trained on the character feature images, and the resulting error rate is kept below 10% [17]. Some researchers use an improved LCNN for handwritten Chinese characters.…”
Section: Related Work
confidence: 99%
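The PCA-then-CNN pipeline described in the excerpt above can be sketched as follows. This is a minimal NumPy illustration on flattened grey-level images; the CNN training stage is omitted (the cited work's actual network is not specified here), and all names are hypothetical.

```python
import numpy as np

def pca_reconstruct(images, n_components):
    """Project flattened images onto the top principal components
    and reconstruct them, discarding low-variance detail."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD gives the principal axes without forming the covariance matrix.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]           # (k, n_pixels)
    scores = Xc @ components.T               # (n_images, k)
    return (scores @ components + mean).reshape(images.shape)

# Toy data: 10 "images" of 8x8 pixels.
rng = np.random.default_rng(0)
imgs = rng.random((10, 8, 8))
recon = pca_reconstruct(imgs, n_components=3)
# The reconstructed images would then be fed to a CNN for training.
```

With few components the reconstruction is deliberately lossy, which is the point of the denoising step; using all (rank-many) components recovers the originals.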
“…To ensure faster pottery classification at a later stage, each piece of pottery is manually labeled during laser scanning, so the labeling information is directly accessible when the pottery's 3D model is extracted. Researchers in the literature [23] were inspired by the pyramid histogram and used SVM models to classify the visual features of pottery scans. However, such machine learning methods still leave considerable room for optimization in both accuracy and speed.…”
Section: Related Work
confidence: 99%
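The pyramid-histogram-of-visual-words representation mentioned in [23] builds histograms of quantized local descriptors. A minimal sketch of the bag-of-visual-words quantization step, assuming descriptors are already extracted and a codebook is given (the full pipeline uses dense descriptors, a spatial pyramid, and an SVM, all omitted here; every name below is hypothetical):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors against a visual-word codebook and
    return a normalized occurrence histogram (bag of visual words)."""
    # Distance of each descriptor to each codeword.
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)                  # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
codebook = rng.random((16, 32))               # 16 visual words, 32-dim descriptors
desc = rng.random((200, 32))                  # descriptors from one sherd image
h = bow_histogram(desc, codebook)
# A spatial pyramid concatenates such histograms over image sub-regions,
# and the resulting vectors are fed to an SVM classifier.
```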
“…Binary patterns were automatically extracted from the 3D scans, and classification was done by training an SVM model with pyramid histograms of visual words (PHOW). The recognition rate was below 85% (Debroutelle et al., 2017). Then, we exploited well-known CNN models: AlexNet (Krizhevsky, 2014), VGG11 (Simonyan and Zisserman, 2014) and ResNet18 (He et al., 2015), with fine-tuning on our dataset.…”
Section: Related Work
confidence: 99%
“…2. For more details, the reader is referred to (Debroutelle et al., 2017). The dataset is composed of 888 grey-level images resized to 224x224 with a black background.…”
Section: Archaeological Materials
confidence: 99%
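The preprocessing described above (grey-level images resized to 224x224 on a black background) can be sketched as follows. This is a minimal NumPy version that pads the image onto a square black canvas and scales it by nearest-neighbour sampling; the resizing method actually used by the authors is not specified, so this is an assumption.

```python
import numpy as np

def to_224_black_bg(img):
    """Pad a grey-level image onto a square black canvas, then
    resize to 224x224 with nearest-neighbour sampling."""
    h, w = img.shape
    side = max(h, w)
    canvas = np.zeros((side, side), dtype=img.dtype)   # black background
    top, left = (side - h) // 2, (side - w) // 2
    canvas[top:top + h, left:left + w] = img
    # Nearest-neighbour index maps from the 224 output pixels to the canvas.
    ys = np.arange(224) * side // 224
    xs = np.arange(224) * side // 224
    return canvas[np.ix_(ys, xs)]

sherd = np.full((100, 160), 255, dtype=np.uint8)       # toy grey-level scan
out = to_224_black_bg(sherd)
```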