2016 24th European Signal Processing Conference (EUSIPCO)
DOI: 10.1109/eusipco.2016.7760647
Face photo-sketch recognition using local and global texture descriptors

Cited by 23 publications (29 citation statements: 0 supporting, 29 mentioning, 0 contrasting). References 19 publications.
“…2) Comparison to the state-of-the-art methods: for the UoM-SGFSv2 sets A and B, we compare the performance of our DAEN with the EP(+PCA) [47], ET(+PCA) [48], D-RS+CBR [49], LGMS [50], DEEPS [9], SP-Net [36], Identity-aware CycleGAN [29], and Transfer deep feature learning [37] methods in Table 5. EP(+PCA), ET(+PCA), D-RS+CBR, and LGMS are traditional inter-modality methods; the Identity-aware CycleGAN is a deep-learning intra-modality method; and DEEPS, SP-Net, and Transfer deep feature learning are deep-learning inter-modality methods. As can be seen from Table 5, Table 6, Figure 10, and Figure 11, the proposed DAEN achieves the highest accuracy, and the performance of all methods is lower on the more challenging UoM-SGFSv2 set A.…”
Section: Results and Analysis, 1) Ablation Study (mentioning)
confidence: 99%
“…In this database, they built a model to reverse the process. Galea et al. [4] proposed a method that extracts multi-scale local binary pattern (MLBP) descriptors from overlapping patches of the sketch image, so that both global and local texture information can be used for face photo-sketch recognition.…”
Section: Related Work (mentioning)
confidence: 99%
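As a rough illustration of the patch-based descriptor this excerpt describes, here is a minimal sketch of extracting multi-scale LBP histograms from overlapping patches. The patch size, stride, and (P, R) scale pairs are illustrative assumptions, not values taken from [4].

```python
# Minimal sketch: MLBP histograms over overlapping patches.
# patch/stride/scales are illustrative assumptions, not the values in [4].
import numpy as np
from skimage.feature import local_binary_pattern

def mlbp_patch_descriptor(image, patch=32, stride=16,
                          scales=((8, 1), (16, 2), (24, 3))):
    """Concatenate uniform-LBP histograms over overlapping patches and scales."""
    h, w = image.shape
    feats = []
    for p, r in scales:
        # One LBP map per (P, R) scale; 'uniform' yields P + 2 pattern labels.
        lbp = local_binary_pattern(image, P=p, R=r, method="uniform")
        n_bins = p + 2
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                block = lbp[y:y + patch, x:x + patch]
                hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
                feats.append(hist / max(hist.sum(), 1))  # per-patch normalisation
    return np.concatenate(feats)
```

Descriptors computed this way for a photo and a sketch can then be compared with any histogram distance (chi-squared, cosine, and so on).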
“…using a linear combination of images, the Eigen-patches (EP) extension [15] performing synthesis at a local level, and the Bayesian framework in [16] that considers relationships among neighbouring patches for model construction. A more thorough review of FH algorithms may be found in [4], [13], [15]-[17]. State-of-the-art inter-modality methods that learn or extract modality-invariant features include the D-RS approach [2], [18], which compares SIFT and MLBP descriptors extracted from images convolved with three filters; the CBR method [19], which compares MLBP features extracted from individual facial components; the FaceSketchID system [12], which fuses D-RS with CBR; and the recent LGMS method [4], which compares MLBP features extracted from log-Gabor-filtered images using the Spearman rank-order correlation coefficient. Other methods and further information can be found in [4], [6], [12], [13], [15], [19]-[23].…”
Section: Related Work (mentioning)
confidence: 99%
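The LGMS step quoted above (MLBP features from log-Gabor-filtered images, scored with the Spearman rank-order correlation coefficient) maps onto a short pipeline. Below is an assumption-laden sketch, not the published implementation: it uses a single radial log-Gabor filter with illustrative parameters f0 and sigma_ratio, one LBP scale, and whole-image histograms rather than the per-patch MLBP features of [4].

```python
# Simplified LGMS-style pipeline: log-Gabor filtering -> LBP histogram ->
# Spearman rank-order correlation. Parameters are illustrative assumptions.
import numpy as np
from numpy.fft import fft2, ifft2, fftshift, ifftshift
from scipy.stats import spearmanr
from skimage.feature import local_binary_pattern

def log_gabor_filter(shape, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor transfer function (DC response zeroed)."""
    rows, cols = shape
    y = np.linspace(-0.5, 0.5, rows, endpoint=False)
    x = np.linspace(-0.5, 0.5, cols, endpoint=False)
    radius = np.sqrt(x[None, :] ** 2 + y[:, None] ** 2)
    radius[rows // 2, cols // 2] = 1.0  # avoid log(0) at the DC term
    g = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    g[rows // 2, cols // 2] = 0.0       # log-Gabor filters pass no DC
    return g

def log_gabor_response(image):
    """Magnitude of the image filtered in the frequency domain."""
    g = log_gabor_filter(image.shape)
    return np.abs(ifft2(ifftshift(fftshift(fft2(image)) * g)))

def lbp_hist(image, p=16, r=2):
    lbp = local_binary_pattern(image, P=p, R=r, method="uniform")
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2))
    return hist

def lgms_like_score(photo, sketch):
    """Higher Spearman correlation means a more similar photo/sketch pair."""
    rho, _ = spearmanr(lbp_hist(log_gabor_response(photo)),
                       lbp_hist(log_gabor_response(sketch)))
    return rho
```

In the method itself, MLBP histograms would be computed per patch and per scale and the rank correlation taken over the resulting feature vectors; the single-filter, whole-image version here only shows the shape of the pipeline.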