2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS)
DOI: 10.1109/btas.2018.8698554

Polarimetric Thermal to Visible Face Verification via Attribute Preserved Synthesis

Abstract: Thermal to visible face verification is a challenging problem due to the large domain discrepancy between the modalities. Existing approaches either attempt to synthesize visible faces from thermal faces or extract robust features from these modalities for cross-modal matching. In this paper, we take a different approach in which we make use of the attributes extracted from the visible image to synthesize the attribute-preserved visible image from the input thermal image for cross-modal matching. A pre-trained…

Citations: cited by 34 publications (25 citation statements)
References: 38 publications

Citation excerpts (ordered by relevance):
“…The Volume I data consists of images corresponding to 60 subjects. On the other hand, the Volume II data consists of images from 51 subjects (81 subjects in […]). […] Volume I and 25 subjects' images from Volume II are used for training, the remaining 26 subjects' images from Volume II are used for evaluation. We repeat this process 5 times and report the average results.…”
[Comparison table interleaved in the excerpt: [26] 85.42%, 82.49%, 21.46%, 26.25%; AP-GAN [5] 88.93% ± 1.54%, 84.16% ± 1.54%, 19.02% ± 1.69%, 23.90% ± 1.52%; Multi-stream GAN [36] 96.03%, 85.74%, 11.78%, 23.18%; Ours 93.68% ± 0.97%, 89.20% ± 1.56%, 13.46% ± 1.92%, 18.77% ± 1.36%.]
Section: Results (mentioning)
confidence: 99%
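The excerpt above describes the evaluation protocol: all Volume I subjects plus 25 randomly drawn Volume II subjects for training, the remaining 26 Volume II subjects for evaluation, repeated 5 times with averaged results. A minimal sketch of such a repeated random subject split is given below; the function and variable names are illustrative and not taken from the paper.

```python
import random

def make_splits(vol1_subjects, vol2_subjects, n_repeats=5, n_train_vol2=25, seed=0):
    """Repeat the train/test subject split described in the excerpt above.

    All Volume I subjects are always used for training; a random subset of
    Volume II subjects joins them, and the remaining Volume II subjects form
    the test set. The caller averages results over the repeats.
    """
    rng = random.Random(seed)
    splits = []
    for _ in range(n_repeats):
        vol2 = list(vol2_subjects)
        rng.shuffle(vol2)
        train = list(vol1_subjects) + vol2[:n_train_vol2]
        test = vol2[n_train_vol2:]          # the remaining 26 subjects
        splits.append((train, test))
    return splits

# Example: 60 Volume I subjects and 51 Volume II subjects, as in the excerpt.
splits = make_splits(range(60), range(60, 111))
```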
“…It has been shown that polarimetric thermal imaging captures additional geometric and textural facial details compared to conventional thermal imaging [10]. Hence, the polarization-state information has been used to improve the performance of cross-spectrum face recognition [10,27,30,35,26,5]. A polarimetric image, referred to as a Stokes image, is composed of three channels: S0, S1 and S2.…”
Section: Introduction (mentioning)
confidence: 99%
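The excerpt states that a Stokes image has three channels, S0, S1 and S2. As a hedged illustration, the sketch below assembles such a three-channel image from intensity measurements at polarizer angles 0°, 45°, 90° and 135° using the standard linear-polarization Stokes definitions; the excerpt itself does not say how the channels are acquired, so the acquisition step here is an assumption.

```python
import numpy as np

def stokes_image(i0, i45, i90, i135):
    """Stack the first three Stokes parameters into a 3-channel image.

    Standard linear-polarization definitions (assumed, not from the excerpt):
      S0 = I0 + I90    (total intensity, i.e. the conventional thermal image)
      S1 = I0 - I90
      S2 = I45 - I135
    """
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = i45 - i135
    return np.stack([s0, s1, s2], axis=-1)   # H x W x 3 Stokes image

# Toy example with random "measurements" of size 128 x 128.
h, w = 128, 128
frames = [np.random.rand(h, w).astype(np.float32) for _ in range(4)]
stokes = stokes_image(*frames)
print(stokes.shape)   # (128, 128, 3)
```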
“…4. In order to learn the discrimination in both image content and semantics, we adopt the triplet matching training strategy [14], [16], [17], [53]. Specifically, given sketch attributes, the discriminator is trained by using the following triplets: (i) real-sketch and real-sketch-attributes, (ii) synthesized-sketch and real-sketch-attributes, and (iii) wrong-sketch (real sketch but mismatching attributes) and same real-sketch-attributes.…”
Section: A. Stage 1: Attribute-to-Sketch (mentioning)
confidence: 99%
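The excerpt above lists the three (sketch, attribute) pairs fed to the conditional discriminator under the triplet matching strategy. The sketch below mirrors that pairing; the binary cross-entropy form of the loss, the 0.5 weights on the negative pairs, and the signature of D are assumptions, since the excerpt does not specify them.

```python
import torch
import torch.nn.functional as F

def triplet_matching_d_loss(D, real_sketch, fake_sketch, wrong_sketch, attributes):
    """Discriminator loss over the three (sketch, attribute) pairs in the excerpt.

    D(sketch, attributes) is assumed to return a matching score (a logit).
    Only the (real sketch, matching attributes) pair is labeled as true.
    """
    real_score = D(real_sketch, attributes)    # (i)   real sketch + real attributes
    fake_score = D(fake_sketch, attributes)    # (ii)  synthesized sketch + real attributes
    wrong_score = D(wrong_sketch, attributes)  # (iii) mismatching real sketch + same attributes

    ones = torch.ones_like(real_score)
    zeros = torch.zeros_like(real_score)
    loss = (F.binary_cross_entropy_with_logits(real_score, ones)
            + 0.5 * F.binary_cross_entropy_with_logits(fake_score, zeros)
            + 0.5 * F.binary_cross_entropy_with_logits(wrong_score, zeros))
    return loss
```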
“…where D_f = {D_1, …, D_m} denotes the multi-scale discriminators and X_f = {x_f^1, …, x_f^m} denotes the real training images at multiple scales 1, …, m. In order to preserve the geometric structure of the synthesized sketch from the attribute-to-sketch stage, we adopt the skip-connection architecture from UNet-related works [25], [53], [54]. By using skip-connections, the feature maps from the encoding network are concatenated with the feature maps in the decoding network.…”
Section: B. Stage 2: Sketch-to-Face (mentioning)
confidence: 99%
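The excerpt above describes skip connections that concatenate encoder feature maps with decoder feature maps. Below is a minimal, self-contained toy encoder-decoder illustrating that concatenation pattern; the layer sizes and module names are illustrative and not those of the cited network.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy encoder-decoder with a single skip connection, for illustration only."""

    def __init__(self, ch=3, feat=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(ch, feat, 3, padding=1), nn.ReLU())
        self.down = nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1)   # halve resolution
        self.up = nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1)
        # The decoder conv sees 2*feat channels: upsampled features + skipped encoder features.
        self.dec1 = nn.Conv2d(feat * 2, ch, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)                    # encoder feature map to be skipped
        bottom = self.down(e1)
        d1 = self.up(bottom)
        d1 = torch.cat([d1, e1], dim=1)      # skip connection: concatenate along channels
        return self.dec1(d1)

# Example forward pass on a 64x64, 3-channel input.
out = TinyUNet()(torch.randn(1, 3, 64, 64))
print(out.shape)   # torch.Size([1, 3, 64, 64])
```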
“…It has been applied in various fields, even achieving better performance than humans in most cases. Recently, more attention has been focused on heterogeneous recognition problems, such as sketch to photo (Tang and Wang 2002; Wang et al. 2014), near-infrared to visible (Li et al. 2013; Xiao et al. 2013), polarimetric thermal to visible (Di et al. 2018), and cross resolutions (Biswas et al. 2011). Due to their insensitivity to illumination (Zhu et al. 2014), near-infrared (NIR) devices are widely used in monitoring and security systems.…”
Section: Introduction (mentioning)
confidence: 99%