2014
DOI: 10.1007/s11704-014-3295-3
Emotion recognition from thermal infrared images using deep Boltzmann machine

Cited by 40 publications
(27 citation statements)
References 16 publications
“…LGBP-TOP [107], Riemannian manifolds [108], TDHF [109], StaFs [104], [109]
Local Static: Mean intensity [110], Eigenimages [75], GLCM [104], [111]; Dynamic: BoW Hist. [112]
Geometry, Global Static: Landmark locations [113], Landmark distances [114], PBVD [115], Candide Facial Grid [116], Geometric distance [117], [118], EDM [92], 3D mesh+Manifolds [119], Depth map [120], [121], LBP [122], [123], Curvature maps [124], [125]; Dynamic: Optical flow [126], MHI [127], FFD [127], Level curve deformations [128], FFD+QT [129], LBP-TOP [130]
Local Static: Curvature labels [131], Closed curves [132], DMCIC+HOG [125], Depth map+SIFT [120], BFSC [133]; Dynamic: MU [134], [135], FAP [136], [137], EDM+Motion vectors [138]
Appearance + Geometry, Static: Shape+Color [139], Landmark distances+Angles+HOG [140], 3DMM [141], SFAM+LBP [142],…”
Section: Recognition of FEs (mentioning; confidence: 99%)
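Several of the appearance descriptors in this taxonomy (LBP, LBP-TOP) build on the basic Local Binary Pattern. As a purely illustrative sketch (the function name and the 3x3 sampling scheme below are assumptions, not taken from the cited works), an 8-neighbor LBP over a grayscale image can be computed as:

```python
import numpy as np

def lbp_8neighbor(img: np.ndarray) -> np.ndarray:
    """Basic 3x3 Local Binary Pattern: each interior pixel is encoded
    by thresholding its 8 neighbors against the center value."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]  # center pixels (the 1-pixel border is skipped)
    # Neighbor offsets, clockwise from top-left; each bit of the
    # code records whether that neighbor is >= the center.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy : img.shape[0] - 1 + dy,
                 1 + dx : img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

A histogram of these codes over image patches is the typical LBP feature; LBP-TOP extends the idea to three orthogonal space-time planes.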
“…As such, these are also applicable to the RGB case. In [104], a combination of StaFs, 2D-DCT, and GLCM features is used, extracting both local and global information.…”
Section: Feature Extraction (mentioning; confidence: 99%)
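The pairing described in [104] can be sketched in NumPy: a 2D DCT supplies global low-frequency coefficients, while a gray-level co-occurrence matrix (GLCM) captures local texture statistics. The parameter choices below (quantization levels, horizontal displacement, orthonormal DCT-II) are illustrative assumptions, not those of [104]:

```python
import numpy as np

def dct2d(img: np.ndarray) -> np.ndarray:
    """Orthonormal 2D DCT-II; its low-frequency coefficients serve as
    global appearance features."""
    def basis(n):
        k = np.arange(n)[:, None]
        i = np.arange(n)[None, :]
        m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        m[0] = np.sqrt(1.0 / n)  # the DC row has a different scale
        return m
    h, w = img.shape
    return basis(h) @ img.astype(float) @ basis(w).T

def glcm_horizontal(img: np.ndarray, levels: int = 8) -> np.ndarray:
    """Normalized co-occurrence matrix of horizontally adjacent
    gray-level pairs; assumes 8-bit input. Statistics derived from it
    (contrast, energy, homogeneity, ...) serve as local texture features."""
    q = np.clip((img.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    left, right = q[:, :-1], q[:, 1:]
    m = np.zeros((levels, levels))
    np.add.at(m, (left.ravel(), right.ravel()), 1.0)
    return m / m.sum()
```

A combined feature vector would then concatenate, e.g., `dct2d(img)[:4, :4].ravel()` with scalar statistics computed from `glcm_horizontal(img)`.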
“…Here we use tomato and wheat crops, which are purely organic and grown with a complete nutrient regimen according to the Chinese standards for planting, cropping, and maintenance. For this project [7], the virus was provided by the Institute of Agricultural Sciences; it was injected into selected crops, which were then separated from the overall crop into a quarantined area so that the germs/virus would not spread to the healthy plants.…”
Section: A Collection of Plants and Bacteria Separately (mentioning; confidence: 99%)
“…emerged as an automatic feature learning tool that requires no manual intervention. Although such networks have achieved reasonably good performance in applications such as digit recognition and human detection [7][8][9][10][11][12], a deep network that takes raw pixels as input can learn unfavorable feature descriptions, since those pixels are affected by changing lighting, posture, and so on. Moreover, in real situations the training set is usually small and insufficient to adjust the network weights, which further weakens the network's performance.…”
Section: Introduction (mentioning; confidence: 99%)
“…[7] used a Convolutional Neural Network (CNN) with noisy labels, [8] developed a Cascaded Deep Auto-Encoder Network (CDAN) approach, [9][10] adopted Deep Belief Networks (DBN), [11][12] used Deep Boltzmann Machines (DBM), and [13] built multiple deep ConvNets to learn high-level global features of the face and trained an RBM for classification. Although these methods extract features jointly and automatically, the extracted features are sensitive to illumination, posture, and other interference in unconstrained environments.…”
Section: Introduction (mentioning; confidence: 99%)
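The DBN and DBM models cited above are stacks of restricted Boltzmann machines trained layer-wise. As a rough sketch of that shared building block (a minimal NumPy illustration under simplifying assumptions, not the specific models of [9]–[12]), one contrastive-divergence (CD-1) update for a binary RBM looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1):
    """One CD-1 update for a binary RBM.
    v0: (n_vis,) binary visible vector; W: (n_vis, n_hid) weights;
    b: visible bias; c: hidden bias. Updates parameters in place."""
    # Positive phase: hidden probabilities conditioned on the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to visibles, then hiddens.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Gradient approximation: data statistics minus model statistics.
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)
    return W, b, c
```

A DBN stacks such layers greedily, feeding each layer's hidden probabilities to the next; a DBM instead trains all layers jointly with both bottom-up and top-down connections.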