2020
DOI: 10.1109/tip.2020.3018222

Revealing the Invisible With Model and Data Shrinking for Composite-Database Micro-Expression Recognition

Abstract: Composite-database micro-expression recognition is attracting increasing attention as it is more practical for real-world applications. Although the composite database provides more sample diversity for learning good representation models, the important subtle dynamics tend to vanish under domain shift, so that the models degrade greatly in performance, especially deep models. In this paper, we analyze the influence of learning complexity, including the input complexity and model complexity, an…

Cited by 103 publications (75 citation statements).
References 55 publications.
“…In the middle layers of the convolutional network, the outputs of the previous layer are processed and learned through the weight parameters and biases within the layer. Furthermore, the images are convolved with different receptive fields of the same convolution kernel, which reduces the number of parameters and the complexity of the network (Vu and Kim, 2018; Xia et al., 2020). In the middle layers of the convolutional network, high-level image features can be obtained through repeated feature extraction (Kamel et al., 2018).…”
Section: HAR Algorithm Based on CNN (citation type: mentioning)
confidence: 99%
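To make the weight-sharing point above concrete, here is a minimal sketch; the framework and layer sizes are illustrative choices, not taken from any of the cited networks. The same 3x3 kernels and per-channel biases are reused at every spatial position, so the parameter count does not grow with the image size.

```python
import torch
import torch.nn as nn

# One convolutional layer: each 3x3 kernel is slid across the whole image,
# so the same weights (and one bias per output channel) are reused at every
# spatial position -- this is the parameter reduction described above.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

x = torch.randn(1, 3, 64, 64)   # dummy input image (batch, channels, H, W)
y = conv(x)                      # feature maps from this "middle layer"

# Parameter count is independent of the input resolution:
# 16 * 3 * 3 * 3 weights + 16 biases = 448
print(sum(p.numel() for p in conv.parameters()))  # 448
```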
“…Ben et al. [27] find a tensor-to-tensor projection by maximizing the between-class Laplacian scatter and minimizing the within-class Laplacian scatter, extracting discriminative, geometry-preserving features directly from the raw tensor data. Xia et al. [28] propose a recurrent convolutional network that improves recognition performance by exploring shallow network structures and lower-resolution input samples. Reference [29] is the first to apply a genetic algorithm to the micro-expression recognition task, using it to discard information irrelevant to micro-expression prediction and thereby enrich the feature representation; after integrating the genetic algorithm into STSTNet, recognition performance is improved.…”
Section: Related Work (citation type: unclassified)
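As a rough, hypothetical illustration of the shallow, parameter-shared idea attributed to [28] above, one might iterate a single shared convolution over a downsampled input; this toy block is not the actual architecture of that paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRecurrentConvBlock(nn.Module):
    """Toy sketch only: a single convolution applied recurrently to a
    lower-resolution input, illustrating a shallow, parameter-shared design.
    This is NOT the recurrent convolutional network of [28]."""
    def __init__(self, channels=16, steps=3):
        super().__init__()
        self.inp = nn.Conv2d(3, channels, 3, padding=1)
        self.rec = nn.Conv2d(channels, channels, 3, padding=1)  # reused each step
        self.steps = steps

    def forward(self, x):
        x = F.interpolate(x, scale_factor=0.5)   # lower-resolution input sample
        h = F.relu(self.inp(x))
        for _ in range(self.steps):              # same weights at every iteration
            h = F.relu(self.rec(h) + h)
        return h

# Example: ToyRecurrentConvBlock()(torch.randn(1, 3, 64, 64)) -> (1, 16, 32, 32)
```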
“…where C is the number of micro-expression label classes.

Table 3: Recognition rates of the global, local, and concatenated features.

Method               | CASME II [32]  | SMIC [33]      | SAMM [31]      | MEGC 2019 [34]
                     | UF1     UAR    | UF1     UAR    | UF1     UAR    | UF1     UAR
LBP-TOP [4]          | 0.7026  0.7429 | 0.2000  0.5280 | 0.3954  0.4102 | 0.5882  0.5785
Bi-WOOF [5]          | 0.7805  0.8026 | 0.5727  0.5829 | 0.5211  0.5139 | 0.6296  0.6227
OFF-ApexNet [23]     | 0.8764  0.8681 | 0.6817  0.6695 | 0.5409  0.5392 | 0.7196  0.7096
CapsuleNet [11]      | 0.7068  0.7018 | 0.5820  0.5877 | 0.6209  0.5989 | 0.6520  0.6506
Dual-Inception [10]  | 0.8621  0.8560 | 0.6645  0.6726 | 0.5868  0.5663 | 0.7322  0.7278
STSTNet [9]          | 0.8382  0.8686 | 0.6801  0.7013 | 0.6588  0.6810 | 0.7353  0.7605
EMR [8]              | 0.8293  0.8209 | 0.7461  0.7530 | 0.7754  0.7152 | 0.7885  0.7824
RCN [28]             | 0.8087  0.8563 | 0.5980  0.5991 | 0.6771  0.6976 | 0.7052  0.7164
LFM [25]             | 0.8700  0.8400 | 0.7200  0.7100 | 0.6700  0.6000 | 0.7700  0.7500
STSTNet+GA [25,29]   | 0…”

Section: SAMM Dataset (citation type: mentioning)
confidence: 99%
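The UF1 and UAR columns are, as in the MEGC 2019 protocol, the unweighted (macro-averaged) F1-score and the unweighted average recall. A minimal sketch of computing them from label arrays is shown below; the function and variable names are ours, not from the cited papers.

```python
import numpy as np

def uf1_uar(y_true, y_pred, num_classes):
    """Unweighted F1 (UF1) and unweighted average recall (UAR):
    per-class F1 / recall averaged over classes with equal weight."""
    f1s, recalls = [], []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        f1s.append(2 * tp / max(2 * tp + fp + fn, 1))
        recalls.append(tp / max(tp + fn, 1))
    return np.mean(f1s), np.mean(recalls)

# Example with 3 classes (e.g., negative / positive / surprise)
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(uf1_uar(y_true, y_pred, 3))
```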
“…For example, several spatiotemporal descriptors, e.g., local binary patterns from three orthogonal planes (LBP-TOP) [7], [8], histogram of image gradients from three orthogonal planes (HIGO-TOP) [9], and sparse main directional mean optical flow (MDMO) [10], have been designed for describing micro-expressions. In more recent years, deep learning methods have also been applied to MER and have achieved more promising performance than conventional handcrafted methods, such as three-stream convolutional neural networks (TSCNNs) [11], spatiotemporal recurrent neural networks (STRNNs) [12], and recurrent convolutional networks (RCNs) [13].…”
Section: Introduction (citation type: mentioning)
confidence: 99%
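For intuition about the LBP-TOP idea mentioned above (binary texture codes collected on the XY, XT, and YT planes of a video volume), here is a deliberately simplified toy sketch that uses only the central slice of each orthogonal plane and basic 8-neighbour codes; it is a sketch for illustration, not the full descriptor of [7], [8].

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a 2D array."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # set the bit when the neighbour is at least as bright as the centre
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def toy_lbp_top_histogram(volume):
    """Concatenate LBP histograms from the central XY, XT, and YT slices
    of a T x H x W video volume (a simplification of LBP-TOP)."""
    t, h, w = volume.shape
    planes = [volume[t // 2],          # XY plane: spatial texture of one frame
              volume[:, h // 2, :],    # XT plane: horizontal motion texture
              volume[:, :, w // 2]]    # YT plane: vertical motion texture
    hists = [np.bincount(lbp_codes(p).ravel(), minlength=256) for p in planes]
    return np.concatenate([h_ / max(h_.sum(), 1) for h_ in hists])

# Example: a random 16-frame, 32x32 clip gives a 3 * 256 = 768-dim descriptor
print(toy_lbp_top_histogram(np.random.rand(16, 32, 32)).shape)  # (768,)
```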