The relatively small data sets available for expression recognition research make training deep networks for this task very challenging. Although fine-tuning can partially alleviate the issue, performance remains below acceptable levels, as the deep features likely contain redundant information from the pre-trained domain. In this paper, we present FaceNet2ExpNet, a novel method for training an expression recognition network on static images. We first propose a new distribution function to model the high-level neurons of the expression network. Based on this, a two-stage training algorithm is carefully designed. In the pre-training stage, we train the convolutional layers of the expression net, regularized by the face net; in the refining stage, we append fully-connected layers to the pre-trained convolutional layers and train the whole network jointly. Visualization shows that the model trained with our method captures improved high-level expression semantics. Evaluations on four public expression databases, CK+, Oulu-CASIA, TFD, and SFEW, demonstrate that our method achieves better results than the state of the art.
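The pre-training stage described above can be sketched as a feature-regression objective: the expression net's convolutional features are pulled toward the face net's features on the same image. This is a minimal illustrative sketch; the exact loss form and layer choice are our assumptions, not taken from the paper.

```python
# Hedged sketch of face-net-regularized pre-training (illustrative only):
# the expression net's conv features are regressed onto the (frozen)
# face net's features via a mean-squared-error loss.
def regularization_loss(exp_feats, face_feats):
    """Mean squared distance between expression-net and face-net
    feature vectors for one image (both flattened to equal length)."""
    assert len(exp_feats) == len(face_feats)
    return sum((e - f) ** 2 for e, f in zip(exp_feats, face_feats)) / len(exp_feats)

# Example: identical features incur zero loss; divergent features are penalized.
loss = regularization_loss([1.0, 2.0], [1.0, 4.0])  # (0 + 4) / 2 = 2.0
```

In the refining stage this regularizer would be dropped and the appended fully-connected layers trained jointly with a standard classification loss.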
Conductive inks for future printed electronics should have the following merits: high conductivity, flexibility, low cost, and compatibility with a wide range of substrates. However, state-of-the-art conductive inks based on metal nanoparticles are high in cost and poor in flexibility. Herein, we report a highly conductive, low-cost, and super-flexible ink based on graphene nanoplatelets. The graphene ink has been screen-printed on plastic and paper substrates. Combined with post-printing treatments including thermal annealing and compression rolling, the printed graphene pattern shows a high conductivity of 8.81 × 10⁴ S m⁻¹ and good flexibility, without significant conductivity loss after 1000 bending cycles. We further demonstrate that the printed highly conductive graphene patterns can act as current collectors for supercapacitors. The supercapacitor with the printed graphene pattern as the current collector and printed activated carbon as the active material shows a good rate capability of up to 200 mV s⁻¹. This work potentially provides a promising route toward the large-scale fabrication of low-cost yet flexible printed electronic devices.
The electron field emission performance of a screen-printed graphene cathode was studied. High-yield graphene was prepared by a modified Hummers method followed by hydrazine hydrate reduction, and screen-printing technology was used to fabricate the graphene field emission cathode. This cathode structure satisfies the requirements of both good electrical conductivity and a high surface field enhancement factor, leading to excellent and stable field emission properties with a low threshold field (approximately 1.5 V μm⁻¹). Our work introduces a simple and convenient method suitable for large-scale fabrication on different substrates, paving the way for more applications of graphene films.
Fully printed humidity sensors based on two-dimensional (2D) materials are described. Monolayer graphene oxide (GO) and few-layered black phosphorus (BP) flakes were dispersed in low-boiling-point solvents suitable for inkjet printing. The humidity sensors were fabricated by printing GO and BP sensing layers on printed silver nanoparticle electrodes. The electrical response of the GO and BP sensors was measured over humidity levels ranging from 11% to 97% relative humidity, revealing a high capacitance sensitivity of 4.45 × 10⁴ times for the GO sensor and 5.08 × 10³ times for the BP sensor at a 10 Hz operating frequency. Response/recovery times of the GO and BP sensors were found to be 2.7/4.6 s and 4.7/3.0 s, respectively. These sensors also showed a sensitive and fast response to a proximal human fingertip, indicating potential applications in contactless switching.
We propose to detect Deepfakes generated by face manipulation based on one of their fundamental features: such images are blended from patches drawn from multiple sources, each carrying distinct and persistent source features. In particular, we propose a novel representation learning approach for this task, called patch-wise consistency learning (PCL). It learns by measuring the consistency of image source features, resulting in representations with good interpretability and robustness to multiple forgery methods. We develop an inconsistency image generator (I2G) to generate training data for PCL and boost its robustness. We evaluate our approach on seven popular Deepfake detection datasets. Our model achieves superior detection accuracy and generalizes well to unseen generation methods. On average, our model outperforms the state of the art in terms of AUC by 2% and 8% in in-dataset and cross-dataset evaluation, respectively.
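The consistency measurement at the heart of PCL can be illustrated as a pairwise similarity map over per-patch feature vectors: a pristine image's patches share one source and score uniformly high, while a blended face yields low similarity between patches from different sources. This is a hedged sketch under our own assumptions (cosine similarity, list-of-vectors features); the paper's exact formulation may differ.

```python
# Hedged sketch: patch-wise consistency as pairwise cosine similarity
# between per-patch feature vectors (formulation is illustrative,
# not taken verbatim from the paper).
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def consistency_map(patch_features):
    """n x n map of similarities between every pair of patch features.

    High values everywhere suggest a single-source (pristine) image;
    low off-diagonal blocks suggest patches blended from different sources.
    """
    n = len(patch_features)
    return [[cosine(patch_features[i], patch_features[j]) for j in range(n)]
            for i in range(n)]

# Example: two identical-source patches vs. one from a different source.
cmap = consistency_map([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

A detector would then score an image by how uniform this map is, e.g. flagging images whose minimum pairwise consistency falls below a threshold.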