In this paper, we propose a self-supervised invariant learning method for facial landmark mining. Conventional methods mostly train on raw data of paired facial appearances and landmarks, assuming that the data are evenly distributed. However, such assumptions rarely hold in real-world scenarios, so these methods tend to fail on challenging cases even after costly training. To address this issue, our model learns to be invariant to facial biases by training on landmark-anchored distributions. Specifically, we generate faces from these distributions and then group them by appearance source and by probe facial landmarks into intra-identity and intra-landmark classes, respectively. We then construct intra-class invariance losses to
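As a rough illustration of the grouping idea, an intra-class invariance loss can penalize the spread of embeddings within each class (identity groups for the intra-identity loss, landmark groups for the intra-landmark loss). The sketch below is a minimal, hypothetical formulation using the mean squared distance to each group centroid; the paper's actual loss may differ.

```python
import numpy as np

def intra_class_invariance_loss(features, labels):
    """Mean squared distance of each feature to its group centroid.

    features: (N, D) array of face embeddings (hypothetical encoder output).
    labels:   (N,) integer group ids -- identity ids for the intra-identity
              loss, landmark ids for the intra-landmark loss.
    """
    total, groups = 0.0, 0
    for g in np.unique(labels):
        f = features[labels == g]
        if len(f) < 2:
            continue  # a singleton group contributes no spread
        center = f.mean(axis=0)
        total += np.mean(np.sum((f - center) ** 2, axis=1))
        groups += 1
    return total / max(groups, 1)
```

Driving this loss toward zero encourages all faces generated from the same appearance source (or the same probe landmarks) to map to the same embedding, which is one way to realize the invariance described above.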