The world today is being hit hard by COVID-19. Unlike fingerprint scanners and ID cards, facial recognition technology can help prevent the spread of viruses in public places because it does not require contact with specific sensors. However, people must also wear masks when entering public places, and masks greatly reduce the accuracy of facial recognition. Accurately recognizing faces while people wear masks is therefore a great challenge. To solve the problem of low facial recognition accuracy for mask wearers during the COVID-19 epidemic, we propose a masked-face recognition algorithm based on a large margin cosine loss (MFCosface). Because masked-face data for training are insufficient, we designed a masked-face image generation algorithm based on the detection of key facial features. The face is detected and aligned by a multi-task cascaded convolutional network; we then detect the key features of the face and select a mask template for coverage according to the positional information of those features. Finally, we generate the corresponding masked-face image. Through analysis of the masked-face images, we found that triplet loss is not applicable to our datasets: the results of online triplet selection contain few mask variations, making it difficult for the model to learn the relationship between mask occlusion and feature mapping. We instead use a large margin cosine loss as the training loss function, which maps all feature samples into a feature space with smaller intra-class distances and larger inter-class distances. To make the model pay more attention to the areas not covered by the mask, we designed an Att-inception module that combines the Inception-ResNet module and the convolutional block attention module; it increases the weight of any unoccluded area in the feature map, thereby enlarging that area's contribution to the identification process.
Experiments on several masked-face datasets show that our algorithm greatly improves the accuracy of masked-face recognition and can accurately identify masked subjects.
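The large margin cosine loss described above subtracts a fixed cosine margin from the target class before applying a scaled softmax, which pushes intra-class features together and inter-class features apart. A minimal pure-Python sketch for a single sample, assuming L2-normalized embeddings and class weights (the scale and margin values here are illustrative, not the paper's exact hyperparameters):

```python
import math

def large_margin_cosine_loss(cos_sims, label, s=30.0, m=0.35):
    """Large margin cosine loss for one sample.

    cos_sims: cosine similarities between the embedding and each
              class weight vector (both assumed L2-normalized).
    label:    index of the ground-truth class.
    s, m:     scale and cosine margin (illustrative values).
    """
    # Subtract the margin only from the target-class cosine.
    logits = [s * (c - m) if j == label else s * c
              for j, c in enumerate(cos_sims)]
    # Standard cross-entropy over the margin-adjusted logits.
    max_logit = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - max_logit) for z in logits]
    return -math.log(exps[label] / sum(exps))
```

Because the margin is applied only to the target class, a sample must exceed competing classes by at least `m` in cosine similarity before its loss becomes small, which is what enforces the larger inter-class distance.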
Limited in part by the spatial resolution of typical in vivo magnetic resonance imaging (MRI) data, recent neuroimaging studies have only identified a connectivity-based shell-core-like partitioning of the nucleus accumbens (Acb) in humans. This has hindered the process of making a more refined description of the Acb using non-invasive neuroimaging technologies and approaches. In this study, high-resolution ex vivo macaque brain diffusion MRI data were acquired to investigate the tractography-based parcellation of the Acb. Our results identified a shell-core-like partitioning in macaques that is similar to that in humans, as well as an alternative solution that subdivided the Acb into four parcels: the medial shell, the lateral shell, the ventral core, and the dorsal core. Furthermore, we characterized the specific anatomical and functional connectivity profiles of these Acb subregions and generalized their specialized functions to establish a fine-grained macaque Acb brainnetome atlas. This atlas should be helpful in neuroimaging, stereotactic surgery, and comparative neuroimaging studies to reveal the neurophysiological substrates of various diseases and cognitive functions associated with the Acb.
Although the evolutionarily conserved functions of the ventral striatal components have been used as a priori knowledge for further study, whether these functions are conserved between species remains unclear. In particular, whether macroscopic connectivity supports this conservation, given the disproportionate volumetric differences between species in the brain regions that project to the ventral striatum, including the prefrontal and limbic areas, has not been established. In this study, the human and macaque striata were first tractographically parcellated to define the ventral striatum and its two subregions: the nucleus accumbens (Acb)-like division and the neurochemically unique domains of the Acb and putamen (NUDAPs)-like division. Our results revealed a similar topographical distribution of the connectivity-based ventral striatal components in the two primate brains. Next, a set of targets was extracted to construct a connectivity fingerprint characterizing these parcellation results and enabling cross-species comparisons. Our results indicated that the connectivity fingerprints of the ventral striatum-like divisions were dissimilar in the two species. We localized this difference to specific targets to analyze possible interspecies functional modifications. Our results also revealed interspecies-convergent connectivity ratio fingerprints of the target group to these two ventral striatum-like subregions. This convergence may suggest synchronous connectional changes of these ventral striatal components during primate evolution.
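A connectivity fingerprint of the kind used above is a vector of connection strengths from a subregion to a shared, ordered set of target areas, and cross-species comparison reduces to comparing such vectors. A minimal sketch using cosine similarity as one plausible comparison metric (the fingerprint values and target count below are hypothetical, not data from the study):

```python
import math

def fingerprint_similarity(fp_a, fp_b):
    """Cosine similarity between two connectivity fingerprints.

    Each fingerprint lists connection strengths from a subregion to
    the same ordered set of target areas.
    """
    dot = sum(a * b for a, b in zip(fp_a, fp_b))
    norm_a = math.sqrt(sum(a * a for a in fp_a))
    norm_b = math.sqrt(sum(b * b for b in fp_b))
    return dot / (norm_a * norm_b)

# Hypothetical fingerprints over the same ordered list of four targets
human_acb = [0.40, 0.25, 0.20, 0.15]
macaque_acb = [0.35, 0.30, 0.20, 0.15]
```

A similarity near 1 would indicate a conserved connectivity profile, while localized differences can be traced back to the individual targets whose entries diverge most.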
It is not clear whether methods from functional brain-network research can be applied to explore the feature-binding mechanism of visual perception. In this study, we investigated the binding of color and shape features in visual perception. Functional magnetic resonance imaging data were collected from 38 healthy volunteers at rest and while performing a visual perception task, and were used to construct brain networks for the resting and task states. Results showed that brain regions involved in visual information processing were clearly activated during the task. Network components were partitioned using a greedy algorithm, indicating that the visual network exists during the resting state. Z-values in the vision-related brain regions were calculated, confirming the dynamic balance of the brain network. Connectivity between brain regions was determined, and the results showed that the occipital and lingual gyri were stable regions in the visual system network, that the parietal lobe played a very important role in binding color and shape features, and that the fusiform and inferior temporal gyri were crucial for processing color and shape information. These findings suggest that understanding visual feature binding and the associated cognitive processes will help establish computational models of vision, improve image recognition technology, and provide a new theoretical account of feature binding in visual perception.
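Greedy algorithms for partitioning network components, as mentioned above, typically work by maximizing Newman's modularity Q, which scores how much denser within-community connectivity is than expected by chance. A minimal pure-Python sketch of the Q computation such algorithms optimize, assuming an unweighted undirected graph (the example graph is illustrative, not the study's brain network):

```python
def modularity(adj, partition):
    """Newman modularity Q of a node partition on an undirected graph.

    adj:       symmetric 0/1 adjacency matrix as a list of lists.
    partition: community label for each node.
    """
    n = len(adj)
    two_m = sum(sum(row) for row in adj)   # twice the edge count
    deg = [sum(row) for row in adj]        # node degrees
    q = 0.0
    for i in range(n):
        for j in range(n):
            if partition[i] == partition[j]:
                # observed edge minus the chance expectation k_i*k_j/2m
                q += adj[i][j] - deg[i] * deg[j] / two_m
    return q / two_m

# Two triangles joined by a single bridge edge (nodes 2 and 3)
adj = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
```

Splitting the two triangles into separate communities yields a higher Q than lumping all nodes together, which is the signal a greedy partitioner follows when merging or splitting components.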
Accurate recognition of tomato diseases is of great significance for agricultural production. Sufficient versus insufficient training data constitutes a symmetry problem in supervised recognition network training. A high-precision neural network needs a large amount of labeled data, and the difficulty of acquiring data samples is the main obstacle to improving disease recognition performance. Moreover, traditional data augmentation based on geometric transformation obtains little new information, and its generalization is not strong. To generate leaves with obvious disease features and improve disease recognition performance, this paper analyzes and solves the problem of insufficient training samples in recognition network training and proposes a new GAN-based data augmentation method, RAHC_GAN, which is used to expand data and identify diseases. First, the proposed hidden variable is used to continuously control the size of the disease area, and residual attention blocks make the generative adversarial network pay more attention to the disease region in the leaf image; in addition, a multi-scale discriminator is used to enrich the detailed texture of the generated image. Then, an expanded data set comprising the original training images and the images generated by RAHC_GAN is established and used as the input to four classification networks, AlexNet, VGGNet, GoogLeNet, and ResNet, for performance evaluation. Experimental results show that RAHC_GAN can generate leaves with obvious disease features, and the expanded data set can significantly improve the recognition performance of the classifier. After data augmentation, the recognition accuracy on the four classifiers increased by 1.8%, 2.2%, 2.7%, and 0.4%, respectively, exceeding the comparison method.
At the same time, the impact of expanded data at different ratios on recognition performance was evaluated, and the method was extended to apple and grape diseased leaves. The proposed data augmentation method can simulate the distribution of tomato leaf diseases and improve the performance of disease recognition, and it may be extended to solve the problem of insufficient data in other plant research tasks.
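The geometric-transformation baseline that the abstract contrasts with RAHC_GAN simply adds flipped and rotated copies of each labeled image, which multiplies the sample count without adding new disease-region information. A minimal sketch of that baseline on images stored as 2D lists (the dataset and label below are hypothetical; a real pipeline would operate on pixel arrays from an image library):

```python
def hflip(img):
    """Horizontal flip of an image stored as a 2D list of pixel rows."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(dataset):
    """Expand a labeled dataset with flipped and rotated copies.

    dataset: list of (image, label) pairs; labels are preserved,
    so each original sample yields three samples in the output.
    """
    out = []
    for img, label in dataset:
        out.append((img, label))
        out.append((hflip(img), label))
        out.append((rot90(img), label))
    return out
```

Because every augmented image is a deterministic rearrangement of the original pixels, the disease area's size and texture never change, which is exactly the limitation a generative approach such as RAHC_GAN is meant to overcome.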