The gradual deployment of large-scale distributed camera networks and the rapid development of ''Internet +'' have driven the recent popularization of massive video surveillance systems. Because pedestrians are the key monitoring targets in such systems, many studies focus on cross-camera pedestrian re-identification algorithms. At present, pedestrian re-identification models face two difficulties: training the network is hampered by the severe quantity imbalance between different classes of training samples, and large variations in visual appearance degrade identification accuracy. To address these difficulties, this paper proposes a deep learning model and designs a pedestrian re-identification system based on a deep convolutional neural network. In particular, we compute the differences between neighborhoods of the two input images to capture their local relationships, thereby reducing the effects of illumination and viewpoint. Furthermore, we employ focal loss to counter the sample imbalance that arises in pedestrian re-identification, enhancing the model's practical applicability. The proposed method was implemented in our end-to-end pedestrian re-identification monitoring system, whose hardware framework comprises a digital matrix, a streaming-media storage server, and a network high-speed dome camera, with the ability to extend to additional tasks in the future. Our approach reduces the effects of data imbalance and visual appearance differences, achieving 76.0% rank-1 and 99.5% rank-20 accuracy on the large CUHK03 dataset, which is not only a significant improvement over the earlier IDLA method but also superior to other existing approaches.

INDEX TERMS Deep convolution, monitoring system, neural network, pedestrian re-identification.
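The focal loss mentioned in the abstract down-weights easy, well-classified samples so that training concentrates on the rare, hard ones. A minimal sketch of the binary form is below; the function name and defaults (`gamma=2.0`, `alpha=0.25`, the values commonly used in the focal loss literature) are illustrative assumptions, not taken from the paper's implementation.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single prediction.

    p     -- predicted probability of the positive class
    y     -- ground-truth label (0 or 1)
    gamma -- focusing parameter; gamma=0 reduces to weighted cross-entropy
    alpha -- class-balance weight for the positive class
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # (1 - p_t)^gamma shrinks the loss of well-classified samples (p_t near 1),
    # so gradients are dominated by hard, misclassified samples.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

For example, a confident correct prediction (`p=0.9, y=1`) incurs a far smaller loss than a confident mistake (`p=0.1, y=1`), which is exactly the behavior used here to offset the imbalance between matched and unmatched pedestrian pairs.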
Human posture estimation technology has advanced significantly thanks to progress in deep learning and machine vision, yet even the most advanced models may fail to predict all body joints accurately. To address this issue, this paper proposes an adaptive generative adversarial network (GAN) to improve human posture detection. The algorithm uses OpenPose to detect and connect keypoints and then generates heat maps within the GAN model. A confidence evaluation mechanism is added to the model during training: the generator predicts postures, while the discriminator refines the human joints over time. By applying normalization techniques in the confidence evaluation mechanism, the generator pays more attention to the prominent body joints, improving the algorithm's joint detection accuracy. On the MPII, LSP, and FLIC datasets, the proposed algorithm shows a good detection effect, with a localization accuracy of about 95.37%, and it can accurately locate the joints of the entire body, outperforming several other algorithms. On the LSP dataset, the proposed algorithm also achieves the best runtime.
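The heat maps referred to above are typically 2-D Gaussians centred on each detected joint, which the generator and discriminator then operate on. A minimal sketch of rendering one such map is below; the function name and `sigma` default are illustrative assumptions, not the paper's code.

```python
import numpy as np

def keypoint_heatmap(h, w, cx, cy, sigma=2.0):
    """Render a 2-D Gaussian heat map centred on one body joint.

    (cx, cy) -- joint coordinates in pixels
    sigma    -- spread of the Gaussian in pixels
    Returns an (h, w) array whose peak value 1.0 lies at the joint.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    # Unnormalized Gaussian: 1.0 at the joint, decaying with distance.
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
```

Stacking one such map per joint gives the multi-channel target that heat-map-based pose networks regress against; the confidence evaluation step can then weight channels by how reliably each joint is detected.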