The abnormal behavior of cockpit pilots during flight operations is an important contributing factor to flight safety incidents, but the complex cockpit environment limits detection accuracy, leading to problems such as false detections, missed detections, and insufficient feature extraction capability. This article proposes a method for detecting abnormal pilot driving behavior based on an improved YOLOv4 deep learning algorithm that integrates an attention mechanism. Firstly, semantic image features are extracted by a deep neural network to recognize pilot driving behavior in images and video. Secondly, the CBAM attention mechanism is introduced into the neural network to mitigate the vanishing-gradient problem during training; because CBAM combines channel attention and spatial attention, it also improves the feature extraction capability of the network. Finally, the features extracted by the convolutional neural network are used to monitor the abnormal driving behavior of pilots, and the method is verified on examples. The experimental results show that the recognition rate of the improved YOLOv4 is significantly higher than that of the unimproved algorithm: for the calling behavior, the mAP is 87.35%, the accuracy is 75.76%, and the recall is 87.36%; for the smoking behavior, the mAP is 87.35%, the accuracy is 85.54%, and the recall is 85.54%. These results indicate that the improved YOLOv4-based deep learning algorithm is practical and feasible for monitoring the abnormal driving behavior of pilots during the flight maneuvering phase. The method can quickly and accurately identify abnormal pilot behavior, providing an important theoretical reference for abnormal behavior detection and risk management.
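As a rough illustration of the channel-then-spatial gating that CBAM applies to a feature map, the following is a minimal NumPy sketch, not the paper's implementation: the MLP weight matrices `w1`/`w2` and the mean-filter stand-in for CBAM's learned 7×7 convolution in the spatial branch are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """x: (C, H, W). Shared MLP over avg- and max-pooled channel descriptors."""
    avg = x.mean(axis=(1, 2))                        # (C,) average-pooled descriptor
    mx = x.max(axis=(1, 2))                          # (C,) max-pooled descriptor
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)     # MLP with ReLU hidden layer
    scale = sigmoid(mlp(avg) + mlp(mx))              # (C,) per-channel gate in (0, 1)
    return x * scale[:, None, None]

def spatial_attention(x, k=7):
    """x: (C, H, W). Pool along channels, then smooth over a k x k window
    (a hand-rolled stand-in for CBAM's learned k x k convolution)."""
    pooled = np.stack([x.mean(axis=0), x.max(axis=0)]).mean(axis=0)  # (H, W)
    pad = k // 2
    p = np.pad(pooled, pad, mode="edge")
    H, W = pooled.shape
    smoothed = np.empty_like(pooled)
    for i in range(H):
        for j in range(W):
            smoothed[i, j] = p[i:i + k, j:j + k].mean()
    return x * sigmoid(smoothed)[None, :, :]         # per-pixel gate in (0, 1)

def cbam(x, w1, w2):
    # CBAM applies channel attention first, then spatial attention.
    return spatial_attention(channel_attention(x, w1, w2))
```

Because both gates are sigmoids, each stage rescales (never enlarges) nonnegative activations while preserving the feature-map shape, which is why the module can be dropped into an existing backbone such as YOLOv4 without changing tensor dimensions.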
Traditional speech enhancement algorithms are only suitable for dealing with stationary noise, whereas the noise encountered during flight is nonstationary, so traditional methods are poorly suited to this setting. This paper proposes a speech enhancement algorithm based on a generative adversarial network: the Deep Convolutional–Wasserstein Generative Adversarial Network (DWGAN). Firstly, building on the generative adversarial network, the model integrates the deep convolutional GAN architecture with the Wasserstein distance. Secondly, it introduces a conditional model to improve the quality of the enhanced speech, and a spectral constraint layer is used to prevent the model from converging too quickly and collapsing. Finally, an L1 loss term is introduced into the loss function to reduce the number of training iterations and further improve the enhanced speech quality. The experimental results show that, in an acoustic environment simulating aircraft operation, DWGAN improves the background noise intrusiveness and overall processed speech quality scores by about 7.6% and 9.4%, respectively, compared with WGAN.
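The loss design described above, a Wasserstein adversarial term plus an L1 reconstruction term on the enhanced speech, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function names and the L1 weight of 100 are not taken from the paper.

```python
import numpy as np

def generator_loss(critic_scores, enhanced, clean, l1_weight=100.0):
    """Generator objective: Wasserstein adversarial term plus weighted L1 term.

    critic_scores: critic outputs on the generator's enhanced speech.
    enhanced, clean: enhanced waveform and clean reference waveform.
    l1_weight: illustrative weight (an assumption, not the paper's value).
    """
    # Wasserstein generator term: push critic scores on enhanced speech up,
    # i.e. minimize their negative mean.
    adv = -np.mean(critic_scores)
    # L1 term pulls the enhanced waveform toward the clean reference,
    # which speeds convergence and improves perceptual quality.
    l1 = np.mean(np.abs(enhanced - clean))
    return adv + l1_weight * l1

def critic_loss(scores_clean, scores_enhanced):
    """Wasserstein critic objective: widen the score gap between clean
    speech and enhanced (generated) speech."""
    return np.mean(scores_enhanced) - np.mean(scores_clean)
```

The Wasserstein formulation replaces the standard GAN's log-loss with raw critic scores, which is what gives WGAN-style training its more stable gradients; the added L1 term is the extra ingredient the abstract credits with reducing training iterations.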