Synthetic aperture radar (SAR) ship detection is a popular yet challenging problem. Traditional methods rely on hand-crafted feature extraction or limited shallow-learning feature representations. Recently, owing to their excellent feature representation ability, deep neural networks such as the faster region-based convolutional neural network (FRCN) have shown great performance in object detection tasks. However, several challenges limit the application of FRCN to SAR ship detection: (1) FRCN with a fixed receptive field cannot match the scale variability of multiscale SAR ship objects, and its performance degrades when the objects are small; (2) as a two-stage detector, FRCN performs intensive computation, leading to low-speed detection; (3) when the background is complex, the imbalance between easy and hard examples leads to a high false detection rate. To tackle these issues, we design a multilayer fusion light-head detector (MFLHD) for SAR ship detection. Instead of using a single feature map, shallow high-resolution and deep semantic features are combined to produce region proposals. In the detection subnetwork, we propose a light-head detector with large-kernel separable convolution and position-sensitive pooling to improve detection speed. In addition, we adopt focal loss in the loss function to train on more hard examples and reduce false alarms. Extensive experiments on the SAR ship detection dataset (SSDD) show that the proposed method achieves superior performance in SAR ship detection in both accuracy and speed.
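The focal loss mentioned above follows the standard formulation of Lin et al., FL(p_t) = -α_t(1-p_t)^γ log(p_t), which down-weights easy examples so training concentrates on hard ones. A minimal NumPy sketch (the defaults α=0.25, γ=2 are the common choices from the focal loss paper, not values stated in this abstract):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).
    p: predicted probability of the positive class in (0, 1); y: label in {0, 1}."""
    p_t = np.where(y == 1, p, 1.0 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With γ=2, a well-classified example (p_t = 0.9) contributes orders of magnitude less loss than a hard one (p_t = 0.1), which is what lets the detector spend its capacity on hard examples instead of the abundant easy background.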
The structure of an improved wind turbine gearbox is presented to meet the operation of the optimized wind turbine power-wind speed curve (P-v curve). When the wind speed is lower than the cut-in wind speed, the operation mode of the wind turbine is changed by extra power, supplied by a motor-excited source, to keep the wind turbine running. Moreover, the transmission principle of the improved wind turbine gearbox is discussed. The impact of various motor power levels on the transmission characteristics of the improved transmission structure is investigated, and the results are compared with those from professional software. Results indicate that as the motor power increases, the transverse vibration of the sun gears and the meshing forces of the low-speed and medium-speed planetary stages decrease, whereas the transverse vibration of the high-speed-stage pinion gear increases. Load-sharing coefficients of the planetary gear stages grow with increasing motor power, as do the meshing forces of the torque-implement parallel stage.
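As background for the transmission principle of the planetary stages mentioned above, the kinematics of a simple planetary stage obey the standard Willis relation, n_s·z_s + n_r·z_r = (z_s + z_r)·n_c, linking sun, ring, and carrier speeds through the tooth counts. A minimal illustrative sketch of this textbook relation (not the paper's dynamic model, which also covers vibration and load sharing):

```python
def willis_carrier_speed(n_sun, n_ring, z_sun, z_ring):
    """Willis relation for a simple planetary stage:
    n_sun * z_sun + n_ring * z_ring = (z_sun + z_ring) * n_carrier.
    Speeds in rpm; z_sun / z_ring are sun and ring gear tooth counts."""
    return (n_sun * z_sun + n_ring * z_ring) / (z_sun + z_ring)
```

For instance, with the ring gear held fixed (n_ring = 0), a 20-tooth sun and 80-tooth ring give a 5:1 reduction from sun to carrier, which is the usual fixed-ring configuration of a wind turbine planetary stage.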
Aiming at the problems of large model parameter counts and the false and missed detections of multi-scale drone targets, we present a novel drone detection method, YOLOv4-MCA, based on the lightweight MobileViT and Coordinate Attention. The proposed approach builds on the YOLOv4 framework. Firstly, we use an improved lightweight MobileViT as the feature extraction backbone network, which fully extracts the local and global feature representations of the object and reduces the model's complexity. Secondly, we adopt Coordinate Attention to improve PANet, obtaining a multi-scale attention structure called CA-PANet that captures more positional information and promotes the fusion of low- and high-dimensional features. Thirdly, we utilize an improved K-means++ method to optimize the object anchor boxes and improve detection efficiency. Finally, we construct a drone dataset and conduct performance experiments based on the Mosaic data augmentation method. The experimental results show that the mAP of the proposed approach reaches 92.81%, the frame rate reaches 40 FPS, and the number of parameters is only 13.47 M, outperforming mainstream algorithms and achieving high detection accuracy for multi-scale drone targets with a low number of parameters.
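Anchor-box optimization for YOLO-family detectors typically runs K-means on the (width, height) pairs of the training boxes with the distance d = 1 - IoU rather than Euclidean distance, and the K-means++ variant seeds the centres proportionally to that distance. A hedged sketch of this general recipe (the function names, the 50-iteration cap, and the centre-aligned IoU convention are illustrative assumptions, not details from the abstract):

```python
import numpy as np

def iou_wh(box, boxes):
    """IoU between one (w, h) box and an array of (w, h) boxes, assuming all
    boxes share the same centre (the usual YOLO anchor-clustering convention)."""
    inter = np.minimum(box[0], boxes[:, 0]) * np.minimum(box[1], boxes[:, 1])
    union = box[0] * box[1] + boxes[:, 0] * boxes[:, 1] - inter
    return inter / union

def kmeanspp_anchors(wh, k, iters=50, seed=0):
    """K-means++ seeding with d = 1 - IoU, followed by Lloyd iterations."""
    rng = np.random.default_rng(seed)
    centres = [wh[rng.integers(len(wh))]]          # first centre: uniform pick
    for _ in range(1, k):                          # next centres: far (low-IoU) boxes
        d = np.min([1.0 - iou_wh(c, wh) for c in centres], axis=0)
        centres.append(wh[rng.choice(len(wh), p=d / d.sum())])
    centres = np.array(centres)
    for _ in range(iters):                         # Lloyd: assign by max IoU, re-average
        assign = np.argmax([iou_wh(c, wh) for c in centres], axis=0)
        for j in range(k):
            if np.any(assign == j):
                centres[j] = wh[assign == j].mean(axis=0)
    return centres
```

The IoU distance makes the clustering scale-aware: a 10-pixel error matters far more for a small drone box than for a large one, which is exactly why Euclidean K-means produces poor anchors for multi-scale targets.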
Spaceborne synthetic aperture radar (SAR) is a promising remote sensing technique, as it can produce high-resolution imagery over a wide area of surveillance with all-weather and all-day capabilities. However, the spaceborne SAR sensor may suffer from severe radio frequency interference (RFI) from signals in similar frequency bands, resulting in image quality degradation, blind spots, and target loss. To remove the RFI features presented on spaceborne SAR images, we propose a multi-dimensional calibration and suppression network (MCSNet) to exploit the feature learning of spaceborne SAR images and RFI. In the scheme, a joint model consisting of the spaceborne SAR image and RFI is established based on the relationship between the SAR echo and the scattering matrix. Then, to suppress the RFI presented in images, the main structure of MCSNet is constructed with a multi-dimensional and multi-channel strategy, wherein a feature calibration module (FCM) is designed for global depth feature extraction. In addition, MCSNet repeatedly performs planned mapping on the feature maps under the supervision of the SAR interference image, compensating for the discrepancies introduced during RFI suppression. Finally, a detail restoration module based on a residual network is designed to preserve the scattering characteristics of the underlying scene in interfered SAR images. Experiments on simulated data and Sentinel-1 data, covering different landscapes and different forms of RFI, validate the effectiveness of the proposed method. The results demonstrate that MCSNet outperforms state-of-the-art methods and can greatly suppress RFI in spaceborne SAR.
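MCSNet itself is a learned network, but the classical baseline it improves upon is narrow-band notch filtering: RFI from a similar-frequency emitter concentrates its energy in a few spectral bins, which can be detected by a power threshold and zeroed. A minimal sketch of that pre-deep-learning baseline (not part of the paper; the 10x-mean threshold is an illustrative assumption):

```python
import numpy as np

def notch_rfi(signal, threshold=10.0):
    """Classical frequency-domain notch filter: zero spectral bins whose power
    exceeds `threshold` times the mean bin power, assuming narrow-band RFI
    concentrates far more energy per bin than the wideband SAR echo."""
    spec = np.fft.fft(signal)
    power = np.abs(spec) ** 2
    spec[power > threshold * power.mean()] = 0.0   # notch the interference bins
    return np.fft.ifft(spec)
```

The well-known drawback of this baseline, and the motivation for learned approaches like MCSNet, is that the notch also removes the scene's echo energy in the zeroed bins, degrading the scattering characteristics that MCSNet's detail restoration module is designed to preserve.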