Two fusion strategies for target recognition using multi-aspect synthetic aperture radar (SAR) images, a data fusion strategy and a decision fusion strategy, are presented for recognizing ground vehicles in the MSTAR database. Because of radar cross-section variability, the ability to discriminate between targets varies greatly with target aspect, so multiple images of a given target taken at different aspects are used to support recognition. The sensitivity of recognition performance to the number of images and to the aspect separations between them is analyzed for both strategies, and the two strategies are compared with each other in probability of correct classification and operating efficiency. The experimental results indicate that, given a small number of multi-aspect images of a target with suitable aspect separations between them, both proposed strategies improve the probability of correct classification significantly over a method using a single image.
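As an illustration of the decision-fusion idea described above, the sketch below classifies each aspect image independently and combines the per-class scores before making the final decision. The class labels, score values, and the sum-rule combination are illustrative assumptions, not the paper's exact scheme.

```python
# Hypothetical sketch of decision fusion over multi-aspect SAR images:
# each aspect view yields per-class posterior scores from a single-image
# classifier, and the scores are summed across views before the decision.

def fuse_decisions(per_image_scores):
    """Combine per-class scores from several single-aspect classifiers.

    per_image_scores: list of dicts mapping class label -> posterior score.
    Returns the label with the highest summed (equivalently, averaged) score.
    """
    fused = {}
    for scores in per_image_scores:
        for label, p in scores.items():
            fused[label] = fused.get(label, 0.0) + p
    return max(fused, key=fused.get)

# Three aspect views of the same target; each view alone is ambiguous,
# but the fused scores favour "T72".
views = [
    {"T72": 0.45, "BMP2": 0.40, "BTR70": 0.15},
    {"T72": 0.50, "BMP2": 0.30, "BTR70": 0.20},
    {"T72": 0.35, "BMP2": 0.45, "BTR70": 0.20},
]
print(fuse_decisions(views))  # -> T72
```

The sum rule here is one common choice; the data-fusion strategy instead combines the image features themselves before a single classification step.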
A real-time lossless compression technique for ECG signals, which benefits wearable medical devices with stringent low-power requirements, is presented. The real-time ECG waveform is automatically classified into four regions according to its fluctuation features, and the most suitable prediction method is adaptively selected from several linear prediction methods for each region. A modified variable-length code is further proposed to encode the prediction difference for a simpler transmit format. Experimental results based on three publicly available test databases show that the proposed method achieves a better compression ratio with a lower prediction difference than existing state-of-the-art approaches. A very large-scale integration implementation is also demonstrated which can be used as an intellectual property core with a core area of 25,809 μm² and which achieves a power consumption of 127 μW at 100 MHz in a 0.18 μm CMOS technology.
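The adaptive prediction step can be sketched as follows: several fixed linear predictors are evaluated and, per sample or region, the one giving the smallest residual is kept, so that flat regions and steep QRS slopes each get a well-matched predictor. The predictor orders and the per-sample selection below are simplified assumptions for illustration, not the paper's exact region classification.

```python
# Hedged sketch of adaptive linear prediction for ECG compression:
# try a few fixed-order predictors and keep the one whose residual
# (the value actually entropy-coded) has the smallest magnitude.

PREDICTORS = {
    "zero_order":   lambda s, n: s[n - 1],                          # hold last sample
    "first_order":  lambda s, n: 2 * s[n - 1] - s[n - 2],           # linear extrapolation
    "second_order": lambda s, n: 3 * s[n - 1] - 3 * s[n - 2] + s[n - 3],
}

def best_predictor(samples, n):
    """Return (name, residual) of the predictor with the smallest
    absolute residual at index n (ties go to the lower order)."""
    residuals = {name: samples[n] - p(samples, n)
                 for name, p in PREDICTORS.items()}
    name = min(residuals, key=lambda k: abs(residuals[k]))
    return name, residuals[name]

# Toy samples: a linear ramp followed by a QRS-like jump.
ecg = [100, 102, 104, 106, 115, 140]
for n in range(3, len(ecg)):
    print(n, *best_predictor(ecg, n))
```

On the linear ramp the first-order predictor yields a zero residual; at the sharp jump a different predictor wins, which is the behaviour the region-adaptive selection exploits to keep residuals, and hence code lengths, small.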
In this paper, we propose a human action recognition method using HOIRM (histogram of oriented interest region motion) feature fusion and a BOW (bag of words) model based on AP (affinity propagation) clustering. First, a HOIRM feature extraction method based on the region of interest (ROI) around spatiotemporal interest points is proposed. HOIRM can be regarded as a middle-level feature between local and global features. Then, HOIRM is fused with 3D HOG and 3D HOF local features using a cumulative histogram. This further improves the robustness of local features to variations in camera view angle and distance in complex scenes, which in turn improves action recognition accuracy. Finally, a BOW model based on AP clustering is proposed and applied to action classification. It obtains an appropriate visual dictionary capacity and achieves a better clustering effect for the joint description of a variety of features. The experimental results demonstrate that by using the fused features with the proposed BOW model, the average recognition rate is 95.75% on the KTH database and 88.25% on the UCF database, both higher than those obtained using only 3D HOG+3D HOF or HOIRM features. Moreover, the average recognition rate achieved by the proposed method on the two databases is higher than that obtained by other methods.
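The bag-of-words step described above can be sketched minimally: local descriptors are quantized to the nearest visual word and counted into a histogram that jointly describes the fused features. In the paper the dictionary comes from affinity-propagation clustering, which chooses the number of words itself; in this sketch the vocabulary is a fixed set of toy centroids.

```python
# Minimal bag-of-words sketch: quantize each descriptor to its nearest
# visual word (Euclidean distance) and accumulate a word-count histogram.
# The toy 2-D vocabulary stands in for an AP-clustered dictionary.

def quantize(descriptor, vocabulary):
    """Index of the nearest visual word."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(vocabulary)), key=lambda i: dist2(descriptor, vocabulary[i]))

def bow_histogram(descriptors, vocabulary):
    """Histogram of visual-word occurrences over all descriptors."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        hist[quantize(d, vocabulary)] += 1
    return hist

vocab = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]              # 3 visual words
feats = [(0.1, 0.2), (0.9, 0.8), (0.1, 0.9), (1.2, 1.1)]  # toy descriptors
print(bow_histogram(feats, vocab))  # -> [1, 2, 1]
```

Because AP clustering selects the dictionary size from the data rather than requiring a preset k, the resulting histogram dimensionality adapts to the joint feature distribution, which is the property the abstract credits for the better clustering effect.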
Synthetic aperture radar (SAR) multi-target interactive motion recognition classifies the type of interactive motion and generates descriptions of the interactive motions at the semantic level by considering the relevance of multi-target motions. A method for SAR multi-target interactive motion recognition is proposed, which includes moving target detection, target type recognition, interactive motion feature extraction, and multi-target interactive motion type recognition. Wavelet thresholding denoising combined with a convolutional neural network (CNN) is proposed for target type recognition: the method performs wavelet thresholding denoising on SAR target images and then uses an eight-layer CNN named EilNet to recognize the targets. After target type recognition, a multi-target interactive motion type recognition method is proposed, in which a motion feature matrix is constructed and a four-layer CNN named FolNet is designed to perform interactive motion type recognition. A motion simulation dataset based on the MSTAR dataset is built, which includes four kinds of interactive motions by two moving targets. The experimental results show that both the Wavelet + EilNet method for target type recognition and FolNet for multi-target interactive motion type recognition outperform other methods. Thus, the proposed method is an effective method for SAR multi-target interactive motion recognition.
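The motion feature matrix mentioned above can be illustrated with a toy construction: per time step, record each target's speed and the separation between the two targets, yielding one feature row per step for the motion-type classifier to consume. The specific features chosen here, and the trajectories, are illustrative assumptions; the paper's exact feature matrix and the FolNet architecture are not reproduced.

```python
# Illustrative sketch of a motion feature matrix for two interacting targets:
# one row [speed_a, speed_b, separation] per unit time step. A classifier
# (FolNet in the paper) would take such a matrix as input.

import math

def motion_features(traj_a, traj_b):
    """traj_*: list of (x, y) positions sampled at unit time steps."""
    rows = []
    for t in range(1, len(traj_a)):
        speed_a = math.dist(traj_a[t], traj_a[t - 1])
        speed_b = math.dist(traj_b[t], traj_b[t - 1])
        separation = math.dist(traj_a[t], traj_b[t])
        rows.append([speed_a, speed_b, separation])
    return rows

# Two targets closing on each other along the x-axis: the shrinking
# separation column is what distinguishes "approach" from, say, "follow".
a = [(0, 0), (1, 0), (2, 0)]
b = [(6, 0), (5, 0), (4, 0)]
print(motion_features(a, b))  # -> [[1.0, 1.0, 4.0], [1.0, 1.0, 2.0]]
```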