Polytetrafluoroethylene (PTFE) blended with polyimide (PI) and filled with boron nitride (BN) was prepared by cold pressing and sintering to obtain composites with remarkable wear resistance and a reduced coefficient of friction (COF). Characterization shows that BN and PI at different loadings improve the hardness, dynamic thermo-mechanical modulus, thermal conductivity, and tribological properties of PTFE, with PI enhancing the dispersion and bonding of BN in the PTFE matrix. In dry sliding on a block-on-ring tribometer, the wear rate and COF of the 10:10:80 BN/PI/PTFE composite drop to roughly 1/300 and 80% of those of pure PTFE, respectively, as the wear mechanism transitions from adhesive to partially abrasive. This synergistic effect of BN and PI arises only at concentrations not exceeding ca. 10 wt% and 15 wt%, respectively. Pronounced agglomeration at high PI loadings and under severe conditions (400 N, 400 rpm) induces strong adhesive failure. Variations in the tensile properties, hardness, crystallinity, and microstructure of the composites account for these different effects. The multi-parameter wear and friction data are transformed into contour plots, and the resulting mechanism-transition maps aid in understanding how test conditions and composite composition affect the contact surfaces within the space-time framework of wear.
In this paper, we propose a human action recognition method using HOIRM (histogram of oriented interest region motion) feature fusion and a BOW (bag of words) model based on AP (affinity propagation) clustering. First, a HOIRM feature extraction method based on the region of interest (ROI) of spatiotemporal interest points is proposed; HOIRM can be regarded as a mid-level feature between local and global features. Then, HOIRM is fused with 3D HOG and 3D HOF local features using a cumulative histogram. This fusion further improves the robustness of local features to variations in camera view angle and distance in complex scenes, which in turn improves the action recognition accuracy. Finally, a BOW model based on AP clustering is proposed and applied to action classification. It determines an appropriate visual dictionary capacity automatically and achieves a better clustering effect for the joint description of multiple features. The experimental results demonstrate that, using the fused features with the proposed BOW model, the average recognition rate is 95.75% on the KTH dataset and 88.25% on the UCF dataset, both higher than those obtained using only 3D HOG + 3D HOF or HOIRM features. Moreover, the average recognition rates achieved by the proposed method on the two datasets are higher than those obtained by other methods.
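The cumulative-histogram fusion and BOW quantization steps can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and toy descriptors are assumptions, and the visual dictionary is given directly, whereas the paper obtains it by AP clustering of real 3D HOG, 3D HOF, and HOIRM descriptors.

```python
import math

def l1_normalize(hist):
    """Scale a histogram so its bins sum to 1 (no-op for an all-zero histogram)."""
    s = sum(hist)
    return [h / s for h in hist] if s else hist

def fuse_histograms(*hists):
    """Fuse several per-feature histograms (e.g. 3D HOG, 3D HOF, HOIRM) by
    concatenating their L1-normalized bins into one cumulative descriptor."""
    fused = []
    for h in hists:
        fused.extend(l1_normalize(h))
    return fused

def bow_histogram(descriptors, dictionary):
    """Quantize descriptors against a visual dictionary (list of centroids)
    by nearest Euclidean centroid, returning a bag-of-words histogram."""
    hist = [0] * len(dictionary)
    for d in descriptors:
        best = min(range(len(dictionary)),
                   key=lambda k: math.dist(d, dictionary[k]))
        hist[best] += 1
    return hist

# Toy usage: two-word dictionary, three 2-D descriptors.
words = bow_histogram([[0.1, 0.0], [0.9, 1.0], [1.1, 0.8]],
                      [[0, 0], [1, 1]])   # -> [1, 2]
```

A practical advantage of AP clustering here, as the abstract notes, is that the dictionary size need not be fixed in advance, unlike k-means-based BOW pipelines.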
Synthetic aperture radar (SAR) multi-target interactive motion recognition classifies the type of interactive motion and generates semantic-level descriptions of the interactive motions by considering the relevance of multi-target motions. A method for SAR multi-target interactive motion recognition is proposed, comprising moving target detection, target type recognition, interactive motion feature extraction, and multi-target interactive motion type recognition. For target type recognition, wavelet thresholding denoising combined with a convolutional neural network (CNN) is proposed: the method denoises SAR target images by wavelet thresholding and then uses an eight-layer CNN named EilNet to recognize the targets. After target type recognition, a multi-target interactive motion type recognition method is proposed, in which a motion feature matrix is constructed and a four-layer CNN named FolNet is designed to classify the interactive motion type. A motion simulation dataset based on the MSTAR dataset is built, which includes four kinds of interactive motions performed by two moving targets. The experimental results show that the recognition performance of the authors' Wavelet + EilNet method for target type recognition and of FolNet for multi-target interactive motion type recognition is better than that of other methods. Thus, the proposed method is effective for SAR multi-target interactive motion recognition.
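The wavelet thresholding denoising step can be illustrated in miniature. The sketch below applies a one-level Haar transform and soft thresholding to a 1-D signal; this shows only the principle, since the paper operates on 2-D SAR images, and the wavelet basis and threshold value here are assumptions, not the authors' settings.

```python
def haar_dwt(signal):
    """One-level Haar transform: average pairs (approximation) and
    half-differences (detail). Assumes an even-length signal."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail

def soft_threshold(coeffs, t):
    """Soft thresholding: zero coefficients with |c| <= t, shrink the rest by t."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def haar_idwt(approx, detail):
    """Invert the one-level Haar transform."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def wavelet_denoise(signal, t=0.5):
    """Denoise by thresholding the detail coefficients, which carry
    most of the high-frequency (speckle-like) noise."""
    a, d = haar_dwt(signal)
    return haar_idwt(a, soft_threshold(d, t))
```

Suppressing small detail coefficients while keeping the approximation is what lets the subsequent CNN (EilNet, in the paper) see targets with less speckle noise.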