Monitoring biophysical signals such as body or organ movements and other physical phenomena is necessary for patient rehabilitation. However, stretchable flexible pressure sensors with both high sensitivity and a broad sensing range are still lacking. Herein, we monitored various vital biophysical features and implemented in-sensor dynamic deep learning for knee rehabilitation using an ultrabroad-linear-range, high-sensitivity stretchable iontronic pressure sensor (SIPS). We optimized the topological structure and material composition of the electrode to build a fully stretchable on-skin sensor. The high sensitivity (12.43 kPa⁻¹), ultrabroad linear sensing range (1 MPa), high pressure resolution (6.4 Pa), long-term durability (no decay after 12,000 cycles), and excellent stretchability (up to 20%) allow the sensor to maintain stable operation even in emergencies in which a sudden high impact force (near 1 MPa) is applied to it. As a practical demonstration, the SIPS can faithfully track biophysical signals such as pulse waves, muscle movements, and plantar pressure. Importantly, with the help of a neuro-inspired fully convolutional network algorithm, the SIPS can accurately predict knee joint postures for better rehabilitation after orthopedic surgery. Owing to its uniquely high signal-to-noise ratio and ultrabroad linear range, the SIPS is a promising candidate for wearable electronics and intelligent medical engineering.
Counting wheat ears in images captured under natural light is an important way to estimate crop yield and is therefore of great significance to modern intelligent agriculture. However, wheat ears grow densely, so occlusion and overlap appear in almost every wheat image. Traditional image processing methods struggle with occlusion because they lack high-level semantic features, and existing deep-learning-based counting methods do not handle it efficiently either. This article proposes an improved EfficientDet-D0 object detection model for wheat ear counting that focuses on solving occlusion. First, transfer learning is employed to pre-train the backbone network so that it extracts high-level semantic features of wheat ears. Second, an image augmentation method, Random-Cutout, is proposed, in which rectangles are selected and erased according to the number and size of the wheat ears in an image to simulate the occlusion seen in real wheat images. Finally, a convolutional block attention module (CBAM) is inserted after the backbone of the EfficientDet-D0 model, which makes the model refine its features, pay more attention to the wheat ears, and suppress useless background information before the features are fed to the detection layers. Extensive experiments show that the counting accuracy of the improved EfficientDet-D0 model reaches 94%, about 2% higher than the original model, and that its false detection rate of 5.8% is the lowest among the compared methods.
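The Random-Cutout idea described above can be sketched as follows. This is a minimal illustrative re-implementation, not the paper's exact recipe: the function name, the rule that sizes cutouts from the mean annotated ear box, and the one-cutout-per-ten-ears ratio are all assumptions.

```python
import numpy as np

def random_cutout(image, ear_boxes, num_cuts=None, rng=None):
    """Erase rectangles sized like the annotated wheat ears to simulate
    occlusion. `ear_boxes` holds (x1, y1, x2, y2) ear annotations.
    Hypothetical sketch of the Random-Cutout augmentation."""
    rng = np.random.default_rng(rng)
    out = image.copy()
    h, w = out.shape[:2]
    # Assumption: scale the number of cutouts with the number of ears.
    if num_cuts is None:
        num_cuts = max(1, len(ear_boxes) // 10)
    # Assumption: use the mean annotated ear size for the cutout size.
    sizes = np.array([(x2 - x1, y2 - y1) for x1, y1, x2, y2 in ear_boxes])
    mean_w, mean_h = sizes.mean(axis=0).astype(int)
    for _ in range(num_cuts):
        cx = rng.integers(0, max(1, w - mean_w))
        cy = rng.integers(0, max(1, h - mean_h))
        out[cy:cy + mean_h, cx:cx + mean_w] = 0  # erase (fill with zeros)
    return out
```

Sizing the erased rectangles from the real ear annotations is what distinguishes this from plain Cutout: the occluders then resemble the occlusions the detector must overcome at test time.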
Abstract: To automatically evaluate welding quality during high-power disk laser welding, a real-time monitoring system was developed. Images of the laser-induced metal vapor were captured during welding, and fifteen features were extracted from them. A feature selection method based on the sequential forward floating selection (SFFS) algorithm was employed to identify the optimal feature subset, and a support vector machine (SVM) classifier was built to recognize the welding quality. The experimental results demonstrate that this method performs satisfactorily and can be applied in laser welding monitoring.
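A pipeline of this kind, wrapper feature selection feeding an SVM, can be sketched with scikit-learn. Note the hedges: `SequentialFeatureSelector` implements plain sequential forward selection rather than the floating (SFFS) variant the paper uses, and the data here are random stand-ins for the fifteen vapor-plume features.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in data: 200 welds x 15 plume features, binary quality label.
X = rng.normal(size=(200, 15))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# Plain forward selection; SFFS would additionally allow backward steps
# that drop previously selected features when doing so improves the score.
sfs = SequentialFeatureSelector(svm, n_features_to_select=5,
                                direction="forward", cv=3)
sfs.fit(X, y)
selected = np.flatnonzero(sfs.get_support())   # indices of the chosen features
score = cross_val_score(svm, X[:, selected], y, cv=3).mean()
```

The floating variant matters when features are redundant in combination; plain forward selection can lock in an early choice that SFFS would later revise.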
Datum transformations are a fundamental issue in geodesy, Global Positioning System (GPS) science and technology, geographical information science (GIS), and other research fields. In this study, we establish a general total least squares (TLS) theory in which the errors-in-variables model, under different constraints, formulates all transformation models, including the affine, orthogonal, similarity, and rigid transformations. By adapting the transformation models to the constrained TLS problem, the nonlinear constrained normal equations are derived analytically, and the transformation parameters can be estimated iteratively with fixed-point formulas. We also provide the statistical characteristics of the parameter estimator and the unit of precision of the control points. Two examples are given, together with an analysis of how the estimated quantities vary as the number of constraints increases.
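For intuition, the simplest case of the models listed above, a 2D similarity transformation between two point sets, can be estimated with ordinary least squares as sketched below. Unlike the paper's constrained TLS, this sketch treats the source coordinates as error-free; the function name and parameterization are illustrative.

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Ordinary least-squares fit of a 2D similarity transform
    dst ≈ s·R(θ)·src + t, linearised via a = s·cosθ, b = s·sinθ.
    (The paper's constrained TLS additionally models errors in `src`.)"""
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    # Row 2i:   a·x − b·y + tx = X   (target x-coordinate)
    # Row 2i+1: b·x + a·y + ty = Y   (target y-coordinate)
    A[0::2, 0], A[0::2, 1] = src[:, 0], -src[:, 1]
    A[1::2, 0], A[1::2, 1] = src[:, 1], src[:, 0]
    A[0::2, 2] = 1.0
    A[1::2, 3] = 1.0
    rhs = dst.reshape(-1)                      # [X1, Y1, X2, Y2, ...]
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    scale = np.hypot(a, b)                     # s = sqrt(a² + b²)
    theta = np.arctan2(b, a)                   # rotation angle
    return scale, theta, np.array([tx, ty])
```

The (a, b) parameterization keeps the similarity model linear, which is why no iteration is needed here; the affine case is linear too, while the orthogonal and rigid cases add the constraints (e.g. unit scale) that motivate the paper's constrained normal equations and fixed-point iteration.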
Effective classification of small target objects in a no-fly zone is of great significance for ensuring its safety. However, the color and texture differences among small targets in the sky, such as birds, unmanned aerial vehicles (UAVs), and kites, may be subtle. In this paper, we introduce a higher-layer visualizing feature extraction method based on a hybrid deep network that combines a sparse autoencoder (SAE), a convolutional neural network (CNN), and a regression classifier to classify images of the different target types. In addition, because the number of available samples of these small targets may be insufficient to extract enough local features directly, we introduce transfer learning into the SAE to obtain cross-domain higher-layer local visualizing features; these features, together with the target-domain small-sample images, are then fed into the CNN to acquire global visualizing features of the targets. Experimental results show that the higher-layer visualizing feature extraction and the transfer-learning deep networks are effective for classifying small-sample target objects in the sky.
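The transfer step, pre-training an autoencoder on a data-rich source domain and reusing its encoder on the scarce target domain, can be sketched with a tiny tied-weight autoencoder. This is a toy stand-in: it omits the sparsity penalty of a true SAE, uses random data, and all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in source-domain patches (plentiful) and target-domain samples (scarce).
X_src = rng.normal(size=(500, 64))   # e.g. 8x8 patches, flattened
X_tgt = rng.normal(size=(30, 64))    # small-sample sky targets

# Tiny tied-weight autoencoder trained on the source domain only.
d, k, lr = 64, 16, 0.01
W = rng.normal(scale=0.1, size=(d, k))
err0 = np.mean((np.tanh(X_src @ W) @ W.T - X_src) ** 2)  # baseline error

for _ in range(200):
    H = np.tanh(X_src @ W)           # encode
    X_hat = H @ W.T                  # decode with tied weights
    err = X_hat - X_src
    # Gradient of the mean squared reconstruction error w.r.t. W,
    # summed over the encoder and decoder paths of the tied weights.
    dH = err @ W
    dW = (X_src.T @ (dH * (1 - H ** 2)) + err.T @ H) / len(X_src)
    W -= lr * dW

# Transfer: encode the scarce target-domain samples with the
# source-trained encoder, yielding cross-domain local features
# that a downstream CNN/classifier would consume.
F_tgt = np.tanh(X_tgt @ W)
```

The point of the transfer is that the encoder weights `W` are learned where data is abundant, so the few target-domain samples only need to be projected, not used for representation learning.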