Fixed-wing vertical take-off and landing (VTOL) UAVs have attracted increasing attention in recent years because they combine the advantages of fixed-wing and rotary-wing UAVs. To cover their large flight envelope, VTOL UAVs require accurate measurement of airflow parameters, including the angle of attack, sideslip angle, and incoming-flow speed, over a wide range of angles of attack. However, traditional airflow-measurement devices are unsuitable for large-angle measurement, and their performance is unsatisfactory when the UAV flies at low speed. Therefore, for tail-sitter VTOL UAVs, we used a 5-hole pressure probe to measure the pressures at its holes and transformed the pressure data into the airflow parameters required during flight using an artificial neural network (ANN). Through a series of comparative experiments, we obtained a high-performance neural network, and the processing and analysis of wind-tunnel data verified the feasibility of the proposed method, which yields more accurate estimates of the airflow parameters within a certain range.
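As a minimal illustration of the pressure-to-parameter mapping described above, the sketch below trains a small feed-forward network to regress the angle of attack, sideslip angle, and airspeed from five pressure readings. The synthetic calibration data, layer sizes, and training settings are assumptions for demonstration only; the paper's actual network architecture and wind-tunnel data are not reproduced here.

```python
# Hypothetical sketch: map 5-hole probe pressures to airflow parameters
# (angle of attack alpha, sideslip beta, airspeed V) with a small
# feed-forward network. All data and hyperparameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for wind-tunnel calibration data:
# X holds the 5 hole pressures, y the (alpha, beta, V) labels.
X = rng.normal(size=(2000, 5))
y = np.column_stack([
    X[:, 1] - X[:, 3],      # crude proxy for angle of attack
    X[:, 2] - X[:, 4],      # crude proxy for sideslip angle
    np.abs(X[:, 0]) + 1.0,  # crude proxy for airspeed
])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0)
model.fit(X, y)

# At run time, a single pressure sample is converted to airflow parameters.
alpha, beta, airspeed = model.predict(rng.normal(size=(1, 5)))[0]
print(f"alpha={alpha:.3f}, beta={beta:.3f}, V={airspeed:.3f}")
```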
Path following is a fundamental problem for skid-steered mobile robots (SSMRs). In this study, a Lyapunov-stable curved-path-following controller was designed to generate the steering command for an SSMR. In contrast to existing design methods, which either consider the robot's complete dynamic model or neglect its dynamics entirely, this study accounts for the steering dynamics by approximating them with a first-order model. Combining this model with the kinematic model, a steering control law for following a curved path was designed using the backstepping technique and Lyapunov stability theory. The proposed method was verified on a real SSMR platform by following straight-line, square, and circular paths. Compared with a steering control law that ignores the steering dynamics, the proposed method makes the robot converge to the predefined path faster and with a smaller error overshoot.
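A minimal sketch of this two-step design is given below for a straight-line path, assuming the first-order steering model takes the form tau * w_dot = w_cmd - w for the yaw rate w. The virtual control w_des stabilizes the kinematic cross-track and heading errors, and the backstepping step drives w toward w_des. The gains, the lag constant tau, and the numeric derivative of w_des are illustrative assumptions, not the paper's published control law.

```python
# Backstepping-style steering sketch for a unicycle following the line
# y = 0, with an assumed first-order lag on the yaw-rate channel.
import numpy as np

dt, tau = 0.01, 0.3              # time step [s]; assumed steering lag constant
k_y, k_psi, k_w = 1.0, 2.0, 5.0  # assumed feedback gains
v = 0.5                          # constant forward speed [m/s]

def sinc(a):                     # sin(a)/a, finite at a = 0
    return np.sinc(a / np.pi)

y, psi, w = 1.0, 0.0, 0.0        # start 1 m off the reference line
w_des_prev = -k_y * v * y * sinc(psi) - k_psi * psi

for _ in range(3000):
    # Kinematic (virtual) control: yaw rate that shrinks the
    # cross-track error y and the heading error psi.
    w_des = -k_y * v * y * sinc(psi) - k_psi * psi
    w_des_dot = (w_des - w_des_prev) / dt  # numeric derivative (sketch only)
    w_des_prev = w_des

    # Backstepping step: command the lagged steering channel so that
    # the residual z = w - w_des decays.
    z = w - w_des
    w_cmd = w + tau * (w_des_dot - psi - k_w * z)

    # Plant: unicycle kinematics plus first-order steering dynamics.
    y   += v * np.sin(psi) * dt
    psi += w * dt
    w   += (w_cmd - w) / tau * dt

print(f"final errors: y = {y:.4f} m, psi = {psi:.4f} rad")
```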
Understanding and analyzing 2D/3D sensor data is crucial for a wide range of machine learning-based applications, including object detection, scene segmentation, and salient object detection. In this context, interactive object segmentation is a vital task in image editing and medical diagnosis, involving the accurate separation of the target object from its background based on user annotation information. However, existing interactive object-segmentation methods struggle to leverage such information effectively to guide segmentation models. To address these challenges, this paper proposes an interactive image-segmentation technique for static images based on multi-level semantic fusion. Our method utilizes user-guidance information both inside and outside the target object to segment it from the static image, making it applicable to both 2D and 3D sensor data. The proposed method introduces a cross-stage feature aggregation module, enabling the effective propagation of multi-scale features from previous stages to the current stage. This mechanism prevents the loss of semantic information caused by repeated upsampling and downsampling in the network, allowing the current stage to make better use of semantic information from the previous stage. Additionally, we incorporate a feature-channel attention mechanism to address the issue of rough segmentation edges. This mechanism captures richer feature details at the feature-channel level, leading to finer segmentation edges. In the experimental evaluation conducted on the PASCAL Visual Object Classes (VOC) 2012 dataset, our proposed method demonstrates an intersection-over-union (IoU) accuracy approximately 2.1% higher than that of a currently popular interactive image-segmentation method for static images. The comparative analysis highlights the improved performance and effectiveness of our method. Furthermore, our method exhibits potential applications in various fields, including medical imaging and robotics. Its compatibility with other machine learning methods for visual semantic analysis allows for integration into existing workflows. These aspects emphasize the significance of our contributions to advancing interactive image-segmentation techniques and their practical utility in real-world applications.
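The sketch below illustrates one plausible form of the cross-stage aggregation and channel-attention ideas described above: a previous-stage feature map is resampled to the current stage's resolution, fused by convolution, and reweighted with squeeze-and-excitation-style channel attention. The channel widths, reduction ratio, and module name are assumptions for illustration, not the paper's published architecture.

```python
# Illustrative cross-stage fusion block with channel attention (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossStageFusion(nn.Module):
    def __init__(self, prev_ch: int, cur_ch: int, reduction: int = 8):
        super().__init__()
        # Fuse the concatenated stages back to the current channel width.
        self.fuse = nn.Sequential(
            nn.Conv2d(prev_ch + cur_ch, cur_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(cur_ch),
            nn.ReLU(inplace=True),
        )
        # Squeeze-and-excitation style channel attention.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(cur_ch, cur_ch // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(cur_ch // reduction, cur_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, prev_feat: torch.Tensor, cur_feat: torch.Tensor):
        # Resample the previous stage to the current spatial size, fuse,
        # then reweight channels to emphasize fine edge features.
        prev_feat = F.interpolate(prev_feat, size=cur_feat.shape[-2:],
                                  mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([prev_feat, cur_feat], dim=1))
        return fused * self.attn(fused)

# Usage with dummy tensors (batch 2, previous stage at half resolution).
block = CrossStageFusion(prev_ch=256, cur_ch=128)
out = block(torch.randn(2, 256, 16, 16), torch.randn(2, 128, 32, 32))
print(out.shape)  # torch.Size([2, 128, 32, 32])
```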