Bionic undulating fins, inspired by the undulations of median and/or paired fin (MPF) fish, hold bright prospects for underwater missions thanks to higher maneuverability, lower noise, and higher efficiency. In the present study, a coupled computational fluid dynamics (CFD) model was proposed and implemented to facilitate numerical simulation of the hydrodynamic effects of bionic undulating robots. The hydrodynamic behaviors of underwater robots propelled by two bionic undulating fins were studied computationally and experimentally for three typical desired movement patterns, i.e., marching, yawing, and yawing-while-marching. Moreover, several phenomena specific to the bionic undulation mode were unveiled and discussed by comparing the CFD and experimental results under the same kinematic parameter sets. This work on the dynamic behavior of undulating robots is of importance for the study of propulsion mechanisms and control algorithms.
bionic underwater robot, CFD, dynamic behavior, undulating fins
Citation: Zhou H, Hu T J, Xie H B, et al. Computational and experimental study on dynamic behavior of underwater robots propelled by bionic undulating fins.
Feedback flow information is essential for enabling underwater locomotion controllers with higher adaptability and efficiency in varying environments. Inspired by the way fish sense their external flow via near-body pressure, a computational scheme is proposed and developed in this paper. In conjunction with the scheme, computational fluid dynamics (CFD) is employed to study bio-inspired fish swimming hydrodynamics. The spatial distribution and temporal variation of the near-body pressure of the fish are studied over the whole computational domain. Furthermore, a filtering algorithm is designed and implemented to fuse the near-body pressure at one or multiple points into an estimate of the external flow. The simulation results demonstrate that the proposed computational scheme and its corresponding algorithm are both effective in predicting the inlet flow velocity from near-body pressure at distributed spatial points.
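The abstract above does not specify the fusion filter, so the following is only a rough illustrative sketch: it assumes each pressure tap reports a dynamic pressure obeying the Bernoulli relation p = ½ρU², inverts it per sensor, averages across sensors, and smooths in time with a first-order low-pass filter. The function names and the filter choice are hypothetical, not the paper's algorithm.

```python
import math

RHO = 1000.0  # water density, kg/m^3

def velocity_from_pressure(p_dyn):
    """Invert the Bernoulli relation p = 0.5*rho*U^2 for one sensor."""
    return math.sqrt(max(2.0 * p_dyn / RHO, 0.0))

def fuse_estimate(p_samples, alpha=0.3, u_prev=0.0):
    """Average per-sensor velocity estimates, then low-pass filter in time."""
    u_avg = sum(velocity_from_pressure(p) for p in p_samples) / len(p_samples)
    return alpha * u_avg + (1.0 - alpha) * u_prev

# Feed successive frames of near-body pressure readings through the filter.
u = 0.0
for frame in [[125.0, 118.0, 131.0], [122.0, 120.0, 127.0]]:
    u = fuse_estimate(frame, u_prev=u)
```

The exponential smoothing constant `alpha` trades responsiveness against noise rejection; the paper's actual filter may differ.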
Visual localization is of great importance in robotics and computer vision. Recently, scene coordinate regression based methods have shown good performance for visual localization in small static scenes. However, these methods still estimate camera poses from many inferior scene coordinates. To address this problem, we propose a novel visual localization framework that establishes 2D-to-3D correspondences between the query image and the 3D map with a series of learnable scene-specific landmarks. In the landmark generation stage, the 3D surfaces of the target scene are oversegmented into mosaic patches whose centers are regarded as the scene-specific landmarks. To robustly and accurately recover the scene-specific landmarks, we propose the Voting with Segmentation Network (VS-Net), which segments the pixels into different landmark patches with a segmentation branch and estimates the landmark locations within each patch with a landmark location voting branch. Since the number of landmarks in a scene may reach up to 5000, training a segmentation network with such a large number of classes is both computation- and memory-costly with the commonly used cross-entropy loss. We propose a novel prototype-based triplet loss with hard negative mining, which can train semantic segmentation networks with a large number of labels efficiently. Our proposed VS-Net is extensively tested on multiple public benchmarks and outperforms state-of-the-art visual localization methods. Code and models are available at https://github.com/zju3dv/VS-Net.
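To make the loss design concrete, here is a minimal NumPy sketch of a prototype-based triplet loss with hard negative mining: each embedding is pulled toward its class prototype and pushed away from the closest wrong-class prototype, avoiding a softmax over thousands of classes. This is a generic sketch of the technique, not the exact VS-Net implementation; the margin value and function signature are assumptions.

```python
import numpy as np

def prototype_triplet_loss(embeddings, labels, prototypes, margin=0.5):
    """Hinge loss pulling each embedding toward its class prototype and
    pushing it away from the hardest (closest) negative prototype."""
    losses = []
    for e, y in zip(embeddings, labels):
        d = np.linalg.norm(prototypes - e, axis=1)  # distance to every prototype
        d_pos = d[y]          # distance to the positive (own-class) prototype
        d[y] = np.inf         # mask the positive class out of the negatives
        d_neg = d.min()       # hard negative mining: closest wrong prototype
        losses.append(max(d_pos - d_neg + margin, 0.0))
    return float(np.mean(losses))
```

Because only the nearest negative prototype contributes a gradient, the cost per pixel is one distance computation per class plus a min, with no per-class logit storage, which is what makes thousands of landmark labels tractable.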
In the past decade, biomimetic undulating fin propulsion has been one of the main topics considered by scientists and researchers in the field of robotic fish. This technology is inspired by the biological wave-like propulsion of ribbon-finned fish. The swimming mode has potential for aquatic applications thanks to greater manoeuvrability, less detectable noise or wake, and better efficiency at low speeds. The present work concentrates on evaluating the fin-ray trajectory tracking of biorobotic undulating fins at the levels of kinematics and hydrodynamics using a combined experimental-numerical approach. First, the fin-ray tracking inconsistency between the desired and actual undulating trajectories is quantified with experimental data from the fin prototype. Next, the nonlinearity of the dynamics is unveiled numerically and analytically using the computational fluid dynamics (CFD) method, from the viewpoint of vortex shedding and hydrodynamic effects. The evaluation of fin-ray tracking performance creates a good basis for control designs that improve the fin-ray undulation of prototypes.
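Undulating-fin prototypes of this kind commonly command each fin ray to follow a traveling sine wave, with a phase lag proportional to the ray index; the desired trajectory that the tracking error is measured against can then be sketched as below. The specific parameterization (amplitude, frequency, wavelength count) is a generic assumption for illustration, not the paper's measured configuration.

```python
import math

def desired_ray_angle(i, t, n_rays=10, theta_max=math.radians(30),
                      freq=1.0, wavelengths=1.0):
    """Desired deflection (rad) of fin ray i at time t for a traveling
    sine wave running down the fin: phase lag grows with ray index."""
    phase = 2.0 * math.pi * (freq * t - wavelengths * i / n_rays)
    return theta_max * math.sin(phase)
```

The tracking inconsistency the abstract refers to is then the difference between this commanded angle and the angle the loaded fin ray actually reaches, which the CFD analysis attributes to vortex shedding and hydrodynamic loading.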
Camera extrinsic calibration is an important module for robotic visual tasks. A typical visual task is to use a robot and a color camera to pick an object from a variety of items and place it in a designated area. However, noise from multi-sensor processing may significantly affect the results when running a full-process visual task; in addition, checkerboards are inconvenient or unavailable in pick-and-place scenarios. In this paper, we propose and develop a task-oriented markerless hand-eye calibration method using nonlinear iterative optimization. The optimization employs a transfer error to construct the cost function, which is necessarily observable and estimable for visual tasks. Our method does not require a calibration checkerboard and only uses a salient object available in the task scene as a marker. It provides an end-to-end method that converts the extrinsic parameters into variables optimized against the cost function, making it not only robust to sensor noise but also able to meet the tasks' reconstruction accuracy requirements. Unlike classic methods that detect a calibration pattern of known size, the input of our method is a batch of image points and the corresponding world points. The results show that the accuracy of our extrinsic calibration method is sufficient for the robot's pick-and-place tasks. The competition experiments demonstrate that our method is effective in the desired tasks of vision-in-the-loop automatic pick-and-place scenarios.
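The general shape of such a nonlinear iterative calibration can be sketched as follows: parameterize the extrinsic transform as an axis-angle rotation plus a translation, define a transfer residual between corresponding point sets, and minimize it with a nonlinear least-squares solver. This is a simplified 3D point-to-point version under assumed noise-free correspondences, not the paper's actual cost function or pipeline.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def transfer_residuals(x, cam_pts, robot_pts):
    """Transfer error: camera-frame points mapped by the candidate
    extrinsic [rvec | t], compared against robot-frame points."""
    R, t = rodrigues(x[:3]), x[3:]
    return ((cam_pts @ R.T + t) - robot_pts).ravel()

def calibrate(cam_pts, robot_pts):
    """Estimate the 6-DoF extrinsic by nonlinear least squares."""
    sol = least_squares(transfer_residuals, np.zeros(6),
                        args=(cam_pts, robot_pts))
    return sol.x
```

Because the residual is expressed directly in the task's target frame, minimizing it optimizes exactly the error that matters for placement accuracy, which is the sense in which such a calibration is task-oriented.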