Deflection measurement is a central task in the structural health monitoring of bridges during their operation period. This study develops a contactless measurement technique that monitors bridge deflection by leveraging visual information from a team of unmanned aerial vehicles (UAVs). On the basis of the collinearity of the laser spots projected onto a plane by a coplanar laser indicator, the ego-motion of the UAVs can be eliminated and the vertical displacement of the measured position relative to the bridge pier can be calculated. In the proposed method, the center of each laser spot is extracted with a deep-learning-based method, and a registration algorithm based on scale-invariant features is developed to track the feature points of the bridge across the image sequence. We demonstrate the accuracy and feasibility of the approach through numerical simulation and scaled bridge experiments. The results show that the root mean squared error (RMSE) of measurements obtained with our technique is less than 0.5 mm under laboratory conditions. In addition, the limitations and scalability of the presented method are explored through a field experiment.
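The geometric core of the abstract above, collinear laser spots defining a motion-free reference line against which a target's vertical displacement is measured, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the two-spot simplification, and the pixel-to-millimetre scaling from a known spot spacing are all assumptions for the sketch.

```python
import numpy as np

def relative_deflection(spot_a, spot_b, target, spot_spacing_mm):
    """Estimate the displacement of a tracked point perpendicular to the
    reference line through two collinear laser spots.

    spot_a, spot_b  : (x, y) pixel centers of two laser spots (reference line)
    target          : (x, y) pixel center of the tracked bridge feature
    spot_spacing_mm : known physical distance between the two spots, used as
                      the pixel-to-millimetre scale factor
    """
    a, b, t = (np.asarray(p, dtype=float) for p in (spot_a, spot_b, target))
    d = b - a
    # pixel-to-mm scale inferred from the known spot spacing
    scale = spot_spacing_mm / np.linalg.norm(d)
    # signed perpendicular distance of the target from line a-b (2D cross product)
    cross = d[0] * (t[1] - a[1]) - d[1] * (t[0] - a[0])
    return scale * cross / np.linalg.norm(d)
```

Because the laser spots and the target are measured in the same image, rigid image shifts caused by UAV motion cancel out of the perpendicular distance, which is the intuition behind eliminating UAV motion.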
Space exploration missions involve significant participation from astronauts, so it is of great practical importance to assess astronaut performance via various parameters in the cramped, weightless space station. In this paper, we propose a calibration-free multi-view vision system for astronaut performance capture comprising two modules: (1) an alternating iterative optimization of camera pose and human pose calibrates the extrinsic camera parameters from detected 2D keypoints; (2) scale factors constrained by limb lengths recover the real-world scale, and the shape parameters are refined for subsequent postural reconstruction. Together, these modules provide effective and efficient motion capture in a weightless space station. Extensive experiments on public datasets and ground verification test data demonstrate the accuracy of the estimated camera poses and the effectiveness of the reconstructed human poses.
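The second module above recovers metric scale by constraining reconstructed limb lengths to known values. A minimal sketch of that idea, assuming a skeleton reconstructed up to an unknown global scale and a set of known limb lengths (the function name, joint layout, and least-squares formulation are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def recover_scale(joints_up_to_scale, limb_pairs, limb_lengths_m):
    """Recover the metric scale of an up-to-scale 3D skeleton by fitting
    reconstructed limb lengths to known limb lengths.

    joints_up_to_scale : (J, 3) joint positions in arbitrary units
    limb_pairs         : list of (i, j) joint-index pairs defining limbs
    limb_lengths_m     : known metric length of each limb, in metres
    """
    joints = np.asarray(joints_up_to_scale, dtype=float)
    recon = np.array([np.linalg.norm(joints[i] - joints[j])
                      for i, j in limb_pairs])
    known = np.asarray(limb_lengths_m, dtype=float)
    # closed-form least-squares scale s minimising sum((s * recon - known)^2)
    return float(np.dot(recon, known) / np.dot(recon, recon))
```

Multiplying all joints by the returned scale places the skeleton in real-world units, after which shape parameters can be refined as the abstract describes.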
Multi-camera calibration is an essential step for many spatially aware applications, such as robotic navigation, augmented reality, and 3D human pose estimation. Traditional calibration methods use off-the-shelf checkerboards or triangles to define the known world coordinate system, with their corners set as control points; this heavily depends on specific calibration patterns and is unsuitable for calibration-pattern-denied environments. In this paper, an automatic calibration method is proposed to calibrate a multi-camera system without the aid of a known calibration pattern. The key idea is that the authors consider the human body, which is always available, as the counterpart of the calibration pattern. The authors' approach starts with binocular camera calibration, in which the extrinsic and intrinsic parameters are calculated in order, followed by a joint optimisation. With the results of each binocular calibration, the multi-camera system calibration is carried out in three steps: (i) parameter initialisation, (ii) extrinsic parameter optimisation, and (iii) joint optimisation of intrinsic and extrinsic parameters. Since the approach requires no calibration pattern other than one visible person, it is flexible and easy to implement. Real experiments are conducted in different scenes, camera angles, and camera settings, and human pose estimation with the multi-camera system is additionally performed for exhaustive evaluation. The experimental results demonstrate that the authors' method outperforms the traditional method that relies on a specific calibration pattern.

INTRODUCTION

Camera calibration is a fundamental requirement for a multi-camera system, which has been widely used in many applications due to its low cost. A flexible and accurate camera calibration algorithm is key to unlocking the extensive deployment of multi-camera systems.
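Step (i) of the three-step scheme above, parameter initialisation from pairwise binocular results, can be illustrated by chaining relative poses into a common reference frame. This is a generic sketch of pose composition under an assumed convention (x in camera k+1 equals R x + t applied to camera k coordinates), not the authors' specific initialisation code:

```python
import numpy as np

def chain_extrinsics(pairwise):
    """Initialise multi-camera extrinsics by chaining pairwise binocular
    results into the frame of camera 0.

    pairwise : list of (R, t) relative poses between consecutive cameras,
               with x_{k+1} = R @ x_k + t
    Returns a list [(R_k, t_k)] mapping camera-0 coordinates into each
    camera's frame; these serve as the starting point for the subsequent
    extrinsic and joint optimisation steps.
    """
    poses = [(np.eye(3), np.zeros(3))]           # camera 0 is the reference
    for R_rel, t_rel in pairwise:
        R_prev, t_prev = poses[-1]
        # compose: x_{k+1} = R_rel @ (R_prev @ x_0 + t_prev) + t_rel
        poses.append((R_rel @ R_prev, R_rel @ t_prev + t_rel))
    return poses
```

In practice the chained estimates accumulate drift, which is precisely why steps (ii) and (iii) refine all parameters jointly, typically by minimising reprojection error over the detected human keypoints.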
Multi-camera calibration requires estimating the intrinsic parameters of each camera, such as focal length and principal point, as well as the extrinsic parameters (i.e. rotation and translation) [1]. The intrinsic parameters are calibrated once at the factory, but they may not be accurate enough for measurement and can change over time due to factors such as heat and mechanical stress. Meanwhile, the relative positions of camera pairs change whenever the multi-camera system is deployed in a new scene, so the extrinsic parameters must be calibrated again. One important factor affecting calibration accuracy is the number and distribution of common points in the captured scene. However, a multi-camera system may share only a limited common field of view [2]; thus, the major bottleneck of multi-camera calibration is the difficulty of detecting and matching sufficient known common points across different cameras. There are many calibration methods based on specific calibration patterns whose elements include point...