This paper proposes a practical calibration solution for estimating the boresight and lever-arm parameters of the sensors mounted on a Mobile Mapping System (MMS). On the MMS devised for the calibration experiment, three network video cameras, one mobile laser scanner, and one Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS) were mounted. The proposed calibration solves the geometric relationships among the three sensors, treating the GNSS/INS as a single unit sensor. Our solution uses a point cloud generated by a 3-dimensional (3D) terrestrial laser scanner rather than conventionally surveyed 3D ground control features. With the terrestrial laser scanner, accurate and precise reference data could be produced, and the plane features corresponding to the sparse mobile laser scanning data could be determined with high precision. Furthermore, corresponding point features could be extracted from the dense terrestrial laser scanning data and the images captured by the video cameras. The boresight and lever-arm parameters were calculated by a least squares approach, achieving precisions of 0.1 degrees and 10 mm, respectively.
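At its core, estimating a boresight (rotation) and lever-arm (translation) from corresponding features is a least-squares rigid-transform problem. The sketch below illustrates one standard closed-form solution (the Kabsch/SVD method) for point correspondences; it is an illustrative stand-in, not the paper's actual adjustment model, and the function name `estimate_rigid_transform` is assumed for this example.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform mapping src points onto dst points.

    Returns (R, t) such that dst ~= src @ R.T + t, analogous to a
    boresight rotation R and lever-arm offset t. Kabsch/SVD solution.
    """
    # Center both point sets on their centroids.
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered coordinates.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Given noisy correspondences, the same formulation minimizes the sum of squared residuals, which is the least-squares criterion the abstract refers to.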
Aerial images are an outstanding option for observing terrain because of their high-resolution (HR) capability. However, the high operational cost of aerial imaging makes periodic observation of a region of interest difficult. Satellite imagery is an alternative, but its lower resolution is an obstacle. In this study, we propose a context-based approach that super-resolves 10 m resolution Sentinel-2 imagery into 2.5 m and 5.0 m prediction images using aerial orthoimages acquired over the same period. The proposed model was compared with the enhanced deep super-resolution network (EDSR), which performs excellently among existing super-resolution (SR) deep learning algorithms, using the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root-mean-squared error (RMSE). Our context-based ResU-Net outperformed the EDSR on all three metrics. Fine-tuning with the 60 m resolution Sentinel-2 bands improved performance further: when the 60 m images were included, RMSE decreased while PSNR and SSIM increased. The results also showed that the denser the neural network, the higher the output quality, and accuracy was highest when both denser feature dimensions and the 60 m images were used.
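Two of the evaluation metrics above, RMSE and PSNR, have simple closed-form definitions. The following is a minimal NumPy sketch of both (SSIM is more involved and in practice is usually taken from a library such as scikit-image's `structural_similarity`); the function names and the 8-bit peak value of 255 are assumptions for this example.

```python
import numpy as np

def rmse(ref, pred):
    """Root-mean-squared error between a reference and a predicted image."""
    diff = ref.astype(np.float64) - pred.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(ref, pred, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = rmse(ref, pred)
    # Identical images have zero error, hence infinite PSNR.
    return np.inf if e == 0 else 20.0 * np.log10(max_val / e)
```

Lower RMSE and higher PSNR both indicate a prediction closer to the reference, which is the direction of improvement reported when the 60 m bands were included.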