To determine camera parameters, a calibration procedure based on camera recordings of a checkerboard is usually performed. In this paper, we propose an alternative approach that uses Gray-code patterns displayed on an LCD screen. Gray-code patterns allow us to decode the 3D location of points on the LCD screen at every pixel in the camera image, in contrast to checkerboard patterns, where the number of corresponding locations is limited to the number of checkerboard corners. We show that, for a uEye CMOS camera, focal-length estimation is 1.5 times more precise than with a standard checkerboard calibration.
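As an illustration of the encoding behind such structured-light patterns (a minimal sketch, not the paper's implementation), each screen column or row index can be displayed as a sequence of black/white bit patterns; decoding the bits observed at a camera pixel recovers which screen location that pixel sees. Adjacent Gray codes differ in exactly one bit, which makes the decoding robust to errors at stripe boundaries:

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code of n; adjacent indices differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Invert the Gray code by folding the bits back with XOR."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Round trip over all columns of a hypothetical 1024-pixel-wide screen:
assert all(from_gray(to_gray(c)) == c for c in range(1024))

# Adjacent columns differ by exactly one displayed bit pattern:
assert bin(to_gray(5) ^ to_gray(6)).count("1") == 1
```

With 10 such bit patterns, every column of a 1024-pixel-wide screen gets a unique code, so a correspondence is available at every camera pixel that sees the screen rather than only at detected corners.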
The detection of oil spills in water is a frequently researched area, but most of this research has focused on very large patches of crude oil in offshore areas. We present a novel framework for detecting oil spills inside a port environment using unmanned aerial vehicles (UAVs) and a thermal infrared (IR) camera. The framework is split into a training part and an operational part. In the training part, we present a process for automatically annotating RGB images and matching them with the IR images in order to create a dataset; the infrared camera is crucial for detecting oil spills at night. This dataset is then used to train a convolutional neural network (CNN). Seven different CNN segmentation architectures and eight different feature extractors are tested in order to find the combination best suited for this task. In the operational part, we propose a method for real-time, onboard oil spill detection on the UAV using the pre-trained network and a low-power inference device. A controlled experiment in the port of Antwerp showed that we achieve an accuracy of 89% using only the IR camera.
The traditional literature on camera network design focuses on constructing automated algorithms. These require problem-specific input from experts in order to produce their output, and the nature of this input is highly unintuitive, leading to an impractical workflow for human operators. In this work we develop a virtual reality user interface that allows human operators to design camera networks manually and intuitively. From real-world practical examples we conclude that the camera networks designed with this interface are highly competitive with, and sometimes even superior to, those generated by automated algorithms, while the associated workflow is simpler and more intuitive. The competitiveness of the human-generated camera networks is remarkable because the underlying optimization problem is a well-known NP-hard combinatorial problem. These results indicate that human operators can tackle challenging geometric combinatorial optimization problems, given an intuitive visualization of the problem.
In this paper we consider the problem of generating inspection paths for robots, paths that should allow an attached measurement device to perform high-quality measurements. We formally show that generating robot paths while maximizing inspection quality naturally corresponds to the submodular orienteering problem. Traditional methods that generate solutions with mathematical guarantees do not scale to real-world problems. In this work we propose a method that generates near-optimal solutions for complex real-world problems. We test this method experimentally on a wide variety of inspection problems and show that it nearly always outperforms traditional methods. We furthermore show that the near-optimality of our approach makes it more robust to changes in the inspection problem, and is thus more general.

Keywords: Robotic inspection · Inspection planning · Submodular orienteering · Wind turbine inspection · Drone inspection

Figure 1: A 360° VR video experience that explains and visualizes this work is available online (https://youtu.be/Fg-ulGRyw2w). This video can be watched on a regular computer or a smartphone, but the optimal experience requires a virtual reality headset. Click this figure or scan the QR code to be redirected.
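To give a flavor of why submodular objectives admit near-optimality guarantees, here is an illustrative sketch (not the paper's algorithm) of greedy marginal-gain selection for a submodular coverage objective. Under a simple cardinality constraint this greedy rule is (1 − 1/e)-optimal; the submodular orienteering problem additionally constrains the selected viewpoints to lie on a budgeted robot path. The viewpoint names and patch sets below are hypothetical:

```python
def greedy_coverage(viewpoints, k):
    """viewpoints: dict mapping a viewpoint name to the set of surface
    patches it inspects. Greedily select up to k viewpoints."""
    chosen, covered = [], set()
    for _ in range(k):
        # Pick the viewpoint adding the most not-yet-covered patches.
        best = max(viewpoints, key=lambda v: len(viewpoints[v] - covered))
        if not viewpoints[best] - covered:
            break  # no remaining viewpoint adds anything new
        chosen.append(best)
        covered |= viewpoints[best]
    return chosen, covered

views = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}
print(greedy_coverage(views, 2))  # → (['a', 'c'], {1, 2, 3, 4, 5, 6})
```

The diminishing-returns property (adding a viewpoint later never helps more than adding it earlier) is what makes this myopic rule provably near-optimal, and it is the same structure the orienteering formulation exploits.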