Visual inspection of underwater structures by vehicles, e.g. remotely operated vehicles (ROVs), plays an important role in scientific, military, and commercial sectors. However, the automatic extraction of information using software tools is hindered by the characteristics of water, which degrade the quality of captured videos. As a contribution toward restoring the color of underwater images, the Underwater Denoising Autoencoder (UDAE) model is developed using a denoising autoencoder with a U-Net architecture. The proposed network takes both accuracy and computational cost into consideration to enable real-time use in underwater visual tasks through an end-to-end autoencoder network. Underwater vehicle perception is improved by reconstructing captured frames, hence obtaining better performance in underwater tasks. Related learning methods use generative adversarial networks (GANs) to generate color-corrected underwater images, and to our knowledge this paper is the first to show that a single autoencoder is capable of producing the same or better results. Moreover, image pairs are constructed for training the proposed network, since such a dataset is hard to obtain from underwater scenery. Finally, the proposed model is compared to a state-of-the-art method.
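The key idea of a U-Net-style autoencoder, as referenced above, is that skip connections carry fine spatial detail from the encoder to the decoder at matching resolutions. The following NumPy sketch is purely illustrative of that data flow (it is not the authors' UDAE implementation and omits learned convolutions entirely), using mean pooling for downsampling and nearest-neighbour upsampling:

```python
import numpy as np

def down(x):
    """2x2 mean pooling: halves spatial resolution (stand-in for an encoder step)."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def up(x):
    """Nearest-neighbour upsampling: doubles spatial resolution (decoder step)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_skeleton(img):
    """Data flow of a two-level U-Net-style autoencoder.

    Skip connections concatenate encoder features with the upsampled
    decoder features at the same resolution, so fine detail lost by
    pooling can be recovered when reconstructing the frame.
    """
    e1 = down(img)                               # encoder level 1
    e2 = down(e1)                                # bottleneck
    d1 = np.concatenate([up(e2), e1], axis=-1)   # skip connection from e1
    d0 = np.concatenate([up(d1), img], axis=-1)  # skip connection from input
    return d0

frame = np.random.rand(64, 64, 3)  # hypothetical captured frame
out = unet_skeleton(frame)         # full input resolution, decoder + skip channels
```

In a real UDAE-like network each level would also apply learned convolutions and a final layer would map the concatenated channels back to an RGB reconstruction.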
The detection of moving objects in a scene is a well-researched but, depending on the concrete setting, still often challenging computer vision task. It is usually the first step in a whole pipeline, and all subsequent algorithms (tracking, classification, etc.) depend on the accuracy of the detection. Hence, a good pixel-precise segmentation of the objects of interest is mandatory for many applications. However, the underwater environment has mostly been neglected so far, and no common dataset exists to evaluate different algorithms under the harsh underwater conditions, so a comprehensive evaluation has been impossible. In this paper, we present an underwater change detection dataset consisting of five videos and hundreds of hand-segmented ground truth images, as well as a survey of different underwater image enhancement techniques and their impact on segmentation algorithms.
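Change detection in its simplest form can be sketched as frame differencing against a reference frame; real algorithms evaluated on such a dataset are far more robust, but this minimal NumPy example (an assumption for illustration, not a method from the paper) shows what a pixel-precise change mask is:

```python
import numpy as np

def change_mask(prev, curr, thresh=0.1):
    """Pixel-precise change detection by frame differencing.

    Returns a boolean mask marking pixels whose intensity changed by
    more than `thresh` between two grayscale frames in [0, 1]. Such a
    mask is what ground-truth segmentations are compared against.
    """
    return np.abs(curr.astype(float) - prev.astype(float)) > thresh

prev = np.zeros((4, 4))
curr = prev.copy()
curr[1:3, 1:3] = 0.5           # a small "object" enters the scene
mask = change_mask(prev, curr)  # True exactly where the object appeared
```

Underwater conditions (backscatter, flicker, turbidity) break such naive differencing quickly, which is precisely why a dedicated evaluation dataset is needed.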
Blurring and color cast are two of the most challenging problems in underwater imaging. The poor quality hinders the automatic segmentation or analysis of images. In this paper, we describe an image enhancement method that reduces the blurring and color cast of the underwater medium. It is a two-fold approach: first, a color correction algorithm is applied to correct the color cast and produce a natural appearance of the sub-sea images; second, a pair of learned dictionaries based on sparse representation is applied to sharpen the image and enhance the details. Our strategy is a single-image approach that requires no additional knowledge of the environment such as depth, object-to-camera distance, or water quality. The experimental results show that the proposed method can efficiently enhance almost every underwater image and offers a quality that is typically sufficient for high-level computer vision algorithms.
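A common single-image color correction baseline of the kind the first step describes is the gray-world assumption; the sketch below uses it for illustration (the paper's specific algorithm is not given in the abstract, so this is a stand-in, not the authors' method):

```python
import numpy as np

def gray_world(img):
    """Gray-world color correction for color cast.

    Assumes the average scene color should be neutral gray, so each
    channel is rescaled so that its mean matches the global mean.
    Underwater images typically have a strongly attenuated red
    channel, which this per-channel gain boosts.
    """
    img = img.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0.0, 1.0)

# Hypothetical blue-green cast: weak red, strong blue.
cast = np.full((2, 2, 3), [0.2, 0.4, 0.6])
balanced = gray_world(cast)  # all channel means pulled to the global mean
```

The second step (dictionary-based sharpening via sparse representation) then operates on the color-corrected result to restore detail.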
Plant roots influence many ecological and biogeochemical processes, such as carbon, water, and nutrient cycling. Because of difficult accessibility, however, knowledge of plant root dynamics in field conditions is fragmentary at best. Minirhizotrons, i.e. transparent tubes placed in the substrate into which specialized cameras are inserted, facilitate the capture of high-resolution images of root dynamics at the soil-tube interface with little to no disturbance after the initial installation. Their use, especially in field studies with multiple species and heterogeneous substrates, though, is limited by the amount of work that the subsequent manual tracing of roots in the images requires. Furthermore, the reproducibility and objectivity of manual root detection are questionable. Here, we use a Convolutional Neural Network (CNN) for the automatic detection of roots in minirhizotron images and compare the performance of our RootDetector with human analysts of different levels of expertise. The minirhizotron data stem from various wetland types on organic soils. RootDetector showed a high capability to correctly segment root pixels in minirhizotron images from field observations (F1 = 0.6044; r² compared to a human expert = 0.99). Reproducibility among humans, however, depended strongly on expertise level, with novices showing drastic variation among individual analysts and annotating, on average, almost three times more root length/cm² per image than expert analysts. Analyses with RootDetector save resources, are reproducible and objective, and are as accurate as manual analyses performed by human experts.
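The F1 score reported above is the standard pixel-wise segmentation metric, the harmonic mean of precision and recall. A minimal NumPy implementation (the example masks are made up for illustration):

```python
import numpy as np

def f1_score(pred, gt):
    """Pixel-wise F1 for binary segmentation masks.

    F1 = 2 * precision * recall / (precision + recall), computed from
    true positives, false positives, and false negatives over all pixels.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)    # root pixels correctly detected
    fp = np.sum(pred & ~gt)   # background wrongly marked as root
    fn = np.sum(~pred & gt)   # root pixels missed
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: ground truth has 4 root pixels; the prediction finds
# 2 of them and adds 2 false positives, giving precision = recall = 0.5.
gt = np.zeros((4, 4), dtype=bool)
gt[0, 0:4] = True
pred = np.zeros((4, 4), dtype=bool)
pred[0, 0:2] = True
pred[1, 0:2] = True
score = f1_score(pred, gt)  # → 0.5
```

An F1 of 0.6044 thus means the balance of missed roots and false detections sits between this toy case and a perfect match.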