Methods to detect local features have been designed to be invariant to many transformations. So far, the vast majority of feature detectors consider robustness only to over-land effects. However, when capturing images in underwater environments, medium-specific properties can degrade the visual quality of the captured images. Besides the attenuation of the imaged information, a corruption of that information also occurs. The main phenomenon, backscattering, happens when light from sources outside the captured scene is scattered over a wide angle and eventually reaches the image plane. This effect creates a characteristic veil over the image that reduces contrast and suppresses fine structures.

Little work has been done to study the robustness of popular feature detectors to underwater imaging conditions. Meanwhile, the number of applications that benefit from finding descriptive feature points in underwater environments grows every year. Such points are essential for applications like 3D reconstruction [4], visual odometry [6] and tracking [7]. Most of these applications rely on the best over-land feature detectors, without considering the photometric properties of water. It is likely that some algorithms behave better than others when applied to images degraded by specific underwater conditions.

As stated before, underwater phenomena also create structural degradation. In particular, they tend to eliminate the finer-scale structures, which is equivalent to a change of scale. Consequently, we conjecture that the invariant points detected by a scale-invariant detector can also have good robustness to turbidity.

In this context, to evaluate feature detectors we propose a new dataset called TURBID, based on photographs of real underwater scenes. The printed photographs are placed on the bottom of a tank filled with a milk-water solution and then re-photographed, with the degradation controlled by the amount of milk.
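To illustrate the contrast loss described above, the veiling effect of backscattering can be sketched with the common single-scattering image-formation model I = J·t + A·(1 − t), where t = exp(−β·d) is the transmission of the medium. This is an illustrative model only, not the physical setup of the TURBID tank; the function name and parameters (`airlight`, `beta`, `depth`) are hypothetical.

```python
import numpy as np

def add_backscatter_veil(image, airlight=0.8, beta=1.5, depth=1.0):
    """Illustrative degradation sketch (hypothetical helper): applies the
    single-scattering model I = J*t + A*(1 - t), where t = exp(-beta*depth)
    is the medium transmission. The uniform veiling light A raises dark
    pixels and lowers bright ones, reducing contrast as backscattering does."""
    t = np.exp(-beta * depth)              # transmission of the medium
    return image * t + airlight * (1.0 - t)

# A high-contrast synthetic patch loses contrast under the veil:
patch = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
veiled = add_backscatter_veil(patch, airlight=0.8, beta=1.5, depth=1.0)
# the intensity range (max - min) shrinks from 1.0 to exp(-1.5)
```

Note how the fine structure (the checkerboard alternation) is preserved in position but compressed in amplitude, which is exactly what suppresses the responses of contrast-based detectors.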
The dataset generated for one of the printed photographs is shown in Fig. 1. This dataset improves on previous efforts [8] in terms of visual diversity and is one of the main contributions of this work. To analyze the robustness to turbidity, we use the repeatability criterion on each obtained image. This criterion is proportional to the number of feature points found at the same location up to an error ε. The repeatability under turbidity is computed as the ratio between the number of points repeated in a turbid image and the number of points found in the clean image (Fig. 1(a)):

r_i = N_i / N_0,

where N_0 is the number of features in the clear image and N_i is the number of features repeated in image T_i. We computed the repeatability results for three different photos; an example of this computation is shown in Fig. 2. As we conjectured earlier, the Harris [9], Hessian [5] and Laplacian approaches performed worse than the scale-invariant methods. Harris is generally used as a very precise detector and is used in underwater tracking applications...
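The repeatability ratio above can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: a clean-image feature counts as repeated if some detection in the turbid image T_i lies within a distance ε of it, and the function returns N_i / N_0 (function and parameter names are hypothetical).

```python
import numpy as np

def repeatability(ref_points, turbid_points, eps=2.0):
    """Sketch of the repeatability criterion: a feature detected in the
    clean image is 'repeated' if a feature detected in the turbid image
    lies within eps pixels of it. Returns N_i / N_0."""
    ref = np.asarray(ref_points, dtype=float)
    tur = np.asarray(turbid_points, dtype=float)
    if len(ref) == 0 or len(tur) == 0:
        return 0.0
    # pairwise Euclidean distances between clean and turbid detections
    d = np.linalg.norm(ref[:, None, :] - tur[None, :, :], axis=-1)
    repeated = int((d.min(axis=1) <= eps).sum())   # N_i
    return repeated / len(ref)                     # N_i / N_0

# Example: 2 of 3 clean-image features survive within eps = 2 pixels
r = repeatability([(0, 0), (10, 10), (20, 20)],
                  [(1, 1), (10, 12), (40, 40)], eps=2.0)
# r == 2/3
```

In practice one would also restrict both point sets to the region visible in both images before computing the ratio, so that features lost to framing are not counted against the detector.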