We present a bistatic, polarimetric, real-aperture Marine Radar Simulator (MaRS) producing pseudo-raw radar signal. The simulation takes the main elements of the environment into account (sea temperature, salinity, wind speed). Realistic sea surfaces are generated using a two-scale model on a semi-deterministic basis, so as to be able to incorporate the presence of ship wakes. The radar acquisition chain (antennas, modulation, polarization) is then modeled, as well as the movements of the sensors, on which uncertainties can be introduced, and the ship wakes themselves. The pseudo-raw temporal signals delivered by MaRS are further processed using, for instance, bistatic synthetic aperture beamforming. The scene itself represents the sea surface as well as ship wakes. The main points covered here are the scene discretization, the ship-wake modeling and the computational-cost aspects. We also present images simulated in various monostatic and bistatic configurations and discuss the results. This paper follows "Bistatic radar imaging of the marine environment. Part I: theoretical background", where much of the theory used here is recalled and developed in detail.
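The spectrum-based surface generation mentioned above can be illustrated with a minimal sketch of its large-scale component: filtering complex Gaussian noise with a wave spectrum and inverse-transforming. The Pierson-Moskowitz-type spectrum, its normalization and all parameter values below are illustrative assumptions, not the model actually used by MaRS.

```python
import numpy as np

def sea_surface_realization(n=256, length=512.0, wind_speed=10.0, seed=0):
    """Illustrative large-scale sea-surface height map: filter white
    Gaussian noise with a Pierson-Moskowitz-type spectrum and take an
    inverse FFT. (Spectrum choice and normalization are placeholders,
    not the spectrum used by MaRS.)"""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)   # wavenumbers (rad/m)
    kx, ky = np.meshgrid(k, k)
    kmag = np.hypot(kx, ky)
    g = 9.81                                           # gravity (m/s^2)
    with np.errstate(divide="ignore", invalid="ignore"):
        # Pierson-Moskowitz-like omnidirectional spectrum (illustrative)
        S = np.where(kmag > 0,
                     0.0081 / (2 * kmag**4)
                     * np.exp(-0.74 * (g / (kmag * wind_speed**2))**2),
                     0.0)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    eta = np.fft.ifft2(noise * np.sqrt(S)).real        # height field
    return eta

surface = sea_surface_realization()
print(surface.shape)  # (256, 256)
```

A semi-deterministic variant, as in MaRS, would fix the spectral phases of the components that carry the wake signature instead of drawing them all at random.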
Abstract—A major hindrance to underwater operations using cameras comes from the light absorption and scattering by the marine environment, which limits the visibility distance to a few meters in coastal waters when using low-end cameras. We propose a complete preprocessing framework able to handle the entire spectrum of noises present in underwater images. We show that most, if not all, of this preprocessing can be done with very generic methods that do not need any knowledge of the scene or of the turbidity characteristics of the water, while still remaining coherent with the underwater image formation model.

INTRODUCTION

The increasing interest in Remotely Operated Vehicles (ROVs) and Autonomous Underwater Vehicles (AUVs) for underwater operations has called for the development of efficient, widely available sensors. Optical cameras meet such requirements and have the additional benefit of an excellent resolution. However, the major obstacle to their use is that light, unlike sound, propagates poorly in water. The effective range of visibility is limited to about twenty meters in clear water and less than three meters in turbid, coastal waters. These poor performances are explained by the peculiar propagation properties of light in the aquatic medium [9], [10], [15]. First, a ray of light is exponentially attenuated as it travels in the water, so the background of the scene will be poorly contrasted and hazy. The visibility range may indeed be extended with artificial lighting. Unfortunately, water reflects a significant fraction of the light power towards the camera before it actually reaches the objects in the scene. This process, known as backward scattering, causes a characteristic glowing veil that superimposes itself on the image and hides the scene. Finally, forward scattering, i.e. light randomly deviated on its way from an object to the camera, causes blurring of the image features.
One could also consider macroscopic floating particles ("marine snow") as unwanted signal, although they belong to the scene. In orders of magnitude, backscattering and marine snow are the greatest degradation factors; attenuation comes second and forward scattering follows closely. Figure 1 is an example of a fairly typical underwater image taken in daylight conditions. When specialized hardware such as lasers [6], range-gated light systems [5] or polarized cameras [13] is not available, image quality must be improved via software processing. These algorithms work either via deconvolution or via generic enhancement methods (GEMs), such as contrast enhancement, that do not rely on any physical model. Both approaches have their advantages and flaws. Deconvolution is rigorous but hard to perform in a real situation because the parameters of the model are unknown. In controlled situations, deconvolution can be complete, but in natural environments, only deconvolution of forward scattering, with restrictive assumptions on the point of view, has been achieved [8], [11]. GEMs can be used without these limitations...
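The attenuation and backscattering effects described above are commonly written as the additive image-formation model I = J·e^(−cd) + B·(1 − e^(−cd)), where J is the unattenuated scene radiance, c the attenuation coefficient, d the range and B the veiling light. A minimal sketch (the values of c, B and the constant range map are illustrative assumptions) shows why the model is exactly invertible only in the "controlled situation" where those parameters are known:

```python
import numpy as np

def degrade(J, depth, c=0.3, B=0.6):
    """Standard underwater image-formation model: direct signal
    attenuated exponentially with range, plus a backscatter 'veil'
    saturating toward the veiling light B. (c and B are illustrative.)"""
    t = np.exp(-c * depth)            # transmission along the line of sight
    return J * t + B * (1.0 - t)

def restore(I, depth, c=0.3, B=0.6):
    """Invert the model when c, B and the range map are known --
    the controlled case; in natural environments they are unknown."""
    t = np.exp(-c * depth)
    return (I - B * (1.0 - t)) / t

J = np.random.default_rng(1).uniform(size=(4, 4))   # toy scene radiance
d = np.full((4, 4), 5.0)                            # 5 m range everywhere
I = degrade(J, d)
print(np.allclose(restore(I, d), J))  # True
```

GEMs sidestep this inversion entirely, which is why they need no knowledge of c, B or the range.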
Each year, numerous segmentation and classification algorithms are invented or reused to solve problems where machine vision is needed. Generally, the efficiency of these algorithms is compared against the results given by one or many human experts. However, in many situations, the location of the real boundaries of the objects, as well as their classes, is not known with certainty by the human experts. Moreover, only one aspect of the segmentation and classification problem is generally evaluated. In our evaluation method, we take into account both the classification and segmentation results as well as the level of certainty given by the experts. As a concrete example of our method, we evaluate an automatic seabed characterization algorithm based on sonar images.
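As a hypothetical illustration of how expert certainty can enter an evaluation score (this is not the authors' metric, merely one simple way to weight agreement by certainty), consider a per-pixel agreement measure in which uncertain regions count less:

```python
import numpy as np

def certainty_weighted_accuracy(pred, expert_labels, certainty):
    """Agreement between a predicted labeling and an expert labeling,
    weighted per pixel by the expert's stated certainty in [0, 1].
    (A hypothetical metric, not the authors' exact formulation.)"""
    agree = (pred == expert_labels).astype(float)
    return float((agree * certainty).sum() / certainty.sum())

pred   = np.array([[0, 1], [1, 1]])
labels = np.array([[0, 1], [0, 1]])
cert   = np.array([[1.0, 1.0], [0.2, 1.0]])  # expert unsure of one pixel
score = certainty_weighted_accuracy(pred, labels, cert)
print(score)  # the single disagreement is discounted by its low certainty
```

With uniform certainty this reduces to plain pixel accuracy; the weighting only matters where experts disagree with themselves.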
In this paper, we explore the use of optical correlation-based recognition to identify and position underwater man-made objects (e.g. mines). Correlation techniques can be defined as a simple comparison between an observed image (the image to recognize) and a reference image; they can be computed extremely fast. The result of this comparison is a more or less intense correlation peak, depending on the degree of resemblance between the observed image and a reference image coming from a database. However, to reach a good correlation decision, we should compare our observed image with a huge database of references, covering all the appearances of the objects we search for. Introducing all these appearances can degrade speed and/or recognition quality. To overcome this limitation, we propose to use composite filter techniques, which allow the fusion of several references and drastically reduce the number of comparisons needed to identify observed images. These recent techniques have not yet been exploited in the underwater context. In addition, they allow some preprocessing to be integrated directly in the correlation-filter manufacturing step to enhance the visibility of objects. Applying all the preprocessing in one step reduces the processing time by avoiding unnecessary Fourier transforms and their inverses. We want to obtain filters that are independent of all the noise and contrast problems found in underwater videos. To achieve this, and to create a database containing all scales and viewpoints, we use 3D computer-generated images as references.
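The correlation-by-comparison idea, and the fusion of several references into one composite filter, can be sketched as follows. The equal-weight average used here is the simplest possible composite filter; practical designs (SDF, MACE, ...) solve a constrained optimization instead of averaging, and the image sizes are illustrative.

```python
import numpy as np

def composite_filter(references):
    """Fuse several reference views into one filter by averaging their
    conjugate spectra -- the crudest composite filter; real designs
    (SDF, MACE, ...) impose explicit correlation-peak constraints."""
    specs = [np.conj(np.fft.fft2(r)) for r in references]
    return np.mean(specs, axis=0)

def correlate(scene, filt):
    """Correlation plane via the Fourier domain: one forward FFT of the
    scene, a pointwise product with the filter, one inverse FFT."""
    return np.abs(np.fft.ifft2(np.fft.fft2(scene) * filt))

rng = np.random.default_rng(0)
# roughly zero-mean toy references, so the DC pedestal does not mask the peak
refs = [rng.uniform(size=(64, 64)) - 0.5 for _ in range(3)]
scene = refs[0]                       # the scene contains the first reference
plane = correlate(scene, composite_filter(refs))
peak = np.unravel_index(plane.argmax(), plane.shape)
print(peak)  # peak at zero shift: one filter matched three references
```

One FFT of the scene against the fused filter replaces three separate reference-by-reference correlations, which is exactly the saving the composite approach targets.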
Abstract. An optimized technique, based on the fringe-adjusted joint transform correlator architecture, is proposed and validated for rotation-invariant recognition and tracking of a target in an unknown input scene. To enhance the robustness of the proposed technique, we used a three-step optimization. First, we utilized the fringe-adjusted filter (H_FAF) in the Fourier plane, then we added nonlinear processing in the Fourier plane, and, finally, we used a new decision criterion in the correlation plane by considering the correlation peak energy and the highest peaks outside the desired correlation peak. Several tests were conducted to reduce the number of reference images needed for fast tracking, while ensuring robust discrimination and efficient tracking of the desired target. Test results, obtained using the pointing head pose image database, confirm robust performance of the proposed method for face recognition and tracking applications. Thereafter, we also tested the proposed technique for a challenging application such as underwater mine detection, and excellent results were obtained.
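A minimal numerical sketch of the fringe-adjusted JTC step: the joint power spectrum of a side-by-side reference/scene input is multiplied by the fringe-adjusted filter H_FAF = B / (A + |R|^2) before the inverse transform. The constants a and b, the image sizes, and the joint-plane layout are illustrative assumptions; the nonlinear Fourier-plane processing and the peak-based decision criterion of the proposed method are omitted.

```python
import numpy as np

def fringe_adjusted_jtc(scene, ref, a=1e-3, b=1.0):
    """One fringe-adjusted joint transform correlation step: form the
    joint input plane, take its power spectrum, multiply by the
    fringe-adjusted filter b / (a + |R|^2), inverse-transform.
    (a and b are the usual small regularizing constants.)"""
    h, w = ref.shape
    joint = np.zeros((h, 2 * w))       # reference and scene side by side
    joint[:, :w] = ref
    joint[:, w:] = scene
    F = np.fft.fft2(joint)
    R = np.fft.fft2(ref, s=joint.shape)  # reference spectrum, zero-padded
    faf = b / (a + np.abs(R) ** 2)       # fringe-adjusted filter
    return np.abs(np.fft.ifft2(faf * np.abs(F) ** 2))

rng = np.random.default_rng(2)
ref = rng.uniform(size=(32, 32)) - 0.5
plane = fringe_adjusted_jtc(ref, ref)    # scene identical to the reference
print(plane.shape)  # (32, 64)
```

Cross-correlation peaks appear at offsets given by the reference/scene separation in the joint plane; the decision criterion described in the abstract then compares the desired peak's energy against the highest off-peak responses.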