2012 IEEE/RSJ International Conference on Intelligent Robots and Systems
DOI: 10.1109/iros.2012.6386038

A pipeline for structured light bathymetric mapping

Abstract: This paper details a methodology for using structured light laser imaging to create high-resolution bathymetric maps of the sea floor. The system includes a pair of stereo cameras and an inclined 532 nm sheet laser mounted to a remotely operated vehicle (ROV). While a structured light system generally requires a single camera, a stereo vision setup is used here for in-situ calibration of the laser system geometry by triangulating points on the laser line. This allows for quick calibration at the survey site an…
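
To make the calibration step concrete, the sketch below triangulates laser-line pixels from a rectified stereo pair and fits a plane to the resulting 3D points. This is a minimal sketch of the idea only: the rectified-geometry assumption and every name in it (triangulate_rectified, baseline_m, and so on) are illustrative, not taken from the paper.

```python
import numpy as np

def triangulate_rectified(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Triangulate laser-line pixels matched across a rectified stereo pair.

    u_left, u_right: arrays of laser-line columns on the same image rows v.
    Returns an Nx3 array of points in the left-camera frame (meters).
    """
    disparity = u_left - u_right              # pixels; positive for valid matches
    z = focal_px * baseline_m / disparity     # depth along the optical axis
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.column_stack([x, y, z])

def fit_plane_lsq(points):
    """Least-squares plane n . p + d = 0 through Nx3 points, via SVD on centered data."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                           # direction of least variance = plane normal
    return normal, -normal.dot(centroid)
```

Once the laser-plane geometry is known in the camera frame, survey-time reconstruction needs only one camera: each laser pixel constrains a viewing ray, and the plane pins that ray to a single 3D point.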

Cited by 46 publications (38 citation statements) | References 26 publications (42 reference statements)

Citation statements, ordered by relevance:
“…However, these methods suffer from difficulties of matching features in turbid water (Garcia & Gracias, 2011) and over terrain with few visual features, and they lead to variable resolution across the map as a function of the abundance of visual clues. Techniques for mapping bathymetry with consistently high resolution include different types of structured light by employing scanning point lasers (Kocak, Caimi, Das, & Karson, 1999; Moore, Jaffe, & Ochoa, 2000), line lasers (Inglis et al., 2012; Kondo et al., 2004; Tetlow & Spours, 1999), or light pattern projections (Bruno et al., 2011), which make use of the known relative positions of the camera and the projector. Measurements of the time of flight of a light impulse are used in LIDAR (Light Detection and Ranging) (Harsdorf et al., 1999) and serial imaging systems (Dalgleish et al., 2013).…”
Section: Existing Underwater Mapping Methods and Approach Chosen (mentioning)
confidence: 99%
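
The structured-light principle this excerpt invokes (known relative pose of camera and laser projector) reduces, for a sheet laser, to intersecting each camera ray with the laser plane. A hedged sketch under that assumption; the function and parameter names are illustrative, not from any of the cited works:

```python
import numpy as np

def laser_pixel_to_3d(u, v, K, plane_n, plane_d):
    """Back-project pixel (u, v) and intersect its ray with the laser plane.

    K: 3x3 camera intrinsic matrix; plane_n, plane_d: plane n . p + d = 0
    in the camera frame. Returns the 3D point on the seafloor (camera frame).
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction through the pixel
    t = -plane_d / plane_n.dot(ray)                 # scale placing the point on the plane
    return t * ray
```

Every image row crossed by the laser line yields one such point, which is why line-laser profiling keeps resolution uniform regardless of how feature-rich the terrain is.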
“…While light sectioning using line lasers leads to regularly resolved shape reconstructions, it does not map any color information of the seafloor. Inglis et al. presented a method (Inglis, Smart, Vaughn, & Roman, 2012) in which a bathymetry map generated from laser line scans is used to improve the accuracy of a stereo 3D reconstruction.…”
Section: Introduction (mentioning)
confidence: 99%
“…Using the stereo camera, the 3D positions of the pixels projected by the laser are obtained by triangulation. From those 3D points, the RANSAC algorithm is used to determine the planar parameters of the laser plane [19] ($^{c}M_{l}$). These parameters are referenced to the stereo camera; using the previously obtained transformation between the camera and the end-effector ($^{c}M_{e}$), it is possible to reference the laser plane with respect to it:…”
Section: B. Laser Reconstruction Description (mentioning)
confidence: 99%
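
As a concrete companion to the RANSAC step described in this excerpt, here is a minimal plane-fit sketch; the iteration count and inlier tolerance are illustrative assumptions, not values from the cited work:

```python
import numpy as np

def ransac_plane(points, n_iters=500, inlier_tol=0.005, seed=None):
    """Fit a plane n . p + d = 0 to Nx3 points with RANSAC.

    inlier_tol: point-to-plane distance threshold (meters, assumed scale).
    Returns (normal, d, inlier_mask) for the largest consensus set.
    """
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # skip degenerate (collinear) samples
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        inliers = np.abs(points @ normal + d) < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers
```

A least-squares refit on the final inlier set is a common refinement after the consensus search.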
“…Inglis et al. [10] presented a structured light bathymetric mapping algorithm that accounts for errors in the horizontal component of the robot's position estimate.…”
Section: Introduction (mentioning)
confidence: 99%