2021
DOI: 10.3390/electronics10222871
A Robust Discontinuous Phase Unwrapping Based on Least-Squares Orientation Estimator

Abstract: Weighted least-squares (WLS) phase unwrapping is widely used in optical engineering. However, the technique still struggles to cope with discontinuities and noise. In this paper, a new WLS phase unwrapping algorithm based on the least-squares orientation estimator (LSOE) is proposed to improve phase unwrapping robustness. Specifically, the proposed LSOE employs a quadratic error norm to constrain the distance between gradients and orientation vectors. The estimated orientation is then used to indicate…
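The abstract is truncated before the details, but minimizing a quadratic error norm between local gradients and a single orientation vector is the classical structure-tensor form of least-squares orientation estimation. The sketch below shows that generic estimator, not the authors' exact LSOE; the wrapped-gradient step, the window size `sigma`, and the use of coherence as a WLS quality weight are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lsoe_orientation(wrapped_phase, sigma=2.0):
    """Minimal sketch of a least-squares orientation estimate on a wrapped-phase map."""
    # Wrapped differences keep phase gradients in (-pi, pi].
    gy = np.angle(np.exp(1j * (np.roll(wrapped_phase, -1, axis=0) - wrapped_phase)))
    gx = np.angle(np.exp(1j * (np.roll(wrapped_phase, -1, axis=1) - wrapped_phase)))
    # Structure-tensor entries averaged over a local window: the orientation
    # minimizing the quadratic error against the local gradients is the
    # dominant eigenvector of this 2x2 tensor.
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Closed-form eigen-analysis of the 2x2 symmetric tensor.
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    coherence = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2) / (jxx + jyy + 1e-12)
    return theta, coherence  # coherence ~ 1 on clean fringes, ~ 0 in noise
```

In this form, low-coherence pixels (noisy or discontinuous regions) could be down-weighted in the subsequent WLS unwrapping step, which is one plausible reading of how the estimated orientation "indicates" unreliable areas.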

Cited by 7 publications (2 citation statements) · References 26 publications
“…Assuming that the object is located at a point P, the projection of the target object on the imaging plane of camera 1 is p1, with camera coordinate origin Oc1; similarly, the corresponding points for camera 2 and camera 3 are p2, Oc2 and p3, Oc3, respectively. Owing to lens aberrations, the solution error of the least-squares [18] calculation, and the noise introduced during calibration, the lines from each origin through its projection point are slightly shifted from the true position P of the target object. Consequently, the coordinate P1 reconstructed by the binocular system formed by cameras 1 and 2 cannot coincide exactly with the true position P of the target object. Likewise, the positions P2 and P3 reconstructed from cameras 2 and 3 and from cameras 1 and 3 will fail to coincide with each other and with the target object.…”
Section: Three-Dimensional Vision
confidence: 99%
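The situation the quote describes, rays that no longer intersect exactly at P, is precisely what a least-squares triangulation resolves. Below is a minimal sketch of one common formulation (the point minimizing the summed squared distance to all rays); the function name and the assumption of known camera centers and ray directions are mine, not the citing paper's.

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Least-squares 3D point closest to a bundle of camera rays.

    origins: (n, 3) camera centers Oc_i; directions: (n, 3) vectors from each
    center through its projection p_i. Aberrations and calibration noise mean
    the rays do not meet exactly, so we solve for the point P minimizing the
    sum of squared perpendicular distances to every ray.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        m = np.eye(3) - np.outer(d, d)  # projector onto the ray's normal plane
        A += m
        b += m @ o
    return np.linalg.solve(A, b)       # normal equations of the LS problem
```

With two or more non-parallel rays, A is invertible, and adding the third camera's ray simply contributes another term to the same normal equations, which is how a trinocular setup can reconcile the mutually inconsistent pairwise estimates P1, P2, P3.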
“…It should be noted that if the projector's gamma is calculated directly after the 3D reconstruction, the occluded object will modulate the fringe phase. Therefore, the least-squares orientation estimator method [10] is employed to segment the object and to obtain object-removed wrapped phases. To eliminate random error, the average gamma is computed from multiple object-removed wrapped phases.…”
Section: Gamma Self-Correction Scheme
confidence: 99%
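The averaging step described in the quote lends itself to a short sketch. The per-pixel gamma model below (an ideal normalized fringe intensity raised to the power gamma) and every name in it are assumptions for illustration; the citing paper's actual gamma estimator may differ, and the object mask is assumed to come from the LSOE-based segmentation mentioned above.

```python
import numpy as np

def gamma_from_fringe(i_norm, phi, mask):
    """One gamma estimate from a single object-removed fringe capture.

    Assumed model: i_norm = ((1 + cos(phi)) / 2) ** gamma, so per pixel
    gamma = log(i_norm) / log(ideal). Pixels near the fringe extremes are
    excluded because the log ratio is ill-conditioned there.
    """
    ideal = (1.0 + np.cos(phi)) / 2.0
    valid = mask & (ideal > 0.05) & (ideal < 0.95) & (i_norm > 1e-3)
    return float(np.median(np.log(i_norm[valid]) / np.log(ideal[valid])))

# Averaging over several object-removed captures suppresses random error,
# mirroring the averaging step in the quoted passage (hypothetical inputs):
# gammas = [gamma_from_fringe(i, p, m) for i, p, m in zip(images, phases, masks)]
# gamma_avg = float(np.mean(gammas))
```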