2016
DOI: 10.1109/jdt.2016.2615565

Depth estimation in Integral Imaging based on a maximum voting strategy

Abstract: An approach that uses the scene information acquired by means of a 3D Synthetic Aperture Integral Imaging system is presented. This method generates a depth map of the scene through a voting strategy. In particular, we consider the information given by each camera of the array for each pixel, as well as the local information in a neighbourhood of that pixel. The proposed method obtains consistent results for any type of object surface as well as very sharp boundaries. In addition, we also contribute in this pape…

Cited by 8 publications (7 citation statements)
References 47 publications
“…Table 1 summarizes the characteristics of example scenes from the chosen datasets. The raw data from the different datasets use different parameterizations: for example, scene bathroom [37], generated from 3ds Max, and Chess/Truck [39], captured by a camera gantry, encode the LF in the form of horizontal and vertical views, whereas scenes Pillars/Bikes [38], captured by a Lytro Illum, are organized as a sequence of raw 2D lenslet images. Therefore, for consistency of representation and without loss of generality, we simulate the raw sensor image capture using an imaginary plenoptic camera: we parameterize each LF as a 2D representation such that U × V angular views (of size X × Y pixels) are concatenated into a simulated plenoptic sensor pixel grid.…”
Section: Experiments, A. Datasets
Mentioning confidence: 99%
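
As a rough illustration of the 2D parameterization described in the statement above, the following Python sketch tiles the U × V angular views of a 4D light field into a single simulated sensor pixel grid. The array layout, function name, and side-by-side tiling are assumptions for illustration only; the cited work may interleave samples per lenslet instead.

import numpy as np

def views_to_sensor_grid(lf):
    # Tile a 4D light field of shape (U, V, Y, X), i.e. U x V angular
    # views of Y x X pixels each, into a single 2D "sensor" image of
    # shape (V*Y, U*X). A real plenoptic sensor interleaves samples
    # per lenslet, so this mosaic is only one possible 2D layout.
    U, V, Y, X = lf.shape
    # (U, V, Y, X) -> (V, Y, U, X) -> (V*Y, U*X)
    return lf.transpose(1, 2, 0, 3).reshape(V * Y, U * X)

# Example: 5 x 5 views of 64 x 48 pixels -> a 240 x 320 sensor grid
lf = np.random.rand(5, 5, 48, 64)
print(views_to_sensor_grid(lf).shape)  # (240, 320)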
“…This 3D reconstruction scheme with SAII can be carried out when the cameras are located on a flat surface [25], but also when the camera positions are spread out or in a free-pose configuration in 3D space [33]-[35].…”
Section: Photo-Consistency Based on Median Distances
Mentioning confidence: 99%
“…For instance, the Normalised Cross-Correlation (NCC), the Sum of Squared Differences (SSD), Mutual Information-based measures, etc. Other measures try to deal explicitly with, for instance, occlusions and highlights [23], [25].…”
Section: Introduction
Mentioning confidence: 99%
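
To make the contrast between these measures concrete, here is a minimal Python sketch of SSD and NCC between two equally sized patches. The function names are illustrative, and these plain forms do not handle the occlusions and highlights that the measures in [23], [25] explicitly address.

import numpy as np

def ssd(p, q):
    # Sum of Squared Differences: lower means more similar.
    return float(np.sum((p.astype(float) - q.astype(float)) ** 2))

def ncc(p, q):
    # Normalised Cross-Correlation: close to 1 means similar up to an
    # affine intensity change, which makes NCC robust to global
    # brightness/contrast differences between views.
    p = p.astype(float) - p.mean()
    q = q.astype(float) - q.mean()
    denom = np.sqrt((p ** 2).sum() * (q ** 2).sum())
    return float((p * q).sum() / denom) if denom > 0 else 0.0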
“…Recently, a depth estimation method based on a photoconsistency criterion with a voting strategy has been presented in [80]. The proposed approach (hereafter called the Max-Voting method) is based on a soft-voting procedure that takes into account the level of agreement (similarity) among the different camera views, using a strategy similar to those presented in [81], [82].…”
Section: Depth Estimation Using a Photoconsistency-Based Criterion
Mentioning confidence: 99%
“…For a certain depth level z ∈ Z_range, considering the pixel at position (i, j) of image I and its square surrounding window W_ij, we proposed in [80] a criterion based on a voting procedure where each camera votes in favor of the pixel (i, j) at depth level z depending on the similarity of its pixel intensities as compared to those of the reference camera R. A threshold value (THR) is also assigned that determines whether this similarity is good enough. Similarity is measured using the Euclidean distance d between the a*b* values (from the L*a*b* color space) for each pixel.…”
Section: Depth Estimation Using a Photoconsistency-Based Criterion
Mentioning confidence: 99%
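
The voting criterion quoted above can be sketched in Python as follows, under several simplifying assumptions: the K camera views are taken to be already back-projected to the reference camera R for each candidate depth level (the SAII registration step is omitted), pixels are already in L*a*b* color space, the threshold value is an arbitrary placeholder, and the per-pixel vote omits the local window W_ij aggregation and the soft-voting weights mentioned earlier.

import numpy as np

def max_voting_depth(ref_ab, shifted_ab, thr=8.0):
    # ref_ab     : (H, W, 2) a*b* channels of the reference view R.
    # shifted_ab : (Z, K, H, W, 2) a*b* channels of the K other views,
    #              already shifted to R for each of the Z depth levels.
    # thr        : threshold THR on the Euclidean a*b* distance
    #              (placeholder value, not taken from the paper).
    # Euclidean distance in a*b* between each shifted view and R.
    d = np.linalg.norm(shifted_ab - ref_ab[None, None], axis=-1)  # (Z, K, H, W)
    # Each camera closer than THR casts one vote for that depth level.
    votes = (d < thr).sum(axis=1)  # (Z, H, W)
    # Keep, per pixel, the depth index that gathered the most votes.
    return votes.argmax(axis=0)  # (H, W)

# Example: 3 depth levels, 4 cameras, 32 x 32 pixels.
ref = np.random.rand(32, 32, 2) * 255
views = np.random.rand(3, 4, 32, 32, 2) * 255
print(max_voting_depth(ref, views).shape)  # (32, 32)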