2019
DOI: 10.1109/tpami.2018.2832628
Occlusion-Aware Method for Temporally Consistent Superpixels

Abstract: A wide variety of computer vision applications rely on superpixel or supervoxel algorithms as a preprocessing step. This underlines the overall importance that these approaches have gained in recent years. However, most methods show a lack of temporal consistency or fail to produce temporally stable superpixels. In this paper, we present an approach to generate temporally consistent superpixels for video content. Our method is formulated as a contour-evolving expectation-maximization framework, which utilizes…
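For orientation, the following is a minimal conceptual sketch of an EM-style superpixel refinement loop, not the authors' occlusion-aware, contour-evolving method: pixels are assigned to superpixel models in a joint color/position feature space (E-step) and the models are re-estimated (M-step). All names and parameters are illustrative.

```python
# Minimal EM-style superpixel loop (illustrative only, single frame).
import numpy as np

def em_superpixels(features, init_labels, n_iters=5):
    """features: (N, D) per-pixel color+position features;
    init_labels: (N,) initial assignment covering every label 0..K-1."""
    labels = init_labels.copy()
    k = int(labels.max()) + 1
    means = np.stack([features[labels == j].mean(axis=0) for j in range(k)])
    for _ in range(n_iters):
        # E-step: assign each pixel to the closest superpixel mean.
        dists = ((features[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # M-step: re-estimate each mean; keep the old one if a cluster empties.
        means = np.stack([
            features[labels == j].mean(axis=0) if np.any(labels == j) else means[j]
            for j in range(k)
        ])
    return labels
```

The paper's contribution lies in making such a loop temporally consistent and occlusion-aware across video frames; the single-frame loop above only illustrates the generic EM skeleton.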

Cited by 9 publications (3 citation statements)
References 47 publications

Citation statements
“…However, even state-of-the-art optical flow estimation methods [36] are still imperfect and may introduce extra errors into supervoxel computation. Recently, Reso et al. [31] propose a novel formulation specifically designed for handling occlusions.…”
Section: Video Manifold M and CSS
Citation type: mentioning; confidence: 99%
“…Many methods have been proposed for computing supervoxels, including energy minimization by graph cut [38], non-parametric feature-space analysis [28], graph-based merging [9], [13], [42], contour-evolving optimization [17], [21], [31], optimization of normalized cuts [33], [7], generative probabilistic framework [5] and hybrid clustering [30], [43], etc. These methods can be classified according to different representation formats: (1) temporal superpixels [5], [4], [17], [21], [30], [31], [39]: supervoxels are represented in each frame and their labels are temporally consistent in adjacent frames, and (2) supervoxels [7], [9], [13], [28], [33], [38], [42], [43]: they are 3D primitive volumes whose union forms the video volume. Note that these two representations can be transferred to each other.…”
Section: Introduction
Citation type: mentioning; confidence: 99%
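The quoted passage notes that the temporal-superpixel and supervoxel representations can be converted into each other. A minimal sketch of that equivalence, assuming per-frame integer label maps whose IDs are already consistent across frames (function names are illustrative, not from any cited paper):

```python
# Per-frame label maps with temporally consistent IDs and a 3D supervoxel
# label volume carry the same information, so conversion is just
# stacking/slicing. Names are illustrative only.
import numpy as np

def frames_to_volume(frame_labels):
    """frame_labels: list of (H, W) int arrays with IDs consistent across frames."""
    return np.stack(frame_labels, axis=0)          # (T, H, W) supervoxel volume

def volume_to_frames(label_volume):
    """label_volume: (T, H, W) int array of supervoxel labels."""
    return [label_volume[t] for t in range(label_volume.shape[0])]

# Round trip: the two representations are interchangeable.
frames = [np.zeros((4, 4), dtype=int), np.ones((4, 4), dtype=int)]
assert all((a == b).all()
           for a, b in zip(volume_to_frames(frames_to_volume(frames)), frames))
```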
“…Recently, superpixel segmentation methods have become increasingly popular. These methods can be mainly divided into two categories: graph-based methods and gradient ascent methods [5][6][7][8][9][10][11][12].…”
Section: Related Work
Citation type: mentioning; confidence: 99%
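As a concrete instance of the "gradient ascent" family mentioned in the quote, SLIC-style clustering iteratively refines cluster centers in a joint color/position space. A small usage sketch with scikit-image's slic; the parameter values are arbitrary examples, not taken from the cited works.

```python
# SLIC as a representative gradient-ascent superpixel method (illustrative parameters).
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic

image = astronaut()                                   # (512, 512, 3) RGB test image
labels = slic(image, n_segments=200, compactness=10)  # per-pixel superpixel IDs
print(labels.shape, np.unique(labels).size)           # label map shape, number of superpixels
```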