2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00267

Learning Correspondence From the Cycle-Consistency of Time

Cited by 451 publications (380 citation statements)
References 78 publications
“…1, we propose CycleVAE, which is capable of recycling the converted spectra back into the system, so that the conversion flow is indirectly considered in the parameter optimization. A similar idea has also been proposed as a cycle-consistent flow in a self-supervised method for visual correspondence [24].…”
Section: Proposed CycleVAE-Based VC
confidence: 99%
“…In this paper, to improve VAE-based VC, we propose to use a cycle-consistent mapping flow [24], i.e., CycleVAE-based VC, that indirectly optimizes the conversion flow by recycling the converted spectral features. Specifically, in the proposed CycleVAE, the converted features are fed back into the system to generate corresponding cyclic reconstructed spectra that can be directly optimized.…”
Section: Introduction
confidence: 99%
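The recycling idea quoted above can be illustrated with a minimal sketch. The `convert` and `invert` mappings below are hypothetical stand-ins for the learned conversion flow, not the CycleVAE model itself; the point is only that the converted features are mapped back and the cyclic reconstruction error is what gets optimized.

```python
import numpy as np

def cycle_consistency_loss(x, convert, invert):
    """Cyclic reconstruction: convert the features, recycle them back
    through the inverse mapping, and penalize deviation from the input."""
    y = convert(x)       # forward conversion (e.g. source -> target spectra)
    x_cyc = invert(y)    # feed the converted features back into the system
    return float(np.mean((x - x_cyc) ** 2))

# Toy example: an invertible linear "conversion" (purely illustrative)
A = np.array([[2.0, 0.0], [0.0, 0.5]])
convert = lambda x: x @ A
invert = lambda y: y @ np.linalg.inv(A)

x = np.array([[1.0, 2.0], [3.0, 4.0]])
loss = cycle_consistency_loss(x, convert, invert)  # near zero for a perfect inverse
```

In the actual method, `convert`/`invert` would be the encoder-decoder passes of the VAE, and this loss would be one term in the training objective.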
“…Li et al [19] proposed a framework with adaptive feature propagation for high-level features to reduce the latency of video semantic segmentation. Wang et al [20] used an unsupervised method to learn feature representations for identifying correspondences across frames. Lee et al [21] attempted to derive semantic correspondences by objectaware losses.…”
Section: B. Semantics Sharing
confidence: 99%
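The cross-frame correspondence learning attributed to Wang et al. [20] rests on a temporal cycle: track forward through a sequence of frames, then back again, and penalize any drift from the starting point. The one-step `track` function below is a hypothetical placeholder for the learned tracker; this is a sketch of the cycle-consistency signal, not the paper's implementation.

```python
import numpy as np

def track(pos, flow):
    """Hypothetical one-step tracker: move a 2-D position by a flow vector."""
    return pos + flow

def temporal_cycle_error(start, flows_forward):
    """Track forward through the frames, then backward along the reversed
    (negated) flows; a consistent tracker returns to the start."""
    pos = start.copy()
    for f in flows_forward:            # forward pass in time
        pos = track(pos, f)
    for f in reversed(flows_forward):  # backward pass in time
        pos = track(pos, -f)
    return float(np.sum((pos - start) ** 2))  # cycle-consistency error

start = np.array([10.0, 20.0])
flows = [np.array([1.0, 0.5]), np.array([-0.5, 2.0])]
err = temporal_cycle_error(start, flows)  # zero for this perfectly reversible toy
```

Because the start point supervises the end of the cycle, no human annotation is needed, which is what makes the method self-supervised.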
“…It purely uses the input data to create auxiliary tasks and enables deep networks to learn effective latent features by solving these auxiliary tasks. Various strategies have been proposed to construct auxiliary tasks, based on temporal correspondence (Li et al., 2019b; Wang et al., 2019a), cross-modal consistency, etc. In computer vision, examples of auxiliary tasks include rotation prediction (Gidaris et al., 2018a), image inpainting (Pathak et al., 2016a), automatic colorization (Zhang et al., 2016b), and instance discrimination (Wu et al., 2018), to name a few.…”
Section: Introduction
confidence: 99%
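Among the auxiliary tasks listed in the quote above, rotation prediction is the simplest to make concrete: each image is rotated by 0°, 90°, 180°, or 270°, and a network is trained to classify which rotation was applied. A minimal sketch of the task construction (the classifier itself is omitted):

```python
import numpy as np

def make_rotation_task(image):
    """Build rotation-prediction training pairs: four rotated views of the
    image, each labeled with the index of the rotation applied."""
    views = [np.rot90(image, k) for k in range(4)]   # 0°, 90°, 180°, 270°
    labels = list(range(4))                          # pseudo-labels, no annotation needed
    return views, labels

img = np.arange(16, dtype=float).reshape(4, 4)
views, labels = make_rotation_task(img)
```

The labels come for free from the transformation itself, which is the defining property of such self-supervised auxiliary tasks.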