2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01230
CoCoNets: Continuous Contrastive 3D Scene Representations

Cited by 10 publications
(3 citation statements)
References 28 publications
“…To make full use of the point cloud data, Contrast Context (Hou et al 2021a) proposes to adopt a ShapeContext descriptor to divide the scene, which provides more negative pairs for contrastive learning and improves the effectiveness of pre-trained models. CoCoNets (Lal et al 2021) further explores self-supervised learning of amodal 3D feature representations agnostic to object and scene semantic content. The above methods focus on indoor RGB-D data.…”
Section: D Self-supervised Representation Learningmentioning
confidence: 99%
“…Contrastive Learning for 3D. Contrastive learning [11,12,26], initially designed for 2D, was extended to 3D using similar points from different views in [27,36,81,91]. Limited training data is considered in [27].…”
Section: Related Workmentioning
confidence: 99%
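The contrastive objective these citing works refer to is typically a point-level InfoNCE loss: features of the same 3D point observed from two views form a positive pair, and other points in the batch serve as negatives. A minimal NumPy sketch follows; `info_nce_loss` and its row-correspondence convention are illustrative assumptions, not the exact implementation of any cited paper.

```python
import numpy as np

def info_nce_loss(feats_a, feats_b, temperature=0.07):
    """Point-level InfoNCE: row i of feats_a and feats_b are assumed to
    describe the same 3D point seen from two views (a positive pair);
    every other row in the batch acts as a negative."""
    # L2-normalize so dot products become cosine similarities.
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                    # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal; minimize their negative log-probability.
    return -np.mean(np.diag(log_prob))
```

Correctly matched view pairs should yield a lower loss than mismatched ones, which is what the pre-training stage exploits.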
“…A large part of recent work focuses on the case of reconstructing a single 3D scene given dense observations [16][17][18][19][20], enabling high-quality novel view synthesis with exciting applications in computer graphics. Alternatively, differentiable neural rendering may be used to supervise encoders to enable 3D reconstruction from incomplete image observations [21][22][23][24][25][26][27][28][29][30]. Fu and Zhang et al [31] use neural rendering as a tool to recover high-quality 2D panoptic segmentation annotations from a set of sparse images and noisy 3D bounding primitives and 2D predictions.…”
Section: Related Workmentioning
confidence: 99%
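The differentiable neural rendering mentioned in this statement reduces, per camera ray, to alpha-compositing predicted densities and colors (as in NeRF-style volume rendering). A minimal sketch, with hypothetical function and variable names not taken from any cited work:

```python
import numpy as np

def composite_ray(sigmas, deltas, colors):
    """NeRF-style volume rendering along one ray: alpha-composite
    per-sample colors using densities sigma and inter-sample
    distances delta. Returns the rendered color and sample weights."""
    alphas = 1.0 - np.exp(-sigmas * deltas)           # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                           # compositing weights
    return weights @ colors, weights
```

Because every step is differentiable, a photometric loss on the rendered color can supervise the encoder that predicted the densities and colors.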