“…
Method | Task | Supervision | Text encoder | 3D network | Summary
OV-3DETIC [277] | OV-3D-OD | Pseudo labels from a 2D detector | CLIP-text | 3DETR [278] | OV-3DETIC exploits information from two modalities to achieve 3D open vocabulary object detection.
PLA [124] | OV-3D-SS/IS | 3D segmentation masks (base) | CLIP-text | sparse-conv UNet [279] | PLA first tackles the 3D open vocabulary scene understanding problem.
OpenScene [116] | OV-3D-SS | None | CLIP-text | 3D encoder + LSeg [14] | OpenScene trains a 3D encoder yielding dense features co-embedded with text and image pixels for open vocabulary semantic segmentation.
…”
Section: Open Vocabulary 3D Scene Understanding
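All three entries above share the same core recipe: per-point 3D features are scored against CLIP text embeddings of category names, so the label space is whatever the text encoder can embed. Below is a minimal, hedged sketch of that scoring step in PyTorch; the random tensors stand in for real encoder outputs, and the feature dimension and temperature are illustrative assumptions, not the papers' exact settings.

```python
import torch
import torch.nn.functional as F

def open_vocab_point_logits(point_feats, text_feats, temperature=0.07):
    """point_feats: (N, D) per-point features from a 3D encoder.
    text_feats: (C, D) CLIP text embeddings, one per category prompt.
    Returns (N, C) logits; argmax gives each point's predicted class."""
    point_feats = F.normalize(point_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    return point_feats @ text_feats.t() / temperature

# Example with random tensors standing in for real encoder outputs.
points = torch.randn(4096, 512)  # e.g. sparse-conv UNet output, projected to CLIP dim
texts = torch.randn(20, 512)     # CLIP embeddings of 20 category prompts
labels = open_vocab_point_logits(points, texts).argmax(dim=-1)  # (4096,)
```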
“…In 3D scene understanding, the same obstacle, a shortage of training data, likewise limits generalization. Motivated by the scarcity of 3D-text pairs for point-language contrastive training, PLA [124] proposes to extract features from multi-view images sampled from a scene and to generate descriptions with pre-trained language models. These descriptions are then encoded into language features that supervise the alignment between the 3D backbone and the language embedding space.…”
Section: Open Vocabulary 3D Scene Understanding
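A hedged sketch of the point-language contrastive objective the snippet above describes: 3D features pooled over a captioned view or region are matched to their caption's text embedding with a symmetric InfoNCE loss. The batch construction, pooling, and temperature are assumptions for illustration, not PLA's exact configuration.

```python
import torch
import torch.nn.functional as F

def point_caption_contrastive_loss(region_feats, caption_feats, temperature=0.07):
    """region_feats: (B, D) pooled 3D features, one per captioned view/region.
    caption_feats: (B, D) CLIP text embeddings of the matching captions."""
    region_feats = F.normalize(region_feats, dim=-1)
    caption_feats = F.normalize(caption_feats, dim=-1)
    logits = region_feats @ caption_feats.t() / temperature  # (B, B)
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric InfoNCE: match regions to captions and captions to regions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```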
“…It leverages the feature maps extracted by the CLIP visual encoder and directly transfers CLIP's knowledge without any extra annotation. RegionPLC [297], a recent work on 3D open vocabulary semantic segmentation, combines the benefits of both PLA [124] and OpenScene [116]. It proposes to densely caption randomly sampled regions so as to extract more information from CLIP for 3D backbone distillation.…”
Section: Open Vocabulary 3D Scene Understanding
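The annotation-free transfer described above can be summarized as a per-point cosine distillation loss: frozen, CLIP-aligned 2D features gathered at the pixels each point projects to serve as regression targets for the trainable 3D encoder. The sketch below assumes point-to-pixel correspondences are precomputed from camera poses; it illustrates the idea, not OpenScene's exact pipeline.

```python
import torch
import torch.nn.functional as F

def distill_2d_to_3d_loss(point_feats, pixel_feats):
    """point_feats: (N, D) features from the 3D encoder being trained.
    pixel_feats: (N, D) frozen CLIP-aligned 2D features (e.g. from LSeg)
    gathered at the pixels each point projects to, averaged over views."""
    point_feats = F.normalize(point_feats, dim=-1)
    pixel_feats = F.normalize(pixel_feats, dim=-1)
    # Maximize cosine similarity between each point and its 2D target.
    return (1.0 - (point_feats * pixel_feats).sum(dim=-1)).mean()
```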
“…It proposes to densely caption randomly sampled regions so as to extract more information from CLIP for 3D backbone distillation. The fine-grained dense supervision in RegionPLC [297], which goes beyond view-level or instance-level supervision, further improves performance over PLA [124]. Unlike methods that train 3D backbones [116], [124], [296], [297], PartSLIP [298] and SATR [299] directly project the 3D point cloud or mesh onto 2D planes and use GLIP [231] to localize and segment.…”
Section: Open Vocabulary 3D Scene Understanding
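A rough sketch of this project-then-detect recipe: project the points into several rendered views, run a 2D open-vocabulary detector on each view, and label points whose projections fall inside the returned boxes, voting across views. Here `render`, `project`, and `detect_2d` are hypothetical stand-ins (GLIP's real API is not reproduced), and the box-membership test is a simplification of the papers' mask/part assignment.

```python
import torch

def lift_boxes_to_points(points_2d, boxes, num_points, num_classes):
    """points_2d: (N, 2) pixel coordinates of projected points for one view.
    boxes: list of (x1, y1, x2, y2, class_id) tuples from the 2D detector.
    Returns (N, C) per-view class votes."""
    votes = torch.zeros(num_points, num_classes)
    for x1, y1, x2, y2, cls in boxes:
        inside = ((points_2d[:, 0] >= x1) & (points_2d[:, 0] <= x2) &
                  (points_2d[:, 1] >= y1) & (points_2d[:, 1] <= y2))
        votes[inside, cls] += 1.0
    return votes

# Hypothetical multi-view aggregation (render/project/detect_2d assumed):
# votes = sum(lift_boxes_to_points(project(points, pose),
#                                  detect_2d(render(points, pose), prompts),
#                                  N, C)
#             for pose in camera_poses)
# labels = votes.argmax(dim=-1)  # per-point class after cross-view voting
```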
In the field of visual scene understanding, deep neural networks have made impressive advances in core tasks such as segmentation, tracking, and detection. However, most approaches operate under the closed-set assumption: the model can only identify the pre-defined categories present in the training set. Recently, open vocabulary settings have been proposed, driven by the rapid progress of vision-language pre-training. These approaches seek to locate and recognize categories beyond the annotated label space. The open vocabulary setting is more general, practical, and effective than weakly supervised and zero-shot settings. This paper thoroughly reviews open vocabulary learning, summarizing and analyzing recent developments in the field. In particular, we begin by juxtaposing open vocabulary learning with analogous concepts such as zero-shot learning, open-set recognition, and out-of-distribution detection. Subsequently, we examine several pertinent tasks within the realms of segmentation and detection, encompassing long-tail problems and few-shot and zero-shot settings. As a foundation for our method survey, we first elucidate the fundamental principles of detection and segmentation in closed-set scenarios. Next, we examine various contexts where open vocabulary learning is employed, pinpointing recurring design elements and central themes. This is followed by a comparative analysis of recent detection and segmentation methodologies on commonly used datasets and benchmarks. Our review culminates with a synthesis of insights, challenges, and a discussion of prospective research directions. To our knowledge, this constitutes the first exhaustive literature review on open vocabulary learning. We continuously track related works at https://github.com/jianzongwu/Awesome-Open-Vocabulary.