2022
DOI: 10.2352/ei.2022.34.16.avm-147
FisheyePixPro: Self-supervised pretraining using Fisheye images for semantic segmentation

Abstract: Self-supervised learning has been an active area of research in the past few years. Contrastive learning is a type of self-supervised learning method that has achieved significant performance improvements on image classification tasks. However, no prior work has applied it to fisheye images for autonomous driving. In this paper, we propose FisheyePixPro, an adaptation of the pixel-level contrastive learning method PixPro [1] to fisheye images. This is the first attempt to pretrain a contra…

Cited by 7 publications (4 citation statements)
References 31 publications (37 reference statements)
“…Traditional approaches like [25], [26], [27] all choose positive instances by augmenting an image through some transformation and treating all other instances in the batch as the negative set. Within the domain of fisheye, [28] has utilized existing contrastive learning approaches on fisheye data for the task of semantic segmentation. This work differs fundamentally from ours in the sense that we are proposing a contrastive learning approach specifically geared towards creating a fisheye specific representation space, rather than a generic space based on previous learning approaches.…”
Section: Contrastive Learning
confidence: 99%
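The instance-discrimination scheme this excerpt describes — an augmented view of each image as the positive, all other images in the batch as negatives — is commonly implemented as an InfoNCE loss. A minimal NumPy sketch under that reading; the function and variable names here are illustrative, not taken from any of the cited works:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE: row i of z1 is a query whose positive is row i of z2;
    every other row of z2 in the batch serves as a negative."""
    # L2-normalize embeddings so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Cross-entropy with the matching pair (the diagonal) as the target.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

With identical views the diagonal similarities dominate and the loss is low; with unrelated views it approaches log N, which is why larger batches supply a harder negative set.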
“…DenseCL [15] implements self-supervision by optimizing a pairwise contrastive (dis)similarity loss between two views of input images, whereas pixel-level pretext tasks are introduced for learning dense feature representations in [72]. FisheyePixPro [73] attempts to pretrain a contrastive learning based model directly on fisheye images. Cross-image pixel contrast has been leveraged for semantic segmentation by looking beyond single images [74], [75], [76] and enforcing pixel embeddings belonging to the same semantic class to be more similar than embeddings from different classes.…”
Section: Unsupervised Dense Contrastive Learning
confidence: 99%
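The pixel-level objective mentioned here differs from instance-level contrast in how positives are assigned: pixels from two views are matched by their spatial proximity in the original image, and matched features are pulled together. A simplified NumPy sketch of that idea; names are illustrative, and the actual PixPro method additionally uses a pixel-propagation module not shown here:

```python
import numpy as np

def pixel_consistency_loss(feat_a, feat_b, coords_a, coords_b, dist_thresh=0.7):
    """Pixel-level consistency: pixels from two views whose original-image
    coordinates lie within dist_thresh are treated as positive pairs, and
    the loss maximizes their cosine similarity.
    Shapes: feat_* (N, C) pixel features, coords_* (N, 2) coordinates."""
    feat_a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    feat_b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    # Pairwise coordinate distances between pixels of the two views.
    d = np.linalg.norm(coords_a[:, None, :] - coords_b[None, :, :], axis=2)
    pos = d < dist_thresh                 # positive-pair mask
    sim = feat_a @ feat_b.T               # pairwise cosine similarity
    # Maximize similarity over positives => minimize its negative mean.
    return -sim[pos].mean()
```

Because positives are defined spatially rather than per image, the learned representation is dense and better suited to downstream segmentation than a single global embedding.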
“…Numerous pretext task techniques have been suggested to advance the field of self-supervised learning in the domains of natural images [15,16,17,18,19,20] and self-supervised learning methods have been used in a wide range of applications [21,22,23,24,25,26,27]. A common theme in the majority of these initiatives is that the use of the pretext results is limited to use on downstream tasks using the same dataset.…”
Section: Introduction
confidence: 99%