2022
DOI: 10.1109/tgrs.2022.3198187
Context-Self Contrastive Pretraining for Crop Type Semantic Segmentation

Cited by 10 publications (5 citation statements)
References 42 publications
“…In a similar spirit, [7] use an FPN [31] feature extractor, coupled with a CLSTM temporal model (FPN-CLSTM). The UNET3Df architecture [60] follows from UNET3D but uses a different decoder head more suited to contrastive learning. The U-TAE architecture [14] follows a different approach, in that it employs the encoder part of a UNET2D, applied in parallel on all images, and a subsequent temporal attention mechanism which collapses the temporal feature dimension.…”
Section: Related Work 2.1 Crop Type Recognition (mentioning)
confidence: 99%
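The temporal-collapse idea described in the excerpt above can be sketched in a few lines. This is a hypothetical, minimal numpy illustration, not the U-TAE authors' implementation: the shared 2D encoder is a stub, and the attention scoring is a stand-in for the learned attention of the real model.

```python
import numpy as np

rng = np.random.default_rng(0)

T, C, H, W = 5, 4, 8, 8          # timesteps, channels, height, width
series = rng.normal(size=(T, C, H, W))

def encoder_2d(x):
    # Stand-in for a UNET2D encoder: any shared per-image feature
    # extractor applied independently to each acquisition works here.
    return np.tanh(x)

# Run the same encoder on every image of the time series in parallel.
feats = np.stack([encoder_2d(series[t]) for t in range(T)])  # (T, C, H, W)

# Attention scores per timestep and pixel (here simply a channel mean);
# softmax over the temporal axis so weights sum to 1 at every pixel.
scores = feats.mean(axis=1)                                  # (T, H, W)
weights = np.exp(scores) / np.exp(scores).sum(axis=0)        # (T, H, W)

# Collapse the temporal feature dimension: a weighted sum over T
# leaves a single (C, H, W) feature map for the decoder.
collapsed = (weights[:, None] * feats).sum(axis=0)

print(collapsed.shape)  # (4, 8, 8)
```

The point of the weighted sum is that the decoder then sees a fixed-size input regardless of how many acquisitions the series contains.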
“…To accommodate a large set of experiments we only use fold 1 among the five folds provided in PASTIS. Finally, we use the T31TFM-1618 dataset [60] which covers a densely cultivated S2 tile in France for years 2016-18 and includes 20 distinct classes. In total, it includes 140k samples of size 48 × 48, each containing 14-33 acquisitions and 13 image bands.…”
Section: Training and Evaluation (mentioning)
confidence: 99%
“…With the increasing prevalence of high-resolution remote sensing satellites in global Earth observation missions, high-resolution remote sensing data has become abundant and the primary source for Earth observation. Semantic segmentation of high-resolution remote sensing images [1][2][3][4][5] plays a vital role in understanding the distribution of ground object features, enabling refined urban management, environmental monitoring, natural resource assessment, crop analysis, precise surveying, and mapping. High-resolution remote sensing images possess distinct characteristics, including complex background information, dense targets, and rich ground object features.…”
Section: Introduction (mentioning)
confidence: 99%
“…Pixel-based classification tasks typically start by training a CNN classifier on small image patches, then using a sliding-window method to predict the category of the central pixel [13][14][15]; the drawback is that the trained network predicts only the central pixel of each input patch, leading to low classification efficiency. Semantic segmentation, which aims to assign a specific class label to each pixel in an image with high processing efficiency, is gradually gaining attention in the crop-mapping field [16]. For example, Zhang et al [17] combined a pyramid scene parsing network (PSPNet) [18] and GaoFen satellite images for cropland mapping.…”
Section: Introduction (mentioning)
confidence: 99%
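The sliding-window scheme criticized in the excerpt above can be made concrete with a short sketch. This is a hypothetical numpy illustration, not code from any cited paper: the patch classifier is a trivial stub standing in for a trained CNN, and it shows why predicting only the central pixel of each patch is slow — the classifier must be re-run once per pixel.

```python
import numpy as np

def classify_patch(patch):
    # Stand-in for a trained CNN patch classifier:
    # label the central pixel by the patch's mean intensity.
    return int(patch.mean() > 0.5)

def sliding_window_map(image, patch=5):
    # Evaluate the classifier at every position; each call labels
    # only the central pixel, so an H x W image needs H * W calls.
    half = patch // 2
    H, W = image.shape
    out = np.zeros((H, W), dtype=int)
    padded = np.pad(image, half, mode="reflect")
    for i in range(H):
        for j in range(W):
            out[i, j] = classify_patch(padded[i:i + patch, j:j + patch])
    return out

# Toy image: left half background (0), right half crop (1).
img = np.zeros((8, 8))
img[:, 4:] = 1.0
labels = sliding_window_map(img)
print(labels.shape)  # (8, 8)
```

A semantic-segmentation network, by contrast, labels all H × W pixels in a single forward pass, which is the efficiency gap the excerpt describes.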