2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv48630.2021.00272
On the Texture Bias for Few-Shot CNN Segmentation

Abstract: Despite the initial belief that Convolutional Neural Networks (CNNs) are driven by shapes to perform visual recognition tasks, recent evidence suggests that texture bias in CNNs provides higher-performing and more robust models. This contrasts with the perceptual bias in the human visual cortex, which has a stronger preference for shape components. Perceptual differences may explain why CNNs achieve human-level performance when large labeled datasets are available, but their performance significantly degrad…

Cited by 55 publications (30 citation statements)
References 33 publications (59 reference statements)
“…As shown in Figure 1, images from both support and query sets are first sent to the backbone network to extract features. Feature processing can be accomplished by generating weights for the classifier [33], [41], cosine-similarity calculation [5], [45], [23], or convolutions [15], [54], [49], [9], [1] to generate the final prediction.…”
Section: Introduction
confidence: 99%
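The cosine-similarity style of feature processing mentioned in this excerpt can be illustrated with a short PyTorch sketch, assuming a single class prototype obtained by masked average pooling over the support features; the function name, tensor shapes, and threshold below are illustrative, not the implementation of any cited method.

```python
# Minimal sketch (assumption: a single-prototype, masked-average-pooling variant
# of cosine-similarity feature processing; not the exact method of any cited work).
import torch
import torch.nn.functional as F


def cosine_similarity_prediction(query_feat, support_feat, support_mask):
    """Score each query location against a support-derived class prototype.

    query_feat:   (C, H, W) backbone features of the query image
    support_feat: (C, H, W) backbone features of the support image
    support_mask: (H, W) binary foreground mask aligned with support_feat
    """
    C, H, W = support_feat.shape
    mask = support_mask.view(1, H, W).float()
    # Masked average pooling: collapse support features into one prototype vector.
    prototype = (support_feat * mask).sum(dim=(1, 2)) / (mask.sum() + 1e-6)  # (C,)

    # Cosine similarity between every query location and the prototype.
    q = F.normalize(query_feat.view(C, -1), dim=0)   # (C, H*W)
    p = F.normalize(prototype, dim=0).view(C, 1)     # (C, 1)
    return (q * p).sum(dim=0).view(H, W)             # (H, W), values in [-1, 1]


# Toy usage with random tensors standing in for backbone outputs.
query_feat = torch.randn(256, 32, 32)
support_feat = torch.randn(256, 32, 32)
support_mask = torch.rand(32, 32) > 0.5
score_map = cosine_similarity_prediction(query_feat, support_feat, support_mask)
pred_mask = (score_map > 0.5).float()  # crude threshold into a foreground mask
```

In practice the score map would be upsampled to the input resolution and refined by a decoder rather than thresholded directly.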
“…Feature extraction and mask preparation. First of all, both the query and support images are input to a pretrained feature extractor to obtain the collections of their multi-scale multi-layer feature maps {F^q_{i,l}} and {F^s_{i,l}}, where i is the scale of the feature maps with respect to the input images and i ∈ {1/4, 1/8, 1/16, 1/32} for the feature extractor we use, and l ∈ {1, …”
Section: DCAMA Framework for 1-Shot Learning
confidence: 99%
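A rough sketch of the multi-scale feature extraction step described in this excerpt, assuming a torchvision ResNet-50 backbone whose stages naturally yield maps at 1/4, 1/8, 1/16, and 1/32 of the input resolution; it returns one map per stage rather than the per-layer collections {F^q_{i,l}} and {F^s_{i,l}} used by DCAMA, and all identifiers are illustrative.

```python
# Simplified sketch (assumption: a torchvision ResNet-50 stands in for the
# pretrained feature extractor; one feature map per stage only).
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

# ResNet stages layer1..layer4 output features at 1/4, 1/8, 1/16 and 1/32
# of the input resolution.
return_nodes = {"layer1": "1/4", "layer2": "1/8", "layer3": "1/16", "layer4": "1/32"}
backbone = create_feature_extractor(resnet50(weights=None), return_nodes)
backbone.eval()  # pretrained weights would be loaded and frozen in practice

query = torch.randn(1, 3, 384, 384)    # query image batch
support = torch.randn(1, 3, 384, 384)  # support image batch

with torch.no_grad():
    query_feats = backbone(query)      # dict: scale -> (B, C_i, H*i, W*i)
    support_feats = backbone(support)

for scale, feat in query_feats.items():
    print(scale, tuple(feat.shape))
# 1/4 (1, 256, 96, 96)  1/8 (1, 512, 48, 48)  1/16 (1, 1024, 24, 24)  1/32 (1, 2048, 12, 12)
```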
“…Few-shot Segmentation. Few-shot segmentation [39,59,51,41,47,8,26,48,3,33,22,1] has received considerable attention very recently. Inspired by few-shot learning, Shaban et al. [39] contribute the first few-shot segmentation work, whose segmentation parameters are generated by a conditioning branch on the supports.…”
Section: Related Work
confidence: 99%