2020
DOI: 10.1007/978-3-030-58558-7_36

Indirect Local Attacks for Context-Aware Semantic Segmentation Networks

Cited by 18 publications (13 citation statements)
References 31 publications
“…Even for tasks beyond classification, multiple other attacks exist that do not fall under the categories described above. For example, Nakka et al. [257] devised an attack to demonstrate the vulnerability of semantic segmentation networks against both holistic and localized perturbations. Similarly, for the problem of segmentation, a data membership attack is devised in [258].…”
Section: E. Miscellaneous Attacks
confidence: 99%
“…First, they evaluated the adversarial effect against non-real-time networks only. From our point of view, this leads to unrealistic assessments, since real-time architectures usually do not rely on the rich sequence of modules used by the evaluated networks, which could be practically prone to adversarial attacks [28]. Second, and most importantly, they did not consider real-world adversarial objects, which might represent a real threat in driving scenarios.…”
Section: Related Work
confidence: 99%
“…b) Adversarial studies on SS: Previous works [19], [20], [21], [22], [23], [17], [24], [25] proved that both targeted and untargeted pixel-based perturbations easily fool SS models by extending well-known adversarial strategies (e.g., [26], [27]) from image classification. Consequently, Nakka et al. [28] presented an interesting study on the robustness of SS models on autonomous driving datasets, showing that it is possible to perturb a precise area of pixels to change the SS prediction for specific objects located elsewhere in the image.…”
Section: Related Work
confidence: 99%
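The statement above summarizes the core idea behind the cited paper's indirect local attack: a perturbation confined to a small pixel region is optimized so that the segmentation of objects elsewhere in the image changes. Below is a minimal sketch of that idea as a masked, PGD-style targeted attack in PyTorch; it is not the authors' implementation, and the model interface, masks, and hyper-parameters (`eps`, `alpha`, `steps`) are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of an indirect, localized attack on a
# segmentation model: the perturbation is restricted to `perturb_mask`, while
# the loss is computed only over a separate `target_mask`, so the attack tries
# to flip predictions away from where the perturbation actually lives.
import torch
import torch.nn.functional as F

def indirect_local_attack(model, image, perturb_mask, target_mask, target_label,
                          eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD-style attack restricted to `perturb_mask` (1x1xHxW binary tensor),
    pushing predictions inside `target_mask` (HxW) toward `target_label`
    (HxW tensor of class ids). `model` is assumed to return logits (1xCxHxW)."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)            # only the perturbation needs gradients

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits = model(image + delta * perturb_mask)                  # (1, C, H, W)
        loss = F.cross_entropy(logits, target_label.unsqueeze(0),
                               reduction='none')                      # (1, H, W)
        loss = (loss * target_mask).mean()  # penalize only the remote target region
        loss.backward()
        with torch.no_grad():
            # descend on the targeted loss, keep delta inside the L_inf ball
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (image + delta * perturb_mask).detach()
```

In this sketch the perturbation budget and the attacked region are decoupled on purpose: `perturb_mask` controls where pixels may change, `target_mask` controls whose predictions should flip, which is what makes the attack "indirect" and "local" in the sense described by the quoted statement.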