2022
DOI: 10.1371/journal.pone.0266060

Transferability of features for neural networks links to adversarial attacks and defences

Abstract: The reason for the existence of adversarial samples is still barely understood. Here, we explore the transferability of learned features to Out-of-Distribution (OoD) classes. We do this by assessing neural networks’ capability to encode the existing features, revealing an intriguing connection with adversarial attacks and defences. The principal idea is that, “if an algorithm learns rich features, such features should represent Out-of-Distribution classes as a combination of previously learned In-Distribution …
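The abstract's central idea, that features learned on In-Distribution (ID) classes should combine to represent OoD classes, lends itself to a simple linear-probe check. The Python sketch below is a hypothetical illustration of that kind of check, not the paper's actual protocol: the random arrays stand in for features a trained network would extract from OoD samples, and scikit-learn's LogisticRegression serves as the linear "combination" test. All dimensions and variable names are illustrative assumptions.

# Hypothetical sketch: probing whether frozen ID features linearly
# separate OoD classes. Random placeholders are used here, so the
# printed accuracy will be roughly chance level; with real extracted
# features, high held-out accuracy would suggest the OoD classes are
# expressible as combinations of the features learned on ID data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for penultimate-layer features of a network trained only
# on ID classes, evaluated on samples from classes it never saw.
n_samples, n_features, n_ood_classes = 2000, 128, 5
ood_features = rng.normal(size=(n_samples, n_features))
ood_labels = rng.integers(0, n_ood_classes, size=n_samples)

# Fit a linear probe on the frozen features and score it on held-out data.
X_tr, X_te, y_tr, y_te = train_test_split(
    ood_features, ood_labels, test_size=0.25, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Linear-probe accuracy on held-out OoD samples: {probe.score(X_te, y_te):.3f}")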

Cited by 1 publication (1 citation statement, published 2024)
References: 33 publications

“…This research enhances the robustness of graph neural networks, ensuring their security and reliability in practical applications. Adversarial attacks have been extensively studied in the field of deep learning [13, 14].…”
Section: Introduction (mentioning)
Confidence: 99%