2022
DOI: 10.48550/arxiv.2202.06862
Preprint
Threats to Pre-trained Language Models: Survey and Taxonomy

Cited by 4 publications (5 citation statements)
References 0 publications
“…The scale of used data is much larger than traditional methods, but it's still limited. The pursuit
Vision-Language Intelligence: Tasks, Representation Learning, and Large Models [38] 2022 arXiv MM DC, 19
A survey on vision transformer [39] 2022 TPAMI CV DC, 23
Transformers in vision: A survey [40] 2021 CSUR CV SC, 38
A Survey of Visual Transformers [41] 2021 arXiv CV DC, 21
Video Transformers: A Survey [42] 2022 arXiv CV DC, 24
Threats to Pre-trained Language Models: Survey and Taxonomy [43] 2022 arXiv NLP DC, 8
A survey on bias in deep NLP [44] 2021 AS NLP SC, 26…”
Section: Conventional Deep Learning
confidence: 99%
“…The large-scale pre-trained models [29,43,44,[53][54][55][56] first appeared in the NLP field. Their success is mainly attributed to self-supervised learning and network structures like Transformer [9].…”
Section: Pre-training In Natural
confidence: 99%
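The citation above attributes the success of large pre-trained models to self-supervised learning and Transformer-style architectures. As a minimal sketch of the core Transformer operation, scaled dot-product attention (Vaswani et al., 2017), here is a pure-Python version; the toy inputs are illustrative and not drawn from any of the cited works:

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    Q, K, V are lists of equal-length float vectors."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Toy example: the first query aligns with the first key, so the output
# row puts more weight on the first value vector.
demo = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```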
“…Privacy Protection. To address privacy risks in NLP models, various privacy-preserving methods have been proposed; these can be categorized by three main stages of application (Guo et al., 2022; Sousa and Kern, 2023): the data processing stage, the pre-training and/or fine-tuning stage, and the post-processing stage. In the data processing stage, methods remove or replace sensitive information in the original data (Liu et al., 2017; El Emam et al., 2009; Zhou et al., 2008; García-Pablos et al., 2020).…”
Section: Related Work
confidence: 99%
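The data-processing-stage approach described above, removing or replacing sensitive information before the text reaches the model, can be sketched with simple pattern-based redaction. The patterns and labels below are illustrative assumptions, not the methods of the cited works, which use far more sophisticated detection:

```python
import re

# Illustrative patterns for a few common sensitive-data types.
# A real pipeline would use trained NER-style detectors, not just regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected sensitive span with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-123-4567."))
```

Replacing spans with typed placeholders (rather than deleting them) preserves sentence structure, which matters when the redacted corpus is later used for pre-training or fine-tuning.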
“…Remarkable progress has been made in large language models (LLMs) in recent years (Brown et al., 2020; Liu et al., 2021; Ouyang et al., 2022; Lee et al., 2023). However, despite this success, LLMs face privacy and security concerns in real-world applications (Guo et al., 2022; Brown et al., 2022; Li et al., 2023). The primary cause of these privacy and security risks is the inherent nature of large pre-trained language models.…”
Section: Introduction
confidence: 99%
“…Language-based VLMs inherit the risks of the underlying LLMs and vision models, such as gender and racial biases when prompted with images [52]. Several surveys on the ethics of LLMs are available [52,53]. Some work studies the robustness of VLMs against both natural distribution shifts [54] and adversarial robustness [55].…”
Section: Responsible AI Considerations of Prompting
confidence: 99%