Proceedings of the Conference Recent Advances in Natural Language Processing - Deep Learning for Natural Language Processing Methods and Applications, 2021
DOI: 10.26615/978-954-452-072-4_181

Knowledge Distillation with BERT for Image Tag-Based Privacy Prediction

Abstract: Text in the form of tags associated with online images is often informative for predicting private or sensitive content from images. When using privacy prediction systems running on social networking sites that decide whether each uploaded image should get posted or be protected, users may be reluctant to share real images that may reveal their identity, but may share image tags. In such cases, privacy-aware tags become good indicators of image privacy and can be utilized to generate privacy decisions. In this…
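The abstract is truncated before the method details, so the exact training objective is not shown here. As a minimal sketch only, knowledge distillation for a tag-based privacy classifier is usually implemented by mixing a hard-label cross-entropy term with a temperature-softened KL term against a teacher's (e.g., BERT's) logits. The function name, temperature, and mixing weight below are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Generic soft-label knowledge distillation loss (Hinton-style).

    student_logits, teacher_logits: [batch, num_classes] raw scores
    labels: [batch] gold privacy labels (assumed 0 = public, 1 = private)
    """
    # Hard-label term: ordinary cross-entropy against the gold labels.
    ce = F.cross_entropy(student_logits, labels)

    # Soft-label term: KL divergence between temperature-softened
    # student and teacher distributions, scaled by T^2.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2

    return alpha * kd + (1.0 - alpha) * ce
```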

Cited by 3 publications (2 citation statements). References 26 publications (32 reference statements).
“…Subsequent works continue to use Knowledge Distillation to improve generalization (Arani et al., 2019) and robustness (Goldblum et al., 2020). Knowledge Distillation is also used to improve models with respect to privacy protection (Shejwalkar and Houmansadr, 2019; Zhao and Caragea, 2021). Moreover, pruning can improve model robustness (Jordão and Pedrini, 2021; Pang et al., 2021; Hendrycks and Dietterich, 2019).…”
Section: Compression
confidence: 99%
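The statement above only cites pruning results without detailing the mechanism; for orientation, magnitude pruning of a single layer can be sketched with PyTorch's built-in pruning utilities as below. This is a generic illustration and is not taken from any of the cited studies.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy linear layer standing in for any weight matrix in a larger model.
layer = nn.Linear(768, 2)

# Zero out the 30% of weights with the smallest L1 magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Fold the pruning mask into the weight tensor permanently.
prune.remove(layer, "weight")
```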
“…They used an SVM for the classification into private or public classes. In contrast, Zhao et al. (2021) fine-tuned BERT to model images based on their user tags. Zhao et al. (2022) investigated image privacy prediction by fine-tuning ResNet models, pre-trained on object and scene recognition, on PrivacyAlert.…”
Section: ML-based Image Sensitivity Prediction
confidence: 99%
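For context on the statement above, modelling an image by its user tags with BERT typically amounts to joining the tags into a short text sequence and training a binary sequence classifier on it. The snippet below is a generic Hugging Face sketch under that assumption; the checkpoint, label mapping, and example tags are hypothetical and not taken from the cited work.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Assumed label convention: 0 = public, 1 = private.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# An image is represented only by its user tags, joined into one string.
tags = ["beach", "family", "kids", "vacation"]
inputs = tokenizer(" ".join(tags), return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
prediction = logits.argmax(dim=-1).item()  # 0 or 1 (untrained head here)
```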