Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.187
Learning Disentangled Textual Representations via Statistical Measures of Similarity

Abstract: When working with textual data, a natural application of disentangled representations is fair classification, where the goal is to make predictions without being biased (or influenced) by sensitive attributes that may be present in the data (e.g., age, gender or race). Dominant approaches to disentangle a sensitive attribute from textual representations rely on learning simultaneously a penalization term that involves either an adversarial loss (e.g., a discriminator) or an information measure (e.g., mutual information). …

Cited by 7 publications (24 citation statements). References 22 publications.
“…To our knowledge, only a few studies address this issue and attempt to leverage information from all layers. Notably, recent work by Colombo et al (2022a) considers representations obtained by taking the average embedding across the encoder layers. They then apply common OOD detection methods to this new aggregated embedding.…”
Section: Limitations of Existing Methods (citation type: mentioning)
confidence: 99%
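As a concrete illustration of the aggregation described in this citation statement, the sketch below mean-pools token embeddings, averages the resulting sentence vectors across all encoder layers, and then applies a common OOD detector (here a Mahalanobis-distance scorer) to the aggregated embedding. This is a minimal sketch assuming a Hugging Face-style encoder; the model name, pooling choice, and detector are illustrative assumptions, not necessarily the exact setup of Colombo et al. (2022a).

import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

def layer_averaged_embedding(texts, model_name="bert-base-uncased"):
    """Mean-pool tokens within each layer, then average the resulting
    sentence vectors across all encoder layers (the aggregation quoted above)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, output_hidden_states=True).eval()
    with torch.no_grad():
        batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
        out = model(**batch)
    # out.hidden_states: tuple of (num_layers + 1) tensors, each (batch, seq, dim)
    layers = torch.stack(out.hidden_states, dim=0)               # (L+1, B, T, D)
    mask = batch["attention_mask"].unsqueeze(0).unsqueeze(-1)    # (1, B, T, 1)
    sent = (layers * mask).sum(2) / mask.sum(2)                  # mean over tokens
    return sent.mean(0).numpy()                                  # mean over layers

class MahalanobisOOD:
    """Standard Mahalanobis-distance OOD scorer fit on in-distribution embeddings."""
    def fit(self, embeddings):
        self.mu = embeddings.mean(0)
        self.prec = np.linalg.pinv(np.cov(embeddings, rowvar=False))
        return self
    def score(self, embeddings):
        d = embeddings - self.mu
        return np.einsum("bi,ij,bj->b", d, self.prec, d)  # higher = more OOD

In use, one would fit the scorer on layer-averaged embeddings of in-distribution training texts and then score new inputs with the same aggregation.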
“…Choice of the threshold. To select γ, we follow previous work (Colombo et al, 2022a) by selecting an amount of training samples (i.e., "outliers") the detector can wrongfully detect. A classical choice is to set this proportion to 80%.…”
Section: Building an OOD Detector (citation type: mentioning)
confidence: 99%
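To illustrate the threshold rule described in this citation statement, the sketch below sets γ as a quantile of the OOD scores computed on in-distribution training data, so that a fixed proportion of those samples falls on the "outlier" side of the threshold. The function names and the mapping of the quoted 80% figure onto a quantile are assumptions for illustration; the exact convention depends on the scoring direction and setup used in the cited work.

import numpy as np

def select_threshold(train_scores, in_dist_quantile=0.80):
    """Pick gamma at a quantile of in-distribution training scores.
    With higher scores meaning 'more OOD', the samples above gamma are
    the training points the detector is allowed to wrongfully flag.
    The 0.80 default mirrors the proportion quoted above, but how that
    figure maps to a quantile is an assumption, not a fixed rule."""
    return float(np.quantile(train_scores, in_dist_quantile))

def is_ood(scores, gamma):
    # Flag samples whose OOD score exceeds the chosen threshold gamma.
    return np.asarray(scores) > gamma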