Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3544549.3585604

MultiViz: Towards User-Centric Visualizations and Interpretations of Multimodal Models

Cited by 3 publications (3 citation statements)
References 58 publications
“…For example, a language model that exhibits gender bias may assign a higher likelihood to the final token of "he worked as a [doctor]" than "she worked as a [doctor]". 76 In medicine, incorporation of such biases could lead to exacerbation of disparities in healthcare. In oncology, there may not be enough data about rare cancers or underrepresented populations, leading to blind spots in language models.…”
Section: Bias in NLP Language Models
confidence: 99%
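The pronoun-likelihood probe this statement describes is straightforward to reproduce. Below is a minimal sketch, assuming GPT-2 via Hugging Face transformers as the model under test (an illustrative choice, not the model evaluated by the cited work; the helper name is hypothetical):

```python
# Sketch: compare the probability a causal LM assigns to "doctor"
# after a "he" prompt versus a "she" prompt. GPT-2 is an assumed,
# illustrative model choice.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def final_token_logprob(prompt: str, completion: str) -> float:
    """Log-probability the model assigns to `completion` as the next token."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        next_token_logits = model(ids).logits[0, -1]  # logits at the last position
    log_probs = torch.log_softmax(next_token_logits, dim=-1)
    # Leading space so GPT-2's BPE maps " doctor" to a single token.
    target_id = tokenizer(" " + completion).input_ids[0]
    return log_probs[target_id].item()

for pronoun in ("he", "she"):
    lp = final_token_logprob(f"{pronoun} worked as a", "doctor")
    print(f'log P("doctor" | "{pronoun} worked as a") = {lp:.3f}')
```

A biased model would print a noticeably higher log-probability for the "he" prompt; the size of that gap is one simple operationalization of the disparity the statement describes.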
“…A question provoked by this pattern of findings and nonfindings is what it might mean to "bias-proof" a chatbot. There are numerous ongoing attempts to mitigate bias in LLMs, such as through fine-tuning sentence encoders on semantic similarity tasks [33] and the development of bias-sensitive tokens [34]. The success of these tools in mitigating bias is commonly assessed through word vector associations tests that measure how closely associated specific words or phrases are with respect to sets of attribute words such as "male" versus "female" [35,36] (although other measures of association exist as well [37][38][39]).…”
Section: Principal Findings
confidence: 99%
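The association tests this statement refers to (e.g., WEAT [35,36]) reduce to comparing cosine similarities between target words and two attribute word sets. Here is a minimal sketch of the test statistic on toy vectors standing in for real embeddings; the `emb` lookup and the word lists are hypothetical placeholders:

```python
# Sketch of a WEAT-style association test on static word vectors.
# Random vectors stand in for real embeddings; word lists are toy examples.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, emb):
    """Mean similarity of word w to attribute set A minus attribute set B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect(X, Y, A, B, emb):
    """Differential association of target sets X, Y with attribute sets A, B."""
    return (sum(association(x, A, B, emb) for x in X)
            - sum(association(y, A, B, emb) for y in Y))

rng = np.random.default_rng(0)
vocab = ["doctor", "nurse", "he", "him", "she", "her"]
emb = {w: rng.normal(size=50) for w in vocab}  # toy stand-in embeddings
print(weat_effect(["doctor"], ["nurse"], ["he", "him"], ["she", "her"], emb))
```

A positive effect indicates the first target set sits closer to the first attribute set than the second does; debiasing methods of the kind cited are typically judged by how far they push such statistics toward zero.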
“…These large internet-scale data sets of image, text, or both are then used for model training, and in the process of modeling these data sets, the generative models absorb the biases inherent within them. The results for both generative text 10,11 and image 12 models are concerning, and while current efforts to investigate and control for bias are encouraging, 13 more work needs to be performed on both a theoretical and translational level that ultimately may have to come down to the underlying training data itself.…”
confidence: 99%