2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22)
DOI: 10.1145/3531146.3533088
Taxonomy of Risks posed by Language Models

Cited by 226 publications (188 citation statements)
References 49 publications
“…Another common approach is to orient harm taxonomies around specific algorithmic or model functions (e.g., [73,208]). Model-focused taxonomies have been developed for large language models [208], image captioning systems [104,202], and so-called "foundational models," such as GPT-3 and BERT, which are applied in a wide range of downstream tasks [32]. Organizing harm by model function is highly useful when practitioners' focus is on a singular model because it draws attention to relevant computational harms.…”
Section: Taxonomies of Sociotechnical Harms, Risk, and Failure (mentioning)
confidence: 99%
“…this may undermine the profitability of creative or innovative work." [208] 3.2.1 Opportunity loss. Opportunity loss occurs when algorithmic systems enable disparate access to information and resources needed to equitably participate in society, including withholding of housing [10] and services [74].…”
Section: Allocative Harms: Inequitable Distribution of Resources (mentioning)
confidence: 99%
“…There are a number of other ethical dimensions that have been discussed [34]. While TEAL does not assess these dimensions out of the box, if a user can define a function that quantifies these risks as a function of generated language, they can be assessed.…”
Section: Other Dimensions (mentioning)
confidence: 99%
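
To make the extension point in the excerpt above concrete, the sketch below shows, in Python, what a user-defined function that quantifies a risk as a function of generated language could look like, averaged over a batch of model outputs. The names lexicon_toxicity_risk and assess, the toy lexicon, and the harness are illustrative assumptions only; they do not reflect TEAL's actual interface.

# Hypothetical sketch: a user-defined risk function over generated language.
# The lexicon, function names, and harness are illustrative, not TEAL's API.

from typing import Callable, Iterable

# Toy lexicon of flagged terms; a real assessment would use a calibrated
# classifier or a curated resource instead.
FLAGGED_TERMS = {"idiot", "stupid", "hate"}

def lexicon_toxicity_risk(text: str) -> float:
    """Return a score in [0, 1]: the fraction of tokens that are flagged."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    flagged = sum(1 for tok in tokens if tok.strip(".,!?") in FLAGGED_TERMS)
    return flagged / len(tokens)

def assess(generations: Iterable[str], risk_fn: Callable[[str], float]) -> float:
    """Average a risk function over a batch of model generations."""
    scores = [risk_fn(g) for g in generations]
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    sample_outputs = [
        "You are a thoughtful person.",
        "That was a stupid idea and I hate it.",
    ]
    print(f"mean risk: {assess(sample_outputs, lexicon_toxicity_risk):.3f}")

Any callable with the same text-in, score-out shape (e.g., a toxicity or bias classifier) could be substituted for the toy lexicon scorer, which is the flexibility the quoted statement describes.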