2022
DOI: 10.1038/s42256-022-00458-8
Large pre-trained language models contain human-like biases of what is right and wrong to do

Cited by 126 publications (80 citation statements)
References: 33 publications
“…Summarised, these observations already confirm the results of [14] in a larger multilingual setting and indicate that multilingual LMs indeed capture moral norms. To what extent they differ, however, is still unclear.…”
(supporting, confidence: 75%)
“…From initial experiments, a version of XLM-R tuned with the S-BERT framework [13] shows good correlation with the global user study conducted by [14] when used with their MoralDirection (MD) framework (see Table 1). Simply mean-pooling representations from XLM-R [4], mBERT [6]…”
(mentioning, confidence: 97%)
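The statement above mentions mean-pooling representations from multilingual encoders such as XLM-R and mBERT to obtain sentence embeddings for a MoralDirection-style analysis. As a rough illustration only (not the cited authors' code), the sketch below shows attention-mask-aware mean pooling with the Hugging Face transformers library; the model name and the example phrases are assumptions made for the example.

```python
# Minimal sketch, assuming a Hugging Face encoder checkpoint is available.
# It mean-pools token states into one embedding per sentence; scoring those
# embeddings along a moral direction (as in the MD framework) is not shown.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "xlm-roberta-base"  # assumption; any encoder works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def mean_pool(sentences):
    """Return one embedding per sentence by averaging token vectors,
    ignoring padding positions via the attention mask."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        token_states = model(**batch).last_hidden_state   # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
    summed = (token_states * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1.0)
    return summed / counts                                 # (B, H)

# Illustrative action phrases (assumed), embedded before any moral scoring.
embeddings = mean_pool(["help a friend", "steal money"])
print(embeddings.shape)  # e.g. torch.Size([2, 768])
```

Mean pooling is only one of the strategies the citing work compares against S-BERT-tuned models; a tuned checkpoint would be loaded the same way with a different `model_name`.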