2021
DOI: 10.48550/arxiv.2105.13471
Preprint

Inspecting the concept knowledge graph encoded by modern language models

Abstract: The field of natural language understanding has experienced exponential progress in the last few years, with impressive results in several tasks. This success has motivated researchers to study the underlying knowledge encoded by these models. Despite this, attempts to understand their semantic capabilities have not been successful, often leading to inconclusive or contradictory conclusions across different works. Via a probing classifier, we extract the underlying knowledge graph of nine of the most influen…
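For context, a probing classifier in this setting is typically a small, separately trained classifier that reads frozen representations from a pretrained language model and predicts a semantic property; its held-out accuracy is taken as evidence of what the representations already encode. Below is a minimal sketch of that idea, assuming frozen embeddings and binary relation labels are already available; all names, shapes, and data here are illustrative placeholders, not the paper's actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: `embeddings` stand in for frozen hidden states taken
# from a pretrained language model (one vector per example), and `labels`
# say whether a target semantic relation holds for that example.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 768))   # placeholder for real model states
labels = rng.integers(0, 2, size=1000)      # placeholder relation labels

# The probe is deliberately simple (logistic regression), so good accuracy
# reflects information already present in the representations rather than
# capacity added by the probe itself.
probe = LogisticRegression(max_iter=1000).fit(embeddings[:800], labels[:800])
print("held-out probe accuracy:", probe.score(embeddings[800:], labels[800:]))
```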

Cited by 1 publication (1 citation statement)
References 15 publications
“…• semantic WordNet [Fel98] relations between words [AMS21]. The LRH has also been used to control models. One application is erasing concepts from trained models [BCZ+16, VC20, RTGC22, RGC23, BSJR+23], by projecting the internal representation orthogonal to the direction in which the concept is represented.…”
Section: Extensions and Future Directions
Mentioning, confidence: 99%
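The concept-erasure approach mentioned in this citing passage amounts to removing the component of a hidden representation that lies along a learned concept direction. A minimal sketch of that orthogonal projection, assuming a single concept direction and a single hidden vector; the function and variable names are illustrative, not the cited papers' API.

```python
import numpy as np

def erase_concept(representation: np.ndarray, concept_direction: np.ndarray) -> np.ndarray:
    """Project `representation` onto the orthogonal complement of
    `concept_direction`, removing the component along that direction.

    Hypothetical helper illustrating the projection idea described above.
    """
    v = concept_direction / np.linalg.norm(concept_direction)  # unit concept direction
    return representation - np.dot(representation, v) * v      # x - (x . v) v

# Example: after erasure, the representation has zero component along v.
x = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 0.0])
print(erase_concept(x, v))  # -> [1. 0. 3.]
```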