2024
DOI: 10.1609/aaai.v38i17.29818

PMET: Precise Model Editing in a Transformer

Xiaopeng Li, Shasha Li, Shezheng Song, et al.

Abstract: Model editing techniques modify a minor proportion of the knowledge in Large Language Models (LLMs) at relatively low cost and have demonstrated notable success. Existing methods assume that Transformer Layer (TL) hidden states are the values of the key-value memories of the Feed-Forward Network (FFN). They usually optimize the TL hidden states to memorize target knowledge and use them to update the FFN weights in LLMs. However, the information flow of TL hidden states comes from three parts: Multi-Head Self-Attention (MHSA), the FFN, and residual connections. …
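
The key-value-memory view the abstract describes can be made concrete with a small sketch. The PyTorch snippet below is a minimal illustration, not PMET's actual procedure: the layer sizes, the stand-in readout head, and the rank-one weight update are all assumptions introduced for the example. It shows the two-step recipe the abstract attributes to existing methods: optimize a hidden-state value to encode the target knowledge, then write that value into the FFN weights.

```python
import torch
import torch.nn.functional as F

# Toy illustration of the FFN-as-key-value-memory view used by locate-and-edit
# methods. Sizes, the stand-in readout head, and the rank-one update are
# illustrative assumptions, not PMET's exact procedure.
torch.manual_seed(0)
d_model, d_ffn, vocab = 16, 64, 100
W_in = torch.randn(d_ffn, d_model) / d_model ** 0.5     # keys:   k = relu(W_in @ h)
W_out = torch.randn(d_model, d_ffn) / d_ffn ** 0.5      # values: v = W_out @ k
readout = torch.randn(vocab, d_model) / d_model ** 0.5  # stand-in LM head

h_subject = torch.randn(d_model)   # hidden state at the subject's last token
k = torch.relu(W_in @ h_subject)   # the "key" the edit targets
new_token = 42                     # token id encoding the new fact

# Step 1: optimize a target value v* so the readout predicts the new token
# when v* is added back into the residual stream.
v_star = (W_out @ k).detach().clone().requires_grad_(True)
opt = torch.optim.Adam([v_star], lr=5e-2)
for _ in range(200):
    opt.zero_grad()
    logits = readout @ (h_subject + v_star)
    loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([new_token]))
    loss.backward()
    opt.step()

# Step 2: rank-one update of W_out so that key k now retrieves v*.
with torch.no_grad():
    residual = v_star - W_out @ k
    W_out += torch.outer(residual, k) / (k @ k)

edited = readout @ (h_subject + W_out @ torch.relu(W_in @ h_subject))
print(int(edited.argmax()) == new_token)  # True once v* has converged
```

The sketch treats the optimized value as belonging to the FFN alone; the abstract's point is that a real TL hidden state also mixes in MHSA and residual-connection contributions, which is the complication the paper takes up.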


Cited by 1 publication
References: 20 publications (36 reference statements)