Expression pattern of histone lysine‑specific demethylase 6B in gastric cancer
2021 | DOI: 10.3892/ol.2021.12752

Cited by 5 publications (5 citation statements) | References 44 publications

“…The differences between PMET and existing methods are illustrated in Figure 1. Our experiments demonstrate that PMET exhibits state-of-the-art comprehensive performance in editing GPT-J (6B) (Wang and Komatsuzaki 2021) and GPT-NeoX (20B) (Black et al. 2022) on the zsRE and COUNTERFACT datasets. Specifically, on the COUNTERFACT dataset PMET shows a 3.3% average reliability improvement over the prior state-of-the-art method, while on the zsRE dataset it achieves a 0.4% average improvement.…”
Section: Introduction
confidence: 86%
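The excerpt above reports "reliability" gains for model editing. As background, here is a minimal sketch of how edit reliability is commonly scored in this literature: the fraction of edit requests for which the post-edit model produces the new target. The function name, data shapes, and stub model are illustrative assumptions, not PMET's actual evaluation harness:

```python
from typing import Callable, List, Tuple

def edit_reliability(generate: Callable[[str], str],
                     edits: List[Tuple[str, str]]) -> float:
    """Score an edited LM: fraction of (prompt, new_target) pairs for
    which the model's completion starts with the requested target.
    `generate` wraps the post-edit model, e.g. a greedy-decoding call."""
    hits = sum(generate(prompt).strip().startswith(target)
               for prompt, target in edits)
    return hits / len(edits)

# Illustrative usage with a stub standing in for a real edited model.
stub = lambda prompt: "Paris"
print(edit_reliability(stub, [("The capital of France is", "Paris")]))  # 1.0
```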
“…Here, $W_E$ and $\gamma$ represent the embedding matrix and the layernorm, respectively, and $a^L_z$ and $m^L_z$ are the TC hidden states of the MHSA and FFN of the $L$-th layer, respectively. Note that the MHSA and FFN in (1) are parallel (Wang and Komatsuzaki 2021; Black et al. 2022). The general forms of the MHSA and FFN at the $l$-th layer and the $j$-th token $x^l_j$ are given by:…”
Section: Methodology Preliminaries
confidence: 99%
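This excerpt refers to the parallel residual layout used by GPT-J and GPT-NeoX, where the MHSA and FFN both read the same layernormed input and their outputs ($a^l$ and $m^l$) are added to the residual stream jointly rather than sequentially. A minimal PyTorch sketch of that layout; the dimensions and module choices are illustrative, and the real models' rotary position embeddings are omitted:

```python
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    """GPT-J/GPT-NeoX-style block: one shared LayerNorm feeds both the
    MHSA and the FFN, and their outputs are summed into the residual."""
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ln(x)                                 # shared pre-norm input
        a, _ = self.attn(h, h, h, need_weights=False)  # a^l: MHSA output
        m = self.ffn(h)                                # m^l: FFN output
        return x + a + m                               # parallel residual sum

x = torch.randn(1, 4, 64)                  # (batch, tokens, d_model)
print(ParallelBlock(64, 8, 256)(x).shape)  # torch.Size([1, 4, 64])
```

The design point the excerpt relies on is visible in `forward`: because `a` and `m` are computed from the same hidden state and added together, their contributions to a layer's output can be attributed (and edited) separately.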
“…Language Model to be Edited. Following the setup of previous work (Meng et al. 2022a,b; Zhong et al. 2023), we use GPT-J (6B) (Wang and Komatsuzaki 2021) as the base LLM to be edited with the above methods.…”
Section: Methods
confidence: 99%
“…In Section 3, we mentioned that different LMs possess different capacities for generating FL expressions, given their pretraining data/objectives. We employ two LM classes for experimentation: Vicuna 13B (an instruction-tuned version of the Llama 13B model (Touvron et al. 2023)) to generate Python, and GPT-J 6B (Wang and Komatsuzaki 2021) to generate pseudocode as intermediate FL expressions. We employ SymPy as the symbolic solver on top of Vicuna.…”
Section: Methods
confidence: 99%
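This excerpt pairs an LM that emits formal-language (FL) expressions with SymPy as the symbolic solver. A minimal sketch of that final step, in which a string produced by the LM is parsed and solved symbolically; the example equation and function name are illustrative, not taken from the cited paper:

```python
import sympy as sp

def solve_fl(expr_str: str, var: str = "x"):
    """Parse an LM-generated SymPy expression such as 'Eq(2*x + 3, 7)'
    and solve it for `var`; sympify raises SympifyError on garbled FL."""
    return sp.solve(sp.sympify(expr_str), sp.Symbol(var))

print(solve_fl("Eq(2*x + 3, 7)"))  # [2]
```

Routing the LM's output through `sympify` also gives a natural validity check: malformed generations fail to parse rather than silently yielding a wrong answer.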