Findings of the Association for Computational Linguistics: EMNLP 2022 2022
DOI: 10.18653/v1/2022.findings-emnlp.329
Thinking about GPT-3 In-Context Learning for Biomedical IE? Think Again

Cited by 31 publications (11 citation statements). References: 0 publications.
“…This signals the need to improve the span extraction, which currently limits the overall performance of our system with a loss of 10.3 points (0.82 − 0.717), as shown in Table 2 when comparing the performance achieved by PhenoID-Noisy-BIO and by PhenoID (with gold-standard spans given). We are exploring the benefit of large language models (LLMs) to replace our sequence labeler with a generative model, whereby we present an observation in a prompt and ask the model to list all HPO terms mentioned in the observation, as suggested in previous studies on different tasks [18].…”
Section: Results
confidence: 99%
“…A solution could be to retrieve the paragraphs documenting a GOC discussion with a generative model like ChatGPT (Ouyang et al., 2022). However, we left the evaluation of this approach as future work, since large language models need to be fine-tuned on clinical notes to achieve competitive performance (Jimenez Gutierrez et al., 2022), a process that is still an economic and technical challenge (Scao et al., 2022).…”
Section: Discussion
confidence: 99%
“…The objective was to ensure that the extracted candidate PHI text could be matched with the original input text represented by the "<SENTENCE>" placeholder. Following the prompt design procedure [49], we organized our prompt into four main parts:…”
Section: ChatGPT-based De-identification Framework
confidence: 99%
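A prompt built around the "<SENTENCE>" placeholder mentioned in the excerpt above might look like the following. The four-part split shown here (task description, PHI categories, output format, input sentence) and the specific category list are assumptions for illustration, not the cited paper's exact design.

```python
# Hedged sketch of a four-part de-identification prompt that keeps extracted
# PHI spans verbatim so they can be matched back to the original input text.

PHI_PROMPT = "\n\n".join([
    # Part 1: task description
    "You are a de-identification assistant. Find all protected health "
    "information (PHI) in the sentence below.",
    # Part 2: PHI categories (illustrative subset)
    "PHI categories: NAME, DATE, LOCATION, ID.",
    # Part 3: output format, constrained so spans match the input exactly
    "Return each PHI span exactly as it appears in the sentence, one per "
    "line, in the form CATEGORY: TEXT.",
    # Part 4: the input sentence, substituted for the placeholder
    "Sentence: <SENTENCE>",
])


def fill_sentence(prompt: str, sentence: str) -> str:
    """Replace the <SENTENCE> placeholder with the actual input text."""
    return prompt.replace("<SENTENCE>", sentence)
```

Constraining the model to echo PHI spans verbatim is what makes the post-hoc string match against the original sentence feasible.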