2023
DOI: 10.1609/aaai.v37i12.26649

Med-EASi: Finely Annotated Dataset and Models for Controllable Simplification of Medical Texts

Abstract: Automatic medical text simplification can assist providers with patient-friendly communication and make medical texts more accessible, thereby improving health literacy. But curating a quality corpus for this task requires the supervision of medical experts. In this work, we present Med-EASi (Medical dataset for Elaborative and Abstractive Simplification), a uniquely crowdsourced and finely annotated dataset for supervised simplification of short medical texts. Its expert-layman-AI collaborative annotations fa…

Cited by 4 publications (3 citation statements)
References 47 publications

“…For these reasons, we evaluated our outputs and those of the models presented by Cao et al (2020) using several other metrics, referred to both input and target sentences. Computing these metrics for the gold source and target texts allowed us to highlight the degree of content changes in the test set, as suggested in previous works (Basu, Vasu, Yasunaga, Kim, & Yang, 2021;Cao et al, 2020;Vásquez-Rodríguez, Shardlow, Przybyła, & Ananiadou, 2021), and confirmed through our human evaluations. This issue may stem from the loss of contextual information when working at a sentence level.…”
Section: Related Work (supporting)
confidence: 75%
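
The quoted evaluation scores system outputs against both the input and the target sentence, and also scores the gold source against the gold target to show how much content the reference simplifications themselves change. Below is a minimal sketch of that setup, using a plain token-overlap ratio as a stand-in for the metrics cited; the helper names and example sentences are illustrative, not taken from the paper.

```python
from difflib import SequenceMatcher

def token_similarity(a: str, b: str) -> float:
    """Ratio of matching tokens between two sentences (0 = disjoint, 1 = identical)."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

def reference_aware_scores(sources, outputs, targets):
    """Score each system output against both its source (input) and its gold target,
    and score source vs. target to show how much the gold simplification itself changed."""
    rows = []
    for src, out, tgt in zip(sources, outputs, targets):
        rows.append({
            "output_vs_source": token_similarity(out, src),  # high => conservative copying
            "output_vs_target": token_similarity(out, tgt),  # high => close to the gold text
            "source_vs_target": token_similarity(src, tgt),  # gold degree of content change
        })
    return rows

if __name__ == "__main__":
    sources = ["The patient exhibited acute myocardial infarction."]
    outputs = ["The patient had a heart attack."]
    targets = ["The patient had a sudden heart attack."]
    for row in reference_aware_scores(sources, outputs, targets):
        print(row)
```
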
“…New LLMs since 2018 (e.g., chatGPT, GPT 4, T5, BART) can generate a wide variety of text analyses and dialogues with an impressive level of fluency out‐of‐the‐box. Through fine‐tuning, LLMs become specialized at particular tasks, such as analyzing social determinants of health from clinical notes, 15 answering disease‐specific questions based on medical literature, 16,17 simplifying medical concepts and texts for patients, 18 and more 14 . Using end‐user‐facing LLM interfaces (e.g., Open AI Playground) with and without AI technical training can improve LLM outputs by prepending prompts—textual instructions and examples of their desired interactions—to LLM inputs.…”
Section: The Potential Of Knowledge Organizing Technologies For Trans... (mentioning)
confidence: 99%
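
As a concrete illustration of the prompt-prepending idea described in the statement above, the following sketch assembles an instruction and a few demonstration pairs in front of a new input before it is sent to an LLM. The instruction wording and the example pairs are hypothetical, and no particular LLM interface is assumed.

```python
# Build a few-shot prompt by prepending an instruction and worked examples to the input.
INSTRUCTION = "Rewrite the medical text below in plain language a patient can understand."

# Hypothetical demonstration pairs; in practice these could come from a corpus such as Med-EASi.
EXAMPLES = [
    ("Hypertension increases the risk of cerebrovascular accidents.",
     "High blood pressure raises the chance of a stroke."),
    ("The lesion was benign on histopathological examination.",
     "Lab tests showed the growth was not cancer."),
]

def build_prompt(new_input: str) -> str:
    """Concatenate instruction, demonstrations, and the new input into one prompt string."""
    parts = [INSTRUCTION, ""]
    for complex_text, simple_text in EXAMPLES:
        parts += [f"Medical text: {complex_text}", f"Plain language: {simple_text}", ""]
    parts += [f"Medical text: {new_input}", "Plain language:"]
    return "\n".join(parts)

if __name__ == "__main__":
    prompt = build_prompt("The patient presented with dyspnea and peripheral edema.")
    print(prompt)  # pass this string to the LLM interface of your choice
```
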
“…Similarly, building a system that explains LBP‐related concepts to out‐of‐domain scientists or stakeholders can take little to no training data. One might only need to adapt such explanation systems from other medical domains with sets of LBP examples 18 …”
Section: The Potential Of Knowledge Organizing Technologies For Trans... (mentioning)
confidence: 99%
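
One way to read the adaptation step described in the statement above is as continued fine-tuning of an existing simplification model on a small set of in-domain pairs. The sketch below assumes the Hugging Face transformers and PyTorch libraries; the base model name, the LBP example pairs, and the hyperparameters are placeholders rather than details from the cited works.

```python
# Minimal sketch: adapt a general-purpose simplification model to a new domain
# (here, low back pain) with a handful of in-domain pairs via continued fine-tuning.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

MODEL_NAME = "t5-small"  # placeholder; in practice, a model already trained for medical simplification
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

# Hypothetical LBP pairs; a real adaptation set would be curated with domain experts.
pairs = [
    ("Lumbar radiculopathy may cause paresthesia radiating to the lower limb.",
     "A pinched nerve in the lower back can cause tingling that spreads down the leg."),
    ("Conservative management includes NSAIDs and physical therapy.",
     "Treatment without surgery includes anti-inflammatory drugs and physical therapy."),
]

inputs = tokenizer(["simplify: " + s for s, _ in pairs], padding=True, return_tensors="pt")
labels = tokenizer([t for _, t in pairs], padding=True, return_tensors="pt").input_ids
labels[labels == tokenizer.pad_token_id] = -100  # ignore padding positions in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):  # a few passes over the tiny adaptation set
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```
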