Proceedings of the First Workshop on Natural Language Processing for Medical Conversations 2020
DOI: 10.18653/v1/2020.nlpmc-1.4

Generating Medical Reports from Patient-Doctor Conversations Using Sequence-to-Sequence Models

Abstract: We discuss automatic creation of medical reports from ASR-generated patient-doctor conversational transcripts using an end-to-end neural summarization approach. We explore both recurrent neural network (RNN) and Transformer-based sequence-to-sequence architectures for summarizing medical conversations. We have incorporated enhancements to these architectures, such as the pointer-generator network that facilitates copying parts of the conversations to the reports, and a hierarchical RNN encoder that makes RNN tr…
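The pointer-generator network mentioned in the abstract mixes a standard generation distribution over a fixed vocabulary with a copy distribution induced by the encoder attention, so source words (including out-of-vocabulary ones) can be copied verbatim into the report. A minimal NumPy sketch of that final mixing step follows; the function name, shapes, and toy values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def pointer_generator_dist(vocab_dist, attn_weights, src_token_ids,
                           p_gen, extended_vocab_size):
    """Mix the generator's vocabulary distribution with a copy
    distribution derived from the encoder attention weights.

    vocab_dist:          (V,) softmax over the fixed output vocabulary
    attn_weights:        (T,) attention over the T source tokens
    src_token_ids:       (T,) ids of the source tokens in the extended vocab
    p_gen:               scalar in (0, 1), probability of generating vs copying
    extended_vocab_size: V plus any out-of-vocabulary source tokens
    """
    final = np.zeros(extended_vocab_size)
    # "Generate" path: scaled vocabulary distribution.
    final[: len(vocab_dist)] = p_gen * vocab_dist
    # "Copy" path: scatter-add attention mass onto the source token ids
    # (np.add.at accumulates correctly when a token appears more than once).
    np.add.at(final, src_token_ids, (1.0 - p_gen) * attn_weights)
    return final

# Toy example: a 4-word vocabulary and a 3-token source utterance,
# where source id 4 is an out-of-vocabulary word that can only be copied.
vocab_dist = np.array([0.1, 0.4, 0.3, 0.2])
attn = np.array([0.5, 0.25, 0.25])
src_ids = np.array([1, 4, 2])
dist = pointer_generator_dist(vocab_dist, attn, src_ids,
                              p_gen=0.6, extended_vocab_size=5)
print(dist)  # mixture over the extended vocabulary; sums to 1.0
```

Because both inputs are valid probability distributions and the mixing weights are `p_gen` and `1 - p_gen`, the output is itself a valid distribution over the extended vocabulary.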

Cited by 35 publications (38 citation statements)
References 26 publications
“…After screening the titles and abstracts of these articles, we assessed 144 full-text articles for eligibility. We included 20 articles [19–38] for our analysis (Fig. 1 and Supplementary Table 2).…”
Section: Study Selection
confidence: 99%
“…Summaries predicted by our model are evaluated with ROUGE scores. Besides the baselines mentioned above, the base pre-trained model BART-base and transformerPGN [19] are also evaluated on the SAMsum dataset. Then we performed main experiments and an ablation experiment on our model.…”
Section: Experiments Setups
confidence: 99%
“…They experimented with different LSTM sequence-to-sequence methods, various attention mechanisms, pointer generator mechanisms, and topic information additions. Enarvi et al. (2020) performed similar work with sequence-to-sequence methods on a corpus of 800K orthopaedic ASR-generated transcripts and notes; Krishna et al. (2020) on a corpus of 6,862 visits of transcripts annotated with clinical note summary sentences. Unlike most previous works, our task generates clinical note sentences from labeled transcript snippets, which are at times overlapping and discontinuous.…”
Section: Clinic Visit Dialogue2note Sentence Alignment
confidence: 99%