2022
DOI: 10.1109/tcbb.2020.3020016
Relation Extraction From Biomedical and Clinical Text: Unified Multitask Learning Framework

Abstract: To reduce the ever-growing amount of time invested in biomedical literature search, numerous approaches for automated knowledge extraction have been proposed. Relation extraction is one such task, where semantic relations between entities are identified from free text. In the biomedical domain, the extraction of regulatory pathways, metabolic processes, adverse drug reactions, or disease models necessitates knowledge from the individual relations, for example, physical or regulatory interactions between…

Cited by 24 publications (20 citation statements)
References: 68 publications
“…Our proposed approach to identify the depressive symptoms is assisted by Bidirectional Encoder Representations from Transformers (BERT) and multi-task learning (Yadav et al., 2020a) with soft-parameter sharing. This section describes the proposed methodology for identifying the depression symptoms from user tweets.…”
Section: Methods (mentioning)
Confidence: 99%
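The soft-parameter sharing referenced in this citation statement can be illustrated with a minimal PyTorch sketch: each task keeps its own encoder, and an L2 penalty pulls the two sets of encoder weights toward each other. The small linear encoders, dimensions, and penalty weight below are illustrative assumptions standing in for the BERT encoders used in the cited systems, not their actual implementation.

```python
# Minimal sketch of multi-task learning with soft-parameter sharing.
# Small linear encoders stand in for BERT so the example stays self-contained.
import torch
import torch.nn as nn

class TaskModel(nn.Module):
    def __init__(self, in_dim=768, hidden=256, n_labels=2):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden)      # stand-in for a BERT encoder
        self.classifier = nn.Linear(hidden, n_labels)

    def forward(self, x):
        return self.classifier(torch.relu(self.encoder(x)))

def soft_sharing_penalty(model_a, model_b):
    """L2 distance between corresponding encoder parameters of the two task models."""
    return sum((pa - pb).pow(2).sum()
               for pa, pb in zip(model_a.encoder.parameters(),
                                 model_b.encoder.parameters()))

task_a, task_b = TaskModel(), TaskModel()
opt = torch.optim.Adam(list(task_a.parameters()) + list(task_b.parameters()), lr=1e-4)
ce = nn.CrossEntropyLoss()

# one toy training step on random features/labels (placeholder data)
xa, ya = torch.randn(8, 768), torch.randint(0, 2, (8,))
xb, yb = torch.randn(8, 768), torch.randint(0, 2, (8,))
loss = ce(task_a(xa), ya) + ce(task_b(xb), yb) + 0.01 * soft_sharing_penalty(task_a, task_b)
opt.zero_grad(); loss.backward(); opt.step()
```

Unlike hard parameter sharing, each task here retains its own encoder; the penalty term only encourages, rather than enforces, similarity between the task-specific parameters.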
“…The second configuration is designed to deal with data imbalance (cf. Table 1), following recent studies showing that jointly learning common characteristics shared across multiple tasks can have a strong impact on RE performance [34, 29]. To this end, we jointly train two classifiers using multitask objectives.…”
Section: Data and Annotation (mentioning)
Confidence: 99%
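A hedged sketch of what jointly training two classifiers with multitask objectives could look like is given below: a shared encoder feeds a relation head and an auxiliary head, and the two cross-entropy losses are summed. The class weights, loss weighting, and dimensions are placeholder assumptions, not the cited paper's actual configuration.

```python
# Minimal sketch of joint training with a shared encoder (hard parameter sharing).
# Class weights on the relation loss illustrate one way to counter label imbalance.
import torch
import torch.nn as nn

class JointRE(nn.Module):
    def __init__(self, in_dim=768, hidden=256, n_rel=5, n_aux=2):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.rel_head = nn.Linear(hidden, n_rel)   # main relation classifier
        self.aux_head = nn.Linear(hidden, n_aux)   # auxiliary classifier

    def forward(self, x):
        h = self.shared(x)
        return self.rel_head(h), self.aux_head(h)

model = JointRE()
# weight rarer relation classes more heavily (values are placeholders)
rel_loss = nn.CrossEntropyLoss(weight=torch.tensor([0.2, 1.0, 1.0, 2.0, 2.0]))
aux_loss = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# one toy training step on random features/labels (placeholder data)
x = torch.randn(8, 768)
y_rel, y_aux = torch.randint(0, 5, (8,)), torch.randint(0, 2, (8,))
rel_logits, aux_logits = model(x)
loss = rel_loss(rel_logits, y_rel) + 0.5 * aux_loss(aux_logits, y_aux)
opt.zero_grad(); loss.backward(); opt.step()
```

The shared encoder lets the under-represented classes benefit from gradients of the auxiliary task, which is the intuition behind using multitask objectives against data imbalance.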
“…The majority of the systems developed for the TE task adopt the multi-task learning (MTL) framework (Zhu et al., 2019; Bhaskar et al., 2019; Kumar et al., 2019; Xu et al., 2019), ensemble methods (Sharma and Roychowdhury, 2019), and transfer learning (Bhaskar et al., 2019) to achieve better accuracy. Xu et al. (2019) employed the MTL approach (Yadav et al., 2018; Yadav et al., 2019; Yadav et al., 2020) in the TE task to learn from the auxiliary tasks of question answering (QA) and NLI. The best performing system at the MedQA 2019-RQE shared task (Zhu et al., 2019) utilized the MTL approach to learn from an intermediate NLI task.…”
Section: Entailment (mentioning)
Confidence: 99%