Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016
DOI: 10.18653/v1/s16-1201
UTHealth at SemEval-2016 Task 12: an End-to-End System for Temporal Information Extraction from Clinical Notes

Abstract: The 2016 Clinical TempEval challenge addresses temporal information extraction from clinical notes. The challenge is composed of six sub-tasks, each of which is to identify: (1) event mention spans, (2) time expression spans, (3) event attributes, (4) time attributes, (5) events' temporal relations to the document creation times (DocTimeRel), and (6) narrative container relations among events and times. In this article, we present an end-to-end system that addresses all six sub-tasks. Our system achieved the b…
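To make the six-sub-task decomposition in the abstract concrete, the following is a minimal, purely illustrative Python sketch of an end-to-end pipeline over those sub-tasks. Every function body is a placeholder (toy regexes and constant labels) invented for this example; it does not reflect the UTHealth models.

```python
import re

def process_note(note_text, doc_creation_time):
    # (1)/(2) span identification: toy regexes stand in for trained taggers
    events = [m.span() for m in re.finditer(r"\b(fever|pain|surgery)\b", note_text)]
    times = [m.span() for m in re.finditer(r"\b\d{4}-\d{2}-\d{2}\b", note_text)]
    # (3)/(4) attribute classification: constant placeholder labels
    event_attrs = [{"polarity": "POS"} for _ in events]
    time_attrs = [{"class": "DATE"} for _ in times]
    # (5) DocTimeRel: relation of each event to the document creation time
    doc_time_rels = ["OVERLAP" for _ in events]
    # (6) narrative container relations among events and times
    containers = [(t, e) for t in times for e in events]
    return events, times, event_attrs, time_attrs, doc_time_rels, containers

print(process_note("Patient reported fever on 2016-03-01.", "2016-03-02"))
```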

Cited by 39 publications (55 citation statements). References 15 publications (14 reference statements).
“…All three of our models perform better in terms of F1-measure than Lee et al (2016) and Lin et al (2016). Our two best models also outperform Leeuwenberg and Moens (2017), who report an F-measure of .608 using a structured perceptron.…”
Section: Experiments and Discussion
confidence: 71%
“…Results of the experiments are presented in Table 4. For comparison, we report the baseline provided as reference during the Clinical TempEval shared tasks:

                     P      R      F1
baseline (closest)   0.459  0.154  0.231
Lee et al (2016)     0.588  0.559  0.573
Lin et al (2016)     0

Table 4: Experimentation results. We report precision (P), recall (R) and F1-measure (F1) for each configuration of our model, for the best system of the Clinical TempEval 2016 challenge (Lee et al, 2016) and for the best result obtained so far on the corpus (Lin et al, 2016).…”
Section: Experiments and Discussion
confidence: 99%
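As a quick arithmetic check of the figures quoted above: F1 is the harmonic mean of precision and recall, F1 = 2PR / (P + R), and the two rows fully reported in the excerpt are consistent with that definition.

```python
# Recompute F1 = 2PR / (P + R) for the two fully reported rows in the excerpt.
def f1(p, r):
    return 2 * p * r / (p + r)

assert round(f1(0.459, 0.154), 3) == 0.231  # baseline (closest)
assert round(f1(0.588, 0.559), 3) == 0.573  # Lee et al (2016)
```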
“…Such tools include MedLEE [19], MPLUS [20], MetaMap [21], KMCI [22], SPIN [23], HITEX [24], MCVS [25], ONYX [26], MedEx [27], cTAKES [28], pyConTextNLP [29], Topaz [30], TextHunter [31], NOBLE [32], and CLAMP [33]. ML downstream of the methods above requires featurization (concept extraction into columns and subsequent feature selection) in order to characterize text narratives in a machine-processable way.…”
Section: Background and Significance
confidence: 99%
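The "featurization" described in this excerpt (extracted concepts turned into columns) is commonly implemented as one-hot or count encoding. The sketch below uses scikit-learn's DictVectorizer with invented concept identifiers; it is not tied to any of the tools listed above.

```python
from sklearn.feature_extraction import DictVectorizer

# Each note is represented by counts of extracted concepts (identifiers are
# invented for this example); DictVectorizer produces one column per concept.
notes_as_concepts = [
    {"concept:fever": 1, "concept:pain": 2},  # note 1
    {"concept:pain": 1},                      # note 2
]
vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform(notes_as_concepts)
print(vectorizer.get_feature_names_out())  # column names, one per concept
print(X)                                   # machine-processable matrix
```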
“…In Clinical TempEval 2016, the top-performing system employed structural support vector machines (SVM) for entity span extraction and linear support vector machines for attribute and relation extraction (Lee et al, 2016). For the previous iteration, Velupillai et al (2015) developed a pipeline based on ClearTK and SVM with lexical and rule-based features to extract TIMEX3 and EVENT mentions.…”
Section: Introduction
confidence: 99%
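To make the attribute/relation classification step concrete, here is a hedged sketch of a linear SVM over simple lexical context features. The features, labels, and training examples are invented for illustration and do not reproduce the system of Lee et al (2016).

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: an event token plus its neighbouring words, mapped to a
# polarity attribute. Real systems use far richer feature sets.
train_feats = [
    {"token": "pain", "prev": "denies", "next": "."},
    {"token": "fever", "prev": "reported", "next": "since"},
    {"token": "nausea", "prev": "no", "next": ","},
    {"token": "cough", "prev": "has", "next": "and"},
]
train_labels = ["NEG", "POS", "NEG", "POS"]

clf = make_pipeline(DictVectorizer(), LinearSVC())
clf.fit(train_feats, train_labels)
print(clf.predict([{"token": "rash", "prev": "denies", "next": "."}]))
```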