ezTag: tagging biomedical concepts via interactive learning
2018 · DOI: 10.1093/nar/gky428

Abstract: Recently, advanced text-mining techniques have been shown to speed up manual data curation by providing human annotators with automated pre-annotations generated by rules or machine learning models. Due to the limited training data available, however, current annotation systems primarily focus only on common concept types such as genes or diseases. To support annotating a wide variety of biological concepts with or without pre-existing training data, we developed ezTag, a web-based annotation tool that allows …

Cited by 31 publications (25 citation statements) · References 25 publications
“…For example, in Michalopoulos et al. [42], an F1-score threshold was determined by humans to accept the ML results; otherwise, the ML method should be refined until it reaches the desired F1-score. Also, the F-score was the evaluation measure used in Kwan et al. [90] to test the performance of the presented HILML approach to tagging biomedical texts.…”
Section: Results · Citation type: mentioning
confidence: 99%
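The acceptance protocol described in the statement above can be made concrete with a short sketch. This is a minimal illustration, not the cited authors' implementation: the threshold value, the label format and the evaluation data are all assumptions.

    from sklearn.metrics import f1_score

    # Sketch of human-set F1-threshold acceptance of ML annotations.
    # The cut-off below is hypothetical; in the cited work humans choose it.
    F1_THRESHOLD = 0.85

    def evaluate_annotations(y_true, y_pred, threshold=F1_THRESHOLD):
        """Accept the ML output only if its F1-score meets the threshold."""
        score = f1_score(y_true, y_pred, average="micro")
        return score >= threshold, score

    accepted, score = evaluate_annotations(
        ["B-Gene", "O", "B-Disease"], ["B-Gene", "O", "O"]
    )
    if not accepted:
        print(f"F1 {score:.2f} below threshold; refine the model and re-evaluate")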
“…For example, in Tkahama et al. [61], the number of experiments for collaboration between humans and the ML method was specified by the researchers. Also, in Kwon et al. [90], the role of the human in refining the ML results was tested over five iterations, and this number of iterations was determined by humans to obtain acceptable results.…”
Section: Results · Citation type: mentioning
confidence: 99%
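The fixed-iteration refinement described above can likewise be sketched as a loop of annotate, correct and retrain rounds. The model interface and the human_review callable are hypothetical placeholders, not APIs from the cited works.

    # Sketch of fixed-iteration human-in-the-loop refinement; the cited work
    # runs five rounds. `model` is any estimator with fit/predict, and
    # `human_review` is a hypothetical callable standing in for the curator.
    MAX_ITERATIONS = 5

    def refine_with_human(model, texts, human_review, iterations=MAX_ITERATIONS):
        for _ in range(iterations):
            pre_annotations = model.predict(texts)  # machine pre-annotations
            labels = human_review(pre_annotations)  # curator corrects the output
            model.fit(texts, labels)                # retrain on corrected labels
        return model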
“…ezTag [82] (http://eztag.bioqrator.org): a tool that allows curators to perform manual annotation and that generates training data through a human-in-the-loop process. The tool is available online but can also be installed locally, since the source code is available (http://github.com/ncbi-nlp/eztag).…”
Section: Results · Citation type: mentioning
confidence: 99%
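Since a human-in-the-loop tool of this kind yields curator-corrected entity spans as training data, a typical downstream step is converting those spans into token-level BIO tags. The sketch below shows one plausible conversion; the input formats are assumptions, not ezTag's actual export code.

    # Hypothetical conversion of corrected character spans into BIO tags.
    # Assumed formats: tokens as (start, end, text), spans as (start, end, label).
    def spans_to_bio(tokens, spans):
        tags = ["O"] * len(tokens)
        for s_start, s_end, label in spans:
            inside = False
            for i, (t_start, t_end, _) in enumerate(tokens):
                if s_start <= t_start and t_end <= s_end:
                    tags[i] = ("I-" if inside else "B-") + label
                    inside = True
                else:
                    inside = False
        return tags

    tokens = [(0, 5, "BRCA1"), (6, 8, "is"), (9, 16, "mutated")]
    print(spans_to_bio(tokens, [(0, 5, "Gene")]))  # ['B-Gene', 'O', 'O']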
“…Currently, PubTerm relies on PubTator annotations. However, recently proposed tools such as ezTag (23) could, in principle and with a properly trained model, annotate all abstracts and make them available to PubTerm. This could be an interesting future direction.…”
Section: Discussion · Citation type: mentioning
confidence: 99%
“…In the context of biomedical text analysis, a review of 24 alternative tools for browsing PubMed (3, 5) also highlights that the searching, retrieval and analysis of PubMed records is an important issue in biomedicine and research. Among the tools reviewed, there are proposals for network visualization [HubMed (6), RefMed (7), PubNet (8), KNALIJ (9)], searching in different ways or modalities [askMEDLINE (10), Quertle (9), iPubMed (11), PMinstant (3), Allie (12), BabelMeSH (13), Biblimed (14), Biotext (15), GoPubMed (16), PICO (13)], expert finding [Anne O’Tate (17), eTBLAST (18), GoPubMed (16), PubFocus (19), MEDSUM (20)], identifying similar publications [eTBLAST (18), Arrowsmith (21), Dejavu (22)], and even creating models to annotate abstracts for novel concepts [ezTag (23)].…”
Section: Introduction · Citation type: mentioning
confidence: 99%