2017
DOI: 10.1049/cje.2017.09.020
Annotating the Literature with Disease Ontology

Cited by 7 publications (3 citation statements)
References 6 publications
“…Data were divided into two types: a) entities, including ID, name, symbol, definition, xref, synonyms, parentId, isParent, and other attributes. For entities in an inheritance (is-a) hierarchy, such as Disease Ontology [30], we updated the isParent attribute to indicate whether the current node contains child nodes; b) entity relations, which we stored as triples, that is, entity-relationship-entity. Part of the entity-relationship data carries an inferScore attribute, used to measure the credibility of the relationship.…”
Section: Data Fusion
confidence: 99%
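The excerpt above describes two storage forms: entity records with an isParent flag maintained from the is-a hierarchy, and entity-relationship-entity triples carrying an inferScore. A minimal sketch of that layout, assuming illustrative field names and example IDs (this is not the cited authors' code):

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """One entity record with the attributes listed in the excerpt."""
    id: str
    name: str
    symbol: str = ""
    definition: str = ""
    xref: list = field(default_factory=list)
    synonyms: list = field(default_factory=list)
    parentId: str = ""       # is-a parent in the ontology hierarchy
    isParent: bool = False   # updated once a child node is observed

def add_is_a(entities, child_id, parent_id):
    """Record an is-a edge and update the parent's isParent flag."""
    entities[child_id].parentId = parent_id
    entities[parent_id].isParent = True

# Triples: (head entity, relationship, tail entity, inferScore).
# inferScore rates the credibility of the relation; relations without
# one are stored with None.
triples = []

def add_triple(head, rel, tail, infer_score=None):
    triples.append((head, rel, tail, infer_score))
```

Example use, with made-up Disease Ontology IDs: after `add_is_a(entities, "DOID:162", "DOID:4")`, the record for `DOID:4` has `isParent == True`, and `add_triple("DOID:162", "associated_with", "GENE:TP53", infer_score=0.87)` stores one scored relation.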
“…This method usually requires human intervention to obtain the sentiment category of the input text. Commonly used traditional machine learning methods include naive Bayes, support vector machines, maximum entropy, random forests, and conditional random field models [18], [19].…”
Section: Introduction
confidence: 99%
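Of the traditional methods listed in the excerpt, naive Bayes is the simplest to sketch. Below is a toy, self-contained classifier with Laplace smoothing; the training sentences are invented for illustration, and a real system would use a library such as scikit-learn and far more labeled data:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (token list, label). Returns priors, per-label
    word counts, and the vocabulary."""
    priors, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for tokens, label in docs:
        priors[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return priors, word_counts, vocab

def classify_nb(tokens, priors, word_counts, vocab):
    """Pick the label maximizing log P(label) + sum log P(word|label),
    with add-one (Laplace) smoothing over the vocabulary."""
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokens:
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

With two tiny training documents, `train_nb([("good great fun".split(), "pos"), ("bad awful boring".split(), "neg")])`, the model classifies `"great fun"` as `pos`.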
“…In the text error detection subtask, we divide the characters of the text to be checked into two classes, 0 and 1: 0 means the character is correct, and 1 means the character is wrong. We then combine LSTM and CRF [30] techniques to train a Bi-LSTM model for the error detection task. In the text error correction subtask, we first construct a replacement character set for each wrong character, then build a Bi-LSTM model, feed it the replacement candidates for the wrong characters found by the detection subtask, and output the optimal corrected character.…”
Section: Introduction
confidence: 99%
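The detect-then-correct pipeline in the excerpt can be sketched with the neural models replaced by simple stand-ins: detection assigns each character a 0/1 label (here derived from a reference string, as one would when building training labels), and correction tries replacements from a confusion set and keeps the first candidate that yields a known word. The confusion set, word list, and example strings are all illustrative assumptions, not the paper's resources:

```python
def detection_labels(text, reference):
    """Gold 0/1 labels for training a detector: 1 = wrong character.
    Assumes text and reference are the same length (substitutions only)."""
    return [0 if a == b else 1 for a, b in zip(text, reference)]

def correct(text, labels, confusion_set, known_words):
    """For each position labeled 1, try replacement characters from the
    confusion set and keep the first one producing a known word. A real
    system would score candidates with a Bi-LSTM instead."""
    chars = list(text)
    for i, flag in enumerate(labels):
        if not flag:
            continue
        for cand in confusion_set.get(chars[i], []):
            trial = chars[:i] + [cand] + chars[i + 1:]
            if "".join(trial) in known_words:
                chars[i] = cand
                break
    return "".join(chars)
```

For example, with text `"cet"` against reference `"cat"`, the labels are `[0, 1, 0]`, and correcting with the confusion set `{"e": ["o", "a"]}` over the word list `{"cat", "dog"}` recovers `"cat"`.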