2011 · DOI: 10.1016/j.ins.2011.01.014
A trigram hidden Markov model for metadata extraction from heterogeneous references

Cited by 28 publications (22 citation statements) · References 19 publications
“…Yin et al [24] employ a modification of a traditional HMM called a bigram HMM, which considers words' bigram sequential relation and position information. Finally, Ojokoh et al [25] explore a trigram version of HMM, reporting overall accuracy, precision, recall and F1 measure of over 95%.…”
Section: Figure 4: An Example of a Reference String Represented in XML (mentioning)
confidence: 99%
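The position-aware emission scheme attributed to Yin et al above can be made concrete with a short sketch: a field's first word is scored from a "beginning" distribution and later words from an "inner" one. The tables, states, and probability values below are illustrative assumptions, not figures from either paper.

```python
# Toy emission tables for a bigram HMM with position information:
# a word opening a metadata field uses the "beginning" distribution,
# subsequent words in the same field use the "inner" distribution.
begin_emit = {("author", "j."): 0.08, ("title", "a"): 0.05}
inner_emit = {("author", "smith"): 0.04, ("title", "model"): 0.06}

def emission(state, word, is_field_start, eps=1e-9):
    """Score a word with the beginning table if it opens a field,
    otherwise with the inner table; unseen pairs get a small floor."""
    table = begin_emit if is_field_start else inner_emit
    return table.get((state, word.lower()), eps)

print(emission("author", "J.", True))      # 0.08: first token of the author field
print(emission("author", "Smith", False))  # 0.04: inner token of the author field
```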
“…Regarding informatics methods and text-mining procedures, several approaches may be considered; based on the literature, one of the most relevant solutions is the application of the hidden Markov model (Hetzner, 2008; Ojokoh, Zhang, & Tang, 2011). Alongside this method, practitioners also apply other approaches built on artificial intelligence, usually realized with various machine learning algorithms (Tkaczyk, Bolikowski, Czeczko, & Rusek, 2012; Tkaczyk, Szostek, Fedoryszak, Dendek, & Bolikowski, 2015).…”
Section: Theoretical Background (unclassified)
“…Two main approaches to reference parsing are regular expressions and knowledge-based approaches ([15], [16]) and machine-learning techniques ([17], [18], [19]).…”
Section: Previous Work (mentioning)
confidence: 99%
“…Yin et al [18] parse references with the aid of a bigram HMM, in which the emission probability is composed of a "beginning" probability (for the first word of a field) and an "inner" probability (for subsequent words). Ojokoh et al [19] propose a full second-order HMM with a modified Viterbi algorithm and a new smoothing technique for transition probabilities.…”
Section: Previous Work (mentioning)
confidence: 99%
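The second-order HMM described above conditions each transition on the two previous states, which turns Viterbi decoding into a search over state pairs. The sketch below illustrates that idea under a simple linear-interpolation smoothing assumption; the paper's modified Viterbi algorithm and its new smoothing technique are not reproduced here, and the state names and probability tables are hypothetical.

```python
import math

STATES = ["author", "title", "journal", "year"]  # hypothetical label set

def viterbi_trigram(obs, pi, trans2, trans3, emit, lam=0.7):
    """Decode a token list `obs` with a second-order (trigram) HMM.
    pi[s]: initial prob; trans2[(a, b)]: first-order transition prob;
    trans3[(a, b, c)]: second-order transition prob; emit[(s, w)]: emission prob."""
    assert len(obs) >= 2, "sketch assumes at least two tokens"

    def smoothed(a, b, c):
        # Interpolate trigram with bigram transition estimates
        # (an assumed smoothing scheme, not the paper's technique).
        return lam * trans3.get((a, b, c), 0.0) + (1 - lam) * trans2.get((b, c), 0.0)

    def logp(p):
        return math.log(p) if p > 0 else float("-inf")

    # delta maps a state pair (state at t-1, state at t) to its best log-score.
    delta = {}
    for a in STATES:
        for b in STATES:
            p = (pi.get(a, 0.0) * trans2.get((a, b), 0.0)
                 * emit.get((a, obs[0]), 1e-9) * emit.get((b, obs[1]), 1e-9))
            delta[(a, b)] = logp(p)
    back = []
    for t in range(2, len(obs)):
        new, ptr = {}, {}
        for b in STATES:
            for c in STATES:
                best, arg = float("-inf"), STATES[0]
                for a in STATES:  # maximize over the state two steps back
                    score = delta[(a, b)] + logp(
                        smoothed(a, b, c) * emit.get((c, obs[t]), 1e-9))
                    if score > best:
                        best, arg = score, a
                new[(b, c)], ptr[(b, c)] = best, arg
        delta = new
        back.append(ptr)
    # Pick the best final state pair, then follow back-pointers.
    pair = max(delta, key=delta.get)
    path = [pair[0], pair[1]]
    for ptr in reversed(back):
        path.insert(0, ptr[(path[0], path[1])])
    return path
```

The trigram dependency is what raises the cost over a first-order model: every step scores a full state triple, so decoding runs in O(T·|S|³) time instead of O(T·|S|²).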