2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6854368

Recurrent conditional random field for language understanding

Cited by 109 publications (78 citation statements); references 23 publications.
“…Peng et al (2009) add a non-linear neural network layer to a linear-chain CRF, and Do and Artières (2010) apply a similar approach to more general Markov network structures. Yao et al (2014) and Zheng et al (2015) introduce recurrence into the model and finally combine CRFs and LSTMs. These neural CRF models are limited to sequence labeling tasks where exact inference is possible, while our model works well when exact inference is intractable.…”
Section: Related Neural CRF Work
confidence: 99%
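For readers skimming these excerpts, the neural-CRF combination they describe amounts to feeding RNN hidden states into the emission potentials of a standard linear-chain CRF. The formulation below is a generic sketch in my own notation (f, A, and h_t are illustrative symbols, not taken from any of the cited papers):

```latex
% Linear-chain CRF conditional likelihood with RNN-derived emission scores.
% h_t: RNN hidden state at position t; f(y, h_t): emission score for label y;
% A: label-transition matrix. All symbols are illustrative.
p(y_{1:T} \mid x_{1:T}) =
  \frac{\exp\Big(\sum_{t=1}^{T} \big[f(y_t, h_t) + A_{y_{t-1}, y_t}\big]\Big)}
       {\sum_{y'_{1:T}} \exp\Big(\sum_{t=1}^{T} \big[f(y'_t, h_t) + A_{y'_{t-1}, y'_t}\big]\Big)}
```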
“…The other direction is to optimize a sequence-level discrimination criterion. For example, recurrent conditional random fields (R-CRFs) (Yao et al, 2014b) are trained to optimize the sequence-level conditional likelihood rather than to minimize the frame-level cross-entropy used in conventional RNN-based sequence labeling (Pérez-Ortiz et al, 2001; Yao et al, 2013; Mikolov et al, 2010; Shi et al, 2015; Mesnil et al, 2015).…”
Section: Introduction
confidence: 99%
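To make the contrast in this excerpt concrete, here is a minimal numerical sketch (not the authors' code; the function names and array shapes are my own) of the two objectives: the sequence-level CRF conditional log-likelihood computed with the forward algorithm, and the frame-level cross-entropy it replaces.

```python
# Illustrative sketch only: sequence-level CRF log-likelihood versus
# frame-level cross-entropy over per-position emission scores that an
# RNN would produce. Not taken from any of the cited implementations.
import numpy as np

def crf_log_likelihood(emissions, transitions, labels):
    """emissions: (T, K) frame scores; transitions: (K, K); labels: (T,) ints."""
    T, K = emissions.shape
    # Score of the gold label sequence (emissions plus label transitions).
    gold = emissions[0, labels[0]]
    for t in range(1, T):
        gold += transitions[labels[t - 1], labels[t]] + emissions[t, labels[t]]
    # Log partition function via the forward recursion in log space.
    alpha = emissions[0]
    for t in range(1, T):
        scores = alpha[:, None] + transitions + emissions[t][None, :]
        alpha = np.logaddexp.reduce(scores, axis=0)
    log_z = np.logaddexp.reduce(alpha)
    return gold - log_z  # sequence-level conditional log-likelihood

def frame_cross_entropy(emissions, labels):
    """Frame-level objective: sum of independent per-position log-softmax losses."""
    log_probs = emissions - np.logaddexp.reduce(emissions, axis=1, keepdims=True)
    return -log_probs[np.arange(len(labels)), labels].sum()
```

Maximizing the first quantity scores whole label sequences jointly through the transition matrix, whereas the second treats every frame independently; that is the sequence-level versus frame-level distinction the excerpt draws.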
“…Our approach is an advance over previous research on recurrent conditional random fields (R-CRF) for language understanding [10]. R-CRF models use an RNN to exploit long-range dependencies in the sequence of words.…”
Section: Introduction
confidence: 99%
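Since this excerpt only mentions how the R-CRF models dependencies, the following is a hedged sketch of how such an RNN+CRF tagger is typically decoded at test time with the standard Viterbi recursion (illustrative code consistent with the shapes in the sketch above, not the paper's implementation):

```python
# Illustrative Viterbi decoding for an RNN+CRF sequence labeler.
import numpy as np

def viterbi_decode(emissions, transitions):
    """Return the highest-scoring label sequence under emission + transition scores."""
    T, K = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions      # (K, K): previous label -> current label
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + emissions[t]
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):                # backtrack through stored pointers
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]
```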