2013
DOI: 10.1109/tasl.2013.2256894
Joint Discriminative Decoding of Words and Semantic Tags for Spoken Language Understanding

Cited by 16 publications (15 citation statements). References 26 publications.
“…Another interesting work (Deoras et al., 2013) is around joint decoding of words and semantic tags on word lattices. They demonstrated significant improvements in both recognition and semantic tagging accuracy over the cascade approach.…”
Section: Using Graphs Of Words For Monolingual SLU
confidence: 99%
“…Then an SLU module that uses the output of the ASR is trained and optimized for understanding performance. However, as has been pointed out many times, the hypothesis that gives better recognition performance does not always yield better understanding performance [108,127,38]. If the end goal of SLS is to understand what the user means and respond accordingly, both modules can be optimized jointly such that the system "understands better what it recognizes" and "recognizes better what it understands".…”
Section: Motivation
confidence: 99%
“…Word confusion networks extracted from the ASR lattice have been used in [54,123]. A joint decoding algorithm for jointly performing recognition and understanding is proposed in [38]. Re-ranking models that re-rank the multiple hypotheses of a generative SLU system using support vector machines are given in [40].…”
Section: Using Multiple Hypotheses
confidence: 99%
“…In (Deoras et al., 2012; Deoras et al., 2013), the authors propose a method for working with CRFs on word lattices via the construction of an expanded lattice whose nodes represent the left and right context in which a word appears. This expanded lattice is much bigger than the original one, in both the number of nodes and the number of arcs.…”
Section: Linear Chain Conditional Random Fields
confidence: 99%
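The blow-up described in the excerpt above can be made concrete with a minimal sketch. The function below is a hypothetical illustration, not the authors' actual construction: it treats each arc of the original word lattice as a node of the expanded lattice, so that every expanded node fixes a word together with its left-context word, and connects two such nodes whenever the arcs are consecutive. Even on a tiny lattice, the expanded graph grows.

```python
from collections import defaultdict

def expand_lattice(arcs):
    """Context-expand a word lattice (illustrative sketch only).

    arcs: list of (src, word, dst) tuples forming a DAG.
    Returns the arcs of an expanded lattice whose nodes are the
    original arcs themselves; an expanded arc (a, b) links arc a
    to every arc b that leaves a's destination node, so each
    expanded node pins down a word in its left context.
    """
    outgoing = defaultdict(list)
    for arc in arcs:
        outgoing[arc[0]].append(arc)   # index arcs by source node

    expanded = []
    for a in arcs:                     # a = (src, word, dst)
        for b in outgoing[a[2]]:       # every arc leaving a's dst
            expanded.append((a, b))    # consecutive-arc pair
    return expanded

# Tiny 4-node lattice with two competing words at each of two slots:
#   0 -(the|a)-> 1 -(cat|hat)-> 2 -(sat)-> 3
arcs = [(0, "the", 1), (0, "a", 1),
        (1, "cat", 2), (1, "hat", 2),
        (2, "sat", 3)]
expanded = expand_lattice(arcs)
print(len(arcs), "original arcs ->", len(expanded), "expanded arcs")
```

Here 5 original arcs yield 6 expanded arcs (and 5 expanded nodes versus 4 original nodes); on realistic ASR lattices with high branching, this quadratic-in-fan-out pairing is what makes the expanded lattice much larger than the original, as the excerpt notes.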