2020
DOI: 10.48550/arxiv.2011.04884
Preprint
A low latency ASR-free end to end spoken language understanding system

Cited by 1 publication (2 citation statements) | References: 0 publications
“…RNN+Pre-training [9]      98.80
  CNN+Segment pooling [1]   97.80
  CNN+GRU (SotA) [21]       99.10
  3D-CNN+LSTM+CE            99.26
… kernels are used in the first layer, followed by 32 in the second layer. As depicted in Figure 3, the temporal dynamics are preserved.…”
Section: Model | Citation type: mentioning
Confidence: 99%
“…Spoken language understanding (SLU) aims at extracting structured semantic representations, such as intent and slots, from the speech signal [1]. These representations are crucial to enable speech as the primary mode of human-computer interaction (HCI) [2].…”
Section: Introduction | Citation type: mentioning
confidence: 99%
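
The citation above describes the ASR-free SLU setting: acoustic features are mapped directly to an intent label with no intermediate transcript. A minimal sketch of that idea is below; the feature shapes, layer sizes, intent inventory, and random stand-in weights are all illustrative assumptions, not the paper's actual architecture (the quoted citing work uses a 3D-CNN+LSTM; here a single 1-D convolution plus temporal pooling stands in for brevity).

```python
import numpy as np

rng = np.random.default_rng(0)

N_MELS = 40      # log-mel filterbank channels (assumed)
N_FRAMES = 100   # frames in the utterance (assumed)
N_KERNELS = 32   # conv kernels, echoing the "32 in the second layer" quote
N_INTENTS = 5    # hypothetical intent inventory, e.g. {play_music, ...}

def conv1d_relu(x, kernels):
    """Valid 1-D convolution over time (summing over mel channels),
    one output channel per kernel, followed by ReLU."""
    k_len = kernels.shape[2]
    t_out = x.shape[1] - k_len + 1
    out = np.empty((kernels.shape[0], t_out))
    for c, k in enumerate(kernels):          # k: (N_MELS, k_len)
        for t in range(t_out):
            out[c, t] = np.sum(x[:, t:t + k_len] * k)
    return np.maximum(out, 0.0)

def classify_intent(features, kernels, w, b):
    """features: (N_MELS, N_FRAMES) -> (intent index, softmax probs)."""
    h = conv1d_relu(features, kernels)       # (N_KERNELS, T')
    pooled = h.mean(axis=1)                  # temporal average pooling
    logits = w @ pooled + b                  # linear intent head
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                     # softmax over intents
    return int(np.argmax(probs)), probs

# Random stand-in parameters; a trained system would learn these.
kernels = rng.standard_normal((N_KERNELS, N_MELS, 5)) * 0.1
w = rng.standard_normal((N_INTENTS, N_KERNELS)) * 0.1
b = np.zeros(N_INTENTS)

features = rng.standard_normal((N_MELS, N_FRAMES))  # fake log-mel input
intent, probs = classify_intent(features, kernels, w, b)
```

The low-latency appeal of this design is that the intent head runs on pooled acoustic features as soon as the utterance ends, skipping the decoding pass an ASR-based pipeline would need.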