Findings of the Association for Computational Linguistics: EMNLP 2021
DOI: 10.18653/v1/2021.findings-emnlp.161
Span Pointer Networks for Non-Autoregressive Task-Oriented Semantic Parsing

Abstract: An effective recipe for building seq2seq, non-autoregressive, task-oriented parsers to map utterances to semantic frames proceeds in three steps: encoding an utterance x, predicting a frame's length |y|, and decoding a |y|-sized frame with utterance and ontology tokens. Though empirically strong, these models are typically bottlenecked by length prediction, as even small inaccuracies change the syntactic and semantic characteristics of resulting frames. In our work, we propose span pointer networks, non-autoregr…
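The three-step recipe described in the abstract (encode, predict length, decode in parallel) can be sketched with toy stand-ins. Everything below is hypothetical illustration, not the paper's actual model: the encoder, length heuristic, and frame layout are placeholders chosen only to make the control flow concrete.

```python
# Toy sketch of the three-step non-autoregressive parsing recipe.
# All components are hypothetical stand-ins, not the paper's model.

def encode(utterance):
    """Step 1: encode the utterance x into per-token representations.
    Trivial stand-in: the tokens themselves."""
    return utterance.split()

def predict_length(encoded):
    """Step 2: predict the frame length |y|. A real model predicts this
    from the encoding; here, a fixed heuristic: one slot per token,
    plus an intent token and a closing bracket."""
    return len(encoded) + 2

def decode_parallel(encoded, length):
    """Step 3: fill all |y| positions (conceptually in parallel, i.e.
    non-autoregressively), each slot conditioned only on the encoding."""
    frame = ["[IN:UNKNOWN"]
    frame += [f"[SL:{i} {tok} ]" for i, tok in enumerate(encoded)]
    frame += ["]"]
    assert len(frame) == length  # length prediction gates the decode
    return frame

enc = encode("set an alarm")
n = predict_length(enc)
print(" ".join(decode_parallel(enc, n)))
```

Note how the assert makes the bottleneck visible: if `predict_length` is off by even one, the decoded frame cannot match, which is the failure mode the abstract attributes to length prediction.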

Cited by 11 publications (21 citation statements)
References 31 publications
“…The final model contains 4.8M parameters and achieves an EM of 83.47% on TOPv2 test. This result is competitive given the model's size, as it is only 3.52% worse than a 134M-parameter RoBERTa-bootstrapped version [23].…”
Section: Pipeline NLU Baseline (mentioning)
confidence: 97%
“…Our pipeline text-based NLU baseline adopts the encoder-decoder non-autoregressive architecture [22] with span pointer network [23]. This setup demonstrated superior results over non-autoregressive and autoregressive alternatives at smaller sizes on TOPv2 [23].…”
Section: Pipeline NLU Baseline (mentioning)
confidence: 99%
“…Semantic parsing (SP) is a core component of modern virtual assistants like Google Assistant and Amazon Alexa. While sequence-to-sequence based auto-regressive (AR) approaches are common for conversational SP, recent studies (Shrivastava et al., 2021) employ non-autoregressive (NAR) decoders and reduce inference latency while maintaining competitive parsing quality. However, a major drawback of NAR decoders is the difficulty of generating top-k (i.e., k-best) outputs with approaches such as beam search.…”
(mentioning)
confidence: 99%
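The top-k difficulty noted in the statement above comes from the NAR decoder's factorized output: each position is predicted independently, so k-best sequences must be ranked over a product space rather than grown incrementally as in AR beam search. A toy sketch (the distributions below are invented for illustration) shows why naive enumeration only works when the space is tiny:

```python
import itertools
import math

# Hypothetical per-position distributions from an NAR decoder: the
# joint probability of a sequence factorizes over positions.
position_dists = [
    {"[IN:CREATE_ALARM": 0.9, "[IN:DELETE_ALARM": 0.1},
    {"[SL:DATE_TIME": 0.6, "]": 0.4},
    {"]": 1.0},
]

def top_k_joint(dists, k):
    """Rank k-best sequences under the factorized joint by brute-force
    enumeration. Feasible here only because the space has 2*2*1 = 4
    sequences; with realistic vocabularies the product space explodes,
    which is the top-k difficulty of NAR decoding."""
    scored = []
    for combo in itertools.product(*[d.items() for d in dists]):
        tokens, probs = zip(*combo)
        scored.append((math.prod(probs), list(tokens)))
    scored.sort(key=lambda pair: -pair[0])
    return scored[:k]

for prob, seq in top_k_joint(position_dists, 2):
    print(f"{prob:.2f}", " ".join(seq))
```

An AR beam search avoids this blow-up by extending only the k best prefixes at each step, but that relies on left-to-right conditioning that a parallel NAR decoder does not have.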