2019
DOI: 10.1007/978-981-13-9443-0_3
“I Think It Might Help If We Multiply, and Not Add”: Detecting Indirectness in Conversation

Cited by 1 publication (6 citation statements): 2 supporting, 4 mentioning, 0 contrasting
References 30 publications
“…This is consistent with the feature analysis in Table 5, suggesting that tutoring moves do not significantly improve the performance of the classifier; (v) Nonverbal behaviors do not appear as important features for the classification. This is in line with the results of Goel et al. (2019). Note that prosody might play a role in detecting instructions that trail off, but, as described, paraverbal features were not available; (vi) “Would” plays an important role in the production of hedges, as it is strongly associated with Propositional hedges (n=2).…”
Section: In-depth Analysis of the Informative Features
Classification: supporting, confidence: 82%
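The quoted analysis turns on inspecting which hand-crafted lexical cues (such as the modal “would”) a gradient-boosted classifier ranks as informative. Below is a minimal Python sketch of that kind of inspection, assuming a hypothetical hedge lexicon, a hypothetical featurize() helper, and toy utterances; none of this is the cited papers' code or data.

```python
# Minimal sketch, assuming a hand-built hedge lexicon and toy labels;
# the lexicon, utterances, and feature names below are hypothetical
# and NOT taken from the cited papers.
import numpy as np
from lightgbm import LGBMClassifier

HEDGE_LEXICON = {"might", "maybe", "perhaps", "sort of", "i think"}

def featurize(utterance):
    """Knowledge-driven lexical features for one utterance."""
    text = utterance.lower()
    tokens = text.split()
    return [
        float("would" in tokens),                      # presence of the modal "would"
        float(sum(h in text for h in HEDGE_LEXICON)),  # hedge-lexicon hits
        float(len(tokens)),                            # utterance length
    ]

utterances = [
    "I think it might help if we multiply, and not add",
    "Multiply the two numbers",
    "Maybe you would want to check that step first",
    "Add five to both sides",
    "Perhaps we should sort of simplify it",
    "Divide by two",
    "It would probably be easier to factor",
    "Write the answer down",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = hedged, 0 = direct (toy labels)

X = np.array([featurize(u) for u in utterances])
clf = LGBMClassifier(n_estimators=10, min_child_samples=1).fit(X, labels)

# On a real corpus, high-importance features single out cues like "would";
# on toy data this small the importances may be degenerate.
for name, imp in zip(["has_would", "lexicon_hits", "length"],
                     clf.feature_importances_):
    print(f"{name}: {imp}")
```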
“…In the appendix (see Table 8 and Table 9) we report the confidence intervals to represent the significance of the differences between the models. First, and perhaps surprisingly, we notice that the use of “Knowledge-Driven” features, based on rules built from linguistic knowledge of hedges, in the LightGBM model outperforms the use of pre-trained embeddings within a fine-tuned BERT model (79.0 vs. 70.6) and the neural baseline from Goel et al. (2019).

Table 4: Averaged weighted F1-scores (and standard deviation) for the three minority classes and for the 4 classes, for all models. “KD” stands for “Knowledge-Driven”, meaning that the features are derived from lexicons, n-gram models and annotations.…”
Section: Model Comparison and Feature Analysis
Classification: mentioning, confidence: 86%
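To make the quoted comparison concrete, here is a minimal sketch of such an evaluation: a LightGBM classifier on stand-in “knowledge-driven” feature vectors, scored with the weighted F1 metric reported in the quoted Table 4. The synthetic feature matrix, the four-class labels, and the majority-class baseline are illustrative assumptions, not the cited study's actual pipeline or data.

```python
# Minimal sketch with synthetic data; the feature matrix, labels, and
# baseline are placeholders, NOT the cited study's pipeline.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.dummy import DummyClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))     # stand-in for lexicon/n-gram ("KD") features
y = rng.integers(0, 4, size=200)  # four hedge classes, as in the quote

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [
    ("LightGBM + KD features", LGBMClassifier(n_estimators=50)),
    ("majority-class baseline", DummyClassifier(strategy="most_frequent")),
]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    # Weighted F1 averages per-class F1 by class support, the metric in Table 4.
    print(f"{name}: weighted F1 = {f1_score(y_te, pred, average='weighted'):.3f}")
```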