2003
DOI: 10.1007/978-94-010-0019-2_11
On the Means for Clarification in Dialogue

Abstract: If citing, you are advised to check and use the publisher's definitive version for pagination, volume/issue, and publication date. Where the final published version is provided on the Research Portal, you are again advised to check the publisher's website for any subsequent corrections.

Cited by 44 publications (62 citation statements). References 6 publications.
“…Empirical work such as Ginzburg (1998) and Purver et al. (2002) indicates that questioners do not usually refer back to questions which are very distant, and this was consistent with our data. In particular, Purver et al. analyzed the English dialogue transcripts of the British National Corpus, finding that clarification request source separation (CSS, the distance between a question and the question or answer which it is attempting to clarify) was at most 15 sentences and usually less than 10 sentences.…”
Section: Clarification Recognition Algorithm (supporting)
confidence: 94%
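The CSS bound cited above suggests a simple heuristic for a clarification recognizer: restrict candidate sources to a recent window of the dialogue history. A minimal sketch, assuming a flat list of utterance strings and the 15-sentence upper bound reported by Purver et al.; the function and variable names here are illustrative, not from the cited papers.

```python
# Sketch: bounding the search for a clarification request's source
# using the empirical CSS limit (at most 15 sentences).

MAX_CSS = 15  # empirical upper bound on source separation (Purver et al., 2002)

def candidate_sources(history, max_css=MAX_CSS):
    """Return the most recent utterances (newest first) that could be
    the source of a clarification request, limited to the last max_css."""
    return list(reversed(history[-max_css:]))

# Example: a 40-utterance dialogue history.
history = [f"utterance {i}" for i in range(40)]
cands = candidate_sources(history)
# Only the last 15 utterances are candidates, newest first.
```

A recognizer would then score only these candidates against the clarification request, rather than the full history.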
“…Even these categories, however, can take a variety of surface forms. P2NTRIs (or clarification requests (CRs), see e.g., Ginzburg & Cooper, 2004) can appear not only as wh-words as in (4), but as short fragments (6), longer reprises or echoes (not necessarily verbatim) (7), and more explicit or conventional indicators (8)-(9) (Purver, Ginzburg, & Healey, 2003). Healey et al. (2005) present a protocol for coding repair in interaction which identifies the different CA types of repair described above. Reliability of the protocol was shown to be encouraging: in an exercise re-coding a corpus of examples from the CA literature, 75% were assigned the same category as in the original, although detection agreement rates were not reported.…”
Section: Types Of Repair (mentioning)
confidence: 99%
“…Colman and Healey (2011) and McCabe et al. (2013) report inter-annotator agreement of c. 75% kappa. BNC-PGH is annotated only for other-repair initiation P2NTRIs (Purver et al., 2003, report 75-95% kappa); MAPTASK similarly provides information on P2NTRIs (via check tags) but not self-repair. SWBD, BNC, and MAPTASK provide gold-standard part-of-speech (POS) tags; we tagged the PCC using the Stanford POS tagger (Toutanova, Klein, Manning, & Singer, 2003).…”
Section: Annotation (mentioning)
confidence: 99%
“…Misunderstandings and non-understandings are much less frequent in human-human dialogue than in human-machine dialogue [11]. When they do occur, it is less often because a dialogue participant has misheard an utterance and more often because she is confused about her conversational partner's underlying intent [12].…”
Section: Aptness Of An Embedded WOz Corpus (mentioning)
confidence: 99%