2019
DOI: 10.1111/cogs.12807

Contextual Integration in Multiparty Audience Design

Abstract: Communicating with multiple addressees poses a problem for speakers: Each addressee necessarily comes to the conversation with a different perspective—different knowledge, different beliefs, and a distinct physical context. Despite the ubiquity of multiparty conversation in everyday life, little is known about the processes by which speakers design language in multiparty conversation. While prior evidence demonstrates that speakers design utterances to accommodate addressee knowledge in multiparty conversation…

Cited by 13 publications (16 citation statements). References 40 publications.
“…Here we analyse a corpus of triadic human interaction. Triads offer an opportunity to tap into the potential mechanisms underlying a variety of discourse processes, such as turn-taking, reference and conceptual pacts [ 39 42 ]. This prior research motivates the present focus on triads as a source of information about body synchrony itself.…”
Section: Introduction
confidence: 99%
“…For example, conversational partners can integrate different sources of information from the discourse context, such as the type of discourse prompt and the partner's feedback; such information may be combined to model common ground in a gradient fashion (see Brown-Schmidt, 2012) rather than all-or-nothing, perhaps even tracking the strength of evidence with which information has been grounded (Clark & Schaefer, 1989). Recent work (Yoon & Brown-Schmidt, 2019) demonstrated that speakers can combine at least two different kinds of information (prior knowledge and visual perspective) in order to design referring expressions appropriate for two addressees at once during multi-party conversation. In that study, one addressee was always the knowledgeable one, and the speaker viewed a screen showing what each addressee's visual perspective was (so did not have to track that information in memory), lessening the burden on the speaker.…”
Section: Keeping Track of Co-presence
confidence: 99%
“…Despite the potential for interference in this particular task (the same item was shared in different ways with each partner), each addressee's identity was remarkably successful in cueing the appropriate co-presence status. In previous studies, the conversational partner's informational needs could be summarized as a single, global constraint that could be applied over the entire interaction with that partner: for instance, for cuing category associations for subsets of items (e.g., with this partner, I have matched cards of dogs, but not of turtles; Horton & Gerrig, 2002, 2005b) or for cuing the status of information over a more extended stretch of discourse (e.g., this partner has heard this entire story before, Galati & Brennan, 2010, or is knowledgeable about these objects, Yoon & Brown-Schmidt, 2019). In the conversational context of Phase 2, speakers described individual items appropriately to specific partners, even though the items' information status varied and could not be reconstructed from a single categorical constraint.…”
Section: Co-presence Conditions Are Retained in Memory
confidence: 99%
“…In this view, common ground can be characterized in terms of the conditions of co-presence under which information is shared: namely, linguistic co-presence (sharing information through spoken utterances) and visual co-presence (sharing information through the physical environment) (Clark & Marshall, 1981). There is evidence that linguistic and visual co-presence each shape how speakers refer to entities (e.g., Brennan, 2005; Clark & Krych, 2004; Gergle, Kraut, & Fussell, 2004), that speakers can keep track of what they've discussed with whom (e.g., Brennan & Clark, 1996; Galati & Brennan, 2010; Horton & Gerrig, 2005b; Yoon & Brown-Schmidt, 2018), and that speakers can design referring expressions understandable to two addressees (with differing knowledge and individual perspectives) in the same multiparty conversation (Yoon & Brown-Schmidt, 2019). However, it is still largely unknown whether speakers can keep track of the co-presence conditions (or perceptual modalities) under which common ground was established with different conversational partners, and subsequently adapt the referring expressions they address to each partner appropriately.…”
Section: Introduction
confidence: 99%